Thursday, May 2, 2019

Data-Free Quantization for Deep Neural Networks

(Ritchie Zhao presenting on Friday, 5/3/19.) 

Quantization is key to improving the execution time and energy efficiency of neural networks on both commodity GPUs and specialized accelerators. The majority of existing literature focuses on training quantized DNNs. However, industry shows great demand for data-free quantization: techniques that quantize a floating-point model without (re)training. Our talk focuses on this latter topic.
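As background, here is a minimal sketch of the linear (uniform) quantization grid referred to below. The symmetric, full-range scaling rule is an illustrative assumption, not the exact scheme discussed in the talk, but it shows why a single outlier hurts: the scale is set by the largest-magnitude value, so outliers coarsen the grid for everything else.

```python
import numpy as np

def linear_quantize(w, num_bits=8):
    """Quantize a tensor onto a symmetric linear grid (illustrative sketch).

    The scale is set by the largest-magnitude value, so one outlier
    stretches the grid and wastes precision on the typical values.
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax                # full range, no clipping
    q = np.round(w / scale)
    return np.clip(q, -qmax - 1, qmax) * scale      # de-quantized values

# One large outlier coarsens the grid: the small weights all collapse to 0.
w = np.array([0.01, -0.02, 0.015, 0.03, 2.5])
print(linear_quantize(w, num_bits=4))
```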

DNN weights and activations follow a bell-shaped distribution post-training, while practical hardware uses a linear quantization grid. This leads to challenges in dealing with outliers in the distribution. Prior work has addressed this by clipping the outliers or using specialized hardware. In this work, we propose outlier channel splitting (OCS), which duplicates channels containing outliers, then halves the values in both the original and duplicated channels. The network remains functionally identical, but the affected outliers are moved toward the center of the distribution. OCS requires no additional training and works on commodity hardware. Experimental evaluation on ImageNet classification and language modeling shows that OCS can outperform state-of-the-art clipping techniques with only minor overhead.
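To make the channel-splitting idea concrete, here is a small NumPy sketch of the equivalence OCS relies on for a linear layer. The function name ocs_split_channel and the outlier-selection rule (pick the input channel holding the largest-magnitude weight) are illustrative assumptions, not the paper's implementation; the point is only that duplicating a channel and halving its weights leaves the layer's output unchanged while shrinking the outlier.

```python
import numpy as np

def ocs_split_channel(W, x, i):
    """Illustrative outlier channel splitting on a linear layer.

    W: weight matrix of shape [out_features, in_features]
    x: input vector of shape [in_features]
    i: index of the input channel containing an outlier weight

    Returns (W_split, x_split) with W_split @ x_split == W @ x,
    but the outlier weight magnitude halved.
    """
    # W[:, i] * x[i] == (W[:, i] / 2) * x[i] + (W[:, i] / 2) * x[i]
    w_half = W[:, i:i + 1] / 2.0
    W_split = np.concatenate([W[:, :i], w_half, W[:, i + 1:], w_half], axis=1)
    x_split = np.append(x, x[i])        # duplicate the matching activation
    return W_split, x_split

# Hypothetical usage: split the channel holding the largest-magnitude weight.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
i = np.unravel_index(np.argmax(np.abs(W)), W.shape)[1]
W_split, x_split = ocs_split_channel(W, x, i)
assert np.allclose(W_split @ x_split, W @ x)   # functionally identical
```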