Gus Smith, University of Washington, presenting on May 22nd.
Deep learning is hungry for computational power, and it seems it will only be satiated through extreme hardware specialization. Google’s TPU and Intel’s Nervana both employ custom hardware to accelerate deep learning. The exploration of new numerical datatypes, which specify how mathematical values are expressed and operated on in hardware, has been key to extracting the best performance from hardware accelerators. Previously, numerical computations used IEEE 754 floating point, a standard which is designed to be general-purpose. However, the general-purpose nature of IEEE floats often leaves a lot of potential performance on the table. As a result, a number of new datatypes have sprung up as competitors to IEEE floating point, including Google's bfloat16, Intel's Flexpoint, Facebook's Deepfloat, and the Posit format from John Gustafson.
By supporting these new custom datatypes in TVM, an extensible deep learning compiler stack developed at the University of Washington, we can enable future workloads which utilize a variety of custom datatypes. In this talk, I discuss how we are taking the first steps towards supporting custom datatypes in TVM, by allowing users to "bring their own datatypes." Many datatype researchers first develop a software-emulated version of their datatype, before developing it in hardware; our framework allows users to plug these software-emulated versions of their datatypes directly into TVM, to compile and test real models.
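To make "software-emulated datatype" concrete, here is a minimal sketch of emulating one such format, bfloat16, in pure Python by truncating an IEEE float32 to its top 16 bits. The function names are illustrative only and are not TVM's actual registration interface; real emulation libraries also handle rounding modes and special values, which this sketch omits.

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE float32 to bfloat16: keep the sign bit, all 8
    exponent bits, and the top 7 mantissa bits (truncation, no rounding)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float(b: int) -> float:
    """Re-expand the 16 stored bits back to a float32 value."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

def bf16(x: float) -> float:
    """Round-trip a value through the emulated bfloat16 format."""
    return bfloat16_bits_to_float(float_to_bfloat16_bits(x))
```

A framework like the one described above would lower operations on the custom type into calls to emulation functions of this kind, letting a full model run end to end before any hardware exists.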
Tuesday, May 21, 2019
Thursday, May 2, 2019
Data-Free Quantization for Deep Neural Networks
(Ritchie Zhao presenting on Friday, 5/3/19.)
Quantization is key to improving the execution time and energy efficiency of neural networks on both commodity GPUs and specialized accelerators. The majority of existing literature focuses on training quantized DNNs. However, industry shows great demand for data-free quantization: techniques that quantize a floating-point model without (re)training. Our talk focuses on this latter topic.
DNN weights and activations follow a bell-shaped distribution post-training, while practical hardware uses a linear quantization grid. This leads to challenges in dealing with outliers in the distribution. Prior work has addressed this by clipping the outliers or using specialized hardware. In this work, we propose outlier channel splitting (OCS), which duplicates channels containing outliers, then halves the duplicated values. The network remains functionally identical, but affected outliers are moved toward the center of the distribution. OCS requires no additional training and works on commodity hardware. Experimental evaluation on ImageNet classification and language modeling shows that OCS can outperform state-of-the-art clipping techniques with only minor overhead.
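The core OCS transformation can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors' implementation: it treats each row of a weight matrix as a channel, and any channel whose maximum magnitude exceeds a chosen threshold is duplicated with both copies halved, so the two copies sum back to the original channel and the network's function is preserved.

```python
import numpy as np

def outlier_channel_split(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Sketch of outlier channel splitting (OCS).

    Each row of `weights` is treated as one channel. Channels whose max
    magnitude exceeds `threshold` are duplicated, and both copies are
    halved: the copies sum to the original channel, so the network is
    functionally unchanged, but the largest values shrink toward the
    center of the distribution, easing linear quantization.
    """
    out = []
    for ch in weights:
        if np.abs(ch).max() > threshold:
            out.append(ch / 2.0)  # halved original channel
            out.append(ch / 2.0)  # halved duplicate
        else:
            out.append(ch)        # channel kept as-is
    return np.stack(out)
```

Because the split only adds a small number of channels, the overhead is minor, and downstream layers need only sum the duplicated channels' contributions, which standard convolution and matrix-multiply layers do automatically.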