Gus Smith, University of Washington, presenting on May 22nd.
Deep learning is hungry for computational power, and it seems it will only be satiated through extreme hardware specialization. Accelerators such as Google's TPU and Intel's Nervana NNP employ custom hardware to speed up deep learning workloads. A key ingredient in extracting the best performance from these accelerators has been the exploration of new numerical datatypes, which specify how mathematical values are represented and operated on in hardware. Historically, numerical computation has relied on IEEE 754 floating point, a standard designed to be general-purpose; that generality, however, often leaves performance on the table. As a result, a number of new datatypes have sprung up as competitors to IEEE floating point, including Google's bfloat16, Intel's Flexpoint, Facebook's Deepfloat, and the posit format from John Gustafson.
By supporting custom datatypes in TVM, an extensible deep learning compiler stack developed at the University of Washington, we can enable future workloads that make use of a variety of new numerical formats. In this talk, I discuss how we are taking the first steps towards custom datatype support in TVM by allowing users to "bring their own datatypes." Many datatype researchers first develop a software-emulated version of their datatype before implementing it in hardware; our framework allows users to plug these software emulations directly into TVM to compile and test real models, as sketched below.
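As a rough illustration of what "bringing your own datatype" looks like, the sketch below registers a software-emulated type with TVM's custom-datatype machinery. It is modeled on the tvm.target.datatype interface from later TVM releases; the type name "myfloat", the type code 150, the library and external function names, and the argument orders are all illustrative assumptions, and the API has evolved across TVM versions, so consult the documentation for the version you use.

```python
# Minimal sketch, not the exact API from the talk: register a software-emulated
# datatype ("myfloat") with TVM so operations on it lower to calls into an
# emulation library.
import ctypes
import tvm
from tvm.target import datatype

# Load the user's emulation library (hypothetical name) so the external
# functions referenced below can be resolved at runtime.
ctypes.CDLL("libmyfloat.so", ctypes.RTLD_GLOBAL)

# Give the new datatype a name and an unused type code (custom codes sit
# above the built-in type codes).
datatype.register("myfloat", 150)

# Lower casts between float32 and the 32-bit custom type to calls into the
# emulation library. Argument order (source type, destination type) and the
# external function names are assumptions.
datatype.register_op(
    datatype.create_lower_func({(32, 32): "FloatToMyFloat32"}),
    "Cast", "llvm", "float", "myfloat",
)
datatype.register_op(
    datatype.create_lower_func({(32, 32): "MyFloat32ToFloat"}),
    "Cast", "llvm", "myfloat", "float",
)

# Lower arithmetic on the custom type to emulation calls as well.
datatype.register_op(
    datatype.create_lower_func({32: "MyFloat32Add"}),
    "Add", "llvm", "myfloat",
)
```

Once registered, tensors can be declared with a dtype string along the lines of "custom[myfloat]32" in place of "float32", and TVM compiles the model with every operation on that type replaced by a call into the software emulation, letting researchers test real workloads before any hardware exists.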