Tuesday, January 21, 2020
Deep Learning Acceleration with Neuron-to-Memory Transformation
(Yeseong Kim, UCSD, presenting at 11:00AM and 7:00PM Eastern Time on Wednesday, January 22, 2020)
Abstract:
In this talk, I will discuss our framework for deep neural network (DNN) acceleration, called RAPIDNN, which performs neuron-to-memory transformation for a highly parallel, memory-centric architecture. RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations. Our evaluation shows that RAPIDNN achieves a 49.5× energy efficiency improvement and a 10.9× speedup compared to PipeLayer, a state-of-the-art DNN accelerator, while ensuring less than 0.5% quality loss.
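To give a rough intuition for the neuron-to-memory idea, the sketch below approximates a neuron's multiply-accumulate using a precomputed product table indexed by quantized weight and input levels, so evaluation becomes table lookups and accumulation rather than multiplications. This is a minimal, hypothetical illustration of the general lookup-based approach, not the actual RAPIDNN mapping; the codebook sizes and uniform quantization are assumptions for the example.

```python
import numpy as np

def make_codebook(values, n_levels=16):
    # Uniform quantization levels spanning the observed value range
    # (illustrative; a real design could cluster values instead).
    return np.linspace(values.min(), values.max(), n_levels)

def encode(values, codebook):
    # Map each value to the index of its nearest codebook entry.
    return np.abs(values[:, None] - codebook[None, :]).argmin(axis=1)

rng = np.random.default_rng(0)
weights = rng.normal(size=64)
inputs = rng.normal(size=64)

w_book = make_codebook(weights)
x_book = make_codebook(inputs)

# Precomputed product table: one entry per (weight level, input level).
# In an in-memory accelerator, tables like this would reside in memory
# blocks so neuron evaluation needs no arithmetic multipliers.
product_table = w_book[:, None] * x_book[None, :]

w_idx = encode(weights, w_book)
x_idx = encode(inputs, x_book)

# Neuron pre-activation via table lookups plus accumulation.
approx = product_table[w_idx, x_idx].sum()
exact = (weights * inputs).sum()
print("absolute quantization error:", abs(approx - exact))
```

The only accuracy loss comes from quantizing values to the codebooks, which is consistent with the small quality loss the abstract reports for lookup-based execution.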