Tuesday, June 1, 2021

Efficient and Secure Learning across Memory Hierarchy

(Saransh Gupta, UC San Diego, presenting on Wednesday, June 2, 2021 at 1:00 & 7:00 PM ET)

Recent years have witnessed a rapid growth in the amount of generated data. Learning algorithms, like hyperdimensional (HD) computing, promise to reduce the computational complexity of processing such huge amounts of data. However, traditional computing systems are highly inefficient for such algorithms, mainly due to limited cache capacity and memory bandwidth. In this talk, we propose a processing in-memory (PIM)-based HD computing architecture that accelerates all phases of the HD computing pipeline, namely encoding, training, retraining, and inference. Our architecture is enabled by fast and energy-efficient in-memory logic operations, combined with a hardware-friendly distance metric. Our proposed PIM solution provides a 434x speedup over state-of-the-art HD computing implementations.
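To make the pipeline concrete, here is a minimal software sketch of the HD computing phases the talk names: encoding inputs into high-dimensional bipolar vectors, training class prototypes by bundling, and inference with a similarity metric. All specifics (the dimensionality `D`, the feature count, the dot-product similarity standing in for the talk's hardware-friendly metric) are illustrative assumptions, not the speaker's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (hypothetical choice)

# Item memory: one random bipolar hypervector per feature index.
item_memory = rng.choice([-1, 1], size=(64, D))

def encode(feature_ids):
    """Encoding: bundle (element-wise sum, then sign) the hypervectors
    of the active features into a single hypervector."""
    return np.sign(item_memory[feature_ids].sum(axis=0))

# Training: each class prototype is the bundled encoding of its samples
# (a single sample per class here, for brevity).
prototypes = np.stack([encode([0, 1, 2, 3]),      # class 0
                       encode([60, 61, 62, 63])])  # class 1

def classify(query):
    """Inference: nearest prototype under dot-product similarity
    (a stand-in for the cosine metric common in HD computing)."""
    return int(np.argmax(prototypes @ query))

# A query sharing most features with class 0 maps to class 0.
query = encode([0, 1, 2, 50])
print(classify(query))  # → 0
```

Retraining, not shown, typically subtracts a misclassified sample's encoding from the wrong prototype and adds it to the correct one; the bundling and similarity steps are the parts a PIM substrate can compute directly in memory arrays.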

While this makes learning less reliant on the cloud, many applications, most notably in healthcare, finance, and defense, still need cloud computing and demand privacy guarantees that today's solutions cannot fully provide. Fully homomorphic encryption (FHE) raises the bar by keeping data confidential even during processing, at the cost of significant data-size expansion. In this talk, we also present the first PIM-based accelerators for both the client and the server using the latest Ring-GSW based fully homomorphic encryption schemes. Our design supports various security levels and provides on average 2007x higher throughput than the best existing implementation while running FHE-enabled neural networks.
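The core property FHE provides, computing on data the server never sees in the clear, can be illustrated with a much simpler scheme. The toy below uses textbook RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. This is emphatically not Ring-GSW and not secure (tiny parameters, no padding); it only demonstrates the homomorphic idea the talk builds on.

```python
# Toy multiplicative homomorphism via textbook RSA (illustration only,
# NOT the Ring-GSW scheme from the talk and NOT secure).
p, q = 61, 53            # toy primes
n = p * q                # modulus 3233
e, d = 17, 2753          # public / private exponents for phi(n) = 3120

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# A server holding only ciphertexts can multiply the hidden values:
# Enc(a) * Enc(b) mod n  ==  Enc(a * b mod n).
c = (enc(4) * enc(5)) % n
print(dec(c))  # → 20, computed without the server ever seeing 4 or 5
```

Real FHE schemes extend this to both addition and multiplication on lattice-based ciphertexts, which is what makes encrypted neural-network inference possible, and also what causes the large ciphertext expansion the PIM accelerator targets.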