(Vaibhav Varma, UVA, presenting on Wed. 6/17/2020)
Artificial
intelligence (AI) and machine learning (ML) have emerged as the fastest-growing
workloads of the last few years, with applications ranging from object detection
and face recognition to self-driving cars. This rise in AI applications
combined with IoT infrastructure is leading to a new paradigm of Artificial
Intelligence of Things (AIoT) where IoT edge devices are augmented with AI/ML
capabilities to enable smart sensing applications. This has fueled increased
interest in integrating AI accelerators into edge devices, and
Processing-in-Memory (PiM) accelerators are prime candidates for this
integration. PiM accelerators promise improved performance and power
characteristics by breaking the memory wall, but they are notoriously difficult
to program, which hinders their integration into the traditional computing stack.
In this talk, we present AI-PiM as a solution to this problem. AI-PiM is a hardware/software
codesign methodology that efficiently integrates PiM accelerators into the
RISC-V processor pipeline as functional units. Alongside the hardware
integration, AI-PiM also focuses on RISC-V ISA extensions that target the PiM
functional units directly, resulting in a tight coupling of the PiM accelerators
with the processor.
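
As a rough illustration of how such ISA extensions could be exposed to software,
the sketch below wraps a hypothetical PiM dot-product instruction placed in the
RISC-V custom-0 opcode space and emitted through the GNU assembler's .insn
directive. The pim_dot() wrapper, the fixed tile length, and the opcode/funct
fields are assumptions made here for illustration; they are not AI-PiM's actual
encoding or semantics.

    #include <stdint.h>

    #define PIM_TILE_LEN 64  /* assumed tile length processed per instruction */

    /* Hypothetical wrapper: rs1 and rs2 carry pointers to operand tiles held
     * in the PiM-enabled memory, and rd receives the accumulated dot product.
     * The custom-0 encoding (opcode 0x0B, funct3 = 0, funct7 = 0) is a
     * placeholder, not the encoding used by AI-PiM. */
    static inline int32_t pim_dot(const int8_t *a, const int8_t *b)
    {
        int32_t rd;
        __asm__ volatile(".insn r 0x0B, 0x0, 0x0, %0, %1, %2"
                         : "=r"(rd)
                         : "r"(a), "r"(b)
                         : "memory");
        return rd;
    }

Because the PiM unit sits in the regular pipeline as a functional unit, an
instruction like this can be issued by the compiler or programmer like any other
RISC-V instruction, rather than driving the accelerator through a separate
memory-mapped device interface.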