Deep Neural Networks (DNNs), and specifically Convolutional Neural Networks (CNNs), involve large numbers of data-parallel computations. Data-centric computing paradigms such as Processing in Memory (PIM) are therefore being widely explored for CNN acceleration. A recent PIM architecture, developed and commercialized by the UPMEM company, has demonstrated impressive performance gains over traditional CPU-based systems across a wide range of data-parallel applications. However, CNN acceleration has not yet been explored on this PIM platform. In this work, successful implementations of CNNs on the UPMEM PIM system are presented, and multiple operation-mapping schemes with different optimization goals are explored. Based on data obtained from the physical implementation of the CNNs on the UPMEM system, key takeaways for future implementations and for further UPMEM improvements are presented. Finally, to compare UPMEM's performance with that of other PIMs, a model is proposed that produces estimated PIM performance results given a set of architectural parameters. The creation and usage of this model are covered in this work.
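The abstract mentions a model that estimates PIM performance from architectural parameters. As a hedged illustration only (the thesis's actual model is not reproduced here), a minimal roofline-style estimator can be sketched as follows; every parameter name and numeric value below is hypothetical, not taken from the thesis:

```python
# Hypothetical roofline-style PIM performance estimator.
# All names and values are illustrative assumptions, not the thesis's model.
from dataclasses import dataclass


@dataclass
class PIMParams:
    num_dpus: int           # number of in-memory processing units (DPUs)
    dpu_freq_hz: float      # per-DPU clock frequency
    ops_per_cycle: float    # arithmetic throughput per DPU per cycle
    mem_bw_bytes_s: float   # aggregate internal memory bandwidth


def estimated_runtime_s(params: PIMParams, total_ops: float, total_bytes: float) -> float:
    """Lower-bound runtime: the workload is limited either by aggregate
    compute throughput or by internal memory bandwidth, whichever is slower."""
    compute_s = total_ops / (params.num_dpus * params.dpu_freq_hz * params.ops_per_cycle)
    memory_s = total_bytes / params.mem_bw_bytes_s
    return max(compute_s, memory_s)


# Example: a UPMEM-like configuration with illustrative numbers only.
cfg = PIMParams(num_dpus=2048, dpu_freq_hz=350e6, ops_per_cycle=1.0,
                mem_bw_bytes_s=2048 * 0.6e9)
t = estimated_runtime_s(cfg, total_ops=1e12, total_bytes=1e11)
```

Given such a model, different PIM architectures can be compared by plugging in their respective parameter sets and workload operation/byte counts.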
Library of Congress Subject Headings: Neural networks (Computer science); High performance processors; Memory management (Computer science)
Degree: Computer Engineering (MS)
Department, Program, or Center: Computer Engineering (KGCOE)
Das, Prangon, "Implementation and Evaluation of Deep Neural Networks in Commercially Available Processing in Memory Hardware" (2022). Thesis. Rochester Institute of Technology.
Campus: RIT – Main Campus