Analysis of Conventional, Near-Memory, and In-Memory DNN Accelerators

Author(s)
Glint, Tom
Jha, Chandan Kumar
Awasthi, Manu
Mekie, Joycee
DOI
10.1109/ISPASS57527.2023.00049
End Page
351
Abstract
Various DNN accelerators based on the Conventional compute Hardware Accelerator (CHA), Near-Data-Processing (NDP), and Processing-in-Memory (PIM) paradigms have been proposed to meet the challenges of inferencing Deep Neural Networks (DNNs). To the best of our knowledge, this work performs the first quantitative as well as qualitative comparison among state-of-the-art accelerators from each digital DNN accelerator paradigm. Our study provides insights into selecting the best architecture for a given DNN workload. We have used workloads from the MLPerf Inference benchmark. We observe that for Fully Connected Layer (FCL) DNNs, the PIM-based accelerator is 21× and 3× faster than the CHA- and NDP-based accelerators, respectively. However, NDP is 9× and 2.5× more energy efficient than CHA and PIM for FCL. For Convolutional Neural Network (CNN) workloads, CHA is 10% and 5× faster than the NDP- and PIM-based accelerators, respectively. Further, CHA is 1.5× and 6× more energy efficient than the NDP- and PIM-based accelerators, respectively. © 2023 Elsevier B.V. All rights reserved.
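The cross-paradigm factors reported in the abstract can be tabulated directly. The sketch below is illustrative only and not from the paper's artifact: it normalizes the reported speed and energy-efficiency ratios per workload class and picks the winning paradigm for each metric. The dictionary layout and the helper name best_paradigm are assumptions; only the numeric factors come from the abstract above.

```python
# Illustrative sketch (not the paper's code): encode the relative factors
# reported in the abstract and pick the best paradigm per workload/metric.

# Relative speed, normalized so higher is better.
# FCL: PIM is 21x faster than CHA and 3x faster than NDP -> CHA=1, NDP=21/3, PIM=21.
# CNN: CHA is 10% faster than NDP and 5x faster than PIM -> PIM=1, NDP=5/1.1, CHA=5.
speedup = {
    "FCL": {"CHA": 1.0, "NDP": 21 / 3, "PIM": 21.0},
    "CNN": {"CHA": 5.0, "NDP": 5 / 1.1, "PIM": 1.0},
}

# Relative energy efficiency, normalized so higher is better.
# FCL: NDP is 9x more efficient than CHA and 2.5x more than PIM.
# CNN: CHA is 1.5x more efficient than NDP and 6x more than PIM.
energy_eff = {
    "FCL": {"CHA": 1.0, "NDP": 9.0, "PIM": 9 / 2.5},
    "CNN": {"CHA": 6.0, "NDP": 6 / 1.5, "PIM": 1.0},
}

def best_paradigm(workload: str, metric: dict) -> str:
    """Return the paradigm with the highest score for the given workload."""
    scores = metric[workload]
    return max(scores, key=scores.get)

if __name__ == "__main__":
    for wl in ("FCL", "CNN"):
        print(f"{wl}: fastest = {best_paradigm(wl, speedup)}, "
              f"most energy-efficient = {best_paradigm(wl, energy_eff)}")
```

Run as written, this reproduces the abstract's conclusions: PIM is fastest and NDP most energy efficient for FCL workloads, while CHA wins on both metrics for CNN workloads.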
URI
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85164540093&doi=10.1109%2FISPASS57527.2023.00049&partnerID=40&md5=d2f0cf0f2e2082f914b98ec371344be2
https://d8.irins.org/handle/IITG2025/29396
Keywords
Convolutional neural networks
Data handling
Energy efficiency
Convolutional neural network
Data processors
Deep neural network accelerator
Energy efficient
Hardware accelerators
Memory paradigm
Near data processor
Neural-network processing
Processing-in-memory
State of the art
Deep neural networks