A mathematical approach towards quantization of floating point weights in low power neural networks

Source
Proceedings of the 33rd International Conference on VLSI Design (VLSID 2020), held concurrently with the 19th International Conference on Embedded Systems
Date Issued
2020-01-01
Author(s)
Devnath, Joydeep Kumar
Surana, Neelam
Mekie, Joycee
DOI
10.1109/VLSID49098.2020.00048
Abstract
Neural networks are both compute- and memory-intensive, and consume significant power during inference. Reducing the bit width of weights is one of the key techniques for making them power- and area-efficient without degrading performance. In this paper, we show that inference accuracy changes insignificantly even when floating-point weights are represented using 10 bits (fewer for certain neural networks) instead of 32 bits. We consider a set of 8 neural networks. Further, we propose a mathematical formula for finding the optimum number of bits required to represent the exponent of floating-point weights, below which accuracy drops drastically. We also show that the required mantissa width depends strongly on the number of layers of a neural network and provide a mathematical proof of this dependence. Our simulation results show that bit reduction yields better throughput, power efficiency, and area efficiency compared to models with full-precision weights.
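The core idea of the abstract, representing 32-bit floating-point weights with a reduced exponent and mantissa width, can be sketched generically as follows. This is a minimal illustration of reduced-precision float rounding, not the paper's exact quantization scheme or its formula for the optimum exponent width; the bit widths chosen here are assumptions for the example.

```python
import math

def quantize_float(x, exp_bits=5, man_bits=4):
    """Round x to a reduced-precision float with the given exponent and
    mantissa bit widths (sign bit implied). A generic sketch of
    floating-point weight quantization, not the paper's exact scheme."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))              # abs(x) = m * 2**e, m in [0.5, 1)
    # Clamp the exponent to the range representable with exp_bits.
    bias = 2 ** (exp_bits - 1) - 1
    e = max(-bias + 1, min(e, bias))
    # Round the mantissa to man_bits fractional bits.
    m = round(m * 2 ** man_bits) / 2 ** man_bits
    return sign * math.ldexp(m, e)

w = 0.15625                                # exactly representable here
print(quantize_float(w))                   # -> 0.15625 (no rounding error)
```

Applying such a function to every weight of a trained network, then measuring the drop in inference accuracy as `exp_bits` and `man_bits` shrink, is the kind of sweep the paper's analysis formalizes.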
URI
https://d8.irins.org/handle/IITG2025/24266
Subjects
CIFAR10 | Convolution neural network | Deep learning | Energy-efficient neural network | ImageNet | MNIST | Quantization