Mixed-8T: Energy-Efficient Configurable Mixed-VT SRAM Design Techniques for Neural Networks
Source
Proceedings of the 2022 35th International Conference on VLSI Design (VLSID 2022), held concurrently with the 2022 21st International Conference on Embedded Systems (ES 2022)
Date Issued
2022-01-01
Author(s)
Abstract
Artificial Neural Network (ANN)-based applications such as pattern recognition and image classification consume a significant amount of energy while accessing memory. Earlier works have proposed various techniques to reduce these energy demands in SRAM, including heterogeneous and hybrid SRAM designs. However, these designs still consume significant energy at higher voltages and suffer from area overhead. To address these issues, we propose 7 different homogeneous Mixed-VT 8T SRAM architectures for neural networks. We analyze the effect of truncation on different neural networks across different datasets, and further apply the truncation technique to the SRAM architecture used for the ANN. We design the Mixed-VT 8T SRAM architecture and validate its suitability for 5 different neural networks. For 6-bit neural-network weights, our proposed Mixed-VT 8T SRAM architecture requires at most 0.34× (0.46×) and 0.56× (0.69×) of the dynamic energy (leakage power) of the Het-6T and Hyb-8T/6T SRAM architectures, respectively, at 0.5 V, and at most 0.7× (0.84×) and 0.92× (0.90×), respectively, at 0.7 V.
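The abstract's bit-truncation idea can be illustrated with a minimal sketch: quantize a weight to a 6-bit signed code, then drop its low-order bits so those bits need not be stored in (or read reliably from) the SRAM array. The function names, the in-range assumption on weights, and the choice of how many bits to drop are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of weight quantization followed by LSB truncation.
# Assumes weights lie in [-1, 1); names and parameters are illustrative.

def quantize_to_bits(w, bits=6):
    """Uniformly quantize a weight in [-1, 1) to a signed integer code."""
    levels = 1 << (bits - 1)              # 32 positive levels for 6 bits
    code = int(round(w * levels))
    return max(-levels, min(levels - 1, code))  # clamp to the 6-bit range

def truncate_lsbs(code, drop=2):
    """Zero out the lowest `drop` bits of a quantized weight code.

    Note: Python's arithmetic right shift rounds negative codes toward
    negative infinity, so truncation is not symmetric around zero.
    """
    return (code >> drop) << drop

weights = [0.71, -0.33, 0.05]
codes = [quantize_to_bits(w) for w in weights]          # [23, -11, 2]
truncated = [truncate_lsbs(c) for c in codes]           # [20, -12, 0]
approx = [c / 32 for c in truncated]                    # reconstructed weights
```

Only the surviving high-order bits of each code would then be kept in the (approximate) SRAM, trading a small reconstruction error for lower access energy.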
Subjects
Approximate Memory | BER | Bit Truncation | Neural Network | Image Classification | Quantization | SRAM
