Authors: Surana, Neelam; Bharti, Pramod Kumar; Tej, Bachu Varun; Mekie, Joycee
Date accessioned/available: 2025-08-31
Issue date: 2022-01-01
ISBN: 9781665485050
DOI: 10.1109/VLSID2022.2022.00043
Scopus ID: 2-s2.0-85139235654
URI: https://d8.irins.org/handle/IITG2025/26249

Abstract: Artificial Neural Network (ANN)-based applications such as pattern recognition and image classification consume a significant amount of energy while accessing memory. Various techniques to reduce these energy demands in SRAM, including heterogeneous and hybrid SRAM designs, have been proposed in earlier works. However, these designs still consume significant energy at higher voltages and suffer from area overhead. Considering the aforementioned issues, we propose 7 different homogeneous Mixed-V_T 8T SRAM architectures for neural networks, which overcome these issues. We analyze the effect of truncation on different neural networks for different datasets and further apply the truncation technique to the SRAM architecture used for ANNs. We design the Mixed-V_T 8T SRAM architecture and validate its suitability for 5 different neural networks. Our proposed Mixed-V_T 8T SRAM architecture requires at most 0.34x (0.46x) and 0.56x (0.69x) of the dynamic energy (leakage power) of the Het-6T and Hyb-8T/6T SRAM architectures, respectively, at 0.5 V, and at most 0.7x (0.84x) and 0.92x (0.90x) of the dynamic energy (leakage power) of the Het-6T and Hyb-8T/6T SRAM arrays, respectively, at 0.7 V, for 6-bit weights of neural networks.

Keywords: Approximate Memory | BER | Bit Truncation | Neural Network | Image Classification | Quantization | SRAM
Title: Mixed-8T: Energy-Efficient Configurable Mixed-V_T SRAM Design Techniques for Neural Networks
Type: Conference Paper (Conference Proceeding)
Pages: 174-179
Year: 2022
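The abstract's bit-truncation idea (storing only the upper bits of quantized weights so the lower-bit cells can be dropped or made approximate) can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the quantization range, rounding, and which bits are truncated are assumptions here, chosen only to show how zeroing the k least-significant bits of a 6-bit weight code trades accuracy for storage energy.

```python
# Illustrative sketch (assumption: not the paper's exact method) of
# truncating the low-order bits of 6-bit quantized neural-network weights.

def quantize(w, bits=6, w_max=1.0):
    """Map a float weight in [-w_max, w_max] to a signed integer code."""
    levels = (1 << (bits - 1)) - 1      # 31 levels per sign for 6 bits
    q = round(w / w_max * levels)
    return max(-levels, min(levels, q))

def truncate_lsbs(q, k):
    """Zero the k least-significant magnitude bits of a weight code,
    mimicking a cell array that does not store those bits."""
    mask = ~((1 << k) - 1)
    return q & mask if q >= 0 else -((-q) & mask)

def dequantize(q, bits=6, w_max=1.0):
    """Map an integer weight code back to a float weight."""
    levels = (1 << (bits - 1)) - 1
    return q * w_max / levels

# Example: a weight of 0.73 quantized to 6 bits, then truncated by 2 bits.
q = quantize(0.73)            # -> 23 (binary 010111)
q_trunc = truncate_lsbs(q, 2)  # -> 20 (binary 010100)
w_trunc = dequantize(q_trunc)  # ~0.645, a small controlled error
```

The truncated code needs fewer stored bits per weight, which is the lever the paper's Mixed-V_T SRAM exploits; the accuracy impact per network and dataset is what the abstract says the authors analyze.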