Title: Fast and Low-Power Quantized Fixed Posit High-Accuracy DNN Implementation
Authors: Walia, Sumit; Tej, Bachu Varun; Kabra, Arpita; Devnath, Joydeep; Mekie, Joycee
Journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems (ISSN 1557-9999)
Date of Publication: 1 January 2022
Pages: 108-111
Type: Journal Article
DOI: 10.1109/TVLSI.2021.3131609
Scopus ID: 2-s2.0-85123539271
Web of Science ID: WOS:000732119700001
Repository URI: https://d8.irins.org/handle/IITG2025/26333
Date Added to Repository: 2025-08-31
Keywords: Convolutional neural net (CNN) | deep neural network (DNN) | fixed-posit representation | posit number system | quantization

Abstract: This brief compares quantized floating-point representations in the posit and fixed-posit formats for a wide variety of pre-trained deep neural networks (DNNs). We observe that the fixed-posit representation is far more suitable for DNNs, as it results in faster and lower-power computation circuits. We show that top-1 accuracy remains within 0.3% and 0.57% of the baseline for posit and fixed-posit quantization, respectively. We further show that the posit-based multiplier requires a higher power-delay product (PDP) and more area, whereas the fixed-posit multiplier reduces PDP and area by 71% and 36%, respectively, compared to (Devnath et al., 2020) at the same bit width.
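
Note: For readers unfamiliar with the posit number system named in the keywords and abstract, the sketch below decodes a standard posit word into a floating-point value. It is illustrative only: the word size n, exponent width es, the function name decode_posit, and the test values are assumptions, not taken from the paper. The fixed-posit format of the cited work, as we understand it, fixes the otherwise variable-length regime field; this generic decoder does not model that constraint or the paper's hardware design.

# Minimal sketch of standard posit<n,es> decoding (illustrative, not the paper's method).
# Layout of a posit word: sign | regime (run-length encoded) | exponent | fraction.

def decode_posit(word: int, n: int = 16, es: int = 1) -> float:
    """Decode an n-bit posit word, given as an unsigned integer, to a float.
    n and es are assumed values for illustration only."""
    mask = (1 << n) - 1
    word &= mask
    if word == 0:
        return 0.0
    if word == 1 << (n - 1):                   # 100...0 encodes NaR (not a real)
        return float("nan")

    sign = word >> (n - 1)
    if sign:                                   # negative posits are stored in two's complement
        word = (-word) & mask

    bits = format(word, "0{}b".format(n))[1:]  # the n-1 bits after the sign
    lead = bits[0]
    run = len(bits) - len(bits.lstrip(lead))   # regime run length
    k = run - 1 if lead == "1" else -run       # regime value

    rest = bits[run + 1:]                      # skip the regime terminator bit
    exp_bits = rest[:es]
    frac_bits = rest[es:]
    e = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0

    value = 2.0 ** (k * (1 << es) + e) * (1.0 + f)
    return -value if sign else value

# Example: in posit<16,1>, 0x4000 decodes to 1.0 and 0x5000 decodes to 2.0.
print(decode_posit(0x4000), decode_posit(0x5000))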