OwlsEye: Real-Time Low-Light Video Instance Segmentation on Edge and Exploration of Fixed-Posit Quantization

Source
Proceedings of the IEEE International Conference on VLSI Design
ISSN
1063-9667
Date Issued
2025-01-01
Author(s)
Shah, Gaurav
Goud, Abhinav
Momin, Zaqi
Mekie, Joycee  
DOI
10.1109/VLSID64188.2025.00090
Abstract
Video Instance Segmentation (VIS) in low-light conditions presents a significant challenge when deployed on resource-constrained edge devices, especially in applications such as autonomous vehicles, surveillance, and robotics. This paper presents OwlsEye, which, to the best of our knowledge, is the first hardware implementation of real-time Video Instance Segmentation under low-light settings using an off-the-shelf RGB camera. Implemented on the Intel Nezha embedded platform, OwlsEye improves throughput from 0.6 to 28 frames per second (FPS) using model weight quantization, brightness verification, and asynchronous FIFO pipelining. This paper also presents the EQyTorch framework, an extension of Qtorch+ to fixed-posit numbers, used for weight quantization of the YOLOv8 architecture. We show that fixed-posit quantization improves latency and power utilization by 2.35× and 3.6× compared to INT8, and by 9.02× and 87.91× compared to FP32, in 65 nm CMOS. Furthermore, this paper presents a synthetic DarkCOCO2017 validation dataset to evaluate OwlsEye segmentation performance on enhanced, original, and dark images. Our work highlights a novel real-time low-light VIS system and the potential of fixed-posit quantization for edge AI applications.
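The weight quantization described in the abstract can be pictured as a quantize-dequantize ("fake quantization") pass over the model weights. The sketch below is a minimal illustration only, not the EQyTorch or Qtorch+ API: fake_quantize and the toy codebook are hypothetical placeholders, and a real fixed-posit codebook would be enumerated from the chosen bit-width and regime/exponent configuration.

    # Minimal fake-quantization sketch: snap each FP32 weight to the nearest
    # value in a precomputed set of representable numbers (a "codebook").
    # NOTE: fake_quantize and the codebook below are illustrative placeholders,
    # not EQyTorch/Qtorch+ code.
    import torch

    def fake_quantize(weights: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
        """Round every entry of `weights` to its nearest codebook value."""
        dists = (weights.reshape(-1, 1) - codebook.reshape(1, -1)).abs()
        nearest = codebook[dists.argmin(dim=1)]
        return nearest.reshape(weights.shape)

    # Toy 8-value codebook standing in for the representable fixed-posit levels.
    codebook = torch.tensor([-1.0, -0.5, -0.25, -0.125, 0.0, 0.125, 0.25, 0.5])
    weights = torch.randn(3, 3)
    print(fake_quantize(weights, codebook))

Applied layer by layer to a network such as YOLOv8, this kind of simulation lets segmentation accuracy be checked under a reduced-precision number format before committing to a hardware implementation.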
URI
https://d8.irins.org/handle/IITG2025/28318
Subjects
Computer Vision | Edge AI | Fixed Posits | Low-Light Enhancement | Video Instance Segmentation