OwlsEye: Real-Time Low-Light Video Instance Segmentation on Edge and Exploration of Fixed-Posit Quantization
Source
Proceedings of the IEEE International Conference on VLSI Design
ISSN
1063-9667
Date Issued
2025-01-01
Author(s)
Abstract
Video Instance Segmentation (VIS) in low-light conditions presents a significant challenge when deployed on resource-constrained edge devices, especially in applications such as autonomous vehicles, surveillance, and robotics. This paper presents OwlsEye, which, to the best of our knowledge, is the first hardware implementation of real-time Video Instance Segmentation under low-light settings using an off-the-shelf RGB camera. Implemented on the Intel Nezha Embedded platform, OwlsEye improves throughput from 0.6 to 28 Frames Per Second (FPS) using model weight quantization, brightness verification, and asynchronous FIFO pipelining. This paper also presents the EQyTorch framework, an extension of Qtorch+ to fixed-posit numbers, used for weight quantization of the YOLOv8 architecture. We show that fixed-posit quantization achieves latency improvements of 2.35× and 3.6×, and power-utilization improvements of 9.02× and 87.91×, over INT8 and FP32 respectively, in 65 nm CMOS. Furthermore, this paper presents a synthetic DarkCOCO2017 validation dataset to test OwlsEye segmentation performance on enhanced, original, and dark images. Our work highlights a novel real-time low-light VIS system and the potential of fixed-posit quantization for edge AI applications.
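The exact fixed-posit encoding used by EQyTorch is defined in the full paper, but the core operation shared by any weight-quantization scheme, fixed-posit included, is rounding each FP32 weight to the nearest value representable in the target number format. A minimal sketch of that step, where the function name and the example level grid are illustrative rather than taken from the paper:

```python
import numpy as np

def quantize_weights(weights, representable):
    """Round each weight to the nearest value in `representable`.

    `representable` is a hypothetical 1-D array of the target format's
    values (e.g. INT8 levels times a scale, or a posit/fixed-posit level
    set); the real set would come from the chosen encoding.
    """
    w = np.asarray(weights, dtype=np.float64)
    grid = np.sort(np.asarray(representable, dtype=np.float64))
    # Binary-search each weight's position in the sorted grid, then
    # compare the two neighbouring levels to pick the nearer one.
    idx = np.clip(np.searchsorted(grid, w), 1, len(grid) - 1)
    left, right = grid[idx - 1], grid[idx]
    return np.where(np.abs(w - left) <= np.abs(w - right), left, right)
```

For example, a symmetric INT8 grid would be `scale * np.arange(-127, 128)`; swapping in a fixed-posit level set changes only the `representable` array, which is what makes comparing formats at equal bit widths straightforward.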
Subjects
Computer Vision | Edge AI | Fixed Posits | Low-Light Enhancement | Video Instance Segmentation
