Optimizing binary neural network quantization for fixed pattern noise robustness


Bibliographic Details
Main Authors: Francisco Javier Andreo-Oliver, Gines Domenech-Asensi, Jose Angel Diaz-Madrid, Ramon Ruiz-Merino, Juan Zapata-Perez
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Subjects:
Online Access: https://doi.org/10.1038/s41598-025-10833-1
Description
Summary: Abstract This work presents a comprehensive analysis of how extreme data quantization and fixed pattern noise (FPN) from CMOS imagers affect the performance of deep neural networks for image recognition tasks. Binary neural networks (BNNs) are particularly attractive for resource-constrained embedded systems due to their reduced memory footprint and computational requirements. However, these highly quantized networks are more sensitive to sensor imperfections, particularly the FPN inherent to CMOS imaging devices. Taking as a baseline a BNN with binary weights and 32-bit batch normalization parameters, we systematically investigate performance degradation when these parameters are quantized to lower bit-widths and when various types of FPN are applied to the input images. Our experiments with the CIFAR-10 and CIFAR-100 datasets reveal that quantizing the batch normalization parameters to 4 bits provides a reasonable compromise between resource efficiency and accuracy, although performance deteriorates significantly at higher noise levels. We demonstrate that this degradation can be effectively mitigated through strategic noise augmentation during training. Specifically, training with moderate (5-10%) noise levels improves resilience to similar noise during inference, while models trained with column FPN show remarkable robustness across multiple noise types. Our findings provide practical guidance for designing efficient, noise-tolerant BNNs for low-power vision systems, showing that appropriate training strategies can achieve up to 60% accuracy.
ISSN: 2045-2322