Face Detection Under Low-Light and Low-Resolution Conditions Using Contrast-Limited Adaptive Histogram Equalization and a Modified Convolutional Neural Network
DOI: https://doi.org/10.63682/jns.v14i32S.7793
Keywords: Face detection, Low-light imaging, Low-resolution, CLAHE, Deep CNN, Image enhancement, Hybrid model, WIDER FACE, Dark Face dataset
Abstract
Background: Face detection in low-light, low-resolution images remains challenging due to poor contrast, noise, and limited detail. This study proposes a hybrid model that couples contrast-limited adaptive histogram equalisation (CLAHE) with a deep CNN, optimised for robust face detection under these adverse conditions.
Methods: The proposed hybrid model integrates CLAHE-based preprocessing with a modified deep CNN for face detection. CLAHE enhances contrast in dark scenes while controlling noise amplification. A Viola-Jones cascade generates candidate face regions, which are refined by a custom CNN with optimised kernels, batch normalisation, and Spatial Pyramid Pooling for scale invariance. Non-maximum suppression (IoU > 0.5) removes duplicate detections. The model is trained on WIDER FACE, Dark Face, and additional low-light images, and evaluated on standard benchmarks with an emphasis on low-light and low-resolution accuracy.
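As a rough illustration of the pipeline described above, the sketch below chains CLAHE enhancement, Viola-Jones candidate generation, and IoU-based non-maximum suppression with OpenCV. The clip limit, tile size, cascade parameters, and the placeholder scoring step are illustrative assumptions, not the paper's reported settings; the actual refinement is performed by the modified CNN with Spatial Pyramid Pooling.

```python
# Minimal sketch of the described pipeline (assumed parameters, not the
# paper's configuration): CLAHE contrast enhancement, Viola-Jones candidate
# generation, and greedy non-maximum suppression at IoU > 0.5.
import cv2
import numpy as np

def enhance_clahe(bgr, clip_limit=2.0, tile=(8, 8)):
    """Apply CLAHE to the luminance channel of a BGR image (assumed settings)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx2, by2 = box_b[0] + box_b[2], box_b[1] + box_b[3]
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep a box only if its IoU with every higher-scoring kept box is <= 0.5."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]

def detect_faces(bgr):
    enhanced = enhance_clahe(bgr)
    gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    # Placeholder: in the full model each candidate would be scored by the
    # modified CNN (with SPP for scale invariance) before suppression.
    scores = [1.0] * len(candidates)
    return nms(list(candidates), scores, iou_thresh=0.5)
```

Greedy NMS retains the highest-scoring box in each overlapping cluster, which matches the IoU > 0.5 duplicate-removal criterion stated above.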
Results: The hybrid model achieved a 94.5% face detection rate on extremely low-light, low-resolution images, outperforming YOLOv3, MTCNN, and RetinaFace. On the Dark Face dataset, it showed a higher True Positive Rate and a lower False Positive Rate than these baselines. The model runs at 12 FPS on CPU, twice as fast as MTCNN, while maintaining superior accuracy. The confusion matrix (Figure 3) shows 94.5% True Positives and 3% False Positives.
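For clarity, the following is a minimal sketch of how the reported True Positive Rate and False Positive Rate relate to confusion-matrix counts; the counts are hypothetical placeholders, not the paper's data.

```python
# Hypothetical confusion-matrix counts, used only to illustrate the metric
# definitions behind the reported rates (NOT the paper's actual numbers).
tp, fn = 945, 55    # detected faces vs. missed faces
fp, tn = 30, 970    # false alarms vs. correctly rejected non-face regions

tpr = tp / (tp + fn)  # True Positive Rate (detection rate), here 0.945
fpr = fp / (fp + tn)  # False Positive Rate, here 0.03
print(f"TPR = {tpr:.3f}, FPR = {fpr:.3f}")
```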
Conclusion: CLAHE-enhanced images, combined with a robust modified CNN, enable 94–95% face detection accuracy under low-light, low-resolution conditions. The hybrid model is well suited to real-world applications such as night-time surveillance and mobile devices.
References
P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 1, 2001, pp. 511–518.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Adv. Neural Inf. Process. Syst. (NIPS), vol. 25, 2012, pp. 1097–1105.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778.
S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Adv. Neural Inf. Process. Syst. (NIPS), vol. 28, 2015, pp. 91–99.
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, real-time object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 779–788.
K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, "Joint face detection and alignment using multi-task cascaded convolutional networks," IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1499–1503, 2016.
J. Deng, J. Guo, Y. Zhou, and S. Zafeiriou, "RetinaFace: Single-stage dense face localisation in the wild," arXiv preprint arXiv:1905.00641, 2019.
W. Yang et al., "Advancing image understanding in poor visibility environments: A collective benchmark study," IEEE Trans. Image Process., vol. 29, pp. 5737–5752, 2020.
S. Yang, P. Luo, C. C. Loy, and X. Tang, "WIDER FACE: A face detection benchmark," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 5525–5533.
W. Wang, W. Yang, and J. Liu, "HLA-Face: Joint high-low adaptation for low light face detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2021, pp. 16195–16204.
J. Liang et al., "Recurrent exposure generation for low-light face detection," IEEE Trans. Multimedia, vol. 24, pp. 1609–1621, 2022.
S. Zhang, X. Zhu, Z. Lei, H. Shi, X. Wang, and S. Z. Li, "S3FD: Single shot scale-invariant face detector," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 1937–1945.
K. Zuiderveld, "Contrast limited adaptive histogram equalisation," in Graphics Gems IV, P. Heckbert, Ed. San Diego, CA: Academic Press, 1994, pp. 474–485.
L. N. Soni, A. Datar, and S. Datar, "Viola-Jones algorithm based approach for face detection of African origin people and newborn infants," International Journal of Computer Trends and Technology (IJCTT), vol. 51, no. 2.
W. Lu, Q. Sun, and A. Li, "Improved CLAHE algorithm based on independent component analysis," in Proc. 3rd Int. Conf. Electron Inf. Technol. (EIT), IEEE, 2024, pp. 920–923.
J. Li et al., "DSFD: Dual shot face detector," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2019, pp. 5060–5069.
A. Neubeck and L. Van Gool, "Efficient non-maximum suppression," in Proc. 18th Int. Conf. Pattern Recognit. (ICPR), vol. 3, 2006, pp. 850–855.
L. N. Soni and A. A. Waoo, "A review of recent advances in methodologies for face detection," International Journal of Current Engineering and Technology, vol. 13, no. 2, pp. 86–92, 2023.
X. Yu, J. Zhang, W. Ma, and X. Zheng, "Single-stage face detection under extremely low-light conditions," in Proc. IEEE/CVF Int. Conf. Comput. Vis. Workshops (ICCVW), 2021, pp. 3523–3532.
L. N. Soni "LNSONI Human Face Dataset", Mendeley Data, V1, (2024), doi: 10.17632/rbczppyyx8.1
J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
Y. P. Loh and C. S. Chan, "Getting to know low-light images with the exclusively dark dataset," Comput. Vis. Image Underst., vol. 178, pp. 30–42, 2019.
L. N. Soni, A. Datar, S. Datar "Implementation of Viola-Jones Algorithm based approach for human face detection", International Journal of Current Engineering and Technology, Volume-7 Issue-5 Pp. 1819-1823.
X. Guo, Y. Li, and H. Ling, "LIME: Low-light image enhancement via illumination map estimation," IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993, 2017.
License

This work is licensed under a Creative Commons Attribution 4.0 International License.