Real-Time Pedestrian Surveillance System Based on a Deep Learning Object Detection Algorithm
DOI: https://doi.org/10.52783/jns.v14.2762
Keywords: Pedestrian, YOLO, Deep Learning, Jetson Nano, Object Detection
Abstract
Smart crosswalks are an emerging approach to traffic management that combines Internet of Things (IoT) sensors with real-time traffic-signal control to reduce the risk of pedestrian accidents. Advances in deep learning object detection algorithms have made effective real-time pedestrian detection systems practical. The YOLO (You Only Look Once) family, known for its accuracy and fast inference, has improved substantially with each release; in particular, YOLOv8 outperforms its predecessors through the C2f module, the PAN module, and an anchor-free approach. This study builds a comprehensive pedestrian dataset, evaluates several YOLO versions on it, and identifies through practical training the version best suited to pedestrian detection. The trained model is then deployed on a Jetson Nano board to create an on-device pedestrian detection system, and simulation tests confirm that it detects pedestrians accurately.
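As a concrete illustration of the workflow described in the abstract, the sketch below shows how a YOLOv8 model could be trained on a custom pedestrian dataset and prepared for Jetson Nano deployment using the Ultralytics Python API. This is not the authors' code: the dataset configuration file (pedestrian.yaml), the choice of the yolov8n checkpoint, and the training hyperparameters are illustrative assumptions, and the ONNX-to-TensorRT conversion on the Nano is only indicated in a comment.

```python
# Minimal sketch (not the paper's actual pipeline): train a YOLOv8 model on a
# custom pedestrian dataset and export it for edge deployment.
from ultralytics import YOLO

# Start from a pretrained YOLOv8 nano checkpoint, small enough for a Jetson Nano.
model = YOLO("yolov8n.pt")

# Train on a pedestrian dataset described by a YOLO-format data file
# (hypothetical path; it lists the train/val image folders and class names).
model.train(data="pedestrian.yaml", epochs=100, imgsz=640)

# Evaluate on the validation split, e.g. to compare against other YOLO versions.
metrics = model.val()
print(metrics.box.map50)  # mAP@0.5 for the pedestrian class

# Export to ONNX; on the Jetson Nano the ONNX model can then be converted to a
# TensorRT engine (for example with trtexec) for real-time on-device inference.
model.export(format="onnx")
```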
Copyright © by the authors. Licensee TAETI, Taiwan. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.