Evaluating YOLOv8-Based Distance Estimation: A Comparison of OpenCV and Coordinate Attention Weighting in Blind Navigation Systems
DOI: https://doi.org/10.29407/intensif.v9i2.24395

Keywords: Distance Estimation, OpenCV, Coordinate Attention Weighting, Blind Navigation, Real-time Object Detection, Model Optimization

Abstract
Background: Recent developments in assistive technologies for the visually impaired have increasingly applied computer vision techniques to real-time distance estimation. However, balancing accuracy, latency, and robustness under dynamic environmental conditions remains a challenge. Objective: This study evaluated and compared the performance of OpenCV-based and Coordinate Attention Weighting (CAW) models for distance estimation in blind navigation systems, with particular focus on their effectiveness in real-time scenarios. Methods: A quantitative experimental study was conducted using an image dataset labeled with ground-truth distances. The baseline performance of the OpenCV and CAW models was first measured and compared. Targeted optimizations were then applied to the OpenCV model: adaptive image filtering, hyperparameter tuning, and integration of a Kalman filter. Results: In the initial evaluation, CAW achieved a higher baseline accuracy of 88% than OpenCV. After optimization, however, OpenCV's accuracy improved by 15%, reaching approximately 85%. The optimized OpenCV model also showed lower latency, outperforming CAW in real-time detection speed, and exhibited superior robustness to CAW under varying lighting and motion conditions. Conclusion: The findings suggest that, with proper optimization, OpenCV can match or exceed CAW on key performance aspects, making it a viable and efficient alternative for real-time distance estimation in blind navigation systems. Future research should explore further model integration and hardware acceleration for deployment in wearable devices.
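The optimization pipeline described above combines monocular distance estimation from detector bounding boxes with Kalman-filter smoothing. The sketch below is an illustrative reconstruction, not the authors' implementation: it assumes the classic pinhole model (distance = known object width × focal length / bounding-box pixel width, as commonly used with OpenCV) and a minimal one-dimensional Kalman filter; `KNOWN_WIDTH_M`, `FOCAL_LENGTH_PX`, and the noise parameters are hypothetical calibration values.

```python
# Illustrative sketch: pinhole-model distance estimation from a detector's
# bounding-box width, smoothed with a minimal 1-D Kalman filter.
# Calibration constants below are assumed, not taken from the paper.

KNOWN_WIDTH_M = 0.45     # assumed real-world width of the target object (metres)
FOCAL_LENGTH_PX = 700.0  # assumed camera focal length (pixels)

def estimate_distance(bbox_width_px: float) -> float:
    """Pinhole model: Z = (W_real * f) / w_pixels."""
    return KNOWN_WIDTH_M * FOCAL_LENGTH_PX / bbox_width_px

class ScalarKalman:
    """Minimal 1-D constant-value Kalman filter for smoothing noisy distances."""
    def __init__(self, process_noise: float = 1e-3, measurement_noise: float = 0.05):
        self.q = process_noise
        self.r = measurement_noise
        self.x = None   # current state estimate (distance, metres)
        self.p = 1.0    # estimate covariance

    def update(self, z: float) -> float:
        if self.x is None:          # initialise on the first measurement
            self.x = z
            return self.x
        self.p += self.q                  # predict: covariance grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

if __name__ == "__main__":
    kf = ScalarKalman()
    # Noisy bounding-box widths (pixels) across consecutive frames.
    for w in [150.0, 148.0, 155.0, 151.0]:
        raw = estimate_distance(w)
        print(f"raw {raw:.2f} m -> filtered {kf.update(raw):.2f} m")
```

In a real system the bounding-box width would come from the YOLOv8 detector per frame; the Kalman step then suppresses frame-to-frame jitter in the distance readout, which is one plausible source of the latency/robustness gains reported for the optimized OpenCV model.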
License
Copyright (c) 2025 Erwin Syahrudin, Ema Utami, Anggit Dwi Hartanto, Suwanto Raharjo

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Copyright on any article is retained by the author(s).
- The author grants the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work’s authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal’s published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
- The article and any associated published material are distributed under the Creative Commons Attribution-ShareAlike 4.0 International License.