Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation
A vision-based Forward Collision Warning System (FCWS) is a promising driver-assist feature that can reduce road accidents and make roads safer. In practice, it is exceptionally hard to develop an accurate and efficient FCWS algorithm due to the complexity of the steps involved....
| Main Authors: | Abdul Matin, M. A. A., Ahmad Fakhri, A. S., Mohd Zaki, Hasan Firdaus, Zainal Abidin, Zulkifli, Mohd Mustafah, Y., Abd Rahman, H., Mahamud, N. H., Hanizam, S., Ahmad Rudin, N. S. |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Society of Automotive Engineers Malaysia, 2020 |
| Subjects: | TA1001 Transportation engineering (General) |
| Online Access: | http://irep.iium.edu.my/80104/ http://irep.iium.edu.my/80104/1/JSAEM-4-1-AbdulMatin.pdf |
| _version_ | 1848788903666909184 |
|---|---|
| author | Abdul Matin, M. A. A. Ahmad Fakhri, A. S. Mohd Zaki, Hasan Firdaus Zainal Abidin, Zulkifli Mohd Mustafah, Y. Abd Rahman, H. Mahamud, N. H. Hanizam, S. Ahmad Rudin, N. S. |
| author_facet | Abdul Matin, M. A. A. Ahmad Fakhri, A. S. Mohd Zaki, Hasan Firdaus Zainal Abidin, Zulkifli Mohd Mustafah, Y. Abd Rahman, H. Mahamud, N. H. Hanizam, S. Ahmad Rudin, N. S. |
| author_sort | Abdul Matin, M. A. A. |
| building | IIUM Repository |
| collection | Online Access |
| description | A vision-based Forward Collision Warning System (FCWS) is a promising driver-assist feature that can reduce road accidents and make roads safer. In practice, it is exceptionally hard to develop an accurate and efficient FCWS algorithm because of the multiple steps involved, namely vehicle detection, target-vehicle verification, and time-to-collision (TTC) estimation. These steps form an elaborate pipeline of classical computer vision methods, which limits both the robustness and the scalability of the overall system. Deep neural networks (DNNs) have shown unprecedented performance on vision-based object detection, which opens the possibility of exploring them as an effective perception tool for automotive applications. In this paper, a DNN-based single-shot vehicle detection and ego-lane estimation architecture is presented. The architecture detects vehicles and estimates ego-lanes simultaneously in a single shot, using SSD-MobileNetv2 as the backbone network. Traffic ego-lanes are defined as semantic regression points. We collected and labelled 59,068 ego-lane images and trained the feature extractor, MobileNetv2, to estimate where the ego-lanes are in an image. Once the feature extractor was trained for ego-lane estimation, the meta-architecture, a single-shot detector (SSD), was trained to detect vehicles. Our experimental results show that this method achieves real-time performance, with a total precision of 88% on the CULane dataset and 91% on our dataset for ego-lane estimation. Moreover, we achieve 63.7% mAP for vehicle detection on our dataset. The proposed architecture eliminates the elaborate multi-step pipeline otherwise needed for an FCWS application. The proposed method runs in real time at 60 fps on a standard PC with an Nvidia GTX1080, demonstrating its potential to run on an embedded device for FCWS. |
| first_indexed | 2025-11-14T17:48:13Z |
| format | Article |
| id | iium-80104 |
| institution | International Islamic University Malaysia |
| institution_category | Local University |
| language | English |
| last_indexed | 2025-11-14T17:48:13Z |
| publishDate | 2020 |
| publisher | Society of Automotive Engineers Malaysia |
| recordtype | eprints |
| repository_type | Digital Repository |
| spelling | iium-80104 2020-12-09T08:27:32Z http://irep.iium.edu.my/80104/ Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation Abdul Matin, M. A. A. Ahmad Fakhri, A. S. Mohd Zaki, Hasan Firdaus Zainal Abidin, Zulkifli Mohd Mustafah, Y. Abd Rahman, H. Mahamud, N. H. Hanizam, S. Ahmad Rudin, N. S. TA1001 Transportation engineering (General) Society of Automotive Engineers Malaysia 2020-01-01 Article PeerReviewed application/pdf en http://irep.iium.edu.my/80104/1/JSAEM-4-1-AbdulMatin.pdf Abdul Matin, M. A. A. and Ahmad Fakhri, A. S. and Mohd Zaki, Hasan Firdaus and Zainal Abidin, Zulkifli and Mohd Mustafah, Y. and Abd Rahman, H. and Mahamud, N. H. and Hanizam, S. and Ahmad Rudin, N. S. (2020) Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation. Journal of the Society of Automotive Engineers Malaysia, 4 (1). pp. 61-72. ISSN 2600-8092 E-ISSN 2550-2239 http://jsaem.saemalaysia.org.my/index.php/jsaem/article/view/119/114 |
| spellingShingle | TA1001 Transportation engineering (General) Abdul Matin, M. A. A. Ahmad Fakhri, A. S. Mohd Zaki, Hasan Firdaus Zainal Abidin, Zulkifli Mohd Mustafah, Y. Abd Rahman, H. Mahamud, N. H. Hanizam, S. Ahmad Rudin, N. S. Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation |
| title | Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation |
| title_full | Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation |
| title_fullStr | Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation |
| title_full_unstemmed | Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation |
| title_short | Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation |
| title_sort | deep learning-based single-shot and real-time vehicle detection and ego-lane estimation |
| topic | TA1001 Transportation engineering (General) |
| url | http://irep.iium.edu.my/80104/ http://irep.iium.edu.my/80104/1/JSAEM-4-1-AbdulMatin.pdf |
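The core idea in the abstract, a shared backbone whose features drive both an SSD-style detection head and an ego-lane regression head in one forward pass, can be sketched roughly as follows. This is a toy NumPy illustration, not the authors' code: the feature dimension, anchor count, number of lane points, and random weight matrices are all invented stand-ins.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's architecture) of the
# single-shot idea: one shared backbone feature vector feeds two heads,
# so vehicle detection and ego-lane estimation come out of a single pass.
rng = np.random.default_rng(0)

FEAT_DIM = 32      # stand-in for a pooled MobileNetv2 feature (assumption)
NUM_ANCHORS = 4    # stand-in for SSD anchor boxes (assumption)
LANE_POINTS = 10   # each ego-lane boundary modelled as regression points

W_det = rng.standard_normal((FEAT_DIM, NUM_ANCHORS * 5))   # 4 box offsets + 1 score
W_lane = rng.standard_normal((FEAT_DIM, 2 * LANE_POINTS))  # x for left/right boundary

def single_shot(features):
    """Run both heads on the same features: detections plus ego-lane points."""
    det = (features @ W_det).reshape(NUM_ANCHORS, 5)
    # Sigmoid keeps lane x-coordinates normalised to image width, i.e. in (0, 1).
    lanes = 1.0 / (1.0 + np.exp(-(features @ W_lane)))
    return det, lanes.reshape(2, LANE_POINTS)

features = rng.standard_normal(FEAT_DIM)
boxes, lanes = single_shot(features)
print(boxes.shape, lanes.shape)  # (4, 5) (2, 10)
```

Because both heads read the same feature vector, a single inference call yields vehicle boxes and ego-lane points together, which is what removes the multi-step classical pipeline the abstract describes.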