Video Detection and Fusion Tracking for the Targets in Traffic Scenario
| Main Authors: | Pan Li, Jianing Zhang, Chengxi Han, Luping Xu, Baoguo Feng, Yongjun Zhou, Sen Zhang, Bo Zhang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2024-01-01 |
| Series: | Journal of Electrical and Computer Engineering |
| Online Access: | http://dx.doi.org/10.1155/jece/5279061 |
| author | Pan Li, Jianing Zhang, Chengxi Han, Luping Xu, Baoguo Feng, Yongjun Zhou, Sen Zhang, Bo Zhang |
|---|---|
| collection | DOAJ |
| description | In smart transportation systems, obtaining timely and accurate information about the location, speed, category, and shape of traffic targets requires more than a single sensor can offer. This shortfall has prompted growing attention to, and rapid development of, fusion systems built around radar and video. Radar measures speed well but suffers from false tracks, redundant tracks, and poor visual interpretability when tracking traffic targets. Video, by contrast, is highly interpretable but struggles with target adhesion, low detection confidence, and difficulty in measuring speed. The complementary nature of the two sensors underscores the value of fusion in overcoming the limitations of either sensor alone and improving the accuracy and reliability of smart transportation systems. This paper introduces a fusion tracking method that combines radar and video data to track traffic targets. By exploiting the target's morphological information at different pixel positions, the method supplements the radar data and thereby improves the accuracy of data association. Experimental results show that the optimal-data-fusion association algorithm proposed in this paper improves performance by 3.86% and 2.13% over the traditional fusion method and IOU-matching fusion, respectively, and is more robust in complex environments. |
| format | Article |
| id | doaj-art-ee28b76c1109498cb2110d1bfe7d533c |
| institution | Kabale University |
| issn | 2090-0155 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | Wiley |
| record_format | Article |
| series | Journal of Electrical and Computer Engineering |
| author affiliations | Pan Li, Jianing Zhang, Chengxi Han, Luping Xu, Sen Zhang, Bo Zhang: School of Aerospace Science and Technology; Baoguo Feng: Hebei Deguanlong Technology Co. Ltd; Yongjun Zhou: Science and Technology on Near-Surface Detection Laboratory |
| title | Video Detection and Fusion Tracking for the Targets in Traffic Scenario |
| url | http://dx.doi.org/10.1155/jece/5279061 |
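The abstract above compares the proposed association scheme against IOU-matching fusion. As background only, the sketch below shows a minimal IOU-based association between radar targets projected into the image plane and video bounding boxes; it is a generic baseline under assumed conventions (corner-format pixel boxes, a 0.3 IOU threshold, greedy one-to-one matching), not the paper's optimal-data-fusion algorithm.

```python
# Illustrative sketch, not the paper's method: greedy IOU association between
# radar detections (already projected to pixel coordinates) and video boxes.
# Box format, threshold, and matching strategy are assumptions for this example.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def associate(radar_boxes: List[Box], video_boxes: List[Box],
              iou_threshold: float = 0.3) -> List[Tuple[int, int]]:
    """Greedy one-to-one matching: take the highest-IOU pairs first,
    keep only those above the threshold."""
    candidates = [(iou(r, v), i, j)
                  for i, r in enumerate(radar_boxes)
                  for j, v in enumerate(video_boxes)]
    candidates.sort(reverse=True)  # best overlaps first
    matched_r, matched_v, pairs = set(), set(), []
    for score, i, j in candidates:
        if score < iou_threshold:
            break  # remaining candidates overlap too little
        if i in matched_r or j in matched_v:
            continue  # each detection may be used at most once
        matched_r.add(i)
        matched_v.add(j)
        pairs.append((i, j))
    return pairs


if __name__ == "__main__":
    radar = [(100, 120, 160, 200), (300, 90, 360, 180)]   # radar targets in pixel space
    video = [(105, 118, 162, 205), (500, 100, 560, 190)]  # video detections
    print(associate(radar, video))  # prints [(0, 0)]: only the first pair overlaps
```

A threshold-based greedy matcher like this is the usual IOU-fusion baseline; the paper's contribution, per the abstract, is to enrich the association with the target's morphological information at different pixel positions rather than relying on box overlap alone.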