Enhancing Object Estimation by Camera-LiDAR Sensor Fusion Using IMM-KF With Error Characteristics in Autonomous Robot Systems

In autonomous robot systems, accurate object recognition and estimation are crucial for ensuring reliable performance, and sensor fusion techniques that combine complementary sensors have proven to be effective in achieving this. This paper proposes a Light Detection And Ranging (LiDAR) and a camera sensor fusion method to improve object recognition and estimation performance in autonomous robot systems. We first calibrate the camera and LiDAR sensor to implement the proposed method. Then, data association is performed between the LiDAR sensor’s data and the object’s bounding box data, which is identified through the camera sensor with a deep learning algorithm. To improve the performance of object recognition and estimation, we identified the limitations of single-sensor recognition and measurement and established measurement noise covariance through the analysis of each sensor’s distance measurement errors. After that, we applied an Interacting Multiple Model (IMM)-Kalman Filter (KF) considering the pre-analyzed error characteristics. The usefulness of the proposed method was validated through scenario-based experiments. Experimental results show that the proposed sensor fusion method significantly improves the accuracy of object estimation and extends the Field of View (FoV) of the sensors over the conventional methods.
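One step the abstract describes is associating LiDAR returns with the camera detector's bounding boxes after calibration. A minimal sketch of that association, assuming a calibrated 3x4 pinhole projection matrix `P` that maps LiDAR points (already transformed into the camera frame) to pixels; the matrix, point values, and box coordinates below are illustrative assumptions, not taken from the paper:

```python
def project_point(P, xyz):
    """Project a 3-D point (x, y, z) to pixel coordinates (u, v) with a 3x4 matrix P."""
    x, y, z = xyz
    u_h = P[0][0] * x + P[0][1] * y + P[0][2] * z + P[0][3]
    v_h = P[1][0] * x + P[1][1] * y + P[1][2] * z + P[1][3]
    w_h = P[2][0] * x + P[2][1] * y + P[2][2] * z + P[2][3]
    return u_h / w_h, v_h / w_h

def associate(points, bbox, P):
    """Return the LiDAR points whose projections fall inside a camera bounding box.

    bbox = (u_min, v_min, u_max, v_max) in pixels, from the camera detector.
    """
    u_min, v_min, u_max, v_max = bbox
    hits = []
    for p in points:
        if p[2] <= 0:            # point behind the camera: cannot project, skip
            continue
        u, v = project_point(P, p)
        if u_min <= u <= u_max and v_min <= v <= v_max:
            hits.append(p)
    return hits

# Illustrative intrinsics: focal length 500 px, principal point (320, 240)
P = [[500, 0, 320, 0], [0, 500, 240, 0], [0, 0, 1, 0]]
pts = [(0.0, 0.0, 10.0), (5.0, 0.0, 10.0), (0.0, 0.0, -5.0)]
matched = associate(pts, (300, 220, 340, 260), P)
```

Only the first point projects inside the box (to the principal point, since it lies on the optical axis); the second lands far to the right and the third is behind the camera.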


Bibliographic Details
Main Authors: Sun Ho Lee, Woo Young Choi
Format: Article
Language:English
Published: IEEE 2024-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10813161/
author Sun Ho Lee
Woo Young Choi
collection DOAJ
description In autonomous robot systems, accurate object recognition and estimation are crucial for ensuring reliable performance, and sensor fusion techniques that combine complementary sensors have proven to be effective in achieving this. This paper proposes a Light Detection And Ranging (LiDAR) and a camera sensor fusion method to improve object recognition and estimation performance in autonomous robot systems. We first calibrate the camera and LiDAR sensor to implement the proposed method. Then, data association is performed between the LiDAR sensor’s data and the object’s bounding box data, which is identified through the camera sensor with a deep learning algorithm. To improve the performance of object recognition and estimation, we identified the limitations of single-sensor recognition and measurement and established measurement noise covariance through the analysis of each sensor’s distance measurement errors. After that, we applied an Interacting Multiple Model (IMM)-Kalman Filter (KF) considering the pre-analyzed error characteristics. The usefulness of the proposed method was validated through scenario-based experiments. Experimental results show that the proposed sensor fusion method significantly improves the accuracy of object estimation and extends the Field of View (FoV) of the sensors over the conventional methods.
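The abstract's estimator is an IMM-KF: several model-matched Kalman filters run in parallel, their states are mixed according to mode probabilities, and the mode probabilities are updated from each filter's measurement likelihood. A minimal scalar sketch of one IMM cycle (state transition F = 1, measurement H = 1, two models differing only in process noise); all noise values and the transition matrix are illustrative placeholders, not the covariances the paper derives from its sensor error analysis:

```python
import math

def kf_step(x, p, z, q, r):
    """One scalar Kalman predict+update (F = H = 1); also returns the Gaussian
    likelihood of measurement z under the predicted state."""
    x_pred, p_pred = x, p + q                 # predict
    s = p_pred + r                            # innovation covariance
    k = p_pred / s                            # Kalman gain
    x_new = x_pred + k * (z - x_pred)         # update
    p_new = (1 - k) * p_pred
    lik = math.exp(-(z - x_pred) ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s)
    return x_new, p_new, lik

def imm_step(xs, ps, mu, trans, z, qs, r):
    """One IMM cycle: mixing, model-matched KFs, mode update, combination."""
    n = len(xs)
    # 1) mixing probabilities and mixed initial conditions
    c = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    x0, p0 = [], []
    for j in range(n):
        w = [trans[i][j] * mu[i] / c[j] for i in range(n)]
        xj = sum(w[i] * xs[i] for i in range(n))
        pj = sum(w[i] * (ps[i] + (xs[i] - xj) ** 2) for i in range(n))
        x0.append(xj)
        p0.append(pj)
    # 2) model-matched Kalman filtering
    out = [kf_step(x0[j], p0[j], z, qs[j], r) for j in range(n)]
    xs_new = [o[0] for o in out]
    ps_new = [o[1] for o in out]
    # 3) mode-probability update from the likelihoods
    mu_new = [out[j][2] * c[j] for j in range(n)]
    tot = sum(mu_new)
    mu_new = [m / tot for m in mu_new]
    # 4) combined (fused) estimate
    x_comb = sum(mu_new[j] * xs_new[j] for j in range(n))
    return xs_new, ps_new, mu_new, x_comb

# One cycle: both models start at 0, measurement z = 1, sticky mode transitions
xs2, ps2, mu2, x_comb = imm_step(
    [0.0, 0.0], [1.0, 1.0], [0.5, 0.5],
    [[0.9, 0.1], [0.1, 0.9]], 1.0, [0.01, 1.0], 0.5)
```

In a full tracker this cycle runs per measurement, with the measurement noise `r` switched per sensor according to the pre-analyzed error characteristics.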
format Article
id doaj-art-24e36e7d379c4bf1bc6ba660e9e80612
institution Kabale University
issn 2169-3536
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-24e36e7d379c4bf1bc6ba660e9e80612 (updated 2024-12-31T00:00:59Z)
IEEE Access, vol. 12, pp. 197247-197258, 2024-01-01. DOI: 10.1109/ACCESS.2024.3522090, IEEE document 10813161.
Sun Ho Lee (https://orcid.org/0009-0005-4015-4607), Department of Intelligent Robot Engineering, Pukyong National University, Busan, South Korea
Woo Young Choi (https://orcid.org/0000-0002-6175-5533), Department of Control and Instrumentation Engineering, Pukyong National University, Busan, South Korea
title Enhancing Object Estimation by Camera-LiDAR Sensor Fusion Using IMM-KF With Error Characteristics in Autonomous Robot Systems
topic Data association
estimation
interacting multiple model
object detection
sensor fusion
url https://ieeexplore.ieee.org/document/10813161/