Accurate and Efficient LiDAR SLAM by Learning Unified Neural Descriptors

Bibliographic Details
Main Authors: Baihe Feng, Ying Zhang
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10982267/
Description
Summary: Point clouds generated by LiDAR sensors have been widely exploited in Simultaneous Localization and Mapping (SLAM). However, existing LiDAR SLAM approaches based on hand-crafted features tend to produce representations that are either overly sparse or overly dense, leading to low-fidelity map construction or severe scalability problems. Recent deep learning-based features, when used within existing SLAM frameworks, can also suffer from error accumulation over time. To address these issues, we propose a unified architecture named DeepPointMap++, which enables both memory-efficient map representation and accurate multi-scale localization. We design a deep encoder to extract highly representative unified neural descriptors from the input point clouds and propose a novel deep decoder to find their correspondences. The architecture also incorporates a foreground-background classifier during feature extraction, effectively separating dynamic foreground objects from the static background to improve the final localization and mapping quality. In experiments, DeepPointMap++ outperformed other state-of-the-art methods on multiple autonomous-driving benchmarks. We also showcase the versatility of our framework by extending it to the more challenging setting of multi-agent collaborative SLAM.
ISSN: 2169-3536