DP-Loc: Visual Localization in 2D Maps Using an Embedded Depth Prior

Bibliographic Details
Main Authors: Kyoung Eun Kim, Joo Yong Sim
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10772238/
Description
Summary: Recent advancements in cost-effective image-based localization using 2D maps have garnered significant attention, inspired by humans’ ability to navigate with such maps. This study addresses the limitations of monocular vision-based systems, specifically inaccurate depth information and loss of geometric details, which hinder precise localization. We propose a novel neural network framework that incorporates a pretrained metric depth estimation model, such as ZoeDepth, to measure absolute distances accurately and enhance map matching between 2D maps and images. Our approach introduces two key modules: an Explicit Depth Prior Fusion (EDPF) module, which constructs a depth score volume using depth maps, and an Implicit Depth Prior Fusion (IDPF) module, which integrates depth and semantic features early through positional encoding. These modules enable a single-layer-scale classifier to learn essential features for effective localization. Notably, the IDPF model with positional encoding achieved a performance improvement of over 10% on the Mapillary dataset compared to the baseline, underscoring the advantages of combining semantic and geometric information. The proposed DP-Loc approach provides a cost-efficient solution for visual localization by leveraging publicly accessible 2D maps and monocular image inputs, making it applicable to autonomous driving, robotics, and augmented reality.
ISSN: 2169-3536
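
The abstract's description of the IDPF module, where per-pixel metric depth is embedded with a positional encoding and fused with semantic features before a classifier, can be illustrated with a minimal sketch. The module name, the sinusoidal encoding, and the channel sizes and fusion layers below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of an implicit depth-prior fusion step, assuming a sinusoidal
# encoding of metric depth (e.g., from a pretrained ZoeDepth model) concatenated
# with semantic features and fused by a small convolutional head. All names and
# hyperparameters here are hypothetical.
import math
import torch
import torch.nn as nn


def sinusoidal_depth_encoding(depth: torch.Tensor, num_freqs: int = 8) -> torch.Tensor:
    """Encode a depth map (B, 1, H, W) into 2*num_freqs sin/cos channels
    at geometrically spaced frequencies."""
    freqs = 2.0 ** torch.arange(num_freqs, device=depth.device, dtype=depth.dtype)
    scaled = depth * freqs.view(1, -1, 1, 1) * math.pi  # (B, num_freqs, H, W)
    return torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=1)


class ImplicitDepthPriorFusion(nn.Module):
    """Fuses depth positional encodings with semantic features early, so a
    downstream classifier sees joint geometric and semantic cues."""

    def __init__(self, sem_channels: int = 64, num_freqs: int = 8, out_channels: int = 64):
        super().__init__()
        self.num_freqs = num_freqs
        self.fuse = nn.Sequential(
            nn.Conv2d(sem_channels + 2 * num_freqs, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),
        )

    def forward(self, semantic_feats: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, H, W) metric depth; semantic_feats: (B, C, H, W) image features.
        depth_enc = sinusoidal_depth_encoding(depth, self.num_freqs)
        return self.fuse(torch.cat([semantic_feats, depth_enc], dim=1))


if __name__ == "__main__":
    fusion = ImplicitDepthPriorFusion()
    sem = torch.randn(2, 64, 32, 32)          # placeholder semantic features
    depth = torch.rand(2, 1, 32, 32) * 50.0   # placeholder metric depth in meters
    print(fusion(sem, depth).shape)           # torch.Size([2, 64, 32, 32])

In this sketch the fused feature map would feed the classifier mentioned in the abstract; the EDPF branch, which builds an explicit depth score volume, is not shown.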