See-Then-Grasp: Object Full 3D Reconstruction via Two-Stage Active Robotic Reconstruction Using Single Manipulator
Main Authors:
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/1/272
Summary: In this paper, we propose an active robotic 3D reconstruction methodology for achieving full 3D reconstruction of objects. Existing robotic 3D reconstruction approaches often struggle to cover the entire view space of an object or to reconstruct occluded regions, such as the bottom or back side. To address these limitations, we introduce a two-stage active robotic 3D reconstruction pipeline, named See-Then-Grasp (STG), that employs a robot manipulator to interact directly with the object. The manipulator moves toward the points with the highest uncertainty, ensuring efficient data acquisition and rapid reconstruction. Our method expands the view space of the object to cover every perspective, including occluded areas; over such an enlarged view space, the previous fixed view candidate approach becomes time-consuming at identifying uncertain regions. To overcome this, we propose a gradient-based next best view pose optimization method that optimizes the camera pose with respect to an uncertainty function, identifying the most uncertain regions in a short time and enabling faster and more effective reconstruction. Through experiments on synthetic objects, we demonstrate that our approach effectively addresses the next best view selection problem, achieving significant improvements in computational efficiency while maintaining high-quality 3D reconstruction. Furthermore, we validate our method on a real robot, showing that it enables full 3D reconstruction of real-world objects.
ISSN: 2076-3417
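
The gradient-based next best view optimization described in the summary can be sketched as follows. This is a minimal illustration, assuming a differentiable uncertainty function over camera poses and a PyTorch-style optimizer; the names `view_uncertainty` and `next_best_view`, the toy uncertainty model, and all parameters are hypothetical, not the authors' actual implementation.

```python
import torch

def view_uncertainty(cam_pos: torch.Tensor) -> torch.Tensor:
    """Toy differentiable uncertainty: highest when the camera, looking at
    the origin, faces an assumed 'unseen' direction of the object. The
    paper's actual uncertainty function is not reproduced here."""
    unseen_dir = torch.tensor([0.0, 0.0, -1.0])           # assumed occluded side
    view_dir = -cam_pos / cam_pos.norm().clamp(min=1e-6)  # camera looks at origin
    return torch.dot(view_dir, unseen_dir)

def next_best_view(init_pos: torch.Tensor, radius: float = 1.0,
                   steps: int = 100, lr: float = 5e-2) -> torch.Tensor:
    """Gradient ascent on view uncertainty over camera positions constrained
    to a sphere around the object, instead of scoring fixed view candidates."""
    pos = init_pos.clone().requires_grad_(True)
    opt = torch.optim.Adam([pos], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-view_uncertainty(pos)).backward()   # minimize negative uncertainty
        opt.step()
        with torch.no_grad():                 # project back onto the view sphere
            pos.mul_(radius / pos.norm().clamp(min=1e-6))
    return pos.detach()

# Example: starting on the side of the object, the optimizer drifts toward
# the most uncertain viewpoint (here, the +z pole of the toy model).
print(next_best_view(torch.tensor([1.0, 0.0, 0.0])))
```

The key design point, as the summary states it, is that the camera pose itself is the optimization variable: instead of evaluating a fixed set of candidate views, gradients of the uncertainty function steer the pose continuously toward the most uncertain region.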