The Accuracy of 3D Mesh Generation of Neuralangelo Versus Photogrammetry and LiDAR Technology


Bibliographic Details
Main Authors: Ali Farhat, A. Sattar, Alyssa Visalli, Brian Jones
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10990256/
Description
Summary: Photogrammetry and LiDAR have become essential tools for preserving evidence by creating point clouds of objects, such as vehicles, and of environments. While effective, these methods often struggle to capture fine details during the mesh generation phase. Neural surface reconstruction methods, such as Neuralangelo, offer a solution by using neural networks to extract 3D geometry from images. Neuralangelo enhances this process by combining numerical gradients with coarse-to-fine optimization for better detail control. In this study, we evaluated Neuralangelo’s accuracy against traditional photogrammetry and LiDAR techniques for mesh generation. Our experiments involved scanning 3D-printed objects of varying complexity. Neuralangelo produced more geometrically accurate meshes than industry-standard tools like Polycam and Pix4DCatch. For example, at ±3 mm, Neuralangelo showed a 53.8% improvement over Pix4DCatch and 29.5% over Polycam, with similar results at smaller tolerances. Everyday objects, like chairs, were also scanned, reinforcing Neuralangelo’s superior performance. These findings suggest that Neuralangelo is a promising alternative to traditional methods, offering greater accuracy in capturing complex details during mesh generation. This technology could be especially useful in accident reconstruction for evidence preservation, allowing detailed mesh generation of crushed vehicles from video captured with common smartphones.
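As a rough illustration of the tolerance-based comparison described in the summary, the sketch below computes the share of reconstructed surface points that fall within a given distance (e.g. ±3 mm) of a reference surface. The point-cloud inputs, the nearest-neighbour matching via a KD-tree, and the variable names are illustrative assumptions, not the authors' exact evaluation protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def fraction_within_tolerance(reconstructed_pts, reference_pts, tol_mm=3.0):
    """Share of reconstructed points lying within +/- tol_mm of the reference surface.

    Both inputs are (N, 3) arrays of surface samples in millimetres. The reference
    surface is approximated by its sample points via nearest-neighbour lookup; this
    is an assumption, and the paper's exact mesh-to-mesh metric may differ.
    """
    tree = cKDTree(reference_pts)             # index reference samples for fast queries
    dists, _ = tree.query(reconstructed_pts)  # unsigned distance to nearest reference point
    return float(np.mean(dists <= tol_mm))

# Hypothetical usage: compare two reconstructions of the same 3D-printed object.
# neuralangelo_pts, pix4d_pts, and reference_pts would be sampled from the meshes.
# acc_na  = fraction_within_tolerance(neuralangelo_pts, reference_pts, tol_mm=3.0)
# acc_p4d = fraction_within_tolerance(pix4d_pts, reference_pts, tol_mm=3.0)
# improvement = (acc_na - acc_p4d) / acc_p4d * 100.0  # relative improvement at +/-3 mm
```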
ISSN: 2169-3536