SS3DNet-AF: A Single-Stage, Single-View 3D Reconstruction Network with Attention-Based Fusion

Bibliographic Details
Main Authors: Muhammad Awais Shoukat, Allah Bux Sargano, Alexander Malyshev, Lihua You, Zulfiqar Habib
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Applied Sciences
Online Access:https://www.mdpi.com/2076-3417/14/23/11424
Description
Summary: Learning object shapes from a single image is challenging due to variations in scene content, geometric structure, and environmental factors, which create significant disparities between 2D image features and their corresponding 3D representations and hinder the effective training of deep learning models. Existing learning-based approaches can be divided into two-stage and single-stage methods, each with limitations. Two-stage methods often rely on generating intermediate proposals by searching for similar structures across the entire dataset, a process that is computationally expensive due to the large search space and high-dimensional feature matching, and one that further restricts them to predefined object categories. In contrast, single-stage methods reconstruct 3D shapes directly from images without intermediate steps, but they struggle to capture complex object geometries because of the high feature loss between image features and 3D shapes, which limits their ability to represent intricate details. To address these challenges, this paper introduces SS3DNet-AF, a single-stage, single-view 3D reconstruction network with an attention-based fusion (AF) mechanism that sharpens the focus on relevant image features, effectively capturing geometric details and generalizing across diverse object categories. The proposed method is quantitatively evaluated on the ShapeNet dataset, demonstrating accurate 3D reconstruction while avoiding the computational cost associated with traditional approaches.
ISSN:2076-3417
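
The attention-based fusion idea mentioned in the summary can be illustrated with a minimal sketch: feature vectors are weighted by their similarity to a query and summed, so more relevant features dominate the fused representation. This is a generic single-query scaled dot-product attention example; the function names and the specific formulation are illustrative assumptions, not the architecture from the paper.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(query, features):
    """Fuse feature vectors into one vector, weighting each by its
    scaled dot-product similarity to the query (hypothetical helper,
    not the paper's actual AF module)."""
    d = len(query)
    # Similarity of each feature to the query, scaled by sqrt(d).
    scores = [sum(q * f for q, f in zip(query, feat)) / math.sqrt(d)
              for feat in features]
    weights = softmax(scores)
    # Weighted sum of the feature vectors.
    fused = [sum(w * feat[i] for w, feat in zip(weights, features))
             for i in range(d)]
    return fused, weights
```

In this toy form, a feature aligned with the query receives a larger weight, so the fused vector leans toward the most relevant input, which is the intuition behind using attention to bridge 2D features and 3D shape prediction.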