MobileDepth: Monocular Depth Estimation Based on Lightweight Vision Transformer
With the rise of deep learning, monocular depth estimation based on convolutional neural networks (CNNs) has made impressive progress. CNNs excel at extracting local features from a single image; however, they cannot model long-range dependencies, which substantially affects...
Saved in:

| Main Authors: | Yundong Li, Xiaokun Wei |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Taylor & Francis Group, 2024-12-01 |
| Series: | Applied Artificial Intelligence |
| Online Access: | https://www.tandfonline.com/doi/10.1080/08839514.2024.2364159 |
Similar Items

- Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation
  by: Wei-Jong Yang, et al.
  Published: (2024-12-01)
- Lightweight Self-Supervised Monocular Depth Estimation Through CNN and Transformer Integration
  by: Zhe Wang, et al.
  Published: (2024-01-01)
- Eite-Mono: An Extreme Lightweight Architecture for Self-Supervised Monocular Depth Estimation
  by: Chaopeng Ren
  Published: (2024-01-01)
- The effect of depth data and upper limb impairment on lightweight monocular RGB human pose estimation models
  by: Gloria-Edith Boudreault-Morales, et al.
  Published: (2025-02-01)
- CAPDepth: 360 Monocular Depth Estimation by Content-Aware Projection
  by: Xu Gao, et al.
  Published: (2025-01-01)