A Dynamic Cascade Cross-Modal Coassisted Network for AAV Image Object Detection


Bibliographic Details
Main Authors: Shu Tian, Li Wang, Lin Cao, Lihong Kang, Xian Sun, Jing Tian, Xiangwei Xing, Bo Shen, Chunzhuo Fan, Kangning Du, Chong Fu, Ye Zhang
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access: https://ieeexplore.ieee.org/document/10797700/
Description
Summary: Accurate detection of small objects plays an important role in autonomous aerial vehicle (AAV) applications. However, current works mainly extract features from unimodal images, which provide only limited discriminative features for objects, especially those of small size. To address this issue, we propose a dynamic cascade cross-modal coassisted network, which integrates multimodal image fusion and fine-grained feature learning to generate powerful object semantic representations. Specifically, we design a multimodal high-order interaction module to achieve collaborative interaction of spatial details and channel dependencies between modalities, thereby enhancing object discrimination. To preserve multimodal fine-grained details, we devise a scale-adaptive dynamic feature prompt module, which dynamically prompts the backbone network to capture feature degradation cues. Meanwhile, to maintain the spatial correlation of multimodal cross-scale features and improve the quality of feature fusion, we introduce a global collaborative enhancement module into the feature pyramid network, enhancing detection accuracy across multiple scales. Extensive experiments on multimodal datasets show that our method achieves favorable performance, surpassing other state-of-the-art methods.
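
Note: the abstract does not give implementation details, so the following is a minimal, hypothetical Python (PyTorch) sketch of one way cross-modal spatial and channel interaction between an RGB and an infrared feature map could be expressed. The module name, tensor shapes, and gating design are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of cross-modal spatial/channel interaction between
# RGB and infrared (IR) feature maps. NOT the paper's implementation;
# structure and gating are assumptions used only to illustrate the idea.
import torch
import torch.nn as nn


class CrossModalInteraction(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel gate: squeeze one modality into per-channel weights for the other.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: compress one modality's channels into a spatial mask for the other.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # 1x1 convolution to fuse the two mutually enhanced feature maps.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        # Cross-apply the gates so each modality modulates the other
        # (a single shared gate per type is used here for brevity).
        rgb_enhanced = rgb * self.channel_gate(ir) * self.spatial_gate(ir)
        ir_enhanced = ir * self.channel_gate(rgb) * self.spatial_gate(rgb)
        return self.fuse(torch.cat([rgb_enhanced, ir_enhanced], dim=1))


if __name__ == "__main__":
    rgb = torch.randn(1, 64, 80, 80)  # backbone feature map from the RGB image
    ir = torch.randn(1, 64, 80, 80)   # backbone feature map from the infrared image
    fused = CrossModalInteraction(64)(rgb, ir)
    print(fused.shape)  # torch.Size([1, 64, 80, 80])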
ISSN: 1939-1404
2151-1535