DSMF-Net: Dual Semantic Metric Learning Fusion Network for Few-Shot Aerial Image Semantic Segmentation


Bibliographic Details
Main Authors: Xiyu Qi, Yidan Zhang, Lei Wang, Yifan Wu, Yi Xin, Zhan Chen, Yunping Ge
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10746596/
Description
Summary: Semantic segmentation of aerial images is crucial yet resource-intensive. Inspired by the human ability to learn rapidly, few-shot semantic segmentation offers a promising solution by utilizing limited labeled data for efficient model training and generalization. However, the intrinsic complexities of aerial images, compounded by scarce samples, often result in inadequate feature representation and semantic ambiguity, detracting from the model's performance. In this article, we propose to tackle these challenging problems via dual semantic metric learning and multisemantic feature fusion, and introduce a novel few-shot segmentation network (DSMF-Net). On the one hand, we consider the inherent semantic gap between features of graph and grid structures and the metric learning of few-shot segmentation. To exploit multiscale global semantic context, we construct scale-aware graph prototypes from different stages of the feature layers based on graph convolutional networks (GCNs), while also incorporating prior-guided metric learning to further enhance context in the high-level convolutional features. On the other hand, we design a pyramid-based fusion and condensation mechanism to adaptively merge and couple the multisemantic information from support and query images. The indication and fusion of different semantic features effectively strengthen the representation and coupling abilities of the network. We have conducted extensive experiments on the challenging iSAID-5$^{i}$ and DLRSD benchmarks. The experiments demonstrate our network's effectiveness and efficiency, yielding performance on par with state-of-the-art methods.
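
To make the two components named in the abstract more concrete, the lines below give a minimal PyTorch sketch of (1) a prior map obtained by cosine similarity between query features and a masked-average-pooled support prototype, a common form of prior-guided metric learning in few-shot segmentation, and (2) a single graph-convolution update over region nodes as a stand-in for the scale-aware graph prototypes. All function names, tensor shapes, and the adjacency construction are illustrative assumptions and are not taken from the DSMF-Net implementation.

import torch
import torch.nn.functional as F


def masked_average_prototype(support_feat, support_mask):
    # support_feat: (B, C, H, W); support_mask: (B, 1, H, W) with values in {0, 1}
    mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
    proto = (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
    return proto  # (B, C): foreground prototype pooled over the support mask


def cosine_prior(query_feat, proto):
    # Dense cosine similarity between every query location and the prototype.
    q = F.normalize(query_feat, dim=1)              # (B, C, H, W)
    p = F.normalize(proto, dim=1)[..., None, None]  # (B, C, 1, 1)
    return (q * p).sum(dim=1, keepdim=True)         # (B, 1, H, W) prior map


def gcn_step(node_feat, adj, weight):
    # One graph-convolution update: node_feat (B, N, C), adj (B, N, N), weight (C, C).
    deg = adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
    return F.relu((adj / deg) @ node_feat @ weight)  # row-normalised propagation


if __name__ == "__main__":
    B, C, H, W, N = 2, 64, 32, 32, 16
    support_feat = torch.randn(B, C, H, W)
    support_mask = (torch.rand(B, 1, H, W) > 0.5).float()
    query_feat = torch.randn(B, C, H, W)

    proto = masked_average_prototype(support_feat, support_mask)
    prior = cosine_prior(query_feat, proto)          # guidance map for a decoder

    nodes = torch.randn(B, N, C)                     # e.g. pooled region features
    adj = torch.softmax(nodes @ nodes.transpose(1, 2), dim=-1)  # hypothetical affinity graph
    w = torch.randn(C, C) * 0.01
    graph_proto = gcn_step(nodes, adj, w)            # refined graph "prototypes"
    print(tuple(prior.shape), tuple(graph_proto.shape))

In a full network, the prior map would typically be concatenated with the query features before a fusion stage such as the pyramid-based mechanism described above; here it is only computed and printed to illustrate the shapes involved.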
ISSN: 1939-1404
2151-1535