A Hierarchical Graph-Enhanced Transformer Network for Remote Sensing Scene Classification

Bibliographic Details
Main Authors: Ziwei Li, Weiming Xu, Shiyu Yang, Juan Wang, Hua Su, Zhanchao Huang, Sheng Wu
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access: https://ieeexplore.ieee.org/document/10742489/
Description
Summary: Remote sensing scene classification (RSSC) is essential in Earth observation, with applications in land-use analysis, environmental monitoring, urban development, and disaster risk assessment. However, redundant background interference, varying feature scales, and high interclass similarity in remote sensing images present significant challenges for RSSC. To address these challenges, this article proposes a novel hierarchical graph-enhanced transformer network (HGTNet) for RSSC. Initially, we introduce a dual attention (DA) module, which extracts key feature information from both the channel and spatial domains, effectively suppressing background noise. Subsequently, we meticulously design a three-stage hierarchical transformer extractor, incorporating a DA module at the bottleneck of each stage to facilitate information exchange between stages, in conjunction with the Swin transformer block to capture multiscale global visual information. Moreover, we develop a fine-grained graph neural network extractor that constructs the spatial topological relationships of pixel-level scene images, thereby aiding in the discrimination of similar complex scene categories. Finally, the visual features and spatial structural features are fully integrated and input into the classifier by employing skip connections. HGTNet achieves classification accuracies of 98.47%, 95.75%, and 96.33% on the aerial image dataset (AID), NWPU-RESISC45, and OPTIMAL-31 datasets, respectively, demonstrating superior performance compared to other state-of-the-art models. Extensive experimental results indicate that our proposed method effectively learns critical multiscale visual features and distinguishes between similar complex scenes, thereby significantly enhancing the accuracy of RSSC.
ISSN: 1939-1404, 2151-1535
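
The dual attention (DA) module described in the abstract lends itself to a short illustration. The following is a minimal PyTorch sketch of channel-then-spatial attention over a feature map, assuming a squeeze-and-excitation-style channel branch and a CBAM-style spatial branch; the paper itself does not specify these details, so the layer design, ordering, and hyperparameters below are assumptions, not HGTNet's actual implementation.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed form)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pool to (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Per-channel gating weights learned from pooled statistics.
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention over pooled channel maps (assumed form)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Summarize channels with mean and max maps, then learn a
        # per-location gate that can suppress background regions.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * w


class DualAttention(nn.Module):
    """Channel attention followed by spatial attention, matching the
    abstract's description of extracting key features "from both the
    channel and spatial domains". The sequential ordering is an assumption."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    x = torch.randn(2, 96, 56, 56)  # e.g., a batch of stage-1 feature maps
    print(DualAttention(96)(x).shape)  # torch.Size([2, 96, 56, 56])

Per the abstract, a module of this kind sits at the bottleneck of each of the three hierarchical stages, gating features produced by the Swin transformer blocks before they are passed onward; the GNN branch and the skip-connection fusion with the classifier are separate components not sketched here.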