MSHRNet: a multi-scale high-resolution network for land cover classification from high spatial resolution remote sensing images

Bibliographic Details
Main Authors: Fang Chen, Zhihui Ou, Congrong Li, Lei Wang, Bo Yu
Format: Article
Language: English
Published: Taylor & Francis Group, 2025-08-01
Series: International Journal of Digital Earth
Subjects:
Online Access: https://www.tandfonline.com/doi/10.1080/17538947.2025.2509090
Description
Summary: Land cover classification is vital for land resource management. However, challenges such as feature similarity among ground objects, blurred boundaries, and indistinct small objects persist. To address these challenges, we propose the Multi-Scale High-Resolution Network (MSHRNet) for classifying ground objects from high-resolution remote sensing images. MSHRNet is an encoder-decoder network that incorporates an attentional boundary refinement branch in the decoder to sharpen object boundaries. It features a multi-scale feature interaction module that integrates feature maps across different resolutions in the encoder and enhances the importance of these fused features using a coordinate attention module. Additionally, we introduce a Laplacian operator-based boundary loss function (LBLoss) to further improve segmentation performance. Evaluated on the GID and Huawei Ascend Cup AI + Remote Sensing Image Competition datasets, MSHRNet demonstrates robustness with mean Intersection over Union (mIoU) scores of 82.45% and 72.26%, respectively, surpassing nine recently published models by at least 1.52% and 1.01% mIoU. Moreover, when tested on the LoveDA dataset without additional training, MSHRNet exhibits strong transferability, achieving an mIoU of 18.53% and surpassing the second-best model by 2.33%. This framework represents a significant advancement in land cover classification, addressing the challenges of high-resolution imagery and generalizing across diverse datasets.
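The abstract does not specify how LBLoss is formulated; the general idea of a Laplacian operator-based boundary loss can be sketched as follows. A discrete Laplacian applied to the label map marks class-transition pixels, and the per-pixel cross-entropy is up-weighted there. All names, the kernel choice, and the `boundary_weight` parameter below are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

# 4-neighbour discrete Laplacian kernel: nonzero response at class boundaries.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def boundary_mask(labels):
    """Mark pixels where the Laplacian of the integer label map is nonzero."""
    h, w = labels.shape
    padded = np.pad(labels.astype(np.float64), 1, mode="edge")
    response = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            response[i, j] = np.sum(LAPLACIAN * padded[i:i + 3, j:j + 3])
    return (np.abs(response) > 0).astype(np.float64)

def laplacian_boundary_loss(probs, labels, boundary_weight=2.0):
    """Weighted pixel-wise cross-entropy, up-weighted on boundary pixels.

    probs:  (H, W, C) softmax probabilities.
    labels: (H, W) integer class indices.
    """
    h, w, _ = probs.shape
    eps = 1e-12
    # Cross-entropy of the true class at each pixel.
    ce = -np.log(probs[np.arange(h)[:, None], np.arange(w)[None, :], labels] + eps)
    # Boundary pixels receive weight `boundary_weight`, interior pixels 1.0.
    weights = 1.0 + (boundary_weight - 1.0) * boundary_mask(labels)
    return float(np.sum(weights * ce) / np.sum(weights))
```

In practice such a term is typically added to the main segmentation loss so that gradients concentrate on the blurred boundary regions the abstract identifies as a key difficulty.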
ISSN: 1753-8947; 1753-8955