Deep Layered Network Based on Rotation Operation and Residual Transform for Building Segmentation from Remote Sensing Images
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Sensors |
| Online Access: | https://www.mdpi.com/1424-8220/25/8/2608 |
| Summary: | Deep learning has been widely applied to building segmentation from high-resolution remote sensing (HRS) images. However, HRS images suffer from insufficient complementary representation of targets: capturing fine details and global information at the same time is difficult. To this end, we propose a novel building segmentation model for HRS images, termed C_ASegformer. Specifically, we design a Deep Layered Enhanced Fusion (DLEF) module to integrate hierarchical information from different receptive fields, thereby enhancing the feature representation of HRS information from the global to the detailed level. Additionally, we introduce a Triplet Attention (TA) module, which establishes dependency relationships between buildings and their environment through multi-directional rotation operations and residual transformations. Furthermore, we propose a Multi-Level Dilated Connection (MDC) module to efficiently capture contextual relationships across different scales at low computational cost. We conduct comparative experiments with several state-of-the-art models on three datasets: the Massachusetts, INRIA, and WHU datasets. On the Massachusetts dataset, C_ASegformer achieves an OA of 95.42%, an F1-score of 85.69%, and an mIoU of 75.46%, segmenting buildings more accurately than the compared models and demonstrating its validity and effectiveness. (Illustrative sketches of the TA and MDC mechanisms follow this record.) |
|---|---|
| ISSN: | 1424-8220 |
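The summary describes two mechanisms concretely enough to sketch. First, the Triplet Attention (TA) module builds dependencies through multi-directional rotation operations and residual transformations. The paper's exact implementation is not reproduced in this record; the following is a minimal PyTorch sketch assuming TA follows the standard rotate-to-attend pattern (Misra et al.), in which the feature tensor is permuted so attention is computed along each pair of axes and the three branches are averaged. The class and parameter names are illustrative, not the authors'.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Concatenate max- and mean-pooled maps along the channel axis."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True).values,
                          x.mean(dim=1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    """Squeeze with ZPool, score with a 7x7 conv, gate the input."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))

class TripletAttention(nn.Module):
    """Assumed rotate-to-attend form of the TA module (a sketch, not the paper's code)."""
    def __init__(self):
        super().__init__()
        self.cw = AttentionGate()  # channel-width branch
        self.ch = AttentionGate()  # channel-height branch
        self.hw = AttentionGate()  # plain spatial branch

    def forward(self, x):  # x: (B, C, H, W)
        # "Rotation": permute so each gate attends over a different axis pair,
        # then permute back; averaging the gated branches preserves a
        # residual-like identity path through each multiplication.
        x_cw = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        x_ch = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        x_hw = self.hw(x)
        return (x_cw + x_ch + x_hw) / 3.0
```

The block is shape-preserving, so `TripletAttention()(torch.randn(1, 64, 128, 128))` returns a tensor of the same size and the module can be dropped between encoder stages.

Second, the Multi-Level Dilated Connection (MDC) module captures multi-scale context at low computational cost. A common way to realize this is a set of parallel 3×3 convolutions with increasing dilation rates whose outputs are concatenated and fused; the rates (1, 2, 4, 8) and the 1×1-conv fusion below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiDilatedBlock(nn.Module):
    """Assumed MDC-style block: parallel dilated 3x3 convs, concat, 1x1 fuse."""
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        # Each branch keeps the spatial size (padding == dilation for a 3x3
        # kernel), while larger rates see progressively wider context.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```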