An Efficient Topology Construction Scheme Designed for Graph Neural Networks in Hyperspectral Image Classification
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11113241/ |
| Summary: | Superpixel-based Graph Neural Networks (GNNs) have achieved remarkable success in hyperspectral image (HSI) classification, primarily because they capture the implicit topological structure of the data while keeping computational complexity low by propagating information between spatially adjacent superpixels. However, treating all pixel features within a superpixel as a single, identical representation may limit the model's expressive power: land cover regions in HSIs are often irregular, which creates a hard-to-resolve tension between the superpixel segmentation scale and the homogeneity of pixel labels within each superpixel. To address this issue at its root, we reinterpreted the implicit topological structure between pixels on the basis of their spectral feature similarity and spatial position dependencies. To overcome the resulting computational bottleneck, we proposed a new subgraph partitioning method and a sorting-based selection technique that retain only the important relationships in the graph, thereby constructing an a priori topology conducive to downstream tasks on HSIs of any scale. Based on this a priori topology, we designed a GCN model for learning pixel feature representations and integrated it into a unified framework. We conducted comprehensive experiments on three widely used benchmark datasets. Compared with mainstream superpixel-level GCN models, the proposed method improved the overall accuracy (OA), average accuracy (AA), and Kappa coefficient (Kappa) by an average of 2.0%, 5.4%, and 2.3%, respectively, across the three datasets. Moreover, on some datasets our method outperformed several recent multi-scale feature fusion models. We also observed that different models perform differently on land cover areas with different characteristics; combining these models with our method significantly improved classification performance. Our code is open source on GitHub: https://github.com/LittleBlackBearLiXin/An-Efficient-Topology-Construction-Scheme-Designed-for-Graph-Neural-Networks (an illustrative sketch of the topology construction follows this record). |
| ISSN: | 2169-3536 |
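
The abstract describes the method only at a high level. Below is a minimal, hypothetical sketch of what such a pixel-level topology construction and GCN could look like: edges are scored by spectral similarity within a local spatial window, only the top-k per pixel are kept (the "sorting selection"), and the image is processed block by block (the "subgraph partitioning") to bound memory. All function names, default window sizes, and the two-layer GCN layout are assumptions made for illustration, not the authors' implementation; the actual code is in the linked GitHub repository.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_pixel_topology(hsi, block=64, window=5, top_k=8):
    """Return (rows, cols, weights) edge lists for an H x W x B hyperspectral cube.

    hsi    : float array of shape (H, W, B) with B spectral bands
    block  : side length of each square block ("subgraph" partition)
    window : spatial radius of the neighbourhood considered for each pixel
    top_k  : number of most spectrally similar neighbours kept per pixel
    """
    H, W, B = hsi.shape
    feats = hsi.reshape(H * W, B).astype(np.float32)
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12  # unit-norm for cosine similarity

    rows, cols, vals = [], [], []
    for by in range(0, H, block):
        for bx in range(0, W, block):
            # Pixel indices of the current block (one subgraph at a time).
            ys, xs = np.mgrid[by:min(by + block, H), bx:min(bx + block, W)]
            for p in (ys * W + xs).ravel():
                py, px = divmod(int(p), W)
                # Candidate neighbours inside the spatial window around p.
                y0, y1 = max(py - window, 0), min(py + window + 1, H)
                x0, x1 = max(px - window, 0), min(px + window + 1, W)
                ny, nx = np.mgrid[y0:y1, x0:x1]
                nbr = (ny * W + nx).ravel()
                nbr = nbr[nbr != p]
                # Spectral cosine similarity, then "sorting selection": keep the top-k edges.
                sim = feats[nbr] @ feats[p]
                k = min(top_k, len(nbr))
                keep = nbr[np.argpartition(-sim, k - 1)[:k]]
                rows.extend([int(p)] * k)
                cols.extend(keep.tolist())
                vals.extend((feats[keep] @ feats[p]).tolist())
    return np.asarray(rows), np.asarray(cols), np.asarray(vals, dtype=np.float32)


class PixelGCN(nn.Module):
    """Two-layer GCN over the sparse pixel adjacency (illustrative layout only)."""

    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, x, adj):
        # adj is expected as a normalized torch.sparse_coo_tensor of shape (N, N).
        x = F.relu(torch.sparse.mm(adj, self.fc1(x)))
        return torch.sparse.mm(adj, self.fc2(x))
```

In practice the returned edge list would typically be symmetrized, given self-loops, and degree-normalized (D^{-1/2}(A+I)D^{-1/2}) before being packed into a `torch.sparse_coo_tensor` and passed to `PixelGCN`; those steps are omitted here for brevity.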