Multifeature Alignment and Matching Network for SAR and Optical Image Registration
Due to the modal disparities between synthetic aperture radar (SAR) and optical images, effectively extracting modality-shared structural features is crucial for achieving accurate registration results. Considering that point features have a limited ability to describe the common structural features...
Main Authors: | Xin Hu, Yan Wu, Zhikang Li, Zhifei Yang, Ming Li |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
Subjects: | Linear contrastive learning (LCL) and quadratic contrastive learning (QCL); momentum contrastive learning (MCL); multifeature alignment; multifeature matching; synthetic aperture radar (SAR); optical image registration |
Online Access: | https://ieeexplore.ieee.org/document/10746326/ |
author | Xin Hu, Yan Wu, Zhikang Li, Zhifei Yang, Ming Li
collection | DOAJ |
description | Due to the modal disparities between synthetic aperture radar (SAR) and optical images, effectively extracting modality-shared structural features is crucial for achieving accurate registration results. Considering that point features have a limited ability to describe the common structural features between SAR and optical images, graph topology is introduced to extract edge features and thereby derive modality-shared structural features for reliable registration. In this article, we propose a registration network for multifeature alignment and matching (MFAM-RegNet) between SAR and optical images, which includes a multifeature alignment module (MFAM) and a multifeature matching module (MFMM). First, we construct an MFAM to extract and align point and edge features to mine modality-shared structural features. In MFAM, point features are extracted by graph neural networks, and edge features are constructed from the feature similarity between two keypoints. Inspired by graph matching, we design linear and quadratic contrastive learning to mine the correspondences of point and edge features within and across modalities. Second, speckle noise in SAR images inevitably produces some noisy labels, which degrade the accuracy and robustness of our supervised algorithm. Therefore, we design an MFMM to correct noisy labels and use bidirectional matching for robust matching. Guided by the intrinsic feature relationships mined by the momentum contrastive learning strategy, the labels are adaptively corrected to reduce the influence of incorrect labels on the model's performance and to achieve more stable matching results. Experiments on three publicly available SAR and optical datasets indicate that our proposed MFAM-RegNet outperforms existing state-of-the-art algorithms. (See the sketch after the record fields below.) |
format | Article |
id | doaj-art-ce17f7adfb8441d48ed234eb2cb31197 |
institution | Kabale University |
issn | 1939-1404, 2151-1535
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
spelling | doaj-art-ce17f7adfb8441d48ed234eb2cb31197 (harvested 2025-01-16T00:00:25Z): Xin Hu (https://orcid.org/0000-0003-4012-684X), Yan Wu (https://orcid.org/0000-0001-7502-2341), Zhikang Li, Zhifei Yang (https://orcid.org/0009-0004-9314-8051), and Ming Li (https://orcid.org/0000-0002-4706-5173), "Multifeature Alignment and Matching Network for SAR and Optical Image Registration," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 18, pp. 352-367, 2025-01-01, doi: 10.1109/JSTARS.2024.3492278, IEEE article no. 10746326, https://ieeexplore.ieee.org/document/10746326/. Affiliations: Remote Sensing Image Processing and Fusion Group, School of Electronic Engineering, Xidian University, Xi'an, China (X. Hu, Y. Wu, Z. Li, Z. Yang); National Key Laboratory of Radar Signal Processing, Xidian University, Xi'an, China (M. Li). |
title | Multifeature Alignment and Matching Network for SAR and Optical Image Registration |
topic | Linear contrastive learning (LCL) and quadratic contrastive learning (QCL); momentum contrastive learning (MCL); multifeature alignment; multifeature matching; synthetic aperture radar (SAR); optical image registration
url | https://ieeexplore.ieee.org/document/10746326/ |
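The description field explains that edge features are constructed from the feature similarity between keypoints, that matching is made robust by bidirectional matching, and that a momentum contrastive learning strategy guides the correction of noisy labels. The snippet below is a minimal, hypothetical sketch of those three ingredients in plain NumPy; the function names, the cosine-similarity choice, and the toy data are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of three ideas named in the description:
# (1) edge features from pairwise similarity between keypoint descriptors,
# (2) bidirectional (mutual nearest-neighbour) matching,
# (3) the momentum (EMA) parameter update used in momentum contrastive learning.
# All names here are illustrative assumptions.
import numpy as np


def edge_features(desc: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between keypoint descriptors: (N, D) -> (N, N)."""
    d = desc / (np.linalg.norm(desc, axis=1, keepdims=True) + 1e-8)
    return d @ d.T


def bidirectional_match(desc_sar: np.ndarray, desc_opt: np.ndarray):
    """Keep only mutual nearest-neighbour pairs between SAR and optical descriptors."""
    s = desc_sar / (np.linalg.norm(desc_sar, axis=1, keepdims=True) + 1e-8)
    o = desc_opt / (np.linalg.norm(desc_opt, axis=1, keepdims=True) + 1e-8)
    sim = s @ o.T
    fwd = sim.argmax(axis=1)   # best optical keypoint for each SAR keypoint
    bwd = sim.argmax(axis=0)   # best SAR keypoint for each optical keypoint
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]


def momentum_update(q_params, k_params, m=0.999):
    """Generic EMA update from momentum contrastive learning: k <- m*k + (1-m)*q."""
    return [m * k + (1.0 - m) * q for q, k in zip(q_params, k_params)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sar = rng.normal(size=(5, 32))                 # 5 toy SAR keypoint descriptors
    opt = sar + 0.05 * rng.normal(size=(5, 32))    # slightly perturbed optical counterparts
    print(edge_features(sar).shape)                # (5, 5) edge-similarity matrix
    print(bidirectional_match(sar, opt))           # mostly (i, i) correspondences
    print(momentum_update([np.ones(3)], [np.zeros(3)])[0])  # 0.001s after one EMA step
```

Running the script prints a (5, 5) edge-similarity matrix, a list of mutual nearest-neighbour correspondences, and one EMA step; in MFAM-RegNet these roles are played by learned graph features and the full MFAM/MFMM modules, which this sketch does not reproduce.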