Enhanced CLIP-GPT Framework for Cross-Lingual Remote Sensing Image Captioning

Bibliographic Details
Main Authors: Rui Song, Beigeng Zhao, Lizhi Yu
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10816156/
Description
Summary: Remote Sensing Image Captioning (RSIC) aims to generate precise and informative descriptive text for remote sensing images using computational algorithms. Traditional “encoder-decoder” approaches face limitations due to their high training costs and heavy reliance on large-scale annotated datasets, hindering their practical application. To address these challenges, we propose a lightweight solution based on an enhanced CLIP-GPT framework. Our approach uses CLIP for zero-shot multimodal feature extraction from remote sensing images, then designs and optimizes a mapping network, based on an improved Transformer with adaptive multi-head attention, that aligns these features with the text space of GPT-2 to generate high-quality descriptive text. Experimental results on the Sydney-captions, UCM-captions, and RSICD datasets demonstrate that the proposed mapping network outperforms existing methods in leveraging CLIP-extracted multimodal features, leading to more accurate and stylistically appropriate text from the GPT language model. Furthermore, our method matches or exceeds traditional “encoder-decoder” baselines on the BLEU, CIDEr, and METEOR metrics while requiring only one-fifth of the training time. Experiments on an additional Chinese-English bilingual RSIC dataset further highlight a distinct advantage of the CLIP-GPT framework: its extensive multimodal pre-training gives it robust potential for cross-lingual RSIC tasks.
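
For orientation, the following is a minimal PyTorch sketch of the pipeline the summary describes: CLIP extracts a zero-shot image embedding, a Transformer-based mapping network projects it into a sequence of prefix embeddings in GPT-2's text space, and GPT-2 decodes a caption from that prefix. The names TransformerMapper, prefix_len, and caption, the greedy decoding loop, and the checkpoint choices are illustrative assumptions, and the paper's adaptive multi-head attention is approximated by a standard nn.TransformerEncoder; this is a sketch of the general approach, not the authors' implementation.

    # Hypothetical sketch: class names, hyperparameters, and checkpoints below
    # are illustrative assumptions, not the authors' released code.
    import torch
    import torch.nn as nn
    from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel, GPT2Tokenizer

    class TransformerMapper(nn.Module):
        # Projects one CLIP image embedding into a short sequence of GPT-2
        # prefix embeddings, then refines it with a plain Transformer encoder
        # (a stand-in for the paper's adaptive multi-head attention variant).
        def __init__(self, clip_dim=512, gpt_dim=768, prefix_len=10,
                     n_layers=4, n_heads=8):
            super().__init__()
            self.prefix_len, self.gpt_dim = prefix_len, gpt_dim
            self.project = nn.Linear(clip_dim, gpt_dim * prefix_len)
            layer = nn.TransformerEncoderLayer(d_model=gpt_dim, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

        def forward(self, clip_embed):                        # (B, clip_dim)
            prefix = self.project(clip_embed)                 # (B, gpt_dim * prefix_len)
            prefix = prefix.view(-1, self.prefix_len, self.gpt_dim)
            return self.encoder(prefix)                       # (B, prefix_len, gpt_dim)

    @torch.no_grad()
    def caption(image, mapper, clip_model, clip_proc, gpt2, tokenizer, max_new=30):
        # Zero-shot CLIP feature -> prefix embeddings -> greedy GPT-2 decoding.
        pixels = clip_proc(images=image, return_tensors="pt").pixel_values
        clip_embed = clip_model.get_image_features(pixel_values=pixels)  # (1, 512)
        embeds = mapper(clip_embed)                           # (1, prefix_len, 768)
        out_ids = []
        for _ in range(max_new):
            logits = gpt2(inputs_embeds=embeds).logits
            next_id = logits[:, -1, :].argmax(dim=-1)         # greedy token choice
            if next_id.item() == tokenizer.eos_token_id:
                break
            out_ids.append(next_id.item())
            next_embed = gpt2.transformer.wte(next_id).unsqueeze(1)
            embeds = torch.cat([embeds, next_embed], dim=1)   # extend the context
        return tokenizer.decode(out_ids)

    clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    mapper = TransformerMapper().eval()  # would first be trained on captioning data

In a prefix-mapping setup like this, only the mapping network is trained while CLIP and GPT-2 stay frozen, which is consistent with the reduced training cost the summary reports.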
ISSN: 2169-3536