A Generation Algorithm for “Text to Image” Based on Multi-Channel Attention

Bibliographic Details
Main Authors: Yang Yang, Ainuddin Wahid Bin Abdul Wahab, Norisma Binti Idris, Dingguo Yu, Chang Liu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11119635/
Description
Summary: Research on text-to-image generation has gained significant attention. However, existing methods rely primarily on upsampling convolution operations for feature extraction during the initial image generation stage. This approach has inherent limitations, often leading to the loss of global information and an inability to capture long-range semantic dependencies. To address these issues, this study proposes a generation algorithm for “text to image” based on multi-channel attention (TTI-MCA). The method integrates a self-supervised module into the initial image generation phase, leveraging attention mechanisms so that mappings between image features are learned autonomously. This facilitates a deep integration of contextual understanding and self-attention learning. Additionally, a feature fusion enhancement module is introduced, which combines low-resolution features from the previous stage with high-resolution features from the current stage. This allows the generation network to fully exploit both the rich semantic information of low-level features and the high-resolution detail of high-level features, ultimately producing high-quality, realistic images. Experimental results show that TTI-MCA outperforms the baseline algorithm in both Inception Score (IS) and Fréchet Inception Distance (FID), achieving superior performance on the CUB and COCO datasets. This research provides a novel approach to generating high-quality images from text.
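The abstract describes two mechanisms without implementation detail: self-attention over image features to capture long-range dependencies, and fusion of low-resolution previous-stage features with high-resolution current-stage features. The sketch below is a minimal, illustrative rendering of those two generic ideas in NumPy; the function names, random projection weights, and nearest-neighbour upsampling are assumptions for illustration and are not taken from the TTI-MCA paper.

```python
import numpy as np

def self_attention(feats):
    # feats: (H*W, C) flattened spatial feature map.
    # Single-head dot-product self-attention: every spatial position
    # attends to every other, giving each output global context.
    # In a real model the projections are learned; here they are random.
    rng = np.random.default_rng(0)
    C = feats.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    scores = q @ k.T / np.sqrt(C)              # (H*W, H*W) similarities
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)    # softmax over positions
    return attn @ v                            # context-aggregated features

def fuse(low, high):
    # low:  (h, w, C)   features from the previous (low-res) stage
    # high: (2h, 2w, C) features from the current (high-res) stage
    # Upsample the low-res map (nearest neighbour) and concatenate
    # along the channel axis so later layers see both.
    up = low.repeat(2, axis=0).repeat(2, axis=1)
    return np.concatenate([up, high], axis=-1)  # (2h, 2w, 2C)
```

The key contrast with a purely convolutional stage is visible in the `scores` matrix: attention computes a full pairwise interaction between all spatial positions in one step, whereas stacked convolutions only grow the receptive field gradually.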
ISSN: 2169-3536