DMA‐Net: A dual branch encoder and multi‐scale cross attention fusion network for skin lesion segmentation


Bibliographic Details
Main Authors: Guangyao Zhai, Guanglei Wang, Qinghua Shang, Yan Li, Hongrui Wang
Format: Article
Language: English
Published: Wiley 2024-12-01
Series: IET Image Processing
Online Access: https://doi.org/10.1049/ipr2.13265
Description
Summary: Automatic segmentation of skin lesions is an important step in computer-aided diagnosis. However, lesion areas vary significantly in size and shape, and their low contrast with normal skin tissue makes boundaries hard to distinguish, so segmentation errors are likely and the task is highly challenging. To overcome these difficulties, this paper proposes a medical image segmentation architecture, the dual-branch encoder and multi-scale cross attention fusion network (DMA-Net), whose dual-branch encoder pairs a convolutional neural network with an improved channel-enhanced Mamba to comprehensively extract local and global information from dermoscopy images. Additionally, to enhance the interaction and fusion of local and global features, a multi-scale cross attention fusion module cross-merges features in different directions and at different scales, maximizing the advantages of the dual-branch encoder and achieving precise segmentation of skin lesions. Extensive experiments on three public skin lesion datasets, ISIC-2018, ISIC-2017, and ISIC-2016, verify the effectiveness and superiority of the proposed method: the Dice similarity coefficient scores reach 81.77%, 81.68%, and 85.60%, respectively, surpassing most state-of-the-art methods.
ISSN: 1751-9659
1751-9667
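
The abstract describes the architecture only at a high level: a CNN branch for local features, a channel-enhanced Mamba branch for global context, and a multi-scale cross attention module that fuses the two. As this record contains no code, the following PyTorch sketch is purely illustrative and is not the authors' DMA-Net: the Mamba branch is replaced here by ordinary multi-head self-attention, the fusion is shown at a single scale, and all module names, channel widths, and tensor shapes are assumptions.

import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Local-feature branch: a small CNN stack standing in for the paper's CNN encoder."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # (B, dim, H, W)

class GlobalBranch(nn.Module):
    """Global-context branch. NOTE: plain self-attention over flattened tokens,
    used here as a stand-in for the channel-enhanced Mamba named in the abstract."""
    def __init__(self, in_ch=3, dim=64, heads=4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, 1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        b, _, h, w = x.shape
        t = self.proj(x).flatten(2).transpose(1, 2)    # (B, H*W, dim) token sequence
        t, _ = self.attn(t, t, t)
        return t.transpose(1, 2).reshape(b, -1, h, w)  # back to (B, dim, H, W)

class CrossAttentionFusion(nn.Module):
    """Bidirectional cross-attention: each branch queries the other, then merge."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.local_to_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_to_local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.merge = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, local_f, global_f):
        b, c, h, w = local_f.shape
        l = local_f.flatten(2).transpose(1, 2)         # (B, N, C)
        g = global_f.flatten(2).transpose(1, 2)
        l_enh, _ = self.local_to_global(l, g, g)       # local queries global context
        g_enh, _ = self.global_to_local(g, l, l)       # global queries local detail
        fused = torch.cat([l_enh, g_enh], dim=2)       # (B, N, 2C)
        fused = fused.transpose(1, 2).reshape(b, 2 * c, h, w)
        return self.merge(fused)                       # (B, C, H, W)

# Usage: run both branches on an image and predict single-channel mask logits.
x = torch.randn(1, 3, 64, 64)
local_f, global_f = ConvBranch()(x), GlobalBranch()(x)
mask_logits = nn.Conv2d(64, 1, 1)(CrossAttentionFusion()(local_f, global_f))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])

In a faithful implementation, this fusion would be repeated at several encoder scales and, per the abstract, the cross-merging would run along different spatial directions; the single-scale version above shows only the core query/key exchange between the two branches.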