NMD-FusionNet: a multimodal fusion-based medical imaging-assisted diagnostic model for liver cancer

Bibliographic Details
Main Authors: Qing Ye, Minghao Luo, Jing Zhou, Chunlei Cheng, Lin Peng, Jia Wu
Format: Article
Language: English
Published: Springer 2025-07-01
Series: Journal of King Saud University: Computer and Information Sciences
Online Access: https://doi.org/10.1007/s44443-025-00162-8
Description
Summary: Liver cancer is a malignant tumor with high incidence and mortality rates. Computed tomography is a key imaging modality for clinical diagnosis but faces challenges such as subtle intra-class variations and the inefficiency of manual interpretation. Traditional diagnostic approaches often lack generalizability and effective feature extraction, increasing the risk of misdiagnosis. While deep learning models like ResUNet have improved diagnostic performance, issues such as information loss and inaccurate tumor boundary segmentation persist. To address these challenges, this study proposes NMD-FusionNet, a deep learning-based framework for liver cancer image segmentation and diagnosis. The framework includes a three-stage pipeline: first, a refined non-local means filtering algorithm is employed for pre-screening, discarding over 80% of non-diagnostic images using adaptive thresholding; second, a multimodal image fusion method integrates multi-phase, multi-source liver cancer image data through multi-scale decomposition and precise fusion rules to reduce noise and motion artifacts; third, a dual-path DconnNet segmentation network is constructed, incorporating a directional excitation module in the encoder and a spatial awareness unit in the decoder, guided by a boundary-constrained loss function to enhance segmentation accuracy. Evaluated on over 2,000 liver cancer images from the Second People's Hospital of Huaihua, NMD-FusionNet achieves a tumor segmentation Intersection over Union (IoU) of 83.9%, representing an 8.7% improvement over ResUNet and demonstrating superior sensitivity. This work provides a reliable computer-aided diagnostic tool for radiologists and shows strong potential for clinical application, particularly in resource-limited healthcare settings.
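
Note: the headline result above is reported as Intersection over Union (IoU) between predicted and reference tumor masks. For readers unfamiliar with the metric, the following is a minimal Python sketch, not code from the paper; the function name, mask shapes, and example regions are illustrative assumptions.

import numpy as np

def iou_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |prediction AND reference| / |prediction OR reference| for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return float((intersection + eps) / (union + eps))

if __name__ == "__main__":
    # Hypothetical 64x64 masks: a reference tumor region and a slightly shifted prediction.
    reference = np.zeros((64, 64), dtype=np.uint8)
    reference[20:40, 20:40] = 1
    prediction = np.zeros_like(reference)
    prediction[22:42, 20:40] = 1
    print(f"IoU = {iou_score(prediction, reference):.3f}")  # ~0.818 for this overlap

Under this definition, the 83.9% reported for NMD-FusionNet means the predicted and reference tumor regions overlap on 83.9% of their combined (union) area.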
ISSN: 1319-1578; 2213-1248