Deep learning-driven brain tumor classification and segmentation using non-contrast MRI

Abstract: This study aims to enhance the accuracy and efficiency of MRI-based brain tumor diagnosis by leveraging deep learning (DL) techniques applied to multichannel MRI inputs. MRI data were collected from 203 subjects, including 100 normal cases and 103 cases with 13 distinct brain tumor types. Non-contrast T1-weighted (T1w) and T2-weighted (T2w) images were combined with their average to form RGB three-channel inputs, enriching the representation for model training. Several convolutional neural network (CNN) architectures were evaluated for tumor classification, while fully convolutional networks (FCNs) were employed for tumor segmentation. Standard preprocessing, normalization, and training procedures were rigorously followed. The RGB fusion of T1w, T2w, and their average significantly enhanced model performance. The classification task achieved a top accuracy of 98.3% using the Darknet53 model, and segmentation attained a mean Dice score of 0.937 with ResNet50. These results demonstrate the effectiveness of multichannel input fusion and model selection in improving brain tumor analysis. While not yet integrated into clinical workflows, this approach holds promise for the future development of DL-assisted decision-support tools in radiological practice.
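To make the multichannel fusion step concrete, here is a minimal Python sketch. It is not taken from the paper: the function name (fuse_t1_t2), the min-max normalization, and the equal-weight averaging are illustrative assumptions. It simply stacks a co-registered T1w slice, a T2w slice, and their voxel-wise mean into a three-channel array of the kind described in the abstract.

# Illustrative sketch (not from the paper): build a three-channel input from
# a T1w slice, a T2w slice, and their voxel-wise average.
# Assumes the two slices are already co-registered and share the same shape.
import numpy as np

def fuse_t1_t2(t1w: np.ndarray, t2w: np.ndarray) -> np.ndarray:
    """Return an (H, W, 3) array: channel 0 = T1w, channel 1 = T2w,
    channel 2 = their mean, each min-max normalized to [0, 1]."""
    def normalize(img: np.ndarray) -> np.ndarray:
        img = img.astype(np.float32)
        rng = img.max() - img.min()
        return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

    t1n, t2n = normalize(t1w), normalize(t2w)
    avg = (t1n + t2n) / 2.0
    return np.stack([t1n, t2n, avg], axis=-1)

# Example with synthetic arrays standing in for MRI slices
t1 = np.random.rand(256, 256).astype(np.float32)
t2 = np.random.rand(256, 256).astype(np.float32)
rgb = fuse_t1_t2(t1, t2)
print(rgb.shape)  # (256, 256, 3)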

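The segmentation results are summarized by a mean Dice score. As a reference for that metric only (this is not the authors' evaluation code), a minimal sketch of the Dice coefficient on binary masks:

# Illustrative sketch (not from the paper): Dice coefficient measuring overlap
# between a predicted mask and a reference tumor mask.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |P intersect T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two overlapping square masks
pred = np.zeros((64, 64), dtype=bool)
pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool)
truth[15:45, 15:45] = True
print(round(dice_score(pred, truth), 3))  # ~0.694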
Bibliographic Details
Main Authors: Nan-Han Lu (Department of Radiology, E-DA Cancer Hospital, I-Shou University), Yung-Hui Huang (Department of Medical Imaging and Radiological Science, I-Shou University), Kuo-Ying Liu (Department of Radiology, E-DA Cancer Hospital, I-Shou University), Tai-Been Chen (Department of Radiological Technology, Teikyo University)
Format: Article
Language: English
Published: Nature Portfolio, 2025-07-01
Series: Scientific Reports
ISSN: 2045-2322
Subjects: Artificial intelligence; Brain MRI; Convolutional neural networks (CNNs); Fully convolutional networks (FCNs); Deep learning; Tumor classification
Online Access: https://doi.org/10.1038/s41598-025-13591-2