MM-BiFPN: Multi-Modality Fusion Network With Bi-FPN for MRI Brain Tumor Segmentation

Bibliographic Details
Main Authors: Nur Suriza Syazwany, Ju-Hyeon Nam, Sang-Chul Lee
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9632555/
Description
Summary: For medical imaging tasks, it is common practice to collect a multi-modality image dataset, since experts prefer to combine multiple imaging devices when diagnosing a disease. Each modality highlights different aspects of the target structure; in our case, the task is brain tumor segmentation in magnetic resonance imaging (MRI). For such tasks, researchers tend to feed all modalities into the network as a single combined input for feature extraction, neglecting the complex relationships between modalities. Meanwhile, encoder-decoder models with residual connections that transfer information from high-resolution to lower-resolution feature maps are no longer novel in medical segmentation. In this work, we propose a multi-modality fusion network with a bi-directional feature pyramid network (MM-BiFPN) that uses an individual encoder for each of the four modalities (FLAIR, T1-weighted, T1-c, and T2-weighted), focusing on the exploitation of the complex relationships among the modalities. In addition, a bi-directional feature pyramid network (Bi-FPN) layer aggregates the modality features to capture cross-modality relationships and multi-scale features. We conducted our experiments on the MICCAI BraTS2018 and MICCAI BraTS2020 brain tumor segmentation challenge datasets. We also carried out two ablation studies: one comparing different cross-scale modality-fusion networks, and one varying the modality settings to measure the contribution of each modality to tumor detection. With missing modalities, our method achieves comparable results, demonstrating that it is robust for brain tumor segmentation.
ISSN: 2169-3536
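
The record contains no implementation, but the architecture the summary describes (one encoder per MRI modality, with a Bi-FPN-style layer fusing the per-modality features) can be illustrated. Below is a minimal PyTorch sketch under stated assumptions: 2D single-channel slices, a toy two-layer encoder, the fast normalized fusion weighting from the original Bi-FPN (Tan et al., EfficientDet), and four output classes. All module names, channel widths, and hyperparameters are illustrative assumptions, not the authors' code.

# Minimal sketch (not the authors' code): one small CNN encoder per MRI
# modality, followed by a simplified Bi-FPN-style weighted fusion of the
# per-modality feature maps. Channel sizes, depths, and the number of
# output classes are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Tiny encoder applied independently to one modality (e.g. FLAIR)."""
    def __init__(self, out_ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class BiFPNFusion(nn.Module):
    """Fast normalized fusion over the four modality feature maps,
    as used inside Bi-FPN (EfficientDet)."""
    def __init__(self, n_inputs: int = 4, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))  # learnable fusion weights
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.w)
        w = w / (w.sum() + self.eps)  # normalize to a convex combination
        return sum(wi * f for wi, f in zip(w, feats))

class MMFusionSketch(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        # One encoder per modality: FLAIR, T1-weighted, T1-c, T2-weighted.
        self.encoders = nn.ModuleList(ModalityEncoder() for _ in range(4))
        self.fusion = BiFPNFusion(n_inputs=4)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, modalities):
        feats = [enc(m) for enc, m in zip(self.encoders, modalities)]
        return self.head(self.fusion(feats))

# Usage: four single-channel 2D slices, one per modality.
model = MMFusionSketch()
slices = [torch.randn(1, 1, 128, 128) for _ in range(4)]
logits = model(slices)  # shape: (1, 4, 128, 128)

The separate encoders mirror the paper's central design choice: each modality is processed independently before fusion, so modality-specific features are preserved rather than being averaged away by early channel concatenation, and the learned fusion weights let the network down-weight a missing or uninformative modality.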