A Lightweight Residual Network for Unsupervised Deformable Image Registration

Bibliographic Details
Main Authors: Ahsan Raza Siyal, Astrid Ellen Grams, Markus Haltmeier
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10786016/
Description
Summary: Unsupervised deformable volumetric image registration is crucial for various applications, such as medical imaging and diagnosis. Recently, learning-based methods have achieved remarkable success in this domain. Due to their strong global modeling capabilities, transformers outperform convolutional neural networks (CNNs) in registration tasks. However, transformers rely on large models with vast parameter sets, require significant computational resources, and demand extensive amounts of training data to achieve meaningful results. While existing CNN-based image registration methods provide rich local information, their limited global modeling capabilities hinder their ability to capture long-range interactions, which restricts their overall performance. In this work, we propose a novel CNN-based registration method that enlarges the receptive field, maintains a low parameter count, and delivers strong results even on limited training datasets. Specifically, we use a residual U-Net architecture, enhanced with embedded parallel dilated-convolutional blocks, to expand the receptive field effectively. The proposed method is evaluated on inter-patient and atlas-to-patient datasets. We show that its performance is comparable to, and slightly better than, that of transformer-based methods while using only 1.5% of their parameter count.
ISSN: 2169-3536
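
To make the architectural idea in the summary concrete, the following is a minimal PyTorch sketch of a parallel dilated-convolution block with a residual connection. The class name ParallelDilatedBlock, the dilation rates (1, 2, 4), the InstanceNorm/LeakyReLU layers, and the channel count in the usage example are illustrative assumptions, not the authors' implementation; the block design in the paper may differ.

import torch
import torch.nn as nn


class ParallelDilatedBlock(nn.Module):
    # Hypothetical parallel dilated-convolution block with a residual
    # connection; dilation rates and layer choices are assumptions made
    # for illustration, not taken from the paper.
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # kernel size 3 with padding == dilation preserves the spatial size
                nn.Conv3d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.InstanceNorm3d(channels),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for d in dilations
        ])
        # 1x1x1 convolution fuses the concatenated branch outputs back to `channels`
        self.fuse = nn.Conv3d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        # Each branch sees the same input at a different dilation rate,
        # widening the effective receptive field without adding depth.
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        # Residual connection keeps the block lightweight and easy to train.
        return x + self.fuse(out)


if __name__ == "__main__":
    block = ParallelDilatedBlock(channels=16)
    vol = torch.randn(1, 16, 32, 32, 32)  # (batch, channels, depth, height, width)
    print(block(vol).shape)               # torch.Size([1, 16, 32, 32, 32])

Combining a few parallel branches at increasing dilation rates grows the receptive field with the largest rate while adding comparatively few parameters, which is consistent with the paper's stated goal of a lightweight CNN that still captures long-range context; in the proposed network, such blocks would be embedded inside a residual U-Net rather than used standalone as in this sketch.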