Machine learning-based 3D deformable motion modeling for MRI-guided radiotherapy

Published in Preprint, 2024

Abstract: Background: To compensate for organ motion during a radiotherapy session, margins are normally added to the tumor region. With MR-linacs, it is now possible to monitor the motion by acquiring 2D cine-MRI in real-time. Purpose: In this paper, we propose a method to estimate the entire 3D motion given sparse information in the form of 2D images of the anatomy. Methods: The method consists of three models: two 2D motion models with forecasting capability and one 2D-to-3D extrapolation model to estimate the 3D motion at each point in time. In the experiments, we use real images from patients treated with an MR-linac system, where seven patients were used for training and two for evaluation. The experiment was two-fold: one part based on a phase-sorted 4D CT with known motion, and one part based on a cine-MRI sequence where the ground-truth 3D motion was unknown. Results: Our model estimates the 3D motion given two 2D image observations in the coronal and sagittal orientations with an average error of 0.43 mm over the entire anatomy. In the PTV, the average error was 0.82 mm and 0.56 mm for the two patients in the evaluation cohort. For the cine-MRI sequence, our approach achieved results comparable to previously published centroid tracking while also providing a complete deformable 3D motion estimate. Conclusions: We present a method to estimate full 3D motion from sparse data in the form of 2D images, suitable for the MR-linac.
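To illustrate the basic geometry of the 2D-to-3D problem (this is a minimal sketch, not the paper's learned extrapolation model), the snippet below combines in-plane displacement fields from an orthogonal coronal and sagittal slice pair into a naive dense 3D field by broadcasting each plane's motion along its unseen axis and averaging the shared superior-inferior component. The function name, array layouts, and axis conventions are assumptions made for this example.

```python
import numpy as np

def naive_3d_from_planes(cor_uv, sag_uv, shape3d):
    """Naive 2D-to-3D motion extrapolation (illustration only).

    cor_uv:  (H, W, 2) coronal in-plane motion, components (LR, SI).
    sag_uv:  (H, D, 2) sagittal in-plane motion, components (AP, SI).
    shape3d: (H, W, D) target volume shape.
    Returns a (H, W, D, 3) displacement field with components (LR, AP, SI).
    """
    H, W, D = shape3d
    field = np.zeros((H, W, D, 3))
    # LR motion is only observed in the coronal plane; replicate it along AP.
    field[..., 0] = cor_uv[:, :, 0][:, :, None]
    # AP motion is only observed in the sagittal plane; replicate it along LR.
    field[..., 1] = sag_uv[:, :, 0][:, None, :]
    # SI motion is seen in both planes; average the two observations.
    field[..., 2] = 0.5 * (cor_uv[:, :, 1][:, :, None]
                           + sag_uv[:, :, 1][:, None, :])
    return field

# Tiny usage example with uniform unit motion in both planes.
field = naive_3d_from_planes(np.ones((2, 3, 2)), np.ones((2, 4, 2)), (2, 3, 4))
```

A learned model such as the one in the paper replaces this constant extrapolation along the unseen axes with a data-driven prior over plausible anatomical deformation.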

Recommended citation: Niklas Gunnarsson, Uffe Bernchou, Faisal Mahmood, Anders Bertelsen, Peter Kimstrand, "Machine learning-based 3D deformable motion modeling for MRI-guided radiotherapy." Preprint, 2024.