Recent advances in deep learning have dramatically improved the accuracy and efficiency of image registration. Yet these algorithms remain dependent on the data they are trained on, often registering out-of-distribution images inaccurately. To address this data dependency, we propose SynthMorph, a strategy for learning contrast-invariant registration without acquired images. By exposing networks to a wide landscape of unrealistic synthetic data during training, SynthMorph enables robust registration across a broad range of real image types and greatly reduces the need to retrain models.
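Conceptually, contrast invariance arises because the training images are synthesized from label maps with randomly drawn intensities per label, and the registration is supervised by a label-overlap (soft Dice) loss rather than an intensity-similarity loss. The following is a minimal NumPy sketch of these two ingredients only; the function names, noise levels, and normalization choices are ours for illustration and do not reproduce the authors' implementation.

import numpy as np

def synthesize_image(label_map, rng=None):
    # Draw a random mean intensity for each label and add Gaussian noise,
    # yielding an image of arbitrary (possibly unrealistic) contrast.
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(label_map.shape, dtype=np.float32)
    for lab in np.unique(label_map):
        mask = label_map == lab
        mean = rng.uniform(0.0, 1.0)
        image[mask] = rng.normal(mean, 0.05, size=int(mask.sum()))
    image -= image.min()
    image /= image.max() + 1e-8  # min-max normalize to [0, 1]
    return image

def soft_dice_loss(moved_onehot, fixed_onehot, eps=1e-6):
    # Mean soft Dice loss over label channels; arrays are (labels, *spatial).
    axes = tuple(range(1, fixed_onehot.ndim))
    inter = 2.0 * (moved_onehot * fixed_onehot).sum(axis=axes)
    denom = (moved_onehot ** 2).sum(axis=axes) + (fixed_onehot ** 2).sum(axis=axes)
    return 1.0 - float(np.mean((inter + eps) / (denom + eps)))

# Illustration: two images of the same (toy) anatomy but unrelated contrast.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=(32, 32, 32))
moving, fixed = synthesize_image(labels, rng), synthesize_image(labels, rng)

Because the loss sees only label overlap, the image appearance is free to vary arbitrarily between training samples, which is what pushes the network toward features that do not depend on contrast.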
@article{hoffmann2022synthmorph,
  title={SynthMorph: learning contrast-invariant registration without acquired images},
  author={Hoffmann, Malte and Billot, Benjamin and Greve, Douglas N and Iglesias, Juan Eugenio and Fischl, Bruce and Dalca, Adrian V},
  journal={IEEE Transactions on Medical Imaging},
  volume={41},
  number={3},
  pages={543--558},
  year={2022},
  publisher={IEEE}
}
The authors thank Danielle F. Pace for help with computing surface distances. The research project benefited from computational hardware generously provided by the Massachusetts Life Sciences Center.