Synthesizing MR imaging sequences is highly attractive for clinical practice, as single sequences are often missing or of poor quality (e.g., due to motion). Naturally, the idea arises that a target modality would benefit from multi-modal input. However, existing methods fail to scale up to non-aligned image volumes with multiple modalities, facing the common drawbacks of complex multi-modal imaging sequences. We propose a novel, scalable, multi-modal approach called DiamondGAN. Our model performs flexible non-aligned cross-modality synthesis and data infill when given multiple modalities or any arbitrary subset of them, learning structured information in an end-to-end fashion. We synthesize two MRI sequences of clinical relevance (i.e., double inversion recovery (DIR) and contrast-enhanced T1 (T1-c)), reconstructed from three common MRI sequences. In addition, we perform a multi-rater visual evaluation experiment and find that trained radiologists are unable to distinguish our synthetic DIR images from real ones.