Recently, it has been shown that CycleGANs are masters of steganography. Not only can they learn reliable mappings between two distributions without requiring paired training data, but they can also hide information unseen during training in their mapping results, from which the input data can be recovered almost perfectly. When this behavior is prevented during training, CycleGANs map samples considerably closer to the training distribution. Here, we propose to leverage this effect in the context of unsupervised anomaly detection, a trending research direction that primarily relies on modeling healthy anatomy with generative models. To this end, we embed anomaly detection into a CycleGAN-based style-transfer framework that is trained to translate healthy brain MR images to a simulated distribution with lower entropy and vice versa. By filtering high-frequency, low-amplitude signals from the lower-entropy samples during training, the resulting model suppresses anomalies in reconstructions of the input data at test time. Similar to autoencoder- and GAN-based anomaly detection methods, this allows us to delineate pathologies directly from residuals between input and reconstruction. Various ablation studies and comparisons to state-of-the-art methods highlight the potential of our method.
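To illustrate the residual-based delineation step mentioned above, the following is a minimal sketch in Python; all function names, parameters, and the thresholding strategy are illustrative assumptions and not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anomaly_map(input_img: np.ndarray, reconstruction: np.ndarray,
                smoothing_sigma: float = 1.0) -> np.ndarray:
    """Pixel-wise residual between an input MR slice and its reconstruction.

    Assumes `reconstruction` comes from a (hypothetical) trained model that
    suppresses anomalies, so pathologies appear as large residual values.
    """
    residual = np.abs(input_img.astype(np.float32)
                      - reconstruction.astype(np.float32))
    # Light smoothing to reduce single-pixel noise in the residual map.
    return gaussian_filter(residual, sigma=smoothing_sigma)

def segment_anomalies(residual: np.ndarray, threshold: float) -> np.ndarray:
    """Binary pathology mask via simple thresholding of the residual map.

    The threshold would typically be tuned on a validation set; any value
    passed here is purely illustrative.
    """
    return residual > threshold
```

In practice, such a residual map would be computed per slice or per volume and the threshold chosen to trade off sensitivity against false positives, for example via the validation Dice score.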