This work proposes a self-supervised algorithm for segmenting arbitrary anatomical structures in 3D medical images produced under varying acquisition conditions, addressing domain shift and generalizability. Furthermore, we advocate an interactive setting at inference time, where a self-supervised model trained on unlabeled volumes is directly applicable to any test volume given a single user-provided slice annotation. To this end, we learn a novel 3D registration network, Vol2Flow, which approaches the problem as image sequence registration and jointly finds 2D displacement fields between all adjacent slices within a 3D medical volume. Specifically, we present a 3D CNN-based architecture that produces a series of registration flows between consecutive slices of a whole volume, yielding a dense displacement field, and we propose a new self-supervised objective to learn these transformations over the sequence of 2D slices without annotations. Consequently, at inference time, the user-provided single-slice annotation can be gradually propagated to the remaining slices of the volume. Through experiments on several medical image segmentation datasets, we demonstrate that our model substantially outperforms related methods. Code is available at https://github.com/AdelehBitarafan/Vol2Flow.
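To make the propagation step concrete, below is a minimal NumPy/SciPy sketch of how a single-slice annotation could be pushed through a chain of per-slice 2D displacement fields once a network such as Vol2Flow has predicted them. The function name `propagate_annotation`, the `(2, H, W)` field layout, and the backward-warping convention are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_annotation(fwd_flows, bwd_flows, seed_mask, seed_idx):
    """Propagate a binary single-slice annotation through a volume.

    Assumed conventions (for illustration only):
      fwd_flows[i]: (2, H, W) field used to transfer the mask from slice i
                    to slice i+1 (maps coordinates of slice i+1 into slice i
                    for backward warping).
      bwd_flows[i]: (2, H, W) field used to transfer the mask from slice i+1
                    back to slice i.
    """
    depth = len(fwd_flows) + 1
    h, w = seed_mask.shape
    masks = np.zeros((depth, h, w), dtype=np.uint8)
    masks[seed_idx] = seed_mask
    grid = np.mgrid[0:h, 0:w].astype(np.float32)  # base sampling grid, (2, H, W)

    def warp(mask, flow):
        # Backward warping: sample the source mask at grid + displacement.
        coords = grid + flow
        warped = map_coordinates(mask.astype(np.float32), coords,
                                 order=1, mode="nearest")
        return (warped > 0.5).astype(np.uint8)  # re-binarize after interpolation

    # Propagate forward from the annotated slice toward the last slice ...
    for i in range(seed_idx, depth - 1):
        masks[i + 1] = warp(masks[i], fwd_flows[i])
    # ... and backward toward the first slice.
    for i in range(seed_idx, 0, -1):
        masks[i - 1] = warp(masks[i], bwd_flows[i - 1])
    return masks
```

Thresholding after bilinear interpolation keeps the propagated mask crisp; a soft (probabilistic) mask or nearest-neighbor warping would be equally valid design choices here.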