We present a novel framework for the segmentation of multiple organs in 3D abdominal CT images that does not require registration with an atlas. Instead, we use discriminative classifiers trained on an array of 3D volumetric features, which implicitly model the appearance of the organs of interest. To fully leverage all the available data, we extract the features from within supervoxels at multiple levels of detail. In parallel, we employ a hierarchical auto-context classification scheme, where the classifier trained at each level is applied back onto the image to provide additional features for the next level. The final segmentation is obtained using a hierarchical conditional random field fusion step. We have tested our approach on 20 contrast-enhanced CT images of 8 organs from the VISCERAL dataset and obtained results comparable to state-of-the-art methods that require very costly registration steps and a much larger corpus of training data. Our method is accurate, fast, and general enough to be applied to a variety of realistic clinical applications and to any number of organs.
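To make the auto-context scheme concrete, the following is a minimal sketch of the training loop, not the paper's implementation: it assumes a random forest as the discriminative classifier and precomputed per-supervoxel feature matrices `X_levels` with labels `y`; feature extraction and the CRF fusion step are not shown.

```python
# Hedged sketch of hierarchical auto-context training.
# Assumptions (not from the paper): scikit-learn RandomForestClassifier,
# X_levels[l] = supervoxel feature matrix at level l, y = organ labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def autocontext_train(X_levels, y):
    """Train one classifier per level; each level's class-probability
    output is appended as extra "context" features for the next level."""
    classifiers = []
    context = None
    for X in X_levels:
        if context is not None:
            X = np.hstack([X, context])        # add auto-context features
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(X, y)
        context = clf.predict_proba(X)         # probabilities fed forward
        classifiers.append(clf)
    return classifiers
```

At test time the same cascade would be applied, with each classifier's probability maps concatenated to the next level's features before the final fusion step.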