Abstract

Purpose: X-ray imaging is widely used to guide minimally invasive surgeries. Despite ongoing efforts, in particular towards advanced visualization incorporating mixed-reality concepts, correct depth perception from X-ray images remains hampered by their projective nature.

Methods: In this paper, we introduce a radically new concept for predicting depth information from single-view X-ray images. Patient-specific training data, pairing depth with the corresponding X-ray attenuation information, is constructed from readily available preoperative 3D images. The depth model is then learned with a novel label-consistent dictionary learning method that incorporates atlas and spatial prior constraints for efficient reconstruction.

Results: We validated our algorithm on patient data acquired for two anatomical regions (abdomen and thorax). For each of six experimental instances comprising 100 image pairs, 80 pairs were used for training and 20 for testing, and the estimated depth was compared to ground-truth depth values.

Conclusions: We achieved mean squared errors of 4.40 ± 2.04% on the abdomen datasets and 11.47 ± 2.27% on the thorax datasets, and the visual results of the proposed method are very promising. We have thus presented a radically new concept for enhancing depth perception in image-guided interventions.
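To make the attenuation-to-depth mapping concrete, the sketch below shows a generic coupled dictionary learning pipeline: a joint dictionary is learned over stacked (attenuation, depth) patch pairs, and at test time the attenuation half is sparse-coded so the depth half can be decoded from the same coefficients. This is an illustration only, not the paper's label-consistent method; the label-consistency term and the atlas/spatial prior constraints are omitted, and the function names, patch representation, and the use of scikit-learn are assumptions of this sketch.

```python
# Minimal sketch of coupled sparse coding for depth prediction from
# X-ray attenuation patches. Hypothetical simplification: it omits the
# label-consistency term and the atlas/spatial priors of the paper.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder


def train_coupled_dictionary(xray_patches, depth_patches, n_atoms=256, alpha=1.0):
    """Learn a joint dictionary over stacked (attenuation, depth) patches.

    xray_patches, depth_patches: (n_samples, patch_dim) arrays extracted
    from attenuation images and their ground-truth depth maps.
    Returns the two coupled halves (D_xray, D_depth) of the dictionary.
    """
    joint = np.hstack([xray_patches, depth_patches])   # (n, 2 * patch_dim)
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                            transform_algorithm="omp", max_iter=50)
    dl.fit(joint)
    D = dl.components_                                 # (n_atoms, 2 * patch_dim)
    d = xray_patches.shape[1]
    return D[:, :d], D[:, d:]


def predict_depth(D_xray, D_depth, test_xray_patches, n_nonzero=5):
    """Sparse-code attenuation patches on D_xray, decode depth via D_depth."""
    # OMP assumes unit-norm atoms; rescale both halves consistently so the
    # shared codes remain valid for the depth reconstruction.
    norms = np.linalg.norm(D_xray, axis=1, keepdims=True) + 1e-12
    coder = SparseCoder(dictionary=D_xray / norms,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(test_xray_patches)         # (n, n_atoms)
    return codes @ (D_depth / norms)                   # predicted depth patches
```

In this formulation the same sparse code represents a patch in both domains, so depth falls out of the attenuation coding step for free; the paper's label-consistency and prior constraints would additionally shape the codes, which this generic sketch does not attempt.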