Purpose Accurate and robust estimates of camera position and orientation in a bronchoscope are required for navigation. Fusion of pre-interventional information (e.g., CT, MRI, or US) and intra-interventional information (e.g., bronchoscopic video) was incorporated into a navigation system to provide physicians with an augmented reality environment for bronchoscopic interventions.

Methods Two approaches were used to predict bronchoscope movements by incorporating sequential Monte Carlo (SMC) simulation into (1) image-based tracking techniques and (2) electromagnetic tracking (EMT) methods. SMC simulation was introduced to model ambiguities or uncertainties that occur in image- and EMT-based bronchoscope tracking. Scale invariant feature transform (SIFT) features were employed to overcome limitations of image-based motion tracking methods. Validation was performed on five phantom and ten human case datasets acquired in the supine position.

Results In the dynamic phantom validation, the EMT-SMC simulation method improved tracking performance, increasing the proportion of successfully registered bronchoscopic video frames by 12.7% compared with a hybrid-based method. Compared with ground truth, the accuracy of the EMT-SMC simulation method was 1.51 mm (positional error) and 5.44 degrees (orientation error). In the patient assessment, the SIFT-SMC simulation scheme was more stable and robust than our previous image-based approach to bronchoscope motion estimation, improving the proportion of successfully tracked frames by 23.6%. Compared with ground truth, the position and orientation errors of our method were 3.72 mm and 10.2 degrees, while those of our previous image-based method were at least 7.77 mm and 19.3 degrees. The computational times of our EMT- and SIFT-SMC simulation methods were 0.9 and 1.2 seconds per frame, respectively.

Conclusion The SMC simulation method was developed to model ambiguities that occur in bronchoscope tracking. It predicts the bronchoscope camera position and orientation parameters more stably and accurately, reducing uncertainties due to problematic bronchoscopic video frames and airway deformation during intra-bronchoscopy navigation.
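Although the abstract gives no implementation details, the kind of SMC (particle-filter) update it describes can be sketched as follows. The 6-DoF state layout, the random-walk motion model, the noise levels, and the Gaussian EMT likelihood are illustrative assumptions rather than the paper's formulation; in the SIFT-based variant an image-similarity term would replace or complement the EMT likelihood.

```python
# Minimal sketch of one sequential Monte Carlo (particle-filter) step for
# bronchoscope pose tracking: predict pose hypotheses, re-weight them against
# a measurement, then resample. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 500                 # number of particles (pose hypotheses)
STATE_DIM = 6           # camera pose: x, y, z (mm) + roll, pitch, yaw (deg)

# Initialise particles around an assumed known initial pose.
initial_pose = np.zeros(STATE_DIM)
particles = initial_pose + rng.normal(scale=1.0, size=(N, STATE_DIM))
weights = np.full(N, 1.0 / N)

def predict(particles, pos_sd=1.0, rot_sd=2.0):
    """Propagate each hypothesis with a random-walk motion model (assumed)."""
    noise = np.concatenate(
        [rng.normal(scale=pos_sd, size=(len(particles), 3)),
         rng.normal(scale=rot_sd, size=(len(particles), 3))], axis=1)
    return particles + noise

def update(particles, weights, emt_pose, pos_sd=1.5, rot_sd=5.0):
    """Re-weight particles by agreement with an EMT pose measurement.

    A Gaussian likelihood on positional and orientation residuals stands in
    for the paper's combined EMT/image similarity measure.
    """
    pos_err = np.linalg.norm(particles[:, :3] - emt_pose[:3], axis=1)
    rot_err = np.linalg.norm(particles[:, 3:] - emt_pose[3:], axis=1)
    likelihood = np.exp(-0.5 * (pos_err / pos_sd) ** 2
                        - 0.5 * (rot_err / rot_sd) ** 2)
    weights = weights * likelihood
    return weights / (weights.sum() + 1e-300)

def resample(particles, weights):
    """Systematic resampling to concentrate particles on likely poses."""
    positions = (np.arange(N) + rng.random()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    return particles[idx].copy(), np.full(N, 1.0 / N)

# One filtering step for a synthetic EMT measurement of the current pose.
emt_measurement = np.array([2.0, -1.0, 0.5, 1.0, 0.0, -2.0])
particles = predict(particles)
weights = update(particles, weights, emt_measurement)
particles, weights = resample(particles, weights)
pose_estimate = weights @ particles   # weighted mean pose
print("Estimated pose:", np.round(pose_estimate, 2))
```

In practice the measurement model, not the resampling machinery, is what distinguishes the EMT-SMC and SIFT-SMC variants described above; the skeleton of the filter stays the same.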