Robot-assisted minimally invasive surgery requires accurate segmentation of surgical instruments in order to guide surgical robots in tracking the target instruments. Nevertheless, surgical-instrument semantic segmentation in unknown scenes remains difficult when intra-scene surgical data are extremely scarce, despite the progress made on general semantic segmentation tasks. To address this issue, this paper proposes a cross-scene semantic segmentation approach for surgical instruments based on structural-similarity partial activation networks. The proposed approach comprises a main branch for multi-level feature extraction, a segmentation head for global consistency, and a structural-similarity-based loss function that promotes high-level information acquisition, which together improve generalisation in the cross-scene segmentation task. Experimental results on cross-scene surgical-instrument semantic segmentation, using the newly established endoscopic simulation dataset, demonstrate the effectiveness of the proposed approach compared with state-of-the-art semantic segmentation methods.
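To make the role of the structural-similarity-based loss concrete, the following is a minimal sketch, not the authors' implementation: it combines a standard cross-entropy term with a structural-similarity (SSIM) term computed between the predicted class-probability maps and the one-hot ground truth. The window size, Gaussian kernel, and the weighting factor `alpha` are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_window(size: int = 11, sigma: float = 1.5) -> torch.Tensor:
    # 2D Gaussian kernel used as the local SSIM window (shape: 1 x 1 x size x size).
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    return (g.t() @ g).unsqueeze(0).unsqueeze(0)

def ssim_map(x: torch.Tensor, y: torch.Tensor, window: torch.Tensor,
             c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    # x, y: (N, C, H, W) maps in [0, 1]; SSIM is computed per channel with a shared window.
    channels = x.shape[1]
    w = window.expand(channels, 1, -1, -1).to(x.device)
    pad = window.shape[-1] // 2
    mu_x = F.conv2d(x, w, padding=pad, groups=channels)
    mu_y = F.conv2d(y, w, padding=pad, groups=channels)
    sigma_x = F.conv2d(x * x, w, padding=pad, groups=channels) - mu_x ** 2
    sigma_y = F.conv2d(y * y, w, padding=pad, groups=channels) - mu_y ** 2
    sigma_xy = F.conv2d(x * y, w, padding=pad, groups=channels) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return num / den

def segmentation_loss(logits: torch.Tensor, target: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    # logits: (N, C, H, W) raw network outputs; target: (N, H, W) integer class labels.
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    ssim = ssim_map(probs, one_hot, gaussian_window()).mean()
    # alpha trades off pixel-wise accuracy (cross-entropy) against structural agreement (SSIM).
    return alpha * ce + (1 - alpha) * (1 - ssim)
```

The structural term penalises differences in local mean, contrast, and correlation between the prediction and the label map, which is one plausible way to encourage the global consistency the approach targets.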