Recent technological advances have led to the increasing popularity of 3D gesture-based interfaces, in particular in gaming and entertainment consoles. However, unlike 2D gestures, which have been successfully employed in many multi-touch devices, 3D gesture-based interfaces are not easy to develop. Reasons include the complexity of capturing human movements in 3D and the difficulties of recognizing gestures from human motion data. In this work, we target the latter problem by proposing a novel gesture recognition technique for skeletal input data that simultaneously supports categorical and spatio-temporal gestures; in other words, it recognizes the gesture type and the relative pose within a gesture at the same time. Moreover, our method learns the gestures most appropriate for a user from examples. To avoid the need for user-specific training, we further propose and evaluate several feature representations for human pose data. We discuss how our approach facilitates the development of customizable 3D gesture-based interfaces and explore how the proposed recognizer can be smoothly integrated into existing component-based user interface frameworks. Besides a quantitative evaluation, we present a user study of a 3D gesture-based interface for an intra-operative medical image viewer. Our studies support the practical applicability of our method for developing 3D gesture-based interfaces.