Visual observations, such as camera images, are hard to obtain for long-term human motion analysis in unconstrained environments. In this paper, we present a method for human full-body pose tracking and activity recognition from the measurements of a few body-worn inertial orientation sensors. The sensors make our approach insensitive to illumination and occlusions and permit a person to move freely. Since the data provided by inertial sensors is sparse, noisy, and often ambiguous, we use a generative prior model of feasible human poses and movements to constrain the tracking problem. Our model consists of several low-dimensional, activity-specific manifold embeddings that significantly restrict the search space for pose tracking. Using a particle filter, our method continuously explores multiple pose hypotheses in the embedding space. An efficient activity-switching mechanism governs the distribution of particles across the activity-specific manifold embeddings. Selecting the pose hypothesis that best explains the incoming sensor observations allows us to simultaneously classify the activity a person is performing and estimate the full-body pose. We also derive an effective measure of predictive confidence that enables the detection of anomalous movements. Experiments on a multi-person dataset containing several activities show that our method can seamlessly detect activity switches and accurately reconstruct full-body poses from the data of only six wearable inertial sensors.
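To make the tracking loop concrete, the following is a minimal sketch of one predict-update-resample cycle of a particle filter operating in a low-dimensional latent space. The linear `decode` mapping, the 2-D latent dimensionality, the noise parameters, and the use of the maximum particle weight as a rough confidence score are illustrative assumptions for this sketch, not the paper's actual learned manifold model, observation likelihood, or confidence measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for a learned latent-to-observation mapping:
# a fixed linear "decoder" from a 2-D latent space to 6 sensor readings.
# (Hypothetical; the paper uses learned activity-specific embeddings.)
W = rng.normal(size=(6, 2))

def decode(z):
    """Map a latent point z to predicted sensor observations."""
    return W @ z

def particle_filter_step(particles, obs, sigma_q=0.05, sigma_r=0.1):
    """One predict-update-resample cycle in the latent space."""
    n = len(particles)
    # Predict: diffuse particles with Gaussian process noise.
    particles = particles + rng.normal(0.0, sigma_q, particles.shape)
    # Update: weight each particle by how well its decoded pose
    # explains the incoming sensor observation.
    pred = np.array([decode(z) for z in particles])   # shape (n, 6)
    err = np.sum((pred - obs) ** 2, axis=1)
    weights = np.exp(-0.5 * err / sigma_r ** 2)
    weights /= weights.sum()
    # Resample: draw a new particle set proportional to the weights.
    idx = rng.choice(n, size=n, p=weights)
    # The peak weight serves here as a crude proxy for predictive
    # confidence: it drops when no hypothesis explains the observation.
    return particles[idx], weights.max()

# Usage: one filtering step against a synthetic noisy observation.
particles = rng.normal(size=(200, 2))
obs = decode(np.array([0.5, -0.3])) + rng.normal(0.0, 0.1, size=6)
particles, confidence = particle_filter_step(particles, obs)
```

In the full method, each activity has its own embedding and decoder, and the switching mechanism redistributes particles across embeddings; this sketch covers only the single-embedding filtering step.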