We propose an approach for 3D reconstruction and tracking of dynamic surfaces using a single depth sensor, without any prior knowledge of the scene. It is robust to rapid inter-frame motion thanks to its probabilistic expectation-maximization non-rigid registration framework. Our pipeline subdivides each input depth image into non-rigidly connected surface patches and deforms them towards the canonical pose by estimating a rigid transformation for each patch. The combination of a data term imposing similarity between model and data, and a regularizer enforcing as-rigid-as-possible motion of neighboring patches, ensures that we can handle large deformations while coping with sensor noise. We employ a surfel-based fusion technique, which lets us circumvent the repeated conversion between mesh and signed distance field representations required by related techniques. Furthermore, a robust keyframe-based scheme allows us to keep track of correspondences throughout the entire sequence. Through a variety of qualitative and quantitative experiments, we demonstrate that our method is more resistant to large motion and achieves lower reconstruction errors than related approaches.
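As a rough illustrative sketch (the symbols below are ours, not taken from the paper), the described registration objective can be written as a data term plus an as-rigid-as-possible coupling between neighboring patches: each patch $k$ carries a rigid transform $T_k = (R_k, t_k)$, and one minimizes

$$E(\{T_k\}) \;=\; \sum_{k} \sum_{p \in \mathcal{P}_k} w_p \,\big\| T_k\, p - \hat{p} \big\|^2 \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \sum_{p \in \mathcal{P}_i \cap \mathcal{B}_j} \big\| T_i\, p - T_j\, p \big\|^2,$$

where $\mathcal{P}_k$ is the set of points in patch $k$, $\hat{p}$ is the corresponding model point, $\mathcal{N}$ is the patch adjacency graph, $\mathcal{B}_j$ is the boundary region shared with patch $j$, and $\lambda$ balances data fidelity against rigidity. In the expectation-maximization framework, the soft correspondence weights $w_p$ would be re-estimated in the E-step and the patch transforms $\{T_k\}$ in the M-step.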