This work presents a novel shared control architecture for teleoperated contact tasks. We use Learning from Demonstration as a framework to learn a task model that encodes the desired motion, force, and stiffness profiles. The learnt information is then used by a Virtual Fixture (VF) to guide the human operator along a nominal task trajectory that captures the task dynamics, while simultaneously adapting the remote robot's impedance. Furthermore, the haptic guidance is provided in a human-aware manner: inspired by the path and flow control formulations used in the exoskeleton literature [1], [2], we propose a control law that eliminates time dependency and depends only on the operator's current state. The proposed approach is validated in a user study in which we evaluate the effect of guidance during bilateral teleoperation of a drawing task and a wiping task. The experimental results reveal statistically significant improvements in several metrics compared to teleoperation without guidance.
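To make the state-dependent guidance idea concrete, the sketch below illustrates one plausible reading of such a control law: the reference point of the virtual fixture is obtained by projecting the operator's current position onto the learned nominal path, so the guidance force and the scheduled remote-robot stiffness depend only on the human state, not on a clock. This is a minimal illustration, not the paper's implementation; the names `nominal_path`, `learned_stiffness`, and `k_guidance` are assumptions introduced here for clarity.

```python
# Minimal sketch of a state-dependent virtual fixture (illustrative only).
import numpy as np

def vf_guidance(p_human, nominal_path, learned_stiffness, k_guidance=200.0):
    """Compute a haptic guidance force and a remote-robot stiffness.

    p_human           : (3,) current operator (master) position
    nominal_path      : (N, 3) waypoints of the learned nominal trajectory
    learned_stiffness : (N,) stiffness profile learned from demonstrations
    k_guidance        : virtual-fixture spring gain [N/m] (assumed value)
    """
    # Phase is recovered from the operator's state (nearest waypoint),
    # not from elapsed time, so pausing or retracing is handled naturally.
    dists = np.linalg.norm(nominal_path - p_human, axis=1)
    idx = int(np.argmin(dists))

    # Spring-like virtual fixture pulling the operator toward the path.
    f_guidance = k_guidance * (nominal_path[idx] - p_human)

    # Remote-robot impedance is scheduled by the same state-based phase.
    k_remote = learned_stiffness[idx]
    return f_guidance, k_remote

# Toy usage: straight-line nominal path with a varying stiffness profile.
path = np.linspace([0.0, 0.0, 0.0], [0.3, 0.0, 0.0], 100)
stiffness = np.linspace(300.0, 800.0, 100)
force, k = vf_guidance(np.array([0.12, 0.02, 0.0]), path, stiffness)
print(force, k)
```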