Experiments and observations; simulations
Data type:
Video recordings / audiovisual collection
Description:
In this video we demonstrate fully autonomous multi-contact locomotion for our humanoid robot LOLA. In contrast to our previous multi-contact videos, in which contact points for the feet and hands had to be specified manually by the user, this time all contacts are planned autonomously by the robot itself based on the perceived environment. The only user input is the desired final goal pose (a horizontal position and a rotation around the vertical axis). The robot then automatically computes a feasible contact sequence (if one exists) and connects the discrete poses with kinematically and dynamically feasible trajectories while accounting for multi-contact effects (external forces applied at the robot's hands).
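As a rough illustration of the planning pipeline described above (goal pose in, contact sequence out), the following is a minimal toy sketch. All names (`GoalPose`, `Contact`, `plan_contact_sequence`) and the straight-line footstep placement are hypothetical simplifications and not LOLA's actual planner, which searches the perceived environment and may also schedule hand contacts:

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class GoalPose:
    """Hypothetical goal input: horizontal position and yaw, as in the video."""
    x: float
    y: float
    yaw: float

@dataclass
class Contact:
    """Hypothetical discrete contact: which end effector, placed where."""
    effector: str  # e.g. "left_foot", "right_foot"
    x: float
    y: float

def plan_contact_sequence(start: GoalPose, goal: GoalPose,
                          step_length: float = 0.3) -> List[Contact]:
    # Toy stand-in: place alternating footsteps on a straight line from
    # start to goal. The real planner works on the perceived 3D scene.
    dx, dy = goal.x - start.x, goal.y - start.y
    dist = math.hypot(dx, dy)
    n = max(1, math.ceil(dist / step_length))
    contacts = []
    for i in range(1, n + 1):
        t = i / n
        foot = "left_foot" if i % 2 else "right_foot"
        contacts.append(Contact(foot, start.x + t * dx, start.y + t * dy))
    return contacts

seq = plan_contact_sequence(GoalPose(0.0, 0.0, 0.0), GoalPose(1.0, 0.0, 0.0))
print(len(seq), seq[-1].effector)  # 4 steps, last contact at the goal
```

In the real system, each discrete contact would then be connected by kinematically and dynamically feasible whole-body trajectories rather than straight-line interpolation.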
Through these experiments we demonstrate the coupling of LOLA's new computer vision (Chair for Computer Aided Medical Procedures & Augmented Reality, TUM) and walking pattern generation (Chair of Applied Mechanics, TUM) systems. All algorithms run onboard and in real time. The scene is not known to the robot in advance; it must perceive it on its own.
This work is supported by the German Research Foundation (DFG, project number 407378162).