Task and motion planning deals with complex tasks that require a robot to automatically define and execute multi-step sequences of actions in cluttered scenarios. In this context, a straight-line motion is often insufficient for approaching a target object, since the gripper may collide with surrounding objects or with the target itself. Thus, motion planners should be able to generate collision-free trajectories for every particular obstacle configuration in order to ground the symbolic actions of the task plan. Current approaches either search for feasible motions offline, using computationally expensive trial-and-error processes in physically realistic simulations, or learn a set of motion parameters for particular object configuration spaces with little generalization. This work proposes an appealing alternative that efficiently generates trajectories for the collision-free execution of symbolic actions in variable scenarios, without the need for intensive offline simulations. Our approach combines the benefits of learning from demonstration, to quickly generate an initial set of motion parameters for each symbolic action, with policy improvement with path integrals, to diversify this initial set of parameters to cope with different obstacle configurations. We show that this improved flexibility is achieved after only a few minutes of training and that it successfully solves tasks requiring different sequences of picking and placing actions under variable obstacle configurations.
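The parameter-diversification step named above, policy improvement with path integrals (PI²), admits a compact illustration. The following is a minimal sketch of one PI²-style update, not the authors' implementation: the dimensions, noise level, and cost function are hypothetical placeholders, whereas the actual method would parameterize each symbolic action's trajectory from demonstration and evaluate collision costs on rollouts.

```python
# Minimal PI^2-style parameter update sketch (hypothetical cost function
# and dimensions; illustrative only, not the paper's implementation).
import numpy as np

def pi2_update(theta, cost_fn, n_rollouts=10, noise_std=0.1, lam=1.0):
    """One PI^2 iteration: perturb the parameters, weight each rollout by
    its exponentiated negative cost, and average the perturbations."""
    eps = noise_std * np.random.randn(n_rollouts, theta.size)  # exploration noise
    costs = np.array([cost_fn(theta + e) for e in eps])        # rollout costs
    # Soft-max weighting: low-cost rollouts dominate the update.
    s = costs - costs.min()
    w = np.exp(-s / lam)
    w /= w.sum()
    return theta + w @ eps  # weighted average of the perturbations

# Toy usage: drive 5 motion parameters toward a hypothetical target vector,
# standing in for a trajectory that avoids a given obstacle configuration.
target = np.linspace(0.0, 1.0, 5)
cost = lambda th: float(np.sum((th - target) ** 2))
theta = np.zeros(5)          # e.g., parameters of a demonstrated motion
for _ in range(100):
    theta = pi2_update(theta, cost)
print(theta)                 # approaches `target` as iterations proceed
```

In this reading, the demonstration supplies the initial `theta`, so the stochastic search only needs to explore locally around an already reasonable motion, which is consistent with the short training times reported above.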