Many robotic applications, such as manipulation and human-robot interaction, require accurate knowledge about the workspace of a manipulator. Further, an abstraction of the capabilities of a robot arm within its workspace, often modeled by so-called capability maps, is important for grasp and task planning. Unfortunately, existing methods to identify the capabilities of a manipulator are time-consuming and data-intensive. This work proposes generating robot capability maps directly in task space by leveraging neural fields trained entirely on synthetically generated data. In numerical experiments, we show that our approach generalizes over various morphologies and produces accurate capability maps within milliseconds.