Reinforcement learning has become increasingly popular in robotics
for acquiring feedback controllers. Many approaches aim to learn
a controller from scratch, i.e., data-driven without any modeling
of the physical plant. However, stability properties of the closed
loop are often not considered, or are established only a posteriori or
ad hoc. We propose to employ reinforcement learning in the context
of model-based control, which allows learning within a framework of
stabilizing controllers built using only limited prior model knowledge. This
way, the action space is suitably structured for safe learning of
a feedback controller to compensate for uncertainties due to model
mismatch or external disturbances. The resulting scheme is developed
around a decentralized PD feedback controller. Hence, given such
a controller, the proposed method can also augment it with a learning
module for performance enhancement. We demonstrate our approach both
in simulation and in a hardware experiment on a two-degree-of-freedom
robot manipulator.
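The control structure described above can be sketched as follows. This is a minimal illustration under our own assumptions (joint dimensions, gain values, and the placeholder compensation policy are hypothetical, not the paper's actual implementation): a decentralized PD feedback law provides the stabilizing baseline, and an additive learned term compensates for model mismatch and disturbances.

```python
def pd_control(q, qd, q_des, qd_des, kp, kd):
    """Decentralized PD law: each joint uses only its own position
    and velocity error (no coupling between joints)."""
    return [kp[i] * (q_des[i] - q[i]) + kd[i] * (qd_des[i] - qd[i])
            for i in range(len(q))]

def learned_compensation(q, qd):
    """Placeholder for the RL-learned residual torque (here: zeros).
    In the proposed scheme, this term is learned to compensate for
    model mismatch and external disturbances, while the PD baseline
    keeps the closed loop stabilized during learning."""
    return [0.0 for _ in q]

def control(q, qd, q_des, qd_des, kp, kd):
    """Total command: stabilizing PD feedback plus learned correction."""
    u_pd = pd_control(q, qd, q_des, qd_des, kp, kd)
    u_rl = learned_compensation(q, qd)
    return [a + b for a, b in zip(u_pd, u_rl)]

# Example with two joints, matching the two-degree-of-freedom
# manipulator used in the experiments (gains are illustrative).
u = control(q=[0.1, -0.2], qd=[0.0, 0.0],
            q_des=[0.0, 0.0], qd_des=[0.0, 0.0],
            kp=[50.0, 50.0], kd=[5.0, 5.0])
```

Because the learned term enters additively on top of a fixed stabilizing feedback, the action space explored during learning is structured around an already-stable closed loop, which is the safety argument made in the abstract.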