This article presents a new formulation for model-free robust optimal regulation of continuous-time nonlinear systems. The proposed reinforcement learning-based approach, referred to as incremental adaptive dynamic programming (IADP), uses measured input-state data to design an approximate optimal incremental control strategy that incrementally stabilizes the controlled system under model uncertainties, environmental disturbances, and input saturation. By leveraging the time delay estimation (TDE) technique, we first use sensor data to relax the requirement for a complete dynamics model: input-state data are employed to construct an incremental model that captures the system evolution in incremental form. The resulting incremental dynamics is then used to design the approximate optimal incremental control strategy via adaptive dynamic programming, implemented with a simplified single-critic structure that approximates the value function of the Hamilton–Jacobi–Bellman equation. Furthermore, for the critic neural network, experience data are used to design an off-policy weight update law with guaranteed weight convergence. Importantly, a term related to the TDE error bound is incorporated into the cost function, so that the TDE error introduced by the estimation is attenuated during the optimization process. Proofs of system stability and weight convergence are provided. Numerical simulations validate the effectiveness and superiority of the proposed IADP, particularly in terms of reduced control energy expenditure and enhanced robustness.
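To make the incremental-dynamics idea concrete, the following is a minimal sketch, not the article's exact formulation: it assumes control-affine dynamics $\dot{x} = f(x) + g(x)u + d$, a small sampling interval $\Delta t$, and a control-effectiveness estimate $\bar{g}$, all of which are illustrative symbols introduced here rather than taken from the abstract.

```latex
% Hedged sketch of a TDE-based incremental model (assumed form, not the
% article's equations): the unknown drift f(x) and disturbance d are lumped
% into the delayed state derivative, which is obtained from measurements.
\begin{aligned}
  \dot{x}(t) &= f(x(t)) + g(x(t))\,u(t) + d(t) \\
             &\approx \underbrace{\dot{x}(t-\Delta t)}_{\text{measured via TDE}}
                + \bar{g}\,\underbrace{\bigl[u(t) - u(t-\Delta t)\bigr]}_{\Delta u(t)}
                + \varepsilon(t),
\end{aligned}
```

Here $\varepsilon(t)$ collects the neglected higher-order terms, i.e., the TDE error. Under this kind of construction, only input-state data and the estimate $\bar{g}$ are required rather than a full model of $f(x)$, and the bound on $\varepsilon(t)$ is what motivates adding a TDE error bound related term to the cost function.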