
Reinforcement learning analysis for a minimum time balance problem


Transactions of the Institute of Measurement and Control

Abstract

Reinforcement learning was developed to solve complex learning control problems where only a minimal amount of a priori knowledge about the system dynamics exists. It has also been used as a model of cognitive learning in humans and applied to systems such as pole balancing and humanoid robots to study embodied cognition. However, closed-form analysis of value function learning for higher-order, unstable test problem dynamics has rarely been considered. In this paper, firstly, a second-order, unstable balance test problem is used to investigate issues associated with value function parameter convergence and its rate of convergence. In particular, the convergence of the minimum time value function is analysed under the assumption that the minimum time optimal control policy is known. It is shown that the temporal difference (TD) error introduces a null space associated with the experiment termination basis function during simulation. Because this effect arises from termination, or from any kind of switching in the control signal, the null space also appears in the TD error for more general higher-order systems. Secondly, the rate of parameter convergence is analysed, and it is shown that the residual gradient algorithm converges faster than TD(0) for this particular test problem. Thirdly, the impact of the finite horizon on both value function and control policy learning is analysed for the case of an unknown control policy with added random exploration noise.
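
For readers unfamiliar with the two update rules being compared, the following is a minimal sketch of TD(0) and residual gradient parameter updates in their standard linear function-approximation form. The feature vectors, step size and reward used here are illustrative assumptions only and are not drawn from the paper's formulation of the balance problem.

```python
import numpy as np

def td0_update(theta, phi_t, phi_next, reward, alpha=0.1, gamma=1.0):
    """One TD(0) step for a linear value function V(x) = theta . phi(x)."""
    delta = reward + gamma * theta @ phi_next - theta @ phi_t  # TD error
    return theta + alpha * delta * phi_t

def residual_gradient_update(theta, phi_t, phi_next, reward, alpha=0.1, gamma=1.0):
    """One residual gradient step: gradient descent on the squared TD error."""
    delta = reward + gamma * theta @ phi_next - theta @ phi_t  # same TD error
    return theta - alpha * delta * (gamma * phi_next - phi_t)

# Hypothetical usage with arbitrary basis-function values
theta = np.zeros(3)
phi_t = np.array([1.0, 0.5, 0.0])
phi_next = np.array([0.8, 0.6, 0.1])
theta = td0_update(theta, phi_t, phi_next, reward=-1.0)
```

The sketch only illustrates the structural difference between the two rules: TD(0) follows the current-state features alone, while the residual gradient rule descends the true gradient of the squared TD error, which is the property underlying the convergence comparison reported in the abstract.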