The acrobot is a two-link robot actuated only at the joint between the two links. Controlling the acrobot is a difficult task for reinforcement learning (RL) because of its nonlinear dynamics and continuous state and action spaces. In this article, we discuss applying RL to the balancing control of the acrobot. Our RL method has an architecture similar to the actor-critic model: the actor is a controller that yields a control signal for the current state, and the critic predicts the expected future return. Both the actor and the critic are approximated by normalized Gaussian networks, which are trained by an on-line EM algorithm. We also introduce a new method to promote the critic's learning. Our computer simulation shows that our method is able to acquire fairly good control through a small number of learning trials.
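As a rough illustration of the function approximator named above, a normalized Gaussian network evaluates a softmax-normalized mixture of Gaussian basis functions with a linear readout. The sketch below is a minimal assumption-laden version: the centers, inverse variances, and weights are placeholders, and the paper's on-line EM training procedure is not shown.

```python
import numpy as np

def ngnet(x, centers, inv_var, weights):
    """Normalized Gaussian network (sketch): y(x) = sum_i N_i(x) * w_i,
    where the N_i are Gaussian activations normalized to sum to 1."""
    diff = x - centers                                  # (M, d) broadcast
    # Unnormalized Gaussian activations of the M basis units
    a = np.exp(-0.5 * np.sum(diff**2 * inv_var, axis=1))
    n = a / np.sum(a)                                   # normalized activations
    return n @ weights                                  # linear readout

# Hypothetical setup: 16 units over a 2-D state (placeholder parameters,
# not the paper's learned values).
rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(16, 2))
inv_var = np.full((16, 2), 4.0)
weights = rng.normal(size=16)

x = np.array([0.1, -0.3])
v = ngnet(x, centers, inv_var, weights)   # scalar output, e.g. a value estimate
```

Because the activations are normalized, the output is a convex combination of the weights; with all weights equal to one, the network returns exactly one everywhere.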