Abstract
In this article, we propose a new reinforcement learning (RL) method for systems with continuous state and action spaces. Our RL method has an architecture similar to the actor-critic model. The critic approximates the Q-function, i.e., the expected future return for the current state-action pair. The actor approximates a stochastic soft-max policy defined by the Q-function; this policy is more likely to select actions with higher Q-function values. Both the critic and the actor are trained by the on-line EM algorithm. We apply this method to two control problems, and computer simulations show that it acquires fairly good control in both tasks after a few learning trials.
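A soft-max (Boltzmann) policy selects an action with probability proportional to the exponentiated Q-value, so actions with higher Q-values are chosen more often while exploration is retained. The following is a minimal sketch of this idea, assuming a discretized action set, a generic array of Q-values, and a temperature parameter; these names and the discretization are illustrative assumptions, not the paper's implementation (which treats continuous actions).

```python
import numpy as np

def softmax_policy(q_values, temperature=1.0):
    """Boltzmann (soft-max) action probabilities from Q-values.

    Actions with higher Q-values receive higher selection
    probability; the temperature controls exploration
    (high temperature -> near-uniform, low -> near-greedy).
    """
    # Subtract the max before exponentiating for numerical stability.
    z = (q_values - np.max(q_values)) / temperature
    p = np.exp(z)
    return p / p.sum()

# Illustrative use with hypothetical Q-values for three actions.
q = np.array([0.2, 1.5, 0.7])
probs = softmax_policy(q, temperature=0.5)
action = np.random.choice(len(q), p=probs)
```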
Original language | English |
---|---|
Pages | 163-168 |
Number of pages | 6 |
Publication status | Published - 2000 |
Externally published | Yes |
Event | International Joint Conference on Neural Networks (IJCNN'2000), Como, Italy. Duration: 24-07-2000 → 27-07-2000 |
Conference
Conference | International Joint Conference on Neural Networks (IJCNN'2000) |
---|---|
City | Como, Italy |
Period | 24-07-2000 → 27-07-2000 |
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence