Abstract
In this article, we propose a new reinforcement learning (RL) method for a system having continuous state and action spaces. Our RL method has an architecture like the actor-critic model. The critic tries to approximate the Q-function, which is the expected future return for the current state-action pair. The actor tries to approximate a stochastic soft-max policy defined by the Q-function. The soft-max policy is more likely to select an action that has a higher Q-function value. The on-line EM algorithm is used to train the critic and the actor. We apply this method to two control problems. Computer simulations show that our method is able to acquire fairly good control in the two tasks after a few learning trials.
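The soft-max policy described in the abstract selects an action with probability proportional to the exponentiated Q-value. A minimal sketch for a discrete set of candidate actions is below (the paper itself handles continuous action spaces via function approximation; the inverse-temperature parameter `beta` and the function names are illustrative assumptions, not the authors' notation):

```python
import math
import random

def softmax_policy(q_values, beta=1.0):
    """Action probabilities proportional to exp(beta * Q(s, a)).

    A larger beta concentrates probability on high-Q actions;
    beta -> 0 approaches a uniform (purely exploratory) policy.
    """
    m = max(q_values)  # subtract the max for numerical stability
    exps = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

def sample_action(q_values, beta=1.0):
    """Sample an action index according to the soft-max policy."""
    probs = softmax_policy(q_values, beta)
    return random.choices(range(len(q_values)), weights=probs)[0]
```

As stated in the abstract, this policy is more likely to select an action with a higher Q-value, while still assigning nonzero probability to every action, which provides exploration during learning.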
| Original language | English |
|---|---|
| Pages | 163-168 |
| Number of pages | 6 |
| Publication status | Published - 2000 |
| Externally published | Yes |
| Event | International Joint Conference on Neural Networks (IJCNN'2000) - Como, Italy. Duration: 24-07-2000 → 27-07-2000 |
Conference
| Conference | International Joint Conference on Neural Networks (IJCNN'2000) |
|---|---|
| City | Como, Italy |
| Period | 24-07-00 → 27-07-00 |
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence
Fingerprint
Dive into the research topics of "On-line EM reinforcement learning". Together they form a unique fingerprint.