Control of exploitation-exploration meta-parameter in reinforcement learning

Shin Ishii, Wako Yoshida, Junichiro Yoshimoto

Research output: Article › peer-review

132 Citations (Scopus)

Abstract

In reinforcement learning (RL), the trade-off between exploitation and exploration has long been an important issue. This paper presents a new method that controls the balance between exploitation and exploration. Our learning scheme is based on model-based RL, in which Bayesian inference with a forgetting effect estimates the state-transition probability of the environment. The balance parameter, which corresponds to the randomness in action selection, is controlled based on the variation of action results and the perception of environmental change. When applied to maze tasks, our method successfully achieves good control by adapting to environmental changes. Recently, Usher et al. [Science 283 (1999) 549] have suggested that noradrenergic neurons in the locus coeruleus may control the exploitation-exploration balance in a real brain, and that this balance may correspond to the level of the animal's selective attention. Following this scenario, we also discuss a possible implementation in the brain.
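The "balance parameter" in the abstract corresponds to an inverse-temperature meta-parameter in softmax (Boltzmann) action selection. As a minimal illustrative sketch (not the paper's exact control scheme, and the function name is our own), the effect of this parameter on action randomness can be shown as:

```python
import math
import random

def softmax_policy(q_values, beta):
    """Boltzmann (softmax) action selection.

    beta is the inverse-temperature meta-parameter: a large beta makes
    the policy nearly greedy (exploitation), while beta -> 0 makes it
    uniform over actions (exploration).
    """
    # Subtract the max for numerical stability before exponentiating.
    m = max(q_values)
    exps = [math.exp(beta * (q - m)) for q in q_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample an action from the resulting distribution.
    r = random.random()
    cum = 0.0
    for action, p in enumerate(probs):
        cum += p
        if r < cum:
            return action, probs
    return len(probs) - 1, probs
```

In the paper's scheme, beta itself is adapted online, raised or lowered according to the variation of action results and the perceived change of the environment, rather than being fixed by hand.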

Original language: English
Pages (from-to): 665-687
Number of pages: 23
Journal: Neural Networks
Volume: 15
Issue number: 4-6
DOI
Publication status: Published - 2002
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Artificial Intelligence

