Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning

Tetsuro Morimura, Eiji Uchibe, Junichiro Yoshimoto, Jan Peters, Kenji Doya

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. This term involves the derivative of the stationary state distribution, which corresponds to the sensitivity of that distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate γ for the value functions close to 1, these algorithms do not permit γ to be set exactly to 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD), a useful form of the derivative of the stationary state distribution, through a backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with the LSD is also proposed, in which the average reward gradient can be estimated by setting γ = 0, so that learning the value functions becomes unnecessary. We also test the performance of the proposed algorithms on simple benchmark tasks and show that they can improve the performance of existing PG methods.
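As a minimal sketch of the idea behind this decomposition (using notation not given in the abstract: η(θ) for the average reward, d_θ for the stationary state distribution, π_θ for the parameterized policy, and r(s, a) for the expected immediate reward), the average reward gradient can be rewritten so that the LSD term ∇_θ log d_θ(s) appears alongside the usual log policy derivative, with only the immediate reward and no value function:

```latex
% Average reward under policy \pi_\theta, with d_\theta the stationary state distribution:
%   \eta(\theta) = \sum_{s} d_\theta(s) \sum_{a} \pi_\theta(a \mid s)\, r(s, a).
% Differentiate, then use \nabla f = f\, \nabla \log f for both d_\theta and \pi_\theta:
\begin{align*}
\nabla_\theta \eta(\theta)
  &= \sum_{s} \nabla_\theta d_\theta(s) \sum_{a} \pi_\theta(a \mid s)\, r(s, a)
   + \sum_{s} d_\theta(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, r(s, a) \\
  &= \mathbb{E}_{s \sim d_\theta,\; a \sim \pi_\theta(\cdot \mid s)}
     \Big[ \big( \nabla_\theta \log d_\theta(s)
               + \nabla_\theta \log \pi_\theta(a \mid s) \big)\, r(s, a) \Big].
\end{align*}
```

In this form, a sample-based estimate needs only an LSD estimate and the observed immediate rewards, which is consistent with the abstract's statement that the gradient can be estimated with γ = 0, without learning value functions.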

Original language: English
Pages (from-to): 342-376
Number of pages: 35
Journal: Neural Computation
Volume: 22
Issue number: 2
DOI
Publication status: Published - 02-2010
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience
