Model-based action planning involves cortico-cerebellar and basal ganglia networks

Alan S.R. Fermin, Takehiko Yoshida, Junichiro Yoshimoto, Makoto Ito, Saori C. Tanaka, Kenji Doya

Research output: Contribution to journal › Article › peer-review

35 Citations (Scopus)


Humans can select actions by learning, planning, or retrieving motor memories. Reinforcement Learning (RL) associates these processes with three major classes of strategies for action selection: exploratory RL learns state-action values by exploration, model-based RL uses internal models to simulate future states reached by hypothetical actions, and motor-memory RL reuses past successful state-action mappings. To investigate the neural substrates that implement these strategies, we conducted a functional magnetic resonance imaging (fMRI) experiment while humans performed a sequential action selection task under conditions that promoted the use of a specific RL strategy. The ventromedial prefrontal cortex and ventral striatum increased activity in the exploratory condition; the dorsolateral prefrontal cortex, dorsomedial striatum, and lateral cerebellum in the model-based condition; and the supplementary motor area, putamen, and anterior cerebellum in the motor-memory condition. These findings suggest that a distinct prefrontal-basal ganglia and cerebellar network implements the model-based RL action selection strategy.
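The distinction between the strategy classes above can be sketched in code. The following is a minimal illustrative example (hypothetical, not the paper's actual task or algorithm): a model-free chooser ranks actions by learned values alone, while a model-based chooser uses an internal model of state transitions to simulate the outcome of each hypothetical action before choosing. The toy MDP, state names, and function names are all assumptions introduced for illustration.

```python
# Hypothetical toy example contrasting model-free and model-based
# action selection; not the experimental task used in the study.

# A tiny deterministic internal model: states 0..2, actions 'L'/'R'.
transitions = {
    (0, 'L'): 0, (0, 'R'): 1,
    (1, 'L'): 0, (1, 'R'): 2,
    (2, 'L'): 1, (2, 'R'): 2,
}
rewards = {2: 1.0}  # reaching state 2 yields reward


def model_free_choice(q_values, state):
    """Exploratory/model-free RL: pick the action with the highest
    learned state-action value, with no simulation of outcomes."""
    return max(('L', 'R'), key=lambda a: q_values.get((state, a), 0.0))


def model_based_choice(state, state_values):
    """Model-based RL: use the internal model to simulate the next
    state reached by each hypothetical action, then choose the action
    whose simulated outcome looks best."""
    def simulated_outcome(action):
        next_state = transitions[(state, action)]  # internal simulation
        return rewards.get(next_state, 0.0) + state_values.get(next_state, 0.0)
    return max(('L', 'R'), key=simulated_outcome)


# From state 1, the model-based planner simulates that 'R' reaches the
# rewarded state 2, so it plans rightward.
print(model_based_choice(1, {2: 1.0}))  # 'R'
```

A motor-memory strategy, by contrast, would bypass both the value comparison and the simulation, simply replaying the stored state-action mapping that succeeded before.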

Original language: English
Article number: 31378
Journal: Scientific Reports
Publication status: Published - 19-08-2016
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General


