TY - JOUR
T1 - A spiking neural network model of model-free reinforcement learning with high-dimensional sensory input and perceptual ambiguity
AU - Nakano, Takashi
AU - Otsuka, Makoto
AU - Yoshimoto, Junichiro
AU - Doya, Kenji
N1 - Funding Information:
A part of this study is the result of “Bioinformatics for Brain Sciences” carried out under the Strategic Research Program for Brain Sciences by the Ministry of Education, Culture, Sports, Science and Technology of Japan.
Publisher Copyright:
© 2015 Nakano et al.
PY - 2015/3/3
Y1 - 2015/3/3
N2 - A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations that are noisy or that occurred in the past, even though such observations are inevitable and constraining features of learning in real environments. This class of problems is formally known as partially observable reinforcement learning (PORL), a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous, high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing, which can only be discovered through such a top-down approach.
UR - http://www.scopus.com/inward/record.url?scp=84923862767&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84923862767&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0115620
DO - 10.1371/journal.pone.0115620
M3 - Article
C2 - 25734662
AN - SCOPUS:84923862767
VL - 10
JO - PLoS One
JF - PLoS One
SN - 1932-6203
IS - 3
M1 - e0115620
ER -