TY - JOUR
T1 - Balancing plasticity and stability of on-line learning based on hierarchical Bayesian adaptation of forgetting factors
AU - Hirayama, Junichiro
AU - Yoshimoto, Junichiro
AU - Ishii, Shin
N1 - Funding Information:
This study was partly supported by Grant-in-Aid for Scientific Research (B) (No. 16014214) from Japan Society for the Promotion of Science.
PY - 2006/10
Y1 - 2006/10
N2 - An important characteristic of on-line learning is its potential to adapt to changing environments by properly adjusting meta-parameters that control the balance between plasticity and stability of the learning model. In our previous study, we proposed a learning scheme to address changing environments in the framework of on-line variational Bayes (VB), an effective on-line learning scheme based on Bayesian inference. That work, however, was motivated by its implications for animal learning, and the formulation of the learning model was heuristic and not theoretically justified. In this article, we propose a new approach that balances the plasticity and stability of on-line VB learning in a more theoretically justifiable manner by employing the principle of hierarchical Bayesian inference. We present a new interpretation of on-line VB as a special case of incremental Bayes that allows the hierarchical Bayesian setting to balance plasticity and stability while yielding a simpler learning rule than standard on-line VB. This dynamic on-line VB scheme is applied to probabilistic PCA as an example of a probabilistic model involving latent variables. In computer simulations using artificial data sets, the new on-line VB learning robustly regulates the balance between plasticity and stability, thus adapting to changing environments.
AB - An important characteristic of on-line learning is its potential to adapt to changing environments by properly adjusting meta-parameters that control the balance between plasticity and stability of the learning model. In our previous study, we proposed a learning scheme to address changing environments in the framework of on-line variational Bayes (VB), an effective on-line learning scheme based on Bayesian inference. That work, however, was motivated by its implications for animal learning, and the formulation of the learning model was heuristic and not theoretically justified. In this article, we propose a new approach that balances the plasticity and stability of on-line VB learning in a more theoretically justifiable manner by employing the principle of hierarchical Bayesian inference. We present a new interpretation of on-line VB as a special case of incremental Bayes that allows the hierarchical Bayesian setting to balance plasticity and stability while yielding a simpler learning rule than standard on-line VB. This dynamic on-line VB scheme is applied to probabilistic PCA as an example of a probabilistic model involving latent variables. In computer simulations using artificial data sets, the new on-line VB learning robustly regulates the balance between plasticity and stability, thus adapting to changing environments.
UR - http://www.scopus.com/inward/record.url?scp=33748416315&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33748416315&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2005.11.020
DO - 10.1016/j.neucom.2005.11.020
M3 - Article
AN - SCOPUS:33748416315
SN - 0925-2312
VL - 69
SP - 1954
EP - 1961
JO - Neurocomputing
JF - Neurocomputing
IS - 16-18
ER -