A Bayesian Foundation for Individual Learning Under Uncertainty

Computational learning models are critical for understanding mechanisms of adaptive behavior. However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability …


Bibliographic Details
Main Authors: Mathys, Christoph, Daunizeau, Jean, Friston, Karl J., Stephan, Klaas E.
Format: Online
Language: English
Published: Frontiers Research Foundation 2011
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3096853/
id pubmed-3096853
recordtype oai_dc
spelling pubmed-3096853 2011-05-31 A Bayesian Foundation for Individual Learning Under Uncertainty Mathys, Christoph; Daunizeau, Jean; Friston, Karl J.; Stephan, Klaas E. Neuroscience
published Frontiers Research Foundation 2011-05-02 /pmc/articles/PMC3096853/ /pubmed/21629826 http://dx.doi.org/10.3389/fnhum.2011.00039 Text en
rights Copyright © 2011 Mathys, Daunizeau, Friston and Stephan. http://www.frontiersin.org/licenseagreement This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.
repository_type Open Access Journal
institution_category Foreign Institution
institution US National Center for Biotechnology Information
building NCBI PubMed
collection Online Access
language English
format Online
author Mathys, Christoph
Daunizeau, Jean
Friston, Karl J.
Stephan, Klaas E.
title A Bayesian Foundation for Individual Learning Under Uncertainty
description Computational learning models are critical for understanding mechanisms of adaptive behavior. However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability and involve complicated integrals, making online learning difficult. Here, we introduce a generic hierarchical Bayesian framework for individual learning under multiple forms of uncertainty (e.g., environmental volatility and perceptual uncertainty). The model assumes Gaussian random walks of states at all but the first level, with the step size determined by the next highest level. The coupling between levels is controlled by parameters that shape the influence of uncertainty on learning in a subject-specific fashion. Using variational Bayes under a mean-field approximation and a novel approximation to the posterior energy function, we derive trial-by-trial update equations which (i) are analytical and extremely efficient, enabling real-time learning, (ii) have a natural interpretation in terms of RL, and (iii) contain parameters representing processes which play a key role in current theories of learning, e.g., precision-weighting of prediction error. These parameters allow for the expression of individual differences in learning and may relate to specific neuromodulatory mechanisms in the brain. Our model is very general: it can deal with both discrete and continuous states and equally accounts for deterministic and probabilistic relations between environmental events and perceptual states (i.e., situations with and without perceptual uncertainty). These properties are illustrated by simulations and analyses of empirical time series. 
Overall, our framework provides a novel foundation for understanding normal and pathological learning that contextualizes RL within a generic Bayesian scheme and thus connects it to principles of optimality from probability theory.
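The abstract describes a hierarchy of Gaussian random walks, with each level's step size set by the level above, and trial-by-trial updates in which a prediction error is weighted by precision (inverse variance). As a minimal illustrative sketch only, and not the paper's actual equations or notation, a simplified single-level version of such a precision-weighted update for binary outcomes might look like the following; the parameter `omega` (a fixed log step-size standing in for the influence of the level above) and all variable names are assumptions for this sketch:

```python
import math

def sigmoid(x):
    """Logistic function mapping a real-valued tendency to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def update(mu2, sigma2, u, omega=-2.0):
    """One trial of a simplified precision-weighted belief update.

    mu2, sigma2: mean and variance of the belief about a latent tendency.
    u: observed binary outcome (0 or 1).
    omega: assumed fixed log step-size of the random walk; in a full
           hierarchy this would itself be determined by the level above.
    """
    # Prior ("predicted") variance: old posterior plus a random-walk step
    sigma2_hat = sigma2 + math.exp(omega)
    # Predicted outcome probability and the resulting prediction error
    mu1_hat = sigmoid(mu2)
    delta1 = u - mu1_hat
    # Precision of the outcome prediction (inverse Bernoulli variance)
    pi1_hat = mu1_hat * (1.0 - mu1_hat)
    # Posterior variance, then a precision-weighted mean update:
    # the uncertainty sigma2_new acts as a trial-by-trial learning rate
    sigma2_new = 1.0 / (1.0 / sigma2_hat + pi1_hat)
    mu2_new = mu2 + sigma2_new * delta1
    return mu2_new, sigma2_new

# Demo: a run of identical outcomes (u = 1) pushes the belief upward,
# with each step scaled by the current uncertainty.
mu2, sigma2 = 0.0, 1.0
for _ in range(50):
    mu2, sigma2 = update(mu2, sigma2, u=1)
```

The key feature this sketch shares with the framework described above is that the effective learning rate is not a fixed constant (as in simple RL) but is itself a dynamic quantity, large when the belief is uncertain and smaller when it is precise.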
publisher Frontiers Research Foundation
publishDate 2011
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3096853/