Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task
Prominent computational models describe a neural mechanism for learning from reward prediction errors, and it has been suggested that variations in this mechanism are reflected in personality factors such as trait extraversion. However, although trait extraversion has been linked to improved reward learning, it is not yet known whether this relationship is selective for the particular computational strategy associated with error-driven learning, known as model-free reinforcement learning, vs. another strategy, model-based learning, which the brain is also known to employ. In the present study we test this relationship by examining whether humans' scores on an extraversion scale predict individual differences in the balance between model-based and model-free learning strategies in a sequentially structured decision task designed to distinguish between them. In previous studies with this task, participants have shown a combination of both types of learning, but with substantial individual variation in the balance between them. In the current study, extraversion predicted worse behavior across both sorts of learning. However, the hypothesis that extraverts would be selectively better at model-free reinforcement learning held up among a subset of the more engaged participants, and overall, higher task engagement was associated with a more selective pattern by which extraversion predicted better model-free learning. The findings indicate a relationship between a broad personality orientation and detailed computational learning mechanisms. Results like those in the present study suggest an intriguing and rich relationship between core neuro-computational mechanisms and broader life orientations and outcomes.
| Main Authors: | Skatova, Anya; Chan, Patricia A.; Daw, Nathaniel |
|---|---|
| Format: | Article (peer reviewed) |
| Published: | Frontiers, 2013 |
| Published in: | Frontiers in Human Neuroscience, 7, 525/1-525/10 |
| ISSN: | 1662-5161 |
| DOI: | 10.3389/fnhum.2013.00525 |
| Online Access: | https://eprints.nottingham.ac.uk/3002/ |
| Full Text: | http://journal.frontiersin.org/Journal/10.3389/fnhum.2013.00525/full |
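For readers unfamiliar with the distinction the abstract draws, the sketch below contrasts the two strategies in the spirit of a two-stage decision task. It is a minimal illustration only: the task structure, parameter values, and function names are simplifying assumptions for exposition, not the authors' actual design or analysis code.

```python
import random

ALPHA = 0.1  # learning rate (assumed value for illustration)

# Assumed two-stage structure: first-stage action 0 usually leads to
# second-stage state "A", action 1 usually to "B" (common vs. rare transitions).
P_COMMON = 0.7
SECOND_STAGES = ["A", "B"]

q_mf = {0: 0.0, 1: 0.0}            # cached model-free first-stage values
q_stage2 = {"A": 0.0, "B": 0.0}    # second-stage values (used by both strategies)

def transition(action):
    """Sample the second-stage state reached from a first-stage action."""
    common = SECOND_STAGES[action]
    rare = SECOND_STAGES[1 - action]
    return common if random.random() < P_COMMON else rare

def update_model_free(action, state2, reward):
    # Model-free (error-driven) learning: back up the experienced reward along
    # the trajectory actually taken, ignoring the transition structure.
    q_stage2[state2] += ALPHA * (reward - q_stage2[state2])
    q_mf[action] += ALPHA * (q_stage2[state2] - q_mf[action])

def model_based_values():
    # Model-based learning: recompute first-stage values on the fly from the
    # known transition probabilities and current second-stage values, rather
    # than from cached first-stage prediction errors.
    return {
        a: P_COMMON * q_stage2[SECOND_STAGES[a]]
           + (1 - P_COMMON) * q_stage2[SECOND_STAGES[1 - a]]
        for a in (0, 1)
    }

# Toy simulation with random first-stage choices and a flat 50% reward rate.
for _ in range(100):
    action = random.choice([0, 1])
    state2 = transition(action)
    reward = 1.0 if random.random() < 0.5 else 0.0
    update_model_free(action, state2, reward)

print("model-free first-stage values:", q_mf)
print("model-based first-stage values:", model_based_values())
```

The point of the contrast is that after a rewarded rare transition the two strategies disagree: the model-free values credit the first-stage action that was taken, whereas the model-based values credit the action whose common transition leads to the rewarded state. Tasks of this kind exploit that disagreement to measure the balance between the two strategies in individual participants.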