Discounting of reward sequences: a test of competing formal models of hyperbolic discounting

Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data.
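The valuation schemes the abstract contrasts — hyperbolic discounting of a single reward, the Parallel model's summed valuation of a sequence, and the summed-exponentials approximation behind the μAgents model — can be sketched numerically. This is a minimal illustration, not the paper's fitting code: the discount parameter `k` and the reward sequence are invented, and the rate-mixture construction below is a standard identity from the discounting literature rather than the μAgents model's exact parameterization.

```python
import math

# Illustrative sketch only; the hyperbolic form V = A / (1 + k*D) is the
# standard single-parameter hyperbola from the discounting literature, and
# the parameter values here are made up, not fitted to the study's data.

def hyperbolic_value(amount, delay, k=0.1):
    """Hyperbolically discounted value of a single delayed reward."""
    return amount / (1.0 + k * delay)

def parallel_sequence_value(rewards, k=0.1):
    """Parallel model: a sequence is worth the sum of its individually
    discounted rewards, with no interaction between sequence elements."""
    return sum(hyperbolic_value(a, d, k) for a, d in rewards)

def summed_exponentials_value(amount, delay, k=0.1, n=20000, r_max=5.0):
    """Approximate the hyperbola by averaging exponential discounts
    exp(-r * delay) over an exponential distribution of rates with mean k
    (midpoint rule). This illustrates the summed-exponentials idea behind
    the muAgents model, though that model's exact form may differ."""
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += math.exp(-r * delay) * (1.0 / k) * math.exp(-r / k) * dr
    return amount * total

# Three equal rewards at delays 0, 5, and 10 time units:
seq = [(10.0, 0.0), (10.0, 5.0), (10.0, 10.0)]
print(parallel_sequence_value(seq))           # 10 + 10/1.5 + 10/2 ≈ 21.67
print(hyperbolic_value(10.0, 10.0))           # 5.0
print(summed_exponentials_value(10.0, 10.0))  # ≈ 5.0 (matches the hyperbola)
```

The HDTD model's non-linear interaction between sequence elements is not sketched here, since its exact form is not given in this record; the key contrast is that the Parallel model's sequence value is a plain sum of independently discounted rewards.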


Bibliographic Details
Main Authors: Zarr, Noah; Alexander, William H.; Brown, Joshua W.
Format: Online
Language: English
Published: Frontiers Media S.A. 2014
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3944395/
topic Psychology
doi http://dx.doi.org/10.3389/fpsyg.2014.00178
pubmed /pubmed/24639662
publishDate 2014-03-06
rights Copyright © 2014 Zarr, Alexander and Brown. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/3.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
repository_type Open Access Journal
institution_category Foreign Institution
institution US National Center for Biotechnology Information
building NCBI PubMed
collection Online Access