
REVIEW article

Front. Psychol., 13 January 2016
Sec. Psychology for Clinical Settings
This article is part of the Research Topic "Reward processing in motivational and affective disorders"

A Computational Analysis of Aberrant Delay Discounting in Psychiatric Disorders

  • 1Max Planck University College London Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
  • 2Wellcome Trust Centre for Neuroimaging, University College London, London, UK
  • 3Centre for Health Policy, Imperial College London, Institute of Global Health Innovation, St. Mary's Hospital, London, UK

Impatience for reward is a facet of many psychiatric disorders. We draw attention to a growing literature finding greater discounting of delayed reward, an important aspect of impatience, across a range of psychiatric disorders. We propose these findings are best understood by considering the goals and motivation for discounting future reward. We characterize these as arising from either the opportunity costs of waiting or the uncertainty associated with delayed reward. We link specific instances of higher discounting in psychiatric disorder to heightened subjective estimates of either of these factors. We propose these costs are learned and represented based either on a flexible cognitive model of the world, an accumulation of previous experience, or through evolutionary specification. Any of these can be considered suboptimal for the individual if the resulting behavior results in impairments in personal and social functioning and/or in distress. By considering the neurochemical and neuroanatomical implementation of these processes, we illustrate how this approach can in principle unite social, psychological and biological conceptions of impulsive choice.

Introduction

Vitae summa brevis spem nos vetat incohare longam

Life's short span forbids our embracing far-reaching hopes - Horace, Odes (23BC)

Humans and animals often accept a smaller reward immediately, rather than wait to receive a larger reward in the future (Ainslie, 1974; Thaler, 1981; Thaler and Shefrin, 1981; Fishburn and Rubinstein, 1982; Frederick et al., 2002; McClure et al., 2007; Kalenscher and Pennartz, 2008; Pine et al., 2009). In economic terms, this behavior indicates that the subjective value of reward decreases as it is delayed, a process referred to as temporal discounting (for reviews see Frederick et al., 2002; Kalenscher and Pennartz, 2008). As we will discuss, biological agents have good reason to discount delayed rewards, since these might either fail to materialize or arrive too late to satisfy the organism's current needs. Indeed, as pointed out by the Roman poet Horace in the quotation above, the ultimate motive for discounting is that the agent will die before deferred rewards are realized.

In humans, temporal discounting can be measured by examining choices between quantities of money at varying delays (Mazur, 1987; Kirby and Maraković, 1995; Myerson et al., 2001; Green and Myerson, 2004). The most commonly used method elicits choices between a larger, delayed amount of money (e.g., “$100 in 6 months”), and a series of immediate amounts of decreasing magnitude (e.g., “$80 today”). By observing at each delay the magnitude of smaller-sooner reward at which the participant switches to preferring the later reward, the decrease in value of the later reward can be plotted as a function of delay. A non-parametric estimate of discounting can be derived by taking the area beneath this indifference curve (Myerson et al., 2001). Alternatively, the shape of the curve can be fitted with a discount function.
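To make the procedure concrete, the short Python sketch below computes the normalized area under a set of indifference points; the indifference values are invented purely for illustration, not taken from any study.

```python
# A minimal sketch (Python) of the non-parametric area-under-the-curve measure.
# The indifference points below are invented for illustration: each entry is the
# immediate amount judged equal in value to $100 at the corresponding delay.
import numpy as np

delays = np.array([0, 7, 30, 90, 180, 365])          # delay in days
indifference = np.array([100, 90, 75, 55, 40, 25])   # $ now, equivalent to $100 later

# Normalize both axes to [0, 1], as in Myerson et al. (2001).
x = delays / delays.max()
y = indifference / 100.0

# Area under the empirical discounting curve via the trapezoid rule:
# values near 1 indicate shallow discounting, values near 0 steep discounting.
auc = np.trapz(y, x)
print(f"Area under the discounting curve: {auc:.3f}")
```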

Samuelson (1937), and later Strotz (1957), showed that a decision-maker who discounts future benefits according to an exponentially decreasing function (and behaves as if to maximize the sum of exponentially discounted reward) allocates resources across time in a self-consistent manner. Under the classical model, the effect of delay, d, is described by an (exponential) discount function, here denoted by Δ(d), such that:

Δ(d) = e^{-kd}    (1)

Where k is an exponential discount rate, such that higher values of k lead to a steeper decrease in reward value with delay. The effect of reward magnitude, here signified by r, is independently described by an instantaneous utility function, u(r), such that the subjective utility of a stream of future rewards is then given by:

U(r_t, r_{t+1}, r_{t+2}, …, r_{T-1}, r_T) = Σ_{τ=t}^{T} u(r_τ) Δ(τ - t)    (2)

As reviewed by Frederick et al. (2002), the above account was not intended as a veridical psychological model of choice over time. In keeping with this, many experimental studies have shown that discounting is better approximated by a hyperbolic than by an exponential function (e.g., Green et al., 1994; Kirby and Herrnstein, 1995; Kirby and Maraković, 1995; Myerson and Green, 1995; Laibson, 1997; van der Pol and Cairns, 2002; Rubinstein, 2003), of the form:

Δ(d) = 1 / (1 + kd)    (3)

Here k denotes a hyperbolic discount rate (though for alternative accounts see Read, 2001; Kable and Glimcher, 2010; Read et al., 2012; Luhmann, 2013).
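The contrast between Equations (1) and (3) can be illustrated by fitting both forms to the same invented indifference data used in the sketch above; with empirical data, the hyperbolic form typically provides the better fit, as the studies cited here report.

```python
# Sketch: fitting the exponential (Eq. 1) and hyperbolic (Eq. 3) discount
# functions to the invented indifference data from the earlier sketch.
# Requires numpy and scipy.
import numpy as np
from scipy.optimize import curve_fit

delays = np.array([0.0, 7.0, 30.0, 90.0, 180.0, 365.0])          # days
relative_value = np.array([1.00, 0.90, 0.75, 0.55, 0.40, 0.25])  # r / R

def exponential(d, k):
    return np.exp(-k * d)

def hyperbolic(d, k):
    return 1.0 / (1.0 + k * d)

for name, fn in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    (k,), _ = curve_fit(fn, delays, relative_value, p0=[0.01])
    sse = np.sum((fn(delays, k) - relative_value) ** 2)
    print(f"{name:11s} fit: k = {k:.4f}, sum of squared errors = {sse:.4f}")
```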

Temporal discounting has received considerable attention in human behavioral neuroscience, not least because many forms of maladaptive behavior are readily characterized as pursuit of immediate gratification at the expense of reaping greater rewards in the future (Critchfield and Kollins, 2001; Bickel et al., 2007, 2014a; Koffarnus et al., 2013; Story et al., 2014). Indeed, lending validity to the discounting construct, steeper discounting is positively associated with behaviors with potentially harmful long-term consequences such as tobacco smoking (Odum et al., 2002; Epstein et al., 2003; Reynolds et al., 2004; Bickel et al., 2008; MacKillop and Kahler, 2009; Fields et al., 2009a,b; Reynolds and Fields, 2012), alcohol use (Van Oers et al., 1999; Mazas et al., 2000; Petry, 2001; Field et al., 2007; Reynolds et al., 2007; Rossow, 2008; MacKillop and Kahler, 2009; Moore and Cusens, 2010), illicit drug misuse (Kirby et al., 1999; Petry and Casarella, 1999; Kollins, 2003; Petry, 2003; Kirby and Petry, 2004; Washio et al., 2011; Stanger et al., 2012), credit card debt (Meier and Sprenger, 2012) and risky sexual or drug-taking practices (Odum et al., 2000; Dierst-Davies et al., 2011). Also, many authors have explored how discounting relates to demographic variables, finding that measured discounting decreases across the lifespan (Green et al., 1996, 1999; Chao et al., 2009; Steinberg et al., 2009), is negatively correlated with income (Green et al., 1996; Eckel et al., 2005; Reimers et al., 2009), and tends to be lower in individuals living in the developed world than in the developing world (Wang et al., 2010). Furthermore, although discounting is sensitive to a gamut of contextual factors (for a review see Koffarnus et al., 2013), the level of discounting has been shown to exhibit high test-retest reliability when measured under similar conditions (Odum, 2011), and the extent of individual discounting for different forms of reward is correlated (Odum, 2011), suggesting that discounting has a substantial trait component.

More recently, researchers have taken an interest in comparing discounting behavior in groups who exhibit symptoms of a given psychiatric disorder and those who do not. These studies have found evidence for steeper discounting amongst patients with symptoms of schizophrenia (Heerey et al., 2007, 2011; Ahn et al., 2011; MacKillop and Tidey, 2011; Wing et al., 2012; Avsar et al., 2013; Weller et al., 2014), depression (Takahashi et al., 2008; Dennhardt and Murphy, 2011; Dombrovski et al., 2012; Imhoff et al., 2014; Pulcu et al., 2014), mania (Mason et al., 2012), attention deficit hyperactivity disorder (ADHD) (Barkley et al., 2001; Tripp and Alsop, 2001; Bitsakou et al., 2009; Paloyelis et al., 2010a,b; Scheres et al., 2010; Scheres and Hamaker, 2010), anxiety disorder (Rounds et al., 2007) and cluster B personality disorder (Dougherty et al., 1999; Moeller et al., 2002; Petry, 2002; Dom et al., 2006a,b; Lawrence et al., 2010; Coffey et al., 2011). This line of enquiry is not without theoretical justification: the broader construct of impulsivity, defined as taking action without forethought or regard for consequences (Moeller et al., 2001), of which discounting is an element, is a defining feature of some psychiatric disorders, for example borderline personality disorder (Moeller et al., 2001; DSM V, 2013) and mania (Swann, 2009). Also, psychiatric disorders are strongly associated with poor health choices, including but not limited to cigarette smoking, and drug and alcohol misuse (Robson and Gray, 2007), which have themselves been associated with steeper discounting (Bickel et al., 2012b, 2014a,b; Story et al., 2014). However, in many cases this research, although clearly valuable, appears to have been opportunistic.

In this article we attempt to understand increases in discounting seen across a range of psychiatric disorders in light of the reasons why people should discount the future in the first place. We propose that the study of intertemporal impulsivity in psychiatric disorders would benefit from fractionating these underlying motives, and that parsing discounting in this manner can assist in drawing out the contributing psychological and biological processes. Our approach follows that of the neuroscientist David Marr (Marr, 1982), who proposed that information processing systems can be understood at three levels of analysis: a “computational” level, specifying what information processing problem is being solved by the system, an “algorithmic” level, formalizing how the system attempts to solve the problem, and an “implementational” level, denoting how these processes are realized physically.

For the case of discounting, the computational problem is easily defined in economic terms: to optimize the sum of future reward. However, this definition obscures a difficult question as to what constitutes “reward” (Moutoussis et al., 2015). It is convenient here to assume that all biological agents share some fundamental objective function. Rather than attempting to characterize the objective function directly, we assume some consensus on the kinds of outcome that organisms often seek, and that can therefore be considered “rewarding.” We then consider a subset of generic scenarios under which behavior consistent with discounting would indeed optimize the sum of future “reward.” This will give us some insight as to the contexts that agents, who discount future reward in different ways, including humans deemed to have mental disorders, might be adapted to.

We go on to speculate as to the broad classes of algorithms that biological agents might use to optimize reward, and where relevant their possible neural implementation. We argue that the application of this approach to psychiatric disorders, the bedrock of the emerging field of computational psychiatry (Huys et al., 2011; Montague et al., 2012; Friston et al., 2014; Stephan and Mathys, 2014; Wang and Krystal, 2014), can help to bridge a gap between psychological and biological conceptions of mental ill health (for further discussion see Moutoussis et al., 2015).

Marr's Computational Level: Reasons to Discount Future Reward

The discount function estimated from the analysis of intertemporal choice paradigms is likely to reflect the influence of factors jointly serving to make impatience potentially advantageous. A key ambiguity in the classical economic model concerns whether these factors should be properly assigned to the time series of future rewards, or to the discount function (Frederick et al., 2002; Frederick and Loewenstein, 2008; Friston et al., 2013; for a review of contextual influences on discounting see Koffarnus et al., 2013). The following discussion illustrates that if they are made fully explicit in the utility function, behavior consistent with temporal discounting emerges.

Opportunity Cost

Growth and Missed Investment

For most organisms growth and development are necessary to reach reproductive capacity (Williams, 1957). For humans, development also extends to furthering one's social status. Growth potential motivates obtaining rewards sooner rather than later, since earlier rewards can be invested—effectively loaned out at some rate of interest (see Rachlin, 2006; Kacelnik, 2011). The form of discounting that results depends on whether or not interest can be re-invested. Under the most straightforward scenario, referred to as simple interest, interest is not reinvested during the term of the loan. Consider a reward with utility r (for simplicity we omit the instantaneous utility function) invested for a period of time, d, to yield a larger payout, R. With simple interest:

R = r + krd    (4)

Solving for r and expressing as a ratio of the payout gives:

r/R = 1 / (1 + kd)    (5)

A decision-maker should therefore be indifferent between a larger reward of utility, R, received after a delay, d, and a smaller reward, r, received immediately. Thus, linear growth (simple interest) motivates hyperbolic discounting (see Read, 2004; Rachlin, 2006).

In the above example, after the delay has lapsed the agent ought to reclaim their money and re-invest the entire payout to avoid losing out to a lower rate of interest. Compound interest represents a continual reinvestment of the payout, and generates exponential growth, such that the payout accrued at time d after choosing r is given by:

R = r e^{gd}    (6)

Where g reflects the interest rate. Rearranging as before gives:

r/R = e^{-gd}    (7)

Thus, compound interest motivates exponential discounting.
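A brief numerical check of Equations (4) through (7), using arbitrary interest rates: under simple (linear) growth the indifference ratio r/R falls hyperbolically with delay, whereas under compound (exponential) growth it falls exponentially.

```python
# Sketch: indifference ratios implied by missed investment (the interest rates
# below are arbitrary, chosen only for illustration).
import math

k = 0.05   # simple interest rate per unit time (Eq. 4)
g = 0.05   # compound interest rate per unit time (Eq. 6)
R = 100.0  # the larger, delayed payout

for d in [1, 5, 10, 20]:
    r_simple = R / (1.0 + k * d)       # Eq. 5: hyperbolic decline of r/R
    r_compound = R * math.exp(-g * d)  # Eq. 7: exponential decline of r/R
    # Investing either immediate amount for d time units recovers the payout R.
    assert abs(r_simple * (1.0 + k * d) - R) < 1e-9
    assert abs(r_compound * math.exp(g * d) - R) < 1e-9
    print(f"d = {d:2d}: simple-interest r = {r_simple:6.2f}, compound-interest r = {r_compound:6.2f}")
```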

Missed Income

In the natural world, delay often entails inactive waiting, during which other sources of reward cannot be harvested. The cost associated with an inactive delay can be quantified as the reward that is missed out on while waiting (Kacelnik, 2011). Under one such formulation, organisms should consequently choose an action which maximizes a rate of reward per unit time, a concept that has arisen in ecological theory independently of the notion of discounting (Stevens and Krebs, 1986). Under this formulation, discounted value is simply inversely proportional to delay (Chung and Herrnstein, 1967). It can easily be shown, however, that if even “immediate” rewards are associated with some small delay, m, where m = 1/k, this is equivalent to hyperbolic discounting (Daw and Touretzky, 2000). Thus, at indifference:

r/m = R / (m + d)    (8)

Rearranging as previously:

r/R = m / (m + d) = 1 / (1 + d/m) = 1 / (1 + kd)    (9)

A corollary of this theory is that the opportunity cost of delaying reward on a particular option depends on the average rate of reward from all other options (Chung and Herrnstein, 1967; Daw and Touretzky, 2000; Niv et al., 2007).
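The equivalence in Equation (9) is easy to verify directly: an agent that always takes the option with the higher reward rate (reward divided by m plus its delay) makes exactly the same choices as a hyperbolic discounter with k = 1/m. The sketch below uses invented option values.

```python
# Sketch: choosing by reward rate (with a small intrinsic delay m attached even
# to "immediate" rewards) reproduces hyperbolic discounting with k = 1/m.
# The option values below are invented for illustration.
m = 2.0        # delay attached even to the "immediate" reward
k = 1.0 / m    # implied hyperbolic discount rate

# Each tuple: (smaller reward, its delay, larger reward, its delay).
choices = [(8.0, 0.0, 10.0, 5.0),
           (2.0, 0.0, 10.0, 1.0),
           (3.0, 0.0, 10.0, 12.0)]

for r_small, d_small, r_large, d_large in choices:
    rate_small = r_small / (m + d_small)          # reward per unit time
    rate_large = r_large / (m + d_large)
    hyp_small = r_small / (1.0 + k * d_small)     # Eq. 3 value
    hyp_large = r_large / (1.0 + k * d_large)
    # The two criteria always order the options identically.
    assert (rate_small > rate_large) == (hyp_small > hyp_large)
    preferred = "smaller-sooner" if rate_small > rate_large else "larger-later"
    print(f"rates {rate_small:.2f} vs {rate_large:.2f}: prefer the {preferred} option")
```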

Inactive waiting leads to interesting results if other options become available only once the delays associated with the current choice have lapsed. Consider for example a lawyer who is paid by the hour for seeing clients on weekdays, but does not work at weekends. Say that he or she has two lunch options, either waiting in a long queue for a tasty lunch at a popular café, or being able to buy an equally calorific but less enjoyable meal straightaway at a sandwich bar. The lawyer might be optimally inclined to choose the sandwich bar on weekdays, so as to facilitate a sooner return to work, but might choose to wait at the café if faced with the same choice on a weekend. Here the intertemporal choice is influenced by other available sources of reward, which are inaccessible during the delay. In ecological terms, if an organism is foraging in a reward-rich area, the opportunity cost of delaying foraging by engaging in other activities is greater than when foraging in a reward-poor area (Niv et al., 2007).

Thus, expressing the problem in terms of the total reward received, and letting the average rate of reward available after the delay be signified by ρ, at indifference:

R = r + ρd    (10)

Thus:

r = R - ρd    (11)

This arrangement allows for the possibility that a delayed reward carries negative value, whereby a decision-maker would be willing to pay so as to be able to resume seeking rewards at the average rate, rather than to wait for the delayed reward.
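A minimal illustration of Equation (11), with an assumed background reward rate: once ρd exceeds R, the delayed reward is worth less than nothing relative to simply resuming reward seeking.

```python
# Sketch of Eq. 11: the immediate amount equivalent to a delayed reward R when
# the environment otherwise pays an average rate rho (numbers are illustrative).
R = 10.0    # delayed reward
rho = 1.5   # average reward rate available from other activities

for d in [2, 5, 8]:
    r_equivalent = R - rho * d
    verdict = "worth waiting for" if r_equivalent > 0 else "better forgone"
    print(f"delay {d}: equivalent immediate value = {r_equivalent:+.1f} ({verdict})")
```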

Uncertainty

Probability and Hazard

Whenever reward (capital) is stored for the future, for example when a person lends money to another person or when an animal stores food, there is some possibility that the capital will be lost (for example if a conspecific raids the food store or the debtor defaults on their loan). If there is some constant probability per unit time, referred to as a hazard rate, that future rewards do not materialize as promised, the expected value of reward (magnitude × probability) decreases exponentially with delay and gives rise to exponential discounting (Sozou, 1998).

Following the notation above, at indifference:

r = R e^{-λt}    (12)

Rearranging:

r/R = e^{-λt}    (13)

Where λ denotes a constant hazard rate.

Thus, the agent choosing whether to store reward should adopt a discount rate appropriate to the estimated hazard rate. For example, a creditor ought to demand a rate of interest that is commensurate with the debtor's chance of default per unit time. Interestingly, where the appropriate hazard rate is uncertain, decision-makers ought to weight each possible hazard rate by its probability of being the true rate; such a weighted average of exponential rates approximates hyperbolic discounting (Sozou, 1998; Kurth-Nelson and Redish, 2009). As shown by Sozou (1998), hyperbolic discounting results exactly if:

∫_0^∞ f(λ) e^{-λt} dλ = 1 / (1 + kt)    (14)

Where f(λ) is a probability density function over hazard rates. The above is satisfied if:

f(λ) = (1/k) e^{-λ/k}    (15)

i.e., if there is an exponential prior distribution over hazard rates, where k determines the shape of this distribution. In support of Sozou's theory, Takahashi et al. (2007) find that the subjective probability of receiving delayed reward in standard intertemporal choice tasks indeed decays hyperbolically.
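Sozou's result in Equations (14) and (15) can be checked numerically: averaging the exponential survival terms e^{-λt} over an exponential prior on λ reproduces the hyperbolic form 1/(1 + kt).

```python
# Sketch: numerical check of Sozou's (1998) result (Eqs. 14-15) that averaging
# exponential discounting over uncertain hazard rates yields hyperbolic discounting.
import numpy as np

k = 0.1                                    # mean of the exponential prior over hazard rates
lams = np.linspace(1e-6, 5.0, 200_000)     # grid over hazard rates lambda
prior = (1.0 / k) * np.exp(-lams / k)      # Eq. 15: exponential prior with mean k

for t in [1.0, 5.0, 20.0, 50.0]:
    mixture = np.trapz(prior * np.exp(-lams * t), lams)   # left-hand side of Eq. 14
    hyperbolic = 1.0 / (1.0 + k * t)                      # right-hand side of Eq. 14
    print(f"t = {t:5.1f}: mixture of exponentials = {mixture:.4f}, hyperbolic = {hyperbolic:.4f}")
```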

As the quotation at the start of this article encapsulates, death creates a fundamental motive not to defer rewards for too long. In computational terms death can be considered to be an absorbing state, from which no future reward can be harvested. Notably, a hazard rate for the event of dying can be seen to depend on the organism's current state, such that a greater physiological deficit is associated with a greater probability of dying per unit time. The fundamental value of reward is then its effect in reducing the hazard rate for dying (before successfully securing one's legacy). This argument suggests that it is optimal for biological agents to discount future reward more steeply when they are currently far from a physiological set point, based simply on an increased probability of their dying before future reward is attained.

Volatility

In summary, environmental hazards create a motive to discount the future, since future rewards might not materialize as promised. In addition, the utility of future rewards might be more uncertain, in the sense of having higher variance than immediate rewards (when the variance is known the resulting uncertainty is referred to as risk). Many behavioral economic studies have shown that people tend to be risk averse (Kahneman and Tversky, 1979; Holt and Laury, 2002; Trepel et al., 2005; Andersen et al., 2008; Platt and Huettel, 2008; Jones and Rachlin, 2009), in so far as they will accept a smaller expected payoff over a larger expected payoff with higher variance. If future events tend to evolve with a random component, the uncertainty associated with future events increases with delay (Mathys et al., 2011). To take an example, a decision-maker responding to a discounting questionnaire might have some degree of uncertainty about the subjective utility of a $20 payout received immediately (if this appears implausible, imagine being paid in a foreign currency, whose worth is uncertain). However, owing to volatility governing future events in their lives (e.g., becoming ill, falling into debt, national economic collapse), uncertainty regarding the utility of the $20 ought to increase as it is delayed. In combination with risk aversion this motivates delay discounting. In support of this idea, individual discount rates are correlated with risk aversion (Leigh, 1986; Anderhub et al., 2001; Eckel et al., 2005; Jones and Rachlin, 2009; Dohmen et al., 2010).
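One way to make this argument concrete, under assumptions that go beyond the text (a risk-averse exponential utility function, and noise on the realized utility of the reward whose variance grows linearly with delay), is to compute the certainty equivalent of a delayed payout; it falls with delay even though the expected amount does not change.

```python
# Sketch: a risk-averse agent facing delay-dependent uncertainty about the utility
# of a reward effectively discounts it. Assumptions (not from the text): exponential
# (CARA) utility with risk aversion a, and Gaussian noise on the realized utility
# whose variance grows linearly with delay.
import numpy as np

rng = np.random.default_rng(0)
a = 0.05               # coefficient of absolute risk aversion (assumed)
r = 20.0               # face value of the reward, e.g. $20
sigma2_per_day = 0.5   # assumed growth of utility variance per day of delay

def certainty_equivalent(delay_days, n_samples=200_000):
    noise = rng.normal(0.0, np.sqrt(sigma2_per_day * delay_days), size=n_samples)
    expected_utility = np.mean(-np.exp(-a * (r + noise)))
    return -np.log(-expected_utility) / a   # invert the CARA utility

for d in [0, 30, 180, 365]:
    print(f"delay {d:3d} days: certainty equivalent = ${certainty_equivalent(d):6.2f}")
```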

Notably, risk aversion can be expressed in terms of probability discounting, which is found to be hyperbolic in the odds against receiving a reward. Whilst probability discounting and temporal discounting are often found to be correlated across individuals (e.g., Jones and Rachlin, 2009), they are subject to distinct influences. For example, increasing reward magnitude increases probability discounting (i.e., risk aversion) and decreases temporal discounting (Green and Myerson, 2004). This is often taken as evidence that temporal discounting does not encompass an estimate of the risk associated with future rewards. However, what is pertinent to discounting is how a person's estimate of risk depends on delay. Probability discounting offers a measure of risk aversion but does not access this time-dependent representation of risk. In support of this idea, Takahashi et al. (2007) find that while probability and temporal discounting are uncorrelated across individuals, temporal discounting does correlate with the rate of decay in the subjective probability of receiving reward after increasing delay. This may help explain why psychiatric disorders are often associated with increased inter-temporal discounting but not necessarily with excessive probability discounting.

Marr's Algorithmic Level: Processes Sub-Serving Intertemporal Choice

In the preceding analysis we have outlined some generic scenarios under which behavior consistent with discounting would be optimal. These scenarios illustrate that discounting need not be considered as a unitary process, but rather as (implicitly or explicitly) reflecting an expectation of different environmental contingencies. Under reinforcement learning formulations, such contingencies are seen as engendering transitions in a state-space (Sutton and Barto, 1998; Dayan and Balleine, 2002; Dayan and Daw, 2008; Kurth-Nelson and Redish, 2009). That is, an action is assumed to move the agent from one (discrete) state to another, where each state may be associated with a varying quantity of reward. The state-space is equivalent to the vector of rewards described in the classical economic model (Equation 2), though may also be made contingent on the agent's future behavior, giving rise to a matrix, or “decision-tree.” A key question for this account is whether the (discounted) utility of a delayed reward is directly parameterized, which is to say that there is no more inference or learning beyond the state where this utility is considered, or whether the delayed reward is instead considered as part of a cascade of preceding states.

A Parametric Discount Function?

If higher organisms indeed represent a discount function parametrically, they would require a widespread and efficient system for making this information accessible for decision-making. Neuromodulatory systems, with their diffuse connections to many areas of the brain, would be well placed to achieve this, and several authors have speculated that neuromodulators, such as dopamine and norepinephrine, might represent some of the relevant parameters. For example, Niv et al. (2007) have proposed that the average rate of reward is signaled in the mammalian brain by tonic levels of extracellular dopamine in the striatum, suggesting that increased striatal dopamine availability might increase discounting by increasing the implicit opportunity cost of delay. Commensurate with this hypothesis, systemic administration in humans of the dopamine precursor l-Dopa increases discount rates (Pine et al., 2010), although potentially countervailing evidence is that decreasing dopamine transmission in rats by administration of haloperidol (Denk et al., 2005) or flupenthixol (Floresco et al., 2008) has been found to increase discounting, or in other studies to exert no significant effect on discounting (Winstanley et al., 2005).

Similarly, a good deal of decision-making neuroscience seeks to uncover how uncertainty is represented neurally (see Behrens et al., 2007; Wilson et al., 2010; Mathys et al., 2011; Nassar et al., 2012). A recent suggestion is that operating in an unstable environment is associated with tonic release (over a time course of minutes) of norepinephrine (Yu and Dayan, 2003, 2005). The latter would suggest that tonic norepinephrine might signal environmental volatility, and thus influence discounting. Clearly, further psychopharmacological work is needed to fully uncover the role of monoaminergic signaling in discounting behavior. Also, if organisms indeed have a parametric model of discounting in the strictest sense, then this ought to be revealed in the manner in which estimates of discounting are updated in light of changes in the environment, and careful behavioral work is required to probe this possibility.

Discounting as a Revealed Phenomenon

According to a second possibility outlined above, choosing a delayed reward leads to a cascade of states, and may (or may not) lead to the promised reward, which, if it occurs, may be delivered in a variety of future states (just in time for Christmas, after I've been killed by a bus, etc.) (see Peters and Büchel, 2010). If an agent uses this cascade of states to evaluate its actions, then an action acquires only whatever value percolates back through the resulting transitions from the end states. Here discounting takes place due to learning and/or inference, where the value of the reward gradually evaporates as inference (or learning) propagates through a cascade of states. Given the properties of organisms and their environments, as outlined above, behavior consistent with discounting would simply emerge as the end result of applying these learning processes to situations where there is delay in the receipt of reward. Under this possibility, in terms of the economic model, all relevant information is summarized in an agent's utility function, which then implicitly incorporates the discount function. It appears likely that organisms use parallel mechanisms to calculate the value of the resulting state-space, operating across different timescales of information integration, ranging from updating innate behaviors through evolution, through learning from experience, to inferring future states via deployment of a cognitive map or model of the world.
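A minimal sketch of the cascade idea, with assumed numbers: if the delayed option leads through a chain of intermediate states, each survived with some probability per time step, then the value propagating back to the choice point shrinks with the length of the chain, and exponential discounting emerges without any explicit discount parameter.

```python
# Sketch: discounting emerging from value propagating back through a cascade of
# states, each survived with probability p per time step (numbers illustrative).
p = 0.95        # per-step probability that the promised reward still lies ahead
reward = 100.0  # reward delivered in the terminal state

def value_at_choice(delay_steps):
    """Backward induction along a simple chain: V(state) = p * V(next state)."""
    v = reward
    for _ in range(delay_steps):
        v = p * v
    return v

for d in [0, 5, 10, 20]:
    print(f"{d:2d} steps to reward: value at the choice point = {value_at_choice(d):6.2f}")

# The result equals reward * p**d, i.e. exponential discounting with rate -ln(p),
# even though no discount function was specified anywhere.
```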

Reliable valuations may be refined and passed on through genetic inheritance and evolution. For example, the possibility of death, and its associated opportunity cost, is likely incorporated through evolution, whereby internal states deviating from a homeostatic ideal, such as hunger and thirst, are assigned an innate cost as a proxy (see Keramati and Gutkin, 2011). Thus, discounting for food would be expected to increase when hungry, due to innate negative value associated with prolonging a state of hunger. Furthermore, actions themselves might in some cases be selected from an innately determined repertoire. Through Pavlovian conditioning, a stimulus (termed unconditioned stimulus, US, e.g., food) that elicits an innate response (the unconditioned response, e.g., salivation), can become associated with another stimulus (conditioned stimulus, CS, e.g., a tone), such that the latter subsequently becomes capable of eliciting an appropriate innate response independently (Rescorla and Solomon, 1967; Williams and Williams, 1969; Hershberger, 1986; Pavlov, 2003). Here the conditioning process, whereby CS becomes associated with US, can incorporate the cost of delay to conform to the optimal adaptations of some of the computational processes above. For example, if delivery of food follows a tone, with an intervening delay of 10 s, the “Pavlovian value” of the tone may be temporally discounted by a given proportion per unit time relative to that of the food (Domjan, 2003). Algorithmic accounts of classical conditioning, such as temporal difference learning, thus incorporate an exponential discount factor (O'Doherty et al., 2003; Moutoussis et al., 2008; Dayan, 2009; Kurth-Nelson and Redish, 2009). Exactly how such discounting is represented at a neurobiological process level remains unclear, but the influences outlined must be important. For example, the incremental process of temporal-difference learning, including Rescorla-Wagner learning (Domjan, 2003), means that the strength of the association between CS and US comes to reflect their probabilistic relationship.
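The tone-food example above can be written out as a temporal-difference scheme; the discount factor and learning rate below are assumed values for illustration, and the learned value of the tone converges toward the food value discounted once per second of the delay.

```python
# Sketch: tabular TD(0) learning of state values for a tone followed, after a
# 10 s delay, by food. The discount factor and learning rate are assumed values.
GAMMA = 0.9    # exponential discount factor per one-second step (assumed)
ALPHA = 0.1    # learning rate (assumed)
DELAY = 10     # seconds between tone (state 0) and food (state DELAY)
FOOD = 1.0     # value of the food reward

values = [0.0] * (DELAY + 2)   # states 0..DELAY plus a terminal state

for episode in range(5000):
    for s in range(DELAY + 1):
        reward = FOOD if s == DELAY else 0.0          # reward received in the food state
        td_error = reward + GAMMA * values[s + 1] - values[s]
        values[s] += ALPHA * td_error

print(f"learned Pavlovian value of the tone: {values[0]:.3f}")
print(f"food value discounted over the delay: {GAMMA ** DELAY * FOOD:.3f}")
```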

Organisms can also learn the value of actions based simply on whether or not they yielded benefits in the past, referred to as instrumental conditioning (Domjan, 2003). In algorithmic terms, this can be most parsimoniously achieved by integrating the history of reinforcement following a given action, without representing an explicit model of the relationship between actions and their outcomes (Watkins and Dayan, 1992; Daw et al., 2005; Seymour et al., 2005; Schultz, 2006; Moutoussis et al., 2008; McDannald et al., 2011). This is referred to as model-free reinforcement learning, and corresponds to the “Thorndikian” Law of Effect (Thorndike, 1927), or “habit” learning (Dickinson et al., 1995; Ouellette, 1998; Neal, 2006; Tricomi et al., 2009; Dolan and Dayan, 2013; Orbell and Verplanken, 2014). Instrumental learning would be expected to incorporate discounting, to the extent that the environmental influences described earlier in this article affect the timecourse of reward contingent on a particular action.

Finally, biological agents can avail themselves of a cognitive map, or model, of the world, detailing the results of different actions and their respective values (Dickinson and Balleine, 1994; Balleine and Dickinson, 1998; Gläscher et al., 2010; Daw et al., 2011; McDannald et al., 2011). The choice of action proceeds by thinking forward through the map (or tree), and considering the consequences of alternative actions (see Seymour and Dolan, 2008). This mode of control is referred to in reinforcement learning applications as model-based (Gläscher et al., 2010; Daw et al., 2011; Wunderlich et al., 2012; Smittenaar et al., 2013; Lucantonio et al., 2014), and corresponds to the definition of goal-directed behavior in animal learning as being rapidly sensitive to changes in the contingency between action and outcome, or to devaluing the outcome (Dickinson and Balleine, 1994; Balleine and Dickinson, 1998). An advantage of the model-based approach lies in its flexibility. This approach is necessary, for example, to generate appropriate intertemporal choices in esoteric scenarios to which a smooth discount function is not well adapted. Say a generous experimenter offers me a choice between $100 today and $125 4 weeks from today. The knowledge that I will be receiving my monthly pay of $1000 exactly 4 weeks from today, and that without additional income I am likely to exceed my overdraft limit next week by around $50, incurring a heavy fine, would likely encourage me to choose the immediate money. If I were to choose between the immediate and delayed money according to a parametric discount function alone, without considering extraneous sources of (dis)utility, I might lose out to the overdraft fine. In summary, through the above innate and instrumental learning processes, given appropriate experience of the cost of delay, an organism can behave in a manner consistent with discounting without directly computing discounted value at all.
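The overdraft example can be sketched as a tiny model-based evaluation. The $100, $125, $1000 and $50 figures come from the text; the size of the fine, the linear utility and the comparison discount rate are assumptions made purely for illustration.

```python
# Sketch: the overdraft example as a tiny model-based evaluation. The reward
# amounts come from the text; the fine size, linear utility and the comparison
# discount rate are assumptions made purely for illustration.
FINE = 150.0          # assumed size of the "heavy fine"
SALARY = 1000.0       # monthly pay arriving in 4 weeks (both branches)
K_PER_DAY = 0.005     # assumed hyperbolic discount rate, for comparison only

def take_now():
    # $100 today covers the projected $50 shortfall, so no fine is incurred.
    return 100.0 + SALARY

def wait_four_weeks():
    # Nothing arrives before next week's shortfall, so the fine is incurred.
    return 125.0 + SALARY - FINE

def hyperbolic_value(amount, delay_days):
    return amount / (1.0 + K_PER_DAY * delay_days)

print("model-based totals:  take now =", take_now(), " wait =", wait_four_weeks())
print("discount function alone: now =", hyperbolic_value(100.0, 0),
      " wait =", round(hyperbolic_value(125.0, 28), 1))
# With these numbers the smooth discount function prefers waiting, whereas the
# model-based evaluation, which registers the fine, prefers the immediate $100.
```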

(Mal)Adaptive Discounting in Psychiatric Disorders

We propose that whether parametric, or revealed through the above valuation processes, discounting nevertheless represents encoding of different environmental contingencies. It is therefore worthwhile, where changes in discounting are observed, for example in psychiatric disorders, to consider such changes in light of the environment to which a given individual might be “tuned” (see also Del Giudice, 2014). The key point here is that the decision-maker brings to a laboratory intertemporal choice task their previous experience of delay and may also consider the rewards of the task in the context of other future outcomes they expect to receive. We consider particular instances of this below.

Mania as a State of Increased Opportunity Cost

Might steeper discounting in some pathological states reflect increased estimates of opportunity cost? In support of discounting being sensitive to changes in opportunity cost, discount rates for money have been shown to increase in line with increases in inflation (Ostaszewski et al., 1998). More speculatively, steeper discount rates in childhood and adolescence, which decline into adulthood (Green et al., 1999; Chao et al., 2009; Steinberg et al., 2009), might even reflect greater potential for growth in adolescence. We propose that the pathological state of mania is associated with perceived high rates of reward and high growth potential, creating a heightened opportunity cost associated with inaction. Mania is known to be associated with impulsive behavior, such as overspending, rash financial decision-making or drug-taking (Swann, 2009), and one study (Mason et al., 2012) finds evidence for steeper discounting in an intertemporal choice task with real-time delays in the order of seconds in individuals prone to hypomanic symptoms.

Notably growth potential creates something of a paradox. On the one hand investing reward to achieve growth implies that the decision maker has adopted a long-term view. On the other hand, having something worthwhile to invest in favors choices that obtain rewards sooner rather than later, so that they too can be invested. For example, imagine you are starting a new business venture. Whilst this is necessarily a long-term project, you might sacrifice other potential rewards, such as your health or relationships, in order to invest resources in the business, which can be seen as borrowing predicated on a high level of return from your new business. Manic individuals generate novel, and often unrealistically ambitious, goals, for example, enlisting on education courses, or indeed starting new business ventures (DSM V, 2013). We propose that these goals create high opportunity costs to delaying reward, increasing preference for immediate rewards, so as to enlist resources for goal-pursuit. This offers a putative psychological explanation for why increased impulsivity in mania (Swann, 2009), including steeper discounting (Mason et al., 2012), manifests alongside an apparent increase in goal-directed activity.

The investment in apparently long-term goals in mania seems to occur at the expense of patients correctly “playing out” or “forward modeling” future scenarios themselves. This explains why the same (mal)adaptation is found across several behavioral domains. McClure and colleagues (McClure et al., 2004, 2007) have suggested that the explicit influence of larger-later options on behavior is associated with greater cognitive control, which is reduced in mania in tandem with prefrontal activation (Murphy et al., 1999; Townsend et al., 2010). This reduction in “forward modeling” is in fact consistent with, if not necessary for, the suggestion we make here to work. That is, if a person with mania were to consider in detail the path ahead leading to their goals, they would realize that the projection implicit in their growth estimate is unrealistic, and would feel they could afford to be patient. A further interesting possibility, discussed further below, is that such forward modeling itself takes time, and that in the face of high opportunity costs, the depth of such model-based strategies is reduced in favor of more rough-and-ready heuristics, or more Pavlovian or habitual responding (Dezfouli, 2009; Huys et al., 2012). Future investigations of mania might focus on measuring beliefs about growth and opportunity cost directly, and on whether such beliefs correlate with changes in discounting. Interestingly, Dezfouli (2009) similarly proposes that the abnormally high rewards engendered by drugs of abuse lead to an artificially elevated estimate of the average reward rate in the environment, and that this accounts for increased discounting seen amongst substance abusers (e.g., Kirby et al., 1999; Kollins, 2003; Kirby and Petry, 2004).

Finally, we have shown above how an increase in the rate of reward available from activities other than those currently on offer increases impatience to complete the current activity as soon as possible (i.e., increases discounting for rewards obtained from the task in hand). Niv et al. (2007) use the same approach to explain variations in response vigor. In their model they propose that the agent can choose to reduce latency of its responses, at some energetic cost that is proportional to the latency reduction. Thus, choosing how quickly to perform a particular action itself becomes an intertemporal choice. As their model illustrates, greater vigor (shorter response latency) is then optimal where the average reward rate is higher, in order that agents can resume reward seeking as soon as possible. This description accords well with that of mania, where sufferers often describe the need to complete various tasks with great urgency and where the general vigor of behavior is markedly increased. Furthermore, the model of Niv and colleagues incorporates a latency-independent cost associated with switching tasks. As the authors show, at high reward rates latency-dependent costs tend to dwarf the switching cost, leading to greater task switching than at low reward rates. This too is in keeping with behavior exhibited in manic states, where sufferers have difficulty sustaining tasks.
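A compact sketch of this latency trade-off, with illustrative parameters rather than those of Niv et al. (2007): if acting with latency τ incurs an energetic cost C/τ plus an opportunity cost ρτ of the time taken, the optimal latency is √(C/ρ), which shrinks as the average reward rate ρ rises.

```python
# Sketch: optimal response latency when acting with latency tau costs C/tau in
# energy plus rho*tau in forgone reward (parameters illustrative; see Niv et al.,
# 2007 for the full model of response vigor).
import math

C = 4.0   # energetic cost scale: faster responses are more costly

def optimal_latency(rho):
    """Minimize C/tau + rho*tau; the minimum lies at tau = sqrt(C/rho)."""
    return math.sqrt(C / rho)

for rho in [0.25, 1.0, 4.0]:   # average reward rate of the environment
    print(f"average reward rate {rho:4.2f}: optimal latency = {optimal_latency(rho):.2f}")
```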

Economic Poverty as a Deficit State

In keeping with the normative notion that deficit states increase a hazard rate for losing out on future reward, discounting indeed tends to be higher in states of monetary or physiological deficit. For example, steeper discounting is observed in individuals with lower incomes (Green et al., 1996; Reimers et al., 2009), an effect which remains after controlling for level of education. Of course, such studies are correlational, making it difficult to conclude that changes in income directly alter discounting. However, an interesting study by Callan et al. (2011) provides indirect support for a more causal role of low income in increasing discounting. The authors found that a manipulation which led people to believe that their income was lower than their peers' brought about an increase in discounting, relative to a group who were led to believe that their income was similar to that of their peers. The manipulation was interpreted as priming personal notions of deservedness, though this might just as easily be formalized as a shift toward a perceived deficit state. In a conceptually related study, Haushofer et al. (2013) performed an experiment in which subjects performed an effort task for monetary reward, after which different groups received either an increase in income from a low starting endowment, or a decrease in income from a high starting endowment. The design thus allowed the effect of (experimental) wealth changes to be dissociated from absolute wealth. Subjects' temporal discount rates were measured before and after the task, with the finding that negative income shocks led to an increase in discounting, while positive income shocks effected a small decrease in discounting. Starting wealth was found to be unrelated to discounting. Notably, the size of an experimental endowment might not be expected to have an effect on discounting, since the endowment was likely to be small in comparison to subjects' total real-world wealth. The effect of negative income shocks, which might be interpreted as having primed an increased hazard rate for future earnings, suggests that instability in earnings, rather than simply total wealth, is an important determinant of the relationship between socioeconomic status and discounting.

A study in women deprived of food and water (for 4 h after their usual waking time) found that women given a pre-loading meal prior to testing chose an option leading to the delayed, rather than immediate, delivery of juice significantly more often than women who had not received a pre-loading meal (Kirk and Logue, 1997). Also, Wang and Dvorak (2010) measured monetary discounting before and after participants drank either a sugary or a sugar-free drink (both caffeine-free), finding a significant decrease in discounting in the group who drank the sugary drink and a significant increase in the control group. This finding suggests that raising blood glucose decreases discounting, an idea congruent with increased discounting associated with deficit states.

Economic poverty may well underlie some of the steeper discounting seen in psychiatric disorders, through an association between mental illness and lower socioeconomic status (e.g., Weich and Lewis, 1998; Lorant et al., 2007) (although in several studies associations remain after controlling for socioeconomic characteristics). Notably, there may be an interdependent relationship between low socioeconomic status, discounting, and mental ill health, whereby impatience for rewards leads to maladaptive choices such as substance misuse, which in turn are associated with worsening finances, further increases in discounting and increased risk of psychiatric disorder (e.g., Fields et al., 2009b; Leitão et al., 2013). A similar idea has been championed by Bickel et al. (2014b), who propose that the environment associated with low socioeconomic status promotes steeper discounting, which in turn engenders unhealthy choices, thus contributing to known socioeconomic gradients in health status (Adler and Rehkopf, 2008). This is supported by evidence that cigarette smoking, obesity, alcohol use and illicit drug use all exhibit negative relationships with socioeconomic status (Conner and Norman, 2005), that these behaviors are associated with poor executive functioning (e.g., Bickel et al., 2012a), and that economic poverty is prospectively associated with poor executive functioning (Lupien et al., 2007; Noble et al., 2007; Evans and Schamberg, 2009). We discuss this interaction between environment and cognition in Section The Cost of Thinking in Economic Poverty, Borderline Personality Disorder and Schizophrenia below.

ADHD as a Deficit State

Interestingly, the effects of deprivation appear to cross modalities of reward. For example, mild opioid deprivation in opioid dependent individuals increases discounting for money as well as heroin (Giordano et al., 2002). Arguably this might be motivated by a desire on the part of subjects to obtain money sooner so as to buy drugs. However, it might equally be attributable to a more global alteration in decision-making associated with physiological deficit states (see also Loewenstein, 1996; Metcalfe and Mischel, 1999). In further support of this idea, exposure to erotic cues increases discounting for money, as well as for candy bars or soda drinks, in men (Van den Bergh et al., 2008). Furthermore, the effect of sex cues to increase discounting for food and drink rewards was attenuated by satiation with money, providing evidence for a global physiological signaling mechanism. Niv et al. (2007) propose that this “global drive” mechanism might involve modulation of tonic dopamine signaling.

In some cases steeper discounting observed in psychiatric disorders might reflect processes associated with normal deficit states. ADHD is a possible example. ADHD is defined by behavioral symptoms of inattentiveness, over-activity and impulsivity, of long-standing duration, and is most commonly diagnosed in school-aged children (DSM V, 2013). Many studies have shown that children with ADHD have a greater tendency than controls to choose immediate over delayed rewards in single choices (e.g., Sonuga-Barke et al., 1992; Schweitzer and Sulzer-Azaroff, 1995; Kuntsi et al., 2001; Bitsakou et al., 2009; for reviews see Luman et al., 2005; Paloyelis et al., 2009) and (relative to controls) are biased toward choosing tasks which yield earlier, rather than delayed, reinforcement (Tripp and Alsop, 2001). Also, on delay of gratification tasks (Mischel et al., 1989) children with hyperactivity exhibit a greater tendency to terminate the delay to obtain a smaller reward, rather than waiting an allotted time for a larger reward (Rapport et al., 1986). Furthermore, several studies now report steeper monetary discounting in children with ADHD (Paloyelis et al., 2009; Scheres et al., 2010; Wilson et al., 2011; Demurie et al., 2012) or in adults with previous ADHD (Hurst et al., 2011).

We hypothesize that the increased discounting rates found in ADHD reflect not only the well-known genetic vulnerability for this disorder but also the more deprived environments that lead to increased expression of this disorder (Apperley and Mittal, 2013; Russell et al., 2015). In support of this, in one study boys with ADHD symptoms who had been reared in deprived institutions showed greater aversion to delay than less deprived patients with ADHD (Loman, 2012). Thus, seeking of immediate reward in ADHD might reflect underlying mechanisms linking increased discounting with states of internal deprivation. One such mechanism would be that outlined above of higher rates of reward available from alternative tasks. For example, say that children with ADHD have an internal state resembling a deprivation of loving attention; their performance of tasks that do not offer this attention, such as quiet private study, is likely to be more impatient, so as to more quickly return to actions that do command attention from others.

Increased Estimates of Uncertainty and Hazard

Although conventional discounting tasks offer choices between rewards that are promised to be delivered with certainty, decision-makers likely come to the task with a prior belief regarding the level of hazard in the environment, and so tend to implicitly distrust the experimenter's assertion that the future rewards are guaranteed. In support of this, discount rates amongst cigarette smokers have been shown to correlate positively with their belief that the future reward will be delivered (Reynolds et al., 2007). Also, within a standard discounting questionnaire, people discount more steeply when rewards are framed as being received from fictive characters rated as untrustworthy, as opposed to from characters perceived as trustworthy (Michaelson et al., 2013).

In an interesting study, Callan et al. (2009) measured discounting in 56 undergraduate students who first watched an interview with an HIV-positive woman. One group were told that she had acquired HIV through unprotected sex and the other group that she had acquired the virus via an infected blood transfusion. The latter group exhibited significantly steeper discounting, an effect which was proposed to result from the story of the infected blood transfusion having primed a belief that the world is unjust. A related explanation, independent of feelings of injustice per se, would be that the transfusion scenario increased the perceived hazard rate for adverse life events.

Finally, as described previously, the ultimate hazard is that one will die before the future reward occurs. In keeping with this, in a South African population, discounting was found to be higher amongst individuals with the lowest perceived survival probability than amongst those with average survival probability (Chao et al., 2009), and to correlate with the number of bereavements of close family members reported by North Americans (a factor putatively increasing perceived mortality risk) (Pepper and Nettle, 2013). Furthermore, discounting has been shown to increase on conscription into the Israeli army (Lahav et al., 2011), and to be higher in youths living in slums in Rio De Janeiro than in an age matched sample of university students (Ramos et al., 2013).

Populations with psychiatric disorders might well believe that future rewards are less likely to materialize (a higher hazard rate) than do healthy control populations, for quite rational reasons, given their life experiences (Hill et al., 2008). In other words, the past is the best predictor of the future, and this may be why psychiatric disorders associated with hazardous development are characterized by higher discounting rates. Populations with psychiatric illness have experienced an excess of major life events compared with the healthy population (Paykel, 1978), and have excess mortality from physical health conditions compared with the general population (Robson and Gray, 2007). The latter would be expected to be associated with lower perceived survival probability, given correlations between perceived and actual mortality in the general population (Idler and Benyamini, 1997), although to our knowledge no previous studies have examined this. This may in turn result in decisions that perpetuate or worsen the disorder. Indeed, Sonuga-Barke has hypothesized that the high discounting rates measured in the laboratory in youths with conduct disorder represent an accurate, and hence adaptive in their native environment, summary of the increased hazards that these youths so commonly have experienced (Sonuga-Barke, 2014). An interesting possibility for future research would be to elicit beliefs of groups with psychiatric disorder about the likelihood that future reward will be forthcoming, and to regress these against their discounting choices. Similarly, further research is needed to examine relationships between an individual's experience of significant life events, their confidence in the future, and their level of temporal discounting.

The Cost of Thinking in Economic Poverty, Borderline Personality Disorder and Schizophrenia

It appears that a greater engagement of model-based control, a faculty tightly dependent on working memory, is associated with more future-oriented responses on discounting paradigms. Promoting mental simulations of future outcomes by cueing participants with episodes in their lives corresponding to the timing of the options decreases measured discount rates (Peters and Büchel, 2010). Higher working memory capacity is associated with both lower discounting (Shamosh et al., 2008), and an increased emphasis on model-based control (Eppinger et al., 2013), while working memory training in substance misusers has been found to decrease their delay discounting (Bickel et al., 2011b).

In keeping with the above, functional neuroimaging studies have found that the dorsolateral prefrontal cortex (dlPFC), an area often implicated in tasks dependent on working memory (Curtis and D'Esposito, 2003), is sensitive to model-based learning signals (Gläscher et al., 2010). This area is also known to be active when choosing delayed rewards in intertemporal choice paradigms (McClure et al., 2004, 2007). Furthermore, disrupting dlPFC function (using either transcranial magnetic stimulation or transcranial direct current stimulation) both decreases the emphasis on model-based control (Smittenaar et al., 2013) and increases temporal discounting (Hecht et al., 2013). The process of mentally simulating future outcomes is also known to be dependent on the hippocampus (Hassabis et al., 2007; Johnson et al., 2007; Schacter et al., 2008; Schacter and Schacter, 2008), and rats with hippocampal lesions have been found to exhibit increased discounting (Mariano et al., 2009). Taken together, these results suggest that mental simulation of the future tends to generate more patient intertemporal choices, and that this process is working memory dependent.

A plausible explanation for the above is that mentally simulating the future resolves uncertainty about the utility of larger-later rewards (see Daw et al., 2005). For example, I might be uncertain about how much I am likely to need money in 7 months' time, but if I remember that my partner's birthday is in seven and a half months' time, and I anticipate needing the money to buy him or her an expensive present, I might revise my estimate of the utility of the future money. An interesting possibility is that decision-makers face a trade-off between making the best possible decisions and doing so in a timely manner with the minimum of effort. Model-based simulation of the future is computationally costly, i.e., it consumes time and energy. If conditions are sufficiently unpredictable, then attempting to explicitly plan out future possibilities is futile, and may even be disadvantageous (see Daw et al., 2005). Thus, prolonged exposure to an unstable environment during development ought to both discourage the use of model-based strategies and increase discounting via greater uncertainty associated with future rewards. This possibility would conceptually bind together an unstable childhood environment, diminished cognitive ability and steeper discounting of reward, providing a tentative theoretical basis for explaining the association between these factors in several psychiatric disorders. For example, people with borderline personality disorder are likely to have experienced childhood abuse (Lewis and Christopher, 1989; Ogata et al., 1990; Zanarini et al., 1997), exhibit below average cognitive function (Swirsky-Sacchetti et al., 1993) and discount the future more steeply than healthy controls (Lawrence et al., 2010).

A similar interaction might in part underlie associations between low socioeconomic status, steeper discounting and psychiatric disorder. Bickel et al. (2014a, 2011a) propose a neuropsychological explanation for relationships between low socioeconomic status and unhealthy lifestyle choices, in terms of a dual-systems model of cognition, whereby low socioeconomic status encourages engagement of a more “impulsive” decision-making system, putatively mediated by limbic brain structures, over an “executive” decision-making system, mediated by parts of frontal cortex. The authors point to evidence that several neurocognitive abilities including working memory, declarative memory, and cognitive control exhibit socioeconomic gradients (Noble et al., 2007). This association appears to hold in prospective analyses too. On a developmental timescale, Evans and Schamberg (2009) show that childhood poverty predicts lower working memory in young adulthood, and that high levels of childhood stress mediate this relationship. State-based effects of poverty on cognitive function are also evident, for example Indian sugar-cane farmers exhibit worse cognitive performance before their harvest, when they are poor, than after their harvest, when they are richer, even controlling for levels of stress (Mani et al., 2013). The dual-systems approach is not incompatible with our three-way division of behavioral control. The model-based system, for instance, appears to depend on executive functions such as working memory, but has the advantage of carrying a specific algorithmic meaning. Also, we envisage the three controllers as sharing the mutual goal of maximizing reward (Dayan et al., 2006), and suggest that their relative deployment is also subject to a cost-benefit trade-off (Daw et al., 2005; Dezfouli, 2009; Huys et al., 2012). We therefore go as far as to propose that diminished deployment of model-based control in states of deprivation might reflect an evolutionary milieu in which such changes were approximately optimal, for example in response to irreducible future uncertainty.

Deficits in future thinking appear likely to underlie steeper discounting seen in patients diagnosed with schizophrenia compared with healthy controls (Heerey et al., 2007, 2011), in keeping with observations that such patients often exhibit cognitive and executive dysfunction. Furthermore, patients with schizophrenia exhibit atrophy of frontal and temporal brain regions (Madsen et al., 1999; Velakoulis et al., 2001; van Haren et al., 2008), a pattern which would be expected to be accompanied by shortened time perspective, given the role of these structures in imagining future scenarios (Hassabis et al., 2007; Johnson et al., 2007; Schacter et al., 2008; Schacter and Schacter, 2008). Heerey et al. (2011) present evidence to support this view, comparing measures of discounting, cognitive function and “future representation” in 39 patients with schizophrenia and 25 healthy control participants. Patients discounted more steeply than controls, and when asked to list events which they thought might happen to them in their lives, on average reported future life-events that were nearer in time. This shortened future perspective correlated with lower working memory scores in both patients and controls, to the extent that controlling for working memory abolished the effect of schizophrenia status on discounting. These results suggest that discounting deficits in schizophrenia are attributable to an impaired ability to imagine the future, a faculty that is limited by working memory capacity.

Future Directions

The above account leaves considerable room for future research. The foregoing discussion has largely focused on appetitive processes evoked in the appraisal of future rewards. A complementary, but distinct, set of principles might apply to how humans evaluate future punishment. For example, as a complement to the theory that tonic dopamine signals the average reward rate, it has been proposed that tonic serotonin signals the long-run average punishment rate, and thus controls the vigor of avoidance behavior (Dayan, 2012a,b; see also Crockett et al., 2012). This idea might hold relevance for increased discounting in depression, which is associated with both marked avoidance (Ferster, 1973) and possible serotonergic abnormalities (e.g., Mann et al., 2000). Although a normative account of the role of serotonin in depression remains elusive, it is interesting that decreasing serotonin availability (achieved by tryptophan depletion) in healthy subjects acts to increase discounting (Tanaka et al., 2007; Schweighofer et al., 2008), commensurate with the increased discounting seen in depression (Takahashi et al., 2008; Dennhardt and Murphy, 2011; Dombrovski et al., 2011, 2012; Imhoff et al., 2014; Pulcu et al., 2014; for further discussion of temporal preferences for punishment see Berns et al., 2006; Story et al., 2013, 2015).
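
The opportunity-cost logic by which a tonic average-rate signal could set the vigor of behavior can be illustrated with a toy calculation in the spirit of Niv et al. (2007); the cost terms and numbers below are illustrative assumptions, not parameters from the studies reviewed here.

```python
# Toy illustration of vigor as a cost-benefit trade-off: acting with latency
# tau incurs a vigor cost c_v / tau plus an opportunity cost tau * rate_bar,
# where rate_bar is the tonic average reward (or punishment) rate per unit
# time. Minimising their sum gives a square-root rule for the optimal latency.
import math

def optimal_latency(c_v: float, rate_bar: float) -> float:
    """Latency tau* that minimises c_v / tau + tau * rate_bar."""
    return math.sqrt(c_v / rate_bar)

# A higher tonic rate shortens the optimal latency, i.e. produces more vigor.
print(optimal_latency(c_v=2.0, rate_bar=0.5))  # longer latency, sluggish
print(optimal_latency(c_v=2.0, rate_bar=2.0))  # shorter latency, vigorous
```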

A further area for future research concerns the effect of stress on discounting (e.g., Diller et al., 2011; Kimura et al., 2013). A recent meta-analysis (Fields et al., 2014) of 16 studies examined the relationships between delay discounting or delay of gratification and subjective or physiological measures of stress, and found that stress was associated with steeper discounting, with a large aggregate effect size (Hedges' g = 0.59). Seemingly contradicting these findings, low baseline cortisol levels have been associated with increased delay discounting (Takahashi, 2004), and similarly predict higher discounting at 6-month follow-up (Takahashi et al., 2009). A possible explanation would be that baseline stress and responsivity to stress manipulations exert distinct influences on discounting. In part supporting this idea, Lempert et al. (2012) found that, when placed under stressful conditions, individuals with low trait perceived stress showed higher discounting than those with high trait perceived stress, perhaps reflecting greater responsiveness to acute stressors in subjects with low trait stress. In addition, acute administration of hydrocortisone, a key hormone involved in the stress response, has been found to cause a short-lived increase in discounting (Cornelisse et al., 2013). Further work is required to understand the relationships between baseline and induced stress and their interaction with discounting, as well as to characterize stress in terms of the information content of stressful situations.
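
For readers unfamiliar with the effect-size metric quoted above, the sketch below computes Hedges' g for two independent groups; the group summaries are placeholder numbers, not data from Fields et al. (2014).

```python
# Hedges' g: Cohen's d computed with the pooled standard deviation, multiplied
# by a small-sample bias correction. Inputs are group means, SDs and sizes.
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                  # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' small-sample correction
    return d * correction

# Illustrative example: stressed vs. unstressed discount-rate summaries
print(round(hedges_g(0.9, 0.5, 30, 0.6, 0.5, 30), 2))
```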

The above account has not specifically addressed willpower. Several lines of evidence point to the fact that humans often renege on best-laid plans in favor of immediate consumption. We propose that this occurs because people are poor at predicting in advance the effect of conditioned cues and motivational state changes on their behavior (see also Loewenstein, 1996; Metcalfe and Mischel, 1999; Read, 2001; Chapman, 2005; Dayan et al., 2006; Story et al., 2014). Thus, one might plan to abstain from eating dessert as part of a diet plan, but find it harder to resist when presented with a piece of cake (see for example Read and Van Leeuwen, 1998; Allan et al., 2010), and relapses in drug-taking behavior following abstinence commonly occur after exposure to a previous drug-taking environment (O'Brien et al., 1998). Similarly, people appear poor at predicting their behavior in future motivational states that differ from their current motivational state. For example, in a study of analgesic preferences for childbirth (Christensen-Szalanski, 1984), women asked roughly 1 month in advance of labor preferred to avoid invasive spinal anesthesia in favor of less invasive but less effective pain relief methods; during active labor, however, women frequently reversed this preference and opted for anesthesia. “Battles of will” then consist in the attempt to punish or extinguish existing habitual or Pavlovian responses through the imposition of countervailing model-based (goal-directed) valuations. Hyperbolic discounting theoretically gives rise to similar intertemporal choice conflicts, but considered alone has difficulty accounting for the state-dependence of real-world failures of self-control. Thus, in the study of Christensen-Szalanski (1984), it seems likely to be the transition into a painful state that brings about a shift in women's preferences for analgesia, rather than the time preceding childbirth per se, as hyperbolic discounting would suggest. An interesting direction for future research will be to examine whether individuals with psychiatric disorders, for example borderline personality disorder, exhibit greater choice inconsistency over time relative to controls. This possibility would accord with an influential theory that individuals with borderline personality disorder are impaired in modeling mental states (Bateman and Fonagy, 2004).

Another interesting direction not explored here concerns discounting of past rewards (Yi et al., 2006; Bickel et al., 2008). Discounting for past rewards has been shown to be systematic and hyperbolic in form, and is correlated with the degree of future discounting across individuals (Yi et al., 2006). Furthermore, cigarette smokers are found to discount past, as well as future, rewards more steeply than non-smokers (Bickel et al., 2008). Symmetry between past and future discounting is in keeping with evidence that remembering the past and imagining the future are both dependent on the hippocampus (Hassabis et al., 2007; Johnson et al., 2007; Schacter et al., 2008; Schacter and Schacter, 2008). Notably, past discounting is difficult to account for directly in terms of some of the informational influences suggested in this article. Growth potential, for example, ought to favor having received rewards in the distant past, since these should have had time to accrue greater value. Further work is clearly needed to understand the possible normative basis of past discounting. One possibility is that factors tending to foreshorten model-based consideration of future outcomes, such as uncertainty, also diminish retrieval of episodic memories, leading to a narrowing of temporal perspective. Notably, the learning rate in model-free reinforcement learning algorithms corresponds to an exponential discount factor for past reward. Yechiam et al. (2005) have shown that substance misusers and individuals with ventral medial prefrontal cortex lesions both exhibit increased learning rates on the Iowa gambling task, where an excessive focus on recent reinforcement is disadvantageous. This suggests that high learning rates might reflect a form of “retrospective impulsivity,” through assigning too little weight to distant past experience. Further work is required to explore this possibility.
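
The correspondence between the learning rate and an exponential weighting of past reward can be made explicit with a few lines of code; this is a generic delta-rule illustration, not a re-analysis of Yechiam et al. (2005).

```python
# The delta rule V <- V + alpha * (r - V) weights past rewards by
# alpha * (1 - alpha)^k, where k counts back from the most recent reward.
# A high learning rate alpha therefore "discounts" the distant past steeply.

def value_after(rewards, alpha, v0=0.0):
    """Run the delta rule over a reward sequence."""
    v = v0
    for r in rewards:
        v += alpha * (r - v)
    return v

def exponential_weighting(rewards, alpha, v0=0.0):
    """Equivalent closed form: exponentially decaying weights on past rewards."""
    v = (1 - alpha) ** len(rewards) * v0
    for k, r in enumerate(reversed(rewards)):
        v += alpha * (1 - alpha) ** k * r
    return v

rewards = [1.0, 0.0, 1.0, 1.0, 0.0]
assert abs(value_after(rewards, 0.3) - exponential_weighting(rewards, 0.3)) < 1e-9
```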

A final consideration is that of how discounting differs between different forms of outcome. Discounting for several forms of appetitive outcome shows consistency across individuals: for example, discount rates for money are strongly and significantly correlated with those for other forms of appetitive outcome, such as the discounting of cigarettes by cigarette smokers, the discounting of heroin by opioid-dependent outpatients, and the discounting of food amongst college students (Odum, 2011; Pearson r = 0.93, p = 0.0007 for money vs. the mean of all other outcomes). However, rates are not identical across commodities: people tend to discount primary reinforcers such as food, water and sex more steeply than money (Lawyer et al., 2010; Odum, 2011; Jarmolowicz et al., 2013), and a number of studies have shown that people with substance dependence discount their drug of abuse more steeply than money (e.g., Madden et al., 1997; Bickel et al., 1999; Petry, 2001). Steeper discounting for primary reinforcers might reflect their greater engagement of innate appetitive systems. In other words, deliberative consideration of primary reinforcers might increase attention to the relevant underlying deficit state (drive). Steeper discounting then putatively results from the negative Pavlovian value associated with prolonging the deficit state. Further research is needed to examine this possibility.

Interesting results have been obtained when discounting choices are made across different commodities, for example in choices between money now vs. cigarettes later, termed cross-commodity discounting (CCD), as opposed to single-commodity discounting (SCD). For instance, Bickel et al. (2011a, 2007) examined discounting in cocaine-dependent individuals between cocaine now vs. cocaine later (C-C), money now vs. money later (M-M), cocaine now vs. money later (C-M), and money now vs. cocaine later (M-C) conditions, where the amounts of money and cocaine across conditions were equated in immediate worth. Consistent with previous findings, C-C discount rates were significantly greater than M-M discount rates; indeed there was a significant main effect of changing the delayed commodity to cocaine, consistent with cocaine being discounted more steeply than money. However, the authors found that, whilst C-M and M-M discounting were statistically indistinguishable, M-C discount rates were significantly higher than C-C discount rates. Wesley et al. (2014) broadly replicate this result, and Jarmolowicz et al. (2014) find a similar pattern of findings for money vs. sex CCD, wherein an M-S condition was associated with the steepest discounting. A possible explanation in terms of the classical economic model would be that cocaine (or sex) is both discounted more steeply and has a less concave utility function than money. Bickel et al. (2011a, 2007) illustrate this possibility, though they favor an explanation in terms of a framing effect. We propose a framing hypothesis whereby primary reinforcers are associated with a steeper implicit hazard rate than money (this might in part underlie their steeper discounting, but is of itself insufficient to explain the above findings); SCD then hypothetically diminishes the implicit hazard rate, by priming the idea that the commodity will definitely be received sooner or later. By contrast, the implicit exchange of money for primary reinforcement in CCD hypothetically amplifies the hazard rate for the delayed commodity, by priming the notion that the delayed commodity is not guaranteed. This hypothesis leads to the observed interaction, with the steepest discounting for CCD in which primary reinforcement is delayed, and is eminently testable. The possible modulation of such cross-commodity effects in various psychiatric disorders might offer further clues as to the underlying decision mechanisms at play.
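
The framing hypothesis sketched above admits a toy formalization: if a delayed commodity is believed to “survive” the delay with some constant implicit hazard rate, its expected value falls exponentially with delay, and any frame that inflates the implicit hazard steepens discounting. The code below is an illustration of this logic under those assumptions, not a model fitted in the cited studies; the hazard values are arbitrary.

```python
# Constant-hazard sketch: the probability that a delayed commodity survives a
# delay t under hazard rate lam is exp(-lam * t), so its expected value is
# discounted exponentially. Hazard values below are illustrative assumptions.
import math

def expected_value(amount: float, delay: float, hazard: float) -> float:
    """Expected value of a delayed amount under a constant hazard rate."""
    return amount * math.exp(-hazard * delay)

# The same delayed amount under a low vs. an amplified implicit hazard
print(expected_value(100, delay=6, hazard=0.05))  # shallower discounting
print(expected_value(100, delay=6, hazard=0.20))  # steeper discounting
```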

In summary, we have reviewed motivations for steeper discounting of delayed reward. Discounting tends to be increased across a broad range of disorders, including ADHD, schizophrenia, bipolar disorder, hypomania, depression, borderline personality disorder and substance misuse disorders. We have proposed that these findings can be parsimoniously understood by examining the reasons why people should discount the future, namely the opportunity costs of delay, the uncertainty associated with future outcomes, and the cognitive costs of resolving this uncertainty. We have detailed different types of information processing in the brain that can take these factors into account, broadly distinguishing “parametric discounting,” whereby rewards labeled as delayed are automatically discounted as a function of delay, from “planful discounting,” whereby the factors associated with the delay are accounted for in the course of learning. Where possible we have attempted to map these normative influences onto putative, albeit broad, neurobiological mechanisms. More generally, we propose that this approach, that is, attempting to understand the biological substrates of psychiatric disorder in terms of their physiological function, and in light of a person's life history, is key to bridging psychosocial and biological conceptions of mental illness. We accept that our use of this approach here might appear speculative. In essence, we feel this is justified given the emerging nature of the field, and we await further research developments with eager interest.

Funding Statement

This work was supported by the Wellcome Trust [Ray Dolan Senior Investigator Award 098362/Z/12/Z]. The Wellcome Trust Centre for Neuroimaging is supported by core funding from the Wellcome Trust 091593/Z/10/Z. Dr. Moutoussis is also supported by the UCLH Biomedical Research Council.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Adler, N. E., and Rehkopf, D. H. (2008). US disparities in health: descriptions, causes, and mechanisms. Annu. Rev. Public Health 29, 235–252. doi: 10.1146/annurev.publhealth.29.020907.090852

Ahn, W.-Y., Rass, O., Fridberg, D. J., Bishara, A. J., Forsyth, J. K., Breier, A., et al. (2011). Temporal discounting of rewards in patients with bipolar disorder and schizophrenia. J. Abnorm. Psychol. 120, 911. doi: 10.1037/a0023333

Ainslie, G. W. (1974). Impulse control in pigeons. J. Exp. Anal. Behav. 21, 485–489. doi: 10.1901/jeab.1974.21-485

Allan, J. L., Johnston, M., and Campbell, N. (2010). Unintentional eating. what determines goal-incongruent chocolate consumption? Appetite 54, 422–425. doi: 10.1016/j.appet.2010.01.009

Anderhub, V., Güth, W., Gneezy, U., and Sonsino, D. (2001). On the interaction of risk and time preferences: an experimental study. Ger. Econ. Rev. 2, 239–253. doi: 10.1111/1468-0475.00036

Andersen, S., Harrison, G. W., Lau, M. I., and Rutström, E. E. (2008). Eliciting risk and time preferences. Econometrica 76, 583–618. doi: 10.1111/j.1468-0262.2008.00848.x

Apperley, L., and Mittal, R. (2013). G207 is there a link between ADHD and social deprivation? Arch. Dis. Child. 98, A92–A93. doi: 10.1136/archdischild-2013-304107.219

Avsar, K. B., Weller, R. E., Cox, J. E., Reid, M. A., White, D. M., and Lahti, A. C. (2013). An fMRI investigation of delay discounting in patients with schizophrenia. Brain Behav. 3, 384–401. doi: 10.1002/brb3.135

Balleine, B. W., and Dickinson, A. (1998). Goal-directed instrumental action: contingency and incentive learning and their cortical substrates. Neuropharmacology 37, 407–419. doi: 10.1016/S0028-3908(98)00033-1

Barke, E. (2014). Temporal discounting in conduct disorder: toward an experience-adaptation hypothesis of the role of psychosocial insecurity. J. Pers. Disord. 28, 19–24. doi: 10.1521/pedi.2014.28.1.19

Barkley, R. A., Edwards, G., Laneri, M., Fletcher, K., and Metevia, L. (2001). Executive functioning, temporal discounting, and sense of time in adolescents with attention deficit hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD). J. Abnorm. Child Psychol. 29, 541–556. doi: 10.1023/A:1012233310098

Bateman, A. W., and Fonagy, P. (2004). Mentalization-based treatment of BPD. J. Pers. Disord. 18, 36–51. doi: 10.1521/pedi.18.1.36.32772

Behrens, T. E., Woolrich, M. W., Walton, M. E., and Rushworth, M. F. (2007). Learning the value of information in an uncertain world. Nat. Neurosci. 10, 1214–1221. doi: 10.1038/nn1954

Berns, G. S., Chappelow, J., Cekic, M., Zink, C. F., Pagnoni, G., and Martin-Skurski, M. E. (2006). Neurobiological substrates of dread. Science 312, 754–758. doi: 10.1126/science.1123721

Bickel, W. K., Jarmolowicz, D. P., Mueller, E. T., Gatchalian, K. M., and McClure, S. M. (2012a). Are executive function and impulsivity antipodes? A conceptual reconstruction with special reference to addiction. Psychopharmacology 221, 361–387. doi: 10.1007/s00213-012-2689-x

Bickel, W. K., Jarmolowicz, D. P., Mueller, E. T., Koffarnus, M. N., and Gatchalian, K. M. (2012b). Excessive discounting of delayed reinforcers as a trans-disease process contributing to addiction and other disease-related vulnerabilities: emerging evidence. Pharmacol. Ther. 134, 287–297. doi: 10.1016/j.pharmthera.2012.02.004

Bickel, W. K., Koffarnus, M. N., Moody, L., and Wilson, A. G. (2014a). The behavioral-and neuro-economic process of temporal discounting: a candidate behavioral marker of addiction. Neuropharmacology 76, 518–527. doi: 10.1016/j.neuropharm.2013.06.013

Bickel, W. K., Landes, R. D., Christensen, D. R., Jackson, L., Jones, B. A., Kurth-Nelson, Z., et al. (2011a). Single-and cross-commodity discounting among cocaine addicts: the commodity and its temporal location determine discounting rate. Psychopharmacology 217, 177–187. doi: 10.1007/s00213-011-2272-x

Bickel, W. K., Miller, M. L., Yi, R., Kowal, B. P., Lindquist, D. M., and Pitcock, J. A. (2007). Behavioral and neuroeconomics of drug addiction: competing neural systems and temporal discounting processes. Drug Alcohol Depend. 90(Suppl. 1), S85–S91. doi: 10.1016/j.drugalcdep.2006.09.016

Bickel, W. K., Moody, L., Quisenberry, A. J., Ramey, C. T., and Sheffer, C. E. (2014b). A competing neurobehavioral decision systems model of SES-related health and behavioral disparities. Prev. Med. 68, 37–43. doi: 10.1016/j.ypmed.2014.06.032

Bickel, W. K., Odum, A. L., and Madden, G. J. (1999). Impulsivity and cigarette smoking: delay discounting in current, never, and ex-smokers. Psychopharmacology 146, 447–454.

Bickel, W. K., Yi, R., Kowal, B. P., and Gatchalian, K. M. (2008). Cigarette smokers discount past and future rewards symmetrically and more than controls: is discounting a measure of impulsivity? Drug Alcohol Depend. 96, 256–262. doi: 10.1016/j.drugalcdep.2008.03.009

Bickel, W. K., Yi, R., Landes, R. D., Hill, P. F., and Baxter, C. (2011b). Remember the future: working memory training decreases delay discounting among stimulant addicts. Biol. Psychiatry 69, 260–265. doi: 10.1016/j.biopsych.2010.08.017

Bitsakou, P., Psychogiou, L., Thompson, M., and Sonuga-Barke, E. J. (2009). Delay aversion in attention deficit/hyperactivity disorder: an empirical investigation of the broader phenotype. Neuropsychologia 47, 446–456. doi: 10.1016/j.neuropsychologia.2008.09.015

Callan, M. J., Shead, N. W., and Olson, J. M. (2009). Foregoing the labor for the fruits: the effect of just world threat on the desire for immediate monetary rewards. J. Exp. Soc. Psychol. 45, 246–249. doi: 10.1016/j.jesp.2008.08.013

Callan, M. J., Shead, N. W., and Olson, J. M. (2011). Personal relative deprivation, delay discounting, and gambling. J. Pers. Soc. Psychol. 101, 955. doi: 10.1037/a0024778

Chao, L. W., Szrek, H., Pereira, N. S., and Pauly, M. V. (2009). Time preference and its relationship with age, health, and survival probability. Judgm. Decis. Mak. 4, 1–19.

Chapman, G. B. (2005). Short-term cost for long-term benefit: time preference and cancer control. Health Psychol. 24, S41–S48. doi: 10.1037/0278-6133.24.4.s41

Christensen-Szalanski, J. J. (1984). Discount functions and the measurement of patients' values. Women's decisions during childbirth. Med. Decis. Making 4, 47–58.

Chung, S. H., and Herrnstein, R. J. (1967). Choice and delay of reinforcement. J. Exp. Anal. Behav. 10, 67–74. doi: 10.1901/jeab.1967.10-67

Coffey, S. F., Schumacher, J. A., Baschnagel, J. S., Hawk, L. W., and Holloman, G. (2011). Impulsivity and risk-taking in borderline personality disorder with and without substance use disorders. Pers. Disord. Theory Res. Treat. 2, 128. doi: 10.1037/a0020574

Conner, M., and Norman, P. (2005). Predicting Health Behaviour, 2nd Edn. Maidenhead: McGraw-Hill International.

Cornelisse, S., Van Ast, V., Haushofer, J., Seinstra, M., and Joels, M. (2013). Time-Dependent Effect of Hydrocortisone Administration on Intertemporal Choice. Available online at: http://ssrn.com/abstract=2294189

Critchfield, T. S., and Kollins, S. H. (2001). Temporal discounting: basic research and the analysis of socially important behavior. J. Appl. Behav. Anal. 34, 101–122. doi: 10.1901/jaba.2001.34-101

Crockett, M. J., Clark, L., Apergis-Schoute, A. M., Morein-Zamir, S., and Robbins, T. W. (2012). Serotonin modulates the effects of Pavlovian aversive predictions on response vigor. Neuropsychopharmacology 37, 2244–2252. doi: 10.1038/npp.2012.75

Curtis, C. E., and D'Esposito, M. (2003). Persistent activity in the prefrontal cortex during working memory. Trends Cogn. Sci. 7, 415–423. doi: 10.1016/S1364-6613(03)00197-9

Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., and Dolan, R. J. (2011). Model-based influences on humans' choices and striatal prediction errors. Neuron 69, 1204–1215. doi: 10.1016/j.neuron.2011.02.027

Daw, N. D., Niv, Y., and Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat. Neurosci. 8, 1704–1711. doi: 10.1038/nn1560

Daw, N. D., and Touretzky, D. S. (2000). Behavioral considerations suggest an average reward TD model of the dopamine system. Neurocomputing 32–33, 679–684. doi: 10.1016/S0925-2312(00)00232-0

Dayan, P. (2009). Prospective and retrospective temporal difference learning. Network 20, 32–46. doi: 10.1080/09548980902759086

Dayan, P. (2012a). “Models of value and choice,” in Neuroscience of Preference and Choice, eds R. Dolan and T. Sharot (London: Elsevier), 33–59. doi: 10.1016/B978-0-12-381431-9.00002-4

Dayan, P. (2012b). Instrumental vigour in punishment and reward. Eur. J. Neurosci. 35, 1152–1168. doi: 10.1111/j.1460-9568.2012.08026.x

Dayan, P., and Balleine, B. W. (2002). Reward, motivation, and reinforcement learning. Neuron 36, 285–298. doi: 10.1016/S0896-6273(02)00963-7

Dayan, P., and Daw, N. D. (2008). Decision theory, reinforcement learning, and the brain. Cogn. Affect. Behav. Neurosci. 8, 429–453. doi: 10.3758/CABN.8.4.429

Dayan, P., Niv, Y., Seymour, B., and Daw, N. D. (2006). The misbehavior of value and the discipline of the will. Neural Netw. 19, 1153–1160. doi: 10.1016/j.neunet.2006.03.002

Del Giudice, M. (2014). An evolutionary life history framework for psychopathology. Psychol. Inq. 25, 261–300. doi: 10.1080/1047840X.2014.884918

Demurie, E., Roeyers, H., Baeyens, D., and Sonuga-Barke, E. (2012). Temporal discounting of monetary rewards in children and adolescents with ADHD and autism spectrum disorders. Dev. Sci. 15, 791–800. doi: 10.1111/j.1467-7687.2012.01178.x

Denk, F., Walton, M., Jennings, K., Sharp, T., Rushworth, M., and Bannerman, D. (2005). Differential involvement of serotonin and dopamine systems in cost-benefit decisions about delay or effort. Psychopharmacology 179, 587–596. doi: 10.1007/s00213-004-2059-4

Dennhardt, A. A., and Murphy, J. G. (2011). Associations between depression, distress tolerance, delay discounting, and alcohol-related problems in European American and African American college students. Psychol. Addict. Behav. 25, 595. doi: 10.1037/a0025807

Dezfouli, A. (2009). A neurocomputational model for cocaine addiction. Neural Comput. 21, 2869. doi: 10.1162/neco.2009.10-08-882

Dickinson, A., and Balleine, B. (1994). Motivational control of goal-directed action. Learn. Behav. 22, 1–18. doi: 10.3758/BF03199951

Dickinson, A., Balleine, B., Watt, A., Gonzalez, F., and Boakes, R. (1995). Motivational control after extended instrumental training. Anim. Learn. Behav. 23, 197–206. doi: 10.3758/BF03199935

Dierst-Davies, R., Reback, C. J., Peck, J. A., Nuño, M., Kamien, J. B., and Amass, L. (2011). Delay-discounting among homeless, out-of-treatment, substance-dependent men who have sex with men. Am. J. Drug Alcohol. Abuse 37, 93–97. doi: 10.3109/00952990.2010.540278

Diller, J. W., Patros, C. H., and Prentice, P. R. (2011). Temporal discounting and heart rate reactivity to stress. Behav. Processes 87, 306–309. doi: 10.1016/j.beproc.2011.05.001

Dohmen, T., Falk, A., Huffman, D., and Sunde, U. (2010). Are risk aversion and impatience related to cognitive ability? Am. Econ. Rev. 100, 1238–1260. doi: 10.1257/aer.100.3.1238

Dolan, R. J., and Dayan, P. (2013). Goals and Habits in the Brain. Neuron 80, 312–325. doi: 10.1016/j.neuron.2013.09.007

Dom, G., De Wilde, B., Hulstijn, W., Van Den Brink, W., and Sabbe, B. (2006a). Behavioural aspects of impulsivity in alcoholics with and without a cluster-B personality disorder. Alcohol. Alcohol. 41, 412–420. doi: 10.1093/alcalc/agl030

Dom, G., De Wilde, B., Hulstijn, W., Van Den Brink, W., and Sabbe, B. (2006b). Decision-making deficits in alcohol-dependent patients with and without comorbid personality disorder. Alcohol. Clin. Exp. Res. 30, 1670–1677. doi: 10.1111/j.1530-0277.2006.00202.x

Dombrovski, A. Y., Siegle, G. J., Szanto, K., Clark, L., Reynolds, C., and Aizenstein, H. (2012). The temptation of suicide: striatal gray matter, discounting of delayed rewards, and suicide attempts in late-life depression. Psychol. Med. 42, 1203–1215. doi: 10.1017/S0033291711002133

Dombrovski, A. Y., Szanto, K., Siegle, G. J., Wallace, M. L., Forman, S. D., Sahakian, B., et al. (2011). Lethal forethought: delayed reward discounting differentiates high-and low-lethality suicide attempts in old age. Biol. Psychiatry 70, 138–144. doi: 10.1016/j.biopsych.2010.12.025

Domjan, M. (2003). The Principles of Learning and Behaviour, 5th Edn. Belmont, CA: Wadsworth.

Dougherty, D. M., Bjork, J. M., Huckabee, H. C., Moeller, F. G., and Swann, A. C. (1999). Laboratory measures of aggression and impulsivity in women with borderline personality disorder. Psychiatry Res. 85, 315–326. doi: 10.1016/S0165-1781(99)00011-6

DSM V (2013). Diagnostic and Statistical Manual of Mental Disorders, 5th Edn. (DSM-5 TM). Washington, DC: American Psychiatric Association.

Eckel, C., Johnson, C., and Montmarquette, C. (2005). “Savings decisions of the working poor: short- and long-term horizons,” in Field Experiments in Economics, Research in Experimental Economics, Vol. 10, eds J. Carpenter, G. W. Harrison, and J. A. List (Greenwich, CT: JAI Press), 219–260. doi: 10.1016/S0193-2306(04)10006-9

Eppinger, B., Walter, M., Heekeren, H. R., and Li, S.-C. (2013). Of goals and habits: age-related and individual differences in goal-directed decision-making. Front. Neurosci. 7:253. doi: 10.3389/fnins.2013.00253

Epstein, L. H., Richards, J. B., Saad, F. G., Paluch, R. A., Roemmich, J. N., and Lerman, C. (2003). Comparison between two measures of delay discounting in smokers. Exp. Clin. Psychopharmacol. 11, 131–138. doi: 10.1037/1064-1297.11.2.131

Evans, G. W., and Schamberg, M. A. (2009). Childhood poverty, chronic stress, and adult working memory. Proc. Natl. Acad. Sci. U.S.A. 106, 6545–6549. doi: 10.1073/pnas.0811910106

Ferster, C. B. (1973). A functional analysis of depression. Am. Psychol. 28, 857. doi: 10.1037/h0035605

Field, M., Christiansen, P., Cole, J., and Goudie, A. (2007). Delay discounting and the alcohol Stroop in heavy drinking adolescents. Addiction 102, 579–586. doi: 10.1111/j.1360-0443.2007.01743.x

Fields, S. A., Lange, K., Ramos, A., Thamotharan, S., and Rassu, F. (2014). The relationship between stress and delay discounting: a meta-analytic review. Behav. Pharmacol. 25, 434–444. doi: 10.1097/fbp.0000000000000044

Fields, S., Collins, C., Leraas, K., and Reynolds, B. (2009a). Dimensions of impulsive behavior in adolescent smokers and nonsmokers. Exp. Clin. Psychopharmacol. 17, 302–311. doi: 10.1037/a0017185

Fields, S., Leraas, K., Collins, C., and Reynolds, B. (2009b). Delay discounting as a mediator of the relationship between perceived stress and cigarette smoking status in adolescents. Behav. Pharmacol. 20, 455–460. doi: 10.1097/FBP.0b013e328330dcff

Fishburn, P. C., and Rubinstein, A. (1982). Time preference. Int. Econ. Rev. 23, 677–694. doi: 10.2307/2526382

Floresco, S. B., Maric, T., and Ghods-Sharifi, S. (2008). Dopaminergic and glutamatergic regulation of effort-and delay-based decision making. Neuropsychopharmacology 33, 1966–1979. doi: 10.1038/sj.npp.1301565

Frederick, S., and Loewenstein, G. (2008). Conflicting motives in evaluations of sequences. J. Risk Uncert. 37, 221–235. doi: 10.1007/s11166-008-9051-z

Frederick, S., Loewenstein, G., and O'Donoghue, T. (2002). Time discounting and time preference: a critical review. J. Econ. Lit. 40, 351–401. doi: 10.1257/jel.40.2.351

Friston, K. J., Stephan, K. E., Montague, R., and Dolan, R. J. (2014). Computational psychiatry: the brain as a phantastic organ. Lancet Psychiatry 1, 148–158. doi: 10.1016/S2215-0366(14)70275-5

Friston, K., Schwartenbeck, P., FitzGerald, T., Moutoussis, M., Behrens, T., and Dolan, R. J. (2013). The anatomy of choice: active inference and agency. Front. Hum. Neurosci. 7:598. doi: 10.3389/fnhum.2013.00598

Giordano, L. A., Bickel, W. K., Loewenstein, G., Jacobs, E. A., Marsch, L., and Badger, G. J. (2002). Mild opioid deprivation increases the degree that opioid-dependent outpatients discount delayed heroin and money. Psychopharmacology 163, 174–182. doi: 10.1007/s00213-002-1159-2

Gläscher, J., Daw, N., Dayan, P., and O'Doherty, J. P. (2010). States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron 66, 585–595. doi: 10.1016/j.neuron.2010.04.016

Green, L., Fristoe, N., and Myerson, J. (1994). Temporal discounting and preference reversals in choice between delayed outcomes. Psychon. Bull. Rev. 1, 383–389. doi: 10.3758/BF03213979

Green, L., and Myerson, J. (2004). A discounting framework for choice with delayed and probabilistic rewards. Psychol. Bull. 130, 769–792. doi: 10.1037/0033-2909.130.5.769

Green, L., Myerson, J., Lichtman, D., Rosen, S., and Fry, A. (1996). Temporal discounting in choice between delayed rewards: the role of age and income. Psychol. Aging 11:79. doi: 10.1037/0882-7974.11.1.79

Green, L., Myerson, J., and Ostaszewski, P. (1999). Discounting of delayed rewards across the life span: age differences in individual discounting functions. Behav. Processes 46, 89–96. doi: 10.1016/S0376-6357(99)00021-2

Hassabis, D., Kumaran, D., Vann, S. D., and Maguire, E. A. (2007). Patients with hippocampal amnesia cannot imagine new experiences. Proc. Natl. Acad. Sci. U.S.A. 104, 1726–1731. doi: 10.1073/pnas.0610561104

Haushofer, J., Cornelisse, S., Seinstra, M., Fehr, E., Joëls, M., and Kalenscher, T. (2013). No effects of psychosocial stress on intertemporal choice. PLoS ONE 8:e78597. doi: 10.1371/journal.pone.0078597

Hecht, D., Walsh, V., and Lavidor, M. (2013). Bi-frontal direct current stimulation affects delay discounting choices. Cogn. Neurosci. 4, 7–11. doi: 10.1080/17588928.2011.638139

Heerey, E. A., Matveeva, T. M., and Gold, J. M. (2011). Imagining the future: degraded representations of future rewards and events in schizophrenia. J. Abnorm. Psychol. 120, 483. doi: 10.1037/a0021810

Heerey, E. A., Robinson, B. M., McMahon, R. P., and Gold, J. M. (2007). Delay discounting in schizophrenia. Cogn. Neuropsychiatry 12, 213–221. doi: 10.1080/13546800601005900

Hershberger, W. A. (1986). An approach through the looking-glass. Anim. Learn. Behav. 14, 443–451. doi: 10.3758/BF03200092

Hill, E. M., Jenkins, J., and Farmer, L. (2008). Family unpredictability, future discounting, and risk taking. J. Socio-Econ. 37, 1381–1396. doi: 10.1016/j.socec.2006.12.081

Holt, C. A., and Laury, S. K. (2002). Risk aversion and incentive effects. Am. Econ. Rev. 92, 1644–1655. doi: 10.1257/000282802762024700

Hurst, R. M., Kepley, H. O., McCalla, M. K., and Livermore, M. K. (2011). Internal consistency and discriminant validity of a delay-discounting task with an adult self-reported ADHD sample. J. Atten. Disord. 15, 412–422. doi: 10.1177/1087054710365993

Huys, Q. J., Eshel, N., O'Nions, E., Sheridan, L., Dayan, P., and Roiser, J. P. (2012). Bonsai trees in your head: how the Pavlovian system sculpts goal-directed choices by pruning decision trees. PLoS Comput. Biol. 8:e1002410. doi: 10.1371/journal.pcbi.1002410

Huys, Q. J., Moutoussis, M., and Williams, J. (2011). Are computational models of any use to psychiatry? Neural Netw. 24, 544–551. doi: 10.1016/j.neunet.2011.03.001

Idler, E. L., and Benyamini, Y. (1997). Self-rated health and mortality: a review of twenty-seven community studies. J. Health Soc. Behav. 38, 21–37. doi: 10.2307/2955359

Imhoff, S., Harris, M., Weiser, J., and Reynolds, B. (2014). Delay discounting by depressed and non-depressed adolescent smokers and non-smokers. Drug Alcohol. Depend. 135, 152–155. doi: 10.1016/j.drugalcdep.2013.11.014

Jarmolowicz, D. P., Bickel, W. K., and Gatchalian, K. M. (2013). Alcohol-dependent individuals discount sex at higher rates than controls. Drug Alcohol. Depend. 131, 320–323. doi: 10.1016/j.drugalcdep.2012.12.014

Jarmolowicz, D. P., Landes, R. D., Christensen, D. R., Jones, B. A., Jackson, L., Yi, R., et al. (2014). Discounting of money and sex: effects of commodity and temporal position in stimulant-dependent men and women. Addict. Behav. 39, 1652–1657. doi: 10.1016/j.addbeh.2014.04.026

Johnson, A., van der Meer, M. A., and Redish, A. D. (2007). Integrating hippocampus and striatum in decision-making. Curr. Opin. Neurobiol. 17, 692–697. doi: 10.1016/j.conb.2008.01.003

Jones, B. A., and Rachlin, H. (2009). Delay, probability, and social discounting in a public goods game. J. Exp. Anal. Behav. 91, 61–73. doi: 10.1901/jeab.2009.91-61

Kable, J. W., and Glimcher, P. W. (2010). An “as soon as possible” effect in human intertemporal decision making: behavioral evidence and neural mechanisms. J. Neurophysiol. 103, 2513–2531. doi: 10.1152/jn.00177.2009

Kacelnik, A. (2011). “The evolution of patience,” in Time and Decision, eds G. Loewenstein, D. Read, and R. F. Baumeister (New York, NY: Russell Sage), 115–137.

Kahneman, D., and Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica 47, 263–291. doi: 10.2307/1914185

Kalenscher, T., and Pennartz, C. M. (2008). Is a bird in the hand worth two in the future? The neuroeconomics of intertemporal decision-making. Prog. Neurobiol. 84, 284–315. doi: 10.1016/j.pneurobio.2007.11.004

Keramati, M., and Gutkin, B. S. (2011). “A reinforcement learning theory for homeostatic regulation,” in Advances in Neural Information Processing Systems 24 (NIPS 2011), eds J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger. Available online at: http://papers.nips.cc/paper/4437-a-reinforcement

Kimura, K., Izawa, S., Sugaya, N., Ogawa, N., Yamada, K. C., Shirotsuki, K., et al. (2013). The biological effects of acute psychosocial stress on delay discounting. Psychoneuroendocrinology 38, 2300–2308. doi: 10.1016/j.psyneuen.2013.04.019

Kirby, K. N., and Herrnstein, R. J. (1995). Preference reversals due to myopic discounting of delayed reward. Psychol. Sci. 6, 83–89. doi: 10.1111/j.1467-9280.1995.tb00311.x

Kirby, K. N., and Maraković, N. N. (1995). Modeling myopic decisions: evidence for hyperbolic delay-discounting within subjects and amounts. Organ. Behav. Hum. Decis. Proc. 64, 22–30. doi: 10.1006/obhd.1995.1086

Kirby, K. N., and Petry, N. M. (2004). Heroin and cocaine abusers have higher discount rates for delayed rewards than alcoholics or non-drug-using controls. Addiction 99, 461–471. doi: 10.1111/j.1360-0443.2003.00669.x

Kirby, K. N., Petry, N. M., and Bickel, W. K. (1999). Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. J. Exp. Psychol. Gen. 128, 78–87. doi: 10.1037/0096-3445.128.1.78

Kirk, J., and Logue, A. (1997). Effects of deprivation level on humans' self-control for food reinforcers. Appetite 28, 215–226. doi: 10.1006/appe.1996.0071

Koffarnus, M. N., Jarmolowicz, D. P., Mueller, E. T., and Bickel, W. K. (2013). Changing delay discounting in the light of the competing neurobehavioral decision systems theory: a review. J. Exp. Anal. Behav. 99, 32–57. doi: 10.1002/jeab.2

Kollins, S. H. (2003). Delay discounting is associated with substance use in college students. Addict. Behav. 28, 1167–1173. doi: 10.1016/S0306-4603(02)00220-4

Kuntsi, J., Oosterlaan, J., and Stevenson, J. (2001). Psychological mechanisms in hyperactivity: I response inhibition deficit, working memory impairment, delay aversion, or something else? J. Child Psychol. Psychiatry 42, 199–210. doi: 10.1111/1469-7610.00711

Kurth-Nelson, Z., and Redish, A. D. (2009). Temporal-difference reinforcement learning with distributed representations. PLoS ONE 4:e7362. doi: 10.1371/journal.pone.0007362

Lahav, E., Benzion, U., and Shavit, T. (2011). The effect of military service on soldiers' time preference-Evidence from Israel. Judgm. Decis. Mak. 6, 130–138.

Laibson, D. (1997). Golden eggs and hyperbolic discounting. Q. J. Econ. 112, 443–478. doi: 10.1162/003355397555253

Lawrence, K. A., Allen, J. S., and Chanen, A. M. (2010). Impulsivity in borderline personality disorder: reward-based decision-making and its relationship to emotional distress. J. Pers. Disord. 24, 785–799. doi: 10.1521/pedi.2010.24.6.785

Lawyer, S. R., Williams, S. A., Prihodova, T., Rollins, J. D., and Lester, A. C. (2010). Probability and delay discounting of hypothetical sexual outcomes. Behav. Proc. 84, 687–692. doi: 10.1016/j.beproc.2010.04.002

Leigh, J. P. (1986). Accounting for tastes: correlates of risk and time preferences. J. Post Keynes. Econ. 9, 17–31. doi: 10.1080/01603477.1986.11489597

Leitão, M., Guedes, Á., Yamamoto, M. E., and de Araújo Lopes, F. (2013). Do people adjust career choices according to socioeconomic conditions? An evolutionary analysis of future discounting. Psychol. Neurosci. 6, 383. doi: 10.3922/j.psns.2013.3.16

Lempert, K. M., Porcelli, A. J., Delgado, M. R., and Tricomi, E. (2012). Individual differences in delay discounting under acute stress: the role of trait perceived stress. Front. Psychol. 3:251. doi: 10.3389/fpsyg.2012.00251

Lewis, J., and Christopher, J. (1989). Childhood trauma in borderline personality disorder. Am. J. Psychiatry 1, 46.

Loewenstein, G. (1996). Out of control: visceral influences on behavior. Organ. Behav. Hum. Decis. Proc. 65, 272–292. doi: 10.1006/obhd.1996.0028

Loman, M. M. (2012). Is deprivation-Related ADHD Different from ADHD Among Children Without Histories of Deprivation? Doctoral Dissertation: University of Minnesota.

Lorant, V., Croux, C., Weich, S., Deliege, D., Mackenbach, J., and Ansseau, M. (2007). Depression and socio-economic risk factors: 7-year longitudinal population study. Brit. J. Psychiatry 190, 293–298. doi: 10.1192/bjp.bp.105.020040

Lucantonio, F., Caprioli, D., and Schoenbaum, G. (2014). Transition from ‘model-based’ to ‘model-free’ behavioral control in addiction: involvement of the orbitofrontal cortex and dorsolateral striatum. Neuropharmacology 76, 407–415. doi: 10.1016/j.neuropharm.2013.05.033

Luhmann, C. C. (2013). Discounting of delayed rewards is not hyperbolic. J. Exp. Psychol. Learn. 39, 1274. doi: 10.1037/a0031170

Luman, M., Oosterlaan, J., and Sergeant, J. A. (2005). The impact of reinforcement contingencies on AD/HD: a review and theoretical appraisal. Clin. Psychol. Rev. 25, 183–213. doi: 10.1016/j.cpr.2004.11.001

Lupien, S. J., Maheu, F., Tu, M., Fiocco, A., and Schramek, T. E. (2007). The effects of stress and stress hormones on human cognition: implications for the field of brain and cognition. Brain Cogn. 65, 209–237. doi: 10.1016/j.bandc.2007.02.007

MacKillop, J., and Kahler, C. W. (2009). Delayed reward discounting predicts treatment response for heavy drinkers receiving smoking cessation treatment. Drug Alcohol. Depend. 104, 197–203. doi: 10.1016/j.drugalcdep.2009.04.020

MacKillop, J., and Tidey, J. W. (2011). Cigarette demand and delayed reward discounting in nicotine-dependent individuals with schizophrenia and controls: an initial study. Psychopharmacology 216, 91–99. doi: 10.1007/s00213-011-2185-8

Madden, G. J., Petry, N. M., Badger, G. J., and Bickel, W. K. (1997). Impulsive and self-control choices in opioid-dependent patients and non-drug-using control patients: drug and monetary rewards. Exp. Clin. Psychopharmacol. 5, 256.

Madsen, A., Karle, A., Rubin, P., Cortsen, M., Andersen, H., and Hemmingsen, R. (1999). Progressive atrophy of the frontal lobes in first-episode schizophrenia: interaction with clinical course and neuroleptic treatment. Acta Psychiatr. Scand. 100, 367–374. doi: 10.1111/j.1600-0447.1999.tb10880.x

Mani, A., Mullainathan, S., Shafir, E., and Zhao, J. (2013). Poverty impedes cognitive function. Science 341, 976–980. doi: 10.1126/science.1238041

Mann, J. J., Huang, Y.-Y., Underwood, M. D., Kassir, S. A., Oppenheim, S., Kelly, T. M., et al. (2000). A serotonin transporter gene promoter polymorphism (5-HTTLPR) and prefrontal cortical binding in major depression and suicide. Arch. Gen. Psychiatry 57, 729–738. doi: 10.1001/archpsyc.57.8.729

Mariano, T., Bannerman, D., McHugh, S., Preston, T., Rudebeck, P., Rudebeck, S., et al. (2009). Impulsive choice in hippocampal but not orbitofrontal cortex-lesioned rats on a nonspatial decision-making maze task. Eur. J. Neurosci. 30, 472–484. doi: 10.1111/j.1460-9568.2009.06837.x

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco, CA: W. H. Freeman.

Mason, L., O'sullivan, N., Blackburn, M., Bentall, R., and El-Deredy, W. (2012). I want it now! Neural correlates of hypersensitivity to immediate reward in hypomania. Biol. Psychiatry 71, 530–537. doi: 10.1016/j.biopsych.2011.10.008

Mathys, C., Daunizeau, J., Friston, K. J., and Stephan, K. E. (2011). A Bayesian foundation for individual learning under uncertainty. Front. Hum. Neurosci. 5:39. doi: 10.3389/fnhum.2011.00039

Mazas, C. A., Finn, P. R., and Steinmetz, J. E. (2000). Decision-making biases, antisocial personality, and early-onset alcoholism. Alcohol. Clin. Exp. Res. 24, 1036–1040. doi: 10.1111/j.1530-0277.2000.tb04647.x

Mazur, J. E. (1987). “An adjusting procedure for studying delayed reinforcement,” in Quantitative Analysis of Behavior, eds M. Commons, J. A. Nevin, and H. Rachlin (Hillsdale, NJ: Erlbaum), 55–73.

McClure, S. M., Ericson, K. M., Laibson, D. I., Loewenstein, G., and Cohen, J. D. (2007). Time discounting for primary rewards. J. Neurosci. 27, 5796–5804. doi: 10.1523/JNEUROSCI.4246-06.2007

McClure, S. M., Laibson, D. I., Loewenstein, G., and Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science 306, 503–507. doi: 10.1126/science.1100907

McDannald, M. A., Lucantonio, F., Burke, K. A., Niv, Y., and Schoenbaum, G. (2011). Ventral striatum and orbitofrontal cortex are both required for model-based, but not model-free, reinforcement learning. J. Neurosci. 31, 2700–2705. doi: 10.1523/JNEUROSCI.5499-10.2011

Meier, S., and Sprenger, C. D. (2012). Time discounting predicts creditworthiness. Psychol. Sci. 23, 56–58. doi: 10.1177/0956797611425931

Metcalfe, J., and Mischel, W. (1999). A hot/cool-system analysis of delay of gratification: dynamics of willpower. Psychol. Rev. 106:3. doi: 10.1037/0033-295X.106.1.3

Michaelson, L., de la Vega, A., Chatham, C. H., and Munakata, Y. (2013). Delaying gratification depends on social trust. Front. Psychol. 4:355. doi: 10.3389/fpsyg.2013.00355

Mischel, W., Shoda, Y., and Rodriguez, M. (1989). Delay of gratification in children. Science 244, 933–938. doi: 10.1126/science.2658056

Moeller, F. G., Barratt, E. S., Dougherty, D. M., Schmitz, J. M., and Swann, A. C. (2001). Psychiatric aspects of impulsivity. Am. J. Psychiatry 158, 1783–1793. doi: 10.1176/appi.ajp.158.11.1783

Moeller, F. G., Dougherty, D. M., Barratt, E. S., Oderinde, V., Mathias, C. W., Harper, R. A., et al. (2002). Increased impulsivity in cocaine dependent subjects independent of antisocial personality disorder and aggression. Drug Alcohol. Depend. 68, 105–111. doi: 10.1016/S0376-8716(02)00106-0

Montague, P. R., Dolan, R. J., Friston, K. J., and Dayan, P. (2012). Computational psychiatry. Trends Cogn. Sci. 16, 72–80. doi: 10.1016/j.tics.2011.11.018

Moore, S. C., and Cusens, B. (2010). Delay discounting predicts increase in blood alcohol level in social drinkers. Psychiatry Res. 179, 324–327. doi: 10.1016/j.psychres.2008.07.024

Moutoussis, M., Bentall, R. P., Williams, J., and Dayan, P. (2008). A temporal difference account of avoidance learning. Network 19, 137–160. doi: 10.1080/09548980802192784

Moutoussis, M., Story, G., and Dolan, R. (2015). The computational psychiatry of reward: broken brains or misguided minds? Front. Psychol. 6:1445. doi: 10.3389/fpsyg.2015.01445

Murphy, F., Sahakian, B., Rubinsztein, J., Michael, A., Rogers, R., Robbins, T., et al. (1999). Emotional bias and inhibitory control processes in mania and depression. Psychol. Med. 29, 1307–1321. doi: 10.1017/S0033291799001233

Myerson, J., and Green, L. (1995). Discounting of delayed rewards: models of individual choice. J. Exp. Anal. Behav. 64, 263–276. doi: 10.1901/jeab.1995.64-263

Myerson, J., Green, L., and Warusawitharana, M. (2001). Area under the curve as a measure of discounting. J. Exp. Anal. Behav. 76, 235–243. doi: 10.1901/jeab.2001.76-235

Nassar, M. R., Rumsey, K. M., Wilson, R. C., Parikh, K., Heasly, B., and Gold, J. I. (2012). Rational regulation of learning dynamics by pupil-linked arousal systems. Nat. Neurosci. 15, 1040–1046. doi: 10.1038/nn.3130

Neal, D. T. (2006). Habits—a repeat performance. Curr. Dir. Psychol. Sci. 15, 198. doi: 10.1111/j.1467-8721.2006.00435.x

Niv, Y., Daw, N., Joel, D., and Dayan, P. (2007). Tonic dopamine: opportunity costs and the control of response vigor. Psychopharmacology 191, 507–520. doi: 10.1007/s00213-006-0502-4

Noble, K. G., McCandliss, B. D., and Farah, M. J. (2007). Socioeconomic gradients predict individual differences in neurocognitive abilities. Dev. Sci. 10, 464–480. doi: 10.1111/j.1467-7687.2007.00600.x

O'Brien, C. P., Childress, A. R., Ehrman, R., and Robbins, S. J. (1998). Conditioning factors in drug abuse: can they explain compulsion? J. Psychopharmacol. 12, 15–22. doi: 10.1177/026988119801200103

O'Doherty, J. P., Dayan, P., Friston, K., Critchley, H., and Dolan, R. J. (2003). Temporal difference models and reward-related learning in the human brain. Neuron 38, 329–337. doi: 10.1016/S0896-6273(03)00169-7

Odum, A. L. (2011). Delay discounting: trait variable? Behav. Processes 87, 1–9. doi: 10.1016/j.beproc.2011.02.007

Odum, A. L., Madden, G. J., Badger, G. J., and Bickel, W. K. (2000). Needle sharing in opioid-dependent outpatients: psychological processes underlying risk. Drug Alcohol. Depend. 60, 259–266. doi: 10.1016/S0376-8716(00)00111-3

Odum, A. L., Madden, G. J., and Bickel, W. K. (2002). Discounting of delayed health gains and losses by current, never- and ex-smokers of cigarettes. Nicotine Tob. Res. 4, 295–303. doi: 10.1080/14622200210141257

Ogata, S. N., Silk, K. R., Goodrich, S., Lohr, N. E., Westen, D., and Hill, E. M. (1990). Childhood sexual and physical abuse in adult patients with borderline personality disorder. Am. J. Psychiatry 147, 1008. doi: 10.1176/ajp.147.8.1008

Orbell, S., and Verplanken, B. (2014). The strength of habit. Health Psychol. Rev. 9, 311–317. doi: 10.1080/17437199.2014.992031

Ostaszewski, P., Green, L., and Myerson, J. (1998). Effects of inflation on the subjective value of delayed and probabilistic rewards. Psychon. B Rev. 5, 324–333. doi: 10.3758/BF03212959

Ouellette, J. A. (1998). Habit and intention in everyday life: the multiple processes by which past behavior predicts future behavior. Psychol. Bull. 124:54. doi: 10.1037/0033-2909.124.1.54

Paloyelis, Y., Asherson, P., and Kuntsi, J. (2009). Are ADHD symptoms associated with delay aversion or choice impulsivity? A general population study. J. Am. Acad. Child Adolesc. Psychiatry 48, 837–846. doi: 10.1097/CHI.0b013e3181ab8c97

Paloyelis, Y., Asherson, P., Mehta, M. A., Faraone, S. V., and Kuntsi, J. (2010a). DAT1 and COMT effects on delay discounting and trait impulsivity in male adolescents with attention deficit/hyperactivity disorder and healthy controls. Neuropsychopharmacology 35, 2414–2426. doi: 10.1038/npp.2010.124

Paloyelis, Y., Stahl, D. R., and Mehta, M. (2010b). Are steeper discounting rates in attention-deficit/hyperactivity disorder specifically associated with hyperactivity-impulsivity symptoms or is this a statistical artifact? Biol. Psychiatry 68, e15–e16. doi: 10.1016/j.biopsych.2010.02.025

Pavlov, I. P. (2003). Conditioned Reflexes. Mineola, NY: Courier Dover Publications.

Paykel, E. S. (1978). Contribution of life events to causation of psychiatric illness. Psychol. Med. 8, 245–253. doi: 10.1017/S003329170001429X

Pepper, G. V., and Nettle, D. (2013). Death and the time of your life: experiences of close bereavement are associated with steeper financial future discounting and earlier reproduction. Evol. Hum. Behav. 34, 433–439. doi: 10.1016/j.evolhumbehav.2013.08.004

Peters, J., and Büchel, C. (2010). Episodic future thinking reduces reward delay discounting through an enhancement of prefrontal-mediotemporal interactions. Neuron 66, 138–148. doi: 10.1016/j.neuron.2010.03.026

Petry, N. M. (2001). Delay discounting of money and alcohol in actively using alcoholics, currently abstinent alcoholics, and controls. Psychopharmacology 154, 243–250. doi: 10.1007/s002130000638

Petry, N. M. (2002). Discounting of delayed rewards in substance abusers: relationship to antisocial personality disorder. Psychopharmacology 162, 425–432. doi: 10.1007/s00213-002-1115-1

Petry, N. M. (2003). Discounting of money, health, and freedom in substance abusers and controls. Drug Alcohol. Depend. 71, 133–141. doi: 10.1016/S0376-8716(03)00090-5

Petry, N. M., and Casarella, T. (1999). Excessive discounting of delayed rewards in substance abusers with gambling problems. Drug Alcohol. Depend. 56, 25–32. doi: 10.1016/S0376-8716(99)00010-1

Pine, A., Seymour, B., Roiser, J. P., Bossaerts, P., Friston, K. J., Curran, H. V., et al. (2009). Encoding of marginal utility across time in the human brain. J. Neurosci. 29, 9575–9581. doi: 10.1523/JNEUROSCI.1126-09.2009

Pine, A., Shiner, T., Seymour, B., and Dolan, R. J. (2010). Dopamine, time, and impulsivity in humans. J. Neurosci. 30, 8888–8896. doi: 10.1523/JNEUROSCI.6028-09.2010

Platt, M. L., and Huettel, S. A. (2008). Risky business: the neuroeconomics of decision making under uncertainty. Nat. Neurosci. 11, 398–403. doi: 10.1038/nn2062

Pulcu, E., Trotter, P., Thomas, E., McFarquhar, M., Juhasz, G., Sahakian, B., et al. (2014). Temporal discounting in major depressive disorder. Psychol. Med. 44, 1825–1834. doi: 10.1017/S0033291713002584

Rachlin, H. (2006). Notes on discounting. J. Exp. Anal. Behav. 85, 425–435. doi: 10.1901/jeab.2006.85-05

Ramos, D., Victor, T., Seidl-de-Moura, M. L., and Daly, M. (2013). Future discounting by slum-dwelling youth versus university students in Rio de Janeiro. J. Res. Adolesc. 23, 95–102. doi: 10.1111/j.1532-7795.2012.00796.x

Rapport, M. D., Tucker, S. B., DuPaul, G. J., Merlo, M., and Stoner, G. (1986). Hyperactivity and frustration: the influence of control over and size of rewards in delaying gratification. J. Abnorm. Child Psychol. 14, 191–204. doi: 10.1007/BF00915440

Read, D. (2001). Is time-discounting hyperbolic or subadditive? J. Risk Uncert. 23, 5–32. doi: 10.1023/A:1011198414683

Read, D. (2004). “Intertemporal choice,” in Blackwell Handbook of Judgment and Decision Making, eds D. J. Koehler and N. Harvey (Oxford: Blackwell), 424–443. doi: 10.1002/9780470752937.ch21

Read, D., Frederick, S., and Airoldi, M. (2012). Four days later in Cincinnati: longitudinal tests of hyperbolic discounting. Acta Psychol. 140, 177–185. doi: 10.1016/j.actpsy.2012.02.010

Read, D., and Van Leeuwen, B. (1998). Predicting hunger: the effects of appetite and delay on choice. Organ. Behav. Hum. Decis. Process. 76, 189–205. doi: 10.1006/obhd.1998.2803

Reimers, S., Maylor, E. A., Stewart, N., and Chater, N. (2009). Associations between a one-shot delay discounting measure and age, income, education and real-world impulsive behavior. Pers. Indiv. Differ. 47, 973–978. doi: 10.1016/j.paid.2009.07.026

Rescorla, R. A., and Solomon, R. L. (1967). Two-process learning theory: relationships between Pavlovian conditioning and instrumental learning. Psychol. Rev. 74, 151. doi: 10.1037/h0024475

Reynolds, B., and Fields, S. (2012). Delay discounting by adolescents experimenting with cigarette smoking. Addiction 107, 417–424. doi: 10.1111/j.1360-0443.2011.03644.x

Reynolds, B., Patak, M., and Shroff, P. (2007). Adolescent smokers rate delayed rewards as less certain than adolescent nonsmokers. Drug Alcohol Depend. 90, 301–303. doi: 10.1016/j.drugalcdep.2007.04.008

Reynolds, B., Richards, J. B., Horn, K., and Karraker, K. (2004). Delay discounting and probability discounting as related to cigarette smoking status in adults. Behav. Processes 65, 35–42. doi: 10.1016/S0376-6357(03)00109-8

Robson, D., and Gray, R. (2007). Serious mental illness and physical health problems: a discussion paper. Int. J. Nurs. Stud. 44, 457–466. doi: 10.1016/j.ijnurstu.2006.07.013

Rossow, I. (2008). Alcohol consumption and discounting. Addict. Res. Theory 16, 572–584. doi: 10.1080/16066350801896248

Rounds, J. S., Beck, J. G., and Grant, D. M. (2007). Is the delay discounting paradigm useful in understanding social anxiety? Behav. Res. Ther. 45, 729–735. doi: 10.1016/j.brat.2006.06.007

Rubinstein, A. (2003). Economics and psychology: the case of hyperbolic discounting. Int. Econ. Rev. 44, 1207–1216. doi: 10.1111/1468-2354.t01-1-00106

Russell, A. E., Ford, T., Williams, R., and Russell, G. (2015). The association between socioeconomic disadvantage and attention deficit/hyperactivity disorder (ADHD): a systematic review. Child Psychiatry Hum. Dev. doi: 10.1007/s10578-015-0578-3. [Epub ahead of print].

Samuelson, P. (1937). A note on the measurement of utility. Rev. Econ. Stud. 4, 155–161. doi: 10.2307/2967612

Schacter, D. L., Addis, D. R., and Buckner, R. L. (2008). Episodic simulation of future events: concepts, data, and applications. Ann. N.Y. Acad. Sci. 1124, 39–60. doi: 10.1196/annals.1440.001

Scheres, A., and Hamaker, E. L. (2010). What we can and cannot conclude about the relationship between steep temporal reward discounting and hyperactivity-impulsivity symptoms in attention-deficit/hyperactivity disorder. Biol. Psychiatry 68, e17–e18. doi: 10.1016/j.biopsych.2010.05.021

Scheres, A., Tontsch, C., Thoeny, A. L., and Kaczkurkin, A. (2010). Temporal reward discounting in attention-deficit/hyperactivity disorder: the contribution of symptom domains, reward magnitude, and session length. Biol. Psychiatry 67, 641–648. doi: 10.1016/j.biopsych.2009.10.033

Schultz, W. (2006). Behavioral theories and the neurophysiology of reward. Annu. Rev. Psychol. 57, 87–115. doi: 10.1146/annurev.psych.56.091103.070229

Schweighofer, N., Bertin, M., Shishida, K., Okamoto, Y., Tanaka, S. C., Yamawaki, S., et al. (2008). Low-serotonin levels increase delayed reward discounting in humans. J. Neurosci. 28, 4528–4532. doi: 10.1523/JNEUROSCI.4982-07.2008

Schweitzer, J. B., and Sulzer-Azaroff, B. (1995). Self-control in boys with attention deficit hyperactivity disorder: effects of added stimulation and time. J. Child Psychol. Psychiatry 36, 671–686. doi: 10.1111/j.1469-7610.1995.tb02321.x

Seymour, B., and Dolan, R. (2008). Emotion, decision making, and the amygdala. Neuron 58, 662–671. doi: 10.1016/j.neuron.2008.05.020

Seymour, B., O'Doherty, J. P., Koltzenburg, M., Wiech, K., Frackowiak, R., Friston, K., et al. (2005). Opponent appetitive-aversive neural processes underlie predictive learning of pain relief. Nat. Neurosci. 8, 1234–1240. doi: 10.1038/nn1527

Shamosh, N. A., DeYoung, C. G., Green, A. E., Reis, D. L., Johnson, M. R., Conway, A. R., et al. (2008). Individual differences in delay discounting: relation to intelligence, working memory, and anterior prefrontal cortex. Psychol. Sci. 19, 904–911. doi: 10.1111/j.1467-9280.2008.02175.x

Smittenaar, P., FitzGerald, T. H., Romei, V., Wright, N. D., and Dolan, R. J. (2013). Disruption of dorsolateral prefrontal cortex decreases model-based in favor of model-free control in humans. Neuron 80, 914–919. doi: 10.1016/j.neuron.2013.08.009

Sonuga-Barke, E., Taylor, E., Sembi, S., and Smith, J. (1992). Hyperactivity and delay aversion—I. The effect of delay on choice. J. Child Psychol. Psychiatry 33, 387–398. doi: 10.1111/j.1469-7610.1992.tb00874.x

Sozou, P. D. (1998). On hyperbolic discounting and uncertain hazard rates. Proc. Biol. Sci. 265, 2015–2020. doi: 10.1098/rspb.1998.0534

Stanger, C., Ryan, S. R., Fu, H., Landes, R. D., Jones, B. A., Bickel, W. K., et al. (2012). Delay discounting predicts adolescent substance abuse treatment outcome. Exp. Clin. Psychopharmacol. 20, 205. doi: 10.1037/a0026543

Steinberg, L., Graham, S., O'Brien, L., Woolard, J., Cauffman, E., and Banich, M. (2009). Age differences in future orientation and delay discounting. Child Dev. 80, 28–44. doi: 10.1111/j.1467-8624.2008.01244.x

Stephan, K. E., and Mathys, C. (2014). Computational approaches to psychiatry. Curr. Opin. Neurobiol. 25, 85–92. doi: 10.1016/j.conb.2013.12.007

Stephens, D. W., and Krebs, J. R. (1986). Foraging Theory. Princeton, NJ: Princeton University Press.

Story, G., Vlaev, I., Seymour, B., Darzi, A., and Dolan, R. (2014). Does temporal discounting explain unhealthy behavior? A systematic review and reinforcement learning perspective. Front. Behav. Neurosci. 8:76. doi: 10.3389/fnbeh.2014.00076

Story, G. W., Vlaev, I., Dayan, P., Seymour, B., Darzi, A., and Dolan, R. J. (2015). Anticipation and choice heuristics in the dynamic consumption of pain relief. PLoS Comput. Biol. 11:e1004030. doi: 10.1371/journal.pcbi.1004030

Story, G. W., Vlaev, I., Seymour, B., Winston, J. S., Darzi, A., and Dolan, R. J. (2013). Dread and the disvalue of future pain. PLoS Comput. Biol. 9:e1003335. doi: 10.1371/journal.pcbi.1003335

Strotz, R. H. (1957). Myopia and inconsistency in dynamic utility maximisation. Rev. Econ. Stud. 23, 165–180. doi: 10.2307/2295722

Sutton, R. S., and Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.

Swann, A. C. (2009). Impulsivity in mania. Curr. Psychiatry Rep. 11, 481–487. doi: 10.1007/s11920-009-0073-2

Swirsky-Sacchetti, T., Gorton, G., Samuel, S., Sobel, R., Genetta-Wadley, A., and Burleigh, B. (1993). Neuropsychological function in borderline personality disorder. J. Clin. Psychol. 49, 385–396.

Takahashi, T. (2004). Cortisol levels and time-discounting of monetary gain in humans. Neuroreport 15, 2145–2147. doi: 10.1097/00001756-200409150-00029

Takahashi, T., Ikeda, K., and Hasegawa, T. (2007). A hyperbolic decay of subjective probability of obtaining delayed rewards. Behav. Brain Funct. 3:52. doi: 10.1186/1744-9081-3-52

Takahashi, T., Oono, H., Inoue, T., Boku, S., Kako, Y., Kitaichi, Y., et al. (2008). Depressive patients are more impulsive and inconsistent in intertemporal choice behavior for monetary gain and loss than healthy subjects–an analysis based on Tsallis' statistics. Neuro Endocrinol. Lett. 29, 351.

Takahashi, T., Shinada, M., Inukai, K., Tanida, S., Takahashi, C., Mifune, N., et al. (2009). Stress hormones predict hyperbolic time-discount rates six months later in adults. Neuro Endocrinol. Lett. 31, 616–621.

Tanaka, S. C., Schweighofer, N., Asahi, S., Shishida, K., Okamoto, Y., Yamawaki, S., et al. (2007). Serotonin differentially regulates short-and long-term prediction of rewards in the ventral and dorsal striatum. PLoS ONE 2:e1333. doi: 10.1371/journal.pone.0001333

Thaler, R. H. (1981). Some empirical evidence on dynamic inconsistency. Econ. Lett. 8, 201–207. doi: 10.1016/0165-1765(81)90067-7

Thaler, R. H., and Shefrin, H. M. (1981). An economic theory of self-control. J. Polit. Econ. 89, 392–406. doi: 10.1086/260971

Thorndike, E. L. (1927). The law of effect. Am. J. Psychol. 39, 212–222. doi: 10.2307/1415413

Townsend, J., Bookheimer, S. Y., Foland-Ross, L. C., Sugar, C. A., and Altshuler, L. L. (2010). fMRI abnormalities in dorsolateral prefrontal cortex during a working memory task in manic, euthymic and depressed bipolar subjects. Psychiatry Res. 182, 22–29. doi: 10.1016/j.pscychresns.2009.11.010

Trepel, C., Fox, C. R., and Poldrack, R. A. (2005). Prospect theory on the brain? Toward a cognitive neuroscience of decision under risk. Brain Res. Cogn. Brain Res. 23, 34–50. doi: 10.1016/j.cogbrainres.2005.01.016

Tricomi, E., Balleine, B. W., and O'Doherty, J. P. (2009). A specific role for posterior dorsolateral striatum in human habit learning. Eur. J. Neurosci. 29, 2225–2232. doi: 10.1111/j.1460-9568.2009.06796.x

Tripp, G., and Alsop, B. (2001). Sensitivity to reward delay in children with attention deficit hyperactivity disorder (ADHD). J. Child Psychol. Psychiatry 42, 691–698. doi: 10.1111/1469-7610.00764

Van den Bergh, B., Dewitte, S., and Warlop, L. (2008). Bikinis instigate generalized impatience in intertemporal choice. J. Consum. Res. 35, 85–97. doi: 10.1086/525505

van der Pol, M., and Cairns, J. (2002). A comparison of the discounted utility model and hyperbolic discounting models in the case of social and private intertemporal preferences for health. J. Econ. Behav. Organ. 49, 79–96. doi: 10.1016/S0167-2681(02)00059-8

van Haren, N. E., Pol, H. E. H., Schnack, H. G., Cahn, W., Brans, R., Carati, I., et al. (2008). Progressive brain volume loss in schizophrenia over the course of the illness: evidence of maturational abnormalities in early adulthood. Biol. Psychiatry 63, 106–113. doi: 10.1016/j.biopsych.2007.01.004

Van Oers, J., Bongers, I., Van de Goor, L., and Garretsen, H. (1999). Alcohol consumption, alcohol-related problems, problem drinking, and socioeconomic status. Alcohol. Alcohol. 34, 78–88. doi: 10.1093/alcalc/34.1.78

Velakoulis, D., Stuart, G. W., Wood, S. J., Smith, D. J., Brewer, W. J., Desmond, P., et al. (2001). Selective bilateral hippocampal volume loss in chronic schizophrenia. Biol. Psychiatry 50, 531–539. doi: 10.1016/S0006-3223(01)01121-0

Wang, M., Rieger, M. O., and Hens, T. (2010). How time preferences differ: evidence from 45 countries. Swiss Finance Institute Research Paper, 09–47. Available online at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1481443 doi: 10.2139/ssrn.1481443

Wang, X.-J., and Krystal, J. H. (2014). Computational psychiatry. Neuron 84, 638–654. doi: 10.1016/j.neuron.2014.10.018

Wang, X. T., and Dvorak, R. D. (2010). Sweet future: fluctuating blood glucose levels affect future discounting. Psychol. Sci. 21, 183–188. doi: 10.1177/0956797609358096

Washio, Y., Higgins, S. T., Heil, S. H., McKerchar, T. L., Badger, G. J., Skelly, J. M., et al. (2011). Delay discounting is associated with treatment response among cocaine-dependent outpatients. Exp. Clin. Psychopharmacol. 19, 243–248. doi: 10.1037/a0023617

Watkins, C. J., and Dayan, P. (1992). Q-learning. Mach. Learn. 8, 279–292. doi: 10.1007/BF00992698

Weich, S., and Lewis, G. (1998). Poverty, unemployment, and common mental disorders: population based cohort study. Brit. Med. J. 317, 115–119. doi: 10.1136/bmj.317.7151.115

Weller, R. E., Avsar, K. B., Cox, J. E., Reid, M. A., White, D. M., and Lahti, A. C. (2014). Delay discounting and task performance consistency in patients with schizophrenia. Psychiatry Res. 215, 286–293. doi: 10.1016/j.psychres.2013.11.013

Wesley, M. J., Lohrenz, T., Koffarnus, M. N., McClure, S. M., De La Garza, R., Salas, R., et al. (2014). Choosing money over drugs: the neural underpinnings of difficult choice in chronic cocaine users. J. Addict. 2014:189853. doi: 10.1155/2014/189853

Williams, D. R., and Williams, H. (1969). Auto-maintenance in the pigeon: sustained pecking despite contingent non-reinforcement. J. Exp. Anal. Behav. 12, 511–520. doi: 10.1901/jeab.1969.12-511

Williams, G. C. (1957). Pleiotropy, natural selection, and the evolution of senescence. Evolution 11, 398–411. doi: 10.2307/2406060

Wilson, R. C., Nassar, M. R., and Gold, J. I. (2010). Bayesian online learning of the hazard rate in change-point problems. Neural Comput. 22, 2452–2476. doi: 10.1162/NECO_a_00007

Wilson, V. B., Mitchell, S. H., Musser, E. D., Schmitt, C. F., and Nigg, J. T. (2011). Delay discounting of reward in ADHD: application in young children. J. Child Psychol. Psychiatry 52, 256–264. doi: 10.1111/j.1469-7610.2010.02347.x

Wing, V. C., Moss, T. G., Rabin, R. A., and George, T. P. (2012). Effects of cigarette smoking status on delay discounting in schizophrenia and healthy controls. Addict. Behav. 37, 67–72. doi: 10.1016/j.addbeh.2011.08.012

Winstanley, C. A., Theobald, D. E., Dalley, J. W., and Robbins, T. W. (2005). Interactions between serotonin and dopamine in the control of impulsive choice in rats: therapeutic implications for impulse control disorders. Neuropsychopharmacology 30, 669–682. doi: 10.1038/sj.npp.1300610

Wunderlich, K., Smittenaar, P., and Dolan, R. J. (2012). Dopamine enhances model-based over model-free choice behavior. Neuron 75, 418–424. doi: 10.1016/j.neuron.2012.03.042

Yechiam, E., Busemeyer, J. R., Stout, J. C., and Bechara, A. (2005). Using cognitive models to map relations between neuropsychological disorders and human decision-making deficits. Psychol. Sci. 16, 973–978. doi: 10.1111/j.1467-9280.2005.01646.x

Yi, R., Gatchalian, K. M., and Bickel, W. K. (2006). Discounting of past outcomes. Exp. Clin. Psychopharmacol. 14:311. doi: 10.1037/1064-1297.14.3.311

Yu, A. J., and Dayan, P. (2003). “Expected and unexpected uncertainty: ACh and NE in the neocortex,” in Advances in Neural Information Processing Systems 15 (Cambridge, MA).

Yu, A. J., and Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron 46, 681–692. doi: 10.1016/j.neuron.2005.04.026

Zanarini, M. C., Williams, A. A., Lewis, R. E., and Reich, R. B. (1997). Reported pathological childhood experiences associated with the development of borderline personality disorder. Am. J. Psychiatry 154:1101. doi: 10.1176/ajp.154.8.1101

Keywords: discounting, time preference, psychiatric, computational psychiatry, mental illness, biopsychosocial

Citation: Story GW, Moutoussis M and Dolan RJ (2016) A Computational Analysis of Aberrant Delay Discounting in Psychiatric Disorders. Front. Psychol. 6:1948. doi: 10.3389/fpsyg.2015.01948

Received: 29 June 2015; Accepted: 04 December 2015;
Published: 13 January 2016.

Edited by:

Gianluca Castelnuovo, Università Cattolica del Sacro Cuore, Italy

Reviewed by:

Michelle Dow Keawphalouk, Harvard and Massachusetts Institute of Technology, USA
Warren K. Bickel, Virginia Polytechnic Institute and State University, USA

Copyright © 2016 Story, Moutoussis and Dolan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Giles W. Story, g.story@ucl.ac.uk

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.