
ORIGINAL RESEARCH article

Front. Hum. Neurosci., 11 September 2009
Sec. Cognitive Neuroscience
Volume 3 - 2009 | https://doi.org/10.3389/neuro.09.023.2009

Neural substrates of contingency learning and executive control: dissociating physical, valuative, and behavioral changes

O’Dhaniel A. Mullette-Gillman and Scott A. Huettel

1 Brain Imaging and Analysis Center, Duke University, Durham, NC, USA
2 Center for Neuroeconomic Studies, Duke University, Durham, NC, USA
3 Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
Contingency learning is fundamental to cognition. Knowledge about environmental contingencies allows behavioral flexibility, as executive control processes accommodate the demands of novel or changing environments. Studies of experiential learning have focused on the relationship between actions and the values of associated outcomes. However, outcome values have often been confounded with the physical changes in the outcomes themselves. Here, we dissociated contingency learning into valuative and non-valuative forms, using a novel version of the two-alternative choice task, while measuring the neural effects of contingency changes using functional magnetic resonance imaging (fMRI). Changes in value-relevant contingencies evoked activation in the lateral prefrontal cortex (LPFC), posterior parietal cortex (PPC), and dorsomedial prefrontal cortex (DMPFC), consistent with prior results (e.g., from reversal-learning paradigms). Changes in physical contingencies unrelated to value or to action produced similar activations within the LPFC, indicating that the LPFC may engage in generalized contingency learning that is not specific to valuation. In contrast, contingency changes that required behavioral shifts evoked activation localized to the DMPFC, supplementary motor, and precentral cortices, suggesting that these regions play more specific roles in the executive control of behavior.

Introduction

Contingency learning is a fundamental component of cognition. By identifying the relationships between actions and events, humans and other animals can produce goal-directed and flexible behavior that accounts for changes in their environments. Models of goal-directed behavior, such as those of reinforcement learning, have traditionally examined the contingencies between actions and their rewarding or punishing outcomes (Thorndike, 1898; Pavlov, 1928; Skinner, 1938; Herrnstein, 1970). Such models can account for simple reward-seeking behaviors (e.g., foraging), and even describe quite complex aspects of behavior and decision making (Sutton and Barto, 1998). However, effective behavior requires learning not only about relations between actions and received rewards, but also about environmental contingencies that do not themselves influence reward outcomes (Tolman, 1932), such as information about the available options, state space, or stimulus-stimulus contingencies.
Studies of the neural basis of contingency learning – often using reversal tasks in which reward contingencies change unexpectedly – have identified a host of involved brain areas (Cools et al., 2002; O’Doherty et al., 2003; Remijnse et al., 2005; Xue et al., 2008). Collectively, many of these regions have been described as constituting the dorsal executive network (Duncan and Owen, 2000) or central executive (Goldman-Rakic, 1996). This network may contribute to contingency learning in other, non-rewarding contexts as well. For example, regions of lateral and medial prefrontal cortex exhibit increased activation to events that violate local temporal patterns of stimuli, even when those patterns arise through random chance and even in the absence of explicit awareness (Squires et al., 1976; Huettel et al., 2002; Fan et al., 2007). Such findings fit the broad theory that the dorsal executive network detects environmental changes and implements the cognitive processes necessary for modifying behavior appropriately (Wise et al., 1996; Botvinick et al., 2001; Miller and Cohen, 2001; Ridderinkhof et al., 2004; Walton et al., 2004; Mansouri et al., 2009). The resulting behavioral changes are postulated to reflect biasing signals directed at other brain systems (Botvinick et al., 2001; Miller and Cohen, 2001). However, most prior studies of executive control have used tasks in which contingency changes and the engagement of executive control mechanisms necessarily co-occur.
Here, we adapted the classic two-option reversal-learning paradigm to dissociate multiple forms of contingency learning. Across a series of rapidly presented trials, the environmental contingencies changed in three independent ways: (1) changes in outcome valence, which resulted in a behavioral change on the next trial; (2) changes in outcome magnitude, which affected obtained rewards but did not produce a behavioral change; and (3) changes in the physical effect of an action, through new visual feedback that was completely unrelated to rewards or required actions. We hypothesized that brain areas previously implicated in learning and responding to changes in action-outcome contingencies also process contingency changes that are behaviorally and motivationally irrelevant. If obtained, such results would indicate that these regions support the generalized updating of models about the current environment, including but not limited to the changes in the anticipated value of actions, with other regions contributing to the executive control of action.

Materials and Methods

Participants

Fourteen healthy, neurologically normal young-adult volunteers (six female; age range: 18–29 years; mean age: 22.4 years) participated in a single fMRI session. All participants acclimated to the testing environment using a mock MRI scanner. Two participants were removed from the analyses, one because of technical issues with stimulus presentation and the other because of excessive head motion. All participants gave written informed consent as part of a protocol approved by the Institutional Review Board of Duke University Medical Center. Subjects’ payment was contingent on the choices made during the experiment (mean payment: $49 out of a possible $50).

Task

Participants engaged in a modified two-alternative choice task (Figure 1A). As a framing story, each participant was told to act as an investment broker who selects between two factories in which to invest. On each trial, one of the two choice options resulted in a monetary gain, while the other resulted in an equal-magnitude loss (range: ±18 to 93¢). The outcome comprised two simultaneously presented visual components: the received value and an abstract visual object (described as the product of the factory). A total of eight different objects were presented across all trials, constructed through the factorial combination of two values for each of three dimensions (shape, color, and orientation of a diagonal slash). Participants were given explicit instructions that the abstract objects were predictive neither of future outcomes nor of changes in the value of those outcomes; moreover, they were told that outcomes could change after as little as one trial, but that on average the outcomes would remain stable for several trials at a time. Participants were instructed that their commission (i.e., payment) was proportional to the total amount earned across all decisions.
Figure 1. Task. (A) Example trial structure. Following an initial stimulus display, the onset of the trial was signaled by a change in color of the response circles. When the participant indicated the choice for that trial, the selected option changed color, whereupon the outcome of the trial was indicated by a visual object and a monetary reward. Trials were each 2.8 s, with 1.4 s for response and choice presentation and 1.4 s for outcome presentation. Intertrial intervals ranged between 0.1 and 0.5 s, to prevent subjects from predicting the onset of each trial. (B) Possible outcomes derived from different changes in contingencies. On Standard trials, the outcome was maintained from the prior trial. In Reversal Changes, the action-outcome contingencies switch, and thus the participant loses the amount of money they were expecting to gain. In Value Changes, the participant gains money, but either more or less than they expected. Finally, in Effect Changes, the participant receives the amount of money they were expecting, but the visual stimulus changes.
Value and effect contingencies were altered independently, through four different possible trial types (see Figure 1B). The first type comprised the Standard trials (76% of total), in which no stimuli, rewards, or actions changed from the previous trial. The second was a Reversal Change (6% of trials), in which the associated values of the two options switched sign (without changing magnitude); this trial type is equivalent to the critical events in the canonical reversal-learning task. Given the complementary relationship between the valences of the two options, Reversal Changes should guide participants to select the other option on the next trial. The third type was a Value Change (6% of trials), in which the absolute magnitude of the current value changed, either up or down, without a change of sign. Because participants still received a positive reward (albeit not the expected amount), they should continue to select the same option on succeeding trials. The fourth type, an Effect Change (12% of trials, split evenly between one- and two-dimensional changes), involved a behaviorally and motivationally irrelevant contingency change in the visual object associated with each action (i.e., the object shown on the screen). This could involve a change in either one dimension of the object (e.g., color alone) or two dimensions (e.g., both color and shape), with equal probability. However, because the reward value remained constant, the participant should continue selecting the same option on future trials.
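To make the four trial types concrete, the following sketch captures the outcome bookkeeping they imply. It is a minimal illustration in Python with names of our own choosing (the actual task was implemented in the Psychophysics Toolbox for Matlab), not the experiment code itself.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Outcome:
    value_cents: int    # magnitude of the gain paid by the better option
    obj: tuple          # (shape, color, orientation), each feature coded 0 or 1
    better_option: int  # 0 or 1: the option that currently yields the gain

def apply_change(state: Outcome, trial_type: str) -> Outcome:
    """Update the environment according to one of the four trial types."""
    if trial_type == "standard":         # 76% of trials: nothing changes
        return state
    if trial_type == "reversal":         # 6%: valences swap, magnitudes kept
        return replace(state, better_option=1 - state.better_option)
    if trial_type == "value":            # 6%: magnitude changes, sign kept
        return replace(state, value_cents=random.randint(18, 93))
    if trial_type.startswith("effect"):  # 12%: object changes, reward kept
        n_dims = 2 if trial_type == "effect_2d" else 1
        dims = set(random.sample(range(3), k=n_dims))
        new_obj = tuple(1 - f if i in dims else f
                        for i, f in enumerate(state.obj))
        return replace(state, obj=new_obj)
    raise ValueError(trial_type)
```

Only the reversal branch should alter the participant’s next choice; the other branches leave the optimal action unchanged.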
Participants fixated on a cross at the center of the display throughout the experiment. Failure to respond in the allotted time (1.4 s after the trial start) resulted in a monetary loss equivalent to the value of the worse option on the current trial. To ensure comprehension of instructions and to provide experience with task contingencies, all participants performed a 7-min behavioral training session before the fMRI experiment.
Participants carried out four runs of this task while in the scanner, each consisting of 150 trials including 10 reversal changes, 10 value changes, 10 one-dimensional effect changes, and 10 two-dimensional effect changes. We used a constant hazard probability for contingency shifts of p ∼ 0.28. Each run also included four non-task pauses, in which the inter-trial interval was extended to 10–20 s. We predetermined the timing and order of contingency shifts to maximize the statistical dissociation between the hypothesized hemodynamic responses evoked by each contingency shift type. The order of runs was randomized across participants. Stimulus presentation and behavioral data collection were carried out using the Psychophysics Toolbox for Matlab (Brainard, 1997), with stimuli presented through MR-compatible LCD goggles and behavioral selections made using the first two fingers of the right hand on an MR-compatible response box.
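A run’s composition can likewise be sketched as below. Note one simplification: the authors predetermined change timing to maximize the statistical dissociation of the hemodynamic responses, whereas this illustrative version (function name ours) simply scatters the 40 change trials at random.

```python
import random

def make_run_schedule(n_trials=150, n_per_change=10, seed=0):
    """One run: 10 each of reversal, value, one- and two-dimensional
    effect changes; all remaining trials are standard."""
    rng = random.Random(seed)
    changes = (["reversal"] * n_per_change + ["value"] * n_per_change +
               ["effect_1d"] * n_per_change + ["effect_2d"] * n_per_change)
    rng.shuffle(changes)
    # Trial 0 cannot be a change: there is no prior contingency to violate.
    positions = rng.sample(range(1, n_trials), len(changes))
    schedule = ["standard"] * n_trials
    for pos, kind in zip(positions, changes):
        schedule[pos] = kind
    return schedule
```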

fMRI Data Collection and Analysis

We acquired data with a 4T GE scanner using an inverse-spiral pulse sequence with standard imaging parameters [TR = 2000 ms; TE = 30 ms; 34 axial slices parallel to the AC-PC plane; voxel size of 3.75 × 3.75 × 3.8 mm]. High resolution 3D full-brain SPGR anatomical images were acquired and used for normalizing and co-registering individual participants’ data.
Analyses were performed using FEAT (FMRI Expert Analysis Tool) Version 5.92, part of the FSL package (Smith et al., 2004; Woolrich et al., 2009). The following pre-statistics processing steps were applied: motion correction using MCFLIRT, slice-timing correction, removal of non-brain voxels using BET (Smith, 2002), spatial smoothing with a Gaussian kernel of 8 mm full-width at half-maximum, and high-pass temporal filtering with a 50-s cutoff. Registration to high-resolution and standard images was carried out using FLIRT (Jenkinson and Smith, 2001).
Our first-level FEAT model contained five factors: three coding the onsets of contingency changes (i.e., Reversal Changes, Value Changes, and Effect Changes) and a fourth nuisance variable coding the infrequent missed trials; an impulse of unit duration and unit weight was used for each of these events. The fifth factor coded the timing of the non-task pauses by their duration and a unit weight. Each of these factors was then convolved with a double-gamma hemodynamic response function to create the final regressors within our design matrix. Of note, this model uses the Standard trials as a task-related baseline. Thus, activations associated with the performance of Standard trials, such as stimulus presentation and motor-response execution, are controlled for by comparison to the baseline fMRI signal.
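Outside of FEAT, the same kind of design matrix can be approximated as follows; the double-gamma parameters below are common SPM/FSL-style defaults, stated as an assumption rather than as FEAT’s exact internals.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr=2.0, duration=32.0):
    """Canonical double-gamma HRF sampled at the TR (default-style parameters)."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak minus late undershoot
    return hrf / hrf.sum()

def event_regressor(onsets_s, n_scans, tr=2.0):
    """Unit-duration, unit-weight impulses convolved with the HRF."""
    stick = np.zeros(n_scans)
    for onset in onsets_s:
        stick[int(round(onset / tr))] = 1.0
    return np.convolve(stick, double_gamma_hrf(tr))[:n_scans]

# Design-matrix columns: reversal, value, and effect change onsets, plus a
# nuisance column for missed trials; the pause regressor would instead use a
# boxcar scaled by each pause's duration before the same convolution.
```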
Second-level FEAT analyses combining runs within participants used a fixed-effects model, while third-level, across-participants analyses used FLAME (stages 1 and 2) random-effects analysis with automatic outlier de-weighting (Woolrich, 2008). All statistical inferences, including data visualization, used whole-brain-corrected cluster-significance thresholds of p < 0.05 (z > 2.3). Finally, to quantify the percent change in activation across different contingency change types, we created spheres with 8-mm radii around the centroids of our functionally defined regions-of-interest (ROIs) using MRICRON (Rorden et al., 2007).
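The ROI quantification reduces to averaging signal within an 8-mm sphere around each functional centroid. The generic numpy sketch below illustrates the geometry (it is not the MRICRON procedure itself, and the example center coordinates are arbitrary).

```python
import numpy as np

def sphere_mask(shape, center_vox, radius_mm, voxel_mm):
    """Boolean mask of voxels within radius_mm of a center given in voxel indices."""
    grids = np.indices(shape).astype(float)
    dist2 = sum(((g - c) * s) ** 2
                for g, c, s in zip(grids, center_vox, voxel_mm))
    return dist2 <= radius_mm ** 2

# Example: an 8-mm sphere in acquisition space with this study's
# 3.75 x 3.75 x 3.8 mm voxels (center chosen arbitrarily).
mask = sphere_mask((64, 64, 34), (30, 28, 17), 8.0, (3.75, 3.75, 3.8))
# Percent signal change per trial type is then the mean of the parameter-
# estimate map within `mask`, scaled by the baseline signal.
```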
Activation cluster peaks presented in Tables 1, 2, and 4 were produced using FSL. For each table, 30 peaks were determined for each cluster and labeled with their Harvard-Oxford designations using FSLView. Only the peak voxel for each anatomical designation is listed. Table 3 was produced using MRICRON to calculate the centroid of each cluster.

Results

Behavioral Data

Following a random guess on the first trial, optimal behavior in this task was to select the option that was rewarded on the previous trial (i.e., to follow a win-stay/lose-shift strategy, WSLS). A feature of this task is that subjects should engage in one-trial learning, which minimizes the problems of temporal credit assignment that occur within probabilistic reversal learning tasks. Note that, given the low likelihood and unpredictability of the reversal change trials, attempts to predict such shifts in value contingencies would reduce overall payment. Thus, we measured behavioral performance in reference to an optimal WSLS strategy. On average, participants followed the WSLS strategy on 99.4% of trials, with no individual run for any participant below 95% performance. All participants described (in their own words) following a WSLS strategy in a post-study questionnaire. In addition, very few trials were missed due to slow responses (mean: 0.7%), with only one participant missing over 2% of trials in any individual run.
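Scoring behavior against the optimal strategy is straightforward; the sketch below (names ours) returns the fraction of WSLS-consistent choices.

```python
def wsls_compliance(choices, gains):
    """Fraction of trials (after the first) consistent with win-stay/lose-shift.

    choices: 0/1 option selected on each trial.
    gains:   True if that trial's outcome was a monetary gain.
    """
    consistent = sum(
        choices[t] == (choices[t - 1] if gains[t - 1] else 1 - choices[t - 1])
        for t in range(1, len(choices)))
    return consistent / (len(choices) - 1)
```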
To examine the effects of contingency changes upon subsequent behavior, we evaluated whether participants’ response times were slowed on the trial following each type of contingency change. Following standard trials, the mean response time across participants was 396 ms (SD: 144 ms). Response time increased significantly following each type of contingency change (repeated measures ANOVA, main effect p < 0.05): for reversal changes, 418 ms; for value changes, 411 ms; for one-dimensional effect changes, 410 ms; and for two-dimensional effect changes, 425 ms. Thus, our behavioral data indicate that participants performed at a nearly optimal level throughout the experiment, but were nevertheless influenced by each sort of contingency change.
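For reference, this kind of repeated-measures test can be run with statsmodels; the data below are synthetic stand-ins for the per-participant mean response times, not the study’s data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic per-subject mean RTs (ms) mimicking the reported pattern:
# slower responses following every type of contingency change.
rng = np.random.default_rng(0)
conditions = ["standard", "reversal", "value", "effect_1d", "effect_2d"]
rows = [{"subj": s, "condition": c,
         "rt": 400 + 15 * (c != "standard") + rng.normal(0, 10)}
        for s in range(12) for c in conditions]
df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA: condition as the within-subject factor.
print(AnovaRM(df, depvar="rt", subject="subj", within=["condition"]).fit())
```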

fMRI Data

We first examined the main effect of Reversal Change, as a contrast between this event regressor and the Standard trials baseline. Significant areas of activation included posterior parietal cortex (PPC), anterior insula cortex (aINS), lateral prefrontal cortex (LPFC), dorsomedial prefrontal cortex (DMPFC), precuneus, supplementary motor cortex (SMC), and precentral cortex (Figure 2A and Table 1). This set of regions replicates that described by previous studies in which contingency changes involved behavioral shifts (Cools et al., 2002; O’Doherty et al., 2003; Remijnse et al., 2005; Xue et al., 2008). For example, Cools et al. (2002) found greater activation in the LPFC for reversal errors relative to probabilistic errors.
Figure 2. Neural activation in response to contingency changes. (A) On Reversal Change trials, the contingencies between actions and their rewarding outcomes switch, so that the participant should select a different action on the subsequent trial. Such changes evoke activation in a dorsal executive network comprising regions of medial and lateral prefrontal cortex, parietal cortex, and insular cortex, among other regions. (B) On Effect Change trials, only the visual effect of the action changes; the outcome has the same value, and the participant does not need to switch behavioral responses. Activation was again observed in lateral prefrontal and parietal cortices, along with regions of visual cortex.
Next, we examined the main effect of Value Change, again as a contrast between this task-related regressor and the Standard trial baseline. No brain regions survived our standard statistical criterion. This absence of activation could be due to the specific bimodal distribution of value changes in our task, which contained both large negative events (i.e., Reversal changes) and small positive and negative changes (i.e., Value changes).
Finally, we identified the main effect of Effect Change, contrasting this regressor with the Standard trial baseline. We found significant activations in the striatum, PPC, LPFC, aINS, and temporal cortices (Figure 2B and Table 2). This pattern of activation contains many of the same regions as observed for Reversal Change trials, consistent with the interpretation that similar networks process each sort of contingency change.
To identify regions responsive to both Reversal Changes (i.e., valuative contingency shifts) and Effect Changes (i.e., non-valuative contingency shifts), we examined the intersection of voxels whose activation increased significantly to each type of change independently (voxelwise z > 2.3, with whole-brain cluster correction of p < 0.05). This conjunction analysis revealed activations in aINS, PPC, and LPFC (Figure 3 and Table 3). ROI analyses were performed to compare the levels of activation produced by the reversal and effect changes; no significant differences between the two change types were found within these co-activated regions (Figures 3B–D).
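Computationally, this conjunction is the voxelwise intersection of the two independently thresholded maps. A minimal sketch follows, assuming the corrected z-stat maps have been saved as NIfTI files (the file names here are hypothetical):

```python
import nibabel as nib  # assumption: corrected maps exported as NIfTI images

rev = nib.load("zstat_reversal_corrected.nii.gz").get_fdata()
eff = nib.load("zstat_effect_corrected.nii.gz").get_fdata()

# A voxel survives the conjunction only if it exceeds z = 2.3 in both maps.
conjunction = (rev > 2.3) & (eff > 2.3)
print("co-activated voxels:", int(conjunction.sum()))
```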
Figure 3. Conjunction of activations for Reversal Changes and Effect Changes. (A) Shown are regions that exhibited significant activation in both the Reversal Change and Effect Change conditions. Subsequent functional ROI analyses extracted the relative signal change evoked for each trial type – Reversal Changes, Value Changes, and Effect Changes – within the (B) right anterior insula cortex, (C) right posterior parietal cortex (PPC), and (D) right middle frontal gyrus (MFG). Horizontal lines indicate pairs of conditions with significant differences in activation amplitude (p < 0.05).
To identify regions whose activation differed based on the nature of contingency change, we examined the contrasts of Effect Change > Reversal Change and Reversal Change > Effect Change (Figure 4 and Table 4). Effect Changes produced significantly greater activations only in a small number of posterior regions (including the middle temporal gyrus) that have been previously implicated in object processing (Martin et al., 1996; Weisberg et al., 2007). Reversal Changes produced significantly greater activations in the DMPFC, SMC, and precentral gyrus, regions previously implicated in the selection of actions and the production of motor responses. ROI analyses of these regions are presented in Figures 4B–D.
Figure 4. Contrasts between Reversal Changes and Effect Changes. (A) Shown are regions that exhibited significant differences in activation between the Reversal Change and Effect Change conditions. Functional ROI analyses extracted the relative signal change evoked for each trial type – Reversal Changes, Value Changes, and Effect Changes – within the (B) right middle temporal gyrus (MTG), (C) right precentral gyrus (PCG), and (D) dorsomedial prefrontal cortex (DMPFC). Horizontal lines indicate pairs of conditions with significant differences in activation amplitude (p < 0.05).

Discussion

Humans, like many other animals, possess a high degree of behavioral flexibility. We can learn the values of different actions and can select new courses of behavior when those values change. However, human learning extends well beyond action-value contingencies to include learning about the physical effects of our actions, which can be generalized to novel situations with new valuative contingencies. Although numerous studies have explored the neural basis of valuative contingency learning, many have confounded the change in the reward with the physical change in the rewarding stimulus. Here, we show that, when compared within the same task, the brain regions associated with learning reward-action contingencies are also engaged by behaviorally- and reward-irrelevant contingency changes.

Valuative vs. Non-Valuative Contingency Processing

Our data suggest that the prefrontal and parietal cortex activations associated with value learning could be attributable to the co-occurring physical changes in the rewarding stimulus. This strong interpretation is supported by the differential effects of prefrontal lesions across a range of species. Lesions in the ventromedial prefrontal cortex (including orbitofrontal areas) appear to remove the motivation or ability to learn values and respond appropriately, such as in the reversal-learning task (Doar et al., 1987; Bechara et al., 1994; Dias et al., 1996; Hornak et al., 2004; Izquierdo et al., 2004; Rudebeck and Murray, 2008). In contrast, LPFC lesions disrupt the learning or accessing of contingency information, such as in extradimensional set-shifting tasks (Owen et al., 1991; Dias et al., 1996; Hornak et al., 2004; see also Barcelo et al., 2007). Alternatively, the LPFC could contribute to both sorts of contingencies: the physical effect of our action and the valuation of that effect. These two interpretations cannot be distinguished when examining value learning as a categorical change, as in the reversal learning task, because new value contingencies also reflect newly rewarded stimuli.
The key to determining which of these interpretations is correct is to parametrically dissociate the change in value from the change in the physical stimulus. Studies that have examined parametric value signals using fMRI and physiological recording techniques have identified several brain regions that appear to encode such signals, including the medial prefrontal cortex, orbitofrontal cortex, amygdala, dorsal and ventral striatum, nucleus accumbens, and posterior parietal cortex (PPC) (O’Doherty et al., 2001, 2003; Delgado et al., 2003; Dorris and Glimcher, 2004; Knutson et al., 2005; Kable and Glimcher, 2007; Lau and Glimcher, 2007; Plassmann et al., 2007, 2008; Hare et al., 2008; Schiller et al., 2008). Notably absent from this list of value-encoding regions are the middle frontal gyrus (MFG), inferior frontal gyrus (IFG), and aINS, in which activation was evoked during both the Reversal Change and Effect Change trials. This suggests that these lateral prefrontal and insular cortices encode the non-valuative contingency information related to learning about the environment.
Of the brain areas we identified as processing multiple sorts of contingency information, only the PPC has previously been implicated in processing parametric value information (Platt and Glimcher, 1999; Dorris and Glimcher, 2004). This suggests that the PPC may play a role in the integration of actions with both valuative and non-valuative outcomes (Assad, 2003). This agrees with the hypothesis that the PPC acts as a decision map, relating actions to the expected value of their effects (Platt and Glimcher, 1999; Beck et al., 2008; Churchland et al., 2008). Such an integrative role could provide a unitary framework for the myriad functions supported by PPC subregions, such as the multi-modal integration of sensory input (Cohen and Andersen, 2000; Toth and Assad, 2002; Cohen et al., 2004; Mullette-Gillman et al., 2005), attentional-intentional processes relating to motor planning (Colby and Goldberg, 1999; Snyder et al., 2000), working memory (Stoet and Snyder, 2004; Vingerhoets, 2008), and visuomotor learning (Grafton et al., 2008).

Contingency vs. Control Processing

An important question is how contingency processing interacts with the executive control processes responsible for producing behavioral changes when necessitated by a contingency change. The LPFC and aINS have long been hypothesized to play a critical role in working memory and other executive functions, based upon converging evidence from single-unit (Goldman-Rakic, 1996; Chafee and Goldman-Rakic, 1998), lesion (Doar et al., 1987; Hornak et al., 2004), and fMRI studies (Elliott et al., 1997; Rowe et al., 2000). Our LPFC and aINS activations during both reversal changes and effect changes agree with the interpretation that the prefrontal cortex is involved in forming, updating, and accessing models that relate stimuli to actions (Passingham, 1975; Cohen and Servan-Schreiber, 1992; Grafman et al., 1994; Wise et al., 1996; Miller and Cohen, 2001).
Co-occurring LPFC, aINS, and DMPFC activity has also been observed across a large number of studies examining executive control processes, including auditory detection, pattern detection, working memory, and response selection (Goldman-Rakic, 1996; McCarthy et al., 1997; Duncan and Owen, 2000; Miller and Cohen, 2001; Huettel et al., 2002; Robbins, 2007; Hyafil et al., 2009). Models of executive control processing have suggested that the DMPFC (referred to as anterior cingulate cortex, ACC) detects changes in contingencies and then activates the LPFC, which exerts executive control over behavior by biasing activity in other brain areas (Botvinick et al., 2001; Miller and Cohen, 2001; Ridderinkhof et al., 2004; Walton et al., 2004; Behrens et al., 2007; Mansouri et al., 2009). However, a recent study using Granger causality analysis found that the aINS exerted a causal influence on the LPFC and DMPFC (there labeled ACC, and anterior to our specific DMPFC activation; Sridharan et al., 2008), an inversion of the directionality of influence suggested by the executive-function models above (see also Markela-Lerenc et al., 2004).
We suggest that these discrepancies reflect, at least in part, the conflation of contingency detection and control processes in many paradigms. For example, although activations of the LPFC during an oddball task are often described in terms of behavioral control or inhibition, these activations have been shown to be produced by contingency changes in the mapping of stimuli to responses, independently of changes in the specific motor response (Huettel and McCarthy, 2004). Similarly, Carter et al. (2006) found that activity in the LPFC (specifically, the MFG) correlated with the trial-by-trial level of explicit contingency knowledge during a classical conditioning paradigm in which there was no rewarded, or ‘correct’, response. These studies show that the LPFC processes contingency information in the absence of engaged control processes.
Our task allowed the dissociation of contingency and executive control processing, permitting an examination of the functional roles of these brain areas. The contrast of Reversal Change > Effect Change allowed us to determine which brain areas are significantly more activated during the engagement of control and concurrent valuation processes, while controlling for contingency changes. We found increased activations in the posterior dorsomedial prefrontal cortex (DMPFC, including dorsal anterior cingulate), supplementary motor cortex (SMC), and precentral cortex during reversal changes contrasted with effect changes. As contingency change detection occurs in both Effect and Reversal Changes, this suggests that the aINS and LPFC process the contingency change (and possibly exert the required cognitive control), with the DMPFC activating only when control processes are required to produce a change in behavioral response. This is consistent with the directionality found by recent studies (Markela-Lerenc et al., 2004; Sridharan et al., 2008), and suggests dissociable functional roles for these brain areas that invert the ordering proposed by previous models.

Conclusions

We examined the potentially distinct neural mechanisms underlying behaviorally relevant valuative and behaviorally irrelevant non-valuative contingency learning. We found that the brain areas previously suggested to be involved in valuative contingency learning also contribute to the processing of behaviorally and motivationally irrelevant contingency changes. This suggests two key conclusions. First, the processing of value information may co-opt a more general executive system for contingency learning. Second, because non-valuative contingency changes are behaviorally irrelevant, the executive system may play an informational rather than control role in many tasks.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank McKell Carter, John Clithero, Yale Cohen, Brandi Newell, David Smith, and Lihong Wang for comments on data analysis and the manuscript. This research was supported by the US National Institute of Mental Health (NIMH-70685) and by the US National Institute of Neurological Disease and Stroke (NINDS-41328). SAH is supported by an Incubator Award from the Duke Institute for Brain Sciences.

References

Assad, J. A. (2003). Neural coding of behavioral relevance in parietal cortex. Curr. Opin. Neurobiol. 13, 194–197.
Barcelo, F., Perianez, J. A., and Nyhus, E. (2007). An information theoretical approach to task-switching: evidence from cognitive brain potentials in humans. Front. Hum. Neurosci. 1, 13. doi: 10.3389/neuro.09.013.2007
Bechara, A., Damasio, A. R., Damasio, H., and Anderson, S. W. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition 50, 7–15.
Beck, J. M., Ma, W. J., Kiani, R., Hanks, T., Churchland, A. K., Roitman, J., Shadlen, M. N., Latham, P. E., and Pouget, A. (2008). Probabilistic population codes for Bayesian decision making. Neuron 60, 1142–1152.
Behrens, T. E., Woolrich, M. W., Walton, M. E., and Rushworth, M. F. (2007). Learning the value of information in an uncertain world. Nat. Neurosci. 10, 1214–1221.
Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., and Cohen, J. D. (2001). Conflict monitoring and cognitive control. Psychol. Rev. 108, 624–652.
Brainard, D. H. (1997). The psychophysics toolbox. Spat. Vis. 10, 433–436.
Carter, R. M., O’Doherty, J. P., Seymour, B., Koch, C., and Dolan, R. J. (2006). Contingency awareness in human aversive conditioning involves the middle frontal gyrus. Neuroimage 29, 1007–1012.
Chafee, M. V., and Goldman-Rakic, P. S. (1998). Matching patterns of activity in primate prefrontal area 8a and parietal area 7ip neurons during a spatial working memory task. J. Neurophysiol. 79, 2919–2940.
Churchland, A. K., Kiani, R., and Shadlen, M. N. (2008). Decision-making with multiple alternatives. Nat. Neurosci. 11, 693–702.
Cohen, J. D., and Servan-Schreiber, D. (1992). Context, cortex, and dopamine: a connectionist approach to behavior and biology in schizophrenia. Psychol. Rev. 99, 45–77.
Cohen, Y. E., and Andersen, R. A. (2000). Reaches to sounds encoded in an eye-centered reference frame. Neuron 27, 647–652.
Cohen, Y. E., Cohen, I. S., Gifford, G. W., 3rd. (2004). Modulation of LIP activity by predictive auditory and visual cues. Cereb. Cortex 14, 1287–1301.
Colby, C. L., and Goldberg, M. E. (1999). Space and attention in parietal cortex. Annu. Rev. Neurosci. 22, 319–349.
Cools, R., Clark, L., Owen, A. M., and Robbins, T. W. (2002). Defining the neural mechanisms of probabilistic reversal learning using event-related functional magnetic resonance imaging. J. Neurosci. 22, 4563–4567.
Delgado, M. R., Locke, H. M., Stenger, V. A., and Fiez, J. A. (2003). Dorsal striatum responses to reward and punishment: effects of valence and magnitude manipulations. Cogn. Affect. Behav. Neurosci. 3, 27–38.
Dias, R., Robbins, T. W., and Roberts, A. C. (1996). Dissociation in prefrontal cortex of affective and attentional shifts. Nature 380, 69–72.
Doar, B., Finger, S., and Almli, C. R. (1987). Tactile-visual acquisition and reversal learning deficits in rats with prefrontal cortical lesions. Exp. Brain Res. 66, 432–434.
Dorris, M. C., and Glimcher, P. W. (2004). Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron 44, 365–378.
Duncan, J., and Owen, A. M. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends Neurosci. 23, 475–483.
Elliott, R., Frith, C. D., and Dolan, R. J. (1997). Differential neural response to positive and negative feedback in planning and guessing tasks. Neuropsychologia 35, 1395–1404.
Fan, J., Kolster, R., Ghajar, J., Suh, M., Knight, R. T., Sarkar, R., and McCandliss, B. D. (2007). Response anticipation and response conflict: an event-related potential and functional magnetic resonance imaging study. J. Neurosci. 27, 2272–2282.
Goldman-Rakic, P. S. (1996). The prefrontal landscape: implications of functional architecture for understanding human mentation and the central executive. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 351, 1445–1453.
Grafman, J., Pascual-Leone, A., Alway, D., Nichelli, P., Gomez-Tortosa, E., and Hallett, M. (1994). Induction of a recall deficit by rapid-rate transcranial magnetic stimulation. Neuroreport 5, 1157–1160.
Grafton, S. T., Schmitt, P., Van Horn, J., and Diedrichsen, J. (2008). Neural substrates of visuomotor learning based on improved feedback control and prediction. Neuroimage 39, 1383–1395.
Hare, T. A., O’Doherty, J., Camerer, C. F., Schultz, W., and Rangel, A. (2008). Dissociating the role of the orbitofrontal cortex and the striatum in the computation of goal values and prediction errors. J. Neurosci. 28, 5623–5630.
Herrnstein, R. J. (1970). On the law of effect. J. Exp. Anal. Behav. 13, 243–266.
Hornak, J., O’Doherty, J., Bramham, J., Rolls, E. T., Morris, R. G., Bullock, P. R., and Polkey, C. E. (2004). Reward-related reversal learning after surgical excisions in orbito-frontal or dorsolateral prefrontal cortex in humans. J. Cogn. Neurosci. 16, 463–478.
Huettel, S. A., Mack, P. B., and McCarthy, G. (2002). Perceiving patterns in random series: dynamic processing of sequence in prefrontal cortex. Nat. Neurosci. 5, 485–490.
Huettel, S. A., and McCarthy, G. (2004). What is odd in the oddball task? Prefrontal cortex is activated by dynamic changes in response strategy. Neuropsychologia 42, 379–386.
Hyafil, A., Summerfield, C., and Koechlin, E. (2009). Two mechanisms for task switching in the prefrontal cortex. J. Neurosci. 29, 5135–5142.
Izquierdo, A., Suda, R. K., and Murray, E. A. (2004). Bilateral orbital prefrontal cortex lesions in rhesus monkeys disrupt choices guided by both reward value and reward contingency. J. Neurosci. 24, 7540–7548.
Jenkinson, M., and Smith, S. (2001). A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5, 143–156.
Kable, J. W., and Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nat. Neurosci. 10, 1625–1633.
Knutson, B., Taylor, J., Kaufman, M., Peterson, R., and Glover, G. (2005). Distributed neural representation of expected value. J. Neurosci. 25, 4806–4812.
Lau, B., and Glimcher, P. W. (2007). Action and outcome encoding in the primate caudate nucleus. J. Neurosci. 27, 14502–14514.
Mansouri, F. A., Tanaka, K., and Buckley, M. J. (2009). Conflict-induced behavioural adjustment: a clue to the executive functions of the prefrontal cortex. Nat. Rev. Neurosci. 10, 141–152.
Markela-Lerenc, J., Ille, N., Kaiser, S., Fiedler, P., Mundt, C., and Weisbrod, M. (2004). Prefrontal-cingulate activation during executive control: which comes first? Brain Res. Cogn. Brain Res. 18, 278–287.
Martin, A., Wiggs, C. L., Ungerleider, L. G., and Haxby, J. V. (1996). Neural correlates of category-specific knowledge. Nature 379, 649–652.
McCarthy, G., Luby, M., Gore, J., and Goldman-Rakic, P. (1997). Infrequent events transiently activate human prefrontal and parietal cortex as measured by functional MRI. J. Neurophysiol. 77, 1630–1634.
Miller, E. K., and Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annu. Rev. Neurosci. 24, 167–202.
Mullette-Gillman, O. A., Cohen, Y. E., and Groh, J. M. (2005). Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. J. Neurophysiol. 94, 2331–2352.
O’Doherty, J., Critchley, H., Deichmann, R., and Dolan, R. J. (2003). Dissociating valence of outcome from behavioral control in human orbital and ventral prefrontal cortices. J. Neurosci. 23, 7931–7939.
O’Doherty, J., Kringelbach, M. L., Rolls, E. T., Hornak, J., and Andrews, C. (2001). Abstract reward and punishment representations in the human orbitofrontal cortex. Nat. Neurosci. 4, 95–102.
Owen, A. M., Roberts, A. C., Polkey, C. E., Sahakian, B. J., and Robbins, T. W. (1991). Extra-dimensional versus intra-dimensional set shifting performance following frontal lobe excisions, temporal lobe excisions or amygdalo-hippocampectomy in man. Neuropsychologia 29, 993–1006.
Passingham, R. (1975). Delayed matching after selective prefrontal lesions in monkeys (Macaca mulatta). Brain Res. 92, 89–102.
Pavlov, I. P. (1928). Lectures on Conditioned Reflexes. New York, Liveright Publishing Corp.
Plassmann, H., O’Doherty, J., and Rangel, A. (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. J. Neurosci. 27, 9984–9988.
Plassmann, H., O’Doherty, J., Shiv, B., and Rangel, A. (2008). Marketing actions can modulate neural representations of experienced pleasantness. Proc. Natl. Acad. Sci. U.S.A. 105, 1050–1054.
Platt, M. L., and Glimcher, P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature 400, 233–238.
Remijnse, P. L., Nielen, M. M., Uylings, H. B., and Veltman, D. J. (2005). Neural correlates of a reversal learning task with an affectively neutral baseline: an event-related fMRI study. Neuroimage 26, 609–618.
Ridderinkhof, K. R., Ullsperger, M., Crone, E. A., and Nieuwenhuis, S. (2004). The role of the medial frontal cortex in cognitive control. Science 306, 443–447.
Robbins, T. W. (2007). Shifting and stopping: fronto-striatal substrates, neurochemical modulation and clinical implications. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 362, 917–932.
Rorden, C., Karnath, H. O., and Bonilha, L. (2007). Improving lesion-symptom mapping. J. Cogn. Neurosci. 19, 1081–1088.
Rowe, J. B., Toni, I., Josephs, O., Frackowiak, R. S., and Passingham, R. E. (2000). The prefrontal cortex: response selection or maintenance within working memory? Science 288, 1656–1660.
Rudebeck, P. H., and Murray, E. A. (2008). Amygdala and orbitofrontal cortex lesions differentially influence choices during object reversal learning. J. Neurosci. 28, 8338–8343.
Schiller, D., Levy, I., Niv, Y., LeDoux, J. E., and Phelps, E. A. (2008). From fear to safety and back: reversal of fear in the human brain. J. Neurosci. 28, 11517–11525.
Skinner, B. F. (1938). The Behavior of Organisms. New York, Appleton-Century-Crofts.
Smith, S. M. (2002). Fast robust automated brain extraction. Hum. Brain Mapp. 17, 143–155.
Smith, S. M., Jenkinson, M., Woolrich, M. W., Beckmann, C. F., Behrens, T. E., Johansen-Berg, H., Bannister, P. R., De Luca, M., Drobnjak, I., Flitney, D. E., Niazy, R. K., Saunders, J., Vickers, J., Zhang, Y., De Stefano, N., Brady, J. M., and Matthews, P. M. (2004). Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23(Suppl. 1), S208–S219.
Snyder, L. H., Batista, A. P., and Andersen, R. A. (2000). Intention-related activity in the posterior parietal cortex: a review. Vision Res. 40, 1433–1441.
Squires, K. C., Wickens, C., Squires, N. K., and Donchin, E. (1976). The effect of stimulus sequence on the waveform of the cortical event-related potential. Science 193, 1142–1146.
Sridharan, D., Levitin, D. J., and Menon, V. (2008). A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proc. Natl. Acad. Sci. U.S.A. 105, 12569–12574.
Stoet, G., and Snyder, L. H. (2004). Single neurons in posterior parietal cortex of monkeys encode cognitive set. Neuron 42, 1003–1012.
Sutton, R. S., and Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA, MIT Press.
Thorndike, E. (1898). Some experiments on animal intelligence. Science 7, 818–824.
Tolman, E. C. (1932). Purposive Behavior in Animals and Men. New York, The Century Co.
Toth, L. J., and Assad, J. A. (2002). Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature 415, 165–168.
Vingerhoets, G. (2008). Knowing about tools: neural correlates of tool familiarity and experience. Neuroimage 40, 1380–1391.
Walton, M. E., Devlin, J. T., and Rushworth, M. F. (2004). Interactions between decision making and performance monitoring within prefrontal cortex. Nat. Neurosci. 7, 1259–1265.
Weisberg, J., van Turennout, M., and Martin, A. (2007). A neural system for learning about object function. Cereb. Cortex 17, 513–521.
Wise, S. P., Murray, E. A., and Gerfen, C. R. (1996). The frontal cortex-basal ganglia system in primates. Crit. Rev. Neurobiol. 10, 317–356.
Woolrich, M. (2008). Robust group analysis using outlier inference. Neuroimage 41, 286–301.
Woolrich, M. W., Jbabdi, S., Patenaude, B., Chappell, M., Makni, S., Behrens, T., Beckmann, C., Jenkinson, M., and Smith, S. M. (2009). Bayesian analysis of neuroimaging data in FSL. Neuroimage 45, S173–S186.
Xue, G., Ghahremani, D. G., and Poldrack, R. A. (2008). Neural substrates for reversing stimulus-outcome and stimulus-response associations. J. Neurosci. 28, 11196–11204.
Keywords:
executive control, decision making, prefrontal cortex, anterior cingulate cortex, fMRI, dorsomedial prefrontal cortex, cognitive control
Citation:
Mullette-Gillman OA and Huettel SA (2009). Neural substrates of contingency learning and executive control: dissociating physical, valuative, and behavioral changes. Front. Hum. Neurosci. 3:23. doi: 10.3389/neuro.09.023.2009
Received: 23 June 2009;
Paper pending published: 24 July 2009;
Accepted: 24 August 2009;
Published online: 11 September 2009.

Edited by:

Russell A. Poldrack, University of California, USA

Reviewed by:

Roshan Cools, University of Cambridge, UK
Elizabeth Tricomi, Rutgers University, USA
Copyright:
© 2009 Mullette-Gillman and Huettel. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.
*Correspondence:
O’Dhaniel A. Mullette-Gillman, Center for Cognitive Neuroscience, Box 90999, Duke University, Durham, NC 27708, USA. e-mail: mullette-gillman@biac.duke.edu