We have the conscious experience that we can sometimes “override” emotional or habitual responses. This means that some process must mediate between the multiple decision-making systems. The mechanisms of that “override” process are still being studied, but some neural components, such as the prefrontal cortex, are known to be involved.
As we’ve seen in the past several chapters, the human decision-making system contains multiple action-selection components. Reflexes are largely driven by spinal control, but reflexes can be overridden by other systems in the brain (as we saw with Lawrence of Arabia holding the match). Emotional (Pavlovian) response systems release behaviors learned over evolutionary timescales, but, again, they can be overridden by other systems (controlling your fear). Procedural learning systems are habits that develop in response to consistent reward contingencies, but again, these can be overridden by other systems (as in the case of remembering to take the alternate route to work). And finally, there is the Deliberative system, which searches through future possibilities but does not (in general) have to pick the most desired choice. (Take, for example, the ability to select a small token of reward over a highly craved drug; the craving for a high-value option can be overridden to take what seems on the surface to be a lesser-value option.)
The word “override” comes from the ability of the nobility to dominate decisions (since the nobility served as cavalry and could literally ride over the opposition if it came to that). It echoes the Platonic image of two horses pulling a chariot, one wild and uncontrollable (Dionysian) and one intellectual, rational, and reasoned (Apollonian), as well as the Augustinian and Freudian concepts of human reason controlling a wild animal past.1 These concepts echo the popularized image of a horse and rider, with the horse driving the emotional responses and the rider the reasoned responses.2
As anyone who has ridden a horse knows, riding a horse is very different from driving a car. The horse has internal sensors and will do some of the basic chores for you (like not bumping into walls), while a car will do whatever you tell it to. Of course, this means that sometimes you have to fight the horse (for example, if you need to drive it into battle) in a way that you don’t have to convince a car. (However, I am reminded of the scene in John Carpenter’s darkly comic movie Dark Star in which Lt. Doolittle has to convince “smart” Bomb #20 that it does not really “want” to detonate.) It will be interesting to see how driving a car changes as we introduce more and more self-reliant decision systems into our cars (such as sensory-driven speed control, automatic steering, and GPS navigation systems).3 Jonathan Haidt has popularized this analogy as an elephant and a rider rather than the traditional horse and rider, because the elephant is a more powerful creature, and no rider is ever completely in control of the elephant.4 (Of course, this misses the possibility that the rider and the horse (or elephant) can understand each other so well and be in such sync as to be an even more capable team than either one individually.)
In all of these analogies, there is the belief that the “self” has to work to exert control over the “other.” As we noted at the very beginning of the book (Chapter 1), you are both the horse and the rider (or the elephant and the rider). Robert Kurzban, in his new book, rejects the concept of the individual self and suggests that “self-control” is better understood as conflict between multiple modules.5 In the language of this book, the decision-making system that is you includes the reflexes you inherited, the Pavlovian learning that drives you, the habits you have learned, the stories you tell, and the deliberation you do. Nevertheless, self-control is something that feels very real to us as humans. Consciously, we know when we successfully exert our self-control, and (afterwards) we know when we don’t. Therefore, it is important to understand what this process is.
Before we can take the mechanisms of self-control apart, we need to find a way to make a subject show self-control in the laboratory and to measure it. First, we want to be quantitative about when people are showing self-control and when they aren’t. This will allow us to examine the mechanisms of failures of self-control. Second, we’d like to be able to access self-control directly and not have to wait for a person to fall off the wagon. Third, we’d like to be able to examine failures of self-control in nondangerous situations. Fourth, we’d like to be able to use our big measuring machines (fMRI, EEG, neural recordings) to see the physical correlates of self-control. And fifth, we’d like to be able to examine self-control in nonhuman animals.
Self-control has been a topic of study since the inception of modern psychology. (Freud’s concept of the id, ego, and superego is a multistage theory of self-control.6) Several particularly sensitive measures of self-control have been introduced over the years, including the Stroop task and the stop-signal task, as well as tasks that put subjects in more realistic situations, such as the marshmallow task and its animal analogs that compare abstract and concrete representations (the chimpanzee and the jellybeans).7 Finally, there is the candy-rejection task, which will lead us into issues of cognitive load and how self-control can tire out.8
Perhaps the simplest experiment to address the question of self-control is the Stroop task, a remarkably simple yet subtle task in which a person is shown a word printed in a color and is supposed to name the color of the text. This is normally very easy. But if the word is itself a color (such as the word “red” written in blue), then an interference appears between reading the word itself and recognizing the color that it is printed in. This interference slows the time it takes to name the color (the reaction time) and increases the likelihood of making an error. Both of these provide quantitative measures of the cognitive conflict and track other measures of self-control.9 The change in reaction times in the Stroop task is so reliable that it is now used to test for attention to other emotionally salient concepts, such as food-related words in obese individuals and drug-related words in addicts.10
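To make that measurement concrete, here is a minimal sketch (in Python) of how Stroop interference is typically quantified: the difference in mean reaction time between congruent and incongruent trials. The numbers below are illustrative placeholders, not real data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative reaction times (in ms) for naming the text color.
# Incongruent trials (the word "red" printed in blue) are assumed to be
# slower than congruent trials (the word "red" printed in red).
congruent_rt = rng.normal(650, 90, size=200)
incongruent_rt = rng.normal(780, 110, size=200)

# The Stroop interference effect is the difference in mean reaction time.
stroop_effect = incongruent_rt.mean() - congruent_rt.mean()
print(f"mean congruent RT:    {congruent_rt.mean():.0f} ms")
print(f"mean incongruent RT:  {incongruent_rt.mean():.0f} ms")
print(f"interference effect:  {stroop_effect:.0f} ms")
```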
The Stroop task can’t be used with animals (who can’t read), and here the stop-signal task has been very useful. In the stop-signal task, the subject is trained to react as quickly as possible to a go signal (such as “when the light comes on, push the button”), but sometimes a different signal (a Stop! signal) appears between the go signal and the subject’s reaction. If the subject sees the stop signal, he or she is rewarded for not reacting and is punished for reacting—the subject has to stop the already started motor-action sequence. The stop-signal task has successfully been used in rats, monkeys, and humans, and performance is similar across the different species.11
Generally, the most successful description of the task is that there are two separate processes, each racing to a threshold (a go process and a Stop! process). The Stop! process is assumed to run faster than the go process, but also to start later. If the Stop! process reaches threshold before the go process, the subject is able to cancel the action and stop himself or herself.12 Neural recordings have found increasing activity in certain areas (cortical motor control structures and the subthalamic nucleus [a part of the basal ganglia]) that tracks the two racing components and their interaction.13
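To see how the race-to-threshold description produces behavior, here is a minimal simulation sketch (in Python) of the independent race idea: the go process and the Stop! process each finish at a random time, and the subject responds only if the go process finishes first. The function name and parameter values are illustrative assumptions, not fits to any data.

```python
import numpy as np

rng = np.random.default_rng(0)

def stop_signal_race(n_trials=10_000, go_mean=450, go_sd=80,
                     stop_mean=220, stop_sd=40, ssd=200):
    """Simulate one block of stop trials under a simple race model.

    The go process finishes at a random time after the go signal; the
    Stop! process starts later (at the stop-signal delay, ssd) but runs
    faster. If the Stop! process finishes first, the already-started
    action is cancelled. Returns the probability of failing to stop.
    """
    go_finish = rng.normal(go_mean, go_sd, n_trials)              # ms after go signal
    stop_finish = ssd + rng.normal(stop_mean, stop_sd, n_trials)  # ms after go signal
    responded = go_finish < stop_finish                           # go won the race
    return responded.mean()

# The later the Stop! signal arrives, the more often the go process
# wins the race and the subject fails to cancel the action.
for ssd in (100, 200, 300, 400):
    print(f"stop-signal delay {ssd} ms: P(respond) = {stop_signal_race(ssd=ssd):.2f}")
```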
Both the Stroop and stop-signal tasks are computer-driven, timed tasks. The other tasks listed above depend on physically real objects being offered to subjects; this produces different results from symbols or images of objects.14 In the first few chapters of the book, we encountered an experiment (the Parable of the Jellybeans, Chapter 1) in which chimpanzees had to choose between two sets of jellybeans. The key to this task was that the subject received the tray he didn’t pick: to get the larger pile of jellybeans, he had to reach for the smaller one.15 Sarah Boysen and Gary Berntson found that their subjects were much better at picking the “humble portion” if they were using symbols rather than actual jellybeans. Humans also show a preference for real objects, particularly sensory-positive objects, when faced with experimental decision tasks.16
This preference for real objects leaves us with an important (and interesting) inconsistency in decision-making systems and underlies some of the problems with valuation that we saw in our initial discussion of “value” (Chapter 3). The idea of a thing and the physical thing may have very different values. This occurs because there are multiple decision-making systems competing for that valuation. If the Pavlovian system (Chapter 8) is engaged (when the object is physically in front of us), we make one valuation, but if it is not (if we are using the Deliberative system, Chapter 9), we make another.
This creates an interesting inconsistency in decision-making.17 Even though an alcoholic might deny the urge to drink when sitting at home, that same alcoholic sitting in the bar an hour later will take that drink. Some authors have argued that the way to think about this is that there are two selves: that your present self is in conflict with your future self. Other authors have suggested that there are two modules, one that thinks long term and one that responds more immediately. I think it’s simpler to recognize that we are inconsistent and irrational and to figure out how to work our lives around that inconsistency and irrationality. One way to exert self-control is to recognize this inconsistency and to precommit to a condition that does not allow you to reach that future difficult choice.18 For example, the alcoholic might realize that it will be very difficult to not drink in the bar and might decide not to go to the bar in the first place. A gambler with a known problem might take a different route home so as not to drive by the casino.19
This suggests the importance of attention and distraction in self-control20—if we can keep our attention on the other option, we may be better able to avoid the Pavlovian choice. Similarly, if we can attend to specific aspects of an alternative, we may be able to change the valuation of that alternative against the temptation. This may be one of the reasons that personal religion is often a source of strength in the face of severe danger or temptation. Religion may enable people to maintain their attention on an alternative option that precludes committing the “sin” that they are trying to control.21
But what if you are faced with that immediate choice? Is it a lost cause? What are the mechanisms of self-control? Even the alcoholic facing a drink in a bar can sometimes deny that driving urge to drink.
In the classic marshmallow task, introduced by Walter Mischel in the 1960s,22 a child is brought into a room, a marshmallow (or equivalent high-value candy or toy) is placed in front of the child, and the child is told, “If you can wait fifteen minutes without eating the marshmallow, I’ll give you two marshmallows.” Then the adult leaves the room. Videos from the original experiments are now famous in the psychological literature because the children show classic self-control behaviors. They try to distract themselves from the marshmallow. They cover their eyes so they don’t have to look at the marshmallow. They turn away. They do other actions like kicking the desk. The problem is that keeping attention off the marshmallow is hard. Follow-up experiments have shown that the ability to wait is surprisingly well correlated with later success, including staying out of jail, avoiding drug addiction, getting good grades in high school, scoring well on the SAT, and succeeding at work.
Adults are generally able to reject a single marshmallow in this situation, but rejecting that marshmallow takes cognitive effort. This leads us to the candy-rejection task.23 In this task, young women who have expressed an interest in dieting are asked to watch a movie with a bowl of high-sugar snacks placed next to them. By measuring how many of the snacks are eaten, one can quantify each subject’s lapses in self-control. Some of the women are asked to suppress their emotional responses to the movie, while others are told to simply watch it. The more they have to control their emotions while watching the movie, the more they eat. A host of similar self-control and fatigue tests can be used as the quantitative measure, including willingness to drink a bitter-tasting medicine, willingness to hold one’s hand in an ice-water bath, and physical stamina on a lever-pulling or handle-squeezing task. All of these require some aspect of self-control, and all are impaired (decreased) when they are preceded by tasks that tax emotional and/or cognitive control.
This leads us to the concept of cognitive load. Many of these tasks, such as the candy-rejection task, and even the Stroop task, depend on the ability to allocate some sort of limited cognitive resource to the self-control component.24 We can see this most clearly in the candy-rejection task—spending your self-control attention and effort on keeping your emotions in check reduces your ability to reject temptation and increases the amount of candy that you eat. Putting your override abilities elsewhere allows your Pavlovian system free rein. (Ask anyone nibbling on chips and pretzels in a sports bar.) In fact, we see this sort of effect all the time—when one is distracted, one tends to forget to override prepotent actions, whether they are driven by the emotional (Pavlovian) system or by the overlearned (Procedural) system, or whether they are highly motivated targets in the Deliberative system.25 This is a specific instantiation of the general process of having limited resources. Complex processing, including memory, navigation, and self-control, all require a similar set of limited resources. If these resources are taken up by one task, they cannot be used in another.26
In one of my favorite examples of limited cognitive resources, following on an experiment introduced by Ken Cheng and Randy Gallistel for rats,27 Linda Hermer and Elizabeth Spelke tested how well children (two-year-olds) could find a favorite hidden toy.28 The toy was placed in one of two boxes in the corners of a rectangular room. The key is that a rectangular room has a 180-degree rotational symmetry. The children watched the toy being placed in the box and were then spun around until they were dizzy. Although the children had watched the toy being placed in the box only a few minutes earlier, they were unable to remember where it was; they were able to use the geometry of the room to identify two of the four corners, but they were unable to remember which of the two opposite corners held the toy.A Even with colored boxes and salient wall cues (one wall painted white, one painted black), the children were unable to break that 180-degree symmetry. (Children who were not spun had no trouble at all finding the favorite toy.) Adults were of course able to remember even after being spun. (Of course, what the adults were doing was saying, “It’s left of the black wall” or “It’s in the blue box.”) If the adults were given a linguistic blocking task at the same time (to stop them from repeating the location over and over again, linguistically), they ended up just like the children, unable to remember where the toy was. Classic cognitive load tasks used in experiments include saying the alphabet backwards and counting down from 1000 by sevens. (Counting down from 1000 by sevens is really hard to do and really distracting.)
Distracted people revert to the other systems. I recently changed the route I drive to work. If I get distracted (say by thinking about how I’m going to deal with a problem at work or at home, or even just by thinking about writing this book instead of driving), then I miss my turn and go straight through to the old route. I’m sure that if I were counting down from 1000 by sevens as I drove through that corner, I would miss it every time.
Using these limited resources takes energy. One of the most interesting things about self-control is that not only does it depend on paying attention, and not only does it diminish when one is tired, but accomplishing self-control takes energy.30 (Humans are aware of this. We talk of “willpower” as if it is a limited resource that we need to conserve for when we need it. In fact, if subjects are told that they will need to use their self-control later, they can conserve it, showing less self-control in earlier tasks but then being more able to express self-control in later tasks.) Even something as simple as resisting temptation (such as not eating offered chocolate) can have an effect on one’s ability to perform some physical task requiring stamina.31
Many scientists argue that this is one of the reasons that stress often leads to falling back into habits that have been stopped, such as smoking or drug use. Coping with stress depletes one’s self-control and leaves one vulnerable to tasks that depend on that self-control resource, such as preventing relapse. In fact, it has been argued that one of the reasons that regular meetings (such as in Contingency Management or 12-step programs such as Alcoholics Anonymous and its more-regulated cousins) work so well is that they provide daily rewards for accomplishing self-control, which helps strengthen those self-control “muscles.”32 In a similar vein, Walter Mischel has argued that much of what we call good parenting entails an explicit training of self-control33 (such as telling kids that “it’s ok to wait for two marshmallows”).
These self-control resources can be replenished through sleep, rest, or relaxation.34 Roy Baumeister and his colleagues have suggested that this limited resource depends on glucose levels in the bloodstream;35 however, this suggestion is extremely controversial.36 Glucose (sugar) is a resource required by neurons to power neural function.37 Blood-glucose levels in subjects performing a self-control task were much lower than in subjects performing a similar control task that did not require self-control, and subjects who drank a sugar drink showed improved self-control relative to those who drank a similarly sweet but artificially flavored drink.38 Apparently, eating sugar really does help with self-control, which may be why stressed people crave sweets.B If this glucose theory is correct, then self-control may really be like a muscle, in that it gets tired because it runs out of resources. However, other scientists have suggested that the taste of glucose may be priming expectations of goals and changing the distribution of which systems are driving decision-making. Merely tasting (but not swallowing) a sweet solution can increase exercise performance.39 Perhaps the glucose solutions are changing one’s motivation, invigorating the subject (Chapter 13).
It is also not clear why self-control requires additional glucose compared to other brain-intensive phenomena, such as increased attention (listening for a very faint sound), increased perception (finding a hidden image in a picture), fine motor control (playing piano or violin), or even doing a cognitively difficult problem (a math test). To my knowledge, the glucose theory has not been tested in these other brain-intensive phenomena.C It may be that self-control simply engages more neural systems than just going with the flow does, and those neural systems require resources.
There is also evidence that self-control can be trained,42 but whether this is more akin to a practice effect (through which neural systems learn to perform tasks better) or to a homeostatic change (such as muscles increasing their resource buffers from exercise) is still unclear.
So why does self-control engage additional parts of the brain and require more neuronal resources?
Two structures in the human brain appear again and again in self-control tasks: the dorsolateral prefrontal cortex (dlPFC) and the anterior cingulate cortex (ACC).43 These structures are often talked about as being involved in “top-down control” because they are involved in abilities to recognize and change goals and tasks.44 These two structures seem to be involved in the monitoring of conflicting information and desires, task-setting, and the overriding of plans.45 Both of these structures are located in the front of the human brain, and they are often referred to as part of the “prefrontal” cortex.D
The implication that the prefrontal cortex is involved in self-control originally came from inadvertent lesion studies, including soldiers returning from the wars of the late 19th and early 20th centuries.51 But the first and most important indication that the prefrontal cortex played a role in self-control, particularly the Augustinian nature of it, came from Phineas Gage,52 whom we met in our discussion of emotion (Chapter 8). To remind you, Phineas Gage was a railway worker who had an iron tamping rod blown through his head in an explosion, destroying much of his frontal cortex. One of the key descriptions of Phineas Gage after his accident was his lack of self-control, particularly in emotional situations. More modern results, including both fMRI data from humans and neural recordings in monkeys, have found dorsolateral prefrontal cortical activity to be related to working memory, to self-control, and to the construction and maintenance of alternate plans (such as one might take in the face of an error-related contingency), particularly complex alternate plans.53
Because the stop-signal task is trainable in animals, particularly monkeys, the anatomical pathways through which the “stop” behavior is achieved are well known. In a pair of remarkable recent papers, Masaki Isoda and Okihide Hikosaka found that an area to the front of the motor cortex (the “supplementary motor area” [SMA]) stops unwanted behaviors through its strong projection to the subthalamic nucleus in the basal ganglia.54 In our discussion of the Procedural action-selection system (Chapter 10), we talked of go/no-go pathways in the basal ganglia: cortical input enters the striatum in the basal ganglia and is then passed through two pathways (a “go” pathway, often called the “direct” pathway, in which the striatum inhibits an area that inhibits actions, making a double negative that learns to encourage actions, and a “no-go” pathway, often called the “indirect” pathway, in which there is a triple negative that learns to discourage actions). There is also a third pathway through the basal ganglia in which the frontal areas of the cortex project directly to the subthalamic nucleus, which projects directly to the final negative stage of the two pathways. (See Figure 10.1.) The subthalamic nucleus is excitatory (positive) and excites the final inhibitory stage, making a single negative: a Stop! pathway. To differentiate it from the direct “go” pathway and the indirect “no-go” pathway, it is sometimes called the “hyper-direct” pathway.55
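One way to keep the double-negative, triple-negative, and single-negative logic straight is to multiply the signs of the connections along each pathway. The toy sketch below (in Python) does just that; the connection lists are deliberately simplified and leave out most of the real circuitry.

```python
# Each pathway is treated as a chain of excitatory (+1) and inhibitory (-1)
# links onto the structures that ultimately release or withhold action.
# The product of the signs gives the pathway's net effect on action.

def net_effect(signs):
    """Multiply the signs along a pathway to get its net effect on action."""
    product = 1
    for s in signs:
        product *= s
    return "promotes action" if product > 0 else "suppresses action"

pathways = {
    # direct ("go"): striatum inhibits a structure that inhibits action (a double negative)
    "direct ('go')": [-1, -1],
    # indirect ("no-go"): three inhibitory links plus one excitatory link (a triple negative)
    "indirect ('no-go')": [-1, -1, +1, -1],
    # hyperdirect ("Stop!"): cortex excites the subthalamic nucleus, which excites
    # the final inhibitory stage (a single negative)
    "hyperdirect ('Stop!')": [+1, +1, -1],
}

for name, signs in pathways.items():
    print(f"{name:22s} {net_effect(signs)}")
```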
This can be seen as a general phenomenon—the frontal cortices provide signals that allow other structures to change previously learned responses.56 Historically, this has been called the behavioral inhibition system because it is usually tested in experiments in which behaviors are stopped rather than changed,57 but I’ve always suspected that the prefrontal cortex is better understood as a biasing system that allows complex plans to control behavior through the manipulation of simpler systems. This architecture reflects the subsumption architecture proposed by Rodney Brooks58 in which more complex systems are overlaid on top of simpler systems. The more complex systems listen to the inputs and step in if needed by either modulating the simpler systems (by providing them with additional inputs) or by directly driving the output themselves.
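To make the subsumption idea concrete, here is a minimal Python sketch of a two-layer controller in the spirit of Brooks’ proposal: a simple layer produces a default behavior, and a more complex layer watching the same inputs either lets it run or steps in and drives the output itself. The class and behavior names are invented for illustration; they are not drawn from any actual robotics code.

```python
class ReflexLayer:
    """Simple default behavior: avoid obstacles, otherwise go forward."""
    def act(self, obstacle_ahead: bool) -> str:
        return "turn away" if obstacle_ahead else "go forward"

class DeliberativeLayer:
    """A more complex layer overlaid on the simpler one.

    It listens to the same inputs and steps in only when its plan conflicts
    with what the simpler layer would do. (Here it takes over the output
    directly; it could equally bias the simpler layer's inputs instead.)
    """
    def __init__(self, goal_requires_override: bool):
        self.goal_requires_override = goal_requires_override

    def act(self, obstacle_ahead: bool, lower: ReflexLayer) -> str:
        default = lower.act(obstacle_ahead)
        if self.goal_requires_override and obstacle_ahead:
            return "push through toward goal"   # override the simpler system
        return default                          # otherwise let it run

reflex = ReflexLayer()
planner = DeliberativeLayer(goal_requires_override=True)
print(planner.act(obstacle_ahead=True, lower=reflex))   # -> push through toward goal
print(planner.act(obstacle_ahead=False, lower=reflex))  # -> go forward
```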
In the rodent, the anterior cingulate cortex (ACC) and the medial frontal cortex (particularly the infralimbic and prelimbic cortices) provide signals to the amygdala to override learned fear-related responses.59 In the monkey, the prefrontal cortex overrides habitual responses to allow more controlled responses. In general, this can be seen as a form of what is sometimes called top-down processing,60 in which frontal cortical systems modulate the “lower” systems to change expectations, attention, and behaviors. Reflexes may be driven by the spinal cord, but there is a loop through which the brain examines the inputs to the reflexes and can modulate them if needed. The ACC modulates emotional action-selection systems.61 The frontal motor areas (like the SMA) modulate Procedural learning.62 And dorsolateral prefrontal cortex modulates evaluation systems to select longer-term options in Deliberative systems.63
To see how this works, we can take a specific example. In a recent fMRI study, Yadin Dudai and his colleagues in Israel studied humans overcoming their fear of snakes.64 They found people who had specific phobias of snakes and gave them the chance to pull a rope that brought a snake closer to them. The sight of the snake produced activity in the amygdala and limbic emotional areas, which have long been associated with fear in both animals and humans. (We can recognize these as Pavlovian [emotional] action-selection circuits, Chapter 8.) People who were able to overcome their fear (pulling the rope) had decreased activity in the amygdala, but increased activity in the subgenual anterior cingulate cortex (sgACC), an area of prefrontal cortex that projects to the amygdala. Somehow, activity in the sgACC inhibited activity in the amygdala and allowed these people to overcome their fear and pull the rope toward them.
• Roy F. Baumeister, Todd F. Heatherton, and Dianne M. Tice (1994). Losing Control: How and Why People Fail at Self-Regulation. San Diego, CA: Academic Press.
• George Ainslie (2001). Breakdown of Will. Cambridge, UK: Cambridge University Press.
• Robert Kurzban (2010). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton, NJ: Princeton University Press.
• Uri Nili, Hagar Goldberg, Abraham Weizman, and Yadin Dudai (2010). Fear Thou Not: Activity of frontal and temporal circuits in moments of real-life courage. Neuron, 66, 949–962.