To measure decision-making scientifically, we define decisions as “taking an action.” There are multiple decision-making systems within each of us. The actions we take are a consequence of the interactions of those systems. Our irrationality occurs when those multiple systems disagree with each other.
Because we like to think of ourselves as rational creatures, we like to define a decision as the conscious deliberation over multiple choices. But this presumes a mechanism that might or might not be correct. If we are such rational creatures, why do we make such irrational decisions? Whether it is eating that last French fry that’s just one more than we wanted, saying something we shouldn’t have at the faculty meeting, or doing something we’d never have done sober that time we were drunk in college, we have all said and done things that we regret. Many a novel is based on a character having to overcome an irrational fear. Many an alcoholic has sworn not to drink, only to be found a few days later, in the bar, drink in hand.
We will see later in this book that the decisions that we make arise from an interaction of multiple decision-making systems. We love because we have emotional reactions borne of intrinsic social needs and evolutionary drives (the Pavlovian system, Chapter 8). We decide what job to take through consideration of the pros and cons of the imagined possibilities (episodic future thinking and the Deliberative system, Chapter 9). We can ride a bike because we have trained up our Procedural learning system (Chapter 10), but it can also be hard to break bad habits that we’ve learned too well (Chapter 15). We will see that these are only a few of the separable systems that we can identify. All of these different decision-making systems make up the person you are.
The idea that our actions arise from multiple, separably identifiable components has a long history in psychology, going back to Freud, or earlier, and has gained recent traction with theories of distributed computing, evolutionary psychology, and behavioral economics.1 Obviously, in the end, there is a single being that takes an action, but sometimes it’s helpful to understand that being in terms of its subsystems. The analogy that I like, which I will use several times in this book, is that of a car. The car has a drive train, a steering system, brakes. Cars often have multiple systems to accomplish the same goal (the foot-pedal brake and the emergency/parking brake, or the electric and gasoline engines in a hybrid like the Toyota Prius).
The psychologist Jonathan Haidt likes to talk of a rider and an elephant as a metaphor for the conscious and unconscious minds both making decisions,2 but I find this analogy unsuitable because it separates the “self” from the “other” decision-making systems. As Haidt recognizes at the end of his book, you are both the rider and the elephant. When a football star talks about being “in the zone,” he’s not talking about being out of his body and letting some other being take over—he feels that he is making the right decisions. (In fact, he’s noticing that his procedural/habit system is working perfectly and is making the right decisions quickly and easily.) When you are walking through a dark forest and you start to get afraid and you jump at the slightest sound, that’s not some animal reacting—that’s you. (It’s a classic example of the Pavlovian action-selection system.) Sometimes these systems work together. Sometimes they work at cross-purposes. In either case, they are all still you. “Do I contradict myself?” asks Walt Whitman in his long poem Song of Myself. “Very well then I contradict myself, (I am large, I contain multitudes.)”
So how do we determine how these multiple systems work? How do we determine when they are working together and when they are at cross-purposes? How do we identify the mechanisms of Whitman’s multitudes?
To study something scientifically, we need to define our question in terms that we can measure and quantify. Thus, we need a measure of decision-making that we can observe, so that we can compare the predictions that arise from our hypotheses with actual data. This is the key to the scientific process: there must be a comparison to reality. If the hypothesis doesn’t fit that reality, we must reject the hypothesis, no matter how much we like it.
One option is simply to ask people what they want. But, of course, the idea that we always do what we say we want makes very strong assumptions about how we make decisions, and anyone who has regretted a decision knows that we don’t always decide to do what we want. Some readers may take issue with this statement, saying that you wanted that decision when you took the action. Just because you regret that night of binge drinking when you have a hangover in the morning doesn’t mean you didn’t want all those drinks the night before. Other readers may argue that a part of you wanted that decision, even if another part didn’t. We will come to a conclusion very similar to this: that there are multiple decision-making modules, and that the members of Whitman’s multitudes do not always agree with each other. Much of this book will be about determining who those multitudes are and what happens when they disagree with each other.
As a first step in this identification of the multitudes, careful scientific studies have revealed that a lot of conscious “decisions” that we think we make are actually rationalizations after the fact.3 For example, the time at which we think we decided to start an action is often after the action has already begun.
In what are now considered classic studies of consciousness, Benjamin Libet asked people to perform an action (such as tapping a finger whenever they wanted to) while watching a dot move around a circle.4 The people were asked to report the position of the dot at the moment they decided to act. Meanwhile, Libet and his colleagues recorded electrical signals from the brain and the muscles. Both the brain and the muscles work by manipulating electricity, which we can measure with appropriate technologies, and with the appropriate mathematics we can decode those signals and determine what is happening within the brain. Libet decoded the time at which the upcoming action could be detected from signals in the motor areas of the brain and compared it to the time at which people reported deciding to act. He found that the brain signals preceded the conscious decision by several hundred milliseconds, almost as if consciousness were perceiving the action rather than instigating it. Several researchers have suggested that much of consciousness is a monitoring process, keeping track of things and stepping in if there are problems.5
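To make the logic of that comparison concrete, here is a minimal sketch of the idea (a toy simulation with made-up numbers, not Libet’s data or analysis): we generate a noisy, readiness-potential-like signal that begins to ramp up before a movement, estimate the earliest time the upcoming action can be detected with a simple threshold, and compare that to a hypothetical reported time of conscious decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy timeline in milliseconds, with movement onset at t = 0.
t = np.arange(-2000, 500)                 # 2 s before movement to 0.5 s after
noise = rng.normal(0.0, 1.0, t.size)

# A readiness-potential-like ramp that begins ~550 ms before movement
# (the onset and slope are illustrative assumptions, not measured values).
ramp_onset = -550
signal = np.where(t >= ramp_onset, 0.01 * (t - ramp_onset), 0.0) + noise

# Smooth the trace and find the first time it exceeds a detection threshold,
# a crude stand-in for decoding when the action could be determined
# from signals in the motor areas of the brain.
kernel = np.ones(50) / 50
smoothed = np.convolve(signal, kernel, mode="same")
brain_detect_time = t[np.argmax(smoothed > 1.0)]

# Hypothetical reported moment of conscious decision (Libet's reports
# clustered a few hundred milliseconds before movement).
reported_decision_time = -200

print(f"action detectable from brain signal at ~{brain_detect_time} ms")
print(f"reported conscious decision at         ~{reported_decision_time} ms")
print(f"brain signal leads the report by       ~{reported_decision_time - brain_detect_time} ms")
```

The point of the exercise is only the ordering: if the detection time reliably comes earlier than the reported time, the brain signal cannot be a consequence of the conscious decision.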
Much of the brain’s machinery is doing this sort of filling-in, of making good guesses about the world. Our eyes can focus on only a small area of our visual field at a time. Our best visual sensors are concentrated in a small area of the retina called the fovea, which is densely packed with cells tuned to color and fine detail. This concentration of detectors at the fovea means that we’re much better at seeing something if we focus our eyes on it. Our eyes fixate on a new area of the visual world every third of a second or so; the rapid jumps between fixations are called saccades. Even while we think we are holding our gaze on a single location, our eyes are making very small shifts called microsaccades. If the visual world is aligned to our microsaccades so that it shifts when we do,A the cells adapt to the constant picture and the visual image “grays out” and vanishes. This means that most of our understanding of the visual world is made from combining and interpreting short-term visual memories. We are inferring the shape of the world, not observing it.
In a simple case familiar to many people, there is a small location on the retina where the axons of the retina’s output cells exit the eye to form the optic nerve, which carries visual signals to the rest of the brain. There are no light detectors at that location, which leaves us with a “blind spot” that must be filled in by the retina and visual processing system.B Our brain normally fills in the blind spot from memories and expectations built from the surrounding patterns.7
In a similar way, we don’t always notice what actually drives our decision-making process. We rationalize it, filling in our reasons from logical perspectives. Some of my favorite examples of this come from the work of V. S. Ramachandran,8 who has studied patients with damage to parts of the brain that represent the body. A patient who was physically unable to lift her arm denied that she had a problem, stating merely that she did not want to lift it. When her arm was made to rise by stimulating her muscles directly, she claimed that she had changed her mind and raised her arm because she wanted to, even though she had no direct control of the arm. In a wonderful story (told by Oliver Sacks in A Leg to Stand On), a patient claims that his right hand is not his own. When confronted with the fact that there are four hands on the table (two of his and two of the doctor’s), the patient says that three of the hands belong to the doctor. “How can I have three hands?” asks the doctor. “Why not? You have three arms,” replies the patient.
In his book Surely You’re Joking, Mr. Feynman!, the famous physicist Richard Feynman described being hypnotized. He wrote that he was sure he could take the action (in this case, opening his eyes) even though he had been hypnotized not to, but that he decided not to, just to see what would happen. So he didn’t open his eyes, which was exactly what he had been asked to do under hypnosis. Notice that he rationalized his decision. As Feynman himself recognized, even if he said “I could have opened my eyes,” he didn’t. So what made the decision? Was it some effect of the hypnosis on his brain, or was it that he didn’t want to? How could we tell? Can we tell?
Many of the experiments we’re going to talk about in this book are drawn from animals making decisions. If we’re going to say that animals make decisions, we need a way of operationalizing those decisions—it’s hard to ask animals what they think. There are methods that allow us to decode information represented within specific parts of the brain, which could be interpreted as a means of asking an animal what it thinks (see Appendix B). However, unlike asking humans linguistically, where one is asking the overall being what it thinks, decoding is asking a specific brain structure what it is representing. Of course, one could argue that taking what people say as what they think assumes that humans are unified beings. As we will see as we delve deeper into how decisions are made, humans (like other mammals) are mixtures of many decision-making systems, not all of which always agree with each other.
Just as the Libet experiments suggest that parts of the brain can act without consciousness, there are representations in the brain that are unable to affect behavior. In a remarkable experiment, Pearl Chiu, Terry Lohrenz, and Read Montague found signals in both smokers’ and nonsmokers’ brains that represented not only the success of decisions made but also what they could have done if they had made a better choice.9 This recognition of what one could have done is called a counterfactual (an imagination of what might have been), and it enables enhanced learning. (It is now known that both rats and monkeys can represent counterfactual reward information as well. These signals appear to use the same brain structures that Chiu, Lohrenz, and Montague were studying, and to be the same structures involved when humans express regret.10) Counterfactuals enhance learning by allowing one to learn from imagined possibilities: for example, from watching someone else make a mistake, or (in the example used by Chiu, Lohrenz, and Montague) from realizing that “if I had taken my money out of the stock market last week, I wouldn’t have lost all that money when it crashed.” While signals in both groups’ brains reflected this counterfactual information, only nonsmokers’ behavior took that information into account. If we want to understand how decisions are made and how they go wrong, we are going to need a way to determine not only the actions taken by the subject but also the information processing happening in his or her brain.
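One way to see why counterfactual information helps is with a small simulation. The following sketch is written under simplified assumptions (a two-armed bandit with made-up reward probabilities, not the investment task used by Chiu, Lohrenz, and Montague): one learner updates only the value of the option it chose, while a second also updates the unchosen option using the “fictive” outcome it could have earned.

```python
import numpy as np

rng = np.random.default_rng(1)

def run(use_counterfactual, n_trials=200, alpha=0.1, p_reward=(0.3, 0.7)):
    """Two-armed bandit learner; returns the fraction of trials on which
    the better (70%-reward) option was chosen."""
    q = np.zeros(2)                  # learned values of the two options
    better_choices = 0
    for _ in range(n_trials):
        # Mostly greedy choice, with a little random exploration.
        choice = int(q[1] > q[0]) if rng.random() > 0.1 else int(rng.integers(2))
        better_choices += (choice == 1)

        # Outcomes for both options occur on every trial; the learner always
        # experiences the chosen one, and the counterfactual learner also
        # updates the option it did not choose from the outcome it missed.
        outcomes = (rng.random(2) < np.array(p_reward)).astype(float)
        q[choice] += alpha * (outcomes[choice] - q[choice])
        if use_counterfactual:
            other = 1 - choice
            q[other] += alpha * (outcomes[other] - q[other])   # fictive update
    return better_choices / n_trials

print("better choices without counterfactuals:", run(False))
print("better choices with counterfactuals:   ", run(True))
```

The counterfactual learner gathers information about both options on every trial, so it discovers the better option sooner; that is the sense in which imagining what might have been enhances learning.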
Certainly, most nonhuman animals don’t have language, although there may be some exceptions.C In any case, it would be hard to ask rats, pigeons, or monkeys (all of which we will see making decisions later in the book) what they want linguistically. Given that it is also hard to ask humans what they really want, we will avoid this language problem altogether and operationalize making a decision as taking an action, because taking an action is an observable response.
This is very similar to what is known in behavioral economics as revealed preferences.13 Economic theory (and the concomitant new field of neuroeconomics) generally assumes that those revealed preferences maintain a rational ordering: if you prefer one thing (say, chocolate ice cream) to another (say, vanilla), you will always prefer chocolate to vanilla whenever you face the same situation, and your preferences will not form contradictory cycles. We will not make that assumption.
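To make that rationality assumption concrete, here is a small sketch (a hypothetical consistency check on made-up data, not part of any standard economics toolkit) that scans a log of observed binary choices for the two kinds of violations the assumption rules out: choosing differently when the same pair is offered again, and intransitive cycles (preferring A to B and B to C, but C to A).

```python
from itertools import permutations

# Hypothetical choice log: (option offered, other option offered, option chosen).
choices = [
    ("chocolate", "vanilla", "chocolate"),
    ("vanilla", "strawberry", "vanilla"),
    ("chocolate", "strawberry", "strawberry"),   # sets up a cycle
    ("chocolate", "vanilla", "vanilla"),         # reverses the first choice
]

preferred = set()            # (winner, loser) pairs observed so far
reversals = []
for a, b, chosen in choices:
    winner, loser = (a, b) if chosen == a else (b, a)
    if (loser, winner) in preferred:
        reversals.append((a, b))                 # same pair, opposite choice
    preferred.add((winner, loser))

options = {option for pair in preferred for option in pair}
cycles = [
    (x, y, z)
    for x, y, z in permutations(options, 3)
    if x == min(x, y, z)                         # report each cycle only once
    and (x, y) in preferred and (y, z) in preferred and (z, x) in preferred
]

print("preference reversals:", reversals)
print("intransitive cycles: ", cycles)
```

Revealed-preference theory assumes that a scan like this would always come up empty; much of behavioral economics (and much of this book) is about the cases where it does not.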
Similarly, we do not want to assume that what a person tells you he or she wants actually reflects the choices that person will make when faced with the actual decision.14 Given the evidence that our conscious observations of the world are inferred and that our spoken explanations are often rationalizations,15 some researchers have suggested that our linguistic explanations of our desires are better thought of as the speech of a “press secretary” than the actions of an executive.16 Thus, rather than asking people what they want, we should measure decisions by giving people explicit choices and asking them to actually choose. We will encounter some of the strangenesses discovered by these experiments in subsequent chapters.
Often, experimentalists will offer people hypothetical choices. It is particularly difficult to get funding to offer people a real choice between $10,000 and $100,000, or to give them a real choice of whether or not to kill one person to save five. Instead, subjects are asked to imagine a situation and pretend it is real. In practice, in the few cases where they have been directly compared, hypothetical and real choices tend to match closely.17 But there are some examples where hypothetical and real decisions diverge.18 These tend to involve rewards or punishments that are sensory, immediate, and what we will recognize later as Pavlovian (Chapter 8).
A classic example of this I call the parable of the jellybeans. Sarah Boysen and Gary Berntson tried to train chimpanzees to choose the humble portion.19 They offered the subject two trays of jellybeans. If he reached for the larger tray, he got the smaller one and the larger one was given to another chimpanzee; however, if the deciding chimpanzee reached for the smaller tray, he got the larger tray and the smaller one was given to another chimpanzee. When presented with symbols (Arabic numerals that they had previously been trained to associate with numbers of jellybeans), subjects were able to choose the smaller group, but when the choices were physical jellybeans, they were unable to prevent themselves from reaching for the larger group of jellybeans. Other experiments have found that more linguistically capable animals are more able to perform these self-control behaviors.20 This may be akin to our ability to talk ourselves out of doing things that we feel we really want: “I know I’m craving that cigarette. But I don’t want it. I really don’t.”
A similar experiment is known colloquially as the marshmallow experiment.21 Put a single marshmallow in front of a child sitting at the kitchen table. Tell the child that if the marshmallow is still sitting there in five minutes, you’ll add a second marshmallow to it. Then leave the room. It is very, very difficult for children not to reach for the marshmallow. It is much easier for children to wait for two pennies or two tokens than for two marshmallows. We will discuss the marshmallow experiment in detail in the chapter on self-control (Chapter 15).
Studies of decision-making in psychology (such as the marshmallow experiment), as well as studies in behavioral economics and the new field of neuroeconomics, tend to measure choices as selections among a small set of discrete options. In the real world, we are rarely faced with a discrete set of options. Whether it is deciding when to swing the bat to hit a baseball or deciding where to run on a playground, we are continuously interacting with the world.22 As we will see later, some of the mechanisms that select the actions we take are not always deliberative, and do not always entail discrete choices.
So where does that leave us in our search for a definition of decision-making? We will not assume that all decision-making is rational. We will not assume that all decision-making is deliberative. We will not assume that decision-making requires language. Instead, we define decision-making as the selection of an action. Obviously, many of the decisions we take (such as pushing a lever or button—say on a soda machine) are actual actions. But note that even complex decisions always end in taking an action. For example, buying a house entails signing a form. Getting married entails making a physical statement (saying “I do”). We are going to be less concerned with the physical muscle movements of the action than with the selection process that decides which action to take. Nevertheless, defining “decision-making” as “action-selection” will force us to directly observe the decisions made. It will allow us to ask why we take the actions we actually do. Why don’t those actions always match our stated intentions? How do we choose those actions over other potential actions? What are the systems that select actions, and how do those systems interact with the world? How do they break down? What are their vulnerabilities and failure modes? That is what the rest of this book is about.