2
The Tale of the Thermostat

Your brain is a decision-making machine, a complex but physical thing. Like any physical process, decision-making can go wrong in multiple ways. Being a physical being does not diminish who you are, but it can explain some of the irrational choices you make.

The decision-making that you do arises from the physical processes that occur in your brain. Because your brain is a physical thing, it has vulnerabilities, what an engineer would call “failure modes.” We see these vulnerabilities in our susceptibility to bad choices (Do you really need a new sports car?), in our susceptibility to addictions (Why can’t you just quit smoking those cigarettes?), and in our inability to control our emotions or our habits (Why do you get so angry about things you can’t change?). We would like to believe that we are rational creatures, capable of logic, always choosing what’s best for us. But anyone who has observed their own decisions (or those of their friends) will recognize that this is simply not true. We are very complex decision-making machines, and sometimes those decision-making processes perplex us. To understand those vulnerabilities, and thereby to understand ourselves, we need to understand the mechanism of decision-making in our brains.

Today, we are all familiar with complex machines, even complex machines that make decisions. The simplest machine that makes a decision is the thermostat—when the house is too hot, the thermostat turns on the air conditioning to cool it down, and when the house is too cold, the thermostat turns on the furnace to heat it up. This process is called negative feedback—the thermostat acts to oppose the difference between the temperature of the room and the temperature you’d like it to be (the set-point). But is the process really so simple? Taking the thermostat’s decision-making apart suggests that it is not.

The thermostat has three key components of decision-making that we will come back to again and again in this book. First, it perceives the world—the thermostat has a sensor that detects the temperature. Second, it determines what needs to be done—it compares that sensor reading to the set-point and decides that it needs to increase the temperature because the house is too cold, needs to decrease the temperature because the house is too hot, or doesn’t need to do anything because the temperature is just right. Finally, it takes an action—it turns on either the furnace or the air conditioning.
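To make those three components concrete, here is a minimal sketch of a thermostat’s decision loop, written in Python. The class, its names, and the one-degree tolerance are mine, invented for illustration; no real device’s firmware is this simple.

```python
# A minimal sketch of a thermostat's decision loop: sense, compare, act.
# All names and thresholds here are invented for illustration.

class Thermostat:
    def __init__(self, set_point, deadband=1.0):
        self.set_point = set_point  # the temperature we want
        self.deadband = deadband    # how far off we tolerate before acting

    def decide(self, sensed_temp):
        """Negative feedback: act to oppose the difference between
        the sensed temperature and the set-point."""
        error = sensed_temp - self.set_point
        if error > self.deadband:
            return "cool"   # too hot: turn on the air conditioning
        if error < -self.deadband:
            return "heat"   # too cold: turn on the furnace
        return "off"        # just right: do nothing
```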

In the artificial intelligence literature, there was an argument through much of the 1980s about whether a thermostat could have a “belief.”1 Fundamentally, a belief is a (potentially incorrect) representation of the world. Clearly, a working thermostat requires a representation of the target temperature in order to take actions reflecting the temperature of the outside world. But can we really say that the thermostat “recognizes” the temperature of the outside world? The key to answering this question is that the thermostat does not take actions based on the temperature of the world, but rather on its internal representation of that temperature. Notice that the internal representation might differ from the real temperature of the room. If the sensor is wrong, the thermostat could believe that the room is warmer or cooler than it really is and take the wrong action.
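Continuing the sketch above, a miscalibrated sensor shows how the thermostat acts on its belief rather than on the world. The numbers are hypothetical.

```python
# The thermostat acts on its belief (the sensor reading), not on the
# room itself. Hypothetical numbers for illustration.

thermostat = Thermostat(set_point=68)

actual_room_temp = 60                       # the world: too cold
sensor_bias = 10                            # a sensor that reads 10 degrees high
believed_temp = actual_room_temp + sensor_bias

print(thermostat.decide(believed_temp))     # "cool" -- the wrong action
print(thermostat.decide(actual_room_temp))  # "heat" -- the right one
```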

One of the key points in this book is that knowing how the brain works gives us a better picture of what has happened when something goes wrong. I live in Minnesota. In the middle of winter, it can get very cold outside. Imagine you wake up one morning to find your bedroom is cold. Something is wrong. But simply saying that the thermostat is broken won’t help you fix the problem. We need to identify the problem, to diagnose it, if you will.

Maybe something is wrong with the thermostat’s perception. Perhaps the sensor is broken and is reporting the wrong temperature. In this case, the thermostat could think that the house is fine even though it is too cold. (Notice the importance of belief here—the difference between the thermostat’s internal representation and the actual temperature can have a big impact on how well the thermostat makes its decision!) Maybe the set-point is set to the wrong temperature. This means that the thermostat is working properly—it has correctly moved the temperature of your house to the set-point, but that’s not what you wanted. Or maybe there’s something wrong with the actions available to the thermostat. If the furnace is broken, the thermostat may be sending the signal saying “heat the house,” but the house will not heat up. Each of these problems requires a different solution. Just as knowing how a thermostat works is critical to fixing a bedroom that is too cold, when smokers say that they really want to quit smoking but can’t, we need to know where each individual’s decision-making process has gone wrong or we won’t be able to help. Before we can identify where the decision-making process has broken down, we’re going to need to understand how the different parts of the brain work together to make decisions.
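In code, the diagnosis amounts to checking each stage of the decision loop in turn. This is a sketch under invented assumptions: that we can compare the sensor against a trusted thermometer, read the set-point, and watch whether the furnace responds.

```python
# A sketch of diagnosing the cold bedroom by checking each stage of the
# thermostat's decision-making. The checks, names, and the 2-degree
# tolerance are all invented for illustration.

def diagnose(room_temp, sensed_temp, set_point, desired_temp, furnace_runs):
    """Return which stage of the decision-making process failed."""
    if abs(sensed_temp - room_temp) > 2:
        return "perception fault: the sensor disagrees with a trusted thermometer"
    if set_point != desired_temp:
        return "comparison fault: the set-point is not the temperature you wanted"
    if not furnace_runs:
        return "action fault: the furnace is not answering the heat signal"
    return "no fault found in perception, comparison, or action"

print(diagnose(room_temp=55, sensed_temp=68, set_point=68,
               desired_temp=68, furnace_runs=True))
```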

Many readers will object at this point that people are much more complicated than thermostats. (And we are.) Many readers will then conclude that people are not machines. Back when negative-feedback devices like the thermostat were the most complicated decision-making machines we had, it was easy to dismiss negative feedback as too simple a model for understanding people. However, as we will see later in the book, we now know of much more complicated mechanisms that can make decisions. Are these more complicated mechanisms capable of explaining human decision-making? (I will argue that they are.) This leaves open some difficult questions: Can we be machines and still be conscious? Can we be machines and still be human?

The concept of conscious machines making decisions pervades modern science fiction, including the droids C-3PO and R2-D2 in Star Wars, the android Data of Star Trek: The Next Generation, the desperate replicants of Ridley Scott’s Blade Runner, and the emotionally troubled Cylons of Battlestar Galactica. Star Trek: The Next Generation spent an entire episode (“The Measure of a Man”) on the question of whether the fact that Data was a machine deprived him of the right to decide for himself whether or not to allow himself to be disassembled. In the episode, the judge concludes the trial with a speech that directly addresses this question—“Is Data a machine? Yes. Is he the property of Starfleet? No. We have all been dancing around the basic issue: does Data have a soul? I don’t know that he has. I don’t know that I have. But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose.”2 We will examine the complex questions of self and consciousness in detail at the end of the book (The Conundrum of Robotics, Chapter 24), after we have discussed the mechanisms of decision-making. In the interim, I aim to convince you that we can understand the mechanisms of our decision-making process without losing the freedom to choose.

I don’t want you to take the actual mechanism of the thermostat as the key to the story here, any more than we would take jet planes as good models of how swans fly. And yet both planes and swans fly through physical forces generated by the flow of air over their specially shaped wings. Even though bird wings are bone, muscle, and feathers, while airplane wings are metal, for both, lift is generated by airflow over the wings, and airflow is generated by speed through the air. The forward push through the air is generated differently, but if we can understand what enables a 30-ton airplane to fly, we will have a better understanding of how a 30-pound swan can fly. In the same way, we will use analogous methods to identify and fix problems in thermostats and in ourselves, because both make decisions through identifiable computational processes.

A good way to identify where a system (like a thermostat) has broken down is a process called “differential diagnosis”: What are the questions that will differentiate the possible diagnoses? My favorite example of this is the show Car Talk on National Public Radio, in which a pair of MIT-trained auto mechanics (Tom and Ray Magliozzi) diagnose car troubles. When a caller calls in with a problem, the first things they discuss are the basics of the problem. (Anyone who has actually listened to Car Talk will know that the first things Tom and Ray discuss are the caller’s name, where the caller is from, and some completely unrelated jokes. But once they get down to discussing cars, they follow a very clear process of differential diagnosis.) A typical call might start with the caller describing the problem—“I hear a nasty sound when I’m driving.” And then Tom and Ray will get down to business—they’ll ask questions about the sound: “What is the sound? Where is it coming from? Does it get faster as you go faster? Does it still happen when the engine is on but the car is not moving?” Each question limits the set of possible problems. By asking the right series of questions, one can progressively work one’s way to identifying what’s wrong with the car. If we could organize these questions into a series of rules, then we could write a computer program to solve our car problems. (Of course, then we wouldn’t get to hear all the Car Talk jokes. Whether this is good or bad is a question of taste.)
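Such a program might look like the toy below, which encodes a few Car Talk-style questions as a chain of yes/no rules. The questions and the conclusions are invented examples, not a real diagnostic database.

```python
# A toy differential diagnosis: each yes/no question prunes the set of
# possible problems. Questions and conclusions are invented examples.

def diagnose_noise(ask):
    """ask(question) -> bool. Work down the rules until one fires."""
    if not ask("Do you hear the sound while driving?"):
        return "no sound, no problem to chase"
    if ask("Does it still happen with the engine on but the car parked?"):
        return "engine side: check the belts and pulleys"
    if ask("Does it get faster as you go faster?"):
        return "wheels or drivetrain: check the bearings and CV joints"
    return "body or suspension: check for loose trim and worn bushings"

# Answer the questions at the keyboard.
conclusion = diagnose_noise(lambda q: input(q + " (y/n) ").strip().lower() == "y")
print(conclusion)
```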

In the 1980s, the field of artificial intelligence developed “expert systems” that codified how to arrange these sorts of question-and-answer rules to perform differential diagnosis.3 At the time, expert systems were hailed as the answer to intelligence—they could make decisions as well as (or better than) experts. But it turns out that most humans don’t make decisions by differential diagnosis. In a lot of fields (including, for example, medicine), a large part of the training entails trying to teach people to make decisions by these highly rational rules.4 However, just because it is hard for people to make decisions by rule-based differential diagnosis does not mean that humans don’t have a mechanism for making decisions. In fact, long before it was known that humans don’t work this way, critics were already complaining that the expert systems developed by artificial intelligence were not getting at the real question of what it means to be “intelligent.”5 Because people felt that we understood how expert systems worked, they concluded that expert systems could not be intelligent. A classmate in college once said to me, “We will never develop an artificial intelligence. Instead, we will recognize that humans are not intelligent.” One goal of this book is to argue that we can recognize the mechanisms of human decision-making without losing our sense of wonder at the marvel that is human intelligence.

Some readers will complain that people are not machines; they have goals, they have plans, they have personalities. Because we are social creatures and much of our intelligence is dedicated to understanding each other, we have a tendency to attribute agency to any object that behaves in a complex manner.6 Many of my friends name their cars and talk about the personality of their cars. My son named our new GPS navigation system “Dot.” When asked why he named the GPS (the voice is definitely a woman’s), he said, “So we can complain to her when she gets lost—‘Darn you, Dot!’”

A GPS navigator has goals. (These are goals we’ve programmed in, but they are goals nonetheless.) Dot’s internal computer uses her knowledge of maps and her estimate of her current location to make plans to achieve those goals. You can even tell Dot whether you prefer the plans to include more highways or more back-country scenic roads. If Dot were prewired to prefer highways or back-country scenic roads, we would say she had a clear personality. In fact, I wish I had some of Dot’s personality—when we miss an exit, Dot doesn’t complain or curse; she just says “recalculating” and plans a new route to her goal.
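To see how a “personality” could be nothing more than a setting, here is a sketch in which the same planner, given different preference weights, picks different routes. The routes, costs, and weights are all made up for illustration.

```python
# A sketch of route preference as a behavioral variable: one planner,
# two weights, two "personalities." All numbers are invented.

def route_cost(route, highway_weight):
    # A route is (minutes on highways, minutes on back roads).
    highway_minutes, back_road_minutes = route
    return (highway_minutes * highway_weight
            + back_road_minutes * (2.0 - highway_weight))

def plan(routes, highway_weight):
    """Pick the route with the lowest subjective cost."""
    return min(routes, key=lambda route: route_cost(route, highway_weight))

routes = [(50, 5), (10, 55)]             # mostly-highway vs. mostly-scenic
print(plan(routes, highway_weight=0.5))  # a Dot who loves highways: (50, 5)
print(plan(routes, highway_weight=1.5))  # a Dot who loves scenery: (10, 55)
```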

The claim that computers can have goals, and that differences in how they reach those goals reflect their personalities, suggests that goals and plans are simple to construct and that personality is simply a difference in underlying behavioral variables and preferences. We explain complex machines by thinking of them as being like people. Is it fair to turn that on its head and explain people as complex machines?

In this book, I’m going to argue that the answer to this question is “yes”—your brain is a decision-making machine, albeit a very complex one. You are that decision-making machine. This doesn’t mean you’re not conscious. This doesn’t mean you’re not you. But it can explain some of the irrational things that you do.

Understanding how the human decision-making system works has enormous implications for understanding who we are, what we do, and why we do what we do. Scientists study brains, they study decision-making, and they study machines. By bringing these three things together, we will begin to get a sense of ourselves. In this book, I will discuss what we know about how brains work, what we know about how we make decisions, and what we know about how that decision-making machine can break down under certain conditions, producing irrationality, impulsivity, and even addiction.