CHAPTER 10

WHO’S IN CHARGE ANYWAY? THE ZOMBIE INSIDE YOU

Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

—DAVID HUME

YOU ARE ABOUT TO LEAVE YOUR HOUSE, BUT BEFORE you open the door you consider whether you should bring an umbrella with you or not. There is a perfectly rational way to assess the likelihood of rain based on the best available information. For instance, let’s assume that you’ve noticed dark clouds outside and that a bit of research (probably on the Internet) shows you that whenever it rains there is a 90 percent chance that that type of cloud is around. You also find out, however, that when it does not rain, there is still a 30 percent chance of seeing the same type of clouds in the sky. (Notice that these are not complementary events, so their probabilities don’t add up to 100 percent.) Should you take the umbrella?

Your intuitive answer is probably yes, and you would be right. Statistical theory shows that the proper, formal way to assess the chance of rain in this situation is to take the base-10 logarithm of the ratio of the two probabilities: log(90/30) ≈ 0.48. Since this number is greater than zero, you should in fact pick up your umbrella on your way out. Had the probabilities been the other way around (log(30/90) ≈ −0.48), the result would have been a negative number, so the odds would have been against rain and your best bet would have been to leave the umbrella at home. (Unless you have a phobia about getting wet, in which case you should carry an umbrella with you at all times.)
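For readers who like to see the arithmetic spelled out, the umbrella decision can be sketched in a few lines of Python. The function name and structure here are mine, introduced purely for illustration; only the two probabilities come from the example above:

```python
import math

def log_likelihood_ratio(p_cue_given_rain, p_cue_given_no_rain):
    """Base-10 logarithm of the ratio of the two conditional probabilities."""
    return math.log10(p_cue_given_rain / p_cue_given_no_rain)

# The chapter's numbers: dark clouds are seen 90 percent of the time
# when it rains, but also 30 percent of the time when it does not.
llr = log_likelihood_ratio(0.90, 0.30)

print(round(llr, 2))  # 0.48
print("take the umbrella" if llr > 0 else "leave it at home")
```

A positive result favors rain and a negative one favors no rain, which is why flipping the two probabilities flips the sign and the decision.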

But surely nobody in his right mind would engage in these or more complex calculations before making such a simple decision as whether to pick up an umbrella? You would be surprised. This example was put together by Paul Cisek of the Department of Physiology at the University of Montreal to explain an interesting finding of recent neurobiological research: it turns out that your brain has neurons that specialize in precisely this sort of calculation, except that you are not consciously aware of what is going on. It is as if a zombie inside you is in control of your operations and decisions, and “you” (meaning your conscious self) realize what is going on only after the fact.

The research that Cisek was commenting on, published by T. Yang and M. N. Shadlen in Nature in 2007, was conducted on monkeys instead of humans, and it had to do with how the animals interpreted symbolic information—information exactly analogous to your knowledge of the probability of rain when you observe certain clouds. The monkeys were exposed to two targets, a green one and a red one, and one of them was associated with a reward. The animals had to guess which target to choose based on cues provided in the form of geometrical figures that were probabilistic predictors of a given reward. For example, a triangle appeared in only 5 percent of instances where the red target was correct (that is, when it was associated with the reward) but in 50 percent of instances when the green target provided the reward. If the monkeys were capable of what philosophers call “probabilistic inference,” they should have regarded triangles as a cue indicating green targets. Not only was that in fact the case, but the researchers showed that when more complex cues were given, the monkeys behaved as if they were deploying the concept of the logarithm of the likelihood ratio—the very same one you used to decide whether to pick up an umbrella on your way out the door.

But monkeys don’t know about logarithms. Indeed, many human beings don’t know about or understand logarithms—let alone the theory behind probabilistic inference—so how is this possible? It’s possible because your brain does it all for you (and the monkey) without any need of your conscious awareness! Yang and Shadlen showed that there is a very tight correlation between the activity of certain neurons (called LIP neurons, after the lateral intraparietal area of the brain where they are located) and the logarithm of the likelihood ratio of the cues provided during the experiment. That is, the higher the value of the logarithm, the stronger the activity of the LIP neurons, as if the monkeys had a built-in inferential calculator in their brains that allowed them to use the available information most efficiently and to get the reward more often than not.
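The trick the LIP neurons appear to exploit—adding the logarithms of likelihood ratios rather than multiplying the ratios themselves—can also be sketched in Python. The cue shapes and probabilities below are illustrative, loosely modeled on the Yang and Shadlen design rather than taken from their paper (only the triangle’s 5 percent and 50 percent figures come from the text above):

```python
import math

# For each shape:
# (P(shape appears | red is rewarded), P(shape appears | green is rewarded))
# The triangle's numbers are from the example in the text; the others are
# made up for illustration.
cues = {
    "triangle": (0.05, 0.50),
    "square":   (0.40, 0.20),
    "circle":   (0.30, 0.30),
}

def evidence_for_green(shapes):
    """Sum the log-likelihood ratios contributed by independent cues.

    Adding the logs is equivalent to multiplying the likelihood ratios;
    this running sum is the quantity the LIP firing rates tracked.
    """
    return sum(math.log10(p_green / p_red)
               for p_red, p_green in (cues[s] for s in shapes))

# A triangle strongly favors green (log10(0.50/0.05) = +1.0), while a
# square mildly favors red (log10(0.20/0.40) is about -0.30).  The sum
# is still positive, so the best guess is green.
total = evidence_for_green(["triangle", "square"])
print("choose green" if total > 0 else "choose red")
```

A circle, being equally likely under either reward, contributes exactly zero evidence, which is why combining independent cues by summation handles uninformative ones gracefully.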

To some extent, all of us become aware of having a “zombie moment” at one time or another. A baseball player hitting a fastball pitched at ninety miles per hour does not have time to solve a complex set of differential equations to tell him where and when to swing the bat. And yet his brain seems to do the calculations for him without conscious input. Indeed, if consciousness were required in baseball, the game would be even slower than it already is, because it takes hundreds of milliseconds (an eternity in most sports) for consciousness to get the body into swinging mode.

I used to live near the Brooklyn Bridge, on the Brooklyn side, and often had to cross a double pedestrian light before reaching my apartment. The light farthest from me always turned green for pedestrians before the near light (because of the convoluted traffic patterns at the entrance of the bridge). Although I had to wait for the second light to turn green before I could safely cross, I often caught myself with a foot already off the sidewalk, as if an internal autopilot had seen the first light going green and had automatically (though erroneously, in that case) equated green with “go.” Usually I became conscious of this mistaken (and potentially very dangerous) decision in time, stopped my leg from moving further, and patiently waited for the second light to turn green before crossing.

This experience illustrates an idea that Libet developed in the years after his famous 1983 experiment (see Chapter 9): perhaps the role of consciousness is not to engage in constant evaluation and decision-making, which would be too inefficient in most practical situations, but rather to monitor the internal zombie’s activities (becoming “aware” of its output) and occasionally to exercise a veto or to redirect actions, improving on the zombie’s fast but sometimes inaccurate decisions.

Our autopilot zombie, then, can be very useful, especially when it comes to complex operations like hitting a baseball or calculating the odds of rain. The problem is that the more we learn about our subconscious decision-making mechanisms, the more we find out that they can be easily manipulated by others, without our noticing. We encountered an example of this phenomenon, called “priming,” in Chapter 6 when I told you about the experiment showing that thinking of the last digits of your social security number (if they’re high) makes you more prone to overpay for a given item. The same phenomenon was at play when another set of researchers showed that the outcome of a job interview (or a date) is affected by holding a cold or a hot liquid (engendering “cold” or “warm” reactions toward the interviewee or the date).

There is direct neurobiological evidence that a lot of our decision-making is done by subconscious processing of information, even when we think we are processing information consciously. In an experiment published in Science magazine in May 2007, a group of neuroscientists scanned the brains of players of a video game. The participants had to squeeze a handgrip whenever they saw the image of some currency on the screen, and they were told to squeeze it tighter the more valuable the currency was (for example, a pound note versus a penny). The interesting twist to the experiment was that some of the images were visible long enough to register consciously, while others flashed rapidly in and out of the screen and were perceived only subliminally. In both cases, however, the same region of the brain, the ventral pallidum, was activated when the subjects squeezed the handgrip. This is surprising because the ventral pallidum is an area of the brain that is evolutionarily very ancient and is not involved in conscious thinking. The implication is that the reactions to both subliminal and conscious imagery were decided subconsciously. The prefrontal cortex, where conscious thought originates, was literally the last to know.

The idea that there are several components to the human mind, operating in a quasi-independent manner that sometimes brings them into conflict, is of course not new at all. In 1920 Freud published an essay entitled “Beyond the Pleasure Principle” in which he presented his theory of a tripartite mind, with the parts labeled “id,” “ego,” and “superego.” The id for Freud was the seat of sensual drives, particularly but not exclusively sexual ones. The superego represented the conscience and embodied social norms. The ego was the intermediary between the other two, mediating the constraints imposed by external reality. Freud’s ideas had no empirical neurobiological basis (and today’s research shows a much more complex picture of the relationships between conscious and unconscious mental processing) and were instead rooted, surprisingly, in philosophy.

Perhaps the most ancient theory of the mind as made up of multiple interacting parts is Plato’s concept of the “soul,” as presented in two of his dialogues, the Phaedo and the Republic. The two dialogues present two versions of the theory, so here I will briefly discuss the one articulated in the Republic, since it is the more mature version and the one that is pertinent to our discussion. Plato’s aim in the Republic, as the title may suggest, was not really to investigate how the human mind works, but rather to pursue the question of how we should build an ideal state. Yet, the Greek philosopher draws a direct parallel between the state and the individual human beings who are parts of the state, suggesting that a just state is made possible by the harmonious balance of its component parts, just as a happy person is the result of a balance achieved among the components of his soul. Hence the somewhat strange idea of analyzing what makes for a balanced soul in order to draw conclusions about what makes for a just state.

The concept of “soul” in ancient Greece was varied and complex and did not necessarily include the idea of survival after death. (It did for Plato, but not for his pupil Aristotle.) For the purposes of our discussion, we can roughly equate the soul that Plato was talking about with our (sometimes equally fuzzy) idea of mind. For Plato, the soul/mind has an appetitive part, a spirited part, and a rational part. The appetitive component is concerned with the satisfaction of fundamental instinctual desires for food, water, or sex. In this sense, the appetitive soul is directly analogous to Freud’s id. The spirited part of the soul deals with self-preservation and is also the source of courage (on the positive side) and anger or envy (on the negative side). The rational component, as the name implies, is the seat of wisdom and higher-level thought and is concerned with issues of truth. The analogy between the spirited and rational parts of the soul, on the one hand, and Freud’s ego and superego, on the other, is a bit less obvious, though the parallel is not that far-fetched either.

The reason it is interesting to consider Plato’s view of the soul, however, lies not in the details of the philosopher’s tripartition, but in the idea that the various components are often in conflict with one another and that there is a hierarchy of importance among them. For Plato, a good life is possible only if the three parts of the soul are in equilibrium, and this equilibrium is most certainly not a democracy (and not surprisingly, neither is the ideal state that he ends up advocating in the Republic). Instead, for Plato the rational soul ought to be in charge and keep the appetitive one in check, helped in this by the spirited soul. In the Phaedrus, Plato compares the soul to a chariot: a wild black horse represents the appetitive soul, flanked by a more noble white horse (the spirited soul), and both are under the stern command of the charioteer (the rational soul). So here we have a philosophical—and colorfully presented—theory of what makes for a balanced human being: such a human being is one who cultivates reason over passion as a guide to her life. In modern terms, if we can stretch our interpretation of Plato a bit, the philosopher would claim that conscious thinking ought to guide and keep in check subconscious instincts. As we have seen, however, although the higher brain functions have some room to exercise their veto power over the zombie inside us, it is beginning to look like it is Plato’s horses that steer the charioteer, not the other way around.

But perhaps this state of affairs is not such a bad thing. Another influential philosopher, David Hume (who wrote some twenty centuries after Plato), turned the tables and claimed not only that “reason alone can never be a motive to any action of the will” but most famously that “reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume was no defender of irrationality. Indeed, he was friends with the French philosophers of the Enlightenment (also called the Age of Reason), and he is remembered for his exquisitely well-argued writings on matters of morals, politics, and science. What, then, could Hume possibly mean by saying that reason not only is (as a matter of fact) but ought to be the slave of the passions? Hume was a keen observer of human nature, and he realized that we do things because we have motivations, but that motivations come out of “passions” (emotional drives), not reason.

Suppose I stop typing this on my laptop’s keyboard, get up from my chair, go to the refrigerator, open the door, pick up a bottle of water, and start drinking. I don’t do all of this because my reason is telling me that if I do not I will eventually get dehydrated, lose my concentration, and possibly die of thirst. I do it because I’m thirsty—that is, I have a feeling of thirst that was registered by my brain and the decision was made by my inner zombie to act on it. In this scenario, there’s no need to invoke conscious, rational decision-making.

Hume generalized this principle to apply not just to obvious instances, such as my little example here, but to pretty much anything of consequence we do as human beings. As he put it: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. ’Tis not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of a . . . person wholly unknown to me. ’Tis as little contrary to reason to prefer even my own acknowledg’d lesser good to my greater, and have a more ardent affection for the former than for the latter.” As counterintuitive as Hume’s ideas are, and as much as they went in direct opposition to a long tradition in philosophy stemming from Plato and continuing to our day, modern neurobiology seems to vindicate Hume when it portrays reason as the instrumental tool deployed to achieve our desires, with the fundamental engine generating those desires lying much lower than the cerebral cortex.

Hume went a step further and famously maintained that morality itself is not arrived at by logical argument (again, contrary to what most philosophers had argued before and have since), but is rather the outcome of our emotional reactions. Let us consider an example. Most people today feel that slavery is a repellent and morally wrong practice, but of course that has not always been the case in human history. Now, one can in fact build a logical argument against the practice of slavery to show that it is wrong from a rational perspective. But such an argument would have to rely on certain premises that are not, in themselves, easy to defend on rational grounds. For instance, one could say that it is wrong to limit the freedom of other human beings, or that we should not force on others what we would not want others to force upon us. Yet a hypothetical defender of slavery could counter such arguments with a logic of his own: that it is rational to limit some people’s freedoms so that a stronger and more prosperous society can be built, or that it is acceptable to force others to do our bidding if we have the might to impose it on them, and so on. You may not find such proslavery arguments convincing (I certainly hope you don’t!), but the point is that one can rationally argue both sides of the debate and that ultimately our moral sense derives from how we feel about slavery, with our arguments elaborating on that feeling, not determining it.

Earlier (in Chapter 4), we encountered the idea that morality emerged out of an evolutionary process that first shaped our ancestors’ instincts (and hence their emotional reactions) and only much later on developed into the complex practices of modern human societies. Hume obviously was not thinking of evolution (he wrote before Darwin), but he would have probably been very pleased with the idea nonetheless. In fact, another intriguing example of modern research in cognitive science shedding some light on philosophical concepts manages, once again, to vindicate Hume. A study published in Science magazine in February 2009 and conducted by H. A. Chapman, D. A. Kim, J. M. Susskind, and A. K. Anderson made a surprising connection between physical disgust and moral disapproval, hinting that there might be a deep connection between basic human emotions and our more sophisticated moral judgment. The authors showed that a primary muscle involved in expressions of disgust elicited by bad tastes or smells or by disease is also activated when we experience moral disgust—for example, when we are treated unfairly.

This so-called oral-to-moral hypothesis is still a bit speculative, but it is in agreement with a key prediction of evolutionary theory: evolution builds on previously existing mechanisms and structures and recycles them for new functions. In this instance, the primordial response, common to many mammals, is a reaction of disgust to the experience of bitter foods, which in nature are often poisonous. The idea is that the evolution of moral reactions was facilitated by co-opting this preexisting “oral” route to disgust and using much of the same brain and muscle apparatus to express rejection of complex biologically dangerous (and hence morally reprehensible) behaviors like incest. The last step would then have been to co-opt again the same physiological machinery for the expression of morally even “higher” levels of disgust, such as those elicited by discriminatory and unjust behavior (say, being shortchanged in a financial transaction).

The connection between deliberative reasoning and emotional impulse, which Plato thought ought to be supervised by our rational self and Hume concluded was instead controlled by our passions, can go wrong when people pathologically succumb to their impulses. About 9 percent of Americans, it turns out, have a problem with impulsive behavior that leads them to make rash decisions about their lives that they are likely to regret, to have trouble planning for their future, or to engage in self-destructive behaviors like drug and alcohol abuse.

Neurobiologists know that impulse suppression is located in an area of the prefrontal cortex of the brain known as the dorsal anterior cingulate. You can think of it as the brain’s braking system, and it does not mature completely until the end of adolescence—which can go a long way toward explaining what cognitive scientists call “risk-taking” behavior in young adults. Interestingly, there are genetic effects on the ability of the dorsal anterior cingulate to do its job. For instance, remember MAO-A, the gene associated with psychopathology that we encountered when we examined the story of Jim Fallon in Chapter 3? It produces an enzyme that breaks down serotonin in the brain, a neurotransmitter that affects our mood (including how hungry we are, or how angry). A variant form of MAO-A has been linked to excessive impulsive behavior, leading neuroscientists to perform brain scans of subjects with the normal version of the gene and of others with the high-risk version while they were engaged in a video game that tested their propensity for impulsive decisions. Intriguingly, people with the high-risk variant of MAO-A also showed lower activity in the very same dorsal anterior cingulate that keeps impulsive behavior in check.

Then again, the problem is not all in our genes—far from it. The same effect of letting go of the brain’s brakes, thereby succumbing to emotional impulses, can also be brought about by external or environmental circumstances that we can exercise only partial control over, if any, such as stress or alcohol and drug abuse. In a further twist that is scientifically intriguing but makes helping people much more difficult, external conditions interact in multiple ways with genetic ones to generate what scientists call “gene-environment interactions.”

All of this complexity notwithstanding, we should by now have gained a dose of humility about how much in charge of our lives we really are, at least if by “we” we mean our conscious self. It will come as no surprise in the next chapter, then, that things are particularly messy in the conscious-versus-unconscious department when it comes to one of the most vital and most meaningful components of our emotional lives: love.