The secret of rulership is to combine a belief in one’s own infallibility with the power to learn from past mistakes. —GEORGE ORWELL
IN 2005 HURRICANE Katrina devastated the Gulf Coast of Louisiana and Mississippi. More than a thousand people lost their lives, and hundreds of thousands of others were displaced. New Orleans was flooded, with some parts of the city covered by fifteen feet of water. The U.S. government’s response was, by all accounts, badly botched. Well, by almost all accounts. When Michael Brown, the head of the Federal Emergency Management Agency, was accused of mismanagement and a lack of leadership, and Congress convened a panel to investigate, did the inexperienced Brown admit to any shortcomings? No, he said the poor response was “clearly the fault of a lack of coordination and planning by Louisiana governor Kathleen Blanco and New Orleans mayor Ray Nagin.” In fact, in his own mind, Brown seemed to be some sort of tragic, Cassandra-like figure: “I predicted privately for several years,” he said, “that we were going to reach this point [of crisis] because of the lack of resources and the lack of attention being paid.”1
Perhaps in his heart Brown accepted more responsibility. Perhaps these public statements were simply an awkward attempt to plea-bargain the public accusations against him down from negligence to impotence. Disingenuousness is a little harder to argue in the case of O. J. Simpson, the former sports hero accused of murdering two people but acquitted in criminal court. Afterward, he couldn’t seem to stay out of trouble. Finally, in 2007 he and a couple of buddies burst into a Las Vegas hotel room and seized sports memorabilia from dealers at gunpoint. At his sentencing, O.J. had a chance to apologize and ask the judge for leniency. He would certainly have had strong motive for a bit of either honest or phony self-criticism. But did he do the self-serving thing and, in an attempt to cut a few years off his sentence, express regret for behaving as a criminal? No, he stood his ground. His answer was sincere. He was sorry for his actions, he said, but he believed he had done nothing wrong. Even with years of prison at stake, Simpson felt the need to justify himself.
The stronger the threat to feeling good about yourself, it seems, the greater the tendency to view reality through a distorting lens. In his classic book How to Win Friends and Influence People, Dale Carnegie described the self-images of famous mobsters of the 1930s.2 Dutch Schultz, who terrorized New York, wasn’t shy about murder—and he certainly wouldn’t have been diminished in the eyes of his colleagues in crime had he simply described himself as a man who had built a successful empire by killing people. Instead, he told a newspaper interviewer that he saw himself as a “public benefactor.” Similarly, Al Capone, a purveyor of bootleg alcohol who was responsible for hundreds of killings, said, “I’ve spent the best years of my life giving people the lighter pleasures, helping them have a good time, and all I get is abuse, the existence of a hunted man.” And when a notorious murderer named “Two Gun” Crowley was sentenced to the electric chair for killing a policeman who had asked for his driver’s license, he didn’t express sadness over taking another man’s life. Rather, he complained, “This is what I get for defending myself.”
Do we really believe the enhanced versions of ourselves that we offer up to our audiences? Do we manage to convince ourselves that our corporate strategy was brilliant even though revenue has plummeted, that we deserve our $50 million exit package when the company we led lost twenty times that amount in the three years we ran it, that we argued the case brilliantly, though our client got the chair, or that we are only social smokers, though we go through the same pack a day whether we see another human being or not? How accurately do we perceive ourselves?
Consider a survey of nearly one million high school seniors.3 When asked to judge their ability to get along with others, 100 percent rated themselves as at least average, 60 percent rated themselves in the top 10 percent, and 25 percent considered themselves in the top 1 percent. And when asked about their leadership skills, only 2 percent assessed themselves as being below average. Teachers aren’t any more realistic: 94 percent of college professors say they do above-average work.4
Psychologists call this tendency for inflated self-assessment the “above-average effect,” and they’ve documented it in contexts ranging from driving ability to managerial skills.5 In engineering, when professionals were asked to rate their performance, between 30 percent and 40 percent put themselves in the top 5 percent.6 In the military, officers’ assessments of their leadership qualities (charisma, intellect, and so on) are far rosier than assessments of them made by their subordinates and superiors.7 In medicine, doctors’ assessments of their interpersonal skill are far higher than the ratings they received from their patients and supervisors, and their estimates of their own knowledge are far higher than objective tests bear out.8 In one study, in fact, physicians who diagnosed their patients as having pneumonia reported an average of 88 percent confidence in that diagnosis but proved correct only 20 percent of the time.9
This kind of inflation is equally the rule in the corporate world. Most business executives think their company is more likely to succeed than the typical company in their business, because it’s theirs,10 and CEOs display overconfidence when entering into new markets or embarking on risky projects.11 One result of this is that when companies acquire other firms, they typically pay 41 percent more for the target firm’s stock than its current price, feeling they can run it more profitably, while the combined value of the merging firms usually falls, indicating that impartial observers feel otherwise.12
Stock pickers, too, are overly optimistic about their ability to choose winners. Overconfidence can even lead otherwise savvy and rational investors to think they can predict when a stock market move will occur despite the fact that, on an intellectual level, they believe otherwise. In fact, in a survey conducted by economist Robert Shiller after the crash on Black Monday in October 1987, about one-third of investors claimed that they had a “pretty good idea when a rebound” would occur, though few, when asked, could offer an explicit theory to back up their confidence in predicting the market’s future.13
Ironically, people tend to recognize that inflated self-assessment and overconfidence can be a problem—but only in others.14 That’s right, we even overestimate our ability to resist overestimating our abilities. What’s going on?
IN 1959, THE social psychologist Milton Rokeach gathered three psychiatric patients to live together in Ypsilanti State Hospital in Michigan.15 Each of the patients believed he was Jesus Christ. Since at least two of them had to be wrong, Rokeach wondered how they would process this idea. There were precedents. In a famous seventeenth-century case a fellow named Simon Morin was sent to a madhouse for making the same claim. There he met another Jesus and “was so struck with the folly of his companion that he acknowledged his own.” Unfortunately, he subsequently reverted to his original belief and, like Jesus, ended up being killed—in this case, burned at the stake for blasphemy. No one was burned in Ypsilanti. One patient, like Morin, relinquished his belief; the second saw the others as mentally ill, but not himself; and the third managed to dodge the issue completely. So in this case, two out of the three patients managed to hang on to a self-image at odds with reality. The disconnect may be less extreme, but the same could be said to be true even of many of us who don’t believe we can walk on water. If we probed—or, in many cases, simply bothered to pay attention—most of us would notice that our self-image and the more objective image that others have of us are not quite in sync.
By the time we were two, most of us had a sense of ourselves as social agents.16 Around the time we learned that diapers are not a desirable fashion statement, we began to actively engage with adults to construct visions of our own past experiences. By kindergarten, we were able to do that without adult help. But we had also learned that people’s behavior is motivated by their desires and beliefs. From that time onward, we’ve had to reconcile the person we would like to be with the person whose thoughts and actions we live with each moment of every day.
I’ve talked a lot about how research psychologists reject much of Freudian theory, but one idea Freudian therapists and experimental psychologists agree on today is that our ego fights fiercely to defend its honor. This agreement is a relatively recent development. For many decades, research psychologists thought of people as detached observers who assess events and then apply reason to discover truth and decipher the nature of the social world.17 We were said to gather data on ourselves and to build our self-images based on generally good and accurate inferences. In that traditional view, a well-adjusted person was thought to be like a scientist of the self, whereas an individual whose self-image was clouded by illusion was regarded as vulnerable to, if not already a victim of, mental illness. Today, we know that the opposite is closer to the truth. Normal and healthy individuals—students, professors, engineers, lieutenant colonels, doctors, business executives—tend to think of themselves as not just competent but proficient, even if they aren’t.
Doesn’t the business executive, noting that her department keeps missing its numbers, question her own abilities? Or the lieutenant colonel, noting that he can’t seem to shed that prefix, wonder whether he’s fit to be a colonel? How do we convince ourselves that we’ve got talent and that when the promotion goes to the other guy, it’s only because the boss was misguided?
As the psychologist Jonathan Haidt put it, there are two ways to get at the truth: the way of the scientist and the way of the lawyer. Scientists gather evidence, look for regularities, form theories explaining their observations, and test them. Attorneys begin with a conclusion they want to convince others of and then seek evidence that supports it, while also attempting to discredit evidence that doesn’t. The human mind is designed to be both a scientist and an attorney, both a conscious seeker of objective truth and an unconscious, impassioned advocate for what we want to believe. Together these approaches vie to create our worldview.
Believing in what you desire to be true and then seeking evidence to justify it doesn’t seem to be the best approach to everyday decisions. For example, if you’re at the races, it is rational to bet on the horse you believe is fastest, but it doesn’t make sense to believe a horse is fastest because you bet on it. Similarly, it makes sense to choose a job you believe is appealing, but it’s irrational to believe a job is appealing because you’ve accepted the offer. Still, even though in each case the latter approach doesn’t make rational sense, it is the irrational choice that would probably make you happier. And the mind generally seems to opt for happy. In both these instances, the research indicates, it is the latter choice that people are likely to make.18 The “causal arrow” in human thought processes consistently tends to point from belief to evidence, not vice versa.19
As it turns out, the brain is a decent scientist but an absolutely outstanding lawyer. The result is that in the struggle to fashion a coherent, convincing view of ourselves and the rest of the world, it is the impassioned advocate that usually wins over the truth seeker. We’ve seen in earlier chapters how the unconscious mind is a master at using limited data to construct a version of the world that appears realistic and complete to its partner, the conscious mind. Visual perception, memory, and even emotion are all constructs, made of a mix of raw, incomplete, and sometimes conflicting data. We use the same kind of creative process to generate our self-image. When we paint our picture of self, our attorney-like unconscious blends fact and illusion, exaggerating our strengths, minimizing our weaknesses, creating a virtually Picassoesque series of distortions in which some parts have been blown up to enormous size (the parts we like) and others shrunk to near invisibility. The rational scientists of our conscious minds then innocently admire the self-portrait, believing it to be a work of photographic accuracy.
Psychologists call the approach taken by our inner advocate “motivated reasoning.” Motivated reasoning helps us to believe in our own goodness and competence, to feel in control, and to generally see ourselves in an overly positive light. It also shapes the way we understand and interpret our environment, especially our social environment, and it helps us justify our preferred beliefs. Still, it isn’t possible for 40 percent to squeeze into the top 5 percent, for 60 percent to squeeze into the top decile, or for 94 percent to be in the top half, so convincing ourselves of our great worth is not always an easy task. Fortunately, in accomplishing it, our minds have a great ally, an aspect of life whose importance we’ve encountered before: ambiguity. Ambiguity creates wiggle room in what may otherwise be inarguable truth, and our unconscious minds employ that wiggle room to build a narrative of ourselves, of others, and of our environment that makes the best of our fate, that fuels us in the good times, and gives us comfort in the bad.
WHAT DO YOU see when you look at the figure below? On first glance, you will see it as either a horse or a seal, but if you keep looking, after a while you will see it as the other creature. And once you’ve seen it both ways, your perception tends to automatically alternate between the two animals. The truth is, the figure is both and it is neither. It is just a suggestive assemblage of lines, a sketch that, like your character, personality, and talents, can be interpreted in different ways.
Attention, Perception & Psychophysics 4, no. 3 (1968), p. 191, “Ambiguity of Form: Old and New,” by Gerald H. Fisher, Fig. 3.2, copyright © 1968 by the Psychonomic Society. Reprinted with kind permission from Springer Science+Business Media B.V.
Earlier I said that ambiguity opened the door to stereotyping, to misjudging people we don’t know very well. It also opens the door to misjudging ourselves. If our talents and expertise, our personality and character were all defined by scientific measurement and carved into inalterable stone tablets, it would be difficult to maintain a biased image of who we are. But our characteristics are more like the horse/seal image, open to differing interpretations.
How easy is it for us to tailor reality to fit our desires? David Dunning has spent years pondering questions like that. A social psychologist at Cornell University, he has devoted much of his professional career to studying how and when people’s perception of reality is shaped by their preferences. Consider the horse/seal image. Dunning and a colleague loaded it onto a computer, recruited dozens of subjects, and provided motivation for them to see it as either a horse or a seal.20 Here is how it worked: The scientists told their subjects that they would be assigned to drink one of two liquids. One was a glass of tasty orange juice. The other was a “health smoothie” that looked and smelled so vile that a number of subjects dropped out rather than face the possibility of tasting it. The participants were told that the identity of the beverage they were to drink would be communicated to them via the computer, which would flash a figure—the image above—on the screen for one second. One second is generally not enough time for a person to see the image both ways, so each subject would see either just a horse or just a seal.21
That’s the key to the experiment, for half the subjects were told that if the figure was a “farm animal,” they were to drink the juice and if it was a “sea creature,” they were to drink the smoothie; the other half were told the reverse. Then, after the subjects had viewed the image, the researchers asked them to identify the animal they’d seen. If the students’ motivations biased their perceptions, the unconscious minds of the subjects who were told that farm animal equals orange juice would bias them toward seeing a horse. Similarly, the unconscious minds of those told that farm animal equals disgusting smoothie would bias them toward seeing the seal. And that’s just what happened: among those hoping to see a farm animal, 67 percent reported seeing a horse, while among those hoping to see a sea creature, 73 percent identified a seal.
Dunning’s study was certainly persuasive about the impact of motivation on perception, but the ambiguity at hand was very clear and simple. Everyday life experiences, by contrast, present issues far more complex than deciding what animal you’re looking at. Talent at running a business or a military unit, the ability to get along with people, the desire to act ethically, and myriad other traits that define us are all complicated qualities. As a result, our unconscious can choose from an entire smorgasbord of interpretations to feed our conscious mind. In the end we feel we are chewing on the facts, though we’ve actually been chomping on a preferred conclusion.
Biased interpretations of ambiguous events are at the heart of some of our most heated arguments. In the 1950s, a pair of psychology professors, one from Princeton, the other from Dartmouth, decided to see if, even a year after the event, Princeton and Dartmouth students would be capable of objectivity about an important football game.22 The game in question was a brutal match in which Dartmouth played especially rough but Princeton came out on top. The scientists showed a group of students from each school a film of the match and asked them to take note of every infraction they spotted, specifying which were “flagrant” or “mild.” Princeton students saw the Dartmouth team commit more than twice as many infractions as their own team, while Dartmouth students counted about an equal number on both sides. Princeton viewers rated most of the Dartmouth fouls as flagrant but few of their own as such, whereas the Dartmouth viewers rated only a few of their own infractions as flagrant but half of Princeton’s. And when asked if Dartmouth was playing intentionally rough or dirty, the vast majority of the Princeton fans said “yes,” while the vast majority of the Dartmouth fans who had a definite opinion said “no.” The researchers wrote, “The same sensory experiences emanating from the football field, transmitted through the visual mechanism to the brain … gave rise to different experiences in different people.… There is no such ‘thing’ as a game existing ‘out there’ in its own right which people merely ‘observe.’ ”
I like that last quote because, though it was written about football, it seems to be true about the game of life in general. Even in my field, science, in which objectivity is worshipped, it is often clear that people’s views of the evidence are highly correlated to their vested interests. For example, in the 1950s and ’60s a debate raged about whether the universe had had a beginning or whether it had always been in existence. One camp supported the big bang theory, which said that the cosmos began in a manner indicated by the theory’s name. The other camp believed in the steady state theory, the idea that the universe had always been around, in more or less the same state that it is in today. In the end, to any disinterested party, the evidence landed squarely in support of the big bang theory, especially after 1964, when the afterglow of the big bang was serendipitously detected by a pair of satellite communications researchers at Bell Labs. That discovery made the front page of the New York Times, which proclaimed that the big bang had won out. What did the steady state researchers proclaim? After three years, one proponent finally accepted it with the words “The universe is in fact a botched job, but I suppose we shall have to make the best of it.” Thirty years later, another leading steady state theorist, by then old and silver-haired, still believed in a modified version of his theory.23
The little research that has been done by scientists on scientists shows that it isn’t uncommon for scientists to operate as advocates rather than impartial judges, especially in the social sciences, in which there is greater ambiguity than in the physical sciences. For example, in one study, advanced graduate students at the University of Chicago were asked to rate research reports dealing with issues on which they already had an opinion.24 Unbeknownst to the volunteers, the research reports were all phony. For each issue, half the volunteers saw a report presenting data that supported one side, while the other half saw a report in which the data supported the opposite camp. But it was only the numbers that differed—the research methodology and presentation were identical in both cases.
When asked, most subjects denied that their assessment of the research depended on whether the data supported their prior opinion. But they were wrong. The researchers’ analysis showed that they had indeed judged the studies that supported their beliefs to be more methodologically sound and clearly presented than the otherwise identical studies that opposed their beliefs—and the effect was stronger for those with strong prior beliefs.25 I’m not saying that claims of truth in science are a sham—they aren’t. History has repeatedly shown that the better theory eventually wins. That’s why the big bang triumphed and the steady state theory died, and no one even remembers cold fusion. But it is also true that scientists with an investment in an established theory sometimes stubbornly cling to their old beliefs. Sometimes, as the economist Paul Samuelson wrote, “science advances funeral by funeral.”26
Because motivated reasoning is unconscious, people’s claims that they are unaffected by bias or self-interest can be sincere, even as they make decisions that are in reality self-serving. For example, many physicians think they are immune to monetary influence, yet recent studies show that accepting industry hospitality and gifts has a significant subliminal effect on patient-care decisions.27 Similarly, studies have shown that research physicians with financial ties to pharmaceutical manufacturers are significantly more likely than independent reviewers to report findings that support the sponsor’s drugs and less likely to report unfavorable findings; that investment managers’ estimates of the probabilities of various events are significantly correlated to the perceived desirability of those events; that auditors’ judgments are affected by the incentives offered; and that, at least in Britain, half the population believes in heaven, but only about a quarter believes in hell.28
Recent brain-imaging studies are beginning to shed light on how our brains create these unconscious biases. They show that when assessing emotionally relevant data, our brains automatically include our wants and dreams and desires.29 Our internal computations, which we believe to be objective, are not really the computations that a detached computer would make but, rather, are implicitly colored by who we are and what we are after. In fact, the motivated reasoning we engage in when we have a personal stake in an issue proceeds via a different physical process within the brain than the cold, objective analysis we carry out when we don’t. In particular, motivated reasoning involves a network of brain regions that are not associated with “cold” reasoning, including the orbitofrontal cortex and the anterior cingulate cortex—parts of the limbic system—and the posterior cingulate cortex and precuneus, which are also activated when one makes emotionally laden moral judgments.30 That’s the physical mechanism for how our brains manage to deceive us. But what is the mental mechanism? What techniques of subliminal reasoning do we employ to support our preferred worldviews?
OUR CONSCIOUS MINDS are not chumps. So if our unconscious minds distorted reality in some clumsy and obvious way, we would notice and we wouldn’t buy into it. Motivated reasoning won’t work if it stretches credulity too far, for then our conscious minds start to doubt and the self-delusion game is over. That there are limits to motivated reasoning is critically important, for it is one thing to have an inflated view of your expertise at making lasagna and it is quite another to believe you can leap tall buildings in a single bound. In order for your inflated self-image to serve you well, to have survival benefits, it must be inflated to just the right degree and no further. Psychologists describe this balance by saying that the resulting distortion must maintain the “illusion of objectivity.” The talent we are blessed with in this regard is the ability to justify our rosy images of ourselves through credible arguments, in a way that does not fly in the face of obvious facts. What tools do our unconscious minds use to shape our cloudy, ambiguous experience into the clear and distinctly positive vision of the self that we wish to see?
One method is reminiscent of an old joke about a Catholic and a Jew—both white—and a black man, all of whom die and approach the gates of heaven. The Catholic says, “I was a good man all my life, but I suffered a lot of discrimination. What do I have to do to get into heaven?”
“That’s easy,” says God. “All you have to do to enter heaven is spell one word.”
“What’s that?” the Catholic asks.
“God,” answers the Lord.
The Catholic spells it out, G-O-D, and is let in. Then the Jew approaches. He, too, says, “I was a good man.” And then he adds, “And it wasn’t easy—I had to deal with discrimination all my life. What do I have to do to get into heaven?”
God says, “That’s easy. All you have to do is spell one word.”
“What’s that?” the Jew asks.
“God,” answers the Lord.
The Jew says, “G-O-D,” and he, too, is let in. Then the black man approaches and says that he was kind to everyone, although he faced nasty discrimination because of the color of his skin.
God says, “Don’t worry, there is no discrimination here.”
“Thank you,” says the black man. “So how do I get into heaven?”
“That’s easy,” says God. “All you have to do is spell one word!”
“What’s that?” the black man asks.
“Czechoslovakia,” answers the Lord.
The Lord’s method of discrimination is classic, and our brains employ it often: when information favorable to the way we’d like to see the world tries to enter the gateway of our mind we ask that it spell “God,” but when unfavorable information comes knocking, we make it spell “Czechoslovakia.”
For example, in one study volunteers were given a strip of paper to test whether they had a serious deficiency of an enzyme called TAA, which would make them susceptible to a variety of serious pancreas disorders.31 The researchers told them to dip the strip of paper in a bit of their saliva and wait ten to twenty seconds to see if the paper turned green. Half the subjects were told that if the strip turned green it meant they had no enzyme deficiency, while the other half were told that if it turned green it meant they had the dangerous deficiency. In reality, no such enzyme exists, and the strip was ordinary yellow construction paper, so none of the subjects were destined to see it change color. The researchers watched as their subjects performed the test. Those who were motivated to see no change dipped the paper, and when nothing happened, they quickly accepted the happy answer and decided the test was complete. But those motivated to see the paper turn green stared at the strip for an extra thirty seconds, on average, before accepting the verdict. What’s more, over half of these subjects engaged in some sort of retesting behavior. One subject redipped the paper twelve times, like a child nagging its parents. Can you turn green? Can you? Please? Please?
Those subjects may seem silly, but we all dip and redip in an effort to bolster our preferred views. People find reasons to continue supporting their preferred political candidates in the face of serious and credible accusations of wrongdoing or ignorance but take thirdhand hearsay about an illegal left turn as evidence that the candidate of the other party ought to be banned from politics for life. Similarly, when people want to believe in a scientific conclusion, they’ll accept a vague news report of an experiment somewhere as convincing evidence. And when people don’t want to accept something, the National Academy of Sciences, the American Association for the Advancement of Science, the American Geophysical Union, the American Meteorological Society, and a thousand unanimous scientific studies can all converge on a single conclusion, and people will still find a reason to disbelieve.
That’s exactly what happened in the case of the inconvenient and costly issue of global climate change. The organizations I named above, plus a thousand academic articles on the topic, were unanimous in concluding that human activity is responsible, yet in the United States more than half the people have managed to convince themselves that the science of global warming is not yet settled.32 Actually, it would be difficult to get all those organizations and scientists to agree on anything short of a declaration stating that Albert Einstein was a smart fellow, so their consensus reflects the fact that the science of global warming is very much settled. It’s just not good news. To a lot of people, the idea that we are descended from apes is also not good news. So they have found ways not to accept that fact, either.
When someone with a political bias or vested interest sees a situation differently than we do, we tend to think that person is deliberately misinterpreting the obvious to justify their politics or to bring about some personal gain. But through motivated reasoning each side finds ways to justify its favored conclusion and discredit the other, while maintaining a belief in its own objectivity. And so those on both sides of important issues may sincerely think that theirs is the only rational interpretation. Consider the following research on the death penalty. People who either supported or opposed capital punishment on the theory that it deterred crime (or didn’t) were shown two phony studies. Each study employed a different statistical method to prove its point. Let’s call them method A and method B. For half the subjects, the study that used method A concluded that capital punishment works as a deterrent, and the study that used method B concluded that it doesn’t. The other subjects saw studies in which the conclusions were reversed. If people were objective, those on both sides would agree that either method A or method B was the best approach regardless of whether it supported or undermined their prior belief (or they’d agree that it was a tie). But that’s not what happened. Subjects readily offered criticisms such as “There were too many variables,” “I don’t think they have complete enough collection of data,” and “The evidence given is relatively meaningless.” But both sides lauded whatever method supported their belief and trashed whatever method did not. Clearly, it was the reports’ conclusions, not their methods, that inspired these analyses.33
Exposing people to well-reasoned arguments both pro– and anti–death penalty did not engender understanding for the other point of view. Rather, because we poke holes in evidence we dislike and plug holes in evidence we like, the net effect in these studies was to amplify the intensity of the disagreement. A similar study found that, after viewing identical samples of major network television coverage of the 1982 massacre in Beirut, both pro-Israeli and pro-Arab partisans rated the programs, and the networks, as being biased against their side.34 There are critical lessons in this research. First, we should keep in mind that those who disagree with us are not necessarily duplicitous or dishonest in their refusal to acknowledge the obvious errors in their thinking. More important, it would be enlightening for all of us to face the fact that our own reasoning is often not so perfectly objective, either.
ADJUSTING OUR STANDARDS for accepting evidence to favor our preferred conclusions is but one instrument in the subliminal mind’s motivated reasoning tool kit. Other ways we find support for our worldviews (including our view of ourselves) include adjusting the importance we assign to various pieces of evidence and, sometimes, ignoring unfavorable evidence altogether. For example, ever notice how, after a win, sports fans crow about their team’s great play, but after a loss they often ignore the quality of play and focus on Lady Luck or the referees?35 Similarly, executives in public companies pat themselves on the back for good outcomes but suddenly recognize the importance of random environmental factors when performance is poor.36 It can be hard to tell whether those attempts to put a spin on a bad outcome are sincere, and the result of unconscious motivated reasoning, or are conscious and self-serving.
One situation in which that ambiguity is not an issue is scheduling. There is no good reason to offer unrealistic promises with regard to deadlines, because in the end you’ll be required to back up those promises by delivering the goods. Yet contractors and businesses often miss their deadlines even when there are financial penalties for doing so, and studies show that motivated reasoning is a major cause of those miscalculations. It turns out that when we calculate a completion date, the method we think we follow in arriving at it is to break the project down into the necessary steps, estimate the time required for each step, and put it all together. But research shows that, instead, our minds often work backward. That is, the desired target date exerts a great and unconscious influence on our estimate of the time required to complete each of the intermediate steps. In fact, studies show that our estimates of how long it will take to finish a task depend directly on how invested we are in the project’s early completion.37
If it’s important for a producer to get the new PlayStation game done in the next two months, her mind will find reasons to believe that the programming and quality-assurance testing will be more problem-free than ever before. Likewise, if we need to get three hundred popcorn balls made in time for Halloween, we manage to convince ourselves that having the kids help on our kitchen assembly line will go smoothly for the first time in the history of our family. It is because we make these decisions, and sincerely believe they are realistic, that all of us, whether we are throwing a dinner party for ten people or building a new jet fighter, regularly create overly optimistic estimates of when we can finish the project.38 In fact, the U.S. General Accounting Office estimated that when the military purchased equipment involving new technology, it was delivered on schedule and within budget just 1 percent of the time.39
In the last chapter I mentioned that research shows that employers often aren’t in touch with the real reasons they hire someone. An interviewer may like or dislike an applicant because of factors that have little to do with the applicant’s objective qualifications. They may both have attended the same school or both be bird-watchers. Or perhaps the applicant reminds the interviewer of a favorite uncle. For whatever reason, once the interviewer makes a gut-level decision, her unconscious often employs motivated reasoning to back that intuitive inclination. If she likes the applicant, without realizing her motivation she will tend to assign high importance to areas in which the applicant excels and take less seriously those in which the applicant falls short.
In one study, participants considered applications from a male and a female candidate for the job of police chief. That’s a stereotypically male position, so the researchers postulated that the participants would favor the male applicant and then unwittingly narrow the criteria by which they judged the applicants to those that would support that decision. Here is how the study worked: There were two types of résumés. The experimenters designed one to portray a streetwise individual who was poorly educated and lacking in administrative skills. They designed the other to reflect a well-educated and politically connected sophisticate who had little street smarts. Some participants were given a pair of résumés in which the male applicant had the streetwise résumé and the female was the sophisticate. Others were given a pair of résumés in which the man’s and the woman’s strong points were reversed. The participants were asked not just to make a choice but to explain it.
The results showed that when the male applicant had the streetwise résumé, the participants decided street smarts were important for the job and selected him, but when the male applicant had the sophisticate’s résumé, they decided that street smarts were overrated and also chose the male. They were clearly making their decisions on the basis of gender, and not on the streetwise-versus-sophisticated distinction, but they were just as clearly unaware of doing so. In fact, when asked, none of the subjects mentioned gender as having influenced them.40
Our culture likes to portray situations in black and white. Antagonists are dishonest, insincere, greedy, evil. They are opposed by heroes who are the opposite in terms of those qualities. But the truth is, from criminals to greedy executives to the “nasty” guy down the street, people who act in ways we abhor are usually convinced that they are right.
The power of vested interest in determining how we weigh the evidence in social situations was nicely illustrated in a series of experiments in which researchers randomly assigned volunteers to the role of plaintiff or defendant in a mock lawsuit based on a real trial that occurred in Texas.41 In one of those experiments, the researchers gave both sides documents regarding the case, which involved an injured motorcyclist who was suing the driver of an automobile that had collided with him. The subjects were told that in the actual case, the judge awarded the plaintiff an amount between $0 and $100,000. They were then assigned to represent one side or the other in mock negotiations in which they were given a half hour to fashion their own version of a settlement. The researchers told the subjects they’d be paid based on their success in those negotiations. But the most interesting part of the study came next: the subjects were also told they could earn a cash bonus if they could guess—within $5,000—what the judge actually awarded the plaintiff.
In making their guesses, it was obviously in the subjects’ interest to ignore whether they were playing the role of plaintiff’s or defendant’s advocate. They’d have the greatest chance at winning the cash bonus if they assessed the payout that would be fair, based solely on the law and the evidence. The question was whether they could maintain their objectivity.
On average, the volunteers assigned to represent the plaintiff’s side estimated that the judge would dictate a settlement of nearly $40,000, while the volunteers assigned to represent the defendant put that number at only around $20,000. Think of it: $40,000 versus $20,000. If, despite the financial reward offered for accurately guessing the size of a fair and proper settlement, subjects artificially assigned to different sides of a dispute disagree by 100 percent, imagine the magnitude of sincere disagreement between actual attorneys representing different sides of a case, or opposing negotiators in a bargaining session. The fact that we assess information in a biased manner and are unaware we are doing so can be a real stumbling block in negotiations, even if both sides sincerely seek a fair settlement.
Another version of the experiment, created around the scenario of that same lawsuit, investigated the reasoning mechanism the subjects employed to reach their conflicting conclusions. In that study, at the end of the bargaining session, the researchers asked the volunteers to explicitly comment on each side’s arguments, to make concrete judgments on issues like Does ordering an onion pizza via cell phone affect one’s driving? Does a single beer an hour or two before getting on a motorcycle impair safety? As in the police chief résumé example, subjects on both sides tended to assign more importance to the factors that favored their desired conclusion than to the factors favoring their opponent. These experiments suggest that, as they were reading the facts of the case, the subjects’ knowledge that they would be taking one side or the other affected their judgment in a subtle and unconscious manner that trumped any motivation to analyze the situation fairly.
To further probe that idea, in another variant on the experiment, researchers asked volunteers to assess the accident information before being told which side they would be representing. Then the subjects were assigned their roles and asked to evaluate the appropriate award, again with the promise of a cash bonus if they came close. The subjects had thus weighed the evidence while still unbiased, but made their guess about the award after the cause for bias had been established. In this situation, the discrepancy in the assessments fell from around $20,000 to just $7,000, a reduction of nearly two-thirds. Moreover, the results showed that due to the subjects’ having analyzed the data before taking sides in the dispute, the proportion of times the plaintiff’s and defendant’s advocates failed to come to an agreement within the allotted half hour fell from 28 percent to just 6 percent. It’s a cliché, but the experience of walking in the other side’s shoes does seem to be the best way to understand their point of view.
As these studies suggest, the subtlety of our reasoning mechanisms allows us to maintain our illusions of objectivity even while viewing the world through a biased lens. Our decision-making processes bend but don’t break our usual rules, and we perceive ourselves as forming judgments in a bottom-up fashion, using data to draw a conclusion, while we are in reality deciding top-down, using our preferred conclusion to shape our analysis of the data. When we apply motivated reasoning to assessments about ourselves, we produce that positive picture of a world in which we are all above average. If we’re better at grammar than arithmetic, we give linguistic knowledge more weight in our view of what is important, whereas if we are good at adding but bad at grammar, we think language skills just aren’t that crucial.42 If we are ambitious, determined, and persistent, we believe that goal-oriented people make the most effective leaders; if we see ourselves as approachable, friendly, and extroverted, we feel that the best leaders are people-oriented.43
We even recruit our memories to brighten our picture of ourselves. Take grades, for example. A group of researchers asked ninety-nine college freshmen and sophomores to think back a few years and recall the grades they had received for high school classes in math, science, history, foreign language study, and English.44 The students had no incentive to lie because they were told that their recollections would be checked against their high school registrars’ records, and indeed all signed forms giving their permission. Altogether, the researchers checked on the students’ memories of 3,220 grades. A funny thing happened. You’d think that the handful of years that had passed would have had a big effect on the students’ grade recall, but it didn’t. The intervening years didn’t seem to affect the students’ memories very much at all—they remembered their grades from their freshman, sophomore, junior, and senior years all with the same accuracy, about 70 percent. And yet there were memory holes. What made the students forget? It was not the haze of years but the haze of poor performance: their accuracy of recall declined steadily from 89 percent for A’s to 64 percent for B’s, 51 percent for C’s, and 29 percent for D’s. So if you are ever depressed over being given a bad evaluation, cheer up. Chances are, if you just wait long enough, it’ll improve.
MY SON NICOLAI, now in tenth grade, received a letter the other day. The letter was from a person who used to live in my household but no longer exists. That is, the letter was written by Nicolai himself, but four years earlier. Though the letter had traveled very little in space, it had traveled very far in time, at least in the time of a young child’s life. He had written the letter in sixth grade as a class assignment. It was a message from an eleven-year-old Nicolai, asked to speak to the fifteen-year-old Nicolai of the future. The class’s letters had been collected and held those four years by his wonderful English teacher, who eventually mailed them to the adolescents her sixth-grade students had become.
What was striking about Nicolai’s letter was that it said, “Dear Nicolai … you want to be in the NBA. I look forward to playing basketball on the middle school seventh and eighth grade team, and then in high school, where you are now in your second year.” But Nicolai did not make the team in seventh grade; nor did he make it in eighth grade. Then, as his luck would have it, the coach who passed him over for those teams also turned up as the freshman coach in high school, and again declined to pick Nicolai for the team. That year, only a handful of the boys who tried out were turned away, making the rejection particularly bitter for Nicolai. What’s remarkable here is not that Nicolai wasn’t smart enough to know when to give up but that through all those years he maintained his dream of playing basketball, to the extent that he put in five hours a day one summer practicing alone on an empty court. If you know kids, you understand that if a boy continues to insist that someday he will be in the NBA but year after year fails to make even his local school team, it will not be a plus for his social life. Kids might like to tease a loser, but they love teasing a loser for whom winning would have been everything. And so, for Nicolai, maintaining his belief in himself came at some cost.
The story of Nicolai’s basketball career is not over. At the end of ninth grade, his school’s new junior varsity coach saw him practicing day after day, sometimes until it was so dark he could barely see the ball. He invited Nicolai to practice with the team that summer. This fall he finally made the team. In fact, he is the team captain.
I’ve mentioned the successes of Apple Computer a couple of times in this book, and much has been made of Apple cofounder Steve Jobs’s ability to create what has come to be called a “reality distortion field,” which allowed him to convince himself and others that they could accomplish whatever they set their minds to. But that reality distortion field was not just his creation; it is also Nicolai’s, and—to one degree or another—it is a gift of everyone’s unconscious mind, a tool built upon our natural propensity to engage in motivated reasoning.
There are few accomplishments, large or small, that don’t depend to some degree on the accomplisher believing in him- or herself, and the greatest accomplishments are the most likely to rely on that person being not only optimistic but unreasonably optimistic. It’s not a good idea to believe you are Jesus, but believing you can become an NBA player—or, like Jobs, come back from the humiliating defeat of being ejected from your own company, or be a great scientist or author or actor or singer—may serve you very well indeed. Even if it doesn’t turn out to be true in the details of what you accomplish, belief in the self is an ultimately positive force in life. As Steve Jobs said, “You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”45 If you believe the dots will connect down the road, it will give you the confidence to follow your heart, even when it leads you off the well-worn path.
I’ve attempted, in writing this book, to illuminate the many ways in which a person’s unconscious mind serves them. For me, the extent to which my inner unknown self guides my conscious mind came as a great surprise. An even greater surprise was the realization of how lost I would be without it. But of all the advantages our unconscious provides, it is this one that I value most. Our unconscious is at its best when it helps us create a positive and fond sense of self, a feeling of power and control in a world full of powers far greater than the merely human. The artist Salvador Dalí once said, “Every morning upon awakening, I experience a supreme pleasure: that of being Salvador Dalí, and I ask myself, wonderstruck, what prodigious thing will he do today, this Salvador Dalí?”46 Dalí may have been a sweet guy or he may have been an insufferable egomaniac, but there is something wonderful about his unrestrained and unabashedly optimistic vision of his future.
The psychological literature is full of studies illustrating the benefits—both personal and social—of holding positive “illusions” about ourselves.47 Researchers find that when they induce a positive mood, by whatever means, people are more likely to interact with others and more likely to help others. Those feeling good about themselves are more cooperative in bargaining situations and more likely to find a constructive solution to their conflicts. They are also better problem solvers, more motivated to succeed, and more likely to persist in the face of a challenge. Motivated reasoning enables our minds to defend us against unhappiness, and in the process it gives us the strength to overcome the many obstacles in life that might otherwise overwhelm us. The more of it we do, the better off we tend to be, for it seems to inspire us to strive to become what we think we are. In fact, studies show that the people with the most accurate self-perceptions tend to be moderately depressed, suffer from low self-esteem, or both.48 An overly positive self-evaluation, on the other hand, is normal and healthy.49
I imagine that, fifty thousand years ago, anyone in their right mind looking toward the harsh winters of northern Europe would have crawled into a cave and given up. Women seeing their children die from rampant infections, men watching their women die in childbirth, human tribes suffering drought, flood, and famine must have found it difficult to keep courageously marching forward. But with so many seemingly insurmountable barriers in life, nature provided us with the means to create an unrealistically rosy attitude about overcoming them—which helps us do precisely that.
As you confront the world, unrealistic optimism can be a life vest that keeps you afloat. Modern life, like our primitive past, has its daunting obstacles. The physicist Joe Polchinski wrote that when he started to draft his textbook on string theory, he expected that the project would take one year. It took him ten. Looking back, had I had a sober assessment of the time and effort required to write this book, or to become a theoretical physicist, I would have shrunk before both endeavors. Motivated reasoning and motivated remembering and all the other quirks of how we think about ourselves and our world may have their downsides, but when we’re facing great challenges—whether it’s losing a job, embarking on a course of chemotherapy, writing a book, enduring a decade of medical school, internship, and residency, spending the thousands of practice hours necessary to become an accomplished violinist or ballet dancer, putting in years of eighty-hour weeks to establish a new business, or starting over in a new country with no money and no skills—the natural optimism of the human mind is one of our greatest gifts.
Before my brothers and I were born, my parents lived in a small flat on the North Side of Chicago. My father worked long hours sewing clothes in a sweatshop, but his meager income left my parents unable to make the rent. Then one night my father came home excited and told my mother they were looking for a new seamstress at work, and that he had gotten her the job. “You start tomorrow,” he said. It sounded like a propitious move, since this would almost double their income, keep them off beggars’ row, and give them the comfort of spending far more time together. There was only one drawback: my mother didn’t sew. Before Hitler invaded Poland, before she lost everyone and everything, before she became a refugee in a strange land, my mother had been a child of wealth. Sewing wasn’t anything a teenage girl in her family had needed to learn.
And so my future parents had a little discussion. My father told my mother he could teach her. They would work at it all night, and in the morning they’d take the train to the shop together and she’d do a passable job. Anyway, he was very fast and could cover for her until she got the hang of it. My mother considered herself clumsy and, worse, too timid to go through with such a scheme. But my father insisted that she was capable and brave. She was a survivor just as he was, he told her. And so they talked, back and forth, about which qualities truly defined my mother.
We choose the facts that we want to believe. We also choose our friends, lovers, and spouses not just because of the way we perceive them but because of the way they perceive us. Unlike phenomena in physics, in life, events can often obey one theory or another, and what actually happens can depend largely upon which theory we choose to believe. It is a gift of the human mind to be extraordinarily open to accepting the theory of ourselves that pushes us in the direction of survival, and even happiness. And so my parents did not sleep that night, while my father taught my mother how to sew.