EIGHT
Why Bad Strategies Happen to Good People
Awareness Is Not Enough
Battles should be fought in the marketplace. So why are so many lost before a strategy even gets off the drawing board?
The short answer is this:
Humans are far from rational in their planning and decision making. Psychological studies going back decades consistently demonstrate that humans face huge impediments when making complicated decisions, such as those involved in setting strategy.
Anthropological studies underscore the difficulties, as shown by Donald E. Brown in his book Human Universals. Rather than looking at the differences between cultures, Brown looked at what all cultures have in common. He reasoned that those common features are part of the fabric of being human. He found that oral language is universal, while writing is not. Basic reasoning is universal, but the sort of abstract reasoning used in mathematics is not. Unfortunately, the more than two hundred universals that Brown found include many of the psychological characteristics that interfere with complex decisions.
In other words, humans are hardwired to come up with bad strategies.
The really aware executives (the sort who read books like ours) realize the limitations they face. So they redouble their efforts, insisting on greater vigilance and deeper analysis.
The problem is that even that isn’t enough. As our first seven chapters show, vigilant and analytical executives can still come up with demonstrably bad strategies.
Our suggestion is not simply to be more careful. We believe that decision makers must accept that the tendencies toward error are deeply ingrained and adopt explicit mechanisms to counter them.
If our short answer convinces you, feel free to skip ahead to chapter 9, where we explore the organizational impediments to good decisions, or even to chapters 10 and 11, where we lay out the mechanisms that can catch errors. If you still have doubts whether you need to add a process to stress-test your strategy, then please stick with us for a few more minutes in this chapter. We will lay out what we believe to be an overwhelming body of research and insight that refutes the belief that extra vigilance and analysis will suffice.
Our long answer goes something like this:
It is obvious that those formulating strategies should gather all relevant information. They should process that information objectively. They should consider a wide array of possible strategies. They should evaluate all possibilities thoroughly, considering both the negatives and the positives. They should hone their skills by learning from experience—their own, their company’s, and other companies’.
Problems and unexpected roadblocks always pop up during implementation, but it might seem that avoiding major conceptual mistakes should take no more than a tight focus on Michael Porter’s “Five Forces,” or some other rigorous approach to strategy setting. In fact, psychology and anthropology show that taking a rigorous approach is extremely difficult because of these natural tendencies:
• People home in on an answer prematurely, long before they evaluate all information.
• People have trouble being objective about many kinds of information because they aren’t set up very well to deal with abstractions.
• Once people start moving toward an answer, they look to confirm that their answer is right, rather than hold open the possibility that they’re wrong.
• People conform to the wishes of a group, especially if there is a strong person in the leadership role, rather than raise objections that test ideas.
• People also don’t learn as much as they could from their mistakes, because we humans typically suffer from overconfidence and have elaborate defense mechanisms to explain away our failings; sharp people (the kind entrusted with setting corporate strategies, or so we hope) appear to be even less likely to learn from mistakes or to acknowledge their errors.
That’s a lot to overcome.
We’ll look at each of the five types of problems in turn.
Premature Closure
Psychological studies show that it’s hard—physically hard—for people to avoid reaching conclusions before evaluating all the evidence. We get a first impression of an idea in much the same way we get a first impression of a person. Even when people are trained to withhold judgment, they find themselves evaluating information as they go along, forming a tentative conclusion early in the process. They then test additional information against that conclusion, but that’s not the same as giving all the information the same weight and keeping an open mind until all evidence is viewed. Conclusions, like first impressions, are hard to reverse.
A study of analysts in the intelligence community, for instance, found that, despite their extensive training, analysts tended to come to a conclusion very quickly and then “fit the facts” to that conclusion.¹ A study of clinical psychologists found that they formed diagnoses relatively rapidly and that additional information didn’t improve those diagnoses.²
In fact, it’s hard to change a conclusion even when we’re told that the information it’s based on is erroneous. In one study, people were given a false sense of success. They were asked a series of questions and told they were right almost all the time—even though they weren’t. The participants in the study were asked to rate their capability and, not surprisingly, felt pretty good about themselves. Participants were then told their real results, which were typically far worse than they had initially been told. Yet, when asked to rerate their capabilities, the participants still concluded that they were almost as good as when they were asked the first time, under false pretenses.³
Part of the reason we don’t consider all possible information stems from what psychologists call “the availability bias.” The bias means that we typically recall information or ideas that are “available,” either because we came across them recently or because they are particularly vivid. Someone who is asked whether more words in the English language start with the letter r or have r as the third letter is likely to say more words start with r, because those words are more vivid, more available.
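The claim embedded in this example is easy to check for yourself. Below is a minimal sketch in Python, assuming a one-word-per-line word list such as the /usr/share/dict/words file found on many Unix systems (the path is an assumption; any similar list will do). It simply tallies words that start with r against words whose third letter is r.

```python
# Tally English words that start with "r" versus words whose third letter
# is "r", using a one-word-per-line word list. The path below is an
# assumption: /usr/share/dict/words exists on many Unix systems, but any
# similar file can be substituted.
from collections import Counter

WORDLIST = "/usr/share/dict/words"  # assumed location; substitute your own

counts = Counter()
with open(WORDLIST, encoding="utf-8") as f:
    for line in f:
        word = line.strip().lower()
        if len(word) < 3 or not word.isalpha():
            continue
        if word.startswith("r"):
            counts["starts with r"] += 1
        if word[2] == "r":
            counts["r as third letter"] += 1

for label, total in counts.items():
    print(f"{label}: {total}")
```

Whatever a particular word list shows, the larger point stands: the intuitive answer comes from what is easy to recall, not from counting.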
Irving Janis, in his classic book Groupthink, blames the availability bias for the early U.S. involvement in Vietnam in the 1950s and 1960s. He says U.S. presidents and generals could remember vividly that an appeasement policy hadn’t worked with Hitler before World War II and that intervention had succeeded in the Korean War, so it was easy to conclude that the United States needed to intervene aggressively to contain Communism.
The availability bias basically means that it isn’t just generals and politicians who fight the last war. We all do—sometimes disastrously. It seems that Robert Galvin pursued Motorola’s disastrous Iridium project because Motorola’s previous successes with grand engineering projects were so available to him that he spent little time considering the downside.
The availability bias also means that we respond to narratives and analogies more than we do to statistical information, even though the raw data typically provides a more accurate picture of reality. Random facts are hard to remember, but those same facts become more “available” once we weave them into a story or decide they are analogous to other situations with which we’re familiar. Studies involving chess masters and grandmasters have found that they’re no better than the rest of us at memorizing where pieces are on a chessboard if the pieces are placed randomly. Everyone remembers roughly half a dozen. But if a board is set up based on an actual game, the masters and grandmasters quickly memorize the entire board, while mere mortals still remember only where half a dozen pieces are.⁴ The reason: The pieces are now connected through a sort of narrative for the masters and grandmasters.
Narratives and analogies aren’t necessarily bad, as long as they are used properly and recognized for what they are—and what they aren’t. At Walt Disney Company, executives know they need a story if they are to sell management on investing in a major project. When Joe Rohde wanted to build an amusement park with wild animals in a natural setting, he went through all the PowerPoint presentations and the other rigmarole associated with trying to prove the economic viability of an idea, but he wasn’t quite carrying the day. At the final meeting to decide whether the park was a “go,” then-CEO Michael Eisner said, “I still don’t quite get what the thrill is with live animals.” Rohde walked to the door of the conference room and opened it. In walked a Bengal tiger, tethered with the lightest of lines and restrained only by a young woman. Eisner understood the thrill, and Rohde got his money.⁵
By all accounts, Animal Kingdom, which opened in 1998, has been a success for Disney. Lots of people get the thrill of exotic wild animals in the flesh.
The problem is that anecdotes can be used to support either side of any argument. Haste makes waste . . . but the early bird gets the worm. Make hay while the sun shines . . . but save for a rainy day. Birds of a feather flock together . . . but opposites attract.
Narratives can also be dangerous because they can tie facts up in too neat a package or make causal links that aren’t really there. Analogies can also fool us because they’re never perfect, as Galvin learned so painfully. Iridium was analogous to earlier projects in many ways, just not enough. (Now Iridium serves as a vivid analogy for those trying to figure out how not to evaluate the potential of new technologies, but that’s cold comfort to Galvin and Motorola.)
Brown’s anthropological treatise Human Universals lists myths as one of his universals, meaning that the tendency to use stories rather than statistics is deeply ingrained in all of us. So, feel free to use the power of myth—but make sure you aren’t manipulated by it.
Difficulty with Abstraction
Even Albert Einstein didn’t like certain abstractions. Sure, he did just fine with abstraction, as his elegant thought experiments show. But until the end of his life he resisted the interpretation of quantum mechanics—now generally accepted—that described the world in probabilities. Even though Einstein was one of the pioneers of quantum mechanics, it just didn’t feel right to talk about an electron as being 81 percent in one orbit, 5 percent in another, and so on, based on the likelihood of where that electron would be at any given instant.
So, how comfortable are the rest of us with abstraction? Not very. When it comes to processing all the complex relationships and messy data out there in the real world, our minds play all kinds of tricks on us.
One problem is what psychologists call the anchoring bias. If you ask someone to estimate anything whose amount is between, say, one and one hundred units, but first tell them a randomly chosen number, that number will greatly influence the estimate. Even though the random number and the estimate have nothing to do with each other, a low random number will produce low estimates. A high random number will produce high estimates.⁶
The anchoring bias poses particular problems in setting strategy because we subconsciously tend to work from whatever spreadsheet or other document we’re presented with. We tend to tinker rather than question whether the ideas behind the document are even worth considering.
When making forecasts, the anchoring bias means we tend to assume that trends will continue pretty much as they always have. This causes particular problems in evaluating how technological progress will unfold. People see an explosion of interest in something and assume the pace of change will continue indefinitely. So we typically overestimate the effect of something like the Internet in the short run. Remember all those stories about how stores would disappear, and all our purchases would be delivered into special, locked boxes that we’d put in front of our houses? But we also have a general comfort level with our lives and assume they’ll change about as fast as they always have, so we underestimate the long-term effects of something like cell phones. Did anybody guess twenty years ago that doctors would now be treating a problem known as cell phone thumb, common among those who bang away too hard on their BlackBerries?
Brown’s anthropological studies in Human Universals underscore the depth of the anchoring bias. He lists a universal “interpolation,” which is the tendency to take a linear approach to estimating future results. Interpolation may make sense in many situations but doesn’t serve us well in volatile environments, where a product may be about to take off or die suddenly, or where businesses face forces that change exponentially rather than linearly. Even when we know a situation requires more sophisticated analysis, it’s hard for us to get our heads around the concept, because we just aren’t built for it.
What psychologists call the bias toward survivorship means that we remember what happened; we don’t remember what didn’t happen.
Well, duh, right? But the fact that we don’t remember what didn’t happen is actually a profound problem in how we think. The faithful troop to Lourdes every year to pray for miracle cures, even though a study by astronomer and TV personality Carl Sagan found that the rate of cure is actually somewhat lower for those who make the trek than for those who stay away.⁷ We may read about those who pray to Mary and survive, not those who pray to Mary and die.
Similarly, we are encouraged to take risks in business, because we read about those who made “bet the company” decisions and reaped fortunes—and don’t read about those that never quite made the big time because they made “bet the company” decisions and lost. Or we read The Millionaire Next Door and think about concentrating our investing in just one or two assets, to emulate the millionaires, because we don’t read about those individuals who invested big-time in some disastrous company and lost all their savings. We read about Pierre Omidyar, who founded eBay and made a fortune off online auctions. We don’t read about Jerry Kaplan, who almost had the idea for eBay. Kaplan is actually the better model. Omidyar lucked into eBay while trying to find a way for his girlfriend (now wife) to trade Pez dispensers. Kaplan, after a long and distinguished history in the personal-computer software world, made a well-reasoned guess about the auction potential of the Internet and founded an auction site, called OnSale, before eBay came along. Sure, Kaplan got the model slightly wrong. He auctioned off things that his company bought, such as refurbished electronics, rather than set up a site where anyone could auction anything to anyone. But it would be a lot easier to duplicate his thinking and reasonable success than to duplicate the bolt of lightning that hit Omidyar. Yet Omidyar will show up in the history books as a shining example of the entrepreneurial spirit, while Kaplan will not.
Medical journals have recently faced up to the survivorship problem. They found that they were running articles about, say, a correlation between a genetic defect and the onset of a particular disease, but weren’t writing about studies that didn’t find a correlation between the defect and the disease. Sometimes, taking all studies as a whole would indicate there was no correlation, but, because only correlations were typically highlighted, researchers spent time and resources pursuing false leads.
What is sometimes called the “house money” effect means that people are more likely to take risks with “house money” than with their own. For instance, some people were given $30, then asked whether they’d risk $9 on a coin flip. Some 75 percent said they would. Another group was given the identical choice, but phrased differently. This group was told that it had two choices. Members could take $30, or they could have a coin flip decide whether they would receive $21 or $39. In this group, which made its choice before receiving its “house money,” only 43 percent said they’d take the chance.⁸
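It is worth spelling out why the two framings are “identical,” as described above. The sketch below, using the dollar amounts from that description, simply enumerates the possible outcomes under each framing and confirms that they match; the function names are ours, for illustration only.

```python
# The "house money" choice, written out both ways. In both framings the
# final outcomes are the same: a sure $30, or a 50/50 shot at $21 or $39.
# Only the wording differs. (Function names are ours, for illustration.)

def framing_a(take_gamble: bool) -> list[int]:
    """Given $30 up front, then offered the chance to risk $9 on a coin flip."""
    start = 30
    return [start - 9, start + 9] if take_gamble else [start]

def framing_b(take_gamble: bool) -> list[int]:
    """Choose between a sure $30 and a coin flip for $21 or $39."""
    return [21, 39] if take_gamble else [30]

for take_gamble in (False, True):
    a, b = framing_a(take_gamble), framing_b(take_gamble)
    assert sorted(a) == sorted(b)  # the outcome sets are identical
    print(f"gamble={take_gamble}: framing A -> {a}, framing B -> {b}")
```

The only difference is whether the $30 is described as already in hand—and that difference alone moved the share of people willing to gamble from 43 percent to 75 percent.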
Let’s face it, at some level almost all business is done with what Silicon Valley types call OPM—which stands for “other people’s money” and which, yes, is pronounced like “opium.” While executives generally respect their fiduciary duties and understand that taking excessive risks can cause them to lose their jobs, the tendency is still to take more risks than people would if they were wagering their own money.
In his books The Black Swan and Fooled by Randomness, Nassim Nicholas Taleb coins a term, “platonizing,” that helps explain why people overlook problems. The term refers to Plato and his explanations of ideals. Taleb says ideals actually corrupt our thinking. We see something that is sort of in the shape of a triangle and process the shape mentally as a triangle. We ignore the data that don’t fit the ideal.
That’s no doubt an efficient way to process impressions, but the result is that we tend not to see small problems that can create frictions and keep us from achieving our goals. The technology world is full of grand plans that fell by the wayside because of the most mundane of reasons; as Jim Barksdale said when he was CEO of Netscape, “The problem with Silicon Valley is that we tend to confuse a clear view with a short distance.”⁹
Webvan, for instance, didn’t take into account the difficulties its vans would have finding parking spots in cities or with double-parking. Webvan’s strategy didn’t allow for the time and effort that would be required to carry groceries up elevators or, worse, flights of stairs in apartment buildings. Webvan didn’t pay enough attention to the troubles it would have when a customer wasn’t home on a hot day and there was ice cream in the order. Webvan didn’t realize that all its working customers would want groceries delivered at pretty much the same time, in the evening. All sorts of little things got ignored. They weren’t, individually, enough to derail the plan to reinvent the world of groceries. But, given the thin margins in the food business, the frictions as a whole helped Webvan burn through hundreds of millions of dollars.
Psychologists also say we have problems processing new ideas because of mental ruts. People develop neuron-firing sequences in their brains that become so well-defined they are, almost literally, ruts. Certain things will always be perceived the same way.¹⁰
In Groupthink, Janis says mental ruts explain the U.S. decision to fight in Vietnam. Americans had developed certain ways of thinking about Communists, based on the expansionistic behavior of the Soviet Union following World War II. The North Vietnamese government was Communist, so U.S. officials couldn’t imagine that their primary goal might be to merely unify their country under a Communist government. Instead, the unquestioned assumption was that the Vietnamese wanted to work in concert with the Soviet Union and China to make all of Southeast Asia Communist.
The tricky thing about such mental ruts is that you can’t get rid of them just by being aware of them.
Studies of cognition show that we judge how far away an object is partly by how clearly the object appears to us. Makes sense. But what about days when visibility is especially good or especially bad? We’re going to guess wrong. On a clear day, we think objects are closer than they really are. On a hazy day, we think objects are farther away. The errors of judgment persist even when people in studies are told that a day is especially clear or hazy.¹¹
It’s possible to correct for the problem. Pilots, who need to be able to judge distances accurately in many situations, go through special training to adjust for visibility. Golfers have yardage books, markers in fairways, and, these days, perhaps a laser range finder so they don’t get fooled by appearances on a foggy Sunday morning.
Still, correcting for a mental rut is hard and requires special training or tools. Just ask any golfer who’s stood there in a fairway and said, “It sure looks farther than that,” and then, while trying to trust the yardage marker on the sprinkler head, still pumps the ball over the green into some deep rough.
Confirmation Bias
Way back in our psychology classes in college, one of the odder concepts was “cognitive dissonance.” It makes sense that you read a lot about cars before deciding which one to buy, but you’d think that you’d pretty much stop reading after you bought one. You already own the car. What’s left to decide? In fact, people keep reading. Some even step up their reading. They try to reduce their dissonance by assuring themselves they made the right choice. So they don’t read just anything. They read things that support their decision while assiduously avoiding anything that challenges it. If you bought a Corvair in 1967, it’s unlikely you then read Ralph Nader’s Unsafe at Any Speed.
The same sort of issue shows up in decision making, as part of what psychologists have more recently labeled “confirmation bias.” Once people start to head to a conclusion, they look for information that confirms their decision and ignore anything that contradicts it. A simple experiment shows how this works, and how it can lead to errant conclusions. P. C. Wason had people try to figure out a rule that generated a number sequence. He told participants that the sequence 2-4-6 fit the rule. Participants could get additional information by asking whether some other sequence also fit the rule. They could ask about as many other sequences as they wanted. Once they felt they knew the rule, they offered a guess. In one version of the experiment, only six of twenty-nine people were right on the first guess. In a later version, fifty-one out of fifty-one were wrong.
The problem: Once participants felt they had a reasonable guess at the rule, they asked about sequences that would confirm that they were correct. If someone decided the rule was “ascending, consecutive even numbers,” he’d ask if 8-10-12 satisfied the rule. If someone decided the rule was “ascending even numbers,” he’d ask if 10-22-30 satisfied the rule. In fact, the rule was simply “ascending numbers,” but very few got to that conclusion because they didn’t ask about sequences that would contradict their ideas. If participants wanted to be sure that “ascending, consecutive even numbers” was the rule, they needed to come up with a sequence of ascending even numbers that weren’t consecutive, or ascending odd numbers, or descending even numbers. Just asking about multiple sequences that they thought were correct wasn’t going to get them to the right conclusion. They had to ask about sequences that they thought were wrong, but the confirmation bias prevented them from doing so.¹²
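The logic of the experiment is easy to simulate. Here is a minimal sketch, assuming (as in the description above) that the hidden rule is simply “ascending numbers” and that the subject’s working hypothesis is “ascending, consecutive even numbers”; the function names are ours. It shows that probes the subject expects to pass can never separate the two rules, while probes the subject expects to fail can.

```python
# A small simulation of the 2-4-6 task described above. The hidden rule
# is "any ascending sequence"; the subject's hypothesis is "ascending,
# consecutive even numbers." Probes that fit the subject's hypothesis
# always get a "yes" and so can never expose the hypothesis as too
# narrow; probes the subject expects to fail are the informative ones.

def hidden_rule(seq):
    """The experimenter's actual rule: strictly ascending numbers."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def subjects_hypothesis(seq):
    """A typical premature guess: ascending, consecutive even numbers."""
    return all(n % 2 == 0 for n in seq) and all(b - a == 2 for a, b in zip(seq, seq[1:]))

confirming_probes = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]
disconfirming_probes = [(1, 3, 5), (2, 3, 4), (10, 22, 30)]

for probe in confirming_probes + disconfirming_probes:
    fits_rule = hidden_rule(probe)
    fits_guess = subjects_hypothesis(probe)
    print(f"{probe}: fits hidden rule={fits_rule}, "
          f"fits subject's guess={fits_guess}, "
          f"tells the rules apart={fits_rule != fits_guess}")
```

Every confirming probe comes back “yes,” exactly as the subject expects, and teaches nothing; only the probes the subject believes should fail can reveal that the guessed rule is too narrow.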
The confirmation bias is so strong that people often won’t admit they’re wrong even when it’s clear to the rest of the world that they made a knuckleheaded choice. A classic example is a question asked by Mickey Schulhof, former president of Sony USA, following a write-off on Sony’s acquisition of Columbia Pictures. (An acquisition made, as it happens, based on a flawed analogy. Sony looked at the computer industry and saw the tight coupling of hardware and software, so it decided that it needed movies—a sort of software—to run on its consumer-electronics hardware.) Pressed about the write-off, Schulhof said, “What makes you think the Sony acquisition of Columbia Pictures was a corporate blunder?”¹³
Um, because you wrote off $3.2 billion, almost the total value of the purchase of Columbia?
Although science is supposed to be the most rational of endeavors, it constantly demonstrates confirmation bias. Thomas Kuhn’s classic book, The Structure of Scientific Revolutions, details how scientists routinely ignore uncomfortable facts. Ian Mitroff’s The Subjective Side of Science shows at great length how scientists who had formulated theories about the origins of the moon refused to capitulate when the moon rocks brought back by Apollo 11 disproved their theories; the scientists merely tinkered with their theories to try to skirt the new evidence. Max Planck, the eminent physicist, said scientists never do give up their biases, even when those biases are discredited. The scientists just slowly die off, making room for younger scientists, who didn’t grow up with the errant biases.¹⁴ (Of course, the younger scientists have biases of their own that will eventually be proved wrong but won’t be relinquished.)
An interesting book by Charles Perrow, Normal Accidents, has numerous examples of disasters caused by confirmation bias. Take, for example, a dam that was planned for the Snake River. The plans had been approved and funded when geologists found a problem. The area experienced earthquakes, and building a dam would make earthquakes more likely. The geologists issued shrill warnings—at least, shrill in scientists’ terms—but the warnings got toned down as they passed up through channels, to the point where they could eventually be ignored. Other red flags were raised and ignored. In fact, the dam’s operator got permission to fill the dam at several times normal speed, even though that increased the stress on the dam. When the dam started falling apart, the problems were trivialized; earthmoving equipment was simply dispatched to patch the cracks that were appearing in the dam. Eventually, the earthmoving equipment got sucked into a whirlpool. The dam burst on June 5, 1976, killing eleven people and doing more than $1 billion in property damage.
In Human Universals Donald Brown suggests that the confirmation bias is wired into us. He says “mental maps” are universal, meaning we take new information and fit it into the map that already exists in our heads. We don’t typically use the new information to challenge the validity of what’s already in there.
Conformity
Normal Accidents blames conformity for a truly odd nautical phenomenon that bureaucrats refer to delicately as “non-collision-course collisions.” Stripped of the delicacy, the term means this: A captain took a vessel that was in no danger and deliberately changed his course in a way that caused him to collide with another vessel.
Why would he do this? No, drinking wasn’t involved. Nor was great fatigue or time pressure. Instead, the captain simply misjudged the relative movements of his vessel and the other vessel. Of the twenty-six collisions investigated in the book, nineteen to twenty-four would not have occurred except for a last-minute course change by a captain. Others on board read the situation correctly, but they either assumed the captain knew what he was doing or were afraid to contradict him—until it was too late. Perrow writes, “It is not unusual for a deck officer to remain aghast and silent while his captain grounds the ship or collides with another.”
In 1955, Solomon Asch published results from a series of experiments that demonstrated wonderfully the pressures to conform.
Asch’s experiments put a subject in with a group of seven to nine people he hadn’t met. Unknown to the subject, the others were all cooperating with the experimenters. An experimenter announced that the group would be part of a psychological test of visual judgment. The experimenter then held up a card with a single line on it, followed by a card with three lines of different lengths. Group members were asked, in turn, which line on the second card matched the line on the first. The unsuspecting subject was asked toward the end of the group. The differences in length were obvious, and everyone answered the question correctly for the first three sets of cards. After that, however, everyone in the group who was in on the experiment had been instructed to give a unanimous, incorrect answer. They continued to agree on a wrong answer from that point on, except for the occasional time when they’d been instructed to give the correct answer, to keep the subject from getting suspicious. As Asch put it, subjects were being tested to see what mattered more to them, their eyes or their peers.
The eyes had it, but not by much. Asch said that, in 128 runnings of the experiment, subjects gave the wrong answer 37 percent of the time. Many subjects looked befuddled. Some openly expressed their feeling that the rest of the group was wrong. But they went along.
Interestingly, Asch found that all it took was one voice of dissent, and the subject gave the correct answer far more frequently. If just one other person in the room gave the correct answer, the subject went along with the majority just 14 percent of the time—still high, but not nearly so bad.
Asch wrote: “That we have found the tendency to conformity in our society so strong that reasonably intelligent and well-meaning young people are willing to call white black is a matter of concern. It raises questions about our ways of education and about the values that guide our conduct.”¹⁵
Following in Asch’s footsteps, Stanley Milgram conducted a disturbing experiment into the influence of authority figures. It was 1961, and Adolf Eichmann had just gone on trial for war crimes committed as a senior member of Hitler’s staff during World War II. Milgram wondered why so many Germans had gone along with the atrocities and decided to test to see if we might all react to superiors the way many Germans did. He brought in people who had been paid $4.50 to participate in what was billed as a test of how feedback can improve learning. Each subject was paired with an actor who was in on the experiment. The subject was always the “teacher” and was given a series of words to teach the “learner,” played by the actor, who was positioned on the other side of a wall. The subject was told that, when the “learner” was prompted for a word and gave the wrong answer, the subject was to turn a dial, which would give the “learner” an electric shock. The subject was then given a 45-volt jolt to get a sense of the shocks he’d be administering.
During the testing, the dial was actually connected to prerecorded sounds of people acting as though they’d been jolted; no shock was being administered to the “learner.” When the “learner” began making his scripted mistakes, an experimenter ordered the “teacher” to administer a shock. When the mistakes continued, the experimenter told the “teacher” to increase the voltage, then increase it again and again and again. The “teacher” heard moans, yelps, and sounds of protest. As the voltage continued to increase, the “learner” would pound on the wall and say he was afraid he was going to have a heart attack. In the face of all this, “teachers” often complained to the experimenter about having to continue. Some got up and paced around. Some offered to return the money they’d been paid, if they could just stop the jolts. But the experimenter ordered them to continue, and they did. No one stopped before reaching the 300-volt level on the dial. And 65 percent went all the way to the 450-volt level, which they’d been told at the start of the experiment was potentially a fatal shock.¹⁶
We draw two conclusions from these and related experiments:
• First, never trust a social scientist. Whatever he tells you he’s testing surely isn’t what’s really going on.
• Second, our psyches lead us to go along with our peers and to conform, in particular, to the wishes of authority figures. In the Asch experiment, the test was simple, the answers were obvious, and the subject had no prior ties to the rest of the group—yet subjects went along with the group to a surprising degree. Imagine how much greater the pressures are in a business setting, when the subject is complicated, when the answers aren’t clear, and when there are social and economic bonds that tie a group together.
Again, Brown says in Human Universals that the tendency to conform is built into us, based on his findings that “in-group/out-group,” “socialization,” and “status” appear in all cultures. “In-group/out-group” means that people form groups with those they feel close to, and avoid those outside the groups. “Socialization” and “status” mean people also focus heavily on their interactions with others and their status within their groups. From a business standpoint, these three universals suggest that even senior executives, as bright and decisive as they typically are, may value their standing with their peers and bosses so highly that they’ll conform to the group’s wishes. Executives may not raise objections even when they see a strategy is flawed. (Some scientists have recently argued that conformity is built into us through evolution—people with a “conformity gene” were more likely to band together and dominate those who chose to go it alone.)
Brown also says every culture reveres leaders. This high esteem can, obviously, be great for a company. You can’t go anywhere very quickly if the troops won’t follow the generals. But the reliance on leaders can contribute to conformity and let bad ideas go unchallenged.
Peter Drucker once cautioned against trying too hard to find strong leaders, noting, “The three greatest leaders of the twentieth century were Hitler, Stalin and Mao.”¹⁷
Overconfidence and Defense Mechanisms
Call it the Lake Wobegon effect. Just as Garrison Keillor’s fictional town is a place “where all the women are strong, all the men are good-looking, and all the children are above average,” our psyches make us overly confident of our abilities and resist attempts to teach us otherwise.
For example, a simple experiment, repeated many times, has shown how bad we are at estimating. People are asked to think about any of a variety of quantifiable things—how many manhole covers there are in the United States, how many physical books are in the Library of Congress, whatever. Then people are asked to provide a range of estimates in which they believe they have a 98 percent chance that the right answer falls within their range. The first time the experiment was done, instead of the right answer falling outside just 2 percent of the ranges, it fell outside the ranges set by 45 percent of the participants. More typically, 15 percent to 30 percent are wrong.¹⁸ (The subjects of the initial experiment were Harvard Business School students. Make of that whatever you like.)
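A quick way to see what an exercise like this measures: collect each person’s range along with the true answer, count how often the truth falls outside the range, and compare that miss rate with the 2 percent that a genuine 98 percent confidence range implies. The sketch below uses invented ranges and a few publicly known quantities purely for illustration; none of it is data from the study itself.

```python
# Score a set of "98 percent confident" ranges: if the label is earned,
# the true answer should fall outside a range only about 2 percent of
# the time. The ranges below are invented for illustration; they are
# NOT data from the study described above.
ranges = [
    # (low estimate, high estimate, true value)
    (1_000, 3_000, 5_280),             # feet in a mile
    (500_000, 5_000_000, 93_000_000),  # miles from Earth to the Sun
    (60, 100, 88),                     # keys on a standard piano
    (5_000, 15_000, 24_901),           # Earth's circumference in miles
]

misses = sum(1 for low, high, truth in ranges if not (low <= truth <= high))
rate = 100 * misses / len(ranges)
print(f"missed {misses} of {len(ranges)} ranges ({rate:.0f}% vs. the intended 2%)")
```

Ranges drawn too narrowly, out of confidence in one’s own estimate, are exactly what produce miss rates of 15 to 45 percent instead of 2 percent.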
Yet we don’t seem to realize our inadequacies. In Sweden, 94 percent of the population believe that they are above-average drivers.¹⁹ In France, 84 percent of the men say they are above average as lovers.²⁰
Perhaps because it’s more fun to twit experts than to go after Swedish drivers or even French lovers, many studies have focused on experts. The findings: Even experts aren’t terribly good in their fields—and don’t know it. As far as we can tell, no one has done a systematic study of overconfidence among executives, but plenty of studies have been done in technical fields, because of the real-world consequences of miscalculations. For instance, some MIT engineers took advantage of an abandoned roadway project and tested some road-construction experts. The engineers chose a straightforward question: How much landfill can be piled on top of a clay foundation in a marshy area as the base for a road? In this situation, the engineers had the luxury of being able to get a definite answer. They piled on the landfill until the clay foundation collapsed. The answer was that 18.7 feet of landfill was the maximum the foundation could support. The engineers provided all the relevant information about the clay foundation to the experts and asked for estimates. None of the experts was close. The MIT engineers then asked for a range, so that the experts felt they had at least a 50 percent chance of having the right answer within their range. None of the ranges included the correct answer. In other words, all the experts had confidence in their guesses, and all were wrong.²¹
In The Wisdom of Crowds, James Surowiecki writes: “The between-expert agreement in a host of fields, including stock picking, livestock judging, and clinical psychology, is below 50 percent, meaning that experts are as likely to disagree as to agree. More disconcertingly, one study found that the internal consistency of medical pathologists’ judgments was just 0.5, meaning that a pathologist presented with the same evidence would, half the time, offer a different opinion. Experts are also surprisingly bad at what social scientists call ‘calibrating’ their judgments. If your judgments are well-calibrated, then you have a sense of how likely it is that your judgment is correct. But experts are much like normal people: They routinely overestimate the likelihood that they’re right.”
Studies show that experts are actually more likely to suffer from overconfidence than the rest of the world. After all, they’re experts.²²
In the business world, this overconfidence shows up all over the place. For instance, The Three Tensions: Winning the Struggle to Perform Without Compromise cites a Bain study in which 80 percent of companies thought their products were superior to their competitors’—even though only 8 percent of customers agreed.
Human Universals says every society shows what the book calls “overestimating objectivity of thought.” We humans aren’t as rational as we think we are. The study also says every culture demonstrates “risk-taking.” It seems that teenage boys aren’t the only ones who think they’re invincible. In every culture, taking risks is seen as bold and admirable. It’s so much more fun to go all in and bully an opponent off a hand in no-limit hold ’em than it is to fold a hand because you’re surely beaten. It’s hard to get people to back away from bad risks.
Brown’s study also found that “self-image” and “psychological defense mechanism” are universal. By those terms, he means that people think highly of themselves even if they shouldn’t and that people blame problems on bad luck rather than taking responsibility and learning from their failures. Our rivals may succeed through good luck, but not us. We earn our way to the top.
In the business world, a long-term study of Exxon Corporation executives found that “cognitive maps that explain poor performance contain significantly more assumptions about the environment while those that explain good performance contain more assumptions dealing with the effects of executives’ actions.”²³ To translate that to English: If something went wrong, the executives blamed the business climate; if something went right, they took credit.
Look at how most of us think about our golf games. According to the USGA, American male golfers say they hit their drivers an average of 236 yards. The reality: 191 yards. When a man hits a good drive, he thinks that’s the real him; he dismisses bad drives as aberrations.
Or look at the business press. When someone fails, he’s crucified, even if he’s a bright guy who had a sound strategy. When someone succeeds, he’s lionized. Michael Dell, for instance, is credited with having a grand vision way back in the 1980s about selling people made-to-order computers. His was a great business model for a long time and may yet be again. But here’s a secret: He was lucky.
In 1986, when one of us—Paul—first started covering the computer beat for the Wall Street Journal, he went to the huge Comdex trade show in Las Vegas and, being new, was willing to meet with almost anyone. That’s how he spent an hour with a twenty-one-year-old college dropout who had started selling computers out of his dorm room at the University of Texas—in other words, the young Michael Dell. Dell was shooting for publicity because PC retailers were stocking only the major brands, and Dell was desperate to get shelf space in stores. Sure, he sold computers through mail order, which eventually morphed into his make-to-order, Internet-based sales machine. But he’d have happily dumped mail order if he could just get the big chains to carry his goods. He deserves tons of credit for taking advantage of his luck, developing a respected mail-order brand name just in time for the Internet, but it is revisionist history to say he always intended to pursue his current model.
The tendency to take credit, fairly or not, is reinforced by what psychologists call the “narrative fallacy.” That is the tendency to construct stories to explain events, even when no story exists. If we succeed, it must be because we’re talented, right?
One study took teams of MBAs and had them predict companies’ financial results based on the prior year’s annual report. Ten of the teams were told that their results had been exceptionally good. Ten were told that their results were substandard. In fact, the results for each group of ten were almost identical. (Remember, don’t believe what social scientists tell you when they’re conducting a study.) But the groups that were told they did well rated themselves much more highly on interaction, leadership, and several other factors than did the teams that were told they were below average. The “success” needed explanation.
We make up stories about our pasts, too—stories that may contradict the facts but that justify our opinions of ourselves. Example: While it’s been widely found that some 70 percent of corporate takeovers hurt the stock-market value of the acquiring company, studies find that roughly three-quarters of executives report that takeovers they were involved in had been successes.²⁴
Those executives were, no doubt, sincere. We’re all simply wired to think highly of ourselves and our efforts, so we don’t dwell on possible failings—and don’t learn from them.