Chapter 4. The Optimism Bias

In the rarefied air of New York City real estate, few moguls stood taller in 2001 than Larry Silverstein. Over the years, Larry had assembled an impressive portfolio of marquee properties. In late July, he was poised to acquire the crown jewel of his collection: the twin towers of the World Trade Center (WTC). After months of bargaining, he had finally secured from the Port Authority of New York and New Jersey a 99-year lease priced at $3.2 billion, an amount that included $14 million of his own money.37

The buildings were known to carry risk. Just eight years earlier, terrorists had detonated 1,500 pounds of explosives in the basement of the North Tower, killing six people and injuring more than 1,000, and the years since had seen a growing threat of terrorist attacks worldwide. In addition, a report commissioned just before the Silverstein acquisition highlighted a long list of things that could go wrong with the buildings, one being the possibility of an airliner crashing into one or both of the towers.38 Still, whatever risks the WTC posed, they were not enough to dissuade Silverstein from purchasing the buildings or to dissuade tenants from renting office space. Just the year before, for example, the WTC had experienced an all-time-high occupancy rate.39 Whatever the risks of acquiring the WTC, all the relevant parties saw them as small relative to the financial upside.

The risks posed by the WTC also did not serve as a barrier to Silverstein’s securing insurance coverage. When he acquired the buildings, a consortium of 22 insurers provided him with $3.55 billion in coverage should the WTC suffer any future damage.40 What was most interesting about these policies, however, was not their existence but how they were written. Despite the history of the buildings as a target for terrorism, when drawing up the contract, insurers were content to lump losses due to a future terrorist attack under a standard “all-other perils” clause, meaning that this risk was not priced independently when the premium was determined. In essence, insurers saw the chance of losses from another terrorist attack as sufficiently remote that it could be pooled with a wide range of other difficult-to-imagine events, such as the towers being struck by an errant meteor.

Of course, soon after Silverstein signed the final paperwork, the unthinkable did happen: On the morning of September 11, 2001, terrorists flew two commercial airliners into the towers, causing both to collapse and killing more than 2,700 people. Silverstein declared his intention to rebuild, though he and his insurers became embroiled in a multiyear dispute over whether the attack constituted one event or two. A settlement was reached in 2007, with insurers agreeing to pay out $4.5 billion.41 The combined losses from the disaster and the cost of rebuilding greatly exceeded the insurance settlement.

Silverstein’s misfortunes, however, did not end with the 9/11 attack. On December 11, 2008, a financier named Bernard Madoff was arrested in New York on charges of securities fraud. For more than a decade, Madoff had been running an investment securities firm that promised selected investors a high rate of return with virtually no variance, one that by 2008 claimed more than $65 billion in paper assets.42 The firm, however, was a Ponzi scheme, one that would inflict billions of dollars in losses on hundreds of his clients. When the news media released a list of celebrities and prominent investors who had been victimized by the scam, one familiar name again stood out: Larry Silverstein.43 A low-probability event had once again taken its toll.

While vastly different in nature and scope, the 9/11 attacks and the Madoff scandal have one thing in common: Both illustrate the catastrophic outcomes that can occur when one does not fully consider all the consequences of foreseeable risks. While it would have been impossible to put a precise probability on the chance of a terrorist attack prior to 9/11, or the odds that Bernard Madoff was running a Ponzi scheme, in both cases information was available to officials that, if properly attended to, might at least have lessened the scale of the losses. After the Madoff scandal, for example, the Securities and Exchange Commission conceded that it had information indicating that a fraud was in the works up to nine years before the event,44 and in the wake of the World Trade Center disaster, the 9/11 Commission pointed to numerous similar intelligence failures.45

In both cases, Silverstein’s decisions were fueled by psychological factors that should have played little (if any) role: the emotional lure of winning a bidding war in the WTC purchase and the trust that Madoff would never run a Ponzi scheme.

In this chapter, we try to explain why individuals and organizations often err when forming assessments of the likelihood of rare events—why Larry Silverstein was one of many who underestimated the risk of a terrorist attack on 9/11, and why he and so many other investors failed to recognize that something was not right with Bernard Madoff’s investment scheme. While economics and statistics teach us how we should think about probability and outcomes when choosing between alternatives, we rarely follow these principles when actually making decisions. More often than not, we make choices under risk intuitively rather than deliberatively.

How We Should, versus Do, Think About Probability

When statisticians use the term probability, they have something very specific in mind: a long-run relative frequency. A simple example is a coin toss: If we flip a fair coin a large number of times, it will come up heads about half the time; hence, we say the odds of either outcome are 50-50. This is a precise number that one can determine mathematically. But it is unusual to be able to assign such precise probabilities to uncertain events, particularly rare ones. While we might be able to define the event (e.g., a terrorist attack), it is difficult if not impossible to define the complete set of alternative possibilities necessary to compute precise odds. The perceptions we form about risk are thus more a cognitive cocktail of objective facts, subjective feelings, and emotions—a blend that often causes beliefs about risk to stray widely from those a statistician might prescribe.

Consider, for example, the case of insurers writing policies for the World Trade Center prior to the 9/11 attacks. While all knew that there was some risk that the buildings could be damaged by a terrorist attack, it was impossible at the time of Silverstein’s purchase of the property to assign a precise probability to such an event. In the absence of estimates based on past data, the underwriters did the only thing they could: They relied on their subjective beliefs about the likelihood of a future attack that would damage or destroy property and assumed that it was highly unlikely.

While there are many reasons that subjective beliefs about probability stray from those a statistician would formulate, psychologists have identified three main ones: the availability bias, or the tendency to ground beliefs about risk in how easy it is to imagine bad outcomes; the optimism bias, or the tendency to believe that we are more immune than others to bad outcomes; and the compounding bias, or the tendency to underestimate cumulative risk.

The Curse of Intuition: The Availability Bias

In a 1980 airing of his late-night talk show, Johnny Carson made famous a probability puzzle known as the birthday paradox, which works like this. Imagine a studio audience of 70 people, each of whom is asked to announce the day and month on which he or she was born. “What are the chances that at least one pair among the 70 people shares the same birthday?” Intuition says that the odds of this happening are small. After all, it is rare to run into someone with the same birthday as ours. For this reason, many people would find it surprising that in an audience of 70, it is a virtual certainty that two will have the same birthday; the probability is 99.9%.46

Why does our intuition fail us in this case? The reason is that our minds instinctively lead us to solve the problem the same way we estimate the odds of most things in life: by trying to imagine how often the event occurs in our own experience. In the case of the birthday problem, an individual is likely to focus on the number of times he or she has met someone born on the same day and month as his or her own. The likelihood of finding such a match is very small, even among 70 people.47 The birthday paradox, however, poses a different question: the probability that any pair of individuals in a group of 70 shares a birthday.48
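For readers who want to verify the arithmetic, the short sketch below (our own illustration, not drawn from the cited source) computes both quantities, assuming 365 equally likely birthdays and ignoring leap years: the chance that someone in a 70-person audience shares your particular birthday, and the chance that any two audience members share one.

```python
# An illustrative check of the two birthday questions (our own sketch, not part
# of the original study). It assumes 365 equally likely birthdays and ignores
# leap years.

def prob_any_shared_birthday(n: int) -> float:
    """Probability that at least one pair among n people shares a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

def prob_someone_shares_my_birthday(n: int) -> float:
    """Probability that at least one of n other people shares my birthday."""
    return 1 - (364 / 365) ** n

print(prob_any_shared_birthday(70))          # ~0.999: a virtual certainty
print(prob_someone_shares_my_birthday(70))   # ~0.175: why intuition says "unlikely"
```

The first quantity exceeds 99.9% because there are 70 × 69 / 2 = 2,415 possible pairs that could produce a match; the second stays below 20% even with 70 people, which is why intuition balks.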

The tendency to estimate the likelihood of a specific event occurring on the basis of our own personal experience has been termed the availability bias, and it helps explain why we often have distorted perceptions of risk in a wide variety of settings.49 With respect to the risk of terrorism, in the year prior to 9/11, insurers gave little thought to the possibility that terrorists could bring down both towers of the WTC, simply because such a catastrophic event would have been hard to imagine. It had been six years since any such major incident had occurred in the United States (the Oklahoma City bombing), the federal government had invested substantial resources in detecting and preventing repeats of such attacks, and it was hard to envision the possibility that two of the largest office buildings in the United States could collapse due to coordinated plane hijackings.

After the event occurred, the availability bias had the opposite effect, producing subjective beliefs that the risk of terrorism was much higher than was actually plausible—a distortion that had its own destructive effects. Insurers now perceived terrorism as a highly salient risk and viewed another attack as extremely likely in the near future. As such, many concluded that it was an uninsurable risk. The few insurers who were willing to offer coverage were able to charge very high premiums because commercial enterprises were overly concerned about being protected. One firm, for example, paid $900,000 to protect itself against $9 million in damage to its facility from a terrorist attack in the coming year. If one calculates the implied odds of a terrorist attack damaging the building based on this insurance premium, it is 1 in 10 ($900,000 / $9 million), an estimate almost certainly much larger than the actual probability.
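As a back-of-the-envelope illustration (ours, using the figures just quoted), the implied odds follow from treating the premium as if it were actuarially fair, that is, equal to the probability of loss multiplied by the amount insured:

```python
# A rough illustration (ours) of how a premium implies a probability estimate.
# If the policy were actuarially fair, premium = probability * insured loss, so
# the implied probability is premium / insured loss. Figures are those quoted above.

premium = 900_000         # annual premium paid by the firm
insured_loss = 9_000_000  # coverage against terrorism damage to the facility

implied_probability = premium / insured_loss
print(implied_probability)  # 0.1, i.e., a 1-in-10 chance of a damaging attack within the year
```

Because real premiums also include administrative costs and profit loading, the insurer’s true perceived probability was presumably somewhat below 1 in 10, yet still far above any plausible estimate of the underlying risk.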

In a similar fashion, a large number of potential air travelers reacted to their newly elevated fears of terrorism by getting into their cars to reach destinations to which they would ordinarily have taken a plane. In 2002, for example, air passenger miles fell between 12% and 20%, while road use surged50—a change that Garrick Blalock and colleagues concluded resulted in the needless loss of 2,300 lives.51

What travelers overlooked at the time, of course, was that the risk of disaster while driving or riding in a car is much higher than that of flying, even given a perceived heightened risk of a terrorist attack overall. In the year after 9/11, though, images of planes flying into the World Trade Center would have been much easier to bring to mind than auto crashes on interstate highways—it is the salience of imagined events, not actuarial odds, that is key in driving behavior. The statistical data reveal that there is roughly a 1 in 11 million chance that a person will be killed in a plane crash; compared on a per-mile basis, the likelihood of being killed in a car is 720 times greater than that of flying.52

This psychic boomerang associated with the availability bias has been widely documented in other contexts, such as decisions about whether to buy insurance. Just prior to the 1989 Loma Prieta earthquake in California, only 22% of homeowners in the Bay Area had earthquake coverage. Although residents knew they were living in a seismically active region, it had been 83 years since a quake of magnitude 7 or higher had hit the area—the great San Francisco earthquake took place in 1906, long before most were born. But the sudden realization that severe quakes can happen—the Loma Prieta quake registered 7.1 on the Richter scale—caused residents to sharply elevate their subjective beliefs about the risk. Four years later, 36.6% of residents had purchased earthquake insurance—an increase of roughly two-thirds in the share of homes covered.53 Similarly, the Northridge, California, earthquake of 1994 led to a significant demand for earthquake insurance. For example, more than two-thirds of the homeowners surveyed in Cupertino, California, had purchased earthquake insurance by 1995.54 There have been no severe earthquakes in California since Northridge, and only 10% of homeowners in seismic areas of the state have earthquake insurance today. If a severe quake hits the Bay Area or Los Angeles in the near future, the damage could be as high as $200 billion, and most homeowners suffering damage will likely be financially unprotected.55

“It Will Not Happen to Me”: The Optimism Bias

A related error arises when people form perceptions of risk: the tendency to believe that they are more immune than others to threats. This is the optimism bias. A classic study of this effect was published by Neil Weinstein in 1980.56 He asked people to estimate the probability of uncertain future life events (such as the chance that one’s marriage would end in divorce, that one’s car would be stolen, or that one would be diagnosed with lung cancer) in one of two ways: the probability for the individual respondent and the probability for an average person in the population. In virtually all cases, people saw their own odds of escaping such misfortunes as much higher than those of others; divorce, after all, is something that happens to other couples.

The optimism bias helps explain an apparent paradox that has surfaced in studies of how people prepare for natural hazards: a tendency to concede that a hazard is likely to occur, but to take limited or no personal action to reduce the potential damage. In a recent study conducted by one of the authors57 on hurricane preparedness in advance of Hurricane Sandy, coastal residents in New Jersey believed forecasts that their community was about to be hit by a bad storm. In fact, they thought the storm would be more severe than it actually was. For example, some believed that there was more than an 80% chance their homes would experience sustained hurricane-force winds. In contrast, the National Hurricane Center’s estimates of the probability of hurricane-force winds striking Atlantic City were never more than 32%. But here is the surprise: When asked what actions they were taking to prepare for the storm, most residents displayed a remarkable laxness. Only slightly more than half of respondents who had removable storm shutters on their houses indicated that they were putting them up. Only 21% indicated that they had plans in place if they needed to evacuate.

The reason for this apparent disconnect became clear when residents were then asked to estimate the probability that their homes would suffer property damage as a result of the hurricane. Their estimates were less than half of those they gave when asked about the probability that they would experience damaging winds. In a nutshell, while residents believed that a hurricane was coming and that it would be bad, they also had faith that when it arrived they would personally escape harm.

There are two principal reasons people are excessively optimistic that harm is something that happens to other people. One is the availability bias we’ve just discussed. Most of the time, harm does come to others; therefore, instances in which we did not experience harm come to mind much more readily than those in which we did. If a storm is approaching, we are more likely to think of damage in other places—floods in New Orleans, tornadoes in Oklahoma. It would be hard to imagine a storm surge from a hurricane inundating our home, or strong winds detaching the roof, if we have not experienced such a disaster before. The media play a key role in this regard by highlighting the damage that occurs in disasters with graphic photos of other people’s misfortunes.

Yet this is only half the story. The second, and more serious, reason for people’s excessive optimism is that we are also prone to construct scenarios that we hope will happen. We shut out images of our living room being underwater, of our homes’ roofs being blown off. We would much prefer to think of the ways that we will escape harm rather than experience it. Psychologists term this effect motivated reasoning; that is, a tendency to selectively gather and process information that is most congruent with a desired goal or outcome.58

A tragic example of this behavior occurred during the great Labor Day Hurricane of 1935, when 257 World War I veterans lost their lives in the Florida Keys awaiting an escape train that never arrived. Knowing that the barracks the veterans were staying in would not be strong enough to survive a hurricane, officials ordered a train to be sent from Miami to evacuate them. A fatal mistake was made, however, in deciding when to send the train. The official in charge of the evacuation optimistically focused on how much time would be required to get to the Florida Keys and back under normal circumstances—not during a hurricane on Labor Day. He overlooked, for example, the fact that it would be difficult to quickly round up a crew on a holiday and that drawbridges would be open as boaters tried to move their vessels to safety. When the train finally made it to the Keys, it was too late; the storm’s tidal surge had begun to submerge the tracks, and evacuation was impossible. At daybreak, it was discovered that most of the veterans in the camps had perished, along with more than 200 other Keys residents.

Underestimation of Cumulative Risk: The Compounding Bias

Another source of error arises when people perceive the risk of rare events: a tendency to focus on the low probability of an adverse event in the immediate future rather than on the relatively high probability over a longer time period. Consider again the case of the World Trade Center. On the eve of 9/11, the risk of a terrorist attack was probably the furthest thing from most people’s minds for good reason: The odds of such a thing happening the next day were extraordinarily small. If someone had told investors, for example, that there was a 1-in-100 chance that a catastrophic terrorist attack would occur in any year during the lease of the WTC, Silverstein might have given this risk some consideration but still assumed it was not worth worrying about. But here’s the rub: While a 1-in-100 chance of a disaster occurring in any one year is indeed quite small, extrapolated out over the life of a 99-year lease, that same risk becomes quite large. To be precise, there would be a 63% chance that a catastrophic attack would happen at least once in 100 years.59 That is a risk that an investor would likely take more seriously.
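The arithmetic behind that 63% figure is simple compounding. The sketch below (our own illustration, which assumes the 1-in-100 annual probability is constant and that years are independent) makes the calculation explicit:

```python
# Cumulative probability of at least one occurrence over many years (our own
# sketch), assuming a constant annual probability and independence across years.

def cumulative_risk(annual_probability: float, years: int) -> float:
    """Chance that the event happens at least once over the given horizon."""
    return 1 - (1 - annual_probability) ** years

print(cumulative_risk(0.01, 1))    # 0.01  -- easy to dismiss in any single year
print(cumulative_risk(0.01, 100))  # ~0.63 -- hard to ignore over a century
```

Viewed this way, an annual risk that seems remote can come to dominate the economics of a 99-year lease.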

This kind of oversight is commonplace in day-to-day life. If, one morning, we see an aged tree hanging over our house and are trying to decide whether to hire an arborist to prune it, we might well feel there is no urgency to the matter: The odds that the tree will fall that day are, after all, very small. But, of course, this is not the relevant calculation one should make. One should be thinking about the probability that the tree will fall sometime in the foreseeable future, and act on that likelihood.

There are two principal reasons that people are prone to underestimate long-term risk. First, when trying to compute compound probabilities, our brains tend to be overly influenced by short-term considerations, which come most easily to mind, an effect termed the anchoring bias. To understand this, consider the problem of calculating the odds that a tree will fall on your house over the several years you will be living there. One quick way to do this would be to think of a probability you can be reasonably confident about, such as the probability that the tree will fall today, and then adjust that probability upward. This upward adjustment, however, is likely to be insufficient. Our initial estimate of the probability of the tree falling today, which is very small, will unduly anchor our estimate of the probability that it will fall at some point over the course of several years.

Another reason for the focus on short-term risk is the way others present it to us. To illustrate, consider the case of flood insurance. In the United States, the Federal Emergency Management Agency (FEMA) operates the National Flood Insurance Program, which communicates flood risk to homeowners in terms of expected return periods. For example, a homeowner might be shown a map indicating that his residence lies in the “100-year floodplain,” where a damaging flood might be expected once a century. Is this a high risk? We conjecture that most people would not see it that way; after all, time scales spanning decades are hard enough to grasp mentally, much less a century. The tendency to ignore this risk might be exacerbated if the location recently experienced an actual flood. In that case, the “once-in-a-century” reference might wrongly be construed as implying that the home is safe for another 99 years. Yet note that if the National Flood Insurance Program told homeowners that, over the course of a 25-year period, the chances of at least one flood occurring were greater than one in five (that is, the same risk as once in a century, but conveyed in a mentally more manageable way),60 homeowners might suddenly be far more concerned and see flood insurance as worth purchasing.
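The “greater than one in five” figure follows from the same compounding arithmetic shown earlier (again our own illustration, with the same simplifying assumptions of a constant annual chance and independence across years):

```python
# The "100-year flood" restated as a 25-year cumulative probability (our own
# sketch), assuming a constant 1-in-100 annual chance and independence across years.

annual_probability = 1 / 100
years = 25

chance_of_at_least_one_flood = 1 - (1 - annual_probability) ** years
print(chance_of_at_least_one_flood)  # ~0.22, i.e., greater than one in five
```

Framed over a 25-year horizon, the same underlying risk becomes much harder to dismiss.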

Recap: The Bane of Optimism

One of the major challenges of preparing for catastrophic events is that, unlike with flipping a coin or playing roulette, we cannot assign precise probabilities to potential disasters. When Larry Silverstein, the banks, and insurers were poised to support the purchase of the World Trade Center in July 2001, there were no actuarial tables that listed the probability of a terrorist attack, nor even good statistical models from which they could have derived reliable estimates. Nor do these exist today. While we usually cannot assign precise probabilities to rare events, good decisions can still be made if our subjective estimates are well calibrated—that is, if our estimates, while sometimes being too high and sometimes too low, on average tend to converge to their objective values.

The key lesson of this chapter is that our perception of the likelihood of rare adverse events will often stray widely from objective risk levels due to a blend of three biases: the availability bias, the tendency to equate the likelihood of an event with its salience and the ease with which it can be imagined; the optimism bias, the tendency to believe we are more immune than others to adverse events and thus to treat the risk as below our threshold level of concern; and the compounding bias, the tendency to focus on the low probability of an adverse event happening in the immediate future rather than on the relatively high probability of its occurring over a longer time period. Taken together, these three systematic biases can lead to the kinds of misjudgments illustrated in this chapter: a mistaken belief that the risk of catastrophe is too low to worry about or, if disaster does occur, that we will be immune from its worst effects.

Before we move on, we might note that all the biases we have talked about so far (myopia, amnesia, and optimism) might be overcome if we had effective heuristics, or rules of thumb, for investing in protection—that is, rules that required us to mentally assess the costs and benefits of obtaining that protection. In practice, however, when we are faced with choices about preparedness, the heuristics we rely on tend to make selective use of the information available to us. Over the next three chapters we will explore the nature of the systematic biases associated with choice and their impact on decision making under conditions of risk and uncertainty.