The “It Can’t Happen to Me” Syndrome
KENNETH ARROW IS WIDELY ACKNOWLEDGED TO BE ONE OF THE founding fathers of modern economic theory. Over a period of a decade he and I codirected the Jerusalem Summer School for Economics, which annually attracts leading researchers and doctoral students from around the world. I was thus privileged with the opportunity to engage in a very large number of deep and lengthy conversations with him, many of them on the philosophy of decision making. In one of those conversations he related a story that he had heard at a lecture by the statistician and operations researcher Merrill M. Flood.
At one point during the long fight against Japan, Flood’s research unit was tasked with suggesting solutions to a dilemma. The US military attached great strategic importance to conquering the Pacific island of Saipan, about 2,000 miles from Tokyo and held at the time by the Japanese, because it would enable the construction of a forward refueling base for bombers on their way to attacking targets on the Japanese mainland. The direct conquest of the island was to be accomplished by landing Marine invasion forces after a planned massive aerial bombardment of the island’s entrenched Japanese units by elite air squadrons.
Operational planners estimated that an immense amount of ordnance would be required for the aerial bombardment to achieve its aims. Dropping that quantity of explosives would require, in turn, that every pilot conduct several bombing sorties, taking off from and returning to an airfield quite distant from the island. Each such sortie exposed the pilots to significant risks from antiaircraft gunners and Japanese fighter planes. In addition, it was clear that the more bombs loaded onto a plane, the more effective each sortie would be. But adding bombs also increased the risks to the pilots. The sheer weight of the bombs, along with the weight of the fuel needed to get to the target and back, limited the plane’s maneuverability in the face of enemy fire.
Army Air Force staffers working jointly with Flood’s unit established an exact relationship between the weight borne by a plane and the risk to the pilot. Flood’s unit was charged with mathematically calculating the optimal way of getting the required amount of ordnance dropped on enemy forces while minimizing the expected number of pilot fatalities. The main dilemma was whether to conduct many low-risk sorties or a small number of high-risk sorties.
After several days of brainstorming, the unit concluded that there was one optimal solution that would minimize the expected number of total pilot fatalities while attaining the operational goals of the mission. All the researchers working on the problem unanimously agreed on the proposed solution, which was as follows: a lottery would be conducted among the pilots participating in the mission, selecting one quarter of them. Each of those lottery-selected pilots would then set out on one and only one bombing sortie, with his plane loaded as heavily as possible with bombs. With each plane maximally loaded, the entire mission could be accomplished by only a quarter of the pilots; the other three fourths would be relieved of any duty in the bombing mission. To enable the planes to get off the ground with that many bombs on board, however, each plane flying on the mission would carry only enough fuel for a one-way flight to the bombing target.
In other words: under the suggested plan one quarter of the pilots, chosen by lottery, were to be sent to their deaths, because flying over enemy territory without sufficient fuel to return to base is a suicide mission. In contrast, the other three fourths of the pilots would bear zero risk of death, because they would not be flying at all.
The calculations actually showed that this plan yielded the lowest overall risk to the flight squadron. Selecting by lottery those who would die with certainty meant that each pilot had a 75 percent chance of surviving. Under any other proposed solution studied by the researchers, the expected survival rate of each individual pilot was significantly lower.
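Where does the 75 percent figure come from, and why did every alternative fare worse? The following is a minimal back-of-the-envelope sketch, not Flood’s actual calculation: the per-sortie survival rate (95 percent) and the number of sorties each pilot would fly under a “share the load” plan (eight) are my illustrative assumptions, since the story does not give the original figures.

```python
# A hedged sketch of the pilots' dilemma. All numbers except the
# one-quarter lottery fraction are illustrative assumptions.

def survival_lottery(fraction_selected=0.25):
    """Lottery plan: selected pilots die with certainty; the rest never fly."""
    return 1.0 - fraction_selected

def survival_shared(per_sortie_survival=0.95, sorties_per_pilot=8):
    """Shared plan: every pilot flies every sortie; risks compound."""
    return per_sortie_survival ** sorties_per_pilot

print(f"Lottery plan: {survival_lottery():.1%} per-pilot survival")  # 75.0%
print(f"Shared plan:  {survival_shared():.1%} per-pilot survival")   # 66.3%
```

Under these assumed numbers, compounding even a modest per-sortie risk across eight sorties leaves each pilot with only about a 66 percent chance of surviving, worse than the lottery’s guaranteed 75 percent. That is exactly the structure of the researchers’ argument.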
The pilots, however, were unanimous in absolutely and categorically rejecting this plan. They preferred to divide the bombs equally among themselves, flying many more sorties and taking their chances against enemy fire instead of submitting to a lottery that would determine who would live and who would die. Fortunately, the whole discussion ended when the Marines captured the island of Iwo Jima, only about 600 miles from Tokyo, so that the trade-off between fuel and bombs became much less significant.

When I tell my students this story, they often react by claiming that the research unit’s proposed solution was immoral because it was inequitable. But they are wrong. It did attain the stated aim of saving the greatest number of lives under the expected conditions in a very equitable manner, because each pilot had an equal chance of being selected for a suicide mission under the lottery. In fact, the solution preferred by the pilots was less equitable because it did not give each pilot an equal opportunity for survival: the less-skilled pilots, or those who were unfortunate enough to have suffered a bad night of sleep prior to a bombing run, carried greater risks than the other pilots.
The explanation I prefer for the pilots’ refusal to accept the researchers’ solution is related to a phenomenon that has been much studied in the psychology and economics literature: overconfidence. Most of us, most of the time, fool ourselves into believing that we have greater capabilities than we actually do. This is the famous “it won’t happen to me” syndrome, which we recognize readily whenever we hear about someone else’s failure. If you fail to recognize it in yourself, conduct the following simple experiment with the participation of a group of friends or coworkers. Choose a skill that some members of the group are better at than others, such as driving or cooking. Ask the group members to rate their own abilities using one question: do you believe that most of the people in the group are better or worse than you at this particular skill? If possible, ask this question with respect to more than one skill.
After you have gathered the responses, you may be surprised to discover that a vast majority of the respondents, if not all of them, will claim that they belong in the upper half of the group in their abilities (meaning that each believes he or she is better than most of the group). In this situation, some members of the group must be exhibiting overconfidence. It is impossible by definition for a vast majority of the group to be in the top half.
The Second World War pilots in the story apparently also experienced overconfidence. Each pilot believed that his personal flying skills were sufficiently better than those of the others in the squadron to grant him a better than 75 percent chance of survival against enemy forces. Even though the experts had calculated that the maneuverability of the pilots was severely limited by the sheer weight of the bombs being carried and therefore the success or failure of each sortie was almost entirely determined by chance, not skill, each pilot had a gut feeling that “it won’t happen to me.” That is why they preferred the illusion of controlling their fate to yielding their fate to the outcome of a lottery over which they had no control at all.
Had the pilots gone on the bombing mission using their preferred method of sending all of them out to face the enemy, I believe the extent to which they were mistaken in overestimating their own skills, and how correct the experts were, would have been revealed. Luckily for all involved, the mission was canceled only hours before it was scheduled, after an alternative solution for refueling American bombers on the way to Japan was discovered.
An interesting study of overconfidence and its effects was conducted in 2000 by University of California researchers Terry Odean and Brad Barber.1 The researchers studied the actions of stock market investors over a period of several years, focusing on specific decisions along the lines of selling shares of stock A and using the proceeds to buy shares of stock B at the same price. An investor should rationally undertake such a transaction only if he or she predicts that the performance of stock B will outpace that of stock A. In fact, Odean and Barber’s data showed that on average each such transaction led to a loss of 3 percent. In other words, not only did investors not see an average profit in their stock portfolios by buying and selling, they actually lost money. Taking into account transaction fees and other overhead costs, the cumulative losses were even greater. The conclusion of the study was that overconfidence was leading to high trading levels and subsequent poor portfolio performances (hence “trading is hazardous to your wealth”). This is one reason many investment advisers recommend investing in index-linked funds instead of individual stocks, while avoiding investment managers who may be susceptible to overconfidence in their abilities to predict the future values of stocks.
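To get a feel for how quickly such losses compound, here is a rough illustration, not Odean and Barber’s actual methodology: take the study’s headline figure of a 3 percent average loss per swap and assume, purely for illustration, a dozen swaps.

```python
# A hedged compounding sketch. The 3 percent figure comes from the study
# as described above; the number of swaps is an illustrative assumption.

loss_per_swap = 0.03
number_of_swaps = 12

relative_value = (1 - loss_per_swap) ** number_of_swaps
print(f"After {number_of_swaps} swaps the churned portfolio is worth "
      f"{relative_value:.1%} of a buy-and-hold benchmark.")
# Roughly 69.4 percent, before the transaction fees that widen the gap.
```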
Several years ago, five professional investment managers participated in an investment competition for an Israeli newspaper. Each investor was given a large virtual sum of money that could be used for trading over a six-month period. In addition to the five human investors, a sixth competitor, whom the newspaper called “the monkey,” was involved. The monkey was actually a computer algorithm that randomly selected stocks for investment at the start of the competition, with that random stock portfolio subsequently held constant for the entire time period.
After half a year had passed, the returns of the portfolios of all the competitors were ranked from best to worst. The monkey came in second place, providing better returns than four human professional investment managers. This is undoubtedly an embarrassing outcome for those who consider themselves experts in the stock market, earning immense salaries for their selection of investment portfolios. The monkey’s success apparently was achieved largely by what it avoided doing—frequent buying and selling of shares.
We are not born overconfident; overconfidence is acquired by learning. When we make decisions under uncertainty, we need to estimate the chances of each possible outcome. For example, the probability we attach to a stock gaining or losing value affects our decision of whether or not to buy shares of it. Our estimate of the probability that it will rain tomorrow affects our decision of whether or not to take an umbrella, and our estimate of the risk of a major earthquake affects our decision of whether or not to buy earthquake insurance.
Over the course of our lives we receive indications that enable us to update the probabilities we ascribe to certain events, with each indication strengthening or weakening our belief that the event will occur. For example, suppose that there are two small urns in the next room, each containing one hundred coins. In one urn there are fifty gold coins mixed in with fifty copper coins, while in the other there are seventy-five gold coins and twenty-five copper coins. If someone in the other room chose one of these urns at random by a coin toss and brought it to you, and you were asked for the probability that you got the preferable urn (the one with seventy-five gold coins), you would probably correctly answer 50 percent.
Imagine, however, that you could sample the urn you were given by randomly pulling one coin out of it, taking a look at it, and placing it back. If you pulled out a gold coin, would you still stick to the 50 percent probability estimate that you received the better urn? Of course not. You have just gotten an indication (but only an indication, not a proof) that this is the better urn. You would update your belief accordingly.
Bayes’s Rule (named after the eighteenth-century mathematician Thomas Bayes) is a precise mathematical formula for conducting these sorts of probability updates after receiving new information. In this example, Bayes’s Rule determines that sampling a coin at random and seeing that it is a gold coin updates the probability that you received the better urn to 60 percent. If you then pull out another coin randomly and place it back, and discover that it is a gold coin too, that would trigger yet another probability update, increasing the estimate. But every time you pull out a copper coin is a bad indication, lowering the estimate that you’ve received the better urn. If the coins in the urns have been properly mixed, then after sufficiently many such samplings the probability estimate will be either very close to 100 percent or very close to 0 percent. In either case, you will be very close to certain knowledge of which urn you have before you.
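For readers who want to see the arithmetic behind that 60 percent: Bayes’s Rule gives (0.5 × 0.75) / (0.5 × 0.75 + 0.5 × 0.50) = 0.375 / 0.625 = 0.6. The short sketch below, with function and variable names of my own choosing, runs the same update; only the urn setup comes from the text.

```python
def update(prior_better, saw_gold):
    """Posterior probability of holding the better (75-gold) urn
    after sampling one coin and replacing it."""
    like_better = 0.75 if saw_gold else 0.25  # gold fraction in better urn
    like_other = 0.50                         # other urn is fifty-fifty either way
    numer = prior_better * like_better
    return numer / (numer + (1 - prior_better) * like_other)

p = 0.5                       # the coin-toss prior
p = update(p, saw_gold=True)  # first draw is a gold coin
print(round(p, 3))            # 0.6, as in the text
p = update(p, saw_gold=True)  # a second gold coin raises it further
print(round(p, 3))            # 0.692
```

Feeding the function a long random sequence of draws from either urn drives the estimate toward 100 percent or 0 percent, just as the text describes.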
The reader at this point may be asking what all this has to do with self-confidence. Here is the connection: we never know with certainty whether we are better or worse than average at a particular skill. From this perspective, our knowledge of our own abilities is similar to our knowledge of which urn we are drawing our coins from. We receive daily indications of our abilities, and these indications parallel the sampling of coins in the example. Every time we cook, for example, we get an indication of how good we are at cooking. If we burned the scrambled eggs we were preparing for our spouse this morning, we get a negative indication (similar to sampling a copper coin from the urn), and we should accordingly reduce our estimate that we are better than average at cooking. If we have guests over for a multiple-course meal that we have cooked, and the guests clean off their plates to the very last drop of sauce, we get a positive indication (parallel to sampling a gold coin). The same reasoning holds for our ability to take good photographs, choose good financial investments, or make strong social connections—in each skill in life we rate our abilities using the indications we get, with each indication leading us to update the probability that we are better or worse than average. We don’t, of course, normally make formal use of Bayes’s Rule in everyday life. We use our memories and intuitions. Each indication is stored in memory, changing our beliefs a bit. In many cases our intuitive updating is quite close to what Bayes’s Rule indicates we should do.
If you have been following so far, you must be asking yourself—so what’s wrong? Why are you accusing us of overconfidence? The answer is that we update our beliefs quite well as long as we are not assessing ourselves. When we come to assess ourselves, we start fudging the arithmetic in our favor without even noticing we are doing it. Bayes’s Rule demands giving strictly equal weights to positive and negative indications. But our cognitive and emotional systems refuse to do this. Consider again the example of self-assessment of cooking skills. Most of us give greater weight to our successes in cooking than to our failures, stressing the most successful meals we cooked and forgetting the ones we burned.
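One simple way to model this fudging is an updater who applies Bayes’s Rule faithfully to good news but discounts bad news. The sketch below extends the urn example in that spirit; the discount factor of 0.5 is my illustrative assumption, not a parameter estimated in any study cited here.

```python
import random
random.seed(1)

def update(prior, saw_gold, negative_weight=1.0):
    """Urn update as before, except that negative_weight < 1 shrinks the
    evidential force of bad news; 1.0 is honest Bayesian updating."""
    if saw_gold:
        like_better, like_other = 0.75, 0.50
    else:
        like_better = 0.25 ** negative_weight
        like_other = 0.50 ** negative_weight
    numer = prior * like_better
    return numer / (numer + (1 - prior) * like_other)

# Suppose we in fact hold the WORSE urn, so each draw is gold half the time.
honest = fudged = 0.5
for _ in range(50):
    gold = random.random() < 0.5
    honest = update(honest, gold)
    fudged = update(fudged, gold, negative_weight=0.5)

print(f"honest belief in the better urn: {honest:.3f}")  # typically near 0
print(f"fudged belief in the better urn: {fudged:.3f}")  # typically inflated
```

Even though this updater actually holds the worse urn, the discounted bad news typically lets its belief drift upward: a Bayesian caricature of the cook who remembers the successful meals and forgets the burned ones.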
Uri Gneezy, Muriel Niederle, and Aldo Rustichini documented this phenomenon in a very convincing laboratory experiment in which students were asked to solve relatively simple riddles while reassessing their abilities at doing so from one riddle to the next.2 The students systematically refused to take their failures into account as much as their successes, and as a result they tended to overestimate their abilities at solving riddles. In the experiment, in addition to rating their abilities, the students also bet on their chances of solving the next riddle. Their overconfidence caused them, on average, to lose money on those bets.
Researchers have only recently started to uncover why we overestimate ourselves, but it is reasonable to suppose that our emotions are particularly active in causing the phenomenon. Our reactions to our successes and failures are mainly emotional, accompanied by feelings of happiness or disappointment, and those emotional reactions may in turn shape how well we remember the indications we receive over the course of our lives. We have selective memories that stress the positive events in our lives and blur the negative ones.
Learning from past experiences is undoubtedly an important element in ensuring survival. Imagine what would have happened if prehistoric hunters had been unable to learn from their failures, arriving time and again at the same clearing in the forest only to watch the prey they were stalking slip away into the woods. Given that, why hasn’t evolution given us the ability to learn about our self-worth efficiently, protecting us from overconfidence?
The answer is that along with the damage it causes, our bias toward overconfidence also brings with it advantages, in fact several important advantages. First of all, self-confidence plays a role similar to the peacock’s tail in raising our “market value” in many social interactions, including the most important interaction from the evolutionary perspective—that relating to reproduction.
Overconfidence also gives individuals advantages in the competition over resources and territory, because expressions of self-confidence can intimidate rivals. Just as at equilibrium an individual’s emotional state can effectively impress others only if it is authentic, faked self-confidence is not as effective as the real thing. If you want to convince others that you have strong abilities, you had better really believe it yourself.
The third advantage of overconfidence is that it can encourage optimism, even a bit of overoptimism. Optimism sparks action, and action is good for survival, hence optimism is good for survival. Imagine, again, two prehistoric hunters, one somewhat optimistic, the other somewhat pessimistic. The optimistic hunter wakes up in the morning and eagerly grabs his hunting implements, believing that this is the day on which he will bring down the fattest buffalo on the plains. The pessimist, in contrast, curls up deeper under his deerskin blanket in his cave instead of getting up in the morning, while mumbling about what an idiot his optimistic mate is: “Doesn’t that misguided optimist realize that he can hop from one hill to another all day long, waving his sharpened spear, and still come home at sundown with nothing to show for all of his efforts?” Guess which one of these two hunters has better chances of bringing home a buffalo.
An extensive psychiatric study published in 1989 compared probabilistic assessments expressed by psychologically healthy people with those expressed by people suffering from clinical depression.3 Individuals in both categories were asked to assess the chances that they would experience negative events such as falling ill, being injured in an accident, or losing their jobs. In addition, they were asked to assess the chances that they would experience positive events such as finding a spouse or winning money in the lottery.
When the researchers compared the answers given by both groups with the true objective probabilities of each event, they found that the clinically depressed, who were also quite pessimistic, were much more accurate than their healthy counterparts in assessing the probabilities of both positive and negative events. Depression, it turns out, makes you much more realistic. Nevertheless, it is difficult to conclude from this that depressive realism confers great survival advantages. Quite the opposite. It is the nonrealistic illusions that healthier individuals live with that make daily living easier and give them better chances at survival—assuming that the rosy illusions do not stray too far from reality. An overdose of self-confidence can be fatal.
Muriel Niederle and Lise Vesterlund conducted another interesting study relating to self-confidence, this time comparing men and women.4 Contrary to popular opinion, men do not have greater self-confidence than women; both sexes share equally in the bias toward overconfidence. But a significant difference was noted in the ways that men and women update their self-assessments after receiving indications. Men, in general, are better at updating the probabilities regarding their own abilities: they give sufficient weight to both positive and negative indications and more readily change their initial assessments. Women, by contrast, are more stable in their self-assessments (whether those are low or high); successes and failures have less of an effect on them.
It is possible that these sex-related differences have an evolutionary basis, but even if that evolutionary basis is marginal, social interactions tend to amplify it. In the market for mates, each individual has an interest in broadcasting the characteristics most closely associated with his or her own sex (that is, women wish to stress their femininity while men show off their masculinity). Sexual attraction works in most human beings quite similarly to the way it works in other highly developed animals—each individual seeks a mate who displays as many of the characteristic traits of the opposite sex as possible.
There is a story about the legendary investor Warren Buffett receiving a telephone call from his wife one day while he was driving on Route 1 near Boston. “Warren, drive carefully,” said his wife. “I just heard on the radio that there is some idiot driving against the flow of traffic on Route 1.”
“My dear,” replied Buffett, “I wish it were only one idiot. I see dozens of cars doing so!”
In this joke, Buffett exhibited more than supreme overconfidence—he also showed off his nonconformism and his refusal to bow to social conventions. As we will see in the next chapter, however, despite our tendency toward overconfidence, the truth is that we usually behave in very conformist ways.