When I was finishing my PhD, I had to wonder whether the five years of hard work for pauper’s wages were worth it. Was I employable? “You’ll get a job,” my mentor and doctoral advisor, Max Bazerman, assured me.
“What if I don’t?” I pressed anxiously.
Max’s response was to offer me an insurance policy. “Listen,” he said, “if you don’t get a job, I promise to pay you, from my own pocket, next year’s prevailing starting salary for assistant professors: $90,000.” But this insurance would come at a price. I would have to pay Max $5,000. Was it a good deal?
Let’s assume for the moment that Max had the money and that I wouldn’t feel bad taking it from him. Assume further that taking Max up on his offer would not affect my subsequent employability. That is, my salary and career prospects would be the same after a year on Max’s gravy train as they would after a year of employment as an assistant professor.
If I bought Max’s insurance, I could count on $90,000 minus the $5,000 insurance premium, so $85,000. On the other hand, if I declined Max’s insurance policy and didn’t get a job, I’d be out of luck: $0. If, however, I did get a job, I’d get to keep the full $90,000. So how much was Max’s insurance policy worth? That depended on the probability that I would get a job. If I stood only a 50 percent chance of getting a job, then multiplying the $90,000 salary by 50 percent yielded an expected value of $45,000. The $85,000 I could get with Max’s insurance would be much better than that.
How bad would my chances have to get to make Max’s insurance policy a good deal? To estimate the probability at which the cost of Max’s insurance exceeded its value, I divided $5,000 by $90,000. That’s 5.6 percent. If my chance of being unemployed was greater than 5.6 percent, I reasoned that I would be better off buying Max’s insurance.
This is the logic of expected value. To assess the present value of an uncertain prospect (like an insurance policy), multiply its probability by its value. When I teach this lesson in class, I do not offer my students employment guarantees. Instead, I offer them a chance to bid on a coin flip: If they correctly anticipate how the flip will come out, I pay $20. Otherwise, they get nothing. One person in class gets to play. When I open up the bidding, it quickly goes to $10 and then stops. That makes perfect sense because a 50 percent chance at $20 has an expected value of $10.
Using the logic of expected value, the insurance policy Max offered was a good deal only if the probability of my getting a job that paid $90,000 was less than 94.4 percent. I thought about it seriously and decided to say no to his insurance policy. “See?” Max responded with satisfaction. “You also think you’re going to get a job.”
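For readers who want to see the arithmetic laid out, here is a minimal sketch in Python using the numbers from Max’s offer; the probability you plug in is whatever you honestly believe about your own prospects:

```python
# Expected value of Max's insurance offer, using the numbers from the story.
SALARY = 90_000   # next year's prevailing assistant-professor salary
PREMIUM = 5_000   # the price Max asked for the guarantee

def ev_without_insurance(p_job: float) -> float:
    """Expected income with no insurance: the salary, weighted by the chance of a job."""
    return p_job * SALARY

def ev_with_insurance() -> float:
    """With insurance, I net $85,000 whether or not I get a job."""
    return SALARY - PREMIUM

# Insurance pays off when p_job * SALARY < SALARY - PREMIUM, i.e., when the
# chance of getting a job drops below (SALARY - PREMIUM) / SALARY = 94.4%.
break_even_unemployment = PREMIUM / SALARY  # about 5.6%

print(f"EV with insurance:        ${ev_with_insurance():,}")
print(f"EV without (p_job = 0.5): ${ev_without_insurance(0.5):,.0f}")
print(f"Break-even unemployment probability: {break_even_unemployment:.1%}")
```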
WISHFUL THINKING
In hindsight, I believe I was right to reject Max’s insurance. But it is easy to make mistakes calculating expected value. One of the most common is to let desirability influence your assessment of probability. This can sometimes happen as a matter of wishful thinking: the fact that you want something to happen affects your estimate of its likelihood. Sports fans, for example, routinely overestimate the chances that their favorite team will win. Partisan political pollsters routinely overestimate their favored candidates’ chances. And corporate leaders sometimes let wishful thinking bias their beliefs, as happened to Jerry Yang in 2008.
Jerry Yang’s family had emigrated from Taiwan to California when Yang was ten years old. At the time, he knew only one word of English: shoe. It was not particularly useful. “We got made fun of a lot at first,” Yang recalls. He quickly overcame this early handicap. Yang graduated first in his class in high school in San Jose while also playing on the tennis team and serving as student body president. He completed his bachelor’s and master’s degrees in electrical engineering at Stanford in four short years, while working part time.
In 1995, Yang was pursuing a PhD when he and his classmate David Filo grew frustrated that the massive amounts of information becoming available online lacked any useful organization. In response, they created “Jerry and David’s Guide to the World Wide Web.” Their service was so popular that Yang dropped out of the PhD program to run the company that he and Filo created. Fortunately, they realized the company needed a shorter name, and they dubbed it Yahoo! From the start, Yang was one of the company’s most enthusiastic champions. In the words of Stanford president John Hennessy, an adviser to Yang, “He’s everything from technical visionary to chief strategist to corporate spokesman and cheerleader to Washington lobbyist to the company’s conscience.” Yang took the title “Chief Yahoo” and told employees, “All of you know that I have always, and will always bleed purple” (Yahoo!’s signature color).
It was Yang who decided to reject an offer from Microsoft to buy Yahoo! for $44.6 billion in 2008. At the time, it was already clear to many that Google spelled the end for Yahoo!’s internet search business. But Yang insisted, “Yahoo is positioned for accelerated financial growth. We have a powerful consumer brand, a huge global audience, and a highly profitable operating model.” In retrospect, it seems possible that Yang’s rosy assessment might have been biased by wishful thinking. Yahoo!’s stock fell after it rejected Microsoft’s offer, and Yang was ousted as CEO by Yahoo!’s board later that year. In 2016, Verizon bought Yahoo!’s core businesses for $4.83 billion, roughly one-tenth of what Microsoft had offered eight years earlier.
Sometimes people intentionally overestimate the probability of positive outcomes. This overestimation often reflects an attempt to be optimistic. For a research paper entitled “Prescribed Optimism: Is It Right to Be Wrong about the Future?” David Armor, Cade Massey, and Aaron Sackett asked research volunteers whether they would prefer to be accurate in their beliefs about the future or whether it is better to be optimistic, even when optimism means exaggerating the probability of positive outcomes. Optimism won in a landslide. One reason is the faith that optimism increases the chances of positive outcomes. For example, volunteers recommended that a patient undergoing physical therapy should be confident about her ultimate prospects for recovery because it would make that recovery more likely.
The book The Secret takes this view to an extreme verging on the ridiculous. The author, Rhonda Byrne, advises readers to believe they already have the things they desire: “Your belief that you have it, that undying faith, is your greatest power. When you believe you are receiving, get ready, and watch the magic begin!” She tells the story of her mother and a house she wanted: “My mother decided to use The Secret to make that house hers. She sat down and wrote her name and the new address of the house over and over. She continued doing this until it felt as though it was her address. She then imagined placing all of her furniture in that new house. Within hours of doing these things, she received a phone call saying her offer had been accepted.”
Before we dismiss Byrne’s “secret” as delusional fantasy, it is worth appreciating the evidentiary basis on which these beliefs may be built. All around us, we see that confidence precedes success. Confident political candidates are more likely to win. Cancer patients who are most confident and optimistic about their survival chances actually live longer. Confident entrepreneurs are more likely to secure backing from investors. Confident athletes are more likely to prevail. In these instances, as in innumerable others, the belief that you will succeed is inextricably tied to success. It is a durable correlation that we observe again and again.
But correlation is not causation. Just because two things are associated with one another does not mean one causes the other. The world is full of correlations that are not causal. Spring’s flowers reliably predict summer’s heat, but they do not cause it. Young people have more acne than do old people, but acne is not a cause of youth. Wealthy companies find themselves as defendants in more lawsuits, but those suits are not a cause of their profitability. And just because confidence is correlated with success does not mean that confidence causes success. It is entirely possible, even likely, that there is a third variable that accounts for the relationship. In the case of confidence and success, the oft-neglected third variable is competence.
Of course, truly competent people will usually be more confident. They have good reason to be. If you know that you can win the game, leap the chasm, or fly the plane, then you may proceed with confidence. You can do it. You have nothing to fear. Someone observing you in that moment may be tempted to conclude that your confidence has led to your success. Your confidence is evident for all to see. Invisible are the many hours you have invested in training and practice. If you have never before flown a plane, then you ought to lack the confidence to take the controls. It is when you lack competence that your confidence will be shakiest.
That is not to say that confidence and competence always move together. Sometimes they do, and then the combination of competence and confidence implies well-calibrated excellence. In the words of the boxer Muhammad Ali: “It’s not bragging if you can back it up.” Competence without confidence can also lead to success, provided one has the gumption to step up and make the effort despite uncertainty. But confidence without competence puts you in the danger zone, on the high wire without a net. Confidence invites you to dare and to take risks that, without underlying skill, are unlikely to work out well. Con men will attempt to impersonate trained professionals, but their indomitable confidence does not make them expert pilots or surgeons. While the notorious con man Frank Abagnale was impersonating a doctor, his confidence nearly proved fatal to a baby who was handed to him for treatment.
Is it possible for our beliefs about the future to be both accurate and optimistic? Many people believe that it is. In a study I conducted with Elizabeth Tenney and Jennifer Logg, we replicated the results obtained by David Armor and his colleagues: more than 80 percent of our research volunteers told us they believe that it is possible to be both optimistic and accurate. When we asked why, they told us that optimism causes success.
We attempted to test their optimistic beliefs. We wanted to give optimism its best chance, so we asked people what sorts of tasks they thought would most benefit from optimism. They told us that optimism is important on tests that require effort, including math tests. So we asked a different group of volunteers, whom we called the “predictors,” exactly how much they thought optimism would affect the performance of another group, the “test takers,” on a ten-question math test. We showed them all ten questions on the test so they knew exactly what was on it. We then led about half of those in the test-taker group to be optimistic, telling them they should expect that they would get 70 percent of the questions right. We told the other half of participants to expect they would get only 30 percent of questions right.
We invited the predictors to bet on the outcome and told them we would pay them more if their predictions were accurate. The predictors bet that the group led to feel optimistic would outscore the pessimists. They were wrong. In fact, there was no significant difference between the scores of test takers with optimistic and pessimistic expectations. In our study, optimism about the potential benefits of optimism was unwarranted.
We worried that the math test might not have given optimists the best opportunity to shine, despite the fact that people had told us they expected optimism to matter there. So we did the study again using another task. Optimism did not turn out to affect performance on a trivia test either. Likewise, it did not affect performance in tasks requiring physical persistence, athletic effort, or mental vigilance. It did not even help people find Waldo in a Where’s Waldo? challenge, but it did lead them to persist a little longer in looking for him. Volunteers in our studies were always eager to bet on the optimists. Yet none of the studies was able to document an effect of optimism on actual performance.
These results may come as a surprise to you, as they did to the predictors in our study. Maybe you feel skeptical, because you are thinking of all the times that optimism seemed to go along with better performance, and pessimism with poor performance. Again, the two are strongly correlated. But correlation does not mean that confidence causes better performance. Our studies were designed to understand the causal role of optimistic beliefs; that is, to answer the question, “How does optimism affect performance, separate from actual ability?” Answering this question requires experiments that manipulate optimism and observe its effect (or lack thereof).
What was crucial about my research with Tenney and Logg was that we randomly assigned some people to be optimistic and some to be pessimistic. Life seldom provides that sort of well-controlled experimental evidence, the kind necessary to systematically test the effectiveness of optimism. Consequently, common sense and daily experience are of limited value for assessing the relationship: at any given moment, we find ourselves in just one state, optimistic or pessimistic. Lacking the data we would need to accurately assess the effects of our optimistic expectations on performance, we are left to imagine how much worse things would have been had we been less optimistic, without ever knowing how much of a difference it actually made. The evidence from our research suggests that it often makes little difference for performance, and certainly less than most of us think.
WHAT DO YOU HAVE TO FEAR?
This chapter explores the logic of expected value and the benefits of well-calibrated numbers to feed into those calculations. Thus far, I have dwelt on biases due to wishful thinking. But people are not always overly optimistic. The opposite of wishful thinking occurs when our fears exaggerate the subjective probability of some undesirable possibility. I, like many of my classmates in the PhD program, was terrified of being unemployed after graduation. That fear led me to obsess about what would happen, and to dwell on the humiliation of being ignored or neglected by institutions I so fervently wished would hire me. The bet Max offered invited me to put my money where my mouth was and forced me to think through my honest assessment of the probabilities.
Another example of exaggerated pessimism is stoked by terrorism. After the September 11 terrorist attacks, Americans estimated their chances of being injured or killed in a terrorist attack at 20 percent. Another poll, taken shortly after 9/11, found that fully 58 percent of Americans feared that they or their families would become victims of terrorism. These fears are outrageously exaggerated; even in 2001, the year of the attacks, fewer than 0.01 percent of Americans could be counted as victims of terrorism. Yet the inflated fear of terrorism persists, thanks in part to radical groups like ISIS that do what they can to publicize their appalling acts of violence. Even now, the number of people who fear for themselves and their families remains around 45 percent.
But the threat represented by ISIS was always far smaller than the danger posed by an overreaction to ISIS by the United States. Terrorists armed with knives, guns, trucks, bombs, or planes can inflict loss of life, but these tragedies do not come close to representing any sort of existential threat to the United States. Their risks are minuscule compared with the consequences of our nation’s actions. The American invasion of Iraq following the September 11 attacks, for example, has resulted in greater loss of life than the original attacks, including the deaths of over four thousand US military personnel and hundreds of thousands of Iraqis. Over thirty thousand Americans have suffered grievous battle injuries in Iraq, and the military operation has cost more than $2 trillion.
US president Franklin Roosevelt was thinking of another threat when he warned the nation, “The only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror.” But his admonition still holds today. Our fear of terrorism has driven us to exaggerate its risks. This fear has led us to overreact, with grave costs to our nation and the world. This is not to belittle the potential risks of terrorist attacks. They are real. But the probability of any one person becoming a victim of terrorism is minuscule, and that probability ought to figure into how we react to the risk.
Likewise, it would be terrible to be infected with the Ebola virus, but I do not spend much time worrying about it, because the chance is so small. I worry more about cancer, heart disease, and diabetes, since their probabilities are far higher. On average, something like 60 percent of men will develop heart disease late in life; a healthy lifestyle can cut that risk by as much as half. Because of the higher probabilities associated with these risks, investments in doing something about them have a higher expected return.
When I was three years old, I asked my father to open the window so that the birdies could come in and sing to me. He refused: “No, no. They’re going to come in and poo-poo on you.” He lived his whole life beset by fears about what would get him in the end. We had a defibrillator in the house, and the trunk of our car was so full of disaster-preparedness equipment that we could not fit anything else in it. Indeed, as my sister noted in her eulogy at his memorial service, my father met his cancer diagnosis with a measure of relief. Finally, he knew what was going to get him. My father’s penchant for pessimistic rumination rivaled that of people who make their living doing much more dangerous work than he did.
Firefighters, soldiers, and police officers have more reason to cultivate pessimism. Their very survival depends on well-calibrated confidence. They know the risk that overconfidence poses for their lives and livelihoods. So they scrupulously practice the safety procedures with the equipment that protects them. In their own attempt to resist complacency, those who do dangerous work like coal mining or flying airplanes tend to cultivate a pessimistic sense of what could go wrong, understanding that optimism can put them at risk. It is easy to imagine that people who do dangerous things like big-wave surfing or rock climbing must be overconfident thrill seekers. But those who succeed in these endeavors have a keen sense of their vulnerabilities and their limitations. They worry about overconfidence. In the words of the big-wave surfer Brett Lickle, “As soon as you think, I’ve got this place wired. I’m the man! you’re about thirty minutes away from being pinned on the bottom for the beating of your life.”
When is it wise for people to cultivate a sense of foreboding? When they face real risk. People who live in Northern California should do more to protect themselves from earthquakes than people in Illinois. When their lives are at stake, it makes sense for people to take risks seriously. Where should we land between underconfidence and overconfidence? The best advice is to make realistic estimates of the probability and the consequence of potential disasters. When I was on the academic job market, I spent a lot of time thinking about how things could go wrong. That motivated me to plan ahead and do what I could to reduce the chances of these risks. But I also chose to decline Max’s insurance when he offered it, because I thought its price was too high relative to its expected value.
THE ENTREPRENEUR’S DILEMMA
My recommendation to hold rational and well-calibrated beliefs meets what might be its stiffest challenge from entrepreneurs. The San Francisco Bay Area in which I live is a hotbed of entrepreneurial activity, and many of my students at UC Berkeley have ideas for starting new businesses. Some seem to believe the advice offered in a 2014 article in Entrepreneur magazine: “If you want to be a successful entrepreneur, you have to bleed confidence.” This admonition fits comfortably with the mountains of advice encouraging would-be entrepreneurs to bolster their confidence.
It is true that entrepreneurs are an exceedingly confident lot. One study of nearly three thousand entrepreneurs found that 81 percent of them rated their chances of success at least 7 out of 10, and fully one-third of them rated their chances at 10 out of 10. Some of this is attributable to self-selection: only those most confident about their prospects choose to found new businesses. But there may also be another reason why entrepreneurs want to display confidence about the future of their business. As the journalist James Surowiecki put it, “Successful entrepreneurship involves hucksterism, the ability to convince investors and employees that they should risk their money, their time, and their effort on you. Like a con artist, you’re peddling optimism. . . . Of course the fundamental difference between entrepreneurs and con artists is that con artists ultimately know that the fantasies they’re selling are lies.”
Is delusion any better for you when it’s self-delusion? There are good reasons to think it’s likely to be worse. Fooling yourself about the chances that your startup will succeed poses real risks. If you have convinced yourself that your chances of success are 10 out of 10, then it might make sense to invest everything you can in giving your venture its best shot. In addition to toiling long hours at the expense of your family and your health, you should cash out your retirement plan, max out your credit cards, and take out a second mortgage on your house. Moreover, you should convince your friends and family members to do the same and loan you the money because it will prove such a lucrative investment.
This sort of optimism has got to qualify as overconfidence much of the time, given the high rates of entrepreneurial failure. Studies suggest that nearly 80 percent of new businesses are out of business within five years. Even so, the risks of entrepreneurship can still be worth it if the potential upside is big enough. In other words, life is so great if you turn out to be Jeff Bezos or Bill Gates that it’s worth the high probability of failure. However, this claim is undermined by analyses that suggest, even taking all this into account, that the average entrepreneurial venture has a negative expected value. The probability of hitting it big is just so small. Most potential entrepreneurs would be better off keeping their steady jobs and investing their money in index funds.
This is not to say that I think there should be less entrepreneurship. On the contrary, the dynamism of American entrepreneurship contributes to the vibrancy and growth of our economy. My home state owes much of its prosperity to the courage of entrepreneurs through history, from gold to technology. Yet although it is great for our country and our economy that there are so many eager entrepreneurs, it does not follow that founding your own company is a good career move. To draw an analogy, starting a new company in the hopes that you might become wealthy is a bit like buying a lottery ticket. Lotteries, like entrepreneurship, can have positive economic side effects. In some places, revenues from state lottery programs fund schools or other worthy programs. But that does not mean I recommend to my students that they should buy lottery tickets because it’s good for the elementary schools. Lottery tickets are still bad bets.
Imagine a set of one hundred potential entrepreneurs. Each one has to choose between keeping a steady job and entering a new market. Potential entrepreneurs should enter only when the expected value of entry is higher than the wages paid by their steady jobs. Let’s say that only one will strike it rich and will earn ten times what the steady job would have paid. If potential entrants cannot tell which among them will be more likely to strike it rich, then ten should enter. One will earn the big prize and the other nine will rue their bad luck. These nine may feel sorry for their misfortune, but their decisions to enter were justified by their expected value.
This analysis changes if some of the potential entrepreneurs delude themselves into thinking that their chances of victory are higher. Suppose that instead of believing they each have an equal chance at the big prize, they all fool themselves into thinking that their chance is twice as good as the other entrants’. Then twice as many of these optimistic strivers will enter, and the expected value of entry will fall to half the value of keeping the steady job. So fooling yourself doesn’t sound like such a great strategy. What should you do if you know that the other potential entrants are overconfident? You should probably stay out of the competition. Matching their level of delusion would be a mistake.
If there is natural variation in people’s optimism regarding their entrepreneurial prospects, then those who are most optimistic will be those most likely to enter. They will enter at higher rates than the realists, and they will drive down the expected value of entry. And just because the successful entrepreneur is one of the optimistic strivers, it does not follow that being more confident is a wise strategy for promoting your own success. Yes, your probability of striking it rich might go up, but the expected value will go down. It’s a bit like buying lottery tickets: buying more lottery tickets does increase your chance to win the lottery, but each ticket costs you more than its expected value. The more lottery tickets you buy, the poorer you should expect to be.
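For the numerically inclined, the entry game above reduces to a few lines; the figures (a single prize worth ten steady salaries, shared in expectation among however many enter) are the same ones from the example:

```python
# The market-entry example in numbers: one prize worth ten steady salaries,
# with each entrant holding an equal shot at winning it.
WAGE = 1.0           # normalize the steady job's pay to 1
PRIZE = 10 * WAGE    # the lone winner earns ten times the steady wage

def ev_of_entry(entrants: int) -> float:
    """Each entrant's expected value: an equal 1/n chance at the prize."""
    return PRIZE / entrants

print(ev_of_entry(10))  # 1.0 -> with ten entrants, entry just matches the steady wage
print(ev_of_entry(20))  # 0.5 -> if everyone believes their odds are double,
                        #        twenty enter and EV falls to half the wage
```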
REALISTIC GOALS
One instance in which I have often seen people’s poorly calibrated confidence judgments get them into trouble is in the context of negotiation. I have taught negotiation classes to business students and working executives around the world, including Tony Robbins and his platinum partners. The single most important concept I teach in my negotiation classes is the BATNA: Best Alternative To a Negotiated Agreement. Your BATNA is what you get if you walk away from the negotiating table. Your BATNA defines how demanding you can be and when you should walk away. If you have a great alternative to this deal, you can hold out for a lot, knowing that if your negotiating counterparts don’t give you what you want, you can walk away. If your BATNA is terrible, then you have much less leverage.
Understanding this concept is crucial for negotiation planning. Before you walk into the meeting room, you should have a sense of how good your BATNA is and therefore when you might choose not to make a deal. As a part of my classes on the topic, I ask my students to prepare for negotiation by writing down what they think their BATNA is and what that implies about an offer so bad it would make them walk away. What I see over and over is students planning a walk-away price based on wishful thinking. It is common, for instance, for students to insist that they will hold out for what they believe is fair or what they deserve, even when that is far better than their BATNA.
Graduating MBA students, for example, have a strong sense of what a fair salary is, given what their classmates are getting. I encourage them to make these arguments forcefully at the bargaining table, explaining why their skills are at least as valuable as those of classmates who are getting paid more. However, when deciding whether to accept a particular job offer, the decision should depend not so much on what they think they deserve but on what other job offers they have (or are likely to get). Having the confidence to hold out for a $160,000 starting salary is just stupid if there are no such offers coming along. Appropriately calibrating your confidence in negotiation depends on having a sense of what you’re worth to the other side. What value do you offer to potential employers, and what sorts of salary offers can you expect? What does that mean for your BATNA?
I see a related error in the way organizations set goals and targets. Knowing that goals can focus attention, increase effort, and produce results, most companies set regular performance targets and provide rewards for their attainment. Insufficiently ambitious goals can depress performance, so managers instead err on the side of setting “stretch” goals. How far should they stretch? Well, if performance is correlated with the ambitiousness of the goal (the logic goes), then more ambitious goals are better. Taking this reasoning to its logical conclusion implies infinitely ambitious goals, with the obvious problem that unattainable goals undermine motivation. Workers who know they will earn a bonus only if they pick a thousand bushels of strawberries in a day or sell a hundred cars in a month are unlikely to be motivated by these goals. They are simply unattainable. Even Elon Musk avoids setting unattainable goals.
Unattainable goals undermine the motivation to actually achieve them, but they may motivate other, less desirable actions. For example, in an effort to address the growing problem of air pollution, the Chinese government set pollution-reduction targets and rewarded their attainment. Attainable goals led to measurable reductions in air pollution, with commensurate benefits for air quality and health. But in regions with more ambitious “stretch” goals for pollution reduction, official reports deviated from the objectively measured reality. In other words, local officials started lying about their pollution measurements. This highlights one of the ethical risks of overconfident ambition: it increases the temptation to engage in cheating and deception.
TAKING THE RIGHT RISKS
In 1963, the Nobel Prize–winning economist Paul Samuelson was at lunch with a colleague when he offered the following bet: If the colleague won a coin toss, Samuelson would pay him $200. If not, the colleague had to pay Samuelson $100. Would he take the bet? It is easy to see that this bet has positive expected value, since 50% × $200 − 50% × $100 = $50. Nevertheless, the colleague declined. He explained that he would feel the pain of the $100 loss more keenly than the pleasure of the $200 gain, giving the bet a negative expected utility. He was distinguishing between expected value in monetary terms and expected utility, which extends the calculation to account for subjective feelings, not just dollars.
The logic behind the sentiment of Samuelson’s colleague was formalized by Kahneman and Tversky’s prospect theory, mentioned in chapter 3. One tenet of prospect theory is that losses loom larger than gains. In other words, a loss of a given size is roughly twice as painful as a gain of the same size is pleasurable. Finding $20 on the street is delightful, but it does not improve your well-being as much as losing $20 undermines it. This asymmetry has many important implications, one of which is that we are often reluctant to take risks that entail a possibility of loss, even when they have a positive expected value.
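To see how loss aversion can flip the sign of Samuelson’s bet, here is a rough sketch; the multiplier of about 2 comes from the “twice as painful” rule of thumb above, and the precise value varies from person to person:

```python
# Samuelson's bet: a 50% chance of winning $200 against a 50% chance of losing $100.
def expected_value(win: float, loss: float, p_win: float = 0.5) -> float:
    return p_win * win - (1 - p_win) * loss

def expected_utility(win: float, loss: float, p_win: float = 0.5,
                     loss_aversion: float = 2.0) -> float:
    """Prospect-theory-style utility: each dollar lost hurts loss_aversion
    times as much as a dollar gained pleases."""
    return p_win * win - (1 - p_win) * loss_aversion * loss

print(expected_value(200, 100))                       #  50.0: positive in dollars
print(expected_utility(200, 100))                     #   0.0: a wash at exactly 2x
print(expected_utility(200, 100, loss_aversion=2.5))  # -25.0: negative once losses
                                                      #        loom a bit larger
```

At a multiplier of exactly 2, the bet is a wash; anyone even slightly more loss-averse should decline it, just as Samuelson’s colleague did.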
Maybe you can relate to Samuelson’s colleague and agree that you would decline the bet. But this decision has a problematic implication. If you react this way to every risky opportunity that comes your way, you will wind up behaving in exceedingly risk-averse ways. Every day, you face versions of Samuelson’s bet. When you buy an apple, it could turn out to have a worm in it. Trying a new restaurant might be great, but there is also the risk of a disappointing meal. Taking a new colleague to lunch could be enjoyable but could get weird if he starts offering you bets on coin flips. Avoiding all apples, new restaurants, and lunches with new colleagues because they entail the possibility of loss will leave you worse off.
After declining the bet, Paul Samuelson’s colleague made a curious counteroffer. He said that although he would not take Samuelson’s bet once, he would gladly take one hundred such bets. Across a hundred repetitions, the probability of an overall loss is vanishingly small: less than 1 percent. Considered as a bundle like this, what seemed like a risky prospect looks almost like a sure thing. Offered a gamble with a 99 percent probability of winning, most people would be inclined to take it. The problem is that life presents us with risky prospects one at a time. Each day we face a hundred separate small gambles, on everything from restaurants to apples.
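That “less than 1 percent” figure is easy to check: over a hundred plays you lose money overall only if you win 33 or fewer of the coin flips, and the binomial arithmetic makes that outcome extraordinarily rare. A quick sketch:

```python
from math import comb

# One hundred of Samuelson's bets: each pays +$200 on a win, -$100 on a loss.
# Total payoff after k wins is 200*k - 100*(100 - k) = 300*k - 10_000,
# which is negative only when k <= 33.
p_overall_loss = sum(comb(100, k) for k in range(34)) / 2**100
print(f"Chance of losing money over 100 bets: {p_overall_loss:.4%}")
# roughly 0.05 percent: comfortably under 1 percent
```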
Here it is worth thinking of our behavior as implying a policy about how to behave in such situations. If you had lunch with Paul Samuelson every day and every day he offered you his bet, you should make a policy of accepting it. If you check your investment accounts every day, there will be many days on which the value of your portfolio has dropped. Those losses will hurt, and you might be tempted to sell. But you also know that there will be market fluctuations, that it is difficult to identify peaks and troughs that allow you to time the market, and that market returns are likely to be positive over the long term. The rational response to this series of risky daily bets with positive expected value is to make a policy of taking them. That is, let your money ride so that you can be sure you are fully invested on the days when the market goes up.
It is often tempting to violate your own policies. Even though you usually hold yourself to one dessert, the options on this particular dessert buffet look exceptionally enticing. Even though you usually have no more than two drinks at a sitting, you may be tempted to have more when the atmosphere is particularly convivial and the cocktails especially tasty. Even though you plan to go to bed by eleven, the show you are watching is especially engrossing. In such circumstances, it is worth considering what you would want yourself to do, faced with a hundred such choices. It’s okay to decide that this really isn’t like other desserts, and this time it really is worth indulging. But don’t fool yourself into thinking “this time is different” if it isn’t. Don’t pretend “I’ll be virtuous from now on” if you will succumb to the same temptation again tomorrow.
Here I must warn you of one of the most dangerous pitfalls in computing expected values: confusing utility with probability. Consequential outcomes sometimes loom larger than they should. Because aviation accidents seem so scary, they loom large enough to induce real terror. By one estimate, a pathological fear of flying afflicts as much as 6 percent of all people. While going down in a fiery plane crash would be unfortunate, it is also fantastically unlikely. Airplane travel is, per mile traveled, among the safest modes of transportation. It is a mistake to inflate the risk of an airplane accident just because the prospect is scary. To pick a more positive example, your probability of winning the lottery does not go up just because the prize gets more alluring. Indeed, as the Powerball jackpot grows, your expected payout may even shrink, because the bigger prize entices more people to buy tickets and raises the chance that you would have to split the jackpot. You do a better job computing expected value when you clearly distinguish probability from utility, estimating each as accurately as you can and only then combining them.
KEEPING SCORE
Applying the logic of expected value requires that you specify both the value of an outcome and its probability. When Max invited me to bet on my job prospects, he specified the dollar outcomes and challenged me to reflect on my subjective probabilities to determine the bet’s expected value. This turns out to be a worthy endeavor. Keeping track and writing down expected value calculations for your important decisions offers three clear benefits.
First, writing down your expected value calculations helps you learn over time. You might be concerned that your expected value calculations are imperfect. That’s okay; most are. But imperfect estimates are better than nothing. Having committed yourself to a specific prediction allows you to go back and score yourself. Get better at keeping track and keeping score. Forcing forecasters in the Good Judgment Project to commit to specific probability forecasts and then providing them with feedback on their accuracy was one of the most important things we did to help them get better. If you don’t write down your probability estimates ahead of time, it is too easy to fall victim to the hindsight bias and believe afterward that the outcome was inevitable. The hindsight bias fools the Monday-morning quarterback—the football fan who, on Monday after having watched his team lose Sunday’s big game, grouses, “How could they have run that dumb play? I knew it was going to turn out that way.”
Now, the problem that you will notice right away with tracking probabilities and outcomes is that while probability forecasts are continuous (ranging from 0 percent to 100 percent), the actual outcomes are much lumpier; things happen or they don’t. The meteorologist forecasts a probability of rain, but either it rains or it doesn’t. Nevertheless, it is possible to score probability estimates in such a way as to reward good calibration. Moreover, when you have a lot of data, you can average across outcomes to compare average hit rates with average predicted probabilities. The very exercise of looking for comparable events helpfully forces you to think about analogies between your situation and others like it. This gives you a better sense of the underlying probabilities, what could happen, and what you should expect.
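One standard tool for this kind of scoring is the Brier score: the average squared gap between your stated probabilities and the outcomes that actually occurred. Lower is better, and honest, well-calibrated forecasts earn the best long-run scores. A minimal sketch, with made-up forecasts purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared gap between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative numbers only: four probability forecasts and whether each event occurred.
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes  = [1,   1,   0,   0]
print(brier_score(forecasts, outcomes))  # 0.125

# With enough forecasts, you can also check calibration directly: among all the
# times you said "70 percent," did the event in fact happen about 70% of the time?
```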
The second benefit of documenting your expected value calculations is that it helps protect you from the capricious winds of chance. Sometimes you will make a good bet that turns out unlucky. This is always a risk with innovative products. Consider, for example, Apple’s bet on the Newton. Newton was a personal digital assistant—a handheld computer—that Apple released in 1993. Apple had invested about six years and $100 million developing the Newton ahead of its release. Let’s just say it was not a commercial success. After being widely lampooned as an overpriced digital notepad, the Newton was discontinued in 1997.
The Newton was ahead of its time. It was a risky bet, but one that could have paid off. Indeed, a few years later Apple released the iPhone, which was based on some of the innovations found in the Newton. The iPhone wound up succeeding where the Newton had failed. As of 2018, Apple had sold something like 1.4 billion iPhones. If each one produced $150 in profit for Apple, that’s over $200 billion in profits. If we imagine that, at the time of its release, the Newton had even a 5 percent chance of making the company $200 billion, then its $100 million price tag would have been well worth it.
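The arithmetic behind that judgment is worth spelling out. All the figures below are the rough estimates from the paragraph above, not Apple’s actual accounting:

```python
# Rough numbers from the text, spelled out.
iphone_profit = 1.4e9 * 150        # ~1.4 billion iPhones at ~$150 each: $210 billion
newton_cost = 100e6                # about $100 million in development
ev_of_newton_bet = 0.05 * 200e9    # a 5% shot at a $200 billion payoff: $10 billion

print(f"iPhone profit:   ${iphone_profit / 1e9:,.0f} billion")
print(f"EV of the bet:   ${ev_of_newton_bet / 1e9:,.0f} billion")
print(f"Cost of the bet: ${newton_cost / 1e6:,.0f} million")
```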
Companies pursuing real innovation try to encourage risk taking by celebrating failure. Amazon and 3M are two examples of companies that have managed to innovate successfully over time thanks to their courage in rewarding employees who pursue bold and promising ideas even when they wind up failing. Both companies give their people the freedom to take costly risks and reward well-intentioned failure. What does it mean for failure to be well-intentioned? It means it has a positive expected value. After your innovation fails, how can you show it had a positive expected value? That is easier if you document its expected value before you take the plunge.
Document your reasoning for making a decision, based on its expected value. It can help you persuade your superiors or your bankers why they should bet on your risky idea. It can help you, as a boss or as an investor, figure out which risks are worth taking. Yes, people might be biased in the assessment of their ideas, but an explicit expected value calculation can help you unpack a rosy forecast into testable claims about potential revenues and probabilities of success. Moreover, this sort of documentation can be enormously useful after an outcome is known. This is the third benefit of writing down your expected value calculations.
Documenting your thinking at the time of a decision—when you make the bet—is also essential for helping you avoid the hindsight bias. I have reason to suspect that my PhD adviser, Max, did not take good notes of his reasoning at the time he offered me the insurance policy on my income. To my surprise, when I brought it up recently, he did not remember the conversation. “What insurance premium did I ask for?” he inquired.
“Five thousand dollars,” I told him. Having seen that I managed to get a job and even remain employed, Max was tempted to think that outcome had been more likely than it actually was.
“Boy, it sounds like I was ripping you off.” Max smiled.