During the 1990s, John Hennigan, an eccentric gambler who had been making a living by his wits and skills in poker and pool for several years, moved from Philadelphia to Las Vegas. His reputation and nickname, “Johnny World,” preceded him, due to his already exceptional skills and willingness to bet on anything. His talent has stood the test of time: he is a legendarily successful player in high-stakes games, and in major poker tournaments has earned four World Series of Poker bracelets, a World Poker Tour championship, and more than $6.5 million in prize money.
John was a perfect match for Las Vegas. He arrived already in rhythm with the town: sleeping all day and spending all night in poker games, pool halls, bars, and restaurants with adventurous, like-minded peers. He quickly found a group of professional gamblers with similar interests, many from the East Coast.
Although John and Vegas seemed made for each other, he had a love-hate relationship with the lifestyle. Playing poker for a living has the allure of giving you the freedom to make your own schedule but, once it boils down to your per-hour net advantage, you are tethered to putting in the hours. You’re “free” to play or not play whenever you want, but you can feel compelled to punch a clock. Worse, the best games are at night, so you’re working the graveyard shift. You get out of rhythm with the rest of the world, never see the sun, and your workplace is a smoke-filled room where you can’t even see outside. John felt this keenly.
One night, John was in a high-stakes poker game and the talk between hands somehow turned to Des Moines, the capital of Iowa. John had never been there or seen much of the Midwest, so he was curious about what life in Des Moines might be like—a “normal” life that increasingly seemed foreign to him, waking up in the morning and living in the daylight hours. This led to some good-natured ribbing as the other players in the game imagined the prospect of a nocturnal action junkie like John in a place that seemed, to them at least, like the opposite of Las Vegas: “There’s no gambling action.” “The bars close early.” “You’d hate it there.” Over the course of the evening, the discussion shifted to whether Hennigan could even live in such an unfamiliar place.
As is often the case with poker players, a conversation about a hypothetical turned into an opportunity to propose a wager. What would the stakes have to be for Hennigan to get up from the table, catch a flight, and relocate to Des Moines? If he took such a bet, how long would he have to live there?
John and the others landed on a month in Des Moines—a real commitment but not a permanent exile. When he seemed willing to, literally, walk out of a poker game and move 1,500 miles to a place he had never been, the other players added a diabolical condition to the negotiation: he would have to confine himself to one street in Des Moines, a street with a single hotel, restaurant, and bar, where everything closed at 10 p.m. That enforced idleness would be a challenge for anyone, regardless of the location. But for someone like John, a young, single, high-stakes gambler, this might actually count as torture. John said he would take such a challenge if they made one concession: he could practice and play at a nearby golf course.
After agreeing on the conditions, they still had to negotiate the size of the bet. The other players needed a number that was large enough to entice John to accept the wager, but not so large that it would entice John to stay even if he really hated it in Iowa. Because John was one of the most successful cash-game players in Las Vegas, a month in Des Moines could potentially cost him six figures in lost earnings. On the other hand, if they offered him too large an upside to stay in Des Moines, he would certainly endure the discomfort and boredom.
They settled on $30,000.
John considered two distinct and mutually exclusive alternatives: taking the bet or not taking the bet. Each came with new risks and new reward potentials. He could win or lose $30,000 if he took the bet (or win or lose greater dollar amounts at the poker table if he turned it down). The decision to move to Des Moines could also pay off long after the bet was over, if he used the golf-practice time to improve his chances gambling at high-stakes golf. He could further his reputation for being willing to bet on anything and being capable of anything, a profitable asset for professional gamblers. He also had to think about the other, less quantifiable things he might value. How much might he like the pace of life? How would he value taking a break from the action? Would he become more relaxed experiencing the more traditional schedule? Was the break worth the big pay cut from not being able to play poker for a month? And then there were the real unknowns. He might just meet the love of his life on that one street in Iowa. He had to weigh all of this against the opportunity costs of leaving Vegas—money from lost earning opportunities, a month of missing the nighttime activities he enjoyed, and even perhaps missing meeting the love of his life at the Mirage during that month.
Johnny World moved to Des Moines.
Was a month of detox away from the nightly life of a high-stakes Vegas pro going to be a blessing or a curse?
It took just two days for him to realize that it was a curse. From his hotel room in Des Moines, John called one of his friends on the other side of the bet and tried to negotiate a settlement. Just as parties in commercial lawsuits often settle before trial, in the gambling world negotiated settlements are common. What was particularly funny about John’s call was that his opening offer was that the others pay him $15,000 to spare them the cost and indignity of losing the whole amount. He argued that since he was already in Des Moines, he was clearly capable of waiting out the month to get the full amount.
The other bettors, literally, were not buying it. After all, John made this offer after only two days. That was a pretty strong signal that not only would they likely win the bet, but they might earn a return (in fun) by needling John while he served out his sentence.
Within a few days, John agreed to pay $15,000 to get out of the bet and return to Vegas. John proved, in spectacular fashion, that the grass is always greener.
The punch line of the John Hennigan–Des Moines story—“after two days, he begged to get out of it”—made it part of gambling folklore. That punch line, however, obscures how ordinary the underlying analysis about whether to move was. The only real difference between Johnny World’s decision to move to Des Moines and anyone else’s decision to relocate or take a job was that he and the poker players made explicit that the decision was a bet on what would most improve their quality of life (financial, emotional, and otherwise).
John considered two distinct and mutually exclusive alternative futures: taking the bet and living for a month in Des Moines, or not taking the bet and staying in Las Vegas. Any of us thinking about relocating for a new job has this same choice between moving, with the potential to earn the money being offered, or staying where we are and maintaining the status quo. How does the new job pay compared to what we have now? There are plenty of things we value in addition to money; we might be willing to make less money to move to a place we imagine we would like a lot better. Will the new job have better opportunities for advancement and future gains, independent of short-term gains in compensation? What are the differences in pay, benefits, security, work environment, and the kind of work we’d be doing? What are we giving up by leaving our city, colleagues, and friends for a new place?
We have to inventory the potential upside and downside of taking the bet just like Hennigan did. That his $30,000 wasn’t a sure thing doesn’t make his decision distinct from other job or relocation decisions. People take jobs all the time where a large portion of the compensation is contingent. In many businesses, compensation includes bonuses, stock options, or performance-based pay. Even though most people don’t have to consider losing $30,000 when they take a job, every decision has risks, regardless of whether we acknowledge them. Even a set salary is still not “guaranteed.” We could get laid off or hate the job and quit (as John Hennigan did), or the company could go out of business. When we take a job, especially one promising big financial rewards, the commitment to work can cost us time with our family and affect those relationships, a costly if not losing compromise.
In addition, whenever we choose an alternative (whether it is taking a new job or moving to Des Moines for a month), we are automatically rejecting every other possible choice. All those rejected alternatives are paths to possible futures where things could be better or worse than the path we chose. There is potential opportunity cost in any choice we forgo.
Likewise, the players on the other side of that bet, risking $30,000 to see if John would live a month in Des Moines, thought about similar factors that employers consider in making job offers or spending money to create enticing workplace environments. The poker players had to strike a fine balance in offering that bet to Hennigan: the proposition had to be good enough to entice him to take the bet but not so good that it would be guaranteed to cost them the $30,000.
Although employers aren’t trying to entice employees to quit, their goal is similar in arriving at a compensation package to get the prospect to accept the offer and stay in the job. They must balance offering attractive pay and benefits with going too far and impairing their ability to make a profit. Employers also want employees to be loyal, work long, productive hours, and maintain morale. An employer might or might not offer on-premises child care. That could encourage someone to work more hours . . . or scare off a prospective employee because it implies they may be expected to sacrifice aspects of their non-work lives. Offering paid vacation leave makes a job more attractive but, unlike offering free dining and exercise facilities, encourages employees to spend time away from work.
Hiring an employee, like offering a bet, is not a riskless choice. Betting on hiring the wrong person can have a huge cost (as the CEO who fired his president can attest). Recruitment costs can be substantial, and every job offer has an associated opportunity cost: an offer extended to one candidate is an offer you can’t extend to another. You might have dodged the cost of hiring Bernie Madoff, but you might have lost the benefit of hiring Bill Gates.
The John Hennigan story seems so unusual because it started with a discussion about what Des Moines was like and ended with one of the people in the discussion moving there the next day. That happened, though, because when you are betting, you have to back up your belief by putting a price on it. You have to put your money where your mouth is. To me, the ironic thing about a story that seems so crazy is how the underlying analysis was actually very logical: a difference of opinion about alternatives, consequences, and probabilities.
By treating decisions as bets, poker players explicitly recognize that they are deciding on alternative futures, each with benefits and risks. They also recognize there are no simple answers. Some things are unknown or unknowable. The promise of this book is that if we follow the example of poker players by making explicit that our decisions are bets, we can make better decisions and anticipate (and take protective measures against) the situations when irrationality is likely to keep us from acting in our best interest.
Our traditional view of betting is very narrow: casinos, sporting events, lottery tickets, wagering against someone else on the chance of a favorable outcome of some event. The definition of “bet” is much broader. Merriam-Webster’s Online Dictionary defines “bet” as “a choice made by thinking about what will probably happen,” “to risk losing (something) when you try to do or achieve something” and “to make decisions that are based on the belief that something will happen or is true.” I have emphasized the broader, often overlooked, aspects of betting: choice, probability, risk, decision, belief. We can also see from this definition that betting doesn’t have to take place in a casino or against somebody else.
No matter how far we get from the familiarity of betting at a poker table or in a casino, our decisions are always bets. We routinely decide among alternatives, put resources at risk, assess the likelihood of different outcomes, and consider what it is that we value. Every decision commits us to some course of action that, by definition, eliminates acting on other alternatives. Not placing a bet on something is, itself, a bet. Choosing to go to the movies means that we are choosing to not do all the other things with our time that we might do during that two hours. If we accept a job offer, we are also choosing to foreclose all other alternatives: we aren’t sticking with our current job, or negotiating to get a better deal in our current job, or getting or taking other offers, or changing careers, or taking some time away from work. There is always opportunity cost in choosing one path over others.
The betting elements of decisions—choice, probability, risk, etc.—are more obvious in some situations than others. Investments are clearly bets. A decision about a stock (buy, don’t buy, sell, hold, not to mention esoteric investment options) involves a choice about the best use of financial resources. Incomplete information and factors outside of our control make all our investment choices uncertain. We evaluate what we can, figure out what we think will maximize our investment money, and execute. Deciding not to invest or not to sell a stock, likewise, is a bet. These are the same decisions I make during a hand of poker: fold, check, call, bet, or raise.
We don’t think of our parenting choices as bets but they are. We want our children to be happy, productive adults when we send them out into the world. Whenever we make a parenting choice (about discipline, nutrition, school, parenting philosophy, where to live, etc.), we are betting that our choice will achieve the future we want for our children more than any other choice we might make given the constraints of the limited resources we have to allocate—our time, our money, our attention.
Job and relocation decisions are bets. Sales negotiations and contracts are bets. Buying a house is a bet. Ordering the chicken instead of the steak is a bet. Everything is a bet.
One of the reasons we don’t naturally think of decisions as bets is because we get hung up on the zero-sum nature of the betting that occurs in the gambling world; betting against somebody else (or the casino), where the gains and losses are symmetrical. One person wins, the other loses, and the net between the two adds to zero. Betting includes, but is not limited to, those situations.
In most of our decisions, we are not betting against another person. Rather, we are betting against all the future versions of ourselves that we are not choosing. We are constantly deciding among alternative futures: one where we go to the movies, one where we go bowling, one where we stay home. Or futures where we take a job in Des Moines, stay at our current job, or take some time away from work. Whenever we make a choice, we are betting on a potential future. We are betting that the future version of us that results from the decisions we make will be better off. At stake in a decision is that the return to us (measured in money, time, happiness, health, or whatever we value in that circumstance) will be greater than what we are giving up by betting against the other alternative future versions of us.
Have you ever had a moment of regret after a decision where you felt, “I knew I should have made the other choice!”? That’s an alternative version of you saying, “See, I told you so!”
When Pete Carroll called for a pass on second down, he didn’t need an inner voice second-guessing him. He had the collective cry of the Seahawks fans yelling, “When you called for Wilson to pass, you bet on the wrong future!”
How can we be sure that we are choosing the alternative that is best for us? What if another alternative would bring us more happiness, satisfaction, or money? The answer, of course, is we can’t be sure. Things outside our control (luck) can influence the result. The futures we imagine are merely possible. They haven’t happened yet. We can only make our best guess, given what we know and don’t know, at what the future will look like. If we’ve never lived in Des Moines, how can we possibly be sure how we will like it? When we decide, we are betting whatever we value (happiness, success, satisfaction, money, time, reputation, etc.) on one of a set of possible and uncertain futures. That is where the risk is.
Poker players live in a world where that risk is made explicit. They can get comfortable with uncertainty because they put it up front in their decisions. Ignoring the risk and uncertainty in every decision might make us feel better in the short run, but the cost to the quality of our decision-making can be immense. If we can find ways to become more comfortable with uncertainty, we can see the world more accurately and be better for it.
In an episode of the classic sitcom WKRP in Cincinnati, called “Turkeys Away,” the radio station’s middle-aged manager, Mr. Carlson, tries to prove he can stage a successful promotion for the rock-and-roll station. He sends his veteran news reporter, Les Nessman, to a local shopping center and tells him to report, live, on a turkey giveaway he is about to unleash.
The station’s DJ, Johnny Fever, cuts from his show to a live “man on the scene” report from Nessman. Nessman fills time, describing a helicopter overhead. Then something comes out of the helicopter. “No parachutes yet . . . Those can’t be skydivers. I can’t tell what they are but—oh, my God! They’re turkeys! . . . One just went through the windshield of a parked car! This is terrible! . . . Oh, the humanity! . . . The turkeys are hitting the ground like sacks of wet cement!” Nessman has to flee amid an ensuing riot. He returns to the studio and describes how Mr. Carlson tried to land the helicopter and free the remaining turkeys, but they waged a counterattack.
Carlson enters, ragged and covered with feathers. “As God is my witness, I thought turkeys could fly.”
We bet based on what we believe about the world. Pete Carroll’s Super Bowl decision to pass on the Patriots’ one-yard line was driven by his beliefs—his beliefs about quarterback Russell Wilson’s likelihood of completing the pass, of having the pass intercepted, of getting sacked (or scrambling for a touchdown). He had data on and experience about all these things, and then had to apply that to this unique situation, considering his beliefs about the Patriots’ defense and how their coach, Bill Belichick, would set up the defense for a likely running play on the goal line. He then made a choice about the best play to call based on these beliefs. He bet on a pass play.
The CEO who suffered all that anguish over firing the president did what he did based on his beliefs. He made his decision based on his beliefs about how the company was doing compared with competitors, what he thought the president did that contributed to or detracted from that, the likelihood he could get the president’s performance to improve, the costs and benefits to splitting the job between two people, and the likelihood he could find a replacement. He bet on letting the president go.
John Hennigan had beliefs about how he would adapt to Des Moines. Our beliefs drive the bets we make: which brands of cars better retain their value, whether critics knew what they were talking about when they panned a movie we are thinking about seeing, how our employees will behave if we let them work from home.
This is ultimately very good news: part of the skill in life comes from learning to be a better belief calibrator, using experience and information to more objectively update our beliefs to more accurately represent the world. The more accurate our beliefs, the better the foundation of the bets we make. There is also skill in identifying when our thinking patterns might lead us astray, no matter what our beliefs are, and in developing strategies to work with (and sometimes around) those thinking patterns. There are effective strategies to be more open-minded, more objective, more accurate in our beliefs, more rational in our decisions and actions, and more compassionate toward ourselves in the process.
We have to start, however, with some bad news. As Mr. Carlson learned in WKRP in Cincinnati, our beliefs can be way, way off.
When I speak at professional conferences, I will occasionally bring up the subject of belief formation by asking the audience a question: “Who here knows how you can predict if a man will go bald?” People will raise their hands, I’ll call on someone, and they’ll say, “You look at the maternal grandfather.” Everyone nods in agreement. I’ll follow up by asking, “Does anyone know how you calculate a dog’s age in human years?” I can practically see audience members mouthing, “Multiply by seven.”
Neither of these widely held beliefs is actually accurate. If you search online for “common misconceptions,” the baldness myth is at the top of most lists. As Medical Daily explained in 2015, “a key gene for baldness is on the X chromosome, which you get from your mother” but “it is not the only genetic factor in play since men with bald fathers have an increased chance of going bald when compared to men whose fathers have a full set of hair. . . . [S]cientists say baldness anywhere in your family may be a sign of your own impending fate.”
As for the dog-to-human age ratio, it’s just a made-up number that’s been circulating with no basis, yet with increasing weight through repetition, since the thirteenth century. Where did we get these beliefs? And why do they persist, despite contrary science and logic?
We form beliefs in a haphazard way, believing all sorts of things based just on what we hear out in the world but haven’t researched for ourselves.
This is how we think we form abstract beliefs: (1) we hear something; (2) we think about it and vet it, determining whether it is true or false; and only after that (3) we form our belief.
It turns out, though, that we actually form abstract beliefs this way: (1) we hear something; (2) we believe it to be true; and (3) only sometimes, later, if we have the time or the inclination, we think about it and vet it, determining whether or not it is, in fact, true.
Harvard psychology professor Daniel Gilbert, best known for his book Stumbling on Happiness and his starring role in Prudential Financial commercials, is also responsible for some pioneering work on belief formation. In a 1991 paper in which he summarized centuries of philosophical and scientific study on the subject, he concluded, “Findings from a multitude of research literatures converge on a single point: People are credulous creatures who find it very easy to believe and very difficult to doubt. In fact, believing is so easy, and perhaps so inevitable, that it may be more like involuntary comprehension than it is like rational assessment.”
Two years later, Gilbert and colleagues demonstrated through a series of experiments that our default is to believe that what we hear and read is true. Even when that information is clearly presented as being false, we are still likely to process it as true. In these experiments, subjects read a series of statements about a criminal defendant or a college student. These statements were color coded to make it clear whether they were true or false. Subjects under time pressure or who had their cognitive load increased by a minor distraction made more errors in recalling whether the statements were true or false. But the errors weren’t random. The subjects were not equally likely to ignore some statements labeled “true” as they were to rely on some statements labeled “false.” Rather, their errors went in one direction: under any sort of pressure, they presumed all the statements were true, regardless of their labeling. This suggests our default setting is to believe what we hear is true.
This is why we believe that baldness is passed down from the maternal grandfather. If you, like me until I looked it up for this book, held that belief, had you ever researched it for yourself? When I ask my audiences this question, they generally say it is just something they heard but they have no idea where or from whom. Yet they are very confident that this is true. That should be proof enough that the way we form beliefs is pretty goofy.
As with many of our irrationalities, how we form beliefs was shaped by the evolutionary push toward efficiency rather than accuracy. Abstract belief formation (that is, beliefs outside our direct experience, conveyed through language) is likely among the few things that are uniquely human, making it relatively new in the scope of evolutionary time. Before language, our ancestors could form new beliefs only through what they directly experienced of the physical world around them. For perceptual beliefs from direct sensory experience, it’s reasonable to presume our senses aren’t lying. Seeing is, after all, believing. If you see a tree right in front of you, it would generally be a waste of cognitive energy to question whether the tree exists. In fact, questioning what you see or hear can get you eaten. For survival-essential skills, type I errors (false positives) were less costly than type II errors (false negatives). In other words, better to be safe than sorry, especially when considering whether to believe that the rustling in the grass is a lion. We didn’t develop a high degree of skepticism when our beliefs were about things we directly experienced, especially when our lives were at stake.
As complex language evolved, we gained the ability to form beliefs about things we had not actually experienced for ourselves. And, as Gilbert pointed out, “nature does not start from scratch; rather, she is an inveterate jury rigger who rarely invents a new mechanism to do splendidly what an old mechanism can be modified to do tolerably well.” In this case, the system we already had was (1) experience it, (2) believe it to be true, and (3) maybe, and rarely, question it later. We may have more reasons to question this flood of secondhand information, but our older system is still in charge. (This is a very simple summary of a great deal of research and documentation. For some good overviews, I highly recommend Dan Gilbert’s Stumbling on Happiness, Gary Marcus’s Kluge, and Dan Kahneman’s Thinking, Fast and Slow, listed in the Selected Bibliography and Recommendations for Further Reading.)
A quick Google search will show that many of our commonly held beliefs are untrue. We just don’t get around to doing Google searches on these things. (Spoiler alerts: (1) Abner Doubleday had nothing to do with inventing the game of baseball. (2) We use all parts of our brain. The 10% figure was made up to sell self-improvement books; neural imaging and brain-injury studies disprove the fabrication. (3) Immigrants didn’t have their names Americanized, involuntarily or otherwise, at Ellis Island.)
Maybe it’s no big deal that some of these inconsequential common beliefs are clearly false. Presumably, people aren’t using a bogus dog-age calculator to make medical decisions for their pets, and veterinarians know better. But this is our general belief-formation process, and it applies in areas that can have significant consequences.
In poker, this belief-formation process can cost players a lot of money. One of the first things players learn in Texas Hold’em is a list of two-card starting hands to play or fold, based on your table position and actions from players before you.* When Texas Hold’em first developed in the sixties, some expert players innovated deceptive plays with middle cards consecutive in rank and of the same suit (like the six and seven of diamonds). In poker shorthand, such cards are called “suited connectors.”
Suited connectors have the attraction of making a powerful, camouflaged straight or a flush. Expert players might choose to play these types of hands in a very limited set of circumstances, namely where they feel they could fold the hand at a small loss; successfully bluff if it doesn’t develop; or extract maximum value in later betting rounds by trapping a player with conventionally stronger starting cards when the hand does develop favorably.
Unfortunately, the mantra of “win big or lose small with suited connectors” filtered down over the years without the subtlety of the expertise needed to play them well or the narrow circumstances needed to make those hands profitable. When I taught poker seminars, most of my students strongly believed suited connectors were profitable starting cards under pretty much any circumstances. When I asked why, I would hear “everyone knows that” or “I see players cleaning up with suited connectors all the time on TV.” But no one I asked had kept a P&L on their experience with suited connectors. “Do that,” I’d say, “and report back what you find.” Lo and behold, players who came back to me discovered they were net losers with suited connectors.
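For anyone curious what “keeping a P&L” on a starting hand might actually involve, here is a minimal, hypothetical Python sketch. The hand results below are invented for illustration (they are not data from my seminars); the point is simply that a running tally makes the pattern of many small losses and an occasional big win visible.

```python
from collections import defaultdict

# Running profit and loss per starting-hand category, in dollars.
pnl = defaultdict(lambda: {"hands": 0, "net": 0.0})

def record(hand_type, result):
    """Log one played hand: a positive result means money won, negative means money lost."""
    pnl[hand_type]["hands"] += 1
    pnl[hand_type]["net"] += result

# Invented sample results for hands played with suited connectors:
# lots of small losses, the occasional big win.
for result in [-40, -25, 180, -60, -35, -50, -45, -30]:
    record("suited connectors", result)

stats = pnl["suited connectors"]
print(f"suited connectors: {stats['hands']} hands, "
      f"net {stats['net']:+.0f}, avg {stats['net'] / stats['hands']:+.1f} per hand")
```

Run over even this small invented sample, the tally comes out negative, which is exactly the kind of surprise my students reported back.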
The same belief-formation process led hundreds of millions of people to bet the quality and length of their lives on their belief about the merits of a low-fat diet. Led by advice drawn, in part, from research secretly funded by the sugar industry, Americans in one generation cut a quarter of caloric intake from fat, replacing it with carbohydrates. The U.S. government revised the food pyramid to include six to eleven servings of carbohydrates and advised that the public consume fats sparingly. It encouraged the food industry (which enthusiastically followed) to substitute starch and sugar to produce “reduced-fat” foods. David Ludwig, a Harvard Medical School professor and doctor at Boston Children’s Hospital, summarized the cost of substituting carbs for fats in the Journal of the American Medical Association: “Contrary to prediction, total calorie intake increased substantially, the prevalence of obesity tripled, the incidence of type 2 diabetes increased many-fold, and the decades-long decrease in cardiovascular disease plateaued and may reverse, despite greater use of preventive drugs and surgical procedures.”
Low-fat diets became the suited connectors of our eating habits.
Even though our default is “true,” if we were good at updating our beliefs based on new information, our haphazard belief-formation process might cause relatively few problems. Sadly, this is not the way it works. We form beliefs without vetting most of them, and maintain them even after receiving clear, corrective information. In 1994, Hollyn Johnson and Colleen Seifert reported in the Journal of Experimental Psychology the results of a series of experiments in which subjects read messages about a warehouse fire. For subjects reading messages mentioning that the fire started near a closet containing paint cans and pressurized gas cylinders, that information (predictably) encouraged them to infer a connection. When, five messages later, subjects received a correction that the closet was empty, they still answered questions about the fire by blaming burning paint for toxic fumes and citing negligence for keeping flammable objects nearby. (This shouldn’t be a surprise to anyone recognizing the futility of issuing a retraction after reporting a news story with a factual error.)
Truthseeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information. We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.
As a college football season is about to close, all eyes are fixed on a fierce rivalry. The favorite, playing at home, has a twenty-two-game winning streak and is on the verge of completing a second consecutive undefeated season. The most emotional reception will be for Dick Kazmaier, the offensive star. One of the school’s all-time athletic heroes, he made the cover of Time and is in contention for All-American and other postseason honors. The visitors, however, have no intention of going down to defeat quietly. Although their record this season has been only average, they have a reputation for playing hard. Pulling off a stunning upset would be an unexpected treat.
Welcome to Princeton’s Palmer Stadium, November 23, 1951. The Dartmouth-Princeton football game became famous: part of a historic rivalry, the end of an epoch in Ivy League sports, and the subject of a groundbreaking scientific experiment.
First, the game. Princeton won, 13–0. The outcome was not in much doubt, but it was nevertheless a dirty, violent, penalty-laden game. Dartmouth received seventy yards in penalties, Princeton twenty-five. A fallen Princeton player got kicked in the ribs. One Dartmouth player broke a leg, and a second also suffered a leg injury. Kazmaier exited the game in the second quarter with a concussion and a broken nose. (He returned for the final play, earning a victory lap on his teammates’ shoulders. A few months later he became the last player from the Ivy League to win the Heisman Trophy.)
Surprised by the ferocity of the editorials in both schools’ newspapers after the game, a pair of psychology professors saw the occasion as an opportunity to study how beliefs can radically alter the way we process a common experience. Albert Hastorf of Dartmouth and Hadley Cantril of Princeton collected the newspaper stories, obtained a copy of the game film, showed it to groups of students from their schools, and had them complete questionnaires counting and characterizing the infractions on both sides. Their 1954 paper, “They Saw a Game,” could have been called “They Saw Two Games” because students from each school, based on their questionnaires and accounts, seemed to be watching different games.
Hastorf and Cantril collected anecdotal evidence of this in the lively accounts and editorials of the Dartmouth-Princeton game in local newspapers. The Daily Princetonian said, “Both teams were guilty but the blame must be laid primarily on Dartmouth’s doorstep.” The Princeton Alumni Weekly called out Dartmouth for a late hit on the play that ended Kazmaier’s college career and for kicking a prone Princeton player in the ribs. Meanwhile, an editorial in the Dartmouth placed heavy blame on Princeton coach Charley Caldwell. After the injury to the “Princeton idol,” “Caldwell instilled the old see-what-they-did-go-get-them attitude into his players. His talk got results,” the editorial asserted, referring to the pair of Dartmouth players suffering leg injuries in the third quarter. In the next issue of the Dartmouth, the paper listed star players from the opposing team that Princeton had stopped by a similar “concentrated effort.”
When the researchers showed groups of students the film of the game and asked them to fill out the questionnaires, the same difference of opinion about what they had seen appeared. Princeton students saw Dartmouth commit twice as many flagrant penalties and three times the mild penalties as Princeton. Dartmouth students saw each team commit an equal number of infractions.
Hastorf and Cantril concluded, “We do not simply ‘react to’ a happening. . . . We behave according to what we bring to the occasion.” Our beliefs affect how we process all new things, “whether the ‘thing’ is a football game, a presidential candidate, Communism, or spinach.”
A study in the 2012 Stanford Law Review called “They Saw a Protest” (the title is a homage to the original Hastorf and Cantril experiment) by Yale professor of law and psychology Dan Kahan, a leading researcher and analyst of biased reasoning, and four colleagues reinforces this notion that our beliefs drive the way we process information.
In the study, two groups of subjects watched a video of police action halting a political demonstration. One group was told the protest occurred outside an abortion clinic, aimed at protesting legalized abortion. Another group was told it occurred at a college career-placement facility, where the military was conducting interviews and protestors were demonstrating against the then-existing ban on openly gay and lesbian soldiers. It was the same video, carefully edited to blur or avoid giving away the subject of the actual protest. Researchers, after gathering information about the worldviews of the subjects, asked them about facts and conclusions from what they saw.
The results mirrored those found by Hastorf and Cantril nearly sixty years before: “Our subjects all viewed the same video. But what they saw—earnest voicing of dissent intended only to persuade, or physical intimidation calculated to interfere with the freedom of others—depended on the congruence of the protestors’ positions with the subjects’ own cultural values.” Whether it is a football game, a protest, or just about anything else, our pre-existing beliefs influence the way we experience the world. That those beliefs aren’t formed in a particularly orderly way leads to all sorts of mischief in our decision-making.
Flaws in forming and updating beliefs have the potential to snowball. Once a belief is lodged, it becomes difficult to dislodge. It takes on a life of its own, leading us to notice and seek out evidence confirming our belief, rarely challenge the validity of confirming evidence, and ignore or work hard to actively discredit information contradicting the belief. This irrational, circular information-processing pattern is called motivated reasoning. The way we process new information is driven by the beliefs we hold, strengthening them. Those strengthened beliefs then drive how we process further information, and so on.
During a break in a poker tournament, a player approached me for my opinion about how he played one of those suited-connector hands. I didn’t witness the hand, and he gave me a very abbreviated description of how he stealthily played the six and seven of diamonds to make a flush on the second-to-last card but “had the worst luck” when the other player made a full house on the very last card.
We had only a minute or two left in the break, so I asked what I thought to be the most relevant question: “Why were you playing six-seven of diamonds in the first place?” (Even a brief explanation, I expected, would fill in details on many of the areas that determine how to play a hand like that and whether it was a profitable choice, such as table position, pot size, chip stack sizes, his opponent’s style of play, how the table perceived his style, etc.)
His exasperated response was, “That’s not the point of the story!” Motivated reasoning tells us it’s not really the point of anyone’s story.
It doesn’t take much for any of us to believe something. And once we believe it, protecting that belief guides how we treat further information relevant to the belief. This is perhaps no more evident than in the rise in prominence of “fake news” and disinformation. The concept of “fake news,” an intentionally false story planted for financial or political gain, is hundreds of years old. It has included such legendary practitioners as Orson Welles, Joseph Pulitzer, and William Randolph Hearst. Disinformation is different than fake news in that the story has some true elements, embellished to spin a particular narrative. Fake news works because people who already hold beliefs consistent with the story generally won’t question the evidence. Disinformation is even more powerful because the confirmable facts in the story make it feel like the information has been vetted, adding to the power of the narrative being pushed.
Fake news isn’t meant to change minds. As we know, beliefs are hard to change. The potency of fake news is that it entrenches beliefs its intended audience already has, and then amplifies them. The Internet is a playground for motivated reasoning. It provides the promise of access to a greater diversity of information sources and opinions than we’ve ever had available, yet we gravitate toward sources that confirm our beliefs, that agree with us. Every flavor is out there, but we tend to stick with our favorite.
Making matters worse, many social media sites tailor our Internet experience to show us more of what we already like. Author Eli Pariser coined the term “filter bubble” in his 2011 book of the same name to describe how companies like Google and Facebook use algorithms to keep pushing us in the directions we’re already headed. By collecting data on our searches, our browsing, and the activity of our friends and contacts, they serve up headlines and links that cater to what they’ve divined as our preferences. The Internet, which gives us access to a diversity of viewpoints with unimaginable ease, in fact speeds our retreat into a confirmatory bubble. No matter our political orientation, none of us is immune.
The most popular websites have been doing our motivated reasoning for us.*
Even when directly confronted with facts that disconfirm our beliefs, we don’t let facts get in the way. As Daniel Kahneman pointed out, we just want to think well of ourselves and feel that the narrative of our life story is a positive one. Being wrong doesn’t fit into that narrative. If we think of beliefs as only 100% right or 100% wrong, when confronting new information that might contradict our belief, we have only two options: (a) make the massive shift in our opinion of ourselves from 100% right to 100% wrong, or (b) ignore or discredit the new information. It feels bad to be wrong, so we choose (b). Information that disagrees with us is an assault on our self-narrative. We’ll work hard to swat that threat away. On the flip side, when additional information agrees with us, we effortlessly embrace it.
How we form beliefs, and our inflexibility about changing our beliefs, has serious consequences because we bet on those beliefs. Every bet we make in our lives depends on our beliefs: who we believe will make the best president, if we think we will like Des Moines, if we believe a low-fat diet will make us healthier, or even if we believe turkeys can fly.
The popular wisdom is that the smarter you are, the less susceptible you are to fake news or disinformation. After all, smart people are more likely to analyze and effectively evaluate where information is coming from, right? Part of being “smart” is being good at processing information, parsing the quality of an argument and the credibility of the source. So, intuitively, it feels like smart people should have the ability to spot motivated reasoning coming and should have more intellectual resources to fight it.
Surprisingly, being smart can actually make bias worse. Let me give you a different intuitive frame: the smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalizing and framing the data to fit your argument or point of view. After all, people in the “spin room” in a political setting are generally pretty smart for a reason.
In 2012, psychologists Richard West, Russell Meserve, and Keith Stanovich tested the blind-spot bias—an irrationality where people are better at recognizing biased reasoning in others but are blind to bias in themselves. Overall, their work supported, across a variety of cognitive biases, that, yes, we all have a blind spot about recognizing our biases. The surprise is that blind-spot bias is greater the smarter you are. The researchers tested subjects for seven cognitive biases and found that cognitive ability did not attenuate the blind spot. “Furthermore, people who were aware of their own biases were not better able to overcome them.” In fact, in six of the seven biases tested, “more cognitively sophisticated participants showed larger bias blind spots.” (Emphasis added.) They have since replicated this result.
Dan Kahan’s work on motivated reasoning also indicates that smart people are not better equipped to combat bias—and may even be more susceptible. He and several colleagues looked at whether conclusions from objective data were driven by subjective pre-existing beliefs on a topic. When subjects were asked to analyze complex data on an experimental skin treatment (a “neutral” topic), their ability to interpret the data and reach a conclusion depended, as expected, on their numeracy (mathematical aptitude) rather than their opinions on skin cream (since they really had no opinions on the topic). More numerate subjects did a better job at figuring out whether the data showed that the skin treatment increased or decreased the incidence of rashes. (The data were made up, and for half the subjects, the results were reversed, so the correct or incorrect answer depended on using the data, not the actual effectiveness of a particular skin treatment.)
When the researchers kept the data the same but substituted “concealed-weapons bans” for “skin treatment” and “crime” for “rashes,” now the subjects’ opinions on those topics drove how subjects analyzed the exact same data. Subjects who identified as “Democrat” or “liberal” interpreted the data in a way supporting their political belief (gun control reduces crime). The “Republican” or “conservative” subjects interpreted the same data to support their opposing belief (gun control increases crime).
That generally fits what we understand about motivated reasoning. The surprise, though, was Kahan’s finding about subjects with differing math skills and the same political beliefs. He discovered that the more numerate people (whether pro- or anti-gun) made more mistakes interpreting the data on the emotionally charged topic than the less numerate subjects sharing those same beliefs. “This pattern of polarization . . . does not abate among high-Numeracy subjects. Indeed, it increases.” (Emphasis in original.)
It turns out the better you are with numbers, the better you are at spinning those numbers to conform to and support your beliefs.
Unfortunately, this is just the way evolution built us. We are wired to protect our beliefs even when our goal is to truthseek. This is one of those instances where being smart and aware of our capacity for irrationality alone doesn’t help us refrain from biased reasoning. As with visual illusions, we can’t make our minds work differently than they do no matter how smart we are. Just as we can’t unsee an illusion, intellect or willpower alone can’t make us resist motivated reasoning.
So far, this chapter has mainly been bad news. We bet on our beliefs. We don’t vet those beliefs well before we form them. We stubbornly refuse to update our beliefs. Now I’ve piled on by telling you that being smart doesn’t help—and can make it worse.
The good news starts here.
Imagine taking part in a conversation with a friend about the movie Citizen Kane. Best film of all time, introduced a bunch of new techniques by which directors could contribute to storytelling. “Obviously, it won the best-picture Oscar,” you gush, as part of a list of superlatives the film unquestionably deserves.
Then your friend says, “Wanna bet?”
Suddenly, you’re not so sure. That challenge puts you on your heels, causing you to back off your declaration and question the belief that you just declared with such assurance. When someone challenges us to bet on a belief, signaling their confidence that our belief is inaccurate in some way, ideally it triggers us to vet the belief, taking an inventory of the evidence that informed us.
Remember the order in which we actually form abstract beliefs: we hear something, we believe it to be true, and only sometimes, later, do we think about it and vet it.
“Wanna bet?” triggers us to engage in that third step that we only sometimes get to. Being asked if we are willing to bet money on it makes it much more likely that we will examine our information in a less biased way, be more honest with ourselves about how sure we are of our beliefs, and be more open to updating and calibrating our beliefs. The more objective we are, the more accurate our beliefs become. And the person who wins bets over the long run is the one with the more accurate beliefs.
Of course, in most instances, the person offering to bet isn’t actually looking to put any money on it. They are just making a point—a valid point that perhaps we overstated our conclusion or made our statement without including relevant caveats. Most people aren’t like poker players, around whom there is always the potential that someone might propose a bet and they will mean it.
Next thing you know, someone moves to Des Moines and there’s $30,000 at stake.
It’s a shame the social contract for poker players is so different than for the rest of us in this regard because a lot of good can result from someone saying, “Wanna bet?” Offering a wager brings the risk out in the open, making explicit what is already implicit (and frequently overlooked). The more we recognize that we are betting on our beliefs (with our happiness, attention, health, money, time, or some other limited resource), the more we are likely to temper our statements, getting closer to the truth as we acknowledge the risk inherent in what we believe.
Expecting everyone to start throwing down the gauntlet, challenging each other to bet on any opinion, is impractical if you aren’t hanging out in a poker room. (Even in poker rooms, this generally happens only among players who know each other well.) I imagine that if you went around challenging everyone with “Wanna bet?” it would be difficult to make friends and you’d lose the ones you have. But that doesn’t mean we can’t change the framework for ourselves in the way we think about our decisions. We can train ourselves to view the world through the lens of “Wanna bet?”
Once we start doing that, we are more likely to recognize that there is always a degree of uncertainty, that we are generally less sure than we thought we were, that practically nothing is black and white, 0% or 100%. And that’s a pretty good philosophy for living.
Not much is ever certain. Samuel Arbesman’s The Half-Life of Facts is a great read about how practically every fact we’ve ever known has been subject to revision or reversal. We are in a perpetual state of learning, and that can make any prior fact obsolete. One of many examples he provides is about the extinction of the coelacanth, a fish from the Late Cretaceous period. A mass-extinction event (such as a large meteor striking the Earth, a series of volcanic eruptions, or a permanent climate shift) ended the Cretaceous period. That was the end of dinosaurs, coelacanths, and a lot of other species. In the late 1930s and independently in the mid-1950s, however, coelacanths were found alive and well. A species becoming “unextinct” is pretty common. Arbesman cites the work of a pair of biologists at the University of Queensland who made a list of all 187 species of mammals declared extinct in the last five hundred years. More than a third of those species have subsequently been rediscovered.
Given that even scientific facts can have an expiration date, we would all be well-advised to take a good hard look at our beliefs, which are formed and updated in a much more haphazard way than those in science. We don’t need someone challenging us to an actual bet to do this. We can think like a bettor, purposefully and on our own, like it’s a game even if we’re just doing it ourselves.
We would be better served as communicators and decision-makers if we thought less about whether we are confident in our beliefs and more about how confident we are. Instead of thinking of confidence as all-or-nothing (“I’m confident” or “I’m not confident”), our expression of our confidence would then capture all the shades of grey in between.
When we express our beliefs (to others or just to ourselves as part of our internal decision-making dialogue), they don’t generally come with qualifications. What if, in addition to expressing what we believe, we also rated our level of confidence about the accuracy of our belief on a scale of zero to ten? Zero would mean we are certain a belief is not true. Ten would mean we are certain that our belief is true. A zero-to-ten scale translates directly to percentages. If you think the belief rates a three, that means you are 30% sure the belief is accurate. A nine means you are 90% sure. So instead of saying to ourselves, “Citizen Kane won the Oscar for best picture,” we would say, “I think Citizen Kane won the Oscar for best picture but I’m only a six on that.” Or “I’m 60% confident that Citizen Kane won the Oscar for best picture.” That means your level of certainty is such that 40% of the time it will turn out that Citizen Kane did not win the best-picture Oscar. Forcing ourselves to express how sure we are of our beliefs brings to plain sight the probabilistic nature of those beliefs, that what we believe is almost never 100% or 0% accurate but, rather, somewhere in between.
In a similar vein, the number can reflect several different kinds of uncertainty. “I’m 60% confident that Citizen Kane won best picture” reflects that our knowledge of this past event is incomplete. “I’m 60% confident the flight from Chicago will be late” incorporates a mix of our incomplete knowledge and the inherent uncertainty in predicting the future (e.g., the weather might intervene or there might be an unforeseen mechanical issue).
We can also express how confident we are by thinking about the number of plausible alternatives and declaring that range. For example, if I am stating my belief about what age Elvis died, I might say, “Somewhere between age forty and forty-seven.” I know he died in his forties and I remember that it was his earlier forties, so for me this is the range of plausible alternatives. The more we know about a topic, the better the quality of information we have, the tighter the range of plausible alternatives. (When it comes to predictions, the plausible range of outcomes would also be tighter when there is less luck involved.) The less we know about a topic or the more luck involved, the wider our range.
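To make that bookkeeping concrete, here is a minimal, hypothetical sketch (in Python, purely for concreteness) of a belief recorded with both a confidence level and a range of plausible alternatives. The Citizen Kane and Elvis examples come from the text above; the specific confidence numbers attached to them are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Belief:
    claim: str
    confidence: float  # 0.0 to 1.0; a "six" on the zero-to-ten scale is 0.6
    plausible_range: Optional[Tuple[int, int]] = None  # e.g. a range of ages

    def statement(self) -> str:
        """Phrase the belief the way the chapter suggests: with its uncertainty attached."""
        pct = round(self.confidence * 100)
        text = f"I'm {pct}% confident that {self.claim}"
        if self.plausible_range is not None:
            low, high = self.plausible_range
            text += f" (plausible range: {low} to {high})"
        return text

# Illustrative examples in the spirit of the chapter; the confidence values are invented.
oscar = Belief("Citizen Kane won the Oscar for best picture", confidence=0.6)
elvis = Belief("Elvis died in his forties", confidence=0.8, plausible_range=(40, 47))

print(oscar.statement())  # I'm 60% confident that Citizen Kane won the Oscar for best picture
print(elvis.statement())  # I'm 80% confident that Elvis died in his forties (plausible range: 40 to 47)
```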
We can declare how sure we are whether we are thinking about a particular fact or set of facts (“dinosaurs were herd animals”), a prediction (“I think there is life on other planets”), or how the future will turn out given some decision we might make (“I think I will be happier if I move to Des Moines than I am where I live now” or “I think the company will be better off if we fire the president”). These are all beliefs of differing sorts.
Incorporating uncertainty into the way we think about our beliefs comes with many benefits. By expressing our level of confidence in what we believe, we are shifting our approach to how we view the world. Acknowledging uncertainty is the first step in measuring and narrowing it. Incorporating uncertainty in the way we think about what we believe creates open-mindedness, moving us closer to a more objective stance toward information that disagrees with us. We are less likely to succumb to motivated reasoning since it feels better to make small adjustments in degrees of certainty instead of having to grossly downgrade from “right” to “wrong.” When confronted with new evidence, it is a very different narrative to say, “I was 58% but now I’m 46%.” That doesn’t feel nearly as bad as “I thought I was right but now I’m wrong.” Our narrative of being a knowledgeable, educated, intelligent person who holds quality opinions isn’t compromised when we use new information to calibrate our beliefs, compared with having to make a full-on reversal. This shifts us away from treating information that disagrees with us as a threat, as something we have to defend against, making us better able to truthseek.
When we work toward belief calibration, we become less judgmental of ourselves. Incorporating percentages or ranges of alternatives into the expression of our beliefs means that our personal narrative no longer hinges on whether we were wrong or right but on how well we incorporate new information to adjust the estimate of how accurate our beliefs are. There is no sin in finding out there is evidence that contradicts what we believe. The only sin is in not using that evidence as objectively as possible to refine that belief going forward.
Declaring our uncertainty in our beliefs to others makes us more credible communicators. We assume that if we don’t come off as 100% confident, others will value our opinions less. The opposite is usually true. If one person expresses a belief as absolutely true, and someone else expresses a belief by saying, “I believe this to be true, and I’m 80% on it,” who are you more likely to believe? The fact that the person is expressing their confidence as less than 100% signals that they are trying to get at the truth, that they have considered the quantity and quality of their information with thoughtfulness and self-awareness. And thoughtful and self-aware people are more believable.
Expressing our level of confidence also invites people to be our collaborators. As I said, most of us don’t live our lives in poker rooms, where it is more socially acceptable to challenge a peer who expresses an opinion we believe to be inaccurate to a wager. Outside of the poker room, when we declare something as 100% fact, others might be reluctant to offer up new and relevant information that would inform our beliefs for two reasons. First, they might be afraid they are wrong and so won’t speak up, worried they will be judged for that, by us or themselves. Second, even if they are very confident their information is high quality, they might be afraid of making us feel bad or judged. By saying, “I’m 80%” and thereby communicating we aren’t sure, we open the door for others to tell us what they know. They realize they can contribute without having to confront us by saying or implying, “You’re wrong.” Admitting we are not sure is an invitation for help in refining our beliefs, and that will make our beliefs much more accurate over time as we are more likely to gather relevant information.
Expressing our beliefs this way also serves our listeners. We know that our default is to believe what we hear, without vetting the information too carefully. If we communicate to our listeners that we are not 100% on what we are saying, they are less likely to walk away having been infected by our beliefs. Expressing the belief as uncertain signals to our listeners that the belief needs further vetting, that step three is still in progress.
When scientists publish results of experiments, they share with the rest of their community their methods of gathering and analyzing the data, the data itself, and their confidence in that data. That makes it possible for others to assess the quality of the information being presented, systematized through peer review before publication. Confidence in the results is expressed through both p-values, the probability of getting a result at least as extreme as the one observed if there were no real effect (akin to declaring your confidence on a scale of zero to ten), and confidence intervals (akin to declaring ranges of plausible alternatives). Scientists, by institutionalizing the expression of uncertainty, invite their community to share relevant information and to test and challenge the results and explanations. The information that gets shared back might confirm, disconfirm, or refine published hypotheses. The goal is to advance knowledge rather than affirm what we already believe. This is why science advances at a fast clip.*
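As a rough, hypothetical illustration of those two conventions, here is a minimal Python sketch that computes a confidence interval and a p-value for an invented result, using the simple normal approximation. The numbers are made up; the sketch only shows how each quantity expresses a flavor of uncertainty about the same data.

```python
import math

# Invented example: 62 of 100 trial participants improved on a treatment.
successes, n = 62, 100
p_hat = successes / n

# 95% confidence interval for the true rate, via the normal approximation
# (akin to declaring a range of plausible alternatives).
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se

# Two-sided p-value against the null hypothesis that the true rate is 50%
# (akin to saying how surprising the observed result would be if the
# treatment did nothing).
z = (p_hat - 0.5) / math.sqrt(0.5 * 0.5 / n)
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"Observed rate: {p_hat:.0%}")
print(f"95% confidence interval: {ci_low:.0%} to {ci_high:.0%}")
print(f"p-value vs. a 50% null: {p_value:.3f}")
```

The confidence interval names the range of rates still plausible given the data, and the p-value says how confident we can be that the observed difference isn't just noise; neither is a declaration of certainty, which is precisely the point.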
By communicating our own uncertainty when sharing beliefs with others, we are inviting the people in our lives to act like scientists with us. This advances our beliefs at a faster clip because we miss out on fewer opportunities to get new information, information that would help us to calibrate the beliefs we have.
Acknowledging that decisions are bets based on our beliefs, getting comfortable with uncertainty, and redefining right and wrong are integral to a good overall approach to decision-making. But I don’t expect that, having dumped all these concepts in your lap, you should somehow know the best way to use them. These patterns are so engrained in our thinking that it takes more than knowing the problem or even having the right outlook to overcome the irrationalities that hold us back. What I’ve done so far, really, is identify the target; now that we are facing the right direction, thinking in bets is a tool to be somewhat better at hitting it.