CHAPTER 1

A New System

The Irrational Human Mind and the First Few Who Recognized It

Traditionally, talent scouts have been the ones to pick which players to draft in baseball. It’s a time-honored position, steeped in ritual and lore, but it’s also a position shrouded in the mysteries of human intuition. It is thought that talent scouts draw on a rich reservoir of experience to discern qualities in players that are imperceptible to the casual observer: a crisp swing of the bat, a quick step, or a fluidity to their movement. They carefully study the way the players carry themselves to detect a possible underlying confidence cloaked in subtle movements and expressions, verbal or nonverbal. They are craftsmen of human intuition. They see beyond the statistics. They quantify the unquantifiable.

Or so it was thought.

In 2002 the Oakland A’s, a small-market team with a small-market budget, set about finding a better way to pick players. Their general manager, Billy Beane, began ignoring the advice of the team’s scouts and turned to a purely data-driven approach to picking players. He zeroed in on the single metric the data shouted mattered most: on-base percentage, the rate at which a player gets on base. It didn’t matter, Beane reasoned, whether the player was short, fat, slow, or appeared unathletic; whatever variables caused the scouts to pass them over were irrelevant. It didn’t even matter how they got on base—whether it was by a hit or an unglamorous walk—as long as they got on base.

It was a daring move, to be sure, and everyone thought Beane and his management crew were crazy. The difference between a good athlete and a great athlete was impossible to capture through statistics, the scouts shot back. It was buried in deep and cryptic perception, an acuity measured in nanoseconds. And only a scout whose life was dedicated to picking up these subtleties could determine who would remain average and who would surface as the next star.

Yet Oakland’s management was convinced that these differences left a statistical trail of breadcrumbs. Scouts were human after all, and humans are capable of all sorts of misjudgment and bias. Statistics, however, were not. In the spring of 2002 many of the players who ran onto the field for the Oakland A’s were there because of a mathematical formula. The formula screamed that these players were undervalued. For the A’s in 2002, acquiring a player for cheap wasn’t just nice, it was a necessity. Out of the thirty teams in Major League Baseball, the Oakland A’s payroll ranked twenty-eighth, roughly a third of the New York Yankees’. They were acquiring players who had slipped through the cracks. They were getting the rejects who had passed unnoticed through the net of collective wisdom cast by the league’s talent scouts. But to the A’s, they were mispriced gems. To the scouts, the fans, the announcers, the team appeared to be a joke, a collection of misfits. What happened next shocked everyone: The Oakland A’s began to win.

Then, beginning on August 13, 2002, the team of oddities ran off twenty consecutive wins, breaking the American League record. They made it all the way to the American League divisional playoffs, a feat that stunned Major League Baseball and slayed a century of conventional wisdom. After the 2002 season the Oakland A’s “moneyball” approach to picking players was noticed by the rest of baseball. In 2004 the Boston Red Sox copied the A’s strategy and won their first World Series in almost a century. And won it again in 2007. And again in 2013.

Could a purely statistically driven approach to picking players really be better than the human intuition of the league’s talent scouts? Or was the success of the A’s and the Red Sox just dumb luck? Too much money was at stake not to find out, and sports franchises outside of baseball began to adopt the model. In the spring of 2006 Leslie Alexander, owner of the NBA’s Houston Rockets, hired self-proclaimed “nerd” Daryl Morey to apply an analytical model Morey had built over years of painstaking analysis to the job of picking players. As had been true with Major League Baseball, the new importance placed on these “geeks” didn’t go over well in the culture of professional basketball. NBA All-Star and network announcer Charles Barkley even went on a four-minute tirade during an on-air NBA halftime show about the league’s new courtship with data crunchers, zeroing in on Morey himself. Barkley described him as an “idiot” and said his analytics were a bunch of “crap.” Barkley finished his rant by claiming that guys like Morey were working in basketball because they “never got the girls in high school and they just want to be in the game.”

Yet, for better or worse, “geeks” like Morey had arrived, and if their models proved better at picking players than the NBA talent scouts, they weren’t going anywhere soon. Morey’s model rummaged through mountains of accumulated data and isolated the variables that appeared to matter, assigning a degree of importance to each one. The whole system was designed to sidestep the internal biases and misjudgments intrinsic to the reckonings of the NBA’s talent scouts.

Building a mathematical model to pick players, Morey could attest, was not at all easy. It was the product of years of trial and error, and it seemed never to be finished; the model was in a perpetual state of refinement. For example, a college player might have a record of scoring lots of points, but sometimes this was because the player hogged the ball, and the high scoring came at the expense of winning the game. Another player might look great in college only because he was older and physically more mature. The pool of data was not straightforward; it was full of statistical traps that had to be recognized and accounted for. In the end, however, Morey was confident the model offered a slight edge over the scouts. And in a game often decided by a tiny percentage of overall points, even a slight edge was enough.

But where did the scouts go wrong? Morey thought that the most obvious way appeared to be a flaw in human reasoning known as confirmation bias. Humans tend to make very quick judgments about people, often based on a range of subjective beliefs. The shape of someone’s face, a subtle expression, or a certain laugh might remind you of someone you like or dislike and will immediately color your impression of that individual. Research shows that when an impression forms in the human mind it’s hard to undo. Once an impression of someone is established—typically very quickly—you then tend to notice more closely those things that confirm your first impression and discount those that may contradict it. Like the rest of us, talent scouts also fall victim to confirmation bias.

A scout has a deep well of memories that flood to the surface when he or she evaluates a player. A prospective player might subconsciously remind the scout of a current star player or one of their successful picks: the delicate way the wrist drops upon release of the ball, a barely perceptible double head fake, or perhaps simply the player’s looks or physical build. These attributes have little to do with the player’s potential, but nevertheless the impressions coalesce in the scout’s mind, helping to form a rapid judgment: I like that player. Locked in confirmation bias, the scout preferentially begins to notice attributes that confirm his or her flawed first impression.

“Confirmation bias is the most insidious because you don’t even realize it’s happening,” said Morey. For example, a Taiwanese-American named Jeremy Lin entered the NBA draft after graduating from Harvard in 2010. Lin didn’t look like any of the NBA players anchored in the minds of the league’s talent scouts. But Lin caught the attention of Morey’s model. Lin’s statistics were entered, and the numbers were crunched: Morey’s model signaled that Jeremy Lin was a hot prospect. According to the model, Lin should have been drafted 15th. But, just like the other scouts, Morey was unable to surmount his internal bias: “Every fucking person, including me, thought he was unathletic,” he said. The Rockets passed Lin over. As did every other team. And Lin went undrafted.1

Looking back, Lin had a history of going unnoticed. Both of Lin’s parents were 5-foot, 6-inch Taiwanese immigrants. Yet Jeremy, the middle child of three sons, defied his genetic allotment and grew to 6 feet, 3 inches tall. When Lin was a boy, his father took him and his brothers to the local YMCA, where he taught them to play basketball. Jeremy took to the game immediately. During his senior year Lin led his Palo Alto high school team to the Division II state title, ending the season with a record of 32 wins and 1 loss. For his efforts Lin was named First-Team All-State and Northern California Division II Player of the Year. Still, college scouts seemed to scarcely notice him. He was offered no Division I scholarships. While the scouts failed to recognize Lin’s talent, his high school coach and teammates did not; they declared that he possessed a sort of preternatural sense of the game. “He knows exactly what needs to be done at every point of the game,” said his coach. “He always knew how the defense was set up and where the weak spots were,” added a former teammate.

Rejected by his two “dream” schools, Stanford and UCLA, Lin landed at Harvard, helped by his 4.2 GPA. By his junior year he ranked in the top ten for scoring in his conference. During his senior year he was a unanimous selection for the All-Ivy League First Team, leading his Harvard team to a series of records. Lin left Harvard as the first player in Ivy League history to record more than 1,450 points, 450 rebounds, 400 assists, and 200 steals. Still, despite his performance at Harvard, Lin again went unnoticed, with all thirty NBA teams passing him over in the 2010 draft.

Unwilling to give up, Lin managed to land a partially guaranteed contract with his hometown Golden State Warriors. Even so, he didn’t play much his rookie year and was demoted to the Development League three times before finally being let go. He was picked up by the New York Knicks in late 2011, again played little, and again spent time in the Development League. But in 2012, everything changed. Lin was sleeping on his brother’s couch in a one-bedroom East Village apartment, on the verge of giving up on the NBA, when a series of injuries to key players left the Knicks in a desperate situation. They had lost eleven of their last thirteen games. In what appeared to be a final act of desperation, Lin was called up from the bench and put in the game. That night he went out and lit up the court, singlehandedly leading a gritty turnaround win for the Knicks against the New Jersey Nets. Following his outstanding performance Lin was promoted to the starting lineup, where he fronted a seven-game winning streak—even outscoring Kobe Bryant in a matchup against the Lakers. The Cinderella story instantly captivated the media and the NBA’s fan base and then spread around the globe, sparking a worldwide craze that quickly became known as “Linsanity.”

Later, after Lin underwent a battery of tests, Morey discovered that he was, in fact, incredibly athletic. Lin logged an explosive acceleration and change of direction rate that few in the NBA have matched. But because Lin didn’t conform to the mental model of what an NBA player should look like, he was passed over by every single talent scout. A 2012 New York Times article on Lin summarized it this way: “Coaches have said recruiters, in the age of who-does-he-remind-you-of evaluations, simply lacked a frame of reference for such an Asian-American talent.”2 When the dust settled, however, despite his mistake in passing Lin up, Morey’s system worked. And in the decade that followed, Morey’s analytic approach led the team to the third-best record in the NBA—a decade without a single losing season. The turn of the century had ushered in the rise of the “nerd.” And “nerd” was no longer considered a purely derogatory label; it had become a badge of success. Silicon Valley was minting fresh “nerdy” billionaires at an astonishing speed as new technologies spawned entire new industries. And now they were redefining sports, too. Morey gave his own definition of a nerd: “A person who knows his own mind enough to mistrust it.”3

Yet talent scouts had nothing but their mind. That was their tool. And now Morey was claiming that had been the problem all along. Until now, “talent scout” had been a canonized title. They were believed to possess a preternatural instinct honed over years of experience. Like any highly skilled individual, they were bestowed with the title of expert. What Morey was claiming was not trivial. He was casting doubt on the fundamental belief that the scout’s mind, the human mind, was a rational learning machine capable of real intuition. This raised an important question: How rational is the human mind?

Are People Rational or Irrational?

I’m smart enough to know that I’m dumb.

RICHARD FEYNMAN

Early twentieth-century economists built theories organized around a single basic assumption: People are mostly rational when it comes to making decisions. And how could we not be? The exalted rise of humanity was anchored in rationality. Starting in caves, human beings had perfected the use of tools, agriculture, animal domestication, and architecture—leading to the origin of entire civilizations. The Dark Ages were illuminated by the Enlightenment. Embracing the scientific method gave rise to the Industrial Revolution and technologies beyond our wildest dreams: light bulbs, telephones, cars, airplanes, and computers. It seemed intuitively obvious that humans rationally evaluated information when pressed to make a decision; the edifice of civilization was clear testimony to that fundamental assumption.

The assumption that humans are rational seemed so obvious, so self-evident, that no one in the social sciences had ever meaningfully challenged it. That is, until 1959, when a Johns Hopkins psychologist named Ward Edwards, without overtly challenging the assumption, asked a simple question: How do humans make decisions? Surprisingly, before Edwards, no one really ever had.

The innocent question posed by Edwards stirred something deep within the minds of two Israeli psychologists, Daniel Kahneman and Amos Tversky. When Edwards posed the question in 1959, Kahneman and Tversky were both professors in their mid-twenties at Hebrew University in Jerusalem. Kahneman’s childhood had been tumultuous. As a Jewish boy living in Paris when the war broke out, most of his memories are of his family’s desperate maneuvering to avoid capture by the Nazis. He remembers fleeing, staying in rooms provided by his father’s friends, hiding in barns and chicken coops. He recalls the Nazis pulling men off buses, stripping them naked to see if they were circumcised, and, if they were, killing them. He remembers his father’s death, his mother’s struggle, and that his only friend during those years was imaginary.

Then, suddenly, the war was over. One day, Kahneman recalled, a certain lightness filled the French air. Even so, the trauma of the war had left his mother with a festering unease with regard to Europe. They returned to their ransacked apartment in Paris for a while, but it no longer felt like home. In 1946 she moved her family to Jerusalem to start a new life.

Tversky, on the other hand, was born in Israel. His parents were among the early settlers hoping to build a new Zionist nation after fleeing Russia in the 1920s. Tversky’s childhood was consistent with the struggles of building and defending a nation under constant threat. For young Tversky, his surroundings and the people around him were rich with mystery—he credited his father with instilling in him the gift of curiosity. “[He] taught me to wonder,” Tversky would later write.4 To his father, people and their stories were endlessly fascinating. As he grew up Tversky was sculpted by the growing nation. The fledgling Israeli state was weaving a society that was incredibly tightly knit—neighbors, friends, family, all had to rely on each other unwaveringly. Rather than develop outward into the suburbs as did postwar America, Israel was developing inward. While America was exhaling, Israel was inhaling. The gestating society was built on human connections. Even the architecture’s density funneled people together: in the shops, at the barber, in the cafés, there was constant interaction. One was rarely alone.

A grave obligation came with being an Israeli: Every able-bodied citizen, whether male or female, was mandated to serve in the military, first in active duty and then in the reserves. Kahneman and Tversky were no exceptions. Tversky volunteered to become a paratrooper, and Kahneman was conscripted into duty after graduating from college. Tversky quickly rose to the rank of platoon commander. Both men saw their fair share of combat. Neither man ever questioned his obligation.

In between wars, however, Kahneman and Tversky took off their military uniforms and entered a starkly different world as professors of psychology at Hebrew University. They traveled in different circles, and, although they occupied the same department, somehow maintained discrete orbits that never intersected. “It was the graduate students’ perception that Danny and Amos had some sort of rivalry. They were clearly the stars of the department who somehow or other hadn’t gotten in sync,” said a graduate student.5 The students who knew them both couldn’t help but notice their differences. Kahneman, a traditional psychologist, was shy, racked with self-doubt, and almost pathologically pessimistic—a morning person who kept a messy office. Tversky, a mathematical psychologist, was gregarious, dripping with self-confidence, and extraordinarily optimistic. He was a night owl, and his office was always meticulous.6

They finally discovered each other in the spring of 1969. The question that Edwards had posed—how do people make decisions?—had captivated both men. Meanwhile, Edwards, now at the University of Michigan, was conducting experiments designed to address his question. His experiments showed that, at least when it comes to judging probabilities, people were mostly rational. But this finding didn’t satisfy Kahneman.

Soon after they became acquainted, Kahneman invited Tversky to give a lecture at his graduate seminar, and Tversky gave a sheepish presentation summarizing Edwards’s tepid results, agreeing with his conclusion that humans are rational decision-makers. Edwards’s experiments suggested that human intuition is a decent judge of statistical probability and therefore could make rational predictions about the future. But, to Kahneman, Edwards’s results were overly simplistic, even slightly absurd—they didn’t reveal much, if anything, about how people make real-life decisions. Kahneman, who taught statistics at the university, could see the irrationality in his students. He was very surprised that Tversky appeared to accept Edwards’s conclusion so passively. After the lecture, Kahneman and Tversky found a quiet spot and launched into a substantive discussion. Kahneman challenged the notion that Edwards’s results represented anything meaningful. Tversky listened. The discussion that spring day would set in motion a lifelong partnership largely centered around a shared passion for a single, yet astonishingly complex question—a partnership that would blaze a trail through the social sciences.

Kahneman and Tversky began scheming. They wanted to capture the way people made real-life decisions. They wanted to avoid overly simplistic experiments that were scrubbed clean of the human mind’s messiness, with all its biases, firewalls, false assumptions, associations, and glitches. They sought to uncover the human fallibility that seeps into our minds as we grapple with choices. They devised questions carefully formulated to capture irrational patterns that they then gave to groups of experts and college students. By design, the questions had logical answers but had been deliberately crafted in a way to allow the respondents’ subtle mental biases to creep in.

For example, Kahneman and Tversky read a list of thirty-nine names to a group of students; twenty of the names were traditionally male names, and nineteen traditionally female. Among the nineteen female names, they sprinkled in a few famous ones—Elizabeth Taylor, for example. They then asked the students to report whether the list contained more male or female names. The students overwhelmingly reported hearing more female names than male names. And when they performed the experiment in reverse—listing nineteen male names and twenty female names, with some famous male names interspersed—the students this time overwhelmingly reported hearing more male names. This simple exercise exposed what they would call the availability heuristic: the tendency for people to color their thinking and decision-making by what they can most easily remember, by what is most immediately available for recall.

Kahneman and Tversky showed that people begin to misjudge probabilities based on what has happened before. Everyone knows a flipped coin has a fifty-fifty chance of landing “heads” or “tails” each time it is flipped. But if a coin lands heads a few times in a row, they showed, people begin to assign a higher probability that the next flip will land tails—even though each flip still has exactly a 50 percent chance of landing heads or tails. Instinctually, most people adhere to the “lightning doesn’t strike in the same spot twice” axiom, even though lightning has the exact same probability of striking any single spot each time it strikes, regardless of whether it has struck that spot before.
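The independence the subjects failed to respect is easy to check empirically. A few lines of simulation (a sketch in Python, purely illustrative, not from Kahneman and Tversky’s work) estimate the chance of heads on the flip immediately after a run of three heads:

```python
import random

random.seed(0)

# Estimate the probability that a fair coin lands heads on the flip
# immediately following three heads in a row. If the gambler's fallacy
# were right, this would dip below 0.5; independence says it will not.
trials = 200_000
streaks = 0        # sequences that open with three heads
heads_after = 0    # ...whose fourth flip is also heads

for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(4)]
    if all(flips[:3]):
        streaks += 1
        if flips[3]:
            heads_after += 1

print(heads_after / streaks)  # stays near 0.5
```

However long the streak, the estimate hovers around one half; the coin has no memory of its previous landings.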

Their work was critical in defining what is known as the anchoring bias, or the tendency to place too much arbitrary importance on the first piece of information one is given in a situation. Anchoring biases affect how we perceive the world every day. For example, the price of gas was about $1.85 a gallon in the early 2000s and then began to climb. As the price rose above $2.00 and then neared $3.00, people reported having a negative sentiment toward gas prices. But then the price reversed, eventually settling at around $2.50 a gallon. Now, after being anchored at $3.00, consumer sentiment toward gas prices shifted to the positive—at the exact same price they felt negatively about only a few months earlier.

In a separate experiment a researcher asked an audience to write down the last two digits of their social security number and consider whether they would pay this number of dollars for an item whose value they did not know, such as a bottle of wine, chocolate, or computer equipment. They were then asked to place bids for these items. Invariably, audience members who wrote down higher two-digit numbers submitted bids that were between 60 percent and 120 percent higher than those submitted by members with lower numbers. The higher number, although completely arbitrary, had become their “anchor.”

Another experiment showed how the anchoring bias can even influence our other senses. A researcher at the University of Bordeaux in Talence, France, asked fifty-four oenology (the science of wine) students to describe two wines, one white and one red. The students all tasted both wines and wrote down their descriptions of each. In a second tasting the students were given the same white wine, which had secretly been colored red with a dye that imparted no taste. This time the descriptions the students reported were vastly different. The students reported tasting red-wine characteristics in the dyed white wine. Seeing a red-colored wine had “anchored” a bias into the minds of the students, directly influencing their perception of taste.

The partnership that developed between Kahneman and Tversky was as unique as the work they were generating. “They had a certain style of working, which is they just talked to each other for hour after hour after hour,” said a colleague. Kahneman, of his time with Tversky, mostly remembers the laughter. “We laughed a lot,” he would tell a reporter after Tversky had died. “We could finish each other’s thoughts.” The rate that they churned out paper after paper reflected the growing intensity of their relationship. “We had, jointly, a mind that was better than our separate minds. Our joint mind was very, very good,” said Kahneman.7

They exposed the reality that people are terrible at recognizing when events are random. For example, Londoners during the Second World War were convinced that the 2,419 German V-1 rockets launched at London from the shores of France and the Netherlands were aimed to hit certain parts of the city more than others because certain locations were hit multiple times and others not at all. In other words, the Londoners were sure the bombing was nonrandom. However, subsequent analysis revealed that the unguided German rockets had landed in a perfectly random distribution. This simple study revealed something important: Human minds are wired to find significance in randomness. This is perhaps why patterns leap into our minds as clouds drift by. We are wired to find meaning when it may not be there.
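The clumping that fooled Londoners is exactly what uniform randomness produces, as a quick Monte Carlo sketch shows (the grid size and rocket count below are arbitrary choices for illustration, not the historical figures):

```python
import random
from collections import Counter

random.seed(1)

# Drop rockets uniformly at random onto a grid of city squares and count
# how many squares are never hit and how many are hit more than once.
squares = 24 * 24   # 576 squares
rockets = 540

hits = Counter(random.randrange(squares) for _ in range(rockets))

never_hit = squares - len(hits)
multiple_hits = sum(1 for n in hits.values() if n >= 2)
print(never_hit, multiple_hits)  # many squares empty, many hit repeatedly
```

In a typical run, roughly four in ten squares go untouched while about a quarter take multiple hits, purely by chance; to an eye on the ground, that looks like deliberate targeting.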

Kahneman and Tversky uncloaked one bias and heuristic after the next: availability, anchoring, adjustment, and vividness. Human irrationality was not random, they showed, it was patterned, it was something deeply baked into the human condition—as much a part of us as our internal organs.

The next experiment they performed uncovered a critically important heuristic. They created this scenario for a group of students:

Out of a pool of 100 people, 30 are lawyers and 70 are engineers. A person from the pool is picked at random. What is the likelihood the person is a lawyer?

The students correctly assigned the probability to be 30 percent. But then Kahneman and Tversky introduced a twist. They kept the scenario the same—a pool of 100 people, 30 of whom are lawyers and 70 of whom are engineers—but added the following description to one of the people in the pool:

Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is liked by his colleagues.

They then asked the students to determine the likelihood Dick was a lawyer. This time the students judged that there was an equal chance Dick was a lawyer or an engineer. Merely adding a description had somehow changed the probability from 30 to 50 percent in the students’ minds, even though the description had no information whatsoever that should have affected their answer.8
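The statistically correct reading is plain Bayesian arithmetic: a description that fits lawyers and engineers equally well carries no evidence, so the answer should stay at the 30 percent base rate. A minimal sketch:

```python
# Bayes' rule applied to the lawyer/engineer question. The key
# assumption: Dick's description is equally likely for either group.
prior_lawyer = 0.30
prior_engineer = 0.70

# P(description | lawyer) == P(description | engineer): uninformative.
likelihood = 1.0

posterior = (likelihood * prior_lawyer) / (
    likelihood * prior_lawyer + likelihood * prior_engineer
)
print(posterior)  # 0.3
```

The description should not have moved the needle at all, yet in the students’ minds it dragged the probability to 50 percent.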

Tversky and Kahneman discovered that, as shown in the previous example, an arbitrary description can trigger associations to someone’s own history, and the individual will then assign value to those associations. Any descriptive input sets the brain into an automatic effort to “represent” what the input matches. These connections then come to the surface and begin to color judgment. Tversky and Kahneman coined this cognitive phenomenon the representativeness heuristic. We humans are storytellers, and we run narratives when we are trying to predict outcomes. “The stories about the past are so good that they create an illusion that life is understandable, and they create an illusion that you can predict the future,” said Kahneman.9

The significance of the representativeness heuristic in skewing judgment resulted in a 1973 paper titled “On the Psychology of Prediction.”10 In it Kahneman and Tversky wrote, “Consequently, intuitive predictions are insensitive to the reliability of the evidence or to the prior probability of the outcome, in violation of the logic of statistical prediction.” In other words, even when people know the statistical probability of an outcome, say a 50 percent chance of something being true, they will still let arbitrary information sway their prediction. Worse, Kahneman and Tversky showed that the factors that lead people to be more confident in their prediction are the same factors that cause the prediction to be less accurate.

The 1973 paper put them on the map. After hearing a talk Kahneman gave at Stanford University, one psychology professor commented: “I remember I came home from the talk and told my wife, ‘This is going to win the Nobel Prize in economics.’ I was so absolutely convinced. This was a psychological theory about economic man. I thought, what could be better? Here is why you get all these irrationalities and errors. They come from the inner workings of the human mind.”11

Still, Tversky and Kahneman were unsatisfied. They desperately wanted their work to expand beyond the confines of professional journals and permeate the real world. Their motivation wasn’t fame or fortune but their conviction that their theory could have a huge impact on society at large. What they were exposing cut to the heart of the everyday occurrences that changed people’s lives: judges determining sentences, politicians deciding whether or not to go to war, educators designing reform, and health care providers making countless medical decisions.

During the early 1970s Kahneman and Tversky’s work gained intensity. They narrowed their focus to how real-life decisions are made under the cloud of uncertainty, decisions involving loss and gain, decisions that matter the most in our everyday lives. They presented the following scenario to subjects:

Imagine you are a physician working in an Asian village, and 600 people have come down with a life-threatening disease. Two possible treatments exist. If you choose treatment A, you will save exactly 200 people. If you choose treatment B, there is a one-third chance that you will save all 600 people, and a two-thirds chance you will save no one. Which treatment do you choose, A or B?

Kahneman and Tversky found that most respondents (72 percent) chose treatment A, which saves exactly 200 people. They then presented another scenario to the same subjects:

You are a physician working in an Asian village, and 600 people have come down with a life-threatening disease. Two possible treatments exist. If you choose treatment C, exactly 400 people will die. If you choose treatment D, there is a one-third chance that no one will die, and a two-thirds chance that everyone will die. Which treatment do you choose, C or D?

In this case, they found that most respondents (78 percent) chose treatment D, which offers a one-third chance that no one will die.12

If you compare the two questions carefully, you will notice that they are identical. Treatments A and C are identical, and so are treatments B and D. The only thing that changes is the way the options are presented, or framed, for the subjects. What Kahneman and Tversky were showing was that people evaluate gains and losses differently. Thus, while treatments A and C are quantitatively identical, treatment A is framed as a gain (that is, 200 people are saved) while treatment C is framed as a loss (400 people will die). It appeared that people are more likely to take risks when it comes to losses than gains. In other words, people prefer a “sure thing” when it comes to a potential gain but are willing to take a chance if it involves avoiding a loss.
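In expected value the four treatments are interchangeable, which a few lines of arithmetic confirm (a sketch; the numbers come from the scenarios above):

```python
# Expected number of lives saved under each of the four treatments.
total = 600

saved_a = 200                            # "you will save exactly 200 people"
saved_b = (1 / 3) * total + (2 / 3) * 0  # save all 600, or save no one

saved_c = total - 400                    # "exactly 400 people will die"
saved_d = (1 / 3) * total + (2 / 3) * 0  # no one dies, or everyone dies

print(saved_a, saved_b, saved_c, saved_d)
```

All four work out to 200 expected lives saved; only the framing differs, yet the framing alone flipped the majority choice from A to D.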

These sorts of data exposed a critical feature of real-life decision-making involving perceived risks and rewards. Kahneman and Tversky were finding that people are constantly making internal judgments that have little to do with purely rational statistical assessment. In other words, people incorporate all sorts of arbitrary, internal biases when a decision calls for them to be purely rational. We hate to lose; memories readily surface to remind us of other times we have lost. So just a little nudge—the framing of an option to remind us of the possibility of a loss rather than a gain, even when the odds are the same—will skew our decision. The reason we are more averse to loss than attracted to potential gain, reasoned Kahneman, is deeply embedded in our biology. “This is evolutionary. You would imagine in evolution that threats are more important than opportunities. And so it’s a very general phenomenon that bad things sort of preempt or are stronger than good things in our experience. So loss aversion is a special case of something much broader.”13

A powerful, real-world example of this comes from the way doctors present treatment options to patients. In the 1980s lung-cancer patients were given two choices: surgery or radiation. Surgery had a better chance of extending the patient’s life but also came with a 10 percent risk of death. When presenting the surgical option, if the doctor said, “You have a 90 percent chance of surviving surgery,” the patients opted for surgery over radiation 82 percent of the time. However, if the doctor said, “You have a 10 percent chance of dying from surgery,” the patients opted for surgery over radiation 54 percent of the time. In other words, life-and-death decisions are not determined by a raw assessment of the probabilities alone. They are influenced by arbitrary descriptions—by how the doctor “frames” the options.

The implications of this data were enormous. Tversky and Kahneman summarized their finding in a 1979 publication titled “Prospect Theory: An Analysis of Decision under Risk.”14 The work was a masterpiece, a combination of Tversky’s fierce mathematical logic and Kahneman’s ability to tease out the internal wiring of the human mind. Few, if any, other psychological theories had such far-reaching and important implications. “Prospect theory turned out to be the most significant work we ever did,” said Kahneman.15 Academically, the implications of prospect theory were also profound: If prospect theory was right, economic theory was wrong.

Of course, Tversky and Kahneman never intended to pit their theory against the entire edifice of economic theory. Their intent was innocent and pure: simply to reveal the true nature of man. But like it or not, their theory forced economists to contend with this new image of the human brain. Until the advent of prospect theory, psychology had played little part in the math-heavy field of economics. Anyone looking closely, however, could see something remarkable: the part it did play was foundational, because the whole structure of most economic theory was propped up by a single psychological assumption. For a century, economic theory had been built around the assumption that people are rational. They seek out the best information and act on it sensibly. They continually measure costs and benefits and maximize pleasure and profit. Now two psychologists were claiming that people were not entirely rational, disrupting an entire century of thought. They were claiming that the economic human being was flawed and made errors when given information to act on. What’s more, these errors were not random but systematic: they were predictable, and they were hardwired into us.

In the 1960s an economic theory called the “efficient market hypothesis” was introduced by Eugene Fama at the University of Chicago. Fama claimed that asset prices—stocks and bonds, for example—fully reflect all available information at any moment in time. In other words, the frenzied efforts of legions of professional investors poring over companies’ financials, newspaper articles, and annual reports for every scrap of information they can find—even insider information—are all acted upon rationally by market participants. The result is an efficient market in which, at any moment, the price of an asset reflects its true value. Given that all the information is available to all the experts, all the time, no one can gain an advantage. Much as water spilled from a cup onto the ground will inevitably seep nonpreferentially into every nook and crevice it can find, markets like the stock market inevitably smooth out in a continuum of “true” and “efficient” pricing.

Of course, the efficient market hypothesis, like all of economics, was founded on the assumption that humans are mostly rational and will make rational decisions about asset prices when presented with all the available data. Thousands of people bidding on a piece of art, a car, or a stock will “discover” the asset’s worth. The theory predicted that no one could gain an advantage in any market because there was no advantage to gain.

As the years passed, the two theories existed side by side in an uncomfortable truce, the older economists seemingly content to ignore the implications of Tversky and Kahneman’s prospect theory. The fact that psychology was still considered the backwater of the social sciences at the time made it somewhat easier to ignore. Economics, on the other hand, was the golden child of the social sciences and boasted large departments in prestigious institutions with lots of money. The separation was strange, especially considering that prospect theory was built on the tenet that economics was psychology. By 2010, however, “Prospect Theory” had become the second-most cited paper in all of economics and had given birth to a new field known as “behavioral economics.” Even so, the “hard form” of efficient market theory was still the dominant economic theory taught in every university across the country. “People tried to ignore it,” said one economist of Kahneman and Tversky’s work. “Old economists never change their mind.”16

Indeed, the old guard refused to bend. “The efficient market theory is one of the better models in the sense that it can be taken as true for every purpose I can think of. For investment purposes, there are very few investors that shouldn’t behave as if markets are totally efficient,” said Eugene Fama, who won the Nobel Prize in Economics in 2013 for his work developing the hypothesis.17 “There is no other proposition in economics that has more solid empirical evidence supporting it than the Efficient Market Hypothesis.… In the literature of finance, accounting, and the economics of uncertainty, the Efficient Market Hypothesis is accepted as a fact of life,” said Michael Jensen, the Jesse Isidor Straus Professor of Business Administration, Emeritus, at Harvard University.18

But here’s the rub: They could not both be right. Human beings were either one or the other, rational or systematically irrational.

Who Was Right? “Six Sigma”

The first principle is that you must not fool yourself—and you are the easiest person to fool.

RICHARD FEYNMAN

In the spring of 1995 Charles T. Munger took the podium at his alma mater, Harvard Law School. Munger was there to give a speech. Since his graduation from Harvard a few years after the end of the Second World War, Munger—along with his partner, Warren Buffett—had amassed a fortune at the helm of well-known investment company Berkshire Hathaway.

What Munger and Buffett had accomplished was extraordinary in the history of American business. Starting in 1964—five years after Buffett and Munger first met at the Omaha Club in Omaha, Nebraska—Berkshire Hathaway has averaged a return of 20.9 percent per year, while the S&P 500 index has averaged 9.9 percent per year. Put another way: $10,000 invested with Berkshire in 1964 is worth about $240 million today, as opposed to only $1.5 million if invested in the S&P. And, remarkably, Munger and Buffett accomplished this mostly through buying and selling stocks.
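The gap between those two average returns compounds into the staggering difference quoted above. A rough sketch (the 20.9 and 9.9 percent figures come from the text; the fifty-two-year horizon, 1964 to roughly 2016, is my assumption):

```python
# Compound-growth check of the Berkshire-vs-S&P comparison.

def grow(principal, annual_rate, years):
    """Value of `principal` compounded at `annual_rate` for `years` years."""
    return principal * (1 + annual_rate) ** years

years = 52  # 1964 to roughly 2016 -- an assumed horizon
berkshire = grow(10_000, 0.209, years)
sp500 = grow(10_000, 0.099, years)

print(f"Berkshire: ${berkshire:,.0f}")  # on the order of $200 million
print(f"S&P 500:   ${sp500:,.0f}")      # on the order of $1.4 million
```

Compounded over half a century, an eleven-point difference in annual return separates a modest nest egg from a nine-figure fortune; small changes to the assumed horizon move the totals considerably, which is why the text’s exact figure differs.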

Even more remarkable was the fact that they accomplished this when economists at America’s best universities said it was not possible. According to the efficient market hypothesis, the accomplishment of Berkshire Hathaway was a fluke, nothing more than the product of random chance. What Munger and Buffett had achieved, economists said, could not last—it would average out in the end; it had to. They could not gain an advantage in the stock market because there was no advantage to gain. Yet Munger and Buffett had done the impossible. Year after year, Berkshire Hathaway defied the economists in their ivory towers. They were able to find the imperceptible ripples in the surface of the smooth continuum of “perfect” market valuations and exploit them. How did they do this? What was their secret? The Harvard audience, now sitting in hushed silence before Munger, wanted to know.

Anyone close to Charlie Munger had a deep appreciation for what he had achieved in his lifetime. His accomplishments were not the product of advantage or luck. His life did not track a charmed trajectory right up to the Harvard podium where he now stood. Rather, Munger had had to surmount turmoil and unimaginable tragedy that would take him to the brink of crushing despair and, ultimately, sculpt him into the person he became.

Charlie Munger was born in Omaha, Nebraska, in 1924. His father, Al Munger, the son of a federal judge and a respected lawyer in his own right, had modest expectations for life, yet was “successful in any sense that really matters,” according to Charlie. “He had exactly the marriage and family life that was his highest hope. He had pals he loved and who loved him.… He owned the best hunting dog in Nebraska, which meant a lot to him.”19 The family lived in modest circumstances at the western edge of Omaha. At the time, Omaha was the gateway to the open-ended vastness west of the Missouri River and was being settled by a richly diverse population. Neighborhoods of Germans, Italians, Irish, and Bohemians, together with a packing-house district, made up the town. The diversity didn’t manifest in pockets of isolation; an overriding sense of community enveloped the citizens of Omaha—kids roamed freely, and doors were left unlocked. “There was no crime at all,” said one resident. “There were better behavior standards in school and everywhere else,” recalled Munger. Even among the richly diverse inhabitants of the midwestern town, the Munger family stood out. They had developed a sort of aristocratic, statesmanlike way about them, with high expectations and an emphasis on discipline and learning. Reading, expected in the Munger household, was a proclivity that would stay with Charlie his entire life. His children and grandchildren would often refer to him as “a book with legs.”

As a boy Charlie read across a sweeping range of subjects: science, medicine, and biographies of such great men as Benjamin Franklin, Thomas Jefferson, and Isaac Newton, for example. Reading about their lives wasn’t just of passing interest to Munger; the pages he consumed established serious values for Munger to live by and infused in him the flair of an old-world gentleman—duty, honor, and a strong sense of fair play were instilled as his core operating system at an extraordinarily young age. “I like the idea of filial piety, the idea that there are values that are taught and duties that come naturally,” said Munger.20

Young Charlie’s precociousness extended socially, too. As a child he was naturally drawn to people. “He was always gregarious, friendly, social,” said a neighbor. In Omaha, the Munger family grew close to another neighborhood family, the Davises. Mr. Davis, a surgeon and good friend of Charlie’s father, became friends with Charlie, too. “Dr. Ed Davis was my father’s best friend, and I did something unusual for a person as young as I was—five, eight, twelve, fourteen—I became a friend of my father’s friend. I got along very well with Ed Davis. We understood one another,” said Munger.21 Charlie’s developing personality, with its deep reservoir of knowledge, however, was often perceived by others as arrogance. When an acquaintance claimed prosperity was making Munger pompous, an old law-school friend of Munger’s defended him: “Nonsense, I knew him when he was young and poor; he was always pompous.”22 Yet those who knew him well knew that the arrogance was nothing more than a veneer; at his core Munger was a fiercely loyal friend.

After graduating from high school at the age of seventeen, Munger left Omaha to study mathematics at the University of Michigan in Ann Arbor. Even as a teenager, Munger had developed a unique pattern of thinking. If most people have a tendency to repeat destructive patterns, or at least fail to learn life’s lessons, Munger was the exact opposite. Any lesson that he deemed meaningful was sealed into a mental vault. Once he learned a lesson, it was learned forever. In Ann Arbor, an introductory physics class his freshman year made just such an impression. “For me, it was a total eye-opener,” he would later confess.23 In its search for fundamental physical truths, the field of physics employed a methodology that, by necessity, excluded human bias. For Munger, this was a revelation. It wasn’t physics itself that captivated him so much as the process of doing physics. To reveal nature’s laws, reasoned Munger, was to achieve a kind of exalted purity of thought, to scrub clean the messiness of the human mind.

However, the high-minded academic lessons would come to an abrupt end. Munger was a year into his studies when the Japanese bombed Pearl Harbor. Only a semester into his sophomore year, with battles raging throughout Europe, Munger felt the call of patriotic duty and enlisted in the Army Air Corps a few days after his nineteenth birthday. Like so many young American boys, Munger was suddenly thrust into a future of uncertainty. He recalls a poignant conversation with a fellow soldier during basic training as they lay in a tent somewhere in Utah. Their new reality stripped the conversation of all pretension and led them to reflect on what truly mattered in life. “I want a lot of children, a house with lots of books, and enough money to have freedom,” Munger remembers saying.24

Of course, expectations about the future were best kept modest for an enlisted private in 1942. But Munger was abruptly spared from the battlefield when he was ordered to take the Army General Classification Test. A score of 120 on the test automatically qualified a soldier to be commissioned as an officer. Munger scored 149. He was shipped off to the University of New Mexico and then to the California Institute of Technology in Pasadena to study meteorology. Munger liked California immediately. If being raised in the Midwest had put any sort of noose around his sense of possibility, the electric energy and air of boundless potential in California removed it. At the time, Munger’s sister, Mary, enrolled at nearby Scripps College, was also living in Pasadena. She introduced him to Nancy Huggins, the daughter of a Pasadena shoe-store owner. The spark was instantaneous. Charlie and Nancy’s whirlwind romance, intensified by youth, the intoxicating new environment, and the uncertainty of war, swept them up. At the age of twenty-one, Munger impulsively married Nancy.

Munger now needed a career. This he approached in a more pragmatic manner. He had recognized early on that he lacked the mechanical skills necessary for a profession such as surgery, and he knew instinctively that his skill in mathematics was not on par with the upper ranks. So he settled on his father’s and grandfather’s profession: the law. Using the GI Bill, Munger applied to Harvard Law School, his grandfather’s alma mater. With the help of family connections he nudged his way in. However, the kind of profound revelation he had experienced in classes like freshman physics, Munger realized, was not to be found in law school. “I came to Harvard Law School very poorly educated, with desultory work habits and no college degree,” he said. And, according to Munger, he left not much better off. “I hurried through school,” he said, “I don’t think I’m a fair example [of an ideal education].”25

After Munger graduated from Harvard Law School, he and Nancy and their new son, Teddy, returned to Nancy’s hometown of Pasadena and settled in to start their lives. They chose California over Nebraska because it offered greater opportunity for Munger to realize his lofty ambitions. Los Angeles was undergoing dazzling post-war growth. So, too, was Munger’s career. He became a partner in a successful law practice, joined LA’s exclusive clubs, and began establishing important connections and friendships that would anchor him in the city. His family also grew, with the addition of two daughters, Molly and Wendy.

On the surface the Mungers appeared to be thriving, but below the surface things were not well—his whirlwind marriage was unraveling. In 1953 divorce was considered a disgrace, and Charlie and Nancy knew this. But their incompatibility, the fighting, and the silences had become intolerable to both. They agreed to end the marriage. Nancy stayed in the house, and Munger moved into a room at the University Club that his daughter later described as “dreadful.” She also remembered the beat-up yellow Pontiac he drove. While still reeling from the repercussions of divorce, they received devastating news. Their son Teddy was diagnosed with leukemia—a disease that in the 1950s was 100 percent fatal. Day by day Munger watched his son fade. With no insurance, he was left with mounting medical bills. A friend recalled that Munger would go to the cancer ward of the hospital and hold his son for long stretches. Then, distraught and feeling helpless, he would walk the streets of Pasadena and cry. Still, without fail, he would steel himself to cheerfully pick up his daughters from Nancy’s house every Saturday in his beat-up Pontiac. The stress was almost unbearable. By the time Teddy died, Munger had lost fifteen pounds.

The life he had envisioned as a private lying in a tent now seemed further away than ever. Yet, despite the overriding despair, Munger was able to analyze his situation objectively, learn from it, and file away another of life’s lessons into his mental vault: “You should never, when facing some unbelievable tragedy, let one tragedy increase into two or three through your failure of will.”26

Munger’s daughter, Molly, remembers a lightness, a kind of buoyancy her dad always conveyed, even in the face of pain and difficult circumstances. His optimism and enthusiasm for the future, she remembered, were always bubbling just below the surface. “Now I see he was almost broke. I knew he drove an awful car. But I never thought he was anything but a big success. Why did I think that? He just had this air—everything was to be first class, going to be great. He was going to put a patio on Edgewood Drive. He was going to get a boat for the island. He was going to build a house, build apartments. He had these enthusiasms for his projects and his future—his present. It was not as if you had to deny yourself in the present for the future. The focus was on how interesting things are today, how much fun to see them built. It was so much fun being in the moment. That’s what he always communicated.”27

Daniel Kahneman once said, “When action is needed, optimism, even of the mildly delusional variety, may be a good thing.”28 Indeed, Munger didn’t waste any time indulging in self-pity. His incessant optimism for the future, even if mildly delusional, carried him through the darkest moments. A year after Teddy died he remarried—to another Nancy, also recently divorced. His law practice was flourishing, and, through a series of real-estate development deals, Munger had accumulated $1.4 million. “That was a lot of money at the time,” he recalled. Nancy brought two young boys into the family, Munger had his two girls, and together they had four more children. By his forties Munger had realized his goal: financial independence, a large family, a beautiful home with the constant noise of kids and friends, and stacks of books spilling off numerous bookshelves.

Munger had also developed a unique way of looking at problems—a unique way of looking at life, for that matter. He had developed a habit of approaching problems in the inverse: Instead of asking how he could make a situation better, he would flip it and ask what was making it worse, then work to fix or simply avoid that. “Problems are usually easier to solve if you turn them around in reverse. In other words, if you want to help India, the question you should ask is not: How should I help India? It should be: What’s doing the worst damage in India? And how do I avoid it?” When asked to give an opinion about something, the first thing Munger would do was make a mental list of all the counterarguments to his opinion. Only then would he offer it. In life, he first cataloged the things that would make his life worse—sloth, unreliability, extreme ideology, and a self-serving bias, for example—and then tried to avoid them. Indeed, life had taught him this lesson the hard way. “Anytime you find yourself drifting into self-pity, I don’t care what the cause, your child could be dying of cancer, self-pity is not going to improve the situation.… It’s a ridiculous way to behave,” said Munger. Focusing on how not to make a situation worse, Munger realized, tended almost passively to make it better. His was a systems approach that operated in the negative: a system that didn’t focus on the light but on where the shadows would be cast. “You can say who wants to go through life anticipating trouble? Well I did. All my life I’ve gone through anticipating trouble … and I’ve had a favored life.… It didn’t make me unhappy to anticipate trouble all the time, and be ready to perform adequately if trouble came, it didn’t hurt me at all, in fact it helped me.”29

Munger’s intellectual doppelgänger, strangely, was also from Omaha, Nebraska. That a midwestern town would incubate two of America’s most unique minds seems improbable, yet virtually every mutual friend of Warren Buffett’s and Charlie Munger’s was awestruck by their similarities. In the late 1950s Buffett had visited the Davis home to ask if they would like to put money into a new partnership he was forming. As he gave his pitch Mrs. Davis listened intently, asking many good questions, while Mr. Davis, Charlie’s childhood friend, sat in the corner listening but not saying much. Later Buffett recalled, “When we got all the way through, Dorothy [Mrs. Davis] turned to Eddie and said, ‘What do you think?’ Eddie said, ‘Let’s give him a hundred thousand dollars.’ In a much more polite way, I said, ‘Dr. Davis, you know, I’m delighted to get this money. But you weren’t really paying a lot of attention to me while I was talking. How come you’re doing it?’ And he said, ‘Well, you remind me of Charlie Munger.’ I said, ‘Well, I don’t know who this Charlie Munger is, but I really like him.’”30

Most of us have a moment in our lives that is defining, where everything from that moment on is changed. For Charlie Munger and Warren Buffett that moment was on a Friday in the summer of 1959. Buffett was six years younger than Munger, so they had never crossed paths in school and socialized in different circles. Mutual friends, however, finally insisted that they meet. Munger’s father had recently died, and he was back in Omaha from LA to settle his estate. A meeting was arranged at the Omaha Club, an exclusive private club where the city’s elite came to do business. The arched doors opened, and, dressed more for the big city than for Omaha, Munger walked in and up the curved mahogany staircase to meet the kid with a crew cut he had heard so much about.

Soon after the introductions and usual pleasantries, Munger and Buffett launched into a more substantive discussion. The friends who had arranged the meeting sat listening raptly as the pace of the conversation quickened. It was clear to all that Munger had found his intellectual soul mate. As had Buffett. The relationship was born of their shared love of business, shared madness for boundless conversations, and ambitions that ran wildly beyond the usual for two kids from Omaha. Just as Kahneman and Tversky had found, it was the joy of sharing a single mind.

“Warren, what do you do specifically?” asked Munger, during a lull in the conversation. Buffett explained that he formed partnerships where people put in their money and paid Buffett a fee to manage it. In 1957, explained Buffett, his partnership had made over 10 percent, while the stock market had dropped over 8 percent. The next year the partnership shot up over 40 percent. So far Buffett had made over $80,000 in fees. Munger was entranced by the idea of forming a similar partnership back in LA. “I had a considerable passion to get rich. Not because I wanted Ferraris—I wanted the independence. I desperately wanted it. I thought it was undignified to have to send invoices to other people. I don’t know where I got that notion from, but I had it,” recalled Munger.31

Munger returned to LA, but his new relationship with Buffett continued to evolve. Daily hour-long phone conversations turned into daily two-hour conversations. When Munger’s wife asked why he was paying so much attention to the kid from Omaha, Munger replied, “You don’t understand. That is no ordinary human being.” But the business arrangement between Munger and Buffett was not formalized right away. Rather, it developed over time—discussion after discussion, deal after deal. But something else was happening that went beyond traditional business relationships. In one conversation after the next, Munger and Buffett were developing an intellectual framework for investing. The framework that grew from their time together was more psychological than analytical, steeped in a deep appreciation of human nature—what made people tick—rather than in economic theory. They were pinning down the variables that mattered and identifying those that did not. In doing so they were forced to grapple with their own human nature.

Right away they recognized that one of the most important qualities in an investor is temperament. “Investing is not a game where the guy with the 160 IQ beats the guy with the 130 IQ. Once you have ordinary intelligence, what you need is the temperament to control the urges that get other people into trouble in investing,” said Buffett.32 They were each observing their own mind—its urges, irrationalities, and quirks—and mapping out the ways not to trust it. “How do you learn to be a great investor? First of all, you have to understand your own nature,” said Munger.33

They noticed how others made mistakes. Usually, they observed, mistakes were made by investors not recognizing the difference between what they truly knew and what they believed they knew—wandering into areas outside of their core expertise. To avoid this, they resolved to draw tight borders around their own competence, thus avoiding the pitfalls outside of it. “The game of investing,” said Munger, “is one of making better predictions about the future than other people. How are you going to do that? One way is to limit your tries to areas of competence.”34

As it logged dizzying returns year after year, Berkshire Hathaway became a thorn in the side of the efficient market hypothesis. Because Berkshire’s stunning returns were “impossible,” reasoned the economists, they must be a product of luck. It stands to reason, the economists said, that out of the massive universe of investors, some would rise to the top due to nothing more than sheer luck. A friend of mine who has a PhD in economics from MIT described it this way: “Well, if you have a stadium with 50,000 people in it and ask them all to flip coins repeatedly, a few in the stadium will get heads or tails 15 or 20 times in a row.” This is how economists explained the anomaly of Berkshire Hathaway. Berkshire Hathaway was, in their minds, an accident bound to happen.
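The economists’ stadium analogy is easy to quantify. A rough sketch, assuming each person simply flips a coin fifteen times (my simplification of “flip coins repeatedly”):

```python
# How many of 50,000 coin-flippers should get all heads or all tails
# in 15 straight flips, by chance alone?

people = 50_000
flips = 15

# P(all heads or all tails in `flips` flips) = 2 * (1/2)^flips
p_streak = 2 * (0.5 ** flips)
expected_streaks = people * p_streak

print(f"Expected streak-holders: {expected_streaks:.1f}")  # about 3 people
```

So a handful of fifteen-flip streaks really is expected from luck alone; the economists’ claim was that Berkshire Hathaway was simply one of those lucky streaks.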

During the 1970s, ’80s, and into the ’90s, the casual observer watching Berkshire Hathaway pile up mountains of money was convinced they had some secret algorithm, a highly sophisticated methodology for picking stocks. In truth, their system was shockingly simple. For the most part, it was simply to know their own thought patterns well enough to distinguish what was knowable from what was not—and thus avoid mistakes. “It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent. There must be some wisdom in the folk saying, ‘It’s the strong swimmers that drown,’” said Munger.35

Over the decades Munger and Buffett watched the formation of such companies as Long-Term Capital Management, replete with the brightest mathematicians in the world, who developed impossibly sophisticated algorithms to predict the market, only to fail miserably. The dot-com boom came and went. They listened to the assertions that they had become outdated relics who didn’t understand the “new” internet economy. And yet they continued to beat the market.

“Esoteric Gibberish”

Munger stood before the Harvard audience. It was bigger than he expected. They, too, wanted to know the secret. How had he accomplished what Nobel Prize–winning economists said was impossible? Munger was about to tell them. He swiveled his head as he scanned the audience. And he began: “Although I am very interested in the subject of human misjudgment—and lord knows I’ve created a good bit of it—I don’t think I’ve created my full statistical share, and I think that one of the reasons was I tried to do something about this terrible ignorance I left the Harvard Law School with. When I saw this patterned irrationality, which was so extreme, and I had no theory or anything to deal with it, but I could see that it was extreme, and I could see that it was patterned, I just started to create my own system of psychology, partly by casual reading, but largely from personal experience, and I used that pattern to help me get through life.”36

For the next hour and a quarter Munger rattled through a homespun list of twenty-four cognitive biases he had documented over a lifetime. In long conversations with Buffett he had made dispassionate observations of human behavior—mulling over their own lives, the Great Depression, the Second World War, the irrational market booms and busts, they observed and learned and talked. Like Kahneman and Tversky, Munger and Buffett were students of human nature. Together they saw the absurdity in the pillar of modern economic theory. The efficient market theorists said stock prices were a smooth continuum of perfectly efficient pricing. Kahneman and Tversky claimed that if you looked closely, there were ripples in the continuum; ripples due to the fickleness of human nature. Munger and Buffett had found the ripples—and exploited them.

“Now let’s talk about efficient market theory,” Munger said to the Harvard crowd. “A wonderful economic doctrine that had a long vogue in spite of the experience of Berkshire Hathaway. In fact, one of the economists who won—he shared a Nobel Prize—and as he looked at Berkshire Hathaway year after year, which people would throw in his face as saying maybe the market isn’t quite as efficient as you think, he said, ‘Well, it’s a two-sigma event.’ And then he said we were a three-sigma event. And then he said we were a four-sigma event. And he finally got up to six sigmas—better to add a sigma than change a theory, just because the evidence comes in differently. And, of course, when this share of a Nobel Prize went into money management himself, he sank like a stone.”37
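For context on the escalation Munger describes, the tail probability of a normal distribution shrinks by orders of magnitude with each added sigma. A quick sketch using the standard normal tail formula:

```python
import math

def tail_probability(k):
    """P(a normal variable lands more than k standard deviations above the mean)."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# The escalating explanations for Berkshire's returns: 2, 3, 4, then 6 sigma.
for k in (2, 3, 4, 6):
    print(f"{k}-sigma event: about 1 in {1 / tail_probability(k):,.0f}")
```

By the time the explanation reaches six sigma, the theory is dismissing roughly a one-in-a-billion event rather than revisiting its assumptions—which is Munger’s point.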

In 2016 Berkshire Hathaway had grown to the fourth-largest company by market capitalization in the United States. Only Microsoft, Alphabet, and Apple had larger valuations. “I have a name for people who went to the extreme efficient market theory—which is ‘bonkers,’” said Munger. “It is an intellectually consistent theory that enabled them to do pretty mathematics. So, I understand its seductiveness to people with large mathematical gifts. It just had a difficulty in that the fundamental assumption did not tie properly to reality.”38

In the end Munger and Buffett knew that on any given day the stock market echoes nothing more than the average price of millions of individual transactions. Each transaction reflects mostly how the buyer and seller feel about the price at that moment in time. Buffett and Munger knew instinctively that these transactions were not made after careful fundamental analysis, even by the experts. They recognized that the pillar believed to be supporting the efficient market hypothesis was an illusion. Investors—consumed by a lollapalooza of cognitive discord—could convince themselves of anything. Indeed, investors were a highly irrational group. But it was a patterned irrationality. At Berkshire they installed an internal system. Emotion, greed, fear, hope, panic—every useless emotion that gripped the masses was extirpated from their minds. The system distilled into one simple rule that would gain them a huge advantage over time: Be fearful when others are greedy and be greedy when others are fearful. To Buffett and Munger, financial metrics and equations were less important than simply gauging the collective emotional pulse of the market’s participants. And then running in the opposite direction of the irrational herd.

For the experts and average Americans, however, this seemingly obvious heuristic—the one both Munger and Buffett have been shouting from the rooftops for decades—is almost impossible to practice. There are very few who can resist the combined forces of greed and fear. Indeed, this has been known for a long time. In 1975 Charles Ellis, a consultant to professional money managers, wrote an article titled “The Loser’s Game” that showed that even so-called “expert” money managers failed to beat the market 85 percent of the time.39

It’s even worse today. Over the last fifteen years 92.2 percent of large-cap funds, 95.4 percent of mid-cap funds, and 93.2 percent of small-cap funds, all managed by experts, have failed to beat a humble S&P 500 index fund. In 2017 Warren Buffett estimated conservatively that pension funds, endowments, and wealthy individuals had lost $100 billion over the preceding decade to expert money managers who promised to beat the market. According to Buffett, the root of this problem comes from a cognitive bias. “The wealthy are accustomed to feeling that it is their lot in life to get the best food, schooling, entertainment, housing, plastic surgery, sports tickets, you name it. Their money, they feel, should buy them something superior compared to what the masses receive. The financial ‘elites’—wealthy individuals, pension funds, endowments and the like—have great trouble meekly signing up for a financial product or service that is available as well to people investing only a few thousand dollars.”40 Buffett calls the advice these experts give “esoteric gibberish.”

Here it’s worth pausing for a moment of consideration. Because, as you drive across this country, you can see that the resources bestowed on these “experts” dispensing their “esoteric gibberish” are enormous. They are everywhere: in every city, high-rise tower, and downtown office building. Legions of investment specialists and financial advisors pitch mutual funds, hedge funds, and individual stocks to pension funds, endowments, wealthy individuals, and the general public—clerks, tradespeople, teachers, and other hardworking people. They look the part, too, clothed in expensive suits, sitting in fancy offices with mahogany desks, CNBC playing on the TV in the waiting room. This is serious business, the optics say, and these are serious people. But, just like the baseball scouts Billy Beane fired, these legions of experts have constructed an edifice of fiction—an illusion of wisdom, expertise, and intuition. If you look closely, there is nothing behind it. The numbers don’t lie. Though their “esoteric gibberish” loses to a passive S&P index fund over 90 percent of the time, we are deceived by the millions. It is a massive financial fiction imposed on the real, hardworking people who do produce a net benefit for society. Yet the experts persist. Such is the power of cognitive bias.

Yet, for the average person, bypassing the experts and trying to beat the market alone is an even worse idea. Average investors lose to the market by a huge margin. Over the last thirty years the individual American stock investor has averaged 3.79 percent per year while the S&P 500 has averaged 11.06 percent per year. And our minds have countless ways of deceiving us into continuing our poor performance. Humans are saddled with an overconfidence bias, meaning we consistently believe we are better than we are. We exaggerate our own abilities despite overwhelming evidence to the contrary. In one study, 81 percent of new business owners thought they themselves had a good chance of succeeding, but when asked about their peers gave them only a 39 percent chance.
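The gap between those two averages compounds brutally over time. A quick sketch of the arithmetic (treating the quoted figures as annualized compound returns, which is an assumption, and using a hypothetical $10,000 starting balance):

```python
principal = 10_000
years = 30
investor_rate = 0.0379  # average individual investor, per the figures above
index_rate = 0.1106     # S&P 500 average, per the figures above

# Compound each rate over thirty years
investor_final = principal * (1 + investor_rate) ** years
index_final = principal * (1 + index_rate) ** years

print(f"Average investor: ${investor_final:,.0f}")
print(f"Index fund:       ${index_final:,.0f}")
print(f"Ratio:            {index_final / investor_final:.1f}x")
```

Under these assumptions the average investor turns $10,000 into roughly $30,000, while the index turns it into roughly $230,000—an ending balance about seven to eight times larger.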

One way around all of this is to embrace Munger’s “inverse rule” of problem solving and simply not try to beat the market at all. One cognitive-bias hack gaining popularity among investors is indexing: buying a passive fund that tracks an index like the S&P 500 bypasses the potential for unnecessary loss due to the human mind’s failings. Buffett agrees: “Consistently buy an S&P 500 low-cost index fund, I think it’s the thing that makes the most sense practically all of the time.” Buffett described Jack Bogle, the creator of the index fund, as a “hero.” Investor and author Tony Robbins wrote, “When you own an index fund, you’re also protected against all the downright dumb, mildly misguided or merely unlucky decisions that active fund managers are liable to make.”41 In short, investing in an index fund will save you from yourself and the experts.

How difficult is it to do what Buffett and Munger have done? To beat the market (or the S&P index) consistently, decade after decade? Buffett offers a guess: “There are, of course, some skilled individuals who are highly likely to outperform the S&P over long stretches. In my lifetime, though, I’ve identified—early on—only 10 or so professionals that I expected would accomplish this feat.”42

The “secret” of Berkshire Hathaway’s success was in the title of Munger’s Harvard talk itself: “The Psychology of Human Misjudgment.” He and Buffett stuck to Feynman’s first principle that “You must not fool yourself—and you are the easiest person to fool.” They identified and avoided the myriad cognitive biases inherent in all of us. They learned their own minds. And, as Billy Beane would do years later for the Oakland A’s, they implemented a system that bypassed the most common misjudgments.

While Tversky and Kahneman were reshaping the social sciences during the 1970s, ’80s, and early ’90s, Berkshire Hathaway provided a parallel, real-world example of everything Tversky and Kahneman were saying. Berkshire’s success was a challenge to prevailing belief: the masses, even the experts, are not rational. They are systematically irrational. Berkshire offered seductive proof that the prevailing view was upside down. While mainstream social science had long assumed the broad swath of the population acted rationally, Berkshire was proof that it did not. Perhaps most importantly, Berkshire demonstrated how humanity could do better, be more efficient, and operate with less waste. Other institutions should have been paying attention. Berkshire’s success was an epiphany that implementing systems to counteract cognitive biases could result in dramatically improved outcomes. It seemed obvious to Buffett and Munger that their market competitors were stuck in a cognitive closed loop, as if the biases that prevented them from beating the market also made them incapable of recognizing the flaws in their own logic. Yet if that loop could somehow be opened, almost every societal institution stood to gain: education, finance, government, health care, corporate America, even people’s personal lives. “Making systems work is the great task of my generation of physicians and scientists,” observed surgeon and author Atul Gawande. “But I would go further and say that making systems work, whether in health care, education, climate change, making a pathway out of poverty, is the great task of our generation as a whole. In every field, knowledge has exploded, but it has brought complexity, it has brought specialization. And we’ve come to a place where we have no choice.”43

Indeed, one might imagine that if there were any institution able to identify cognitive biases and systematically build a framework to avoid them it would be the institution of health care. After all, there the stakes are measured not in winning games or piling up money but in human lives. Our health care system, perhaps more than any other enterprise, stands to gain from this approach. Because, in the end, medicine is supposed to be an exercise of removing bias, dispelling human folly, and systematically measuring which pills, procedures, and therapies work and which do not. Yet because it is an institution so irrevocably entangled with our deepest hopes and fears—bound to the essence of what it means to be human—it is fertile ground for cognitive biases to germinate and thrive.