So far we have examined how cognitive biases have created a range of inefficiencies across the institution of health care, and how recognizing the biases and confronting them through the use of clinical trials, careful statistical analysis, repurposed drugs, checklists, and other simple techniques can result in dramatically improved outcomes at a fraction of the cost. But what about the individual? How do cognitive biases creep into and affect our health on a personal level? In other words: What do data reveal regarding our health and well-being that really matters when compared to our own potentially biased viewpoint? We all have internal thoughts and intuitions regarding our health, but how biased are they? Do they serve us well? In the same way that evidence-based analysis can identify the best practice for a given medical procedure, could it also identify which lifestyle factors—diet, exercise, and stress, for example—are most important to our health and well-being?
As we will see, our individual cognitive biases tend to arise out of the same predicaments that afflict doctors when they treat us as patients: the necessity of making day-to-day decisions based on little or flawed knowledge. And, much like statistics have the power to steer doctors away from their flawed decisions, they can guide us, too, directing us toward the things that really matter and away from those that do not. Knowledge is power—the power to transcend our internally hardwired biases and focus on the variables that truly matter in our lives. Furthermore, maintaining our individual health allows us to circumvent the health care system, to bypass the risks of overtreatment, the diagnostic and therapeutic disparities, and the massive costs. Indeed, with regard to our own health, we can be a culture of one. A culture within. A culture that strives to identify and live by the lifestyle variables that will keep us out of the health care system. And, as we will also see, our internal biology is much more flexible and dynamic than scientists have previously appreciated. In other words, recent science has revealed that we have much more control over our own destiny than imagined.
Although we may all define health slightly differently, in general most people define it as the “state of being free of illness or injury.” And anyone who has dealt with illness knows how easy it can be to take health for granted. As seventeenth-century English historian Thomas Fuller wrote, “Health is not valued till sickness comes.”1 Until then, it feels like a right granted to us, our default state. It is all many of us have ever known. When sickness does come, only then do most of us fully appreciate the importance of our health. “It is health that is real wealth and not pieces of gold and silver,” said Gandhi.2
So how do we stay healthy? How do we ward off disease and live happy, productive lives until a ripe old age? One place to look for the answer is to our planet’s oldest residents. As we know, certain cultures revere their old more than others. In many cultures the elderly are treasures, enshrined as the most prized of citizens. In these cultures the old have a wisdom attainable only through experience. They are the threads that weave the culture’s past to its present. Even in America’s youth-obsessed culture, we are fascinated by centenarians, but more for the secrets they must surely hold. Secrets about preserving the precious state of youth for as long as possible, yes, but also secrets about how to live happily in the face of all the misfortune and pain pervading our world. This is a select group, to be sure. Simply to make it into the exclusive club of centenarians is a remarkable feat. In the industrialized world only 1 out of every 6,000 people will live to age 100. People who live to be older than 110—or supercentenarians, as they are called—are extremely rare, with only 1 in 7 million reaching this category. What did they do to keep disease at bay? Indeed, the same question gets posed to this select group in every interview: “What is the secret to a long and happy life?” Here are some of their answers.
In 2015, 104-year-old Elizabeth Sullivan of Texas claimed that the secret to her longevity was Dr Pepper; she had consumed three cans of it a day for the past forty years. In contrast, that same year Misao Okawa (117)—at the time believed to be the world’s oldest living person—said her secret was eating sushi and getting a good night’s sleep. In 2013, Jeralean Talley, a 115-year-old American, said it was hog’s head cheese that had kept her healthy for so long. Earth’s purported oldest resident in 2016, Emma Morano-Martinuzzi of Verbania, Italy (117), said the secret was eating raw eggs and cookies. Richard Overton, America’s oldest veteran of the Second World War at 112, swore by 12 cigars a day and a nip of whiskey in his morning coffee. And 111-year-old Grace Jones of England agreed; she claimed a nightly sip of whiskey and a “worry-free mentality” were key. Then there was Japan’s former oldest living resident, Jiroemon Kimura, who said he lived to be 116 thanks to sunbathing, while Jessie Gallan (109) from Scotland said her longevity was due to “staying away from men” and eating porridge.
On the other hand, 115-year-old Ana Maria Vela Rubio of Spain claimed it was a positive attitude. Koku Istambulova from Chechnya, however—at the time claimed to be the oldest person ever at age 129, though never officially verified—disagreed: She said she had not had a single happy day in her entire life and had no idea how she had managed to live so long. She said her long life had been a “punishment from God.” When asked for her secret to a long life, she responded, “It was God’s will. I did nothing to make it happen. I see people going in for sports, eating something special, keeping themselves fit, but I have no idea how I lived until now.” She continued, “Looking back at my unhappy life, I wish I had died when I was young. I worked all my life.” She said that she “did not have time for rest or entertainment.” She did confess, however, that she avoided meat and liked a “fermented milk drink.” And in 2016 the oldest known person in the United States, Adele Dunlap (113), swore by oatmeal. Both she and her son admitted to being befuddled by her extreme longevity. “She never went out jogging or anything like that. She smoked, and when my father had his first heart attack, they both stopped. I think she ate anything she wanted,” said her son.
So what is the secret? Is it raw eggs, oatmeal, Dr Pepper, or fermented milk? Or could it perhaps be plenty of sleep, a positive (or negative) attitude, being worry-free, drinking whiskey, or eating hog’s head cheese? The massive variation in these answers should come as no surprise. As individuals, we are incapable of knowing; cognitive biases seduce us into believing something has an effect when it most likely does not. Only statistical analysis can approach this question—teasing out the variables that matter from those that do not.
The human mind intrinsically wants to make things simple. It yearns to reduce what keeps us healthy down to as few variables as possible: raw eggs, exercise, or a good night’s sleep, for example. Unfortunately, maintaining our health cannot be reduced this easily. But when epidemiologists analyze the variables that impact longevity, a consistent pattern does emerge, and the pattern forces us to reckon with biological mechanisms that are more complex—and fascinating—than many of us ever imagined.
One question has always lingered at the center of biology: What determines our fate? Is it nature or nurture? The nature side of the question is now well described by the science of genetics—the combined effect of the 23 chromosomes we inherit from our mother and the 23 we inherit from our father. The nurture side of the question, however, has been much harder to pin down. In the past, nurture has typically been described in philosophical or psychological dimensions rather than concrete biological mechanisms.
Only a few decades ago, the vast majority of biologists would have claimed an individual’s fate lay in the genes inherited from his or her mother and father. But recent studies have shown that the genes we inherit play the lesser role in our destiny. Only approximately 20 percent of our life-span is hardwired into us by heritable genetics alone (nature). Conversely, this means that roughly 80 percent of our longevity is determined by lifestyle and chance events (nurture).
So how do such intangible variables of nurture as the experiences of victory, elation, nurturing friends and family—or, by the same token, pain, loss, trauma, and loneliness—become etched into our biology? The answer lies in an exploding new branch of molecular biology called epigenetics. Simply put, epigenetics is the study of why and how individual genes get turned on and off and the resultant effects on our health. Imagine your genes are like a piano. A piano cannot do anything by itself; it takes a pianist to play the piano to create music. Your genes are the same. By themselves, they don’t do anything. But through the process of epigenetics, the functionality of our genes is brought to life. Epigenetics is the music of our genomes. Another way to look at epigenetics: If the genome is the hardware, then the epigenome is the software. “I can load Windows, if I want, on my Mac,” says Joseph Ecker, PhD, a Salk Institute biologist. “You’re going to have the same chip in there, the same genome, but different software.”3
The biological importance of the signals, switches, signposts, and levers that turn on certain genes and turn off others in every human cell—all 40 trillion of them—was first recognized by the English geneticist Conrad Waddington in the first half of the twentieth century, when the gene was still only a dim abstraction. In the winter of 1943, ten years before the structure of DNA was discovered by the Anglo-American duo Watson and Crick, the Nobel Prize–winning quantum physicist Erwin Schrödinger gave the first of three public lectures at Trinity College, Dublin. His topic was a strange one for a physicist: “What Is Life?” In the lecture Schrödinger imagined life, all life, to be reduced to a kind of “code script.” Waddington’s imagination extended even further. He understood that certain traits like eye color or height might be determined by Schrödinger’s “code script” (later determined to be DNA), but he imagined other traits to be more complex, the result of communication among genes, or, put another way, the signals that turn genes on and off. To describe this imagined process Waddington coined the word epigenetics. “Epi” is Greek for “above” or “on.” As Waddington envisioned it—and could only describe metaphorically—epigenetics is the cellular functionality that exists “above” traditional genetics.
History would eventually show that Waddington was far ahead of his time. When Watson and Crick walked into the Eagle Pub on February 28, 1953, and Crick shouted “We’ve discovered the secret of life”—after the two had elucidated the structure of DNA—the double helix was planted in the center of the biological universe. This was Schrödinger’s elusive molecule finally brought to life. The irregular pattern of the paired bases down the center of the helix suggested the “code script” that Schrödinger predicted. Few discoveries have captivated the imagination like the discovery of DNA’s structure. At no time before or since has science revealed a material structure so utterly overflowing with implications—so enormously suggestive.
Yet while the rest of the world was captivated by Watson and Crick’s discovery, Waddington remained unimpressed. “The actual ‘creative process’ by which the 1953 ‘breakthrough’ was achieved does not, however, in my opinion, rank very high as scientific creation goes,” wrote Waddington. “The major discoveries in science consist in finding new ways of looking at a whole group of phenomena. Why did anyone ever come to feel that the structure of DNA was the secret of life? It was the result of a long battle.… Solving a puzzle like this demands a very high intelligence … but this is not the sort of operation that was involved in such major scientific advances as Darwin’s theory of evolution, Einstein’s relativity or Planck’s quantum theory.”4 And while the world was intensely focused on the newly discovered gene, Waddington was already looking beyond it. “DNA plays a role in life rather like that played by the telephone directory in the social life of London: you can’t do anything much without it, but, having it, you need a lot of other things—telephones, wires and so on—as well,” wrote Waddington. Nevertheless, once the article announcing the discovery of DNA was circulated, a new “gene-centric” era of molecular biology was born. Surely, it was widely believed, all the mysteries of our biology would be found in the genetic code. Watson and Crick won the Nobel Prize, and Waddington’s epigenetics was all but forgotten.
But it would not be forgotten for long. At 10:19 a.m. on June 26, 2000, President Bill Clinton approached the podium in the East Room of the White House to announce that a rough draft of the human genome was finished. The inexorable sweep of technology had culminated in the Human Genome Project, a massive effort to read our autobiography, spelling out all of the roughly 3.2 billion letters in our genome. “We are here to celebrate the completion of the first survey of the entire human genome. Without a doubt this is the most important, most wondrous map ever produced by humankind,” said Clinton. Sitting three rows back and slightly to the right of the podium, in a white suit, was James Watson. Three minutes into the speech Clinton looked directly at Watson. “It was not even fifty years ago that a young Englishman named Crick and a brash, even younger American named Watson, first discovered the elegant structure of our genetic code. Dr. Watson, the way you announced your discovery in the journal Nature was one of the great understatements of all time. ‘This structure has novel features which are of considerable biological interest.’” As laughter erupted from the audience, Clinton bowed to Watson, and said, “Thank you, sir!” The laughter turned to applause. This map, Clinton went on, was sure to usher in a revolution in “the diagnosis, prevention, and treatment of most, if not all, human diseases.”5
This is where molecular biology took an interesting turn—a turn that mimicked the shift in perception forced upon astronomy when Copernicus displaced the earth as the center of the universe. The gene, once the center of the biological universe, would also be forced to reckon with a new reality—a reality that would redeem Conrad Waddington. When Clinton announced the rough draft of the human genome, it was believed, as he said, that individual genes were behind “most, if not all” human disease. But as the twenty-first century progressed, a new realization would surface—that there is astonishingly little variation in our genomes. In other words, one person’s genes are not that different from the next, certainly not enough to account for “most, if not all” human disease.
“Our DNA is overwhelmingly identical. Indeed, all the beautiful permutations of the human form—the differences between the tallest and shortest, the brown-eyed and the green-eyed—are explained by just a tiny fraction of those base pairs,” wrote science reporter Brian Resnick. And, “finding the genetic differences that make one person taller or shorter than another is like looking for needles in a haystack.”6 For example, when Clinton made his announcement, Francis Collins, then director of the National Human Genome Research Institute, predicted that there would be a dozen genes involved in diabetes, “and that all of them will be discovered in the next two years.” In the end, it was discovered that hundreds of genes are involved in diabetes, each with a tiny contribution to its development—making the prediction of who will and who won’t acquire the disease wickedly complex. “We’re going to hit a ceiling really quickly,” said Cecile Janssens, an epidemiologist at Emory, about the predictive power of genetic variation. “All the SNPs [genetic variations] that we are discovering now have such a tiny effect.”
If twentieth-century molecular biology was defined by genetics, then twenty-first-century molecular biology was shaping up to be defined by epigenetics. The twenty-first century ushered in the realization that epigenetics, the volume-control dials on each of the 22,000 genes in the human genome—and not genetic variation permanently inscribed into our DNA—is what really accounts for most of human disease. “Epigeneticists, once a subcaste of biologist nudged to the far peripheries of the discipline, now find themselves firmly at its epicenter,” wrote Pulitzer Prize–winning author Siddhartha Mukherjee.7
“Genes are not our destiny,” said Tim Spector, professor of genetics and director of the TwinsUK registry at King’s College London. Spector, perhaps more than anyone, has a unique insight into the role individual genes play in determining our health. When he founded TwinsUK in 1992, Spector appreciated that with identical twins nature had provided the perfect experimental system to help untangle the role individual genes play in complex diseases and traits. Identical twins share the same genetic code in every cell of their bodies. They are genetic clones of each other. Spector realized if he created a registry he would be able to follow thousands of pairs of twins over their lifetime and determine which diseases are 100 percent genetic, and which ones are more complex—an amalgam of genetic and environmental influences.
Pairs of identical twins serve as a proxy for the degree that genetics (inherited DNA) influences a disease process. If a disease is purely genetic—Huntington’s disease, for example—both twins will get it 100 percent of the time. Therefore, Huntington’s has what’s called a concordance rate of 100 percent; it is an exclusively genetic disease. Very few diseases fall into this category, however. Most diseases are a blend of genetic and environmental influences, like schizophrenia. If one twin develops schizophrenia, the other twin has a coin-flip’s chance of developing it. Its concordance rate is 50 percent. Therefore, schizophrenia is revealed to have equal genetic and environmental components. Overall, twin studies have led to estimates that inherited genes determine only 20 percent of our longevity. In other words, your genes hold a minority stake in how long you will live. “Most people when they read the sort of headlines describing twins that have died within hours of each other think this is very much the norm,” said Spector. “But nothing could be further from the truth. The fact is most twins die not within hours, not within months, often not within years. And they do not die of the same diseases. In fact, they don’t even get the same disease usually, even if those diseases are very strongly genetic and very common like diabetes, heart disease, arthritis, etc.”8
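The concordance arithmetic described here is simple enough to sketch in a few lines of code. The following is a toy illustration only; the pair counts are invented for the example and are not data from TwinsUK or any registry:

```python
# Toy illustration of twin-pair concordance: given pairs of identical twins
# in which at least one twin is affected by a disease, what fraction of
# those pairs have BOTH twins affected? All counts below are hypothetical.

def concordance(both_affected: int, one_affected: int) -> float:
    """Pairwise concordance: pairs where both twins are affected,
    divided by all pairs with at least one affected twin."""
    return both_affected / (both_affected + one_affected)

# A purely genetic disease such as Huntington's: every affected pair is concordant.
huntingtons = concordance(both_affected=40, one_affected=0)

# A mixed genetic/environmental disease such as schizophrenia:
# only about half of affected pairs are concordant.
schizophrenia = concordance(both_affected=25, one_affected=25)

print(f"Huntington's concordance:  {huntingtons:.0%}")   # 100%
print(f"Schizophrenia concordance: {schizophrenia:.0%}") # 50%
```

The further a disease's concordance falls below 100 percent in genetically identical twins, the larger the share of the outcome that must be attributed to nurture rather than nature.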
With the realization that much of our health is determined more by nurture than nature, researchers naturally asked, but how? What possible biological mechanisms could record something as effervescent and intangible as nurture? How could things like trauma, triumph, isolation, and love, for example, be unpacked into a biological stamp on our health? When a pair of twins, perfect genetic clones of each other, raised in similar environments for much of their lives, experience a divergence in their health—one develops diabetes, or heart disease, or cancer, for example, and the other does not—what is the cellular mechanism lurking behind the two drastically different outcomes? What cellular process nudged one twin onto a dramatically different course? This is where the science of epigenetics has made a spectacular return. Researchers like Spector have revealed that epigenetics provides a mechanistic explanation for the 80 percent of disease that our genes alone cannot account for. To fully appreciate the majesty of epigenetics, it is critical to understand how it operates at a cellular level.
Here’s where we stop for a brief refresher in molecular biology. Watson and Crick’s discovery of the structure of DNA answered an ancient query that went back as far as Hippocrates: Where were the instructions to build and run a living organism? But the closing of that door immediately flung open another: How is the information contained within DNA translated into action? Crick soon worked out the details: a beautifully orchestrated decoding system that transformed the information contained within a segment of DNA (a gene) into a protein, by way of an intermediate messenger molecule called messenger RNA (mRNA). Proteins are the workhorse molecules of the cell. They provide structure, catalyze thousands of chemical reactions to generate cellular energy, act as signaling molecules (hormones), and form the receptors for other hormones to attach to. They function like intricate circuit boards within the cell, relaying signals from outside the cell and eliciting the designated biological response. Proteins carry out the day-to-day cellular operations. This is how biological information is transferred. This information flow—from gene to mRNA to protein—is the central process of life. Crick famously called this process the “central dogma of biology.”
In essence, epigenetics describes the cellular mechanisms that have the ability to change the rate of this information flow within the cell, the rate at which genes are expressed into their protein products. To manufacture more or less of a protein changes cellular functionality. For example, the epigenetic downregulation in production of a protein that forms insulin receptors reduces the cell’s ability to respond to circulating insulin, potentially leading to the development of type 2 diabetes. Conversely, the epigenetic upregulation of the same protein can increase the cell’s ability to respond to insulin.
Epigenetic modifications comprise a range of unique operations, from stable, long-lasting mechanisms to those that are either more transient and intermediate or extremely dynamic and moment-to-moment. The stable, long-lasting epigenetic mechanism is responsible for the different cell types that make up our tissues and organs. While each of our 40 trillion cells contains the same complement of DNA within its nucleus—23 chromosomes apiece from each parent—differing cell types express dramatically different sets of genes. For example, a nerve cell has the genes responsible for producing stomach acid turned off, while the cells lining the stomach have these genes turned on. Skin cells will have the gene for melanin (the pigment that makes you tan) turned on, and liver cells will have it turned off—both cells contain the same chip, but have different software installed. The epigenetic mechanism that “locks in” the functioning of a particular cell type, as in the examples just given, is a very stable modification called methylation. Methylation is the direct attachment of a methyl group (one carbon atom and three hydrogen atoms) to specific nucleotides within a gene on a strand of DNA. The attached methyl group physically blocks RNA polymerase from transcribing the gene into mRNA, thus preventing the expression of that gene. The gene is permanently turned off.
However, most epigenetic mechanisms are not as stable and long-lived as direct methylation. Others are more adjustable, acting like volume-control dials that allow for the fine-tuning of a gene’s expression. In the late nineteenth century a German biochemist named Albrecht Kossel discovered that chromatin (the combined DNA and protein material found in a cell’s nucleus) contains an interesting repeating protein that is attached to DNA. Kossel called the individual protein subunit a histone. In the decades following their discovery, the function of histones puzzled biologists. Biochemists showed that chromosomes were saturated with histones from end to end, but their purpose remained elusive.
By the middle of the twentieth century the purpose of histones was finally understood: They solved a molecular packaging problem. The human genome consists of about three billion nucleotides, and every nucleus carries two copies of it. Stretched end to end, the 46 chromosomes inside the nucleus reach about two meters in length. Clearly evolution had to come up with a solution to package our chromosomes into the incommodious volume of the nucleus, a compartment measuring only a few millionths of a meter across. The solution was the evolution of histones. Histones are globular, ball-shaped proteins with a long protein tail that protrudes outward. Eight histones will naturally combine to form a structure that looks like eight table tennis balls stuck together in two layers, four on top and four directly under them. Like thread on a spool, DNA neatly wraps twice around each histone octamer to create a structure called a nucleosome. Nucleosomes then condense into a fiber that coils in upon itself again and again to form “supercoiled” loops (imagine a twisted telephone cord repeatedly coiling in on itself). In this manner about two meters of DNA can be neatly wound into a hyper-condensed state that is able to fit inside a nucleus.
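The scale of the packaging problem can be appreciated with some back-of-the-envelope arithmetic, using standard textbook approximations (about 0.34 nanometers of double helix per base pair, a diploid genome of roughly 6.4 billion base pairs, and a nucleus a few micrometers across):

```python
# Back-of-the-envelope arithmetic for the DNA packaging problem.
# All figures are standard textbook approximations.

BP_PER_NUCLEUS = 6.4e9       # diploid genome: two copies of ~3.2 billion base pairs
NM_PER_BP = 0.34             # rise per base pair along the double helix, in nanometers
NUCLEUS_DIAMETER_M = 6e-6    # a typical nucleus is roughly 6 micrometers across

total_length_m = BP_PER_NUCLEUS * NM_PER_BP * 1e-9   # nanometers -> meters
compaction = total_length_m / NUCLEUS_DIAMETER_M

print(f"Total DNA per nucleus: about {total_length_m:.1f} m")
print(f"Folded into a compartment roughly {compaction:,.0f} times smaller")
```

Run as written, this gives a little over two meters of DNA wound into a space several hundred thousand times smaller than its own length, which is the feat histones make possible.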
For a time, despite their intimate association with DNA, histones appeared to be little more than a molecular organizer, a biological hose reel. They were “the biochemical equivalent of nuts, bolts, and bungee cords to keep the all-important genetic molecule in its proper dimensions,” with all the charm, as one researcher put it, “of a brick wall.”9 But a flurry of papers in 1996 by a biochemist named David Allis, then at the University of Rochester in New York, would shatter for good the mundane view of histones—transforming them from “a brick wall” to dynamic regulators of genetic expression.
Working with a single-celled protozoan (pond scum), Allis noticed something curious: When the protozoan ramped up protein production, a specific molecule called an acetyl group was attached to the tails of its histones. Conversely, when it ramped down genetic expression, the acetyl groups were missing from the histones’ tails. Suspecting that he might have discovered a new epigenetic mechanism, Allis began a frenzied search for the control knobs—the enzymes responsible for attaching and detaching the acetyl groups from the histone tails. He soon discovered the protein responsible for the attaching, called a histone acetyltransferase, or HAT for short, and the one responsible for the detaching, called histone deacetylase, or HDAC for short. Together the two proteins could easily be imagined to function as the adjustment dials. HATs penciled in the instructions to turn up the expression of genes, and HDACs served as the erasers, turning down expression. “It couldn’t have been a more wonderful one-two punch. I am not sure the chromatin field has been the same since,” said Allis. “It wasn’t rocket science to figure out this enzyme pair of reactions might function as an on-off switch.… You couldn’t turn your back on what these findings were saying. Most people thought chromatin [histones] was just a passive platform that wraps DNA. But those two papers made people think about a more active process in which chromatin truly participates.”10
By the mid-1990s researchers were waking up to the importance of epigenetics. The recognition that there were mechanisms lurking within our genomes that acted “upon” our genes had unveiled a new image of the genome. The flat, binary, two-dimensional image of our genes was morphing into that of a rich, three-dimensional, dynamic, and flexible genome. Direct methylation, the stable, long-lived epigenetic modification, installed the operating code that imparted the unique functionality of different cell types. Histone acetylation provided a more flexible mechanism; it was a fine-tuning epigenetic modification that permitted more variability in the expression of our genes. And finally, a class of epigenetic modulators called transcription factors would reveal the genome to be an extraordinarily dynamic mosh pit of activity.
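These three layers can be caricatured in a few lines of code. The sketch below is a deliberate oversimplification, not a biochemical model: the gene names, the numbers, and the linear "expression" formula are all invented for illustration.

```python
# A deliberately cartoonish model of the three epigenetic layers described
# above: stable methylation (a permanent off switch), histone acetylation
# (a volume-control dial), and transcription factors (moment-to-moment
# triggers). Names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class Gene:
    name: str
    methylated: bool = False   # stable: locks the gene off for this cell type
    acetylation: float = 1.0   # tunable: scales expression up or down
    bound_factors: int = 0     # dynamic: transcription factors currently bound

    def expression(self) -> float:
        """Relative rate at which this gene is transcribed into mRNA."""
        if self.methylated:
            return 0.0         # RNA polymerase is physically blocked
        return self.acetylation * (1 + self.bound_factors)

# In a liver cell, the melanin gene is methylated (silenced for good),
# while a fat-synthesis gene stays tunable and responsive.
melanin = Gene("melanin", methylated=True)
lipogenesis_gene = Gene("lipogenesis_enzyme", acetylation=0.8)

# After a starchy meal, transcription factors (e.g. LXR, SREBP-1c, ChREBP)
# bind the gene's regulatory region and ramp up its expression.
lipogenesis_gene.bound_factors = 3

print(melanin.expression())           # 0.0: permanently off in this cell
print(lipogenesis_gene.expression())  # dialed up well above baseline
```

The point of the toy model is only the hierarchy: methylation overrides everything, acetylation sets a baseline volume, and transcription factors move that volume from moment to moment.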
Indeed, in terms of life’s dynamic, day-to-day genetic operations, transcription factors play the lead role. “Transcription factors occupy the top of the hierarchy of epigenetic information,” as one researcher put it. The importance of transcription factors is revealed in their sheer quantity. Our genomes are littered with transcription factors. Of the 22,000 genes within the human genome, approximately 2,000, or 10 percent, encode transcription factors.
In our adult bodies, the epigenetic chatter of transcription factors that occurs every day, hour, even second, is staggering. Receptors on the cell surface are constantly assessing the internal physiological environment and can respond to subtle changes via transcription factors. An individual’s routine activities—taking a walk, eating, or experiencing an emotion—can elicit a complex epigenetic response. And epigenetic responses are far from mutually exclusive; each affects the other in a network of overlapping functionality.
To illustrate the epigenetic signaling from transcription factors in real time, imagine two individuals having lunch. One eats a meal of steak from a feedlot (grain-fed) cow and French fries, and then chooses to watch TV. The other has wild salmon and fresh vegetables and then decides to go for a walk on a sunny day. Our first subject has now melted into the couch and is taking in intermittent segments of a show as he drifts in and out of sleep. The chains of carbohydrate (starch) in the French fries have been broken down into glucose in his small intestine, and the concentration of glucose is rising rapidly in his bloodstream. This triggers the release of insulin from his pancreatic beta cells. The combination of rising blood sugar and insulin release activates three important transcription factors: liver X receptor (LXR), sterol regulatory element-binding protein 1c (SREBP-1c), and carbohydrate response element-binding protein (ChREBP).
From a purely mechanistic standpoint, transcription factors are epigenetic modulators in the most real sense, literally exerting their effect “on top of” DNA, as Waddington had prophetically imagined it. The three transcription factors bind to designated sites within the DNA called regulatory regions, each factor docking onto a precisely matching regulatory site, where the nucleotides within the grooves of DNA’s helical structure fit the molecular architecture of the transcription factor’s surface like a lock and key. Now, each bound to its designated regulatory site within the genome, the three exert their effect. The regulatory sites, as their name implies, are concentrated in the “promoter” regions of the genes that produce the proteins responsible for setting in motion a series of metabolic processes required to turn the excess blood sugar into fat. The process, termed lipogenesis, takes the excess glucose and, through a series of steps, converts it into four fatty acids: palmitate, stearate, palmitoleate, and oleate. These fatty acids are then incorporated into triglycerides for transport. The triglycerides are then further packaged into fatty particles called very low-density lipoproteins (VLDL) and secreted into the bloodstream. Once in the capillaries of adipose tissue (fat tissue) and muscle tissue, the VLDL particles are broken down and can be used in two ways. If our subject were active, the broken-down fat would be taken up preferentially by the muscle tissue and burned for energy in a process called beta oxidation. However, because he’s not active, the majority of the VLDL particles are broken down and repackaged into triglycerides within the adipose cells for long-term energy storage (fat).
Conversely, consider our subject who ate salmon and vegetables for lunch. After lunch she notices it’s a nice day and decides to go for a walk. The sun begins to warm her skin. Some of the photons penetrate past the skin’s outer layers and interact with a cholesterol-based provitamin stored in her skin cells. The energy from the photons converts the provitamin to vitamin D3. The vitamin is then quickly converted to its active form in the liver and kidneys. Once active, the vitamin diffuses into the nuclei of cells and binds with its receptor. The pair then binds to another protein, the retinoid X receptor. The entire complex now acts as a transcription factor and will bind to small sequences of DNA called vitamin D response elements (VDREs), thousands of which have been identified throughout the genome.
As our subject walks, enjoying the warmth of the sun on her skin, hundreds of genes flicker to life, some affecting a range of biological processes that are not immediately obvious: increased calcium absorption, bone development, immune system modulation, and resistance to certain infectious diseases, for example. Invigorated by the warmth of the sun and the fresh air, she walks faster. Her muscles begin to strain. The proteins from the salmon are digested into individual amino acids. Her straining muscles need fuel, so they absorb glucose and a certain class of amino acid called branched-chain amino acids. The selective displacement of branched-chain amino acids from her bloodstream results in a higher circulating proportion of the amino acid tryptophan. This is significant because only one type of transport protein is capable of shuttling tryptophan through the blood-brain barrier, and branched-chain amino acids strongly outcompete tryptophan for access to that carrier—like thousands of cars trying to cross a single-lane bridge at the same time. But because she is rapidly burning the branched-chain amino acids, tryptophan is able to diffuse freely into her brain.
She has walked for 10 minutes, and with the sun still spilling onto her skin she’s made 20,000 IUs of vitamin D. Her liver and kidneys have converted much of it to the active form. The activated vitamin D has then gone on to form transcription factors within the nuclei of cells that possess a vitamin D receptor, or vitamin D competent cells, in the words of Waddington. Ricocheting within the nucleus, the transcription factors eventually bind to VDREs, finally eliciting the upregulation of a vast spectrum of genes.
One of the genes that is turned on codes for the enzyme tryptophan hydroxylase 2 (TPH2). TPH2 initiates the conversion of the tryptophan now flooding our subject’s brain into the neurotransmitter serotonin, and the concentration of serotonin begins to build within her presynaptic neurons.
Whether serotonin will be released into the neural synapse, where it exerts its effect, is influenced by several factors. One is the proportion of certain fatty acids in the bloodstream: its release is inhibited by the presence of specific omega-6 fatty acids and facilitated by the presence of the marine omega-3 fatty acid eicosapentaenoic acid (EPA). Recall that our subject dined on salmon, which is rich in EPA. Another factor is the fluidity of the neural membrane, which determines the mobility of the serotonin receptors that span it, floating unanchored like inner tubes on the surface of a lake. Docosahexaenoic acid (DHA), another omega-3 fatty acid, is crucial to neural membrane fluidity; the more DHA in a membrane, the more fluid the membrane is. DHA is the most abundant fatty acid in the brain, making up 30 percent of its fatty acid content. Serotonin receptors in DHA-rich membranes are supple, less rigid, and bind serotonin more easily—much like a well-oiled, pliable baseball glove catches a ball more easily. Salmon is also rich in DHA.
Between the sunlight, the exercise, and the composition of her lunch, an epigenetic chain reaction is set in motion: Sunlight triggers vitamin D synthesis, resulting in the transcription of the serotonin-synthesizing enzyme TPH2; exercise allows tryptophan from the digested salmon to enter the brain; and the fatty acids in the salmon help facilitate the release and uptake of the newly manufactured serotonin.
The edges of her thoughts soften. The anxious, subconscious shadows brighten. She feels a certain weightlessness. The singing of birds and the colors around her seem more vivid. Without her even realizing it, she thinks happier, more optimistic thoughts. Numerous studies have linked positive emotions to serotonin levels. Serotonin concentrates in the discrete brain regions known to regulate social cognition and decision-making. Collectively, these regions are called “the social brain.” The importance of serotonin’s effect on the social brain can be demonstrated by a simple experiment in which subjects are given boluses of branched-chain amino acids. The extremely high concentrations of branched-chain amino acids completely saturate the blood-brain barrier transport protein, preventing tryptophan from entering the brain, and the subjects’ brain serotonin levels plummet. The behavioral changes are immediate and obvious. The subjects become more impulsive and aggressive, and they experience impaired learning and memory. They are unable to resist short-term gratification and have difficulty with long-term planning. The results are thought-provoking, especially considering that approximately 70 percent of the world’s population has inadequate levels of vitamin D and equally inadequate intake of marine omega-3 fatty acids. On a graver scale, insufficient levels of vitamin D, EPA, or DHA, in combination with certain genetic factors, can lead to dysfunctional serotonin activation at key developmental stages in growing children. This confluence of circumstances very likely contributes to serious neuropsychiatric disorders and depression.
Serotonin’s ability to flush out negative thoughts, encourage positive ones, and produce a general sense of well-being sets off a cascade of physiological responses that improve one’s overall health. And serotonin’s influence goes beyond the occasional good mood: cast over a lifetime, negative emotions can exert a powerful effect on longevity. In a classic study, autobiographies written by 22-year-olds were ranked from high to low for positive emotional content. The authors whose autobiographies ranked in the lowest quartile for positive emotions died on average 10 years earlier than those in the highest quartile.11
It is apparent that seemingly minor decisions, what to eat, how to interact with our environment, even how we choose to perceive the world—all the thoughts and memories that swarm through our minds—have profound effects on our epigenomes through the activation of transcription factors. The inputs are calculated every day. Genes are turned on and turned off moment to moment. The output of this vast and fluctuating algorithm influences our moods, colors our thinking, and governs the innermost workings of our bodies on a molecular level. It influences how active we are, how compulsive, thoughtful, enthusiastic, rational, reclusive, and creative. It affects our abilities, our health and vitality, our susceptibility to specific diseases and disorders, and how we interpret and respond to the world. “Nature,” wrote Conrad Waddington, “is more like an artist than an engineer.” Indeed, our bodies are unlike any machine. Engineers match form to function, but nature does the same in a way that is so artful in its flexibility, so staggering in its complexity, that it defies the imagination.
This brings us back to where we left off. First, with our new appreciation of epigenetics, let’s return to the question left unanswered: Why do identical twins, perfect genetic clones, usually die at different times from different diseases? The answer is that most of our health, approximately 80 percent, is dictated by the events in our lives that fall into the murky category of nurture, from random events like trauma and heartbreak to the more tangible—diet, social life, and exercise. Only around 20 percent of our health is predetermined by our inherited genes.
In the case of identical twins, the divergence in health due to random events, astonishingly, begins in utero. Monozygotic twins begin life the product of perfect symmetry: a single egg splitting into two identical halves. But soon after, the random messiness of life begins. When the placenta emerges to feed the growing identical twins, it can take two forms: a single placenta that feeds both twins, or two separate placentas, one for each twin. If a single placenta forms, an asymmetry can be introduced, with one twin receiving a slightly better exchange of blood and, by extension, a better delivery of nutrients: vitamins, minerals, amino acids, fats, and growth factors, for example.
This asymmetry of nutrient delivery can have consequences. When life begins, a sperm fertilizes an egg to form a single-cell zygote. At that point a group of specialized enzymes wipes our operating system (software) clean by removing all the methyl groups from the newly formed zygote’s DNA (remarkably, this process also rewinds the biological age of the zygote to zero). The cell-by-cell installation of the new epigenetic operating system then occurs as the embryo develops. Each new cell establishes the proper methylation pattern so that the right genes are expressed to impart the critical functionality of whatever tissue the cell is forming—liver cells express liver-associated genes, muscle cells express muscle-associated genes, and so on. It is easy to imagine the effect of varying nutritional input on this critically important process. When researchers use extremely sensitive assays to measure the methylation patterns of cells from identical twins sharing a placenta, they can detect a difference, most likely due to the asymmetrical distribution of nutrients. Not yet born, the twins’ health has already begun to diverge—with each twin expressing their identical array of genes slightly differently.
Once set in motion, the epigenetic divergence of the erstwhile perfectly identical twins accelerates. When researchers track the methylation patterns of twins throughout their lives, a clear pattern emerges: The longer they are apart—leading different lives and collecting different experiences—the wider the divergence in their epigenomes. The “nurture” of their separate lives is sculpted directly “onto” their DNA. How does this divergence then manifest into disease propensity? Let’s look at an example.
Imagine the tragic scenario in which one identical twin develops childhood leukemia and the other does not. When researchers analyze both twins’ genomes, they find a striking difference: the infamous BRCA1 gene is hypermethylated in the twin with leukemia and appears normal in the twin who is healthy. When we hear of BRCA1, it is usually in connection with breast or ovarian cancer, and usually in its inherited, mutated form. Angelina Jolie brought attention to BRCA1 in her New York Times article “My Medical Choice,” which highlighted her decision to have a preventive double mastectomy to change the odds handed to her by the BRCA1 mutation she had inherited from her mother.12
However, in some cases of childhood leukemia and thyroid cancer, the individual who develops cancer did not inherit a mutated version of the gene, as Jolie had; rather, for unknown reasons, the gene has been epigenetically silenced by direct methylation. Functionally, the turning off of the inherited gene by direct methylation is essentially the same thing as inheriting a mutated version—in both cases, the BRCA1 protein (the product of the BRCA1 gene) cannot perform its designated biological task (BRCA1 helps to repair DNA damage and is involved in mitochondrial biogenesis). To date, researchers have found many divergent epigenetic modifications that are involved in a wide spectrum of diseases, including Alzheimer’s, autism, bipolar disorder, autoimmune diseases, mental illness, and a variety of cancers.
Although the conditions, events, or behaviors that lead to the hypermethylation of the BRCA1 gene in some children and not others remain a mystery, researchers have been able to identify certain variables on the nurture side of the equation that drive epigenetic changes capable of dramatically affecting our health. What are these important “nurture variables”? The answer may surprise you.
If you stood on a street corner and asked passersby the question posed at the beginning of this chapter (what “nurture variables,” or lifestyle factors, determine how long someone will live?), you would likely get the same wide-ranging answers that the supercentenarians gave, everything from whiskey to hog’s head cheese. But if you ask enough people, a consistent set of common answers will emerge: genetics, diet, exercise, habits (smoking versus not smoking, for instance), and stress. I know because I’ve done this. Again thanks to twin studies, we know that the variables outside of genetics (nurture) account for 80 percent of our longevity. How much do these lifestyle factors matter? Recent research has shown that, although all the usual suspects—diet, exercise, smoking, and stress—do matter, there is another, perhaps underappreciated, factor that vastly eclipses the others in importance: your social life.
Julianne Holt-Lunstad, a professor of psychology at Brigham Young University in Utah, has asked the same question: Which lifestyle factors matter the most to our longevity? To find the answer, she and her colleagues measured the effects of a variety of lifestyle factors in over four million people. The study revealed something remarkable about our biology. The top two factors for reducing the likelihood that someone will die early are features of their social life: strong connections, or close friends, and a factor called social integration—the amount of social interaction one has while moving through the day: chatting with a passing neighbor or with people at the gym, or belonging to a club, for example. According to the study, having close personal friends and strong social integration are by far the most consistent and powerful predictors of how long someone will live.
Indeed, Holt-Lunstad’s research is clear: perceived loneliness is extraordinarily corrosive to our health—negatively impacting one’s health more than smoking over 15 cigarettes a day. Being lonely translates to a 50 percent greater risk of early death compared to those with a robust social life. By comparison, being obese raises the chance of dying before the age of 70 by around 30 percent. And, Holt-Lunstad’s research shows, moving through life with a rich social network is twice as important as exercise and diet.
Similar studies have revealed an astonishingly tight connection between our social lives and our health. A 2006 University of California study tracked almost 3,000 women with breast cancer to see if there was a correlation between the richness of their social networks and their survival. The result was dramatic: the women with fewer friends were four times as likely to die from the disease as women with more robust social connections.13 Other studies affirmed the association. For example, a study performed by psychologist Martha McClintock at the University of Chicago vividly illustrated the connection between loneliness and cancer development in rats. McClintock isolated one group of 20 rats by putting each rat in a cage by itself, and divided a second group of 20 rats into four cages, each containing five rats. All 40 rats were genetically prone to mammary cancer. The isolated rats exhibited a 135 percent increase in the number of tumors over the grouped rats, and an 8,391 percent increase in the size of tumors. They exhibited anxious, nervous behavior and ultimately died sooner than the grouped rats. The impact of chronic loneliness on longevity is profound: people with more robust networks of friends are likely to live an average of fifteen years longer than lonely people.
But how does loneliness so drastically affect our health? How can something intangible, like social interaction, affect our health more drastically than tangible factors such as exercise, diet, air pollution, obesity, or smoking? The answer is a biological loop that knits the neurological inputs generated from interacting with people (perceptions) into a deeply complex cellular response that occurs at the level of the epigenome. The study of this endlessly complex and fascinating looping system has been termed social genomics—an emerging field that centers on how human interaction, or the lack thereof, ultimately affects our health by changing the expression of certain genes.
In the past researchers have tried to understand behavior primarily from the inside out—linking hormones, stress factors, and neurotransmitters to behavioral patterns. Perhaps, says McClintock, this has been the problem all along: “If you look at the journals in my field, 90 percent of the articles look at the effects of physiological, neural, and hormonal systems on behavior, and 10 percent look at the effects of behavior on hormones and the nervous system,” she said. “I don’t think a balance of 90–10 is an accurate reflection of how nature works.”14 Maybe, as McClintock suggests, researchers have gotten it backward. Maybe the more interesting and dramatic effects on our bodies come from the fuzzy interface of our own perception—face-to-face interactions stimulating the brain to produce effects that cascade through neural networks and penetrate to the level of our DNA through epigenetics.
One of the first studies to show how loneliness penetrates to our core biology came in 2007. The researchers analyzed genes in the white blood cells (immune cells) of healthy older adults who reported different levels of social connectedness. Among the 22,283 genes assayed, 209 were expressed differently in the cells of people who had consistently reported feeling lonely and distant over the course of four years compared to the cells of those who reported feeling less lonely. When the researchers analyzed the specific genes that were expressed differently, a striking pattern emerged. “These effects did not involve a random smattering of all human genes, but focally impacted three specific groups of genes. Genes supporting the early ‘accelerator’ phase of the immune response—inflammation—were selectively upregulated. However, two groups of genes involved in the subsequent ‘steering’ of immune responses were down-regulated: genes involved in responses to viral infections, and genes involved in the production of antibodies by B lymphocytes.” In other words, the lonely subjects’ immune systems were more prone to deleterious inflammatory responses and less able to mount a targeted response to infection. The authors of the study went on, “These results provided a molecular framework for understanding why socially isolated individuals show heightened vulnerability to inflammation-driven cardiovascular diseases (i.e., excessive non-specific immune activity) and impaired responses to viral infections and vaccines (i.e., insufficient immune responses to specific pathogens). A major clue about the psychological pathways mediating these effects came from the observation that differential gene expression profiles were most strongly linked to a person’s subjective sense of isolation, rather than their objective number of social contacts.”15
That last sentence deserves a closer look—the “person’s subjective sense of isolation, rather than their objective number of social contacts.” In other words, loneliness isn’t an absolute condition. Rather, it is defined by an individual’s need. As with sleep or caloric requirements, individuals have different social requirements. One person might feel well-rested after six hours of sleep, for example, while another needs nine. It is the same with loneliness: introverts need fewer friends and social interactions throughout the day to feel “not lonely,” whereas extroverts may require much more interaction.
At the very least, the importance of our social lives to our individual health is vastly underappreciated. “There is robust evidence that social isolation and loneliness significantly increase risk for premature mortality, and the magnitude of the risk exceeds that of many leading health indicators,” wrote Holt-Lunstad.16 Loneliness is a health crisis. Obesity, especially in America, tends to get stamped as public health enemy number one. But the results of Holt-Lunstad’s research are clear: As a health risk, loneliness eclipses obesity. “Being connected to others socially is widely considered a fundamental human need—crucial to both well-being and survival. Extreme examples show infants in custodial care who lack human contact fail to thrive and often die, and indeed, social isolation or solitary confinement has been used as a form of punishment. Yet an increasing portion of the US population now experiences isolation regularly.”
According to a study conducted by the AARP Foundation, approximately 47.8 million adults over age 45 in the United States are estimated to be suffering from chronic loneliness.17 The most recent US census data show that more than half of the population is unmarried and more than a quarter of the population now lives alone. And the trend continues. Marriage rates and the number of children per household have continued to decline. “With an increasing aging population, the effect on public health is only anticipated to increase. Indeed, many nations around the world now suggest we are facing a ‘loneliness epidemic.’ The challenge we face now is what can be done about it,” said Holt-Lunstad.
There are places, however, where there is no “loneliness epidemic,” places where societies are structured around human interaction. When epidemiologists look at the so-called blue zones, regions where the residents live far longer than average, statistical analysis and observation reveal a consistent pattern of social cohesion. Sardinia, a remote, mountainous Italian island in the Mediterranean, is one such place. Six times as many centenarians live on Sardinia as on the Italian mainland, and ten times as many as in North America. And, strangely, the Sardinian men live as long as the women, in contrast to the rest of the industrialized world, where women live on average seven years longer than men. Susan Pinker, author of The Village Effect, visited this region to see what made it unique. “Architectural beauty is not its main virtue; density is. Tightly spaced houses, interwoven alleys and streets, it means that the villagers’ lives constantly intersect,” wrote Pinker. “Wherever I went to interview these centenarians I found a kitchen party. I quickly discovered by being there in this ‘blue zone’ that as people age, and across their life-spans, they are always surrounded—by extended family, by friends, by neighbors, the priest, the barkeeper, the grocer. People are always there or dropping by; they are never left to live solitary lives.”18
Perhaps the best way to close this section is by looking at the person with the longest life-span ever recorded: Jeanne Louise Calment, of France. Calment was born in 1875, three years before Halsted arrived in Vienna for his European tour and a year before Alexander Graham Bell patented the telephone. She died in 1997 at the age of 122. One might imagine that to live that long—far longer than most gerontologists thought was even possible—Calment must have had “perfect” genetics coupled with a “perfect” lifestyle. Yet nothing about her family could predict Calment’s extreme longevity. Members of her immediate family did live longer than average, but not extraordinarily so: her brother lived to the age of 97, her father to 93, and her mother to 86.
At the age of 21 Calment married her second cousin, a wealthy shop owner, and consequently never had to work. She and her husband lived a life of active leisure. She enjoyed fencing, mountaineering, bicycling, roller-skating, playing the piano, painting, attending the opera, tennis, and swimming, and even tagged along on her husband’s hunting parties. “I had fun; I am having fun,” said Calment, looking back on her life. Calment’s husband introduced her to smoking after dinner when she was 21. She smoked one or two cigarettes a day until she was 117. She enjoyed port wine, of which she usually had a glass or two per day. And she also enjoyed sweets, especially chocolate, of which she reportedly ate two pounds per week. According to those who knew Calment, her most distinguishing feature seemed to be her steadiness. She never appeared stressed about anything. “I think she was someone who, constitutionally and biologically speaking, was immune to stress,” said a friend. “If you can’t do anything about it, don’t worry about it,” she once quipped. When Calment turned 100, she rode her bike from house to house in her hometown of Arles, France, to thank everyone who had congratulated her on her birthday. She reluctantly entered a nursing home at the age of 110. Calment claimed she didn’t care much for socializing. “I didn’t enjoy visiting, I didn’t like the fashionable world, but I loved being out in the fresh air.” But those observing her in the nursing home suggested otherwise. There she befriended one of the nurses, who smoked a French brand of cigarette that was dark and particularly strong. Calment reportedly switched to the nurse’s brand and often joined her new friend in the evening for a smoke. One evening Calment fell on the stairs going up to the nurse’s room and broke her hip. After undergoing an operation to repair the hip, she was warned that, due to her age, she might not be able to walk again. “I’ll wait, I’ve got plenty of time,” she replied.
Within a few days she was able to get out of bed and even stand. However, the injury left her mostly dependent on a wheelchair. Most days, after an afternoon nap, Calment enjoyed going to other rooms and talking with the other residents about the current events she had learned about that day. She remained witty to the end, often sparring and joking with the reporters who interviewed her. “I’ve never had but one wrinkle, and I’m sitting on it,” she liked to joke. In none of the articles I could find about Calment did she ever confess to being lonely. Although she didn’t seem to have a voracious requirement for social connection, she had more than enough throughout her life. She expressed mostly gratitude for her long life. At the age of 120 she said, “I dream, I think, I go over my life. I never get bored.”
All told, there is certainly nothing remarkable about Calment’s lifestyle. She smoked sparingly, drank a little bit, enjoyed good food, and ate desserts. She didn’t worry much. She was active and social, never professing to feeling lonely. Perhaps the way she lived her life is a good lesson for us all. Her life reflects the science behind longevity. Genetics, clean air, exercise, diet, and avoiding bad habits do matter, but perhaps not to the extent that many of us believe. What matters the most is being connected, fulfilled, and engaged in the world. It’s beautiful, in a way, that the science of health and longevity also serves as a guidepost for a rich and meaningful life. Don’t worry so much. Have an occasional dessert or drink if you want to. Be active. Play. Be moderate. Engage with the world and the people around you. Not only will you live longer, but you will live better.
Cognitive Traps and the Pursuit of Happiness
By the 1980s Tversky and Kahneman’s work began to receive a lot of attention. The implications of prospect theory infiltrated nearly every institution, corporation, and academic field in the United States and beyond, and people were noticing. But the growing attention was lopsided. In 1984 Tversky received a phone call notifying him that he had won a MacArthur “genius” grant, an award that came with a quarter of a million dollars. “He was pissed,” said a friend of Tversky. “What are these people thinking? How can they give a prize to just one of a winning pair? Do they not realize they are dealing the collaboration a death blow?” And it wasn’t just the MacArthur grant, it was an open spigot of praise and prizes directed at Tversky alone, as if Kahneman hadn’t existed. Soon after, Tversky received a Guggenheim Fellowship, an invitation to the American Academy of Arts and Sciences, honorary degrees from Yale and many other universities, and too many speaking invitations to count. The praise for their work was so one-sided that Kahneman couldn’t help but feel slighted. Predictably, it put a strain on their relationship.
In the winter of 1996 Tversky received terrible news: he had melanoma, and it had spread. The doctors told him he didn’t have much time. Tversky didn’t tell many people the news, but he told Kahneman, and the two spoke almost every day until Tversky’s death. Kahneman was the person Tversky wanted to spend the end of his life talking to, and the conversations were at times painful. Tversky told Kahneman that he was the person who had caused him the most pain in his life. “Ditto,” said Kahneman. The intensity of their relationship, coupled with the intensity of the fame that followed, was bound to create some bad feelings. In the end, however, they loved each other as much as ever. Tversky died on June 2, 1996, at the age of 59.
After Tversky’s death the attention from their work slowly shifted to Kahneman. In 2001 Kahneman received a phone call inviting him to Stockholm to speak at a conference where members of the Nobel committee would be in the audience. It was clear Kahneman was under consideration for the Nobel Prize, and the prize he was being considered for was for the work he did with Tversky on prospect theory. The speech that he prepared, however, had nothing to do with the theory. In fact, many were befuddled by Kahneman’s choice to present a different topic entirely. It was a topic that had swept Kahneman away for the last few years, a topic he found completely fascinating: happiness.
We often hear people refer to health and happiness interchangeably: “Wishing you and yours health and happiness,” or, “May you have a long, happy, healthy life.” Health may be required to live, but happiness is what makes life worth living. But what is happiness? This simple question had begun to consume Kahneman. In many ways Kahneman’s genius lay in asking the right questions. Prospect theory had been born of the simple question: How do people make decisions? Now, before the Nobel committee members, Kahneman was asking them to consider another simple yet deeply profound question: What is happiness? What really matters regarding our individual happiness and well-being compared to what we think matters? Or, put another way: How do cognitive biases sabotage our ability to achieve happiness? Can we parse out what really does matter to our sense of well-being and happiness from what we think matters? The definition of happiness is, as Kahneman said, the state of being happy. But think about that—the state of being happy. This definition implies an element of time, being happy in the moment. In another moment you might not be happy; you might even be miserable. The temporal component of happiness intrigued Kahneman. When he read through previous studies of happiness he noticed something others had failed to detect. They all measured the happiness of the subjects involved by asking them: “Are you happy?” This question automatically impels the respondent to reflect back on his or her life, to tap into a narrative of memories to answer the question. This was very different, reasoned Kahneman, from the state of being happy, or being happy in this very moment in time. Being happy right now was radically different from reflecting back and tallying up a lifetime of experiences.
Here Kahneman found it helpful to divide a human being into two distinct selves: the experiencing self and the remembering self. These two selves, reasoned Kahneman, were actually very different people. “I think of it in terms of two selves,” said Kahneman, “There is an experiencing self, that’s the one that’s doing the living moment to moment. And then there is a remembering, evaluating self. That’s the one that answers questions like, How was it? How was the experience? How was your vacation? How’s your life these days? This is a very different person that we’re asking, they are not necessarily the same.” He also describes the two selves this way, “One is being happy in your life and the other is being happy about your life.”19
The experiencing self is living through a continuum—a stretched-out plane of existence, experienced seamlessly moment to moment, eliciting an ever-varying spectrum of emotions depending on our intrinsic neurochemistry and what is occurring at the time. The experiencing self is innocent and pure, capable only of existing and feeling at each fleeting moment in time. One moment we might feel the gut-wrenching pain of loss, while later we might feel the elation and laughter of being surrounded by friends and family at a holiday party. The existence of the experiencing self is transitory; once a moment in time is experienced it is gone forever. In comparison, the remembering self has permanence. The remembering self selects a finite series of moments to record as memories. These memories are then put into a mental photo album that becomes an ongoing narrative of our lives. The remembering self is a storyteller. When someone asks if you are happy, the remembering self is called into action. You will recall the ongoing story of your life, and perceived happiness begins to color your answer. Perhaps you think of the college degree you didn’t get, a failed marriage, or a low-paying job, and view your life as unhappy, while another person thinks of a professional degree, a successful marriage, or a high-paying job, and reports a happy life. According to Kahneman, there is a problem: In a way, the remembering self is devious and manipulative, and does not have the experiencing self’s best interest at heart.
In order to reveal the divergence between the remembering self and the experiencing self, Kahneman first needed a way to measure the happiness of the experiencing self. It is easy to measure the happiness of an individual’s remembering self: you just ask the person, “Are you happy?” Or, “Are you satisfied with your life?” But measuring the moment-to-moment happiness of the experiencing self was more challenging. Ultimately Kahneman and his colleagues found a way. They could simply ask people, “Are you happy right now, in this moment?” In addition, they gave each of their subjects a beeper that would go off at various times throughout the day, prompting them to rate their level of happiness at that moment in time. In this way they could measure the happiness of their subjects’ experiencing selves as they moved through their lives.
Kahneman’s research revealed that the relationship between the remembering self and the experiencing self is boundlessly complex and fascinating. Kahneman offered an example from a report a student had given of listening to a recording of symphony music. The music, recalled the student, was absolutely glorious, and he had a wonderful experience listening to it. However, at the end of the recording there was a terrible screeching sound. “This ruined the entire experience for me,” reported the student, very emotionally. Kahneman thought this was crazy. Clearly the experience was wonderful. Sure, the screeching noise was a fleeting moment of displeasure, but this shouldn’t negate the 20 minutes of enjoyment the beautiful music had given the student. Yet the remembering self recorded a purely negative conception of the entire experience.
Kahneman performed another experiment that further exposed this divergence between the two selves. He had subjects put one hand in painfully cold water for 60 seconds and had them record the experience as they remembered it. He then repeated the experiment with the other hand, except this time at the end of the 60 seconds a valve was opened, allowing enough warm water to enter to raise the temperature by a degree. The water was still painfully cold, just slightly less so. The subjects had to keep the second hand in the slightly warmer water for an additional 30 seconds and then record the memory of the experience. The result was strange, even counterintuitive. Even though the subjects endured more total pain in the second trial—90 seconds versus 60—they remembered the experience as less painful. When told they had to repeat one of the two experiments, the majority chose to repeat the 90-second one. “This is direct conflict between the experiencing self and the remembering self,” says Kahneman. “What defines a story are changes, significant moments, and endings. Endings are very, very important.” If the experience or “story” ends at the peak moment of pain, we remember the entire event much more negatively. The same goes for the story of the student listening to the symphony; the ending defined how he remembered the entire experience. “Choices people make are guided by their memory, they are not guided by the reality of the experience.”20
According to Kahneman, the distorted perception of time represents another departure between the experiencing self and the remembering self. “The biggest difference between them is in the handling of time. From the point of view of the experiencing self, if you have a vacation, and the second week is just as good as the first then the two-week vacation is twice as good as the one-week vacation. That’s not the way it works at all for the remembering self. For the remembering self the two-week vacation is barely better than the one-week vacation. Because there are no new memories added, you have not changed the story.”21
The value of material items to the two selves is also vastly divergent. For example, when you ask people how much happiness their car brings them, the answer correlates pretty well with the blue book value of the car. In other words, people with expensive cars report that their car makes them happier. Yet when you ask people how their commute to work was, the correlation to the price of the car is zero. People with expensive cars hate their commute as much as people with cheaper cars. “When you stop to think about it when do you get pleasure from your car? And the answer is when you’re thinking about your car,” says Kahneman. The same goes for a house. An expensive house brings pleasure only when the owner thinks about it. The moment-by-moment life of the experiencing self is not changed by the value of the house; it is dictated by what is happening inside the house—the difference between a house full of friends and family celebrating a birthday and a house where people seldom visit, for example. The divergence between the two selves is especially poignant when it comes to the amount of money someone makes. The remembering self is convinced that more money will equate to increased happiness, yet the data strongly suggest otherwise. Research shows that people’s day-to-day happiness rises steadily with income up to about $60,000 a year, then abruptly flatlines. In other words, making more than $60,000 per year does not result in increased happiness. “We looked at how feelings vary with income. It turns out that below an income of $60,000 a year, for Americans … people are unhappy, and they get progressively unhappier the poorer they get. Above that we get an absolutely flat line. I mean, I’ve rarely seen lines so flat. Clearly what is happening, money does not buy you experiencing happiness, but lack of money certainly buys you misery, and we can measure that misery very, very clearly. 
In terms of the remembering self you get a very different story. The more money you earn the more satisfied you are. That does not hold for emotions,” says Kahneman.22
The clear message from Kahneman’s research is that the experiencing self—a vulnerable, exposed being, only capable of feeling in the moment—is at the mercy of the remembering self. The experiencing self has no voice. The remembering self is making all the decisions in our lives, and the experiencing self is dragged along without a say, often to places that aren’t in his or her best interest. “We don’t think of our future as experiences, we think of our future as anticipated memories,” says Kahneman. As such, our remembering self often guides our lives toward a future of anticipated memories that are important only to the remembering self: material objects, higher-paying jobs, and bucket-list vacations, for example. The data clearly show that the experiencing self doesn’t care about these things.
To illustrate this dichotomy between the two selves, Kahneman has us perform a thought experiment. Pretend you get an offer to take a vacation anywhere you want to go in the world. But there is a condition. Once the vacation is over, all the photos and social-media posts will be destroyed, and you will be given a drug that wipes away the memory of the vacation. “Now, would you choose the same vacation?” asks Kahneman. “And if you would choose a different vacation there is a conflict between your two selves and you need to think about how to adjudicate that conflict.”
His research on happiness has, at times, perplexed Kahneman. The blurry interface between the two selves can be contradictory and counterintuitive. Early on Kahneman thought the often-neglected experiencing self mattered the most. After all, this is the version of ourselves doing most of the feeling. But the data have forced him to change his mind. The satisfaction or pain someone feels as they reel back through their life’s narrative turns out to be very important to us as human beings. And even when we are not directly thinking about our own life story it is always idling somewhere in the background, somehow bleeding over into the experiencing self and coloring our moment-by-moment feelings. “When I was young and foolish, and by that, I mean about eight years ago, I thought that really what matters is the experiencing self. Who cares about the remembering self, it’s just a story that we’re telling, and I thought we can neglect that. If we really succeed in studying the experiencing self, then we’ll have the real answer to people’s well-being,” said Kahneman. Yet, for Kahneman, this is where the subject of happiness—the hazy relationship between the two selves—becomes impossible to untangle. The scorecard of the remembering self, he discovered, matters more to people than he had anticipated. The remembering self, the ultimate decision-maker in our lives, is laden with demonstrable cognitive biases: inaccurate memories of pleasure and pain, a flawed emphasis on material wealth, and an irrational preference for anticipated memories over experience, for example. Even so, says Kahneman, the remembering self’s version of our lives appears to be a critical variable in our overall happiness. “People just don’t go along with the idea that the experiencing self is really the end-all and be-all. It turns out that people have a narrative of their life, they have a story of their life, they care a great deal about that story. 
They make decisions for that story, they make choices to keep that story good or improve it. People care a great deal and take actions based on anticipated memories and on evaluations.” In the end, Kahneman leaves the question of which matters more, the experiencing self or the remembering self, up to each one of us. “I do not answer the philosophical question of which matters more.”23
And who is to say who is happier? We can imagine a person who seems very happy in the moment, someone who smiles and laughs a lot yet, in thinking back on life, sees only failure: perhaps no college degree, a low-paying job, or missed goals. Conversely, we can imagine someone who doesn’t smile or laugh much, who doesn’t seem happy moment to moment, yet is deeply satisfied with his or her life’s story, having achieved the goals he or she set out to achieve. Who is happier? No one can know.
Kahneman found another series of data on the happiness of the experiencing self quite puzzling: the effect of one’s country of origin on the experiencing self’s level of happiness. “Let me tell you the results and I find them stunning and I don’t understand where they come from. The Danes, the Swiss, the Dutch, 3 or 4 percent of the people report being depressed the day before. The Americans, the Greeks, the Indians report about 14 percent. The Palestinians report 30 percent and the Armenians report close to 50 percent. There are enormous differences. National circumstances has a big effect on people’s lives,” says Kahneman. The degree to which it matters is remarkable—why would the Dutch, Swiss, and Danes report depression at less than a third the rate of Americans? Yet the data speak clearly: Our national circumstances are critical to our level of happiness.
While Kahneman’s work on happiness, like his prospect theory, is beginning to influence public policy, it also has the capacity to have a profound impact on an individual level. Learning about the research on happiness allows one to confront and examine the two selves within. Have you favored one at the expense of the other? Has the remembering self, with all of its cognitive traps, held a tyrannical reign over your life? Have you relentlessly pursued the improvement of your life’s “story” to the detriment of experience? Chased money, materialism, and recognition over the things that the experiencing self craves—intimate relationships, friendships, and experiences? Or perhaps your life is tilted the other way. Perhaps you’ve invested in experience over the things that the remembering self prizes and missed out on achieving goals, resulting in a dissatisfying life story. Or is it possible that you have struck a perfect balance, answering both of their needs? In the end, we each have to answer these questions alone.
In writing this section I couldn’t help but reflect on how Kahneman’s happiness research applies to the next generation’s relationship with social media. For myself, I remember a childhood largely in service of the experiencing self. My brother and our friends spent most of our free time outside. We often played in the woods, lost in our imaginations, saturated in the moment. My kids and their friends today, however, have had very different childhoods, often defined by small screens and their interactions on social media. I imagine that for many these new influences are shifting the realm of childhood from being dominated by the experiencing self to being dominated by the remembering self. Most social media is nothing if not a running narrative of the remembering self’s life. One platform even calls this running narrative of posts a “story.” Now when one observes people on vacation it can seem as though the vacation was intended more to fuel a social media feed than for any actual enjoyment of the occasion.
The outcome of this experiment is as yet unknown. One thing is clear, however: Suicide rates among this generation have risen dramatically—especially for girls and women (across all age ranges), for whom the rate increased by 50 percent from 2006 to 2016, compared to a rise of 21 percent for all ages of boys and men during the same time period. The social media portrayal of young people’s lives is subject to the judgment of others every day, as if their self-worth is continually on trial. And this goes for adults as well. Rigorous research bears this out. One recent study, designed to obtain a clearer picture of the relationship between social media use and well-being, used three waves of data from 5,208 adult Facebook users. The results were definitive. “Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being. These results were particularly strong for mental health; most measures of Facebook use in one year predicted a decrease in mental health in a later year. We found consistently that both liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.”24
In 2013, after studying happiness for two decades, Kahneman suddenly stopped. “I gradually became convinced that people don’t want to be happy. They want to be satisfied with their life,” he said. The reason he abandoned happiness research, as he explains it, seems to reflect an acquired cynicism for humanity’s pursuit of happiness. “People don’t want to be happy the way I’ve defined the term—what I experience here and now. In my view, it’s much more important for them to be satisfied, to experience life satisfaction, from the perspective of ‘What I remember,’ of the story they tell about their lives. I furthered the development of tools for understanding and advancing an asset that I think is important but most people aren’t interested in.”25
What you decide to do with this knowledge is of course up to you. For me, Kahneman’s study of happiness aided the realization that the remembering self—the evaluating, decision-making version of ourselves; the version that is in charge of steering our lives—is rife with cognitive biases. This version of ourself is not particularly good at recalling past incidents of pain and pleasure and using these to make future decisions about pain and pleasure. As Kahneman put it, “Choices people make are guided by their memory, they are not guided by the reality of the experience.” This version of ourself often chooses materialism and money over the things our experiencing self cherishes the most, such as intimacy, conversation, and friendship. This version of ourself is more concerned with the memory of a vacation than the experience of it. This version of ourself is always comparing ourself to others. “Life satisfaction is connected to a large degree to social yardsticks—achieving goals, meeting expectations. It’s based on comparisons with other people,” says Kahneman.26 Charlie Munger likes to point out that envy is the only one of the seven deadly sins with zero upside: “Combine gluttony and lust and you can have a helluva weekend … but with envy, you only feel bad.”27 In the end, Kahneman offers this very simple advice: “One way to improve life is simply by tilting the balance toward more affectively good activities, such as spending more time with friends or reducing commuting time.”
Kahneman’s ultimate realization that most people are more concerned with the remembering self’s version of happiness—or “life satisfaction,” as Kahneman calls it—over the experiencing self’s happiness, may be a cultural phenomenon that is more tightly woven into the fabric of certain societies than others. But there is an important connection to be made here: the relationship between the two selves and health. We’ve already learned that attending to our experiencing self’s needs for friendship, socialization, and intimacy can profoundly affect our epigenome in a way that promotes health and longevity. And this is indeed found to be the case in blue zones, where social cohesiveness binds generations of families and friends and society is built around these connections. There is no denying that we crave intimacy and acceptance all the way to the level of our genome. Our immune system simply doesn’t care what car we have in the garage.
It is pretty clear that Kahneman himself seems to be more empathetic to the needs of the experiencing self than the remembering self. Perhaps it is simply a function of his lack of ego: he is truly not concerned with awards, attention, and approval. Or perhaps it is because he has already achieved a life to his remembering self’s overwhelming satisfaction. Certainly, the life Kahneman appears to value more, the life of the experiencing self, is the life that both epidemiological data and genomics data strongly suggest is the better version for your health.
But it is not that simple. Few things in biology ever are. One intriguing study has revealed an interesting connection between the remembering self and longevity. This 2001 study compared the longevity of Academy Award nominees and winners with a control group of actors who were never nominated. The researchers identified 762 actors who had at one time been nominated for an Academy Award in a leading or a supporting role. The researchers then matched each nominee with at least one other cast member of the same sex who was in the same film and born in the same era to serve as the control group. The results were telling. Within the group of 762 Academy Award nominees, 235 had won one or more Oscars. The 235 winners lived almost four years longer on average than the non-nominated control group of actors. Further, actors who had won multiple Oscars lived up to six years longer on average than the control group. “It suggests that an internal sense of self-esteem is an important aspect to health and health care,” said the lead author of the study.28
Kahneman’s two versions of ourselves, the remembering self and the experiencing self, may serve as a good model to help us better mentally categorize and think about happiness, yet this bifurcated model may not entirely reflect the true complexity of our biology. The two versions of ourselves are not standing in separate corners ignoring each other. They are bound together, continuously interacting, arguing, agreeing, bickering, and supporting each other. Our mind is constantly engaged in an intricate, internal dance of which our life is the choreographer.
In the summer of 2009 Atul Gawande received a surprise in the mail: a check for $20,000, sent unannounced by a man named Charlie Munger. Admittedly, Gawande was confused. He had never met Munger, nor ever corresponded with him. Gawande, then 43, was best known for his writing. He had penned two best-selling books, Complications and Better, and was a staff writer for the New Yorker, which had just published his article “The Cost Conundrum.” The article, which I touched on earlier, was a deep dive into the massive cost variations to be found among local health care markets across the United States. The piece focused specifically on McAllen, Texas, one of the most expensive markets in the country. Gawande had traveled there personally and uncovered the cause: a local culture of competitive entrepreneurship among the doctors that had manifested in “across-the-board overuse of medicine.” The McAllen doctors had been acting less and less like doctors and more and more like businessmen.
It was this article that had caught Munger’s attention and inspired him to impulsively write out a check for twenty grand and mail it to the article’s author. Gawande was given no explanation for the check other than that Munger had deemed his article “so socially useful.” Flattered, yet still confused by the spontaneous gift, Gawande mailed it back to Munger. “He sent it back to me again and said, ‘Do with it what you want.’” Gawande finally relented and donated the money to Brigham and Women’s Center for Surgery and Public Health, where he worked as a surgeon. The money was used to help supply oxygen monitors to low-income countries.
With this odd exchange behind him, and the money put to good use, Gawande settled back into his busy routine of operating, teaching at Harvard, writing, and lecturing. In 2010 Time magazine named Gawande one of the world’s most influential thinkers. His book The Checklist Manifesto became another New York Times best-seller, and in 2012 he helped found Ariadne Labs, named for the figure in Greek myth who showed Theseus the way out of the Minotaur’s maze using a simple thread. Ariadne Labs served as a testing ground for high-impact “guiding threads” like surgical checklists—a bridge between Gawande and his staff’s innovative ideas about improving health care and their real-world implementation. Gawande penned more award-winning articles for the New Yorker in the years that followed, and in 2014 he presented the BBC’s distinguished Reith Lectures, delivering a series of four talks in London, Boston, Edinburgh, and Delhi, entitled “The Future of Medicine.” Then, in 2018, Gawande again heard from Munger.
The true architect behind the Berkshire-Amazon-JPMorgan health care consortium is a man named Todd Combs, one of two investment lieutenants handpicked by Buffett and Munger to succeed them. It was Combs who, in the winter of 2018, had a flash of insight. Like a parasite, America’s health care crisis had infected Berkshire and was relentlessly siphoning away resources and productivity. But perhaps Berkshire didn’t have to passively allow the broken health care system to exploit their organization, reasoned Combs. Perhaps they could address it internally—fix it from the inside out.
Combs is cut from the same cloth as Buffett and Munger, his recruitment by the pair reflecting the self-sustaining culture that Berkshire has cultivated its entire existence. Like Buffett and Munger, he is in a perpetual state of learning. “I read about 12 hours a day,” Combs once reported in a rare interview. Like Buffett and Munger, Combs is a student of human nature. “He [is] fascinated by psychology and the sorts of biases that drive decision-making. Books such as Talent Is Overrated and The Checklist Manifesto have become staples in the hedge fund world, where money managers are constantly looking for an edge. But Combs was interested in those ideas well before they became trendy,” said a colleague.29 Steeped in Berkshire’s culture, Combs knew that his first steps in tackling a problem as complex as health care were to learn as much as he could, and, vitally, to pick the right person to lead the new venture.
This meant talking to hundreds of people and researching their patterns of thought and their track records. Gawande doesn’t recall exactly when Combs began considering him to head the consortium that he had been quietly putting together for months. But the leadership at Berkshire had noticed something extraordinary in him, and he had climbed to the top of their internal list. And then, in the summer of 2018, he received a call offering him a job. They asked him to be the chief executive officer of the Berkshire, Amazon, and JPMorgan health care company. He accepted.
It’s easy to see why he was picked. Gawande fits perfectly into the Berkshire culture. Like Munger, Buffett, and Combs, Gawande is often described as humble yet brilliant, and he’s a voracious learner. “I can’t think of anybody who’d be better. I don’t know of anybody who’d be better,” said Arnold Epstein of Harvard’s T.H. Chan School of Public Health. “He’s excited. He’s nervous. And he’s also incredibly humble,” said another colleague. But the fledgling company, with its ambitious goals, does have its naysayers. “Just because you know an industry is underperforming and you have a lot of money doesn’t mean you have a successful strategy,” said Leemore Dafny, a professor at Harvard Business School.30 And Zack Cooper, of the Yale School of Public Health, tweeted: “I do hope Amazon, JPMorgan, and Berkshire succeed. Health care is wildly inefficient. However, it’s a bit like Mayo Clinic, Cleveland Clinic, and Partners in Health coming out and saying they don’t like their computers so they’re going to form a new IT company.”31
But the consortium is under no illusions about the difficulties that they face. And while others may see Gawande’s relative lack of experience as a negative, Bezos sees it as a strength. “We said at the outset that the degree of difficulty is high and success is going to require an expert’s knowledge, a beginner’s mind, and a long-term orientation. Atul embodies all three, and we’re starting strong as we move forward in this challenging and worthwhile endeavor,” said Bezos.32
For now, Gawande is tasked with delivering better and more efficient care for the venture’s 1.2 million employees. But the vision for the venture is much grander, and extends much further. If he can implement scalable improvements, then who knows? Other health systems may also adopt them. Politicians might even stand to learn from what Gawande and his team will do. Indeed, for three corporate giants, the goal of the venture is strangely altruistic. It will be set up as a nonprofit organization and serve as an incubator for ideas that, they hope, will be adopted around the world. “This work will take time but must be done,” said Gawande. “The system is broken, and better is possible.”33
Now the question becomes: How is better possible? What needs to be done? Indeed, Atul Gawande simply being himself has already proven to have incredible value. Six years after his New Yorker article about McAllen was published, Gawande looked back to see what, if anything, had changed there. What he discovered was striking: Between 2009 and 2012, costs in McAllen had dropped by almost $3,000 per Medicare recipient. Total savings to taxpayers were projected to have reached almost half a billion dollars by the end of 2014. After the article was published, journalists from Texas newspapers and television crews had swarmed the city. The shit hit the fan in McAllen.
“The reaction here was fierce, just a tremendous amount of finger-pointing and yelling and screaming,” said one of the whistleblowers in Gawande’s article. “We hated you,” another doctor told Gawande. “The story put us in a spotlight, in a bad way, but, in a good way at the same time.” Beyond the yelling and screaming and finger-pointing, the dustup from Gawande’s article had painfully real consequences. “Several federal prosecutions cracked down on outright fraud. Seven doctors agreed to a $28 million settlement for taking illegal kickbacks when they referred their patients to specialty medical services. An ambulance-company owner was indicted for reporting 621 ambulance rides that allegedly never happened. Four clinic operators were sent to jail for billing more than 13,000 visits and procedures under the name of a physician with dementia,” wrote Gawande. The prosecutions had a knock-on effect. One doctor told Gawande it caused the McAllen doctors to say, “Hey, we’re under the magnifying glass. We need to make sure we’re doing things strictly by the book.”
The changes that occurred in McAllen happened because Gawande shined a spotlight on what was going on. The pen can still be mightier than the sword. When what has always been invisible is suddenly made visible, the light can have a sterilizing effect. This is what happened in McAllen. Charlie Munger has often said that exposing fraud is one of the best things a person can do for a capitalistic society. For society to work, we all have to play fairly. They were not playing fairly, or honorably, in McAllen, and Gawande called them out.
Atul Gawande knows what he has to do. First, he recognizes precisely where we are in the sweep of history. In previous generations medicine had made massive leaps forward with the discovery of a single new therapy—penicillin, for example. We are beyond that. This is the age of the system. “We are going from the century of the molecule to the century of the system,” said one of Gawande’s colleagues. “DNA, genes, energy, future. It’s how the genes connect together that actually determine what diseases do. It’s how the neurons connect together and form networks that create consciousness and behavior. And it’s how the drugs and the devices and specialists all work together that create the care that we want,” said Gawande in one of his BBC lectures.
Nature provides a good example. Over three billion years ago, life began as a self-replicating molecule. This evolved into a self-replicating cell, which in turn evolved into a multicellular organism, with individual cells now performing a certain task for a given system. Multicellular life is a collective, if you will, an economy of divided labor. Evolution is a march up the ladder of functionality: molecules to cells to multicellular organisms. Nature repeats patterns. Societal organization, including medicine, is marching up the same ladder of complexity. We have gone from caves, to cities, to the industrial revolution, and up and up. Medicine is following the same path, becoming a multicellular organism, in a sense, a system of divided specialties.
Looking down from 30,000 feet, Gawande knows that America’s biggest problems in health care come from the wild variations in care that have been highlighted in previous chapters. Period. “In 2010, the Institute of Medicine issued a report stating that waste accounted for 30 percent of health-care spending, or some $750 billion a year, which was more than our nation’s entire budget for K–12 education. The report found that higher prices, administrative expenses, and fraud accounted for almost half of this waste. Bigger than any of those, however, was the amount spent on unnecessary health care services,” wrote Gawande. “Millions of people are receiving drugs that aren’t helping them, operations that aren’t going to make them better, and scans and tests that do nothing beneficial for them, and often cause harm.”34 Really, the needle doesn’t have to move much to see an improvement. The Commonwealth Fund report, a survey that ranks the quality of health care in ten developed nations, has ranked the United States the worst of the lot for five years running.
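The Institute of Medicine figures quoted above imply a striking back-of-the-envelope calculation. A minimal sketch (the dollar amounts come from the quote; everything else is simple arithmetic):

```python
# If roughly $750 billion in annual waste represents 30 percent of
# health-care spending, the implied total spending follows directly.
waste = 750e9        # estimated annual waste, in dollars (IOM, 2010)
waste_share = 0.30   # waste as a fraction of total spending

total_spending = waste / waste_share
print(f"Implied total annual spending: ${total_spending / 1e12:.1f} trillion")

# The report attributed almost half of the waste to higher prices,
# administrative expenses, and fraud; the single largest share was
# unnecessary services.
print(f"Roughly half of the waste: ${waste / 2 / 1e9:.0f} billion")
```

In other words, the quoted numbers imply an overall health care budget of about $2.5 trillion a year, which is why even small percentage improvements translate into enormous savings.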
And here is the way forward. Just as Fisher’s clinical trial provided the statistical data to halt the radical mastectomy in its tracks, statistics of a new kind will guide reformers like Gawande—providing the means to identify, and cull out, the waste. Capitalistic markets find a way of rewarding those who do things better and cheaper. For example, Walmart has initiated a program for employees who require spine, heart, and transplant procedures—procedures that account for the lion’s share of the costs associated with unnecessary overtreatment. The program covers all expenses if the employee will go to one of six centers, places like the Mayo Clinic and the Cleveland Clinic, where doctors are incentivized not to overtreat because they are paid a salary. Here, if patients don’t need an operation, they won’t get one. The result: less suffering and less wasteful spending.
Humans have a tendency to overcomplicate things. In fact, there is a name for this propensity: the complexity bias. Complexity bias is a logical fallacy that leads us to give unjustified credence to complex concepts over simpler ones. Occam’s razor, the problem-solving principle that simpler solutions are more likely to be correct than complex ones, may seem intuitively obvious. Yet somehow, as a civilization, it is here that we so often fail. For example, there is a simple solution to our nation’s health care crisis dangling right in front of us: change our physicians’ payment incentive from fee-for-service to salary. If this were enacted into law, the vast majority of physicians in the country would become completely different doctors overnight. Freed from worrying about which procedures might cause them to lose money, freed from having to think like businesspeople, they could finally focus on the thing that matters: delivering the best care to the patient in front of them. At a recent dinner party, where the wine was flowing freely, I overheard a doctor say that his practice had fired a doctor for “not being productive.” When I asked him what he meant, he explained, “He wasn’t billing out enough.” This has to end. For humans, incentives are incredibly powerful. Charlie Munger knows this.
Merely making better use of EMR systems—as Brent James did at Intermountain—can result in massive improvements in health care. Early on, James realized the power of data and exploited it to guide physicians’ decision-making, reducing the massive variation in treatment by allowing data to reveal the “best practice” for a given problem. Since then, a unique company named Flatiron Health—launched in 2012 when a cousin of one of the founders was diagnosed with leukemia—has developed an EMR system with a variety of unusual features. But Flatiron doesn’t simply market an EMR system. It sifts through records accumulated from multiple practices to extract and analyze meaningful “real-world data” that can better guide cancer treatment. Like James and his team at Intermountain, Flatiron Health is more than just an EMR; it is a comprehensive, top-down managed system. This type of system—coupled with artificial intelligence (AI)—is the future: a system that is constantly capturing data and learning the best way to treat patients.35
We are not entirely rational creatures. Psychologists like Kahneman and Tversky have defined our illogical circuits. Evidence-based systems like Intermountain, Geisinger, and Flatiron Health are attempting to “childproof” the environment for us—padding the corners and plugging the outlets. They save us from ourselves. And the outcomes of these systems speak for themselves. Simply put, systems make health care better. They funnel the wild variations in treatment down into a single “best practice.” And any argument that this funneling process limits a doctor’s ability to treat each patient individually doesn’t hold water. The data are very clear. Reducing variation by establishing a best practice improves outcomes. The funnel is not constraining physicians; it is narrowing their margin of error and permitting them to focus on the patient with more acuity. At its best, technology delivers us from the drudgery of life. It frees our hands and our attention. Over a hundred years ago, traveling from one city to another by horse and buggy would consume all of a traveler’s effort and attention. When the automobile arrived, traveling became easier. Soon driverless cars will allow travelers to focus entirely on other things while moving from one city to the next—to read, listen, or notice things that would not otherwise be noticed. Best practices are the driverless cars that free a physician to attend entirely to the patient’s needs. Best practices are not a straitjacket; they are the emancipators of intuition. The biggest danger now is that we won’t moneyball health care fast enough.
The incursion of big data and AI into medicine will only increase from here. In October 2018, Google announced the development of a deep-learning tool called Lymph Node Assistant, or LYNA. This AI system can distinguish cancerous from noncancerous biopsy slides 99 percent of the time, an accuracy well beyond that of pathologists working alone. When pathologists incorporate the system into their practice, they find that it reduces the rate of missed micrometastases by a factor of two and cuts inspection time in half.
The inevitable backlash against the incursion of evidence-based systems and technology into medicine is sure to intensify. New technologies always encounter resistance before they are adopted. The automated push-button elevator was invented in 1900, yet it wasn’t until the 1950s that it was widely accepted. Even though the new elevators were safer, people had a hard time trusting an elevator without a human operator. Before Westinghouse invented the air brake for trains in 1869, a “brakeman” had to climb to the top of the train and manually crank a wheel to stop it. It was a terribly dangerous job that had to be performed in all kinds of weather, and every year many brakemen were injured or killed. Still, the idea of automated brakes powered by air was resisted. The brakemen’s union even published newspaper ads to drum up public outrage: “Are you going to trust your life to air?” Driverless cars are sure to follow the same predictable pattern. And the same is true for flying: while air travel is considered very safe, roughly 50 percent of accidents today are due to human error. But the continuous process of identifying these human errors and developing automated systems to correct them has made air travel safer and safer.
Another example is Russian chess grandmaster Garry Kasparov, considered by many to be the greatest player of all time, who over the span of his career experienced a uniquely intimate incursion of technology into his profession. For Kasparov, it began with what’s known as a “simultaneous exhibition” match in the summer of 1985 in Hamburg, Germany. Kasparov’s opponents that summer day consisted of 32 personal computers programmed by four companies to play chess. Kasparov would walk from one board to the next, and the faceless computer would make its move the moment Kasparov arrived. Although one of the 32 games was uncomfortably close, in the end the human brain prevailed, and Kasparov won all 32 games in a clean sweep. Twelve years later, Kasparov again found himself sitting across from a computer. This time it was a match against a $10 million IBM supercomputer named Deep Blue. The match received no lack of attention from the media. The Guardian proclaimed it was Kasparov’s job to “defend humankind from the inexorable advance of artificial intelligence.” Newsweek’s headline called it “The Brain’s Last Stand.” But in New York City, Kasparov lost to Deep Blue—the first match defeat of a reigning world chess champion by a computer under tournament conditions. When Kasparov lost to Deep Blue the world changed, and so did Kasparov. He was forced to grapple with the existential meaning of the loss and the rapidly evolving relationship between humans and machines. And there was a distinction. The relationship Kasparov was contending with was not simple; it didn’t exist merely on the physical plane, like replacing a brakeman with an automatic air brake. Rather, it was machines transcending into something uniquely human. The game of chess is a game of logical analysis, but it is also a game of subtle intuition—a nexus between reason and instinct. For a machine to beat the best human chess player alive was unsettling.
Up until that day in New York, machines had woven their way into our lives from the bottom up by replacing simple tasks, those we believed “beneath” us. Now, however, a machine had proven better than humans at the very capabilities that define us, the abilities that set us apart from the rest of the animal kingdom: logic and intuition. It was almost as if the earth were no longer the center of the universe. Could a machine be programmed with the creativity and intuition of a human being? As he grappled with the meaning of the loss, what Kasparov did next was not to further the rivalry but to facilitate a truce of sorts. He developed something called “advanced chess,” which allows each player to pick a computer program to play with them—a person-versus-person game with each player assisted by a computer. The idea was to merge the best qualities of both—the nanosecond number-crunching ability of the computer alongside the subtle instincts thought to be uniquely human. And this is exactly what happened. Advanced chess brought the level of play to heights unattainable by man or machine alone; it became a beautiful hybrid of cognitive strengths. And why should there be a distinction? Why are we naturally inclined to frame chess as “human versus machine” rather than “human alongside machine”? After all, machines are developed and programmed by humans, so advanced chess is really just humankind at its best. Kasparov’s advanced chess offers a powerful example of how health care might be transformed.
How the inevitable incursion of data into medicine is framed will be critical. As reformers like James and Gawande and technology like Google’s LYNA impinge on physicians’ autonomy—and perhaps their sense of self-worth—the cries of fear and resistance will be heard, just as they have been with every new technology. As the borders of a physician’s autonomy are drawn tighter by data-driven systems, the complaints of “cookbook medicine” and data “straitjackets” will grow louder. But it would serve all of us well to skip the knee-jerk reaction to medicine’s technological transformation. Perhaps it can be framed differently. Kasparov’s advanced chess should be held up as a beacon, a perfect metaphor for the merger of technology and medicine—advanced medicine—in the pursuit of a new and better culture. Combining the analytic strengths of evidence-based medicine and AI with human intuition will transport the practice of medicine to heights unattainable by humans alone. Technology will serve to liberate physicians’ intuition and enhance their capacity to make the imaginative leaps of insight that remain uniquely human. After all, Spock and Kirk were better together.
This transformation will free physicians to focus on something else: the human side of medicine. Sick people need empathy. Programs like Geisinger’s innovative Fresh Food Farmacy would not be successful without the human component. This fundamental human need is not some superficial nicety; we now know it penetrates deep into our core biology. Human interaction, including the empathetic support we receive from our health care providers, changes the expression of genes in our immune systems in ways we have not appreciated until recently. And this beneficial shift in immune function can drastically change our health outcomes for the better. Human-to-human interaction is part of healing, and it is something that technology will never replace.
On a societal level we need to do better. The current trend is disturbing: The citizens of Western nations are feeling more and more isolated and lonely. A nationwide survey by the health insurer Cigna found that loneliness is widespread in America, with nearly 50 percent of respondents reporting that they felt alone or left out “always” or “sometimes.” More than half of survey respondents said that they “always” or “sometimes” feel that no one knows them well. Fifty-six percent reported they “sometimes” or “always” felt like the people around them “are not necessarily with them.” And two in five felt they lacked companionship, that their relationships weren’t meaningful, and that they were isolated from others. Surprisingly, the survey revealed that younger generations are more strongly affected by loneliness. The survey used a scale ranging from 20 to 80, with people scoring 43 and above considered lonely and higher scores suggesting a greater level of loneliness and social isolation. The 2018 survey revealed that “members of Generation Z, born between the mid-1990s and the early 2000s, had an overall loneliness score of 48.3. Millennials, just a little bit older, scored 45.3. By comparison, baby boomers scored 42.4. The Greatest Generation, people ages 72 and above, had a score of 38.6 on the loneliness scale.”36
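The survey’s scoring rule is simple enough to express directly. A minimal sketch applying the 43-point loneliness threshold to the generation scores quoted above:

```python
# Loneliness scale runs from 20 to 80; Cigna treated scores of 43
# and above as "lonely," with higher scores meaning greater isolation.
LONELY_THRESHOLD = 43

generation_scores = {
    "Generation Z": 48.3,
    "Millennials": 45.3,
    "Baby boomers": 42.4,
    "Greatest Generation": 38.6,
}

for generation, score in generation_scores.items():
    status = "lonely" if score >= LONELY_THRESHOLD else "below threshold"
    print(f"{generation}: {score} ({status})")
```

Only the two youngest generations clear the threshold, which is precisely the inversion the survey found so surprising.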
The nuances revealed by the survey show that it’s difficult to pin the cause on social media alone, or on Americans’ obsessive consumerism and overworking. For example, it’s not simply the existence of social media but how it’s used that matters. Passively scrolling through feeds is associated with negative feelings, but actively reaching out to people in a way that facilitates meeting face to face is associated with positive feelings. Likewise, working too much is associated with loneliness, but working too little, or not at all, is also associated with loneliness. Like most things in life, it’s the subtleties that matter. The balance.
One simple way to dramatically lower health care costs is through prevention. Benjamin Franklin’s quip that “an ounce of prevention is worth a pound of cure” is as true today as when he coined it hundreds of years ago. Yet our health care system is heavily weighted toward the procedures, devices, and medications aimed at treating problems once they have already occurred, with too little attention paid to preventing them from occurring in the first place. What is the way forward here? How do we effectively prevent health problems on a societal level? The science of social genomics presented in this book hints at a creative way to approach the problem. The data are becoming more and more convincing that human connections are dramatically important to our health and happiness. Our sense of well-being and our actual physical well-being—our innermost biology—are tangled together in a way that can’t be undone; they are forever quilted together, and we need both to be healthy and happy. But there is no need to wait for society to change. We are all empowered to act. Examine your own life. Find the right balance between your remembering self and your experiencing self. Don’t get caught up in the American penchant for envy and comparison. Remember Kahneman’s research showing that happiness levels off at around $75,000 a year. Warren Buffett once told a PBS reporter that his $77 billion worth of Berkshire shares has “no utility to me. They can’t do anything to make me happier. I’m already happy.” “If I could spend $100 million on a house that would make me a lot happier, I would do it,” Buffett remarked. But he is wise enough to know that it wouldn’t. Buffett has lived in the same house that he purchased for $31,500 in 1958. “For me, that’s the happiest house in the world. And it’s because it’s got memories, and people come back, and all that sort of thing.”
To be sure, the arc of history is a continuum of societal change. Societies can change from aggressive to passive. They can be inclusive or exclusive, welcoming or xenophobic—build walls or open their borders. Styles, trends, values, ethics, and norms change. Societies evolve, shift, and adapt. They live and breathe like the people who make them up. Perhaps policy-makers will consider this happiness metric someday. In the U.K. they already do, with policy decisions aimed at maximizing their citizens’ happiness. Societies could be guided to be more like the blue zones. Every downtick in loneliness that we achieve translates directly into improved health—better cardiovascular health, improved immune function, and reduced “all-cause mortality.” And this effect appears to be even more amplified among younger people. Lonely, directionless young people are a tinderbox for any society.
Perhaps our policy-makers would do well to follow the advice of Thomas Jefferson, who said, “The care of human life and happiness, and not their destruction, is the first and only legitimate object of good government.”37 And again, as individuals possessing free will, we don’t have to wait for society to change. If you take nothing else from this book, I hope you take home the examples of the power of relationships. Indeed, Kahneman would not be Kahneman without Tversky, and vice versa. Buffett would not be Buffett without Munger, and vice versa. The sum of the two made up the whole. We are better, healthier, and happier together.
So, what is the future of medicine? How good can we get? To begin with, the low-hanging fruit must be picked. Eliminating payment systems that incentivize physicians to overtreat is obvious. Reducing the massive variation in treatment that occurs daily across the country by adopting evidence-based systems to establish best practices is another obvious need. Making full use of our pool of existing drugs can happen now by forming a panel of experts from government and the private sector to comb through the massive amount of data supporting the use of drugs like metformin in cancer, or ketamine for drug-resistant depression, for example. Their potential benefits can then be rationally weighed against their risks and best practices established for their use. Common mistakes—adverse drug interactions, hospital-acquired infections, and so on—will be eliminated by fail-proof systems. Looking far into the future, the antiquated “magic bullet” approach to pharmacology will be replaced by combinations of drugs for complex diseases such as cancer and autoimmunity. Drugs will be given in elaborate combinations, timed to inhibit exact cellular pathways at precisely the right moments, rewriting the epigenetic tags responsible for the disease and reversing it at its source. Doctors’ office visits will routinely include a gut biome readout; missing species of good bacteria will be added back in, bad ones eliminated. Genes that result in pointless suffering will be edited out using CRISPR (clustered regularly interspaced short palindromic repeats), an otherworldly technology that allows scientists to rewrite our genomes at will.
Technologies will reverse the damage of aging—aggregated proteins will be dissolved, senescent cells eliminated, stem cells replaced. Whole organs will be regenerated in the lab from our own tissue and replaced as needed. Degenerative disease will slowly be eliminated altogether. Mental illness will be solved through targeted drugs and cognitive exercises that rewire or reestablish missing or errant neural connections. And these changes will march alongside societal changes that make a life worth living. Loneliness will be eliminated through social programs that bring people together and provide purpose. People won’t be allowed to fall through the cracks anymore. Communities will be planned and built more like the blue zones—the markets, streets, shops, and town centers all strategically designed around human connection.
Is this utopian vision realistic? I believe it is. Think of what we have accomplished in a single person’s lifetime. Jeanne Louise Calment, once the oldest known living person, witnessed the transformation of medicine from something barbaric into something almost angelic. Technology builds on technology, and the pace is exponential. Even with all the problems we have today, the vector of progress is always headed in the right direction. With creative reformers like Brent James and Atul Gawande, coupled with improving medicines, devices, and AI systems, it is a time for hope. The possibilities are limitless. Perhaps Brent James said it best: “We have not yet begun to understand how good we can be.”