None of us can comprehend how the human race might manage living 300 or 400 years, or any other outrageously long time, without first understanding the social and scientific forces that have made the idea of it possible in the first place. That begins with explaining why we die.
In 1899, tuberculosis killed more people in the United States than anything else. It killed hideously and spread easily: a bacterial infection that essentially shredded the lungs and ravaged the body. The white plague, they called it. After tuberculosis came the next biggest killers: pneumonia, diarrhea, and gastroenteritis.
This was one reason the average white American lived only 48 years, the average black American 34 at most—just 15 years longer than our ancestors had survived during their days wandering the plains of Africa. Three hundred millennia of evolution, 10,000 years of civilization, and all the human race had to show for it was a meager 15 years of additional longevity!
By the time Henry Ford rolled out his first Model Ts and the fox-trot was all the rage, a great many American children were lucky to make it past their fifth birthday. One out of four died in childhood of typhus, pneumonia, scarlet fever, or rheumatic fever, vanishing at rates of 10 to 35 percent a year. The simplest accident could snatch a person's life. A worker might gash his hand at the factory and die not long afterward of blood poisoning. In 1900, even the most advanced practitioners of the medical arts would never in their wildest imaginations have believed that the average human could live 80 years.
And why would they? When archaeologists pored over the writings of healers from Mesopotamia, Egypt, India, and Israel, they found plenty about migraines, seizures, smallpox, cholera, dropsy, and leprosy, but precious little about cancer, diabetes, heart disease, stroke, or dementia. Why? Because aging never had time enough to get a toehold. There were far too many other ways to die.
But then, as the 20th century marched on, the statisticians who charted the nation’s actuarial tables began to notice people were living longer. Significantly longer. At first, this was mostly thanks to simple advances in sanitation. Water was cleaner. And milk, a major source of infectious bacteria, was pasteurized.
In 1890, the first American sewage treatment plant using chemical precipitation was built in Worcester, Massachusetts. Large sanitation projects in big cities throughout Europe and America followed in the early 20th century, and chlorination was adopted in many cities after it was used to stem a typhoid fever epidemic in England in 1905.
Medical care improved too. Whereas surgeons as recently as the early 20th century didn't think twice about eating a sandwich while performing an amputation in the operating theaters of the day, by World War I they had learned that there was a connection between medical sanitation and the appalling number of deaths they themselves had caused. In fact, throughout society, the role that germs played in disease became better understood. The modern world grew cleaner, if not perfectly safe, from hospitals to restaurants to the workplace. Even the white plague began to vanish. By 1940, cases of tuberculosis in the United States had plummeted by half.
The next big life extender was antibiotics. Even after improvements in sanitation, the really ugly killers were still infectious diseases. Often, the only barrier between life and a horrible death was the strength of whatever a person's DNA and immune system had the good fortune to bestow. Then, in 1928, a Scottish bacteriologist named Alexander Fleming noticed something odd as he examined the culture plates in his lab at St. Mary's Hospital in London: The bacteria he was studying had stopped growing in one of his petri dishes. The reason: A few spores of a green mold called Penicillium notatum had accidentally gotten into the dish.
Scientists already knew that certain molds and bacteria didn’t get along. They had been waging predatory war with one another at the cellular level far longer than the human race had been around—probably billions of years. But thanks to this new bit of information, Fleming suspected the green mold could be used to kill bacteria outright—maybe whole battalions of bacteria. “When I woke up just after dawn on September 28, 1928, I certainly didn’t plan to revolutionize all medicine by discovering the world’s first antibiotic,” Fleming later said. “But I guess that was exactly what I did.” He called the substance “mold juice,” but later named it penicillin.
Now all Fleming needed to do was turn the substance into a usable drug of some kind. But he was no chemist, and his repeated efforts foundered. It took 12 more years—and the insights of an Australian pharmacologist named Howard Florey and a German-British biochemist named Ernst Chain—to manage that. In 1941, they purified enough penicillin to treat their first patient. It took three more years before the drug could be produced in bulk and applied the way doctors use antibiotics today.
Scientific techniques now snowballed, and waves of new vaccines and antibacterial drugs followed: chloramphenicol in 1947, chlortetracycline in 1948, the first safe polio vaccine in 1952. Between 1940 and 1950, the number of medicines that doctors commonly used more than doubled, and nefarious diseases that had been killing human beings since time immemorial fell like dominoes. Life expectancy leaped forward. Between 1900 and World War II, the average American's life span increased by 26 years, nearly twice the gain of the previous 300,000 years.
Nevertheless, people still died. But now they were dying later, and from different diseases. In 1899, cancer was not even listed among the top five killers in the United States.3 It was so rare that when a respected surgeon named Roswell Park argued that cancer would someday become the nation's leading cause of death, the medical community thought he had lost touch with reality. And yet by 1950, cancer had taken its place as the nation's second leading killer. In the space of a single generation, the number of people surviving beyond 60 had nearly doubled. Good news—except that now formerly rare killers like heart attack, cancer, and stroke were on the rise. Longer life had created a new class of killers.
This situation gave rise to something entirely new: gerontology. Élie Metchnikoff, a Russian Nobel laureate and pioneer in immunology, had coined the term (literally the “study of old men”) in 1903—but in those days, there really wasn’t much need for the field, because so few people actually grew old. Now, all of that was changing. Organizations like the Gerontological Society of America were formed, and pioneers like James Birren began to study how the body and brain aged.
The field quickly branched into examining anything at all related to advancing age: pharmacology, public health, and the psychological effects, economics, and sociology of aging. Yet none of its practitioners—not even those in the biological branch—concerned themselves with what actually caused aging, or what could be done to prevent it. Even the sister field of geriatrics focused only on treating and reacting to the inevitable deterioration of aging: problems like loss of memory, mobility, and strength, and diseases like osteoporosis, arthritis, heart disease, and diabetes—whatever began to break down the body. But no one seemed much concerned with whether aging itself could be stopped.
To gerontologists, the reasons for this were obvious, because everyone knew that aging was simply something the body did. No respectable doctor gave any real thought to how one might arrest it. After all, everything, everywhere broke down, given enough time. Bridges, roads, machines, dogs, cats. Even mountains and valleys. It was entropy at work. The great circle of life. Why should we humans be any different?
Of course, it was true that some people lived longer than others. And there was the vague and accepted understanding that genes and personal habits—either good (exercise and proper diet), or bad (overeating, too much alcohol)—had an effect on how quickly one’s biology would bite the dust. But there was never any doubt that bite it would. Aging was simply what happened to anyone who survived long enough to have it happen. The best you could hope to do was treat the inevitable symptoms. If you were really fortunate, your obituary might read: “Died from natural causes.” Which was to say, you wore out. But the idea of slowing aging, or reversing it? That was the stuff of science fiction, not serious research.
Nevertheless, the trends set in motion in the postwar medical world continued. Life was lengthening; by the 1960s, life expectancy began to approach 70. But among those who lived longer, cancer claimed more and more lives, and heart disease was skyrocketing. In 1968, the death rate from heart disease peaked at more than 350 per 100,000 people. Because so many people had for so long died of infectious diseases, the connections among heart attacks, smoking (everybody smoked), high cholesterol, and high blood pressure had gone right over the heads of the medical experts—heart attacks had simply never been a major killer before.
Now they were, and researchers in the life sciences began to glimpse the ways high blood pressure and atherosclerosis damaged the vascular system and heart. Not that they had it all figured out. In the 1950s and '60s, doctors still used the term "essential hypertension," believing that patients actually needed higher blood pressure to push blood to their brains. That was what made it "essential." But when researchers began to develop beta-blockers and other drugs that reduced blood pressure, they realized "essential hypertension" wasn't essential at all. It was just hypertension, and it was blowing people's vascular systems apart like bad tires. Men (more than women) dropped dead in their 60s, or even their 50s, from acute coronary thrombosis or myocardial infarction, struck down with the random but inevitable destruction of a sudden thunderstorm. President Eisenhower had suffered his first heart attack while in office and finally passed through the veil in 1969. Louis Armstrong died in 1971. J. Edgar Hoover in 1972. President Lyndon Johnson in 1973. All from heart attacks.
Now, suddenly, saving hearts became imperative. On December 15, 1967, Time hailed Dr. Christiaan Neethling Barnard on its cover as the man who had performed the first successful heart transplant. The surgery, the magazine said, was "epochal." The feat seemed at once the most outrageous and the most sensible way to defeat the new scourge. Nevertheless, it wasn't a panacea. For most people, heart disease, stroke, diabetes, and cancer simply became the new ways to die. It was sad, but what else could one do?
Researchers did have some ideas. Building on the breakthroughs in antibiotics and postwar molecular drug development, they began to connect new dots and develop pharmaceuticals that, if they couldn't cure the emerging diseases with the same lethal efficiency vaccines and antibiotics had, could at least treat the symptoms and slow the damage. Diuretics and the first beta-blockers, which became available beginning in the late 1950s, were designed to slow the damage from congestive heart failure and hypertension. Then came calcium channel blockers and ACE (angiotensin-converting enzyme) inhibitors: more medications for treating high blood pressure, heart failure, and diabetic nephropathy. A long, long list of other cardiovascular drugs flowed as the pipelines of the pharmaceutical industry continued to widen.
Yet, despite these incremental advances, none represented the sort of breakthrough that had been made against infectious disease: no single vaccine or drug, no silver bullet that could wipe out whole classes of killers. That was because these disorders were far more complex. By definition, infectious diseases came in the form of viral or bacterial attacks from outside the body. If you killed the bug, you killed the disease. But the newest top killers were different. They were the result of the body's own complex and inscrutable biology. Yes, smoking, poor diet, and other outside influences could and often did contribute. But diabetes, cancer, and heart and vascular disease were largely killing people because people were simply getting older, and the sources of that deterioration were tough to unmask.
Medicine had hit a new kind of wall. There were no knockout punches for these diseases. When it came to aging, the best that Medicine—the kind with a capital M—could manage was to nibble at the edges.
None of this, of course, was immediately obvious. When President Richard Nixon launched the war on cancer in 1971 with the passage of the National Cancer Act, it had the scent of John Kennedy’s race to the moon. Throw enough money, brainpower, and technology at a problem and the next thing you know, you’re bounding around the lunar surface hitting golf balls.
The problem was, going to the moon was largely about engineering and physics. Diseases were about biology, and biology was a far more unpredictable demon than engineering. In those days, the scientists spearheading the research thought cancer was a single disease. Forty years later, they had learned the hard way that it was 100 different diseases, at least, and each one required different treatments.
Not that progress wasn't made. By 2008, deaths from heart disease had been halved, stroke deaths had fallen by two-thirds, and new drugs had joined increasingly ingenious treatments and earlier cancer detection to cut cancer mortality 21 percent over the previous 13 years.4 At Genentech in the 1980s and '90s, Herb Boyer, Art Levinson, and the rest of their teams were snipping DNA with recombinant technology to create pharmaceuticals that attacked diabetes, heart failure, and colon, ovarian, and rectal cancers at the molecular level. Under Levinson's leadership, Genentech developed some of the first monoclonal antibody drugs, like trastuzumab (trade name Herceptin), which could seek out and destroy specific cancer cells—in this case, the cells of an ugly and lethal form of breast cancer. That, in turn, reduced the debilitating damage that radiation and chemotherapy did to the whole body. All of this was postponing death for millions of people. But there was still a dark side: people were living longer, yes, but too often they weren't living better.