TWO

The Great Divide

IN 1912 the prominent Harvard Medical School professor Lawrence Henderson observed that the United States had crossed a “Great Divide.”1 “For the first time in human history,” Henderson wrote, “a random patient with a random disease consulting a doctor chosen at random stands a better than 50/50 chance of benefiting from the encounter.” American medicine had advanced to the point, in other words, where it was better to be treated by physicians than to run in fear from them. The change was surely overdue: Even in the richest and healthiest nation in the world, life expectancy for men was still just forty-eight years.2

Today, a century later, male life expectancy in the United States exceeds seventy-six years—more than an extra quarter century of experience and activity and achievement.3 The average life expectancy of American women has risen from under fifty-two years at the beginning of the last century to nearly eighty years at the beginning of the current one.4 Of course, the United States was hardly alone in experiencing this “escape” from ill health and premature mortality, as Angus Deaton phrases it in his sweeping account of global economic development.5 (Deaton is the Nobel laureate economist whose work on mortality we encountered in the last chapter.) With somewhat different tempos—America near the lead at first, still a high performer in midcentury, but then falling behind, as we have just seen—rich nations all saw life expectancy rise and death rates fall, beginning with children and eventually extending to the last stages of life.6 In 2000 life expectancy in these affluent nations averaged nearly eighty years, with women living to around eighty-two.7 No other period in world history has witnessed such a dramatic improvement in mortality.8

We might assume that social progress, like human evolution (or at least our image of it), is gradual and even. If Americans lived half again as long in 2000 as they did in 1900, then they must have lived roughly half again as long in 1900 as they did in 1800. In fact, the improvement in health and mortality was rapid and discontinuous.9 It shot ahead in the twentieth century. Citizens of the late 1800s had conveniences and inventions and forms of social organization that their hunter-gatherer ancestors could not dream of. But their patterns of health and mortality were much closer to those ancestors’ than to ours. As three demographic experts conclude,

Until the late 1800s, the world’s lowest-mortality populations were not far below the observed range of variation for hunter-gatherers around the prime of life (when mortality is lowest), yet a greater than species-level jump in mortality reduction has been made since. Overall, the bulk of this larger gap in mortality between the longest-living populations and hunter-gatherers occurred during the past century. . . . In gross comparative terms, this means that during evolution from a chimp-like ancestor to anatomically modern humans, mortality levels once typical of prime-of-life individuals were pushed back to later ages at the rate of a decade every 1.3 million years, but the mortality levels typical of a 15-year-old in 1900 became typical of individuals a decade older about every 30 years since 1900.10

Put more simply, life expectancy increased far more in the last century than it did in the evolutionary leap from chimpanzee to human.

What happened to create such a momentous, positive, and still-progressing transformation? As Henderson’s 1912 observation suggests, the main answer is not advanced medical care. Nor, as we shall see, is increased national income the major story. What happened around the turn of the last century was neither a revolution in medical treatment nor a natural dividend of growth. It was the emergence of effective government action to improve the health of citizens. Funded by growing income, spurred by pressures from reformist social groups, and informed by a new awareness of the benefits of public health (and, eventually, new science that explained where disease came from), public authorities stepped in to use government’s distinctive powers to push back the specter of premature death that had plagued humanity for millennia.11 In the process, they enabled us to cross a Great Divide far more momentous than that described by Henderson: a divide that split centuries of slow growth and poor health from one of unprecedented, rapid improvement in the health of humans and the flourishing of their societies.12

What happened, in short, is that reform-minded leaders discovered, harnessed, and expanded the healing powers of the mixed economy. If we are to see what we are losing—and, even more important, understand what made such breakthroughs possible, not just in the United States but in all countries that crossed the Great Divide—we need to look back at the forgotten story of progress’s visible hand.

The Health of Nations

Adam Smith wrote The Wealth of Nations. Yet it is in the health of nations that we can see most clearly why the modern mixed economy of combined public and private initiative represents such a powerful technology of progress. The fortunate constellation of countries able to harness the force multiplier of private markets and public authority experienced nothing less than a revolution in human flourishing. And they did so because government stepped in to translate increased wealth and improved science into rapid and sustained advance in the health of nations.

In many ways, the thirty-year growth in life expectancy experienced by Americans in the twentieth century understates just how profound the shift has been. To see more than a 50 percent increase in life span within a society requires truly staggering declines in the chance of death, especially in the earliest years of life. At the outset of the twentieth century, one in every ten American infants died before their first birthday, roughly the same share that die in contemporary Liberia.13 In some US cities, the ratio was a heartbreaking three in ten.14 Things were not much better in the countryside. The problem afflicted rich and poor, prominent and obscure alike.15 Thomas Jefferson lost five of the six children he had with his wife, Martha, who died after giving birth to the last. Lincoln saw two of his four sons die in childhood; a third died at age eighteen, six years after Lincoln’s assassination. Three of Rutherford Hayes’s eight children died before age two.16

By the end of the twentieth century, however, infant mortality in the United States was around 7 in 1,000, or 0.7 percent—higher than in peer nations, as we have seen, but more than 90 percent lower than a century earlier.17 And while the decline in infant mortality was the largest driver of improving life expectancy in the early decades of the twentieth century, as the century wore on, death rates dropped sharply for older age groups as well.18 If you had filled a room with forty-two representative Americans in 1900, the chances were that one of them would have been dead by the end of the year. By the end of the twentieth century, you would have had to pack three times as many people into the room, 125, to have the same chance of someone dying within the year. (To ensure comparability, these figures are adjusted for the changing age distribution over this period.)19
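
To make the arithmetic behind that illustration explicit (a rough check that treats the room sizes as approximate age-adjusted annual death rates, not as additional data): one expected death among forty-two people implies

\[
\frac{1}{42} \approx 2.4 \text{ percent chance of death per person per year in 1900,} \qquad \frac{1}{125} = 0.8 \text{ percent in 2000.}
\]

The per-person risk fell to roughly a third of its 1900 level, which is why the room must hold about three times as many people to yield the same expected single death within a year.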

This advance deserves to be seen as the greatest positive development of the twentieth century. Monetized as best we can, the increase in human longevity over this period, taken on its own, was worth at least as much to society as the spectacular growth of national income. If we could have somehow achieved these longevity increases without becoming a single dollar richer, according to influential calculations by the economist William Nordhaus, Americans would still be at least twice as well off economically, per person, as we were a century ago. Add in the more general increase in health, and the gains implied by a health-to-dollars calculation are staggering. The health of rich nations is worth at least as much as the wealth of rich nations.20

Yet even figures like these seem inadequate to convey how much better off we are for our longer lives; how much it means to be able to enjoy life and contribute to society for a longer period; how much more fortunate we are not to lose children in infancy or parents in early life or siblings in middle age. Perhaps more than the monetary total that Nordhaus estimates, the true value of our improved health is conveyed by his observation that had he, born in 1941, “experienced the 1900 life table, the odds are long that this paper would have been written from beyond the grave.” And it all happened because of the mixed economy.

Cleanup Time

Infants and young children were the first to benefit. In the first four decades of the century, as the economist David Cutler has shown in a series of pathbreaking essays, life expectancy in the United States increased by sixteen years. Somewhere between four and five of those additional years—more than a quarter of the total—resulted simply from children living past their first birthdays; around the same amount reflected children’s reduced risk of dying between their first birthday and their fourteenth. In the first decades of the last century, improved mortality was mainly a story of more and more children living into adulthood.21

And this in turn was a story of fewer and fewer children dying from infectious diseases. If you look at the mortality tables of the early twentieth century, what will strike you is how infrequently people died from the causes that kill them now. Cancer, cardiovascular disease, kidney disease, even low birth weight—all of these were remote dangers compared with infectious illnesses such as pneumonia and diarrheal diseases, which alone were responsible for about half of infant deaths.22

What did government do to save so many young lives? More than anything else, it cleaned up milk and water. The leading source of deadly infections among children was what infants drank when they did not drink breast milk: water, but especially cow’s milk. In the late nineteenth century, public health experts noticed that fewer infants died during wartime sieges—even as everyone around them was at greater risk. The reason was that their mothers were more likely to breast-feed them. Breast milk was not contaminated with bacteria; cow’s milk and water were.23

This observation was just one among many that bolstered support for what we know now as the “germ theory of disease”: the revolutionary idea that disease comes from microorganisms introduced into the body. Though the germ theory was accepted widely only after the initial health breakthroughs, the growing understanding of infection and disease—coupled with increased education and income—helped change private behavior and motivate philanthropic campaigns.24 But only government authorities with the power to restructure markets and compel behavior could translate this knowledge into sustained social progress.

Though taken for granted today, making milk safe and cleaning up water supplies were herculean efforts, involving massive investments of public dollars and new laws mandating that farmers, milk distributors, and other private actors change their behavior. No city required milk pasteurization at the beginning of the twentieth century. By the early 1920s, virtually all the largest cities did, and many offered “milk stations” where poorer residents could buy clean milk. Infant mortality plummeted.25

The change was equally dramatic in municipal water supplies. While cities had made strides in moving garbage and human waste away from urban dwellers, their water systems were pulling that refuse back in: Many cities poured sewage into the same lakes and waterways from which they drew water, and even those that did not still failed to clean the water before pumping it into homes.26 Private citizens and entrepreneurs—families, factories, small-scale providers of refuse disposal—dumped human and other waste in water sources as well. Filtration, chlorination, rerouting sewage, and other measures to purge municipal water of contaminants deserve recognition today as (in the words of health economists David Cutler and Grant Miller) “likely the most important public health intervention of the 20th Century.”27 Effective government action saved millions of lives.28

This spectacular achievement was not a result of better medical care. The sophisticated medical interventions that define our present era have done much to reduce mortality and morbidity, especially at later stages of life. Yet medicine was not the major factor in the initial huge fall in the chance of death in industrializing nations.29 More surprising, perhaps, neither was industrialization itself. Contrary to common perception, the growth unleashed in the nineteenth century did not result in significant health gains.30 Though increased wealth and education were generally good for health, industrial development also brought with it increased concentration of growing populations in crowded urban centers, where, in the words of one historian, communities were plagued by “four Ds”: disruption, deprivation, disease, and death.31 Those frightening infant mortality numbers—three in ten infants dead within a year of birth, higher than anywhere in the world today—suggest just how threatening the four Ds were.

To be sure, the economic growth unleashed by early industrialization created resources for new public and private efforts to improve health. But the plummet in the rate of premature death required a new capacity to harness that increased income to tackle the threats to life lurking all around.32 When this capacity was realized, however, the results were profound. In cities that cleaned up milk and water, the retreat of death’s shadow was sudden, discontinuous, and rapid. People were significantly richer, on average, in cities than in the countryside, but before big investments in public health, death rates were higher in urban centers; afterward, death rates converged.33 As three economists conclude after a careful review of existing studies, reduced mortality “comes from institutional ability and political willingness to implement known technologies, neither of which is an automatic consequence of rising incomes.”34 In fact, we could say just as easily that improved public health fostered economic growth as claim the opposite. By increasing the size and productivity of the working-age population, the public health measures pursued by government in the early twentieth century were a major cause of the rapid growth that occurred.35 Effective governance made a huge difference.

The Drugs That Changed the World

If the gains in health engineered by public leaders before the 1940s seem like low-hanging fruit, the next acts in the mortality revolution showed that advanced nations could do much more than reduce disease through sanitation. By promoting medical science, acting on it, and spreading access to its fruits, they could conquer illnesses that had plagued humanity for centuries.

Antibiotics and vaccines are the heroes of this act, but they are misunderstood heroes. Today we think of these as routine drugs, associated with private pharmaceutical companies. In fact, both were largely products of the combined energy of scientists—usually publicly funded—and government agencies. More important, neither would have had the positive effects they did if not for extensive public action and substantial government constraints on individual freedom.

In one form or another, vaccination has been around for a long time. In 1721 the Reverend Cotton Mather (of Salem witch trials fame) urged Boston doctors to deliberately infect patients with mild versions of smallpox to reduce the toll of a major epidemic (roughly 6,000 cases and 850 deaths in a city of just over 10,000 citizens). His slave had told him about the millennium-old practice originating in Asia. In 1796 an English doctor developed the world’s first vaccine for smallpox; unlike deliberate infection with smallpox itself (the practice known as “variolation”), it involved much lower risk.36 Almost immediately, European governments and the fledgling American Republic pursued large-scale vaccination campaigns. Thomas Jefferson, his life shaped tragically by infectious disease, took such an interest in vaccination that he personally inoculated much of his own family, came up with a means of transporting the virus safely, promoted vaccination as a public health measure, and insisted that Lewis and Clark take vaccine with them to inoculate Indians during their expedition.37

It was not until the late 1800s, however, that vaccines began to be developed for a range of deadly and debilitating conditions. Louis Pasteur, the French microbiologist who showed that heating liquids kills the microbes that spoil them (the process now called pasteurization), was a pioneer, developing successful inoculating agents for anthrax and rabies.38 Other vaccines—for diphtheria, cholera, pertussis, tuberculosis, tetanus, polio, measles, mumps, rubella, and many diseases now all but forgotten—followed. Virtually all were promoted by large-scale public funding of research and development.39 The exception that proved the rule was the polio vaccine, developed in 1955, which relied on unprecedented philanthropic activism by the March of Dimes. But private drug companies never played a major role, and today they shy away from the area because it is so much less profitable than other fields of drug development.

When it came to vaccines, the free market failed. If John Doe got vaccinated, that was an enormous benefit to everyone he might otherwise infect—yet this social benefit was not one that John had a direct personal interest in considering. Perhaps as important, there were the inevitable free riders: people who recognized that if everyone else was vaccinated, they could go without and avoid the small but real risk vaccines posed. But less-than-universal vaccination meant a much higher risk of disease.40 Some people had to be compelled to be vaccinated. In the United States and elsewhere, they were—most effectively, by requiring that children receive vaccines when entering public school.41 Public authority, not the market, was key. The same has been true of developing nations in recent decades, as the usually market-championing World Bank concludes: “Had it been left to private markets during the last few decades, it is inconceivable that today some 80 per cent of the world’s children would be immunized against the six major vaccine-preventable childhood diseases.”42

Even more so than vaccines—many of which were developed after the diseases they targeted had declined greatly—antibiotics were responsible for the substantial fall in the chance of death that occurred in the middle of the twentieth century.43 Their broad use is a now-forgotten story of the public mobilization of science and industry in the midcentury United States.44

Today antibiotics are losing ground to drug-resistant bacteria—itself a failure of coordinated action to reduce the indiscriminate use of antibiotics and to develop new drugs as private companies have deserted the field for more lucrative pastures.45 When they achieved wide availability at midcentury, however, they were a medical revolution. Antibiotics such as penicillin prevented death at all age levels, including among the elderly, who succumbed frequently to influenza and pneumonia. In contrast with the early twentieth century, midcentury gains in life expectancy occurred across the age distribution, reaching old as well as young.

The first class of the new miracle drugs was sulfonamides. Developed in Europe, they were soon overshadowed by penicillin and other broad-spectrum antibiotics, which had fewer side effects. Still, sulfa drugs ushered in the antibiotic era. As it happened, they also ushered in modern pharmaceutical regulation, when faulty manufacturing of sulfonamide led to more than a hundred deaths in 1937 and helped galvanize Congress into greatly expanding the powers of the US Food and Drug Administration.46

The role of the federal government in developing penicillin was even more direct. In essence, federal officials created a Manhattan Project for penicillin, comparable in vision, if not scope or secrecy, to the rush to develop the atom bomb during World War II. The effort relied on the free exchange of scientific information, cooperation across scores of public agencies and private companies, public investment in research and development, and even coordinated production of the drug itself. The Scottish bacteriologist Alexander Fleming’s famous 1928 discovery of penicillin occasioned no real interest from private companies. It was up to scientists at Oxford and elsewhere to push forward the refinement of the mold into a usable antibiotic. Federal scientists working for the US Department of Agriculture did much of the pioneering research, quadrupling yields in a matter of months. Yet it was the US War Production Board, working with other public agencies and leading private companies, that in the four short years between 1941 and 1945 transformed penicillin from a “laboratory curiosity” into a “mass-produced drug.”47

In contrast to today’s preferred model of drug development, patents were not crucial to what is still the greatest pharmaceutical breakthrough of all time. Fleming himself never patented penicillin, and the US federal government patented many of the production processes and shared them freely with companies.48 But under this open-source model, private industry thrived. Following the wartime production of penicillin, the firms involved adapted the new production strategies to the creation of new antibiotics, related classes of drugs, and brand-new compounds. As one historian concludes, “The experience and technology garnered from the government-coordinated development of penicillin were significant and vital predecessors to the biotechnology revolution.”49

Insuring Health

The final act in the longevity explosion in advanced societies began before the second ended. Its hallmark was expanded government efforts to broaden the quality of and access to medical care. Here the landmarks are more familiar: public investments in the infrastructure of care and in medical research and education, the creation of programs of national health insurance, and reforms designed to slow the increase of costs while safeguarding health.

Yet two points about these developments are less well understood. The first is that they, too, have resulted in spectacular gains. The huge amount of waste and inefficiency in American health care—exemplified by our declining performance on key indicators of health even as we spend vastly more than other rich nations—should not blind us to the enormous improvement in the treatment of conditions that once spelled rapid death, most notably, cardiovascular disease.50 Since the 1960s, the mortality revolution in rich nations has occurred primarily among the aged. Life expectancy at age sixty-five barely budged between 1900 and 1950, but shot upward between 1950 and 2000. In the last few decades of the twentieth century, around three-quarters of gains in life expectancy at birth were due to increasing years of life after age sixty-five.51

Some of this improved longevity, to be sure, was due to interventions that helped the young. After all, today’s elderly received pre-1950 medical care in their youth, and we know that early life experiences powerfully affect future health. But much of the improvement was due to better medical treatment of the aged when they became sick.52 What is sometimes called the “medicalization” of health—the development of new treatments and technologies and improved coordination of and access to care—drove the change as it hadn’t before midcentury.

The second underappreciated feature of this third act is that it, too, was very far from a natural market development. The expansion of medicine’s capacity and broadening of access to sophisticated care were results of private activity stimulated by public authority. This was true even in the United States, where private funding of medical care remains substantially greater than in other rich countries (but still less than half the total when you take into account public employees’ health benefits and tax breaks for private health insurance). Consider just a few of the main US federal policies of these decades: investments in hospital construction and medical research and development (embodied in the National Institutes of Health), encouragement of private health insurance through federally sanctioned collective bargaining and tax breaks worth eventually hundreds of billions of dollars a year, and the passage of Medicare for the aged and Medicaid for the poor in 1965.53 In all these ways and others, improved health after midcentury depended on government.54

The fruits of these early and extensive investments are countless. The MRI (magnetic resonance imaging) emerged from a series of National Science Foundation grants starting in the mid-1950s. The laser, vital to medical practice as well as to consumer electronics and much else, grew out of military-funded research.55 In drug development, a 1995 investigation by researchers at the Massachusetts Institute of Technology (MIT) found that government research had led to eleven of the fourteen most medically significant drugs over the prior quarter century.56 Another study showed that public funding of research was instrumental in the development of more than 70 percent of the drugs with the greatest therapeutic value introduced between 1965 and 1992. (Most of the rest received public funding and research during their testing in clinical trials.)57 The same is true of almost all the biggest medical breakthroughs of recent decades: According to a 1997 study of important scientific papers cited in medical industry patents, nearly three-quarters of those funded by American (rather than foreign) sources were financed by the federal government.58

Of course, the United States did not enact national health insurance in the middle part of the twentieth century, as most advanced industrial countries did. But it followed the model of other rich nations by constructing its own version of national health insurance for the group most dependent on modern medical science: those older than sixty-five. And the United States actually led the rich world in educational and regulatory efforts to reduce hazardous health behaviors.59 The highly successful campaign against smoking was a public campaign: Tobacco companies fought tenaciously against efforts to inform consumers about the risks of smoking or shift the societal costs of smoking onto tobacco companies and their customers through taxation and regulation.

Also important to this revolution in public health were broader public efforts to increase economic security and reduce inequality. Though research in this area is fraught with methodological challenges, evidence suggests that job loss and other forms of insecurity reduce life expectancy. According to one careful study, job displacement leads to a 10 percent to 15 percent increase in workers’ chance of death for the next twenty years: a worker displaced in midcareer can expect to die about two years sooner than one not displaced.60 The recent economic crisis has provided social scientists with a chilling new look at how extreme financial events affect the chance that people will take their own lives. The conclusion: Across the American states, suicide rates rose about 1 percent for each 1 point increase in the unemployment rate.61 For all these reasons, it is not so surprising that, as one analysis sums up the emerging evidence, “the welfare state is good for life expectancy.”62 Big government and long lives go together.63

Prosperity Found

What is true of health is also true of other measures of human welfare: income and standards of living; education and scientific progress; reductions in economic hardship, insecurity, and inequality. Indeed, a strong case can be made that the twentieth century witnessed greater progress on key dimensions of social well-being than did all the centuries of human experience before it combined.

This claim might seem like hyperbole, or dismissive of the great tragedies of the century. But the point is not that the twentieth century was a magical period in which all nations propelled themselves into peaceful affluence. To the contrary, states and markets did much damage as well as much good, and the gains were far from automatic. They required the right conditions: democratic capitalism that combined publicly responsive state authority open to scientific expertise with well-regulated markets and vibrant voluntary sectors.64 But where these conditions arose, where the mixed economy came to prevail, the gains were remarkable.

Of Riches and Residuals

Put yourself for a moment in the shoes of an American of the late nineteenth century. You would soon live in the richest nation in the world (the United States was just surpassing the United Kingdom in per capita income), and the first industrial revolution had already occurred.65 Steam-powered railroads, mass-production factories, and the first forms of mechanized agriculture had brought to a close centuries of anemic growth, during which incomes barely grew fast enough to sustain an expanding population, and periodic plagues and crises wiped out these small gains seemingly overnight.66 In escaping this Malthusian misery, you could be thankful.

Yet for all this, your life remained precarious and, in all likelihood, grim. Death and illness, we have seen, were all around you. Your job, most likely on a very cash-strapped farm, was probably backbreaking and paid little. Despite increased industrial output and the expansion of cities, the United States was still overwhelmingly rural and agricultural. Most farms barely provided subsistence, never mind a surplus.67 If you were lucky enough to own a home, as the economist Robert Gordon reminds us, it would be “not only dark but also smoky” from cooking fires, candles, and lamps. It wouldn’t have running water, much less indoor plumbing, so you would be making lots of urgent trips outside and hauling lots of heavy water inside. If you lived in or ventured into cities, you would encounter a different but hardly more pleasant problem: streets filled with manure from the horses that were the only practical means of transporting goods and people short distances.68 And unless you were rich, you would live—by today’s standards—in abject poverty. Average income per head in 1900 was around $5,000 a year (in 2015 dollars), roughly the contemporary level for Bolivia.69

That all changed in the twentieth century, and quickly. In a handful of decades, the United States was electrified, clean indoor plumbing became nearly universal, whole dwellings could be heated without fire and (by the 1950s) cooled using the same basic technology employed to preserve food, urban mass transit became common and automobiles ubiquitous, farms emptied out while cities and factories filled up. And incomes began to grow and grow and grow.

Before his death in 2010, the British economist Angus Maddison painstakingly assembled data on income levels across the world, going back in some cases to the year AD 1. What these data show is what we see with life expectancy: an almost flat line until (in world-historical terms) the very recent past, when incomes in rich countries suddenly shoot upward. The American data do not go back to AD 1, of course, and there are controversies over the accuracy of particular estimates. But all the sources show the same basic picture: Incomes took off around the dawn of the twentieth century and, with the visible exception of the Great Depression, skyrocketed with little interruption—until the 2008 financial crisis. By the end of the twentieth century, Americans were producing and consuming six times as much per person as they had in 1900.70

They were doing so because they were becoming vastly more productive. People with the same basic mental and physical capacities could produce orders of magnitude more in a given amount of time. In the United States in 1870, an average hour of work produced just over $2 in GDP. By the end of the twentieth century, the average amount of GDP produced per hour of work was over $34. In other words, labor productivity increased more than fifteenfold, with the strongest growth occurring in the middle of the twentieth century, when real productivity rose regularly by 2.5 percent to 3 percent a year.71
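
These benchmarks can be reconciled with simple compound-growth arithmetic (a back-of-the-envelope sketch using only the figures above):

\[
\left(\frac{\$34}{\$2}\right)^{1/130} - 1 \;\approx\; 1.022 - 1 \;=\; 2.2 \text{ percent per year,}
\]

so a seventeenfold rise in output per hour over the 130 years from 1870 to 2000 implies average annual productivity growth of a bit over 2 percent, consistent with midcentury peaks of 2.5 to 3 percent and slower growth in the decades before and after.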

For centuries before this great productivity leap, economic thinkers had seen growth as tied to the abundance of fundamental factors of production: land, labor, capital—the fuel, if you will, in the prosperity machine.72 Growth required increasing inputs of one or more of these factors. In the twentieth century, however, factors of production proved far less important than technologies of production, fuel less important than the efficiency of the machine burning it. Writing at midcentury, the MIT economist Robert Solow made a basic but profound observation for which he would later win a Nobel Prize: The meteoric growth of the first half of the twentieth century could not be explained by the increased use of labor or even capital. It had to be the result primarily of increased productivity—changes that allowed us to get more out of a given input of labor or capital. Solow ascribed this startling rise in productivity to “technological progress in the broadest sense,” by which he meant improved knowledge, machinery, techniques, and skills—basically, anything that transformed a given lump of labor or capital into a final product society could enjoy.73

A year after introducing this simple model of growth, Solow did some calculations with the best data available at the time and came to a startling conclusion: Between 1909 and 1949, essentially all of the increase in US productivity was due to technological change, broadly understood. More precisely, 12 percent of the doubling of workers’ productivity over this period resulted from increased use of capital, while 88 percent reflected technological changes that allowed workers to do more with that capital.74 It was not even close.
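
The logic can be stated in the growth-accounting notation economists now use (a standard textbook sketch, not Solow’s exact specification). Write output per worker as \(y = A k^{\alpha}\), where \(k\) is capital per worker, \(\alpha\) is capital’s share of income, and \(A\) captures technological progress in the broadest sense. Taking growth rates,

\[
\frac{\dot{y}}{y} \;=\; \alpha\,\frac{\dot{k}}{k} \;+\; \frac{\dot{A}}{A},
\]

the residual \(\dot{A}/A\) is simply whatever growth in output per worker remains after subtracting the measured contribution of capital deepening. Solow’s calculation attributed about 12 percent of the 1909–1949 doubling to the capital term and the remaining 88 percent to the residual.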

Solow’s findings are among the most famous in economics, and his 88 percent estimate even has a name, catchy by the standard of economics: “the Solow residual.” Yet the implications remain poorly appreciated even today.75 What Solow demonstrated is that the main reason we are so much richer than those living in 1900 is not that we work harder or somehow have more capable brains. Nor is it because masters of industry and finance are pouring more capital into the economy. Instead, we have figured out how to produce vastly more with our work and capital. As Solow’s contemporary William Baumol expressed it in 2000, almost all of our current output—Baumol calculated 90 percent, a tad higher than Solow’s finding—is a result of “innovation carried out since 1870.”76

What does “innovation” mean in this context, and where does it come from? Here we enter a gray area for economists. Everyone agrees that four factors are crucial: the advance of knowledge (ideas), the transmission of knowledge (education), the application of knowledge (technology), and the political and social structures that make all these work (institutions). But uncertainty remains about what drives the virtuous institutional circle that allows better ideas, transmitted through education and embedded in technology, to progress so rapidly.

One thing is clear: This virtuous circle is fundamentally a social product. Individual investments and inventions are important, but they don’t arise without key social institutions that develop and disseminate knowledge and encourage improvements in economic production, from research institutions and educational systems to public economic policies.77 Decades of research on scientific and technological progress, for instance, have shown that big breakthroughs are typically made by multiple innovators at the same time, often with each innovator ignorant of the others.78 From the profound (calculus, DNA’s structure) to the practical (the steam engine, the polio vaccine)—two or more (frequently many more) people working on their own come upon the same discovery around the same time. One revealing contemporary indicator: Upward of 98 percent of patent lawsuits are not against copycats but against other inventors who claim to have gotten there first.79

It’s not that anybody can come up with a valuable innovation. It’s that the conditions have to be right for a certain class of somebodies in the right places with the right skills to do so. Most advances are tweaks of existing ideas and technology. Even the biggest build on an enormous stock of existing knowledge.80 This stock is precisely what the Solow residual highlights: the prior advance of “technology in the broadest sense.” As the economic historian Joel Mokyr captures it, the Solow residual is as close as economic reality gets to a “free lunch”—that is, an “increase in output that is not commensurate with the increase in effort and cost necessary to bring it about.”81 Only it’s not free. It is paid for by the collective efforts that make it possible: today and in the past.

The Political Economy of Prosperity

Around the time of America’s great growth explosion, a Jewish émigré left Russia in the wake of the revolution. Simon Kuznets came to New York to join his father, but the timing of his arrival in 1922 could not have been more propitious for the breakthrough in economic knowledge that he would soon create.82 At Columbia University, where he sailed through graduate training in economics, he met Wesley Mitchell, the head of the newly created National Bureau of Economic Research. The NBER was doing for economics what similar institutions were doing for the natural sciences, fostering improved scientific practice (better data, better methods) with an eye toward practical applications (better policy).83

The innovation that Mitchell sought—and which Kuznets achieved—was an accurate measure of the size of the US economy.84 Put more technically, Mitchell wanted to know what our gross national product was, a statistic that could underpin many others: income per capita, the size of different sectors of the economy, the scope of government spending, and so on.

Now that GNP—or, more commonly today, GDP, the total domestic economy—is one of those stand-alone acronyms used without explanation, it is easy to neglect how complex and valuable it is to add up the total value of all goods and services that a nation produces. As late as the 1930s, presidents Herbert Hoover and then Franklin Roosevelt had only a series of incomplete statistics as their compass, from stock market prices to freight car loadings.85 It may not have been the “economic dark ages,” as one recent celebration of GDP puts it, but the light that leaders could use to illuminate a huge and growing national economy that self-evidently required greater understanding and oversight was appallingly dim.86 The man who ended up brightening that light was Simon Kuznets.

He was hardly alone in his quest. Pioneering thinkers in Britain had made the first intellectual steps toward national income accounts long before Kuznets became involved. In many ways, the development of GDP followed the model of penicillin, albeit on a smaller scale. The NBER did some of the first work. Then the US Department of Commerce commissioned Kuznets to turn these early attempts into official, comprehensive statistics. From 1942 to 1944, a period when monitoring industrial output was particularly crucial, the Russian transplant worked for the US War Production Board.87 As this close intermingling of public authority and private scholarship suggests, Kuznets’s fascination with statistics was not primarily intellectual. It was practical. He wanted to give decision makers the information they needed to manage the macroeconomy more effectively.

A quarter century later, Kuznets won the Nobel Prize. The honor, however, was not for developing the measures for which he is now best known; it came instead for what he did with those measures: namely, show that the rich world had entered a qualitatively new economic epoch, the era of what he called “modern economic growth.” In his Nobel address, Kuznets described this new reality as a “controlled revolution”—a revolution because it featured a massive step-up in growth rates accompanied by unprecedented structural shifts; a controlled revolution because ultimately this growth was restricted to a small circle of nations that had come up with social, political, and economic structures that promoted rapid economic transformation.88

Kuznets’s lifework was figuring out what animated this revolution, so different from the disastrous one he’d left behind in Russia. He believed economists could find the answer only if they looked beyond the economy. In a 1955 address celebrating his presidency of the American Economic Association, Kuznets told his colleagues, “If we are to deal adequately with processes of economic growth . . . it is inevitable that we venture into fields beyond those recognized in recent decades as the province of economics proper. . . . Effective work in this field necessarily calls for a shift from market economics to political and social economy.”89

A half century later, we understand far more about “the political and social economy” of modern growth. Most recently, the economist Daron Acemoglu and the political scientist James Robinson synthesized and expanded this work in their landmark book Why Nations Fail.90 Though the title focuses on the negative, the argument focuses on the positive: What allowed countries like the United States to become prosperous and productive, while most other nations languished in poverty, sickness, and dysfunction? The switchman in their account is a nation’s collective institutions, especially its political system. Political systems where economic and political leaders run the economy like a personal ATM—“extractive” systems—feature high inequality, slow growth, and short lives. Those where ordinary citizens are part of the deal—“inclusive” systems—feature low inequality, high growth, and long lives.91 Individual talent is necessary, geography helps or hurts, but institutions matter most.92

Phrased this way, it sounds like a tautology (successful nations have institutions that promote success), and to a degree, it is. Yet through a series of clever comparisons and case studies, Acemoglu and Robinson show that modern growth rests on the broad distribution of power and opportunity. “Economic institutions shape economic incentives,” they conclude. “It is the political process that determines what economic institutions people live under.”93

But what exactly is it about inclusive institutions that makes rapid growth possible? Here Acemoglu and Robinson have much less to say. In a tantalizing but brief discussion, they make clear that the critical ingredient, paradoxically, is not private freedom but public constraint:

Secure property rights, the law, public services, and the freedom to contract and exchange all rely on the state, the institution with the coercive capacity to impose order, prevent theft and fraud, and enforce contracts between private parties. To function well, society also needs other public services: roads and a transport network so that goods can be transported, a public infrastructure so that economic activity can flourish, and some type of basic regulation to prevent fraud and malfeasance. Though many of these public services can be provided by markets and private citizens, the degree of coordination necessary to do so on a large scale often eludes all but a central authority. The state is thus inexorably intertwined with economic institutions.94

The Public Foundations of Modern Economic Growth

Acemoglu and Robinson’s observation can help us make sense of a paradox. In nations that became rich in the twentieth century, it wasn’t just the economy that underwent spectacular growth. The size of government did too. Indeed, it grew even more quickly. At the end of the nineteenth century, government spending (at all levels) accounted for around 1 in 10 dollars of output in the wealthiest nations. By the end of the twentieth, it averaged over 4 in 10 dollars, with the public sector accounting for 6 in 10 dollars of GDP in the highest-spending rich nations.95 In some ways, these numbers overstate government’s size, since much of government spending essentially shifts private income from one person or household to another rather than financing goods or services directly. Yet standard measures also understate the size of government, because they don’t include many of the ways that government affects the economy: from regulation, to protections against risk, to the provision of legal safeguards. Suffice it to say that for all their imperfections and ambiguities, the numbers capture something real: Government has grown much bigger.

Before looking at statistics such as these, it might be assumed that poor countries have large governments—at least compared with the size of their puny economies—and rich countries, small governments. After all, there are a couple of big tasks that governments have to do just to remain governments: provide at least a modicum of protection against internal violence and protect against external threats. These are the basic minimums for a state to be a state—an organization, as the German sociologist Max Weber famously defined it, overseeing a defined territory with the legitimate authority to use force.96 And these are pretty much fixed costs, or at least costs that vary with country and population size far more than economic heft. So you might expect that as the economy grows, the relative size of the state shrinks.

But that is not at all what we see in Kuznets’s epoch of modern economic growth. The richest countries expanded their governments the most.97 Yes, rich countries vary in spending, the structure of government, and the like. But in general, they all look very different from poor countries, and they all expanded their public spending dramatically during the period in which they grew most quickly.

Of course, rich states also do much more than spend: They regulate, they have expansive legal systems, they offer implicit and explicit guarantees to private actors that are costless on paper but almost incalculably valuable in practice (such as serving as lenders of last resort). Modern growth occurred where, and only where, activist government emerged.98 And therein lies a big clue as to why the Great Divide was crossed.

Grade Inflation

Perhaps the most important thing that big states do is educate their citizens. Modern growth commences when people rapidly increase their ability to do more with less. They can do more because they know more; they have mastered skills and technologies that allow them to be more productive. Indeed, economists who have tried to break the Solow residual into its constituent parts have concluded that roughly a third of rising productivity is tied directly to increased education, with most of the rest due to general advances in knowledge.99
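
One common way to formalize that breakdown (a textbook-style sketch, not the specific studies cited here) is to add a human-capital term \(h\), driven largely by the quantity and quality of schooling, to the growth-accounting equation:

\[
\frac{\dot{y}}{y} \;=\; \alpha\,\frac{\dot{k}}{k} \;+\; (1-\alpha)\,\frac{\dot{h}}{h} \;+\; \frac{\dot{A}}{A}.
\]

In estimates of this kind, the schooling term accounts for roughly the third of productivity growth mentioned above, with most of the remainder showing up in \(\dot{A}/A\), the catchall for general advances in knowledge.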

Yet even this understates the role of education, since improved education gives an enormous boost to those advances in knowledge, fostering a scientific community and educated population capable of contributing to innovation and growth.100 Formal schooling probably plays the starring role, for instance, in the so-called Flynn effect—the generation-to-generation rise in IQ scores in industrialized societies documented first by the political scientist James Flynn.101 The machinery of our brains hasn’t changed. What has changed is our capacity to use that machinery for higher-level cognitive tasks, especially those involving science and technology. And again and again, studies find schooling to be a major, often the major, factor in this transformation.102 The contribution of education to the rapid growth of the twentieth century is unquestionably much larger than the substantial direct economic returns of additional years of schooling.103 The sustained prosperity of the West was built, in significant part, on the school ground.

And not just any school ground; the public school ground.104 As one conservative notes ruefully, “Schooling is publicly provided by every nation. Such a unique position is shared only by a very limited range of goods: national defense, courts, police, and roads.”105 To this commentator, the hegemony of public schools is strange because, after all, there are and have long been private schools. But for reasons well understood by generally market-appreciative thinkers from Adam Smith to Milton Friedman, mass schooling has never occurred in the absence of government leadership.106 The most fundamental reason is that education is not merely a private investment but also a social investment: It improves overall economic (and civic) outcomes at least as much as it benefits individuals. Ultimately, only the public sector has the incentive (attracting residents, responding to voters) and the means (tax financing of public schools, compulsory attendance laws) to make that investment happen.

It’s no coincidence, then, that nations that became the world’s dominant economic powers also led the world in the expansion of schooling—first, elementary institutions, then high schools, and then colleges and universities.107 Education increases the productivity of workers. No less important, it increases opportunities for those on the periphery of the labor force or outside it altogether. Mass education mobilizes an enormous amount of untapped human talent into the economy; the benefits accrue not only to those who go to school but to society as a whole.

Social Science

What about “the growth of science”? By now, the answer shouldn’t be surprising. Yet it is striking how frequently observers miss even the most obvious examples of government’s pivotal role. As pointed out by Harvey Brooks, former head of the American Academy of Arts and Sciences, “Government has been a much more important and direct influence on the direction and rate of technological innovation than much of our national ideology and public rhetoric would lead us to suppose.”108

We have seen already that most major medical innovations of the twentieth century relied on public authority. Few medicines have had a greater impact on patients’ lives in recent decades than those used to treat hypertension—which emerged out of the Veterans Health Administration.109 Few breakthroughs in medical research have saved more lives than the identification of smoking’s deadly health risks—which again depended overwhelmingly on publicly funded or directed research.110 A short list of nonmedical technologies that originated in government-funded research or contracts would include semiconductors, integrated circuits, nuclear power, satellite communications, GPS, radar, the microwave (used in communication as well as cooking), jet engines, the radio (and its sister technology, television), and a dazzling range of high-tech materials and innovative methods for making them, from titanium to powder metallurgy.111

To these examples could be added countless more. Indeed, it is hard to find a major innovation that did not significantly owe its birth to public support. But consider just two that define our era: computers and the internet. It is hard to imagine our economy without them, in part because they are what Brooks calls “generic technology”—building blocks that create the capacity to do many things more efficiently, catalyzing private-sector innovation. (Think Microsoft, Apple, Google.)112 Moreover, computers and the internet are also characterized by rapid and increasing—or, more technically, exponential—growth in capacity.113 They’re not just fundamental generic technologies, in other words; they enable bigger and bigger leaps in technical capacity long after their initial introduction.

And both owe their origins to government, working with the private sector and scientific institutions during and after World War II. Of the twenty-five biggest advances in computing technology during the critical period between 1946 and 1965—breakthroughs like magnetic core memory, graphics displays, and multiple central processors—the US government financed eighteen.114 After these breakthroughs, it poured increasing money into computer science and electrical engineering, funding 60 percent to 70 percent of university research in these areas from the mid-1970s into the mid-1990s.115 Meanwhile, the US Defense Department literally created the internet, setting up its precursor, ARPANET (the network of the department’s Advanced Research Projects Agency, or ARPA), in partnership with computing centers at top universities.116 Until the internet and computers reached a large scale, the private market largely ignored this burgeoning network. ARPA tried unsuccessfully to find commercial operators to spearhead its development; none saw the potential.

This was the pattern for much of the twentieth century. In the United States as well as other rich nations, government funding was the critical catalyst for basic research and its applications in new technologies. Before World War II, this role revolved around improvements in agricultural productivity, mineral and resource extraction, and applied areas of engineering, including aeronautics.117 During and after the war, government funding poured into both basic and applied research in medicine and advanced technology.118 At least as crucial, government fostered the environment in which scientific progress could take place, not just by spurring massively increased enrollment in higher education but also by supporting individual scientists, many of whom received private research funding only after foundational work with public dollars.

That so many specific breakthroughs can be traced to government support is enormously revealing, because these developments are really just the last stage of a process of scientific progress that, as we have seen, is deeply tied up with effective state action. Even if not a single application of new knowledge could be linked directly to government, the entire infrastructure of scientific and technological advances in the twentieth century depended heavily on public policy. The foundation of Kuznets’s modern economic growth, the roots of Solow’s growth residual, was the infrastructure of scientific progress that government fostered.119

Paving the Road to Prosperity

We come to the last of the major public foundations of modern prosperity: the physical infrastructure that helped make the scientific infrastructure possible and productive. Even before rich countries came to depend on public investments in science and technology for rapid growth, they depended on public investments in national transportation and communications networks that linked together producers and their suppliers and consumers. Among other benefits, public infrastructure facilitated the rapid flow of materials and people across long distances, allowed manufacturers to benefit from economies of scale that supported modern assembly-line techniques, allowed innovations to diffuse and goods to reach far-flung consumers, and created opportunities for workers to find jobs that matched their skills.

No account of modern growth would be complete without mention of railroads. As one historian puts it, their contribution to economic development “is so large and so obvious as to defy accurate calculation.”120 What is often forgotten, however, is that railroads emerged out of huge public inducements, including grants of land and public loans as well as direct spending. Government was even more centrally involved in the creation of waterways that helped make the movement of goods within and between nations cheaper than ever. In the United States, often caricatured as a laissez-faire economy, most trade flowed through navigation routes created by the US Army Corps of Engineers.121 In the nineteenth century, the corps even loaned officers to corporations to help with engineering projects—in effect subsidizing private infrastructure.122

Indeed, government investments in infrastructure lie behind nearly every aspect of what economists call “the second industrial revolution”—the rapid growth from the end of the nineteenth century through the immediate decades after World War II. The economist Robert Gordon has identified five major elements of the second industrial revolution: (1) electricity and its spinoffs; (2) improved transportation, especially the automobile; (3) running water and indoor plumbing; (4) enhanced communications such as the radio; and (5) an enhanced ability to rearrange molecules, including pharmaceuticals.123 All but the last of these—which had much to do with government policy but less to do with infrastructure—rested on public efforts to encourage and spread technological innovations through modern infrastructure.

Electrification, perhaps the most important effort of all, was a policy achievement of the highest order. As the political economists Gar Alperovitz and Lew Daly put it in their powerful book Unjust Deserts, “Our generalized system for producing and distributing power is perhaps what most distinguishes the nineteenth from the twentieth century—and the latter’s great ‘inventions’ and sustained economic growth from the marginal and often reversible gains of all previous periods.”124 Like other profound technological developments, electric power showcases the slow, evolutionary process of discovery. Early observations of electricity date back to at least 600 BC. Our modern knowledge of electricity began to develop in the 1600s. These scientific advances culminated in the late nineteenth century with the development of technologies—by multiple inventors more or less at the same time—that made large-scale production and transmission feasible.125

Yet it would be several decades before electricity transformed the developed world. Exploiting this extraordinary innovation required the creation of a nationwide system for getting power from generators into factories and homes. And who created (or impelled the creation of) this network? Government. It is impossible to imagine the twentieth-century advance in standards of living without electrification. Nor would it have been possible for innovative firms to produce new products reliant on electricity without the basic infrastructure that government laid down.126

On the heels of electrification came another essential form of public infrastructure: modern highway systems. Like consumer products dependent upon a consistent energy grid, the automobile could achieve its transformative potential if and only if there were paved roads going where drivers wanted to go or where goods needed to be taken. Even after automobiles became common, that was frequently not the case, especially in the expansive United States. A young soldier known to his family as “Little Ike” recalled crossing the country in 1919 as part of an army convoy that took sixty-two days to travel from Washington to Oakland.127 Four decades later, that soldier, by then president, led a bipartisan coalition to pass the National Interstate and Defense Highways Act of 1956, the infrastructure equivalent of the federal government’s dramatic investment in science and technology after World War II. The road to mass prosperity was paved by government.

Growing Together

The crossing of the Great Divide is a story of unprecedented progress, of rapidly receding human limitation within the life span of a single human being. “When one looks behind the rather unrevealing economic aggregates,” observed Kuznets in a rare bit of rhetorical flourish, “one finds a stream of technological changes representing the application of new inventions and new knowledge—and contributing, when applied, to further learning, discovery, and invention.”128

Within this stream, however, the institution that bears the greatest credit often gets short shrift: that combination of government dexterity and market nimbleness known as the mixed economy. The improvement of health, standards of living, and so much else we take for granted occurred when and where government overcame market failures, invested in the advance of science, safeguarded and supported the smooth functioning of markets, and ensured that economic gains became social gains. Government did not do it alone, nor were its efforts unprompted. Expanded democracy and civic mobilization were crucial. But government was the pivotal and forgotten partner in the dance of prosperity. It is now time for us to understand why.