8
The Digital Age
(TWENTY-FIRST CENTURY)
It is estimated that in 2020 the world will create and transmit roughly 44 zettabytes of data.1 In numbers, that is 44,000,000,000,000,000,000,000 bytes, each byte carrying the information of one letter or number. Yet soon enough, even these staggering figures will be superseded. The ubiquity and scale of data processing and transmission are utterly mind-boggling. Here are some other estimates as of 2019:
•   1.6 billion Facebook log-ons each day
•   3.5 billion Google searches each day
•   5 billion YouTube videos watched each day
•   4.4 billion Internet users (as of June 30, 2019), of which 829 million are in China, 560 million in India, and 293 million in the United States
•   $5 trillion cross-border settlements daily through the SWIFT banking system2
In the twenty-first century, the world has arrived at ubiquitous connectivity. And there is more connectivity to come with advances in the coverage and capabilities of the Internet and related digital systems such as 5G. The digital revolution is so deep that we can rightly consider our era to be a new seventh age of globalization.
This new age of globalization, like the past ages, will create new patterns of global economic activity, jobs, lifestyles, and geopolitics. This new age arrives together with another fundamental development: a human-caused ecological crisis hitting the planet. The dramatic successes of globalization during the past two centuries have sown the seeds of ecological crisis as well, as human activities—especially fossil-fuel use, farming, transport, and industrial production—have created new and profound challenges of human-induced climate change, the mass destruction of biodiversity, and the dire pollution of the air, soils, freshwater, and oceans. Another set of challenges will arise from further rapid changes in demographics, including the size of the world population, its age structure, its distribution by region, and the share of the world living in urban versus rural areas.
In this century, therefore, we will see the unfolding of several powerful trends: the continued economic rise of China and India, the relative decline of the United States in world output and global power, the rapid population and economic growth of Africa, and a further steep rise in urbanization, along with the ubiquity of digital technologies and their uses. Our social and political systems will be under great stress given the dramatic changes ahead. As the great evolutionary biologist E. O. Wilson has summarized it in his book The Social Conquest of Earth, we exist with a bizarre combination of “Stone Age emotions, medieval institutions, and godlike technology.”
The Digital Revolution
The uptake of digital technologies is the fastest technological change in history. Facebook, Google, and Amazon came out of nowhere to become, in a few short years, among the most powerful companies in the world. Smartphones are only a decade old, but they have already upended how we live. How did this revolution come about?
The roots of the digital revolution can be traced to a remarkable paper by the British genius Alan Turing, written in 1936. Turing envisioned a new conceptual device, a universal computing machine (a Turing machine, as it became known) that could read an endless tape of 0s and 1s in order to calculate anything that could be calculated. Turing had conceptualized a general-purpose programmable computer before one had been invented. His ideas would fundamentally shape the digital revolution to come. Turing also made legendary contributions to the Allied war effort by showing how to use mathematical cryptography and an early electronic device to decipher secret Nazi military codes. (For all his genius and contributions, which make him a towering figure in the history of mathematics, Turing was hounded by British authorities after World War II for his homosexuality, and was possibly driven to suicide; the cause of his death remains disputed.)
The next step in the digital revolution came out of another remarkable mind, that of John von Neumann, who conceptualized in 1945 the basic architecture of the modern computer, with a processing unit, control unit, working memory, input and output devices, and external mass storage. Von Neumann’s computer architecture became the design of the first computers, devices using vacuum tubes to implement the computer’s logical circuitry. MIT engineer and mathematician Claude Shannon provided the mathematics of the logical gates and processing systems to implement Turing’s programs of 0s and 1s on von Neumann’s computer architecture.
The next piece of the puzzle was solved in 1947, with the invention of the modern transistor at Bell Laboratories, which built on advances in understanding of semiconductors gained during the radar work of World War II. The transistor replaced the vacuum tube in Shannon’s logical circuitry and enabled the development of microprocessing units with first thousands, then millions, and then billions of transistors. In the early 1950s, the individual transistors were soldered onto motherboards. From 1958 to 1961, two pioneers, Robert Noyce and Jack Kilby, developed ways to etch transistors and other electronic components directly onto silicon wafers, inventing the integrated circuit. With the integrated circuit, it became possible to put larger and larger numbers of transistors, and therefore faster and more powerful microprocessors, onto a silicon chip. This miniaturization enabled the exponential increases in computing speed, memory, and data transmission that underpin the digital revolution.
As computers began to penetrate scientific, military, and business work, the U.S. Department of Defense asked a basic question: How can computers communicate with each other, and do so in a resilient way that would survive the disruption of networks in a war? The answer was a method for sending data packets (bits of 0s and 1s) between computers over flexible routes, a method known as “packet switching,” which became the basis of the Internet. Initially a U.S. government project, the Internet was later made available to a group of participating U.S. universities before it was opened for commercial use in 1987.
In 1965, Gordon Moore, then at Fairchild Semiconductor and later a cofounder of Intel, the early manufacturer of integrated circuitry that would become the global pacesetter, noticed that the transistor count etched onto a silicon microchip was doubling roughly every one to two years. Moreover, he predicted that the trend would continue for the coming decade. That was a half-century ago, and Moore’s observation and prediction proved prescient. The doubling time for various attributes of microprocessing (speed, transistor count, and cost, among others) continued the pattern of rapid geometric growth until the 2010s, with a modest recent slowdown compensated by gains in other dimensions of computation. Intel’s 4004 microprocessor in 1971 had 2,300 transistors. Intel’s Xeon Platinum microprocessor in 2017 had 8 billion transistors. That is about twenty-two doublings over forty-six years, or a doubling time of roughly two years. Moore’s law is shown in figure 8.1, illustrated by the development of Intel’s microprocessors.
8.1  Moore’s Law in Action: Transistor Count on Intel Chips, 1971–2016
Source: Wikipedia contributors, “Transistor count,” Wikipedia, https://en.wikipedia.org/w/index.php?title=Transistor_count&oldid=923570554.
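The doubling arithmetic behind Moore’s law is easy to check from the two Intel data points cited in the text. A minimal sketch:

```python
import math

# Two Intel data points cited in the text.
t0, n0 = 1971, 2_300            # Intel 4004
t1, n1 = 2017, 8_000_000_000    # Xeon Platinum

# Number of doublings needed to get from n0 to n1, and the implied
# doubling time in years.
doublings = math.log2(n1 / n0)
doubling_time = (t1 - t0) / doublings

print(f"{doublings:.1f} doublings, one roughly every {doubling_time:.1f} years")
```

The exact figure is about 21.7 doublings over forty-six years, consistent with a roughly two-year doubling time.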
Computer capacities soared, and so too did connectivity. The development of fiber-optic cables enabled a vast increase in the speed, accuracy, and scale of data transmission. Microwave transmission enabled a revolution in wireless connectivity, so that mobile devices could connect to the Internet. At the same time, massive advances were made in the ability to digitize materials—text, images, and video—along with countless advances in scientific probes and measurements, such as satellite imagery, gene sequencing, and sensors collecting vast amounts of real-time information from devices.
The uptake of mobile phones parallels that of the Internet in the speed with which a breakthrough digital technology has spread. The mobile phone was invented at Bell Labs in 1973. From a few thousand subscribers in the early 1980s, mobile subscriptions reached 7.8 billion in 2017 (figure 8.2).
8.2  Mobile Subscribers Worldwide, 1990–2017
Source: “Mobile Phone Market Forecast - 2019,” areppim: information, pure and simple, 2019, https://stats.areppim.com/stats/stats_mobilex2019.htm.
The third dimension of the digital revolution is the intelligence of the computers. Once again, Turing took the lead, asking the pivotal question: Can machines have intelligence, and if so, how would we know? In 1950, he posed the famous Turing test of machine intelligence: An intelligent machine (computer-based system) would be able to interact with humans in a way that the humans would not be able to distinguish whether they were interacting with a machine or a human being. For example, the human subject could carry on a conversation with a machine or a person located in another room, passing messages to and receiving messages from that room, without knowing whether the counterpart was a person or an intelligent machine.
Whether or not machines will reach a form of generalized intelligence, there is no doubt that machines are increasingly able to learn and carry out sophisticated tasks once regarded as the unique purview of highly intelligent human beings. Smart machines now routinely translate texts, identify objects in pictures, drive cars, and play games requiring highly sophisticated skills. Marvelous breakthroughs have been achieved in the past decade through applications of artificial neural networks, currently the mainstay of artificial intelligence.
Artificial neural networks process digital inputs and generate digital outputs based on processing of the inputs through a sequence of layers of artificial neurons. As shown in figure 8.3, digital data from the input level are processed one layer at a time until the signals culminate at the output layer, which then selects an action. The input layer may, for example, code the pixels of a digital image such as an X-ray, or code the board position of a game of chess, or code digitally a natural-language text. The output level would then code the machine’s diagnosis of the X-ray, or its chess move, or the computer translation of text into a designated natural language.
8.3  The Basic Structure of Neural Networks for Artificial Intelligence
The key to the “intelligence” of the artificial neural network is the mathematical weight that each artificial neuron attaches to the incoming signals it receives from the layer of neurons below; these weights determine the signal that the neuron sends onward to the neurons in the next layer. The weights may be analogized to the strength of the synapses connecting neurons in the human brain. They define the network of artificial neurons that translates the digital signals of the input layer into the digital signals produced by the output layer.
The mathematical weights are adjusted by “training” the machine using sophisticated algorithms that update the weights assigned to each neuron based on the performance of the machine in a given test run. The weights are adjusted in order to improve the performance of the computer, for example in correctly identifying images, or winning chess games, or translating text. The mathematical process of refining the weights in order to generate high-quality output actions is called “machine learning.” For example, if the machine is being trained to identify tumors in a digital X-ray, the mathematical weights connecting the artificial neurons are adjusted depending on whether the machine’s diagnosis is correct or incorrect on each test image. With enough “supervised learning” of this sort, and using sophisticated mathematical techniques for updating the weights of the artificial neural network, machine learning results in artificial intelligence systems with remarkable skills.
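The mechanics described above can be seen in miniature. The sketch below is a toy illustration, not any production AI system: a tiny two-layer network of weighted artificial neurons is trained by supervised learning to reproduce the logical OR function, with the weights nudged after each labeled example to reduce the error.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: 2 inputs -> 2 hidden neurons -> 1 output.
# Each weight list carries two input weights plus a bias term.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    """Process the input one layer at a time, as in figure 8.3."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return y, h

# "Supervised learning" on labeled examples: the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
lr = 1.0
for _ in range(5000):
    for x, target in data:
        y, h = forward(x)
        d_out = (y - target) * y * (1 - y)             # error signal at the output
        for j in range(2):
            d_h = d_out * w_out[j] * h[j] * (1 - h[j])  # error at hidden neuron j
            w_out[j] -= lr * d_out * h[j]               # adjust hidden->output weight
            for k in range(2):
                w_hidden[j][k] -= lr * d_h * x[k]       # adjust input->hidden weights
            w_hidden[j][2] -= lr * d_h                  # hidden bias
        w_out[2] -= lr * d_out                          # output bias

print([round(forward(x)[0]) for x, _ in data])
```

After training, the network’s rounded outputs match the OR table. The same loop, scaled to many layers and millions of weights, is the machine learning described above.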
With the vast increases in computational capacity and speed of computers represented by Moore’s law, artificial intelligence systems are now being built with hundreds of layers of digital neurons and very high-dimensional digital inputs and outputs. With sufficiently large “training sets” of data or the ingenious designs of self-play described below, neural networks are achieving superhuman skills on a rapidly expanding array of challenges, from board games like chess and Go, to interpersonal games such as poker, to sophisticated language operations such as real-time translation, and to professional medical skills such as complex diagnostics.
The rapidity of advancement has been breathtaking. In 1997, former world chess champion Garry Kasparov played IBM’s Deep Blue computer. To Kasparov’s amazement and consternation, he was beaten by the computer. In that early case, Deep Blue had been programmed for expert play using a vast library of historic games and board positions. Today, a “self-taught” AI chess system can learn chess from scratch in a few hours, with no library of games or any other expert input on chess strategy, and trounce not only the current world chess champion but also all past computer champions such as Deep Blue.
In 2011, another IBM system, named Watson, learned to play the TV game show Jeopardy, with all of the puns and quips of popular culture and natural language, and beat world-class Jeopardy champions live on television. This too was a startling achievement, edging yet closer to passing the Turing test. After the Jeopardy championship, Watson went on to the field of medicine, working with doctors to hone expert diagnostic systems.
More recently, we have seen stunning breakthroughs in deep neural networks, that is, neural networks with hundreds of layers of artificial neurons. In 2016, an AI system, AlphaGo, from the company DeepMind, took on the eighteen-time world Go champion Lee Sedol. Go is a board game of such sophistication and subtlety that it was widely believed that machines would be unable to compete with human experts for years or decades to come. Like Kasparov before him, Lee believed that he would triumph easily over AlphaGo. In the event, he was decisively defeated by the system. Then, to make matters even more dramatic, AlphaGo was itself decisively defeated by a next-generation AI system that learned Go from scratch in self-play over a few hours. Once again, hundreds of years of expert study and competition were surpassed in a few hours of learning through self-play.
The advent of learning through self-play, sometimes called “tabula rasa” or blank-slate learning, is mind-boggling. In tabula-rasa learning, the AI system starts from no information whatsoever other than the rules of the game. It plays against itself, for example in millions of games of chess, and uses the wins and losses to update the weights of its neural networks, thereby learning chess-playing skills from scratch. Remarkably, in just four hours of self-play, an advanced AI system developed by the company DeepMind learned all of the skills needed to handily defeat the world’s best human chess players as well as the previous AI world-champion chess program!3 A few hours of blank-slate learning bested 600 years of accumulated human chess expertise.
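The logic of tabula-rasa self-play can be illustrated with a game far simpler than chess. In the sketch below (a hypothetical illustration; a small lookup table of values stands in for the neural-network weights), the learner is given nothing but the rules of the take-away game Nim, in which players alternately remove one to three stones and whoever takes the last stone wins, and it improves purely by playing against itself:

```python
import random

random.seed(1)

N = 21  # starting stones; the player who takes the last stone wins

# Value table: Q[(stones, take)] = estimated chance that the player to move
# wins by taking `take` stones. These values play the role of the weights.
Q = {(s, a): 0.5 for s in range(1, N + 1) for a in range(1, min(3, s) + 1)}

def moves(s):
    return range(1, min(3, s) + 1)

def greedy(s):
    return max(moves(s), key=lambda a: Q[(s, a)])

alpha, eps = 0.5, 0.3  # learning rate and exploration rate
for _ in range(20000):  # 20,000 games of self-play
    s = N
    while s > 0:
        # Both "players" share the same value table: pure self-play.
        a = random.choice(list(moves(s))) if random.random() < eps else greedy(s)
        if a == s:
            target = 1.0  # taking the last stone wins outright
        else:
            # The opponent moves next; our winning chance is one minus theirs.
            target = 1.0 - max(Q[(s - a, b)] for b in moves(s - a))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s -= a

print([greedy(s) for s in (5, 6, 7)])
```

After self-play, the greedy policy from 5, 6, or 7 stones takes 1, 2, or 3 stones respectively, always leaving the opponent a multiple of four: the known winning strategy, which the learner was never told.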
Technological Advances and the End of Poverty
In 2005, I published a book titled The End of Poverty, in which I suggested that the end of extreme poverty was within the reach of our generation, indeed by 2025, if we made increased global efforts to help the poor.4 I had in mind special efforts to bolster health, education, and infrastructure for the world’s poorest people, notably in sub-Saharan Africa and South Asia, home to most of the world’s extreme poverty. Since the end of the last century, remarkable progress has indeed been achieved. The World Bank data for the period 1990 to 2015 are shown in figure 8.4. In 1990, an estimated 1.9 billion people lived in extreme poverty, equal to 35.9 percent of the world’s population. By 2015, the number had dropped to 736 million, or just 10 percent of the world’s population.5
8.4  Extreme Poverty (Rate and Headcount), 1990–2015
The most important single reason for this progress was certainly the rapid advance of technologies that enabled major achievements in disease control, access to knowledge, financial inclusion (such as the ability to secure loans), and rising incomes and decent work conditions in even the poorest parts of the world. The digital revolution is speeding the uptake not only of digitally related technologies but of other technologies as well, through the rapid dissemination of knowledge, skills, and technical systems facilitated by digital connectivity. The greatest advances in poverty reduction were certainly those achieved by China, where rates of extreme poverty plummeted from an estimated 66 percent of the Chinese population in 1990 to essentially zero by 2020, an economic miracle by any standard!6
Even faster global poverty reduction could have been achieved by now, and can still be achieved in the future, if the global community makes a greater targeted effort. When aid has been targeted to specific challenges of very poor communities—such as disease control, school attendance, and access to infrastructure—progress has been much faster than when progress depends on the general forces of economic growth alone. Still, the progress to date gave the UN member states the confidence to set 2030 as the target date for ending extreme poverty when they adopted the Sustainable Development Goals in 2015. Achieving SDG 1, ending extreme poverty by 2030, is a huge ambition and is indeed out of reach with business as usual, but it could be accomplished if the rich countries took their responsibilities and commitments towards the poor countries more seriously.
Convergent Growth and China’s Surge to the Forefront
The second half of the twentieth century was marked by the shift from overall global economic divergence to overall global convergence. The first 150 years of industrialization widened the gap between the rich and poor countries, and indeed left much of the developing world under the imperial yoke of Europe’s industrial nations. Yet after World War II, the poor regions of the world were able to increase their rate of growth after they achieved independence from colonial rule. Political sovereignty gave the newly independent nations the freedom of maneuver to increase public investments in health, education, and infrastructure. Not all managed well. Some fell into debt, others into high inflation, but many succeeded in building systems of public health and education, and raising the human capital needed for economic growth. On average, the developing countries grew more rapidly in GDP per capita than the high-income nations, so that the relative gap in incomes began to shrink.
This pattern has continued into the twenty-first century, as shown by the International Monetary Fund data in figure 8.5. The growth rate of GDP per capita of the developing countries has generally outpaced that of the developed countries by 1–5 percentage points per year, though by a diminished margin in the 2010s. The faster growth in GDP per capita, combined with a higher rate of population growth, has meant that the share of global output produced by the developing countries has also been rising—the same pattern that we observed in the previous chapter for the period between 1950 and 2008. The shifting proportions of global output of the developed and developing countries are shown in figure 8.6. Whereas the developed countries accounted for 57 percent of world output in 2000, their share declined to around 41 percent as of 2018, according to IMF estimates; correspondingly, the developing-country share rose from 43 percent to 59 percent. Within eighteen years, the two groups had traded places in their shares of global output.
8.5  Growth Rate of GDP Per Capita, Developed and Developing Countries, 2000–2018
Source: IMF World Economic Outlook. Developed countries are the “Advanced Economies,” and developing countries are the “Emerging market and developing countries.” Data are for GDP per capita at 2011 international dollars.
8.6  Trading Places: Shares of Global Output Produced by Advanced and Developing Countries, 2000–2018
Source: International Monetary Fund, World Economic Outlook Database, October 2019.
The most dramatic single change in recent times has been the surge in economic development, and therefore the global role, of China. After nearly 140 years of economic and social strife, marked by foreign incursions, domestic rebellions, civil wars, and internal policy blunders of historic dimensions, China settled down after 1978 to stable, open, market-based production and trade, relying on the catch-up strategy that it had observed to be so successful in its immediate neighborhood. Japan had pioneered the strategy back at the time of the Meiji Restoration in 1868 and the years that followed, and had applied it again in Japan’s post–World War II recovery. Then the four “Asian tigers”—South Korea, Taiwan, Hong Kong, and Singapore—had demonstrated the success of export-led, labor-intensive manufacturing. China embarked on that path decisively with the rise to power of the brilliant pragmatic reformer Deng Xiaoping in 1978.
Following Deng’s sage advice on pragmatic market opening and his famed nonideological approach (“It doesn’t matter whether a cat is black or white so long as it catches mice”), China achieved around 10 percent per year GDP growth for nearly thirty-five years, roughly from 1980 to 2015. Growth at 10 percent per year results in a doubling every seven years. Over thirty-five years, that means five doublings, or a cumulative growth of 2 × 2 × 2 × 2 × 2 = 32 times. In fact, according to IMF data, China grew just under 10 percent per year (9.8 percent), so that cumulative growth came to an increase of twenty-six times, an extraordinary result.7
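The compound-growth arithmetic above is straightforward to verify. The sketch below compares the rule-of-thumb doublings with exact compounding:

```python
years = 35

growth_10 = 1.10 ** years    # exact compounding at 10% per year
growth_98 = 1.098 ** years   # the IMF's 9.8% figure

# The 2 x 2 x 2 x 2 x 2 = 32 shortcut uses the rule of 70 (doubling every
# ~7 years at 10% growth); exact compounding gives a slightly smaller multiple.
print(f"10%: {growth_10:.0f}x, 9.8%: {growth_98:.0f}x")
```

Exact compounding at 10 percent for thirty-five years yields about a 28-fold increase, and at 9.8 percent about 26-fold, the “twenty-six times” cited in the text.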
The result is shown in figure 8.7. Measured at purchasing-power-adjusted prices, China is now the world’s largest economy, surpassing the United States (on the IMF’s measure) in the year 2013, with the gap in China’s favor continuing to widen in recent years. China’s growth has been roughly 3–4 percentage points per year higher than that of the United States (6 percent per annum in China compared with 3 percent in the United States most recently). Note that China’s overtaking of the United States is in aggregate terms. China’s per capita GDP is still only around one-third that of the United States in purchasing-power-parity terms, and roughly one-fifth the U.S. level at market exchange rates and prices. Because China’s per capita income is still far lower than that of the United States and other high-income countries, China still has the opportunity for rapid “catching-up” growth, albeit at a pace slower than during 1978–2015. As China continues to narrow the relative gap in GDP per capita, its economy will become significantly larger than the U.S. economy in absolute size, given that China’s population is roughly four times as large.
8.7  Changing Places: Chinese and U.S. Shares of World Output, 1980–2018
Source: International Monetary Fund, “China: Gross domestic product based on purchasing-power-parity (PPP) share of world total (Percent),” World Economic Outlook (April 2019).
One of the key reasons we should expect China’s continued vitality and rapid economic growth is that China has moved from being an importer of technologies from the United States and Europe to becoming a major technology innovator and exporter in its own right. An example of China’s new technological prowess is in high-speed wireless technology, notably 5G systems. It is the Chinese company Huawei, not a U.S. or European firm, that is leading the rollout of 5G. The United States has expressed alarm at Huawei’s success and has tried to block its access to world markets, accusing Huawei of being a security threat. Yet one cannot help feeling that such claims are merely geopolitics at play. The U.S. government seems to be alarmed mainly by Huawei’s success in a cutting-edge digital technology rather than by any specific security risk. Indeed, the U.S. government has provided no evidence of specific risks in its public campaign against the company.
More generally, China’s efforts at innovation are soaring. Based on key metrics of research and development—including R&D expenditures, the training and employment of technical workers, the number of new patents, and the sales of high-tech goods—China has rapidly become a high-tech world power. Figure 8.8 shows R&D outlays as a share of GDP for the United States, the European Union, and China. It is clear that China’s R&D investments are rising rapidly, overtaking the EU on this measure. It is also clear that venture capital (VC) funds are moving into Chinese companies at a greatly increased rate, with VC investments in China overtaking VC investments in the European Union, as shown in figure 8.9.
8.8  R&D Outlays as a Share of GDP, United States, EU and China
Source: National Science Board, Science and Engineering Indicators 2018 (Alexandria, VA: National Science Foundation, 2018).
8.9  Early- and Later-stage Venture Capital Investments
Source: National Science Board, Science and Engineering Indicators 2018 (Alexandria, VA: National Science Foundation, 2018).
The results are paying off in patents. According to the World Intellectual Property Organization, as of 2017 China became the second largest source of patent applications under the Patent Cooperation Treaty (PCT). In 2017, the United States filed 56,624 PCT applications, followed by China at 48,882, Japan at 48,208, Germany at 18,982, and South Korea at 15,763.8 If we think regionally rather than nationally, we can say that there are now three centers of endogenous growth in the world economy: the United States; the European Union; and northeast Asia, including three R&D powerhouses: China, Japan, and South Korea. For the first time since the industrial revolution, innovation is not centered in the North Atlantic region alone. As during the long stretch of globalization before 1500 CE, we are again likely to see key technologies of the future in a two-way flow between east and west.
The Challenges of Sustainable Development
With convergent growth and falling poverty, the world economy might seem to be out of the woods. Technological advances have put the end of poverty within reach, along with a rebalancing of the international order that is much fairer to the countries outside of the North Atlantic region. Yet complacency would be misplaced, and the rising anxiety levels seen around the world reflect deep reasons for worry. This Digital Age poses at least three great risks.
The first global risk is a dramatic and destabilizing increase in economic inequality at the very time when technology properly harnessed holds the promise of ending poverty. The gains from economic growth are not being evenly shared. Within many countries, including both the United States and China, inequality has soared alongside economic growth. While the earnings of some workers are soaring, especially those with advanced degrees, the earnings of workers whose jobs are being replaced by robots and artificial intelligence are stagnant or falling. While those enjoying a boost in income could, in principle, compensate those falling behind, in fact, there is far too little income redistribution taking place in the United States and many other countries.
The second global risk is a devastating global environmental crisis. Two hundred years of rapid economic growth have unleashed several interconnected global environmental shocks. The first is human-induced global warming, resulting from the massive emission of heat-trapping greenhouse gases into the atmosphere. The biggest culprit is carbon dioxide (CO2) emitted by burning fossil fuels. The second is the massive loss of biodiversity, with an estimated 1 million species under threat of extinction according to a major recent analysis.9 The main culprit in biodiversity loss is the wholesale conversion of land to agricultural production, with so much habitat taken from other species that they are being pushed to the edge of extinction. The third is the mega-pollution of the air, soils, freshwater, and oceans. We are assaulting the environment with industrial chemicals, plastics, and other waste flows that are not properly recycled or reduced in production and consumption.
The third global risk is war, in a world armed to the teeth. War among the major powers might seem unimaginable at this moment, so terrible and devastating would be the consequences. Yet the same was said about the possibility of major war in 1910, on the eve of the First World War. It is widely supposed today, as it was in 1910, that peace among the major powers will be sustained indefinitely into the future. Yet history proves otherwise. Each new age of globalization, with its deep shifts in geopolitical power, has typically been accompanied by war. We will have to make extraordinary peacebuilding efforts in the coming years to avoid the self-defeating patterns of conflict that have been so prevalent throughout history.
These challenges—inequality, environmental crisis, and the fragility of peace—are the key reasons that many scientists, moral leaders, and statesmen have urged the world to adopt the precepts of sustainable development. The concept itself stands for a holistic approach to globalization, one that combines economic growth with social inclusion, environmental sustainability, and peaceful societies. The theory of sustainable development and the history of globalization suggest that market-based growth can never be enough. Since the start of capitalist globalization in the 1500s, the global economic system has been a ruthless, violent affair, not one in which inequality and war were fundamentally solved. And now we have the added environmental challenges that are complex, global in scale, and without precedent for our species. We are endangering the planet in ways we have never done before, without a guidebook on how to move forward.
The Challenge of Inequality
Technological advances contain within them the seeds of rising inequality, as new technologies create winners and losers in the marketplace. The advent of the spinning jenny and the power loom displaced and impoverished multitudes of spinners and weavers in India. The mechanization of agriculture impoverished countless smallholder farmers around the world who desperately fled to the cities to find a livelihood. The introduction of robots on the assembly lines of automobile plants has created unemployment and falling wages for workers laid off from those factories. And now comes the digital economy, with even smarter machines and systems to do the tasks currently carried out by workers. Who will win and who will lose?
Generally, the future labor-market winners will be those with higher skills that machines cannot displace, or with the skills to work alongside the new intelligent machines, such as the tech skills to program the new machines. The losers will be the workers whose tasks are more easily replaced by robots and artificial intelligence. In the past forty years, job losses have been concentrated in the goods-producing sectors, notably in agriculture, mining, and manufacturing. Those job losses will continue in the future. Both agriculture and mining are increasingly being automated, with self-driving vehicles such as tractor-combines and large digging and transport equipment at mining sites. Robots are continuing to replace workers on factory floors in several manufacturing sectors. And it seems clear that other jobs in the service sector will also vanish in the future. Trucks and taxis may well become self-driving, thereby displacing millions of professional drivers. Warehouses are increasingly operated with robots carrying, stacking, and packaging the merchandise. And retail stores are giving way to e-commerce and direct delivery of purchases, again with expert systems and potentially self-driving delivery vehicles.
In recent decades, lower-skilled workers displaced by machines have seen their earnings stagnate or decline, while higher-skilled workers have been made more productive by those same machines and have seen their earnings rise. These trends have been a key reason for the rising inequality of income in many countries, notably including the United States. Yet the ultimate effect of this tendency depends on two additional factors. To the extent that low-skilled workers can gain higher skills through increased education and training, the proportion of the workforce suffering from stagnant or declining earnings can be reduced. And even when market wages are pushed down, governments can compensate for those adverse market forces through increased taxation of those with high and rising incomes and increased transfers to those with low and falling incomes, so that all segments of society share in the gains from technological advance.
The development challenges may also be amplified for the poorest countries in the world, since those countries generally depend on labor-intensive export earnings to finance their future economic growth. Yet the digital revolution is replacing low-cost labor with smart machines. The rapid advances in robotics, for example, are resulting in the automation of jobs in textiles and apparel that in the past were the stepping-stone industries for low-wage countries climbing the ladder of economic development. While the digital revolution will surely help the poorest countries in certain areas—such as low-cost health care, expanded educational opportunities, and improvements in infrastructure—the digital revolution may also cut off traditional pathways for economic development. In that case, global solidarity, wherein rich countries provide added development assistance to enable the poorest countries to invest in the new digital technologies and the accompanying skills, may become vital.
The Challenge of Planetary Boundaries
The environmental challenges may seem even more daunting and, in the view of many observers, insoluble. Is there not an inherent contradiction between endless growth of the world economy and a finite planet? The world economy has increased roughly a hundredfold over the past two centuries: roughly ten times the population and ten times the GDP per capita. Yet the physical planet has remained constant, and the human impact on the environment has therefore intensified dramatically.
One basic calculation puts it this way: The human impact is equal to the population times GDP/population times impact/GDP, sometimes summarized as I = P × A × T, where I is impact, P is population, A is affluence (GDP per capita), and T is technology (impact/GDP).10 What is clear from this equation is that per capita economic growth (a rise in A) or population growth (a rise in P) must lead to a greater human impact (I) on the planet unless offset by an improvement in technology (lower T), in the sense of a lower environmental impact per unit of GDP.
Some kinds of technological advances, such as the steam engine, raise A but also raise T because of greenhouse-gas emissions and air pollution. Other kinds of technological advances, such as improvements in photovoltaic solar cells, raise A and lower the environmental impact per unit of GDP (a fall in T), with a net overall effect of lowering rather than raising the human impact on the planet. Economic growth is therefore sustainable if the rise in P and A are offset by a sufficiently large decline in T—that is, by technologies that lower the impact on the planet per unit of GDP.
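The arithmetic of the I = P × A × T identity described above can be illustrated with a stylized calculation. The figures below are round index numbers chosen for illustration, not measured data:

```python
# Illustrative arithmetic for the I = P * A * T identity.
# All values are stylized index numbers (baseline normalized to 1), not measured data.

def impact(P, A, T):
    """Environmental impact: population * affluence (GDP per capita) * impact per unit of GDP."""
    return P * A * T

# Baseline, two centuries ago (all indices = 1)
I0 = impact(P=1.0, A=1.0, T=1.0)

# Roughly ten times the population and ten times the GDP per capita,
# with no improvement in T: a hundredfold rise in impact.
I1 = impact(P=10.0, A=10.0, T=1.0)
print(I1 / I0)  # 100.0

# To hold impact at its baseline despite that growth,
# T would have to fall to 1 / (P * A) of its original level.
T_needed = 1.0 / (10.0 * 10.0)
print(impact(10.0, 10.0, T_needed))  # 1.0
```

The calculation makes the text's point concrete: growth in P or A raises impact multiplicatively, so only a proportionally large decline in T (the impact per unit of GDP) can keep the total human impact within safe bounds.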
The bad news is that global growth during the past two hundred years has tended to be neutral or increasing in T. Dependence on fossil fuels, land clearing for agriculture, bottom trawling for fish, clear-cutting of tropical hardwoods, and fracking for oil and gas are all examples of technological advances that intensify the human impact on the environment. We have arrived in the twenty-first century, therefore, with a planet at the very limits of habitability as a result of two centuries of rapid growth combined with intensifying environmental impacts.
The good news is that there are plenty of opportunities today for major technological shifts to lower T, the human impact per unit of GDP. These include the shift from fossil fuels to renewable energy (wind, solar, hydro, geothermal, and others), which would provide more energy with lower greenhouse-gas emissions. Another opportunity is the shift in diet from heavy meat eating, especially beef eating, toward the use of more plant proteins, which would improve human health while also reducing the pressures on land for feed grains and pastures. A third opportunity is improved building designs, which can greatly reduce the need for heating and cooling and thereby the demand for energy. A fourth opportunity is precision agriculture, meaning more precise applications of water and fertilizers—for example, through drip irrigation and fertigation (direct injection of the fertilizers via the irrigation system).
The key to sustainability, in short, is the transformation of technologies and behaviors (such as plant-based diets, or choosing walking over driving) that can deliver the same GDP or higher GDP with a lower environmental impact. Recent breakthroughs in technology, such as dramatic cost reductions in photovoltaics, the development of biodegradable plastics, the development of plant-based substitutes for meats, and the improvement of agricultural methods to reduce the use of pesticides, water, and chemical fertilizers, are all examples of trajectories that combine higher GDP with lower environmental costs. Throughout most of history, humanity has been profligate with nature: use it, lose it, and move on. Yet in our time, there is no possibility of simply moving on. We have filled every nook and cranny of the planet and pushed the environmental crisis to a global scale. The scale of the sustainability challenge is therefore unprecedented, threatening all of the planet, and all of humanity, in ways that we have never before faced. We must therefore lower T, our impact on the planet per unit of GDP.
The framework of Planetary Boundaries helps us keep track of the key environmental challenges and the needed technologies and behaviors to address them. In the iconic depiction of planetary boundaries shown in figure 8.10, there are nine main planetary boundaries. Starting from due north and moving counter-clockwise around the circle, the planetary boundaries are climate change (from greenhouse-gas emissions); biospheric integrity (both genetic diversity and functional diversity); land-system change (notably deforestation); freshwater use (heavily related to irrigation); biogeochemical flows (notably nitrogen and phosphorus from fertilizer use); ocean acidification (from the high concentration of CO2 in the atmosphere); atmospheric aerosol loading (from burning fossil fuels and biomass); stratospheric ozone depletion (from the use of chlorofluorocarbons); and novel entities (chemical pollutants including pesticides and plastics).
8.10  Planetary Boundaries
Source: J. Lokrantz/Azote based on Will Steffen, Katherine Richardson, Johan Rockström, Sarah E. Cornell, Ingo Fetzer, Elena M. Bennett, Reinette Biggs, et al. “Planetary Boundaries: Guiding Human Development on a Changing Planet.” Science 347, no. 6223 (2015): 1259855.
These planetary boundaries are threatened mainly by greenhouse-gas emissions, poor agricultural practices and diets, and chemical pollutants and inadequate waste management. All of these problems have technological and behavioral solutions that can raise or sustain output while lowering environmental impacts. Our challenge is to plan carefully and soundly, and then regulate businesses methodically, to diminish or ban those technologies that are exacerbating the environmental crises.
The global challenge is not only the range of changes needed, but also their urgency and global scale. Everywhere we look on the planet we see dire and rising threats. The air across Asia, for example, is chronically polluted from fossil-fuel use and often from biomass burning. Figure 8.11 shows Guangzhou, China, beset by smog. Life-threatening air pollution afflicts major cities around the world.
8.11  Smog in Guangzhou, China
Source: Stefan Leitner. “Guangzhou,” licensed under CC BY-NC-SA 2.0
Figure 8.12, a scene of desperation along the Kenya-Somalia border in the drought of 2011, reminds us of the growing intensity of droughts in many of the world’s most impoverished drylands, creating conditions of famine and displacement that threaten the survival of the poorest of the poor. Figure 8.13 shows vividly the hazards of excessive nitrogen and phosphorus flows from farms to the coasts, in this case in northeastern China. The beaches are covered in algal blooms that will lead to oxygen-deficient waters and a die-off of marine life. Figure 8.14 is a global map prepared by the U.S. space agency NASA. The red coastal areas show the parts of the planet that would be inundated by a six-meter sea-level rise, a scale of sea-level rise that is alas consistent with our current trajectory of global warming.
8.12  Drought in Kenya-Somalia Border Region, 2011
Source: Sodexo USA, “IMG_0748_JPG,” licensed under CC BY 2.0
8.13  Young Boy Swimming in Algal Bloom in Shandong, China
Source: Photo: Reuters/China Daily
8.14  Areas (in red) That Will Be Submerged by a Six-Meter Sea Level Rise
Source: NASA
The Risks of Conflict
The transition from one age of globalization to the next has often been accompanied by war. The passage from the Neolithic Age to the Equestrian Age was marked by cavalry wars arriving from the steppes. The transition to the Ocean Age of global empires was marked by the violence of European conquerors toward native populations and African slaves in the Americas. The transition to the Industrial Age was marked by Britain’s conquests of India and its wars against China, and the mass suffering that ensued. Now the transition to the Digital Age threatens conflict anew, with one of the biggest risks being a possible clash between the two largest economies, China and the United States.
There is, of course, nothing inevitable about such a clash. Indeed, the consequences would be so dire as to make such a conflict almost unimaginable. Yet the structural conditions of our age pose an obvious risk. China is a rising power that will end America’s recent status as the sole superpower. As the political scientist Graham Allison has noted, historical cases in which a dominant power has been challenged by a rising power have raised the risks of conflict.11 Either the dominant power (in the current case, the United States) attacks the rising power (in this case, China) to put down a competitive challenge “before it’s too late,” or the rising power preemptively attacks the dominant power out of fear of otherwise being blocked on its path of growth. These threats ring true. Already, many U.S. politicians speak of China as an inherent threat to U.S. interests, or to U.S. “primacy,” while China not unreasonably views the United States as trying to “contain” China’s progress.
If history provides lessons, it is to think the unthinkable, and then to work assiduously to head off the worst cases. China and the United States are already circling each other warily, each believing the worst of the other. Some Chinese strategists believe that the United States will never accept a strong and powerful China, while some American strategists believe that China is out for world conquest. Both of these views are far too deterministic and pessimistic. We should be endeavoring to cultivate the conditions for trust and peace between these two nations, and indeed among the world’s major powers, rather than standing by and putting our bets on war. How to cultivate peace in the twenty-first century is one of the core questions of the next and final chapter.
Some Lessons from the Digital Age
The very success of economic growth in the Digital Age has laid several traps for an unwary world. The world economy is producing vast wealth, but failing in three other dimensions of sustainable development. Inequalities are soaring, in part because of the differential effects of digital technologies on high-skilled and low-skilled workers. Environmental degradation is rampant, a reflection of a global economy that has reached nearly $100 trillion in annual output without taking care to ensure that the impacts on the planet are kept to a safe and sustainable level. And the risk of conflict is rising, especially given the rapid shifts in geopolitics, and the anxieties that are being created in the United States, China, and elsewhere.
All is not lost—not by a long shot. Humanity has the low-impact technologies (such as renewable energy and precision agriculture) and the policy know-how needed to head off the environmental crises. We also have the benefit of global experience, if we choose to use it, to redistribute income from the rich to the poor, while finding diplomatic solutions to rising geopolitical tensions. We even have a new globally agreed approach to governance—sustainable development—that can provide a roadmap for action. The next and final chapter looks forward to see how we can achieve the goals of prosperity, social justice, environmental sustainability, and peace that all the world has adopted.