1. DEEP IN THE ANTHROPOCENE

1.1. PERILS AND PROSPECTS

A few years ago, I met a well-known tycoon from India. Knowing I had the English title of ‘Astronomer Royal’, he asked, ‘Do you do the Queen’s horoscopes?’ I responded, with a straight face: ‘If she wanted one, I’m the person she’d ask’. He seemed eager to hear my predictions. I told him that stocks would fluctuate, there would be new tensions in the Middle East, and so forth. He paid rapt attention to these ‘insights’. But then I came clean. I said I was just an astronomer—not an astrologer. He abruptly lost all interest in my predictions. And rightly so: scientists are rotten forecasters—almost as bad as economists. For instance, in the 1950s an earlier Astronomer Royal said that space travel was ‘utter bilge’.

Nor do politicians and lawyers have a sure touch. One rather surprising futurologist was F. E. Smith, Earl of Birkenhead, crony of Churchill and the UK’s Lord Chancellor in the 1920s. In 1930 he wrote a book titled The World in 2030.1 He’d read the futurologists of his era; he envisaged babies incubated in flasks, flying cars, and such fantasies. In contrast, he foresaw social stagnation. Here’s a quote: ‘In 2030 women will still, by their wit and charms, inspire the most able men towards heights that they could never themselves achieve’.

Enough said!

*   *   *

Back in 2003 I wrote a book which I titled Our Final Century? My UK publisher deleted the question mark. The American publishers changed the title to Our Final Hour.2 My theme was this: Our Earth is forty-five million centuries old. But this century is the first in which one species—ours—can determine the biosphere’s fate. I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating breakdowns. That’s because of unsustainable stresses on ecosystems: there are more of us, and we’re all more demanding of resources. And—even more scary—technology empowers us more and more, and thereby exposes us to novel vulnerabilities.

I was inspired by, among others, a great sage of the early twentieth century. In 1902 the young H. G. Wells gave a celebrated lecture at the Royal Institution in London.3 ‘Humanity’, he proclaimed,

has come some way, and the distance we have travelled gives us some insight of the way we have to go.… It is possible to believe that all the past is but the beginning of a beginning, and that all that is and has been is but the twilight of the dawn. It is possible to believe that all that the human mind has accomplished is but the dream before the awakening; out of our lineage, minds will spring that will reach back to us in our littleness to know us better than we know ourselves. A day will come, one day in the unending succession of days, when beings, beings who are now latent in our thoughts and hidden in our loins, shall stand upon this earth as one stands upon a footstool, and shall laugh and reach out their hands amidst the stars.

His rather purple prose still resonates more than a hundred years later—he realised that we humans aren’t the culmination of emergent life.

But Wells wasn’t an optimist. He also highlighted the risk of global disaster:

It is impossible to show why certain things should not utterly destroy and end the human story … and make all our efforts vain … something from space, or pestilence, or some great disease of the atmosphere, some trailing cometary poison, some great emanation of vapour from the interior of the Earth, or new animals to prey on us, or some drug or wrecking madness in the mind of man.

I quote Wells because he reflects the mix of optimism and anxiety—and of speculation and science—which I will try to convey in this book. Were he writing today he would be elated by our expanded vision of life and the cosmos, but he would be even more anxious about the perils we face. The stakes are indeed getting higher; new science offers huge opportunities, but its consequences could jeopardise our survival. Many are concerned that it is ‘running away’ so fast that neither politicians nor the lay public can assimilate or cope with it.

*   *   *

You may guess that, as an astronomer, I lie awake at night worrying about asteroid collisions. Not so. Indeed, this is one of the few threats that we can quantify—and be confident is unlikely. Every ten million years or so, a body a few kilometres across will hit the Earth, causing global catastrophe—so there are a few chances in a million that such an impact occurs within a human lifetime. There are larger numbers of smaller asteroids that could cause regional or local devastation. The 1908 Tunguska event, which flattened hundreds of square kilometres of (fortunately unpopulated) forest in Siberia, released energy equivalent to several hundred Hiroshima bombs.

Can we be forewarned of these crash landings? The answer is yes. Plans are afoot to create a data set of the one million potential Earth-crossing asteroids larger than 50 metres and track their orbits precisely enough to identify those that might come dangerously close. With the forewarning of an impact, the most vulnerable areas could be evacuated. Even better news is that we could feasibly develop spacecraft that could protect us. A ‘nudge’, imparted in space several years before the threatened impact, would only need to change an asteroid’s velocity by a few centimetres per second to deflect it from a collision course with the Earth.

If you calculate an insurance premium in the usual way, by multiplying probability by consequences, it turns out to be worth spending a few hundred million dollars a year to reduce the asteroid risk.
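That premium is straightforward expected-value arithmetic: multiply the annual probability by the cost of the consequences. Here is a minimal sketch; the ten-million-year impact interval comes from the discussion above, but the dollar valuation of a global catastrophe is an illustrative assumption, not a figure from this book.

```python
# Expected-loss arithmetic for the asteroid hazard.
# The ten-million-year interval between large impacts is quoted above;
# the ~$100 trillion valuation of consequences is an assumed placeholder.
IMPACT_INTERVAL_YEARS = 10_000_000
CONSEQUENCES_DOLLARS = 100e12          # assumed valuation of a global catastrophe

annual_probability = 1 / IMPACT_INTERVAL_YEARS           # 1e-07 per year
annual_expected_loss = annual_probability * CONSEQUENCES_DOLLARS

# Chance of such an impact within an 80-year human lifetime:
lifetime_probability = 80 * annual_probability           # 8 in a million

print(f"annual expected loss: ${annual_expected_loss:,.0f}")
print(f"lifetime impact probability: {lifetime_probability:.0e}")
```

With these assumptions the expected loss comes out at roughly ten million dollars a year, and the lifetime probability at about eight in a million—consistent with the ‘few chances in a million’ quoted earlier. Valuing the consequences more highly, or folding in the far more frequent smaller impacts, pushes the justified premium up towards the few hundred million dollars cited above.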

Other natural threats—earthquakes and volcanoes—are less predictable. So far there is no credible way to prevent them (or even predict them reliably). But there’s one reassuring thing about these events, just as there is about asteroids: their rate isn’t increasing. It’s about the same for us as it was for Neanderthals—or indeed for dinosaurs. But the consequences of such events depend on the vulnerability and value of the infrastructure that’s at risk, which is much greater in today’s urbanised world. There are, moreover, cosmic phenomena to which the Neanderthals (and indeed all pre-nineteenth-century humans) would have been oblivious: giant flares from the Sun. These trigger magnetic storms that could disrupt electricity grids and electronic communications worldwide.

Despite these natural threats, the hazards that should make us most anxious are those that humans themselves engender. These now loom far larger, and they are becoming more probable, and potentially more catastrophic, with each decade that passes.

We’ve had one lucky escape already.

1.2. NUCLEAR THREATS

In the Cold War era—when armament levels escalated beyond all reason—the superpowers could have stumbled towards Armageddon through muddle and miscalculation. It was the era of ‘fallout shelters’. During the Cuban missile crisis, my fellow students and I participated in vigils and demonstrations—our mood lightened only by the ‘protest songs’, such as Tom Lehrer’s lyrics: ‘We’ll all go together when we go, all suffused with an incandescent glow’. But we would have been even more scared had we truly realised just how close we were to catastrophe. President Kennedy was later quoted as having said that the odds were ‘somewhere between one out of three and even’. And only when he was long retired did Robert McNamara state frankly that ‘we came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped—Khrushchev and Kennedy were lucky as well as wise’.

We now know more details of one of the tensest moments. Vasili Arkhipov, a highly respected and decorated officer in the Soviet navy, was serving as second-in-command on a submarine carrying nuclear missiles. When the United States attacked the submarine with depth charges, the captain inferred that war had broken out and wanted the crew to launch the missiles. Protocol required the top three officers on board to agree. Arkhipov held out against such action—and thereby avoided triggering a nuclear exchange that could have escalated catastrophically.

Post-Cuba assessments suggest that the annual risk of thermonuclear destruction during the Cold War was about ten thousand times higher than the mean death rate from asteroid impact. And indeed, there were other ‘near misses’ when catastrophe was averted by a whisker. In 1983 Stanislav Petrov, a Soviet air defence officer, was monitoring a screen when an ‘alert’ indicated that five Minuteman intercontinental ballistic missiles had been launched by the United States towards the Soviet Union. Petrov’s instructions, when this happened, were to alert his superior (who could, within minutes, trigger nuclear retaliation). He decided, on no more than a hunch, to ignore what he’d seen on the screen, guessing it was a malfunction in the early warning system. And so it was; the system had mistaken the reflection of the Sun’s rays off the tops of clouds for a missile launch.

Many now assert that nuclear deterrence worked. In a sense, it did. But that doesn’t mean it was a wise policy. If you play Russian roulette with one or two bullets in the cylinder, you are more likely to survive than not, but the stakes would need to be astonishingly high—or the value you place on your life inordinately low—for this to be a wise gamble. We were dragooned into just such a gamble throughout the Cold War era. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most European citizens would have accepted, if they’d been asked to give informed consent. For my part, I would not have chosen to risk a one in three—or even a one in six—chance of a catastrophe that would have killed hundreds of millions and shattered the historic fabric of all European cities, even if the alternative were certain Soviet dominance of Western Europe. And, of course, the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat, especially if a ‘nuclear winter’ were triggered.
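The roulette analogy can be made quantitative. A small sketch (the function name is mine) computes the survival odds for a six-chamber cylinder; note how two bullets reproduces the ‘one out of three’ risk that Kennedy reportedly cited, and how the odds erode if the gamble is repeated—as it would be across successive Cold War crises.

```python
from fractions import Fraction

def survival_probability(bullets: int, spins: int, chambers: int = 6) -> Fraction:
    """Chance of surviving `spins` independent spin-and-pull rounds of a
    cylinder with `chambers` chambers loaded with `bullets` bullets."""
    return Fraction(chambers - bullets, chambers) ** spins

print(survival_probability(1, 1))         # 5/6: more likely to survive than not
print(survival_probability(2, 1))         # 2/3: a one-in-three risk of disaster
print(float(survival_probability(2, 5)))  # ~0.13 if the gamble is run five times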

Nuclear annihilation still looms over us: the only consolation is that, thanks to arms control efforts between the superpowers, there are about one-fifth as many weapons as at the height of the Cold War—Russia and the United States each have about seven thousand—and fewer are on ‘hair-trigger’ alert. However, there are now nine nuclear powers, and a higher chance than ever before that smaller nuclear arsenals might be used regionally, or even by terrorists. Moreover, we can’t rule out, later in the century, a geopolitical realignment leading to a standoff between new superpowers. A new generation may face its own ‘Cuba’—and one that could be handled less well (or less luckily) than the 1962 crisis was. A near-existential nuclear threat is merely in abeyance.

Chapter 2 will address the twenty-first-century sciences—bio, cyber, and AI—and what they might portend. Their misuse looms as an increasing risk. The techniques and expertise for bio- or cyberattacks will be accessible to millions; unlike nuclear weapons, they do not require large special-purpose facilities. Cybersabotage efforts like ‘Stuxnet’ (which destroyed the centrifuges used in the Iranian nuclear weapons programme), and frequent hacking of financial institutions, have already bumped these concerns up the political agenda. A report from the Pentagon’s Defense Science Board claimed that the impact of a cyberattack (shutting down, for instance, the US electricity grid) could be catastrophic enough to justify a nuclear response.4

But before that let’s focus on the potential devastation that could be wrought by human-induced environmental degradation, and by climate change. These interlinked threats are long-term and insidious. They stem from humanity’s ever-heavier collective ‘footprint’. Unless future generations tread more softly (or unless population levels fall) our finite planet’s ecology will be stressed beyond sustainable limits.

1.3. ECO-THREATS AND TIPPING POINTS

Fifty years ago, the world’s population was about 3.5 billion. It is now estimated to be 7.6 billion. But the growth is slowing. Indeed, the number of births per year, worldwide, peaked a few years ago and is now decreasing. Nonetheless, the world’s population is forecast to rise to around nine billion, or even higher, by 2050.5 This is because most people in the developing world are still young and have yet to have children, and because they will live longer; the age histogram for the developing world will come to look more like it does for Europe. The largest current growth is in East Asia, where the world’s human and financial resources will become concentrated—ending four centuries of North Atlantic hegemony.

Demographers predict continuing urbanisation, with 70 percent of people living in cities by 2050. Even by 2030, Lagos, São Paulo, and Delhi will have populations greater than thirty million. Preventing megacities from becoming turbulent dystopias will be a major challenge to governance.

Population growth is currently underdiscussed. This may be partly because doom-laden forecasts of mass starvation—in, for instance, Paul Ehrlich’s 1968 book The Population Bomb and the pronouncements of the Club of Rome—have proved off the mark. Also, some deem population growth to be a taboo subject—tainted by association with eugenics in the 1920s and ’30s, with Indian policies under Indira Gandhi, and more recently with China’s hard-line one-child policy. As it turns out, food production and resource extraction have kept pace with rising population; famines still occur, but they are due to conflict or maldistribution, not overall scarcity.6

We can’t specify an ‘optimum population’ for the world because we can’t confidently conceive what people’s lifestyles, diet, travel patterns, and energy needs will be beyond 2050. The world couldn’t sustain anywhere near its present population if everyone lived as profligately—each using as much energy and eating as much beef—as the better-off Americans do today. On the other hand, twenty billion could live sustainably, with a tolerable (albeit ascetic) quality of life, if all adopted a vegan diet, travelled little, lived in small high-density apartments, and interacted via super-internet and virtual reality. This latter scenario is plainly improbable, and certainly not alluring. But the spread between these extremes highlights how naive it is to quote an unqualified headline figure for the world’s ‘carrying capacity’.

A world with nine billion people, a number that could be reached (or indeed somewhat exceeded) by 2050, needn’t signal catastrophe. Modern agriculture—low-till, water-conserving, and perhaps involving genetically modified (GM) crops, together with better engineering to reduce waste, improve irrigation, and so forth—could plausibly feed that number. The buzz phrase is ‘sustainable intensification’. But there will be constraints on energy—and in some regions severe pressure on water supplies. The quoted figures are remarkable. To grow one kilogram of wheat takes 1,500 litres of water and several megajoules of energy. But a kilogram of beef takes ten times as much water and twenty times as much energy. Food production uses 30 percent of the world’s energy production and 70 percent of water withdrawals.
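The wheat-versus-beef comparison follows directly from those multipliers. A sketch using the figures quoted above; wheat’s energy cost is given only as ‘several megajoules’, so the 5 MJ baseline here is an assumed placeholder within that range.

```python
# Per-kilogram resource footprints, built from the figures quoted above.
WHEAT_WATER_L = 1_500    # litres of water per kg of wheat (from the text)
WHEAT_ENERGY_MJ = 5      # "several megajoules": assumed placeholder value

BEEF_WATER_L = WHEAT_WATER_L * 10       # beef takes ten times the water
BEEF_ENERGY_MJ = WHEAT_ENERGY_MJ * 20   # and twenty times the energy

print(f"1 kg wheat: {WHEAT_WATER_L:,} litres of water, {WHEAT_ENERGY_MJ} MJ")
print(f"1 kg beef:  {BEEF_WATER_L:,} litres of water, {BEEF_ENERGY_MJ} MJ")
# → 1 kg beef:  15,000 litres of water, 100 MJ
```

The exact energy baseline matters less than the ratios: whatever ‘several megajoules’ means precisely, a shift from beef towards grain (or the meat substitutes discussed below) cuts the water and energy footprint of a meal by an order of magnitude.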

Agricultural techniques using GM organisms can be beneficial. To take one specific instance, the World Health Organization (WHO) estimates that 40 percent of children under the age of five in the developing world suffer from vitamin A deficiency; this is the leading cause of childhood blindness globally, affecting hundreds of thousands of children each year. So-called golden rice, first developed in the 1990s and subsequently improved, delivers beta-carotene, the precursor of vitamin A, and alleviates vitamin A deficiency. Regrettably, campaigning organisations, Greenpeace in particular, have impeded the cultivation of golden rice. Of course, there is concern about ‘tampering with nature’, but in this instance, new techniques could have enhanced ‘sustainable intensification’. Moreover, there are hopes that a more drastic modification of the rice genome (the so-called C4 pathway) could enhance the efficiency of photosynthesis, thus allowing faster and more intensive growth of the world’s number one staple crop.

Two potential dietary innovations do not confront a high technical barrier: converting insects—highly nutritious and protein rich—into palatable food; and making artificial meat from vegetable protein. As for the latter, ‘beef’ burgers (made mainly of wheat, coconut, and potato) have been sold since 2015 by a California company called Impossible Foods. It will be a while, though, before these burgers satisfy carnivorous gourmands for whom beetroot juice is a poor substitute for blood. But biochemists are on the case, exploring more sophisticated techniques. In principle, it is possible to ‘grow’ meat by taking a few cells from an animal and then stimulating growth with appropriate nutrients. Another method, called acellular agriculture, uses genetically modified bacteria, yeast, fungi, or algae to produce the proteins and fats that are found in (for instance) milk and eggs. There is a clear financial incentive as well as an ecological imperative to develop acceptable meat substitutes, so one can be optimistic about rapid progress.

We can be technological optimists regarding food—and health and education as well. But it’s hard not to be a political pessimist. Enhancing the life chances of the world’s poorest people by providing adequate nourishment, primary education, and other basics is a readily achievable goal; the impediments are mainly political.

If the benefits of innovation are to be spread worldwide, there will need to be lifestyle changes for us all. But these need not signal hardship. Indeed, all can, by 2050, have a quality of life that is at least as good as profligate Westerners enjoy today—provided that technology is developed appropriately, and deployed wisely. Gandhi proclaimed the mantra: ‘There’s enough for everyone’s need but not for everyone’s greed’. This need not be a call for austerity; rather, it calls for economic growth driven by innovations that are sparing of natural resources and energy.

The phrase ‘sustainable development’ gained currency in 1987, when the World Commission on Environment and Development, chaired by Gro Harlem Brundtland, prime minister of Norway, defined it as ‘development that meets the needs of the present—especially the poor—without compromising the ability of future generations to meet their own needs’.7 We all surely want to ‘sign up’ to reach this goal in the hope that by 2050 there will be a narrower gap between the lifestyle that privileged societies enjoy and that which is available to the rest of the world. This can’t happen if developing countries mimic the path to industrialisation that Europe and North America followed. These countries need to leapfrog directly to a more efficient and less wasteful mode of life. The goal is not anti-technology. More technology will be needed, but channelled appropriately, so that it underpins the needed innovation. The more developed nations must make this transition too.

Information technology (IT) and social media are now globally pervasive. Rural farmers in Africa can access market information that prevents them from being ripped off by traders, and they can transfer funds electronically. But these same technologies mean that those in deprived parts of the world are aware of what they are missing. This awareness will trigger greater embitterment, motivating mass migration or conflict, if these contrasts are perceived to be excessive and unjust. It is not only a moral imperative, but a matter of self-interest too, for fortunate nations to promote greater equality—by direct financial aid (and by ceasing the current exploitative extraction of raw materials) and also by investing in infrastructure and manufacturing in countries where there are displaced refugees, so that the dispossessed are under less pressure to migrate to find work.

Yet long-term goals tend to slip down the political agenda, trumped by immediate problems—and a focus on the next election. The president of the European Commission, Jean-Claude Juncker, said, ‘We all know what to do; we just don’t know how to get re-elected after we’ve done it’.8 He was referring to financial crises, but his remark is even more appropriate for environmental challenges (and it’s playing out now with the discouragingly slow implementation of the UN’s Sustainable Development Goals).

There is a depressing gap between what could be done and what actually happens. Offering more aid is not in itself enough. Stability, good governance, and effective infrastructure are needed if these benefits are to permeate the developing world. The Sudanese tycoon Mo Ibrahim, whose company led the penetration of mobile phones into Africa, in 2007 set up a prize of $5 million (plus $200,000 a year thereafter) to recognise exemplary and noncorrupt leaders of African countries—and the Mo Ibrahim Prize for Achievement in African Leadership has been awarded five times.

The relevant actions aren’t necessarily best taken at the nation-state level. Some of course require multinational cooperation, but many effective reforms need implementation more locally. There are huge opportunities for enlightened cities to become pathfinders, spearheading the high-tech innovation that will be needed in the megacities of the developing world where the challenges are especially daunting.

Short-termism isn’t just a feature of electoral politics. Private investors don’t have a long enough horizon either. Property developers won’t put up a new office building unless they get payback within (say) thirty years. Indeed, most high-rise buildings in cities have a ‘designed lifetime’ of only fifty years (a consolation for those of us who deplore their dominance of the skyline). Potential benefits and downsides beyond that time horizon are discounted away.

What about the more distant future? Population trends beyond 2050 are harder to predict. They will depend on what today’s young people, and those as yet unborn, will decide about the number and spacing of their children. Enhanced education and empowerment of women—surely a priority in itself—could reduce fertility rates where they’re now highest. But this demographic transition hasn’t yet reached parts of India and sub-Saharan Africa.

The mean number of births per woman in some parts of Africa—Niger, or rural Ethiopia, for instance—is still more than seven. Although fertility is likely to decrease, it is possible, according to the United Nations, that Africa’s population could double again to four billion between 2050 and 2100, thereby raising the global population to eleven billion. Nigeria alone would then have as large a population as Europe and North America combined, and almost half of all the world’s children would be in Africa.

Optimists remind us that each extra mouth also brings two hands and a brain. Nonetheless, the greater the population becomes, the greater will be the pressures on resources, especially if the developing world narrows its gap with the developed world in its per capita consumption. And the harder it will be for Africa to escape the ‘poverty trap’. Indeed, some have noted that African cultural preferences may lead to a persistence of large families as a matter of choice even when child mortality is low. If this happens, the freedom to choose your family size, proclaimed as one of the UN’s fundamental rights, may come into question when the negative externalities of a rising world population are weighed in the balance.

We must hope that the global population declines rather than increases after 2050. Even though nine billion can be fed (with good governance and efficient agribusiness), and even if consumer items become cheaper to produce (via, for instance, 3D printing) and ‘clean energy’ becomes plentiful, food choices will be constrained and the quality of life will be reduced by overcrowding and reductions in green space.

1.4. STAYING WITHIN PLANETARY BOUNDARIES

We’re deep into the Anthropocene. This term was popularised by Paul Crutzen, one of the scientists who determined that the ozone in the upper atmosphere was being depleted by CFCs—chemicals then used in aerosol cans and refrigerators. The 1987 Montreal Protocol led to the ban on these chemicals. This agreement seemed an encouraging precedent, but it worked because substitutes existed for CFCs that could be deployed without great economic costs. Sadly, it’s not so easy to deal with the other (more important) anthropogenic global changes consequent on a rising population, changes that are more demanding of food, energy, and other resources. All these issues are widely discussed. What’s depressing is the inaction—for politicians the immediate trumps the long term; the parochial trumps the global. We need to ask whether nations should cede more sovereignty to new organisations along the lines of the existing agencies under the auspices of the United Nations.

The pressures of rising populations and climate change will engender loss of biodiversity—an effect that would be aggravated if the extra land needed for food production or biofuels encroached on natural forests. Changes in climate and alterations to land use can, in combination, induce ‘tipping points’ that amplify each other and cause runaway and potentially irreversible change. If humanity’s collective impact on nature pushes too hard against what the Stockholm environmentalist Johan Rockström calls ‘planetary boundaries’,9 the resultant ‘ecological shock’ could irreversibly impoverish our biosphere.

Why does this matter so much? We are harmed if fish populations dwindle to extinction. There are plants in the rain forest that may be useful to us for medicinal purposes. But there is a spiritual value too, over and above the practical benefits of a diverse biosphere. In the words of the eminent ecologist E. O. Wilson,

At the heart of the environmentalist worldview is the conviction that human physical and spiritual health depends on the planet Earth.… Natural ecosystems—forests, coral reefs, marine blue waters—maintain the world as we would wish it to be maintained. Our body and our mind evolved to live in this particular planetary environment and no other.10

Extinction rates are rising—we’re destroying the book of life before we’ve read it. For instance, the populations of the ‘charismatic’ mammals have fallen, in some cases to levels that threaten the species’ survival. Many of the six thousand species of frogs, toads, and salamanders are especially sensitive. And, to quote E. O. Wilson again, ‘if human actions lead to mass extinctions, it’s the sin that future generations will least forgive us for’.

Here, incidentally, the great religious faiths can be our allies. I’m on the council of the Pontifical Academy of Sciences (an ecumenical body; its seventy members represent all faiths or none). In 2014 the Cambridge economist Partha Dasgupta, along with Ram Ramanathan, a climate scientist from the Scripps Institution of Oceanography in California, organised a high-level conference on sustainability and climate held at the Vatican.11 This offered a timely scientific impetus into the 2015 papal encyclical ‘Laudato Si’. The Catholic Church transcends political divides; there’s no gainsaying its global reach, its durability and long-term vision, or its focus on the world’s poor. The Pope was given a standing ovation at the United Nations. His message resonated especially in Latin America, Africa, and East Asia.

The encyclical also offered a clear papal endorsement of the Franciscan view that humans have a duty to care for all of what Catholics believe is ‘God’s creation’—that the natural world has value in its own right, quite apart from its benefits to humans. This attitude resonates with the sentiments beautifully expressed more than a century ago by Alfred Russel Wallace, co-discoverer of evolution by natural selection:

I thought of the long ages of the past during which the successive generations of these things of beauty had run their course … with no intelligent eye to gaze upon their loveliness, to all appearances such a wanton waste of beauty.… This consideration must surely tell us that all living things were not made for man.… Their happiness and enjoyments, their loves and hates, their struggles for existence, their vigorous life and early death, would seem to be immediately related to their own well-being and perpetuation alone.12

The papal encyclical eased the path to agreement at the Paris climate conference in December 2015. It eloquently proclaimed that our responsibility—to our children, to the poorest, and to our stewardship of life’s diversity—demands that we don’t leave a depleted and hazardous world.

We all surely hold these sentiments. But our secular institutions—economic and political—don’t plan far enough ahead. I’ll return in my final chapters to address the daunting challenges to science and to governance that these threats pose.

Regulations can help. But regulations won’t gain traction unless the public mind-set changes. Attitudes in the West to, for instance, smoking and driving drunk have transformed in recent decades. We need a similar change in attitude so that manifestly excessive consumption and waste of materials and energy—4 × 4 SUVs (disparaged as Chelsea tractors in London, where they clog the streets in up-market districts), patio heaters, brightly illuminated houses, elaborate plastic wrappings, slavish following of fast-changing fashions, and the like—become perceived as ‘tacky’ rather than stylish. Indeed, a trend away from excessive consumption may happen without external pressure. For my generation, our living space (a student room and later something more spacious) was ‘personalised’ by books, CDs, and pictures. Now that books and music can be accessed online, we will perhaps become less sentimental about ‘home’. We will become nomadic—especially as more business and socialising can be done online. Consumerism could be replaced by a ‘sharing economy’. If this scenario transpires, it will be crucial that developing nations transition directly towards this lifestyle, bypassing the high-energy, high-consumption stage through which Europe and the United States have passed.

Effective campaigns need to be associated with a memorable image. The BBC’s 2017 TV series Blue Planet II showed an albatross returning from wandering thousands of miles foraging in the southern oceans—and regurgitating for its young not the craved-for nutritious fish, but bits of plastic. Such imagery publicises and motivates the case for recycling plastics, which otherwise accumulate in the oceans (and in the food chains of the creatures that live there). Likewise, the long-iconic (albeit somewhat misleading) image of a polar bear clinging to a melting ice floe is emblematic of the climate change crisis—my next topic.

1.5. CLIMATE CHANGE

The world will get more crowded. And there’s a second prediction: it will gradually get warmer. Pressures on food supplies, and on the entire biosphere, will be aggravated by the consequent changes in global weather patterns. Climate change exemplifies the tension between the science, the public, and the politicians. In contrast to population issues, it is certainly not underdiscussed—despite the fact that in 2017 the Trump regime in the United States banned the terms ‘global warming’ and ‘climate change’ from public documents. But the implications of climate change are dismayingly under-acted-on.

One thing is not controversial. The concentration of CO2 in the air is rising, mainly due to the burning of fossil fuels. The scientist Charles Keeling measured CO2 levels with an instrument at the Mauna Loa Observatory in Hawai‘i that has been operating continuously since 1958 (following Keeling’s death in 2005 the programme has been continued by his son, Ralph). Nor is it controversial that this rise leads to a ‘greenhouse effect’. The sunlight that heats the Earth is reemitted as infrared radiation. Just as the glass in a greenhouse lets the light in but traps the infrared radiation, CO2 acts as a blanket that traps heat in the Earth’s atmosphere, land masses, and oceans. This has been understood since the nineteenth century. A rise in CO2 will induce a long-term warming trend, superimposed on all the other complicated effects that make climate fluctuate.

Doubling of CO2, if all other aspects of the atmosphere were unchanged, would cause 1.2 degrees (centigrade) of warming, averaged over the Earth—this is a straightforward calculation. Less well understood, however, are the associated changes in water vapour, cloud cover, and ocean circulation. We don’t know how important these feedback processes are. The fifth report from the Intergovernmental Panel on Climate Change (IPCC), published in 2013, presented a spread of projections, from which (despite the uncertainties) some things are clear. In particular, if annual CO2 emissions continue to rise unchecked we risk triggering drastic climate change—leading to devastating scenarios resonating centuries ahead, including the initiation of irreversible melting of ice in Greenland and Antarctica, which would eventually raise sea levels by many metres. It’s important to note that the ‘headline figure’ of a global temperature increase is just an average; what makes the effect more disruptive is that the rise is faster in some regions and can trigger drastic shifts in regional weather patterns.
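The ‘straightforward calculation’ behind that 1.2-degree figure can be sketched in a few lines. The logarithmic forcing formula and the no-feedback (Planck) response used here are standard textbook approximations, not values taken from this chapter:

```python
import math

# Radiative forcing from raised CO2 (standard logarithmic fit):
#   dF = 5.35 * ln(C / C0)   in watts per square metre
def forcing(concentration_ratio):
    return 5.35 * math.log(concentration_ratio)

# With no feedbacks, extra infrared emission offsets the forcing at
# roughly 3.2 W per square metre for each kelvin of surface warming.
PLANCK_RESPONSE = 3.2  # W m^-2 K^-1 (approximate)

dF = forcing(2.0)            # doubling of CO2
dT = dF / PLANCK_RESPONSE    # no-feedback warming, in degrees C
print(f"forcing {dF:.2f} W/m^2 -> warming {dT:.2f} C")
```

Water vapour and cloud feedbacks multiply this bare number, which is why the IPCC’s projected range is so much wider than 1.2 degrees.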

The climate debate has been marred by too much blurring between science, politics, and commercial interests. Those who don’t like the implications of the IPCC projections have rubbished the science rather than calling for better science. The debate would be more constructive if those who oppose current policies recognised the imperative to refine and firm up the predictions—not just globally but, even more important, for individual regions. Scientists in Cambridge and California13 are pursuing a so-called Vital Signs project, which aims to use massive amounts of climatic and environmental data to find which local trends (droughts, heat waves, and such) are the most direct correlates of the mean temperature rise. This could offer politicians something more relevant and easier to appreciate than the mean global warming.

The build-up rate of CO2 in the atmosphere will depend on future population trends and the extent of the world’s continuing dependence on fossil fuels. But even for a specific scenario for CO2 emission, we can’t predict how fast the mean temperature will rise, because the ‘climate sensitivity factor’, which depends on the uncertain feedbacks, is itself poorly known. The consensus of the IPCC experts was that business as usual, with a rising population and continuing dependence on fossil fuels, has a 5 percent chance of triggering more than six degrees of warming in the next century. If we think of current expenditure on cutting CO2 emissions as an insurance policy, the main justification is to avoid the small chance of something really catastrophic (as a rise of six degrees would be) rather than the 50 percent chance of something seriously damaging that could be adapted to.

The goal proclaimed at the Paris conference was to prevent the mean temperature rise from exceeding two degrees—and if possible to constrain it to 1.5 degrees. This is an appropriate goal if we are to reduce the risk of crossing dangerous ‘tipping points’. But the question is: how to implement it? The amount of CO2 that can be released without violating this limit is uncertain by a factor of two, simply because of the unknown climate sensitivity factor. The target is therefore an unsatisfactory one—and will obviously encourage fossil fuel interests to ‘promote’ scientific findings that predict low sensitivity.

Despite the uncertainties—both in the science and in population and economic projections—two messages are important:

1.  Regional disruptions in weather patterns within the next twenty to thirty years will aggravate pressures on food and water, cause more ‘extreme events’, and engender migration.

2.  Under ‘business as usual’ scenarios in which the world continues to depend on fossil fuels, we can’t rule out, later in the century, really catastrophic warming, and tipping points triggering long-term trends like the melting of Greenland’s ice cap.

But even those who accept both these statements and agree that there’s a significant risk of climate catastrophe a century hence will differ in how urgently they advocate action today. Their assessment will depend on expectations of future growth and optimism about technological fixes. Above all, however, it depends on an ethical issue—the extent to which we should limit our own gratification for the benefit of future generations.

Bjørn Lomborg achieved prominence (along with ‘bogeyman status’ among many climate scientists) through his book The Skeptical Environmentalist. He has convened a Copenhagen Consensus of economists to pronounce on global problems and policy.14 These economists apply a standard discount rate, thereby in effect writing off what happens beyond 2050. There is indeed little risk of catastrophe within that time horizon, so unsurprisingly they downplay the priority of addressing climate change compared to other ways of helping the world’s poor. But, as Nicholas Stern15 and Martin Weitzman16 would argue, if you apply a lower discount rate—and, in effect, don’t discriminate on grounds of date of birth and care about those who’ll live into the twenty-second century and beyond—then you may deem it worth making an investment now to protect those future generations against the worst-case scenario.
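The effect of the discount rate is easy to make concrete. In this illustrative sketch (the loss and both rates are round numbers of our own choosing, not Lomborg’s or Stern’s figures), the same hypothetical trillion-dollar climate loss in 2100 is valued from today under a ‘market’ rate and a near-zero rate:

```python
# Present value of a hypothetical $1 trillion climate loss in 2100,
# viewed from 2020. The loss and both rates are illustrative.
def present_value(future_loss, annual_rate, years):
    return future_loss / (1 + annual_rate) ** years

LOSS, YEARS = 1.0e12, 80
for rate in (0.05, 0.014):   # a market-style rate versus a near-zero one
    pv = present_value(LOSS, rate, YEARS)
    print(f"discount rate {rate:.1%}: counts as ${pv / 1e9:.0f} billion today")
```

At 5 percent the trillion-dollar loss shrinks to roughly $20 billion today, easily outweighed by nearer-term causes; at 1.4 percent it still counts for over $300 billion, which is why the choice of rate, an ethical rather than a scientific question, dominates the policy conclusion.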

Consider this analogy. Suppose astronomers had tracked an asteroid and calculated that it would hit the Earth in 2100, not with certainty, but with (say) 10 percent probability. Would we relax, saying that it’s a problem that can be set on one side for fifty years—people will then be richer, and it may turn out then that it’s going to miss the Earth anyway? I don’t think we would. There would be a consensus that we should start straight away and do our damnedest to find ways to deflect it or mitigate its effects. We’d realise that most of today’s young children will still be alive in 2100, and we care about them.

(As a parenthesis, I’d note that there’s one policy context where an essentially zero discount rate is applied: radioactive waste disposal. The repositories deep underground, such as the one being constructed at Onkalo in Finland and the one proposed (but then aborted) under Yucca Mountain in the United States, are required to prevent leakage for ten thousand or even a million years—somewhat ironic when we can’t plan the rest of energy policy even thirty years ahead.)

1.6. CLEAN ENERGY—AND A ‘PLAN B’?

Why do governments respond with torpor to the climate threat? It is mainly because concerns about future generations (and about people in poorer parts of the world) tend to slip down the agenda. Indeed, the difficulty of impelling CO2 reductions (by, for instance, a carbon tax) is that the impact of any action not only lies decades ahead but also is globally diffused. The pledges made at the 2015 Paris conference, with a commitment to renew and revise them every five years, are a positive step. But the issues that gained prominence during that conference will slip down the agenda again unless there’s continuing public concern—unless the issues still show up in politicians’ in-boxes and in the press.

In the 1960s the Stanford University psychologist Walter Mischel did some classic experiments. He offered children a choice: one marshmallow immediately, or two if they waited for fifteen minutes. He claimed that the children who chose to delay their gratification became happier and more successful adults.17 This is an apt metaphor for the dilemma nations face today. If short-term payback—instant gratification—is prioritised, then the welfare of future generations is jeopardised. The planning horizon for infrastructure and environmental policies needs to stretch fifty or more years into the future. If you care about future generations, it isn’t ethical to discount future benefits (and dis-benefits) at the same rate as you would if you were a property developer planning an office building. And this rate of discounting is a crucial factor in the climate-policy debate.

Many still hope that our civilisation can segue smoothly towards a low-carbon future. But politicians won’t gain much resonance by advocating a bare-bones approach that entails unwelcome lifestyle changes—especially when the benefits accrue far away and decades in the future. Indeed, it is easier to gain support for adaptation to climate change than for mitigation, because the benefits of the former accrue locally. For instance, the government of Cuba, whose coastal areas are especially vulnerable to hurricanes and a rise in sea level, has formulated a carefully worked-out plan stretching a century ahead.18

Nonetheless, three measures that could mitigate climate change seem politically realistic—indeed, almost ‘win-win’.

First, all countries could improve energy efficiency and thereby actually save money. There could be incentives to ensure ‘greener’ design of buildings. This is not just a matter of improved insulation—it requires re-thinking construction as well. To take one example, when a building is demolished, some of its elements—steel girders and plastic piping, for instance—will hardly have degraded and could be reused. Moreover, girders could be more cleverly designed at the outset so as to offer the same strength with less weight, thereby saving on steel production. This exemplifies a concept that is gaining traction: the circular economy—where the aim is to recycle as much material as possible.19

Often, technical advances make appliances more efficient. It would then make sense to scrap the old ones, but only if the efficiency gain is at least enough to compensate for the extra cost of manufacturing the updated version. Appliances and vehicles could be designed in a more modular way so that they could be readily upgraded by replacing parts rather than by being thrown away. Electric cars could be encouraged—and could be dominant by 2040. This transition would reduce pollution (and noise) in cities. But its effect on CO2 levels depends, of course, on where the electricity comes from that charges the batteries.

Effective action needs a change in mind-set. We need to value long-lasting things—and urge producers and retailers to highlight durability. We need to repair and upgrade rather than replace. Or do without. Token reductions may make us feel virtuous but won’t be enough—if everyone does a little, we’ll only achieve a little.

A second ‘win-win’ policy would target cuts to methane, black carbon, and CFC emissions. These are subsidiary contributors to greenhouse warming. But unlike CO2 they cause local pollution too—in Chinese cities, for instance—so there’s a stronger incentive to reduce them. (In European countries the effort to reduce pollution starts off with a handicap. In the 1990s there was pressure in favour of diesel cars because of their greater fuel economy. This is only now being reversed because they emit polluting microparticles that endanger healthy living in cities.)

But the third measure is the most crucial. Nations should expand Research and Development (R&D) into all forms of low-carbon energy generation (renewables, fourth-generation nuclear, fusion, and the rest), and into other technologies where parallel progress is crucial—especially storage and smart grids. That is why an encouraging outcome of the 2015 Paris conference was an initiative called Mission Innovation. It was launched by President Obama and by the Indian prime minister, Narendra Modi, and endorsed by the countries of the G7 plus India, China, and eleven other nations. The signatories pledged to double their publicly funded R&D into clean energy by 2020 and to coordinate their efforts. This target is a modest one. Presently, only 2 percent of publicly funded R&D is devoted to these challenges. Why shouldn’t the percentage be comparable to spending on medical or defence research? Bill Gates and other private philanthropists have pledged a parallel commitment.

The impediment to ‘decarbonising’ the global economy is that renewable energy is still expensive to generate. The faster these ‘clean’ technologies advance, the sooner their prices will fall so they will become affordable to developing countries, where more generating capacity will be needed, where the health of the poor is jeopardised by smoky stoves burning wood or dung, and where there would otherwise be pressure to build coal-fired power stations.

The Sun provides five thousand times more energy to the Earth’s surface than our total human demand for energy. It shines especially intensely on Asia and Africa, where energy demand is predicted to rise fastest. Unlike fossil fuel, it produces no pollution, and no miners get killed. Unlike nuclear fission, it leaves no radioactive waste. Solar energy is already competitive for the thousands of villages in India and Africa that are off the grid. But on a larger scale it remains more expensive than fossil fuels and is economically viable only because of subsidies or feed-in tariffs, which cannot continue indefinitely.
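The ‘five thousand times’ claim is a back-of-envelope sum worth checking. The round figures below (average solar power reaching the ground, total human primary energy demand) are common textbook estimates, not numbers from the text:

```python
# Round-number check of solar supply versus human energy demand.
SOLAR_AT_SURFACE = 1.0e17   # watts reaching the ground, global average (assumed)
HUMAN_DEMAND = 1.8e13       # watts of primary energy use, about 18 TW (assumed)

ratio = SOLAR_AT_SURFACE / HUMAN_DEMAND
print(f"solar input exceeds demand by a factor of about {ratio:,.0f}")
```

With these assumptions the ratio comes out near five and a half thousand, so the chapter’s figure is the right order of magnitude; the practical problem is collection, conversion efficiency, and storage, not supply.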

If the Sun (or wind) is to become the primary source of our energy, there must be some way to store it, so there’s still a supply at night and on days when the wind doesn’t blow. There’s already a big investment in improving batteries and scaling them up. In late 2017 Elon Musk’s Tesla company installed an array of lithium-ion batteries with 100 megawatts capacity at a location in South Australia. Other energy-storage possibilities include thermal storage, capacitors, compressed air, flywheels, molten salt, pumped hydro, and hydrogen.
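A megawatt is a rate of supply, not a store of energy, so a battery’s usefulness depends on how long it can sustain a given load. Taking the South Australian installation’s widely reported energy capacity of 129 megawatt-hours (a figure not given in the text):

```python
# Hours of supply from a battery: energy capacity divided by load.
CAPACITY_MWH = 129.0    # reported energy capacity (assumption, not in text)
POWER_MW = 100.0        # maximum output, as quoted in the text

for load_mw in (POWER_MW, 30.0):   # flat-out versus a lighter draw
    hours = CAPACITY_MWH / load_mw
    print(f"at {load_mw:.0f} MW: about {hours:.1f} hours of supply")
```

Little more than an hour at full output: such batteries smooth short gaps and stabilise grids, but bridging windless weeks needs the larger-scale options listed above.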

The transition to electric cars has given an impetus to battery technology (the requirements for car batteries are more demanding than for those in households or ‘battery farms’, in terms of weight and recharging speed). We’ll need high-voltage direct current (HVDC) grids to transmit efficiently over large distances. In the long run these grids should be transcontinental—carrying solar energy from North Africa and Spain to the less sunny northern Europe, and east–west to smooth peak demand over different time zones in North America and Eurasia.

It would be hard to think of a more inspiring challenge for young engineers than devising clean energy systems for the world.

Other methods of power generation apart from the Sun and wind have geographical niches. Geothermal power is readily available in Iceland; wave power may be feasible but is of course as erratic as wind. Harnessing the energy in the tides seems attractive—they rise and fall predictably—but it is actually unpromising, except in a few places where the topography leads to an especially high tidal range. The west coast of Britain, with a tidal range of up to 15 metres, is one such place, and there have been feasibility studies of how turbines could extract energy from the fast tidally induced flows around some capes and promontories. A barrage placed across the wide estuary of the River Severn could yield as much power as several nuclear power stations. But this proposal remains controversial because of concerns about its ecological impact. An alternative scheme involves tidal lagoons, created by building embankments to close off areas of sea several miles across. The difference between the sea level inside and outside is used to drive turbines. These lagoons have the virtue that the capital cost is in low-tech and long-lived earthworks, which could be amortised over centuries.
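The physics of a tidal lagoon is simple enough to estimate on the back of an envelope: water trapped at high tide stores potential energy of roughly half rho g A h squared. The lagoon size and head used below are illustrative assumptions, not a specific proposal:

```python
# Potential energy released per tide when a lagoon of area A drains
# through a head h:  E = 1/2 * rho * g * A * h**2  (joules).
RHO_SEAWATER = 1025.0   # kg per cubic metre
G = 9.81                # metres per second squared

def energy_per_tide(area_m2, head_m):
    return 0.5 * RHO_SEAWATER * G * area_m2 * head_m ** 2

area = 5000.0 ** 2      # a lagoon roughly 5 km on a side (illustrative)
head = 8.0              # metres of head, a modest figure for the Severn

E = energy_per_tide(area, head)        # joules per tide
avg_power_mw = 2 * E / 86400 / 1e6     # two tides a day, averaged
print(f"about {E / 3.6e12:.1f} GWh per tide, {avg_power_mw:.0f} MW on average")
```

Even this generous lagoon averages under 200 megawatts, a small fraction of one nuclear station, which is why only sites with an exceptional tidal range repay the civil engineering.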

Current projections suggest that it may be several decades before clean energy sources provide all our needs, especially in the developing world. If, for instance, solar energy and storage by hydrogen and batteries are inadequate (and these seem currently the best bets), then backup will still be needed in midcentury. Gas power would be acceptable if it were combined with carbon sequestration (carbon capture and storage, CCS) whereby the CO2 is extracted from the exhaust gases at the power station and then transported and permanently stored underground.

Some claim that it would be advantageous to actually cut the CO2 concentration back down to its preindustrial level—to sequester not just the future emission from power stations but also to ‘suck out’ what has been emitted in the past century. The case for this isn’t obvious. There’s nothing ‘optimal’ about the world’s twentieth-century climate—what’s damaging is that the anthropogenic rate of change has been faster than the natural changes in the past, and therefore not easy for us or the natural world to adjust to. But if this reduction were thought worthwhile, there are two ways of achieving it. One is direct extraction from the atmosphere; this is possible, but inefficient as CO2 is only 0.04 percent of the air. Another technique is to grow crops, which of course soak up CO2 from the atmosphere, use them as biofuels, and then capture (and bury) the CO2 that is re-emitted in the power station when they are burned. This is fine in principle but is problematic because of the amount of land needed to grow the fuel (which would otherwise be available for food—or conserved as natural forest), and because the permanent sequestering of the billions of tons of CO2 isn’t straightforward. A higher-tech variant would use ‘artificial leaves’ to incorporate CO2 directly into fuel.

What is the role of nuclear power? I myself would favour the United Kingdom and the United States having at least a replacement generation of power stations. But the hazards of a nuclear accident, even if improbable, cause anxiety; public and political opinion is volatile. After the Fukushima Daiichi disaster in 2011, antinuclear sentiment surged not only (unsurprisingly) in Japan but also in Germany. Moreover, one cannot feel comfortable about a worldwide programme of nuclear power unless internationally regulated fuel banks are established to provide enriched uranium and remove and store the waste—plus a strictly enforced safety code to guard against risks analogous to those from subprime airlines, and a firm nonproliferation agreement to prevent diversion of radioactive material towards weapon production.

Despite the ambivalence about widespread nuclear energy, it’s worthwhile to boost R&D into a variety of fourth-generation concepts, which could prove to be more flexible in size, and safer. The industry has been relatively dormant for the last twenty years, and current designs date back to the 1960s or earlier. In particular, it is worth studying the economics of standardised small modular reactors which could be built in substantial numbers and are small enough to be assembled in a factory before being transported to a final location. Moreover, some designs from the 1960s deserve reconsideration—in particular, the thorium-based reactor, which has the advantage that thorium is more abundant in the Earth’s crust than uranium, and also produces less hazardous waste.

Attempts to harness nuclear fusion—the process that powers the Sun—have been pursued ever since the 1950s, but the history encompasses receding horizons; commercial fusion power is still at least thirty years away. The challenge is to use magnetic forces to confine gas at a temperature of millions of degrees—as hot as the centre of the Sun—and to devise materials to contain the reactor that can survive prolonged irradiation. Despite its cost, the potential payoff from fusion is so great that it is worth continuing to develop experiments and prototypes. The largest such effort is the International Thermonuclear Experimental Reactor (ITER), in France. Similar projects, but on a smaller scale, are being pursued in Korea, the United Kingdom, and the United States. An alternative concept, whereby converging beams from immense lasers zap and implode tiny deuterium pellets, is being pursued at the Lawrence Livermore National Laboratory in the United States, but this National Ignition Facility is primarily a defence project that will provide lab-scale substitutes for H-bomb tests; the promise of controlled fusion power is a political fig leaf.

A ‘dread factor’, and a feeling of helplessness, exaggerates public fear of radiation. As a consequence, all fission and fusion projects are impeded by disproportionate concern about even very low radiation levels.

The Japanese tsunami in 2011 claimed nearly twenty thousand lives, mainly through drowning. It also destroyed the Fukushima Daiichi nuclear power stations, which were inadequately protected against a fifteen-metre-high wall of water, and suboptimally designed (for instance, the emergency generators were located low down, and were inactivated by flooding). Consequently, radioactive materials leaked and spread. The surrounding villages were evacuated in an uncoordinated way—initially just those within three kilometres of the power stations, then twenty kilometres, and then thirty—and with inadequate regard for the asymmetric way the wind was spreading the contamination. Some evacuees had to move three times. And some villages remain uninhabited, with devastating consequences for the lives of longtime residents. Indeed, the mental trauma, and other health problems such as diabetes, have proved more debilitating than the radiation risk. Many evacuees, especially the elderly ones, would be prepared to accept a substantially higher cancer risk in return for the freedom to live out their days in familiar surroundings. They should have that option. (Likewise, the mass evacuations after the Chernobyl disaster weren’t necessarily in the best interests of those who were displaced.)

Overstringent guidelines about the dangers of low-level radiation worsen the entire economics of nuclear power. After the decommissioning of the Dounreay experimental ‘fast breeder’ reactor in the north of Scotland, billions of pounds are being spent on an ‘interim cleanup’ between now and the 2030s, to be followed by further expense spread over several more decades. And nearly 100 billion pounds is budgeted, over the next century, to restore to ‘green fields’ the Sellafield nuclear installations in England. Another policy concern is this: were a city centre to be attacked by a ‘dirty bomb’ (a conventional chemical explosion laced with radioactive material), some evacuation would be needed. But, just as at Fukushima, present guidelines would lead to a response that was unduly drastic, both in the extent and the duration of the evacuation. The immediate aftermath of a nuclear incident is not the optimum time for a balanced debate. That is why this topic needs a new assessment now and wide dissemination of clear and appropriate guidelines.

*   *   *

What will actually happen on the climate front? My pessimistic guess is that political efforts to decarbonise energy production won’t gain traction, and that the CO2 concentration in the atmosphere will increase at an accelerating rate through the next twenty years, even if the Paris pledges are honoured. But by then we’ll know with far more confidence—from a longer time base of data, and from better modelling—just how strong the feedback from water vapour and clouds actually is. If the ‘climate sensitivity’ is low, we’ll relax. But if it’s high, and climate consequently seems on an irreversible trajectory into dangerous territory (tracking the steepest of the temperature rise scenarios in the fifth IPCC report), there may then be a pressure for ‘panic measures’. This could involve a ‘plan B’—being fatalistic about continuing dependence on fossil fuels but combating the effects of releasing CO2 into the atmosphere via a massive investment in carbon capture and storage at all fossil-fuel-powered power stations.

More controversially, the climate could be actively controlled by geoengineering.20 The ‘greenhouse warming’ could be counteracted by (for instance) putting reflecting aerosols in the upper atmosphere, or even vast sunshades in space. It seems feasible to throw enough material into the stratosphere to change the world’s climate—indeed, what is scary is that this might be within the resources of a single nation, or perhaps even a single corporation. The political problems of such geoengineering may be overwhelming. There could be unintended side effects. Moreover, the warming would return with a vengeance if the countermeasures were ever discontinued, and other consequences of rising CO2 levels (especially the deleterious effects of ocean acidification) would be unchecked.

Geoengineering of this kind would be an utter political nightmare; not all nations would want to adjust the thermostat in the same way. Very elaborate climatic modelling would be needed in order to calculate the regional impacts of any artificial intervention. It would be a bonanza for lawyers if an individual or a nation could be blamed for bad weather! (Note, however, that a different kind of remedy—direct extraction of CO2 from the atmosphere—wouldn’t arouse disquiet. This doesn’t now seem economically feasible, but it is unobjectionable because it would merely be undoing the geoengineering that humans have already perpetrated through burning fossil fuels.)

Despite its unappealing features, geoengineering is worth exploring to clarify which options make sense and perhaps quench undue optimism about a technical ‘quick fix’ for our climate. It would also be wise to sort out the complex governance issues raised—and to ensure that these are clarified before climate change becomes so serious that there is pressure for urgent action.

As emphasised in the introduction, this is the first era in which humanity can affect our planet’s entire habitat: the climate, the biosphere, and the supply of natural resources. Changes are happening on a timescale of decades. This is far more rapid than the natural changes that occurred throughout the geological past; on the other hand, it is slow enough to give us, collectively or on a national basis, time to plan a response—to mitigate or adapt to a changing climate and modify lifestyles. Such adjustments are possible in principle—though a depressing theme threading through this book is the gap between what is technically desirable and what actually occurs.

We should be evangelists for new technologies—without them we’d lack much of what makes our lives better than the lives of earlier generations. Without technology the world can’t provide food, and sustainable energy, for an expanding and more demanding population. But we need it to be wisely directed. Renewable energy systems, medical advances, and high-tech food production (artificial meat, and so forth) are wise goals; geoengineering techniques probably are not. However, scientific and technical breakthroughs can happen so fast and unpredictably that we may not properly cope with them; it will be a challenge to harness their benefits while avoiding the downsides. The tensions between the promises and the hazards of new technology are the theme of the next chapters.