15

Preventing Human Extinction

Could we suffer the same fate as the dinosaurs? It is now generally accepted that about sixty-five million years ago a large asteroid or comet collided with Earth, throwing so much dust into the atmosphere that the Earth became too cold for dinosaurs to survive. According to the U.S. National Aeronautics and Space Administration (NASA), collisions with very large objects in space occur “on average once per 100,000 years, or less often.”1 Perhaps next time, if the object is large enough, it will wipe out our own species. NASA’s Near Earth Object Program is already detecting and tracking asteroids that could pose some risk of a collision with our planet. Should we also be putting resources into developing the ability to deflect any objects that appear to be heading for us? What about other risks of extinction? The risks may be very small, but human extinction would, most people think, be very bad. If we are interested in doing the most good or preventing the most harm, then we should not ignore small risks of major catastrophes.

Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford, uses the term existential risk to mean a situation in which “an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”2 The reason for specifying “Earth-originating intelligent life” is that what matters is the type of life that exists—is it intelligent? does it have positive experiences? and so on—not its species. There may be intelligent life elsewhere in the universe, but the universe is not like, say, a mountain valley in which, if one herd of deer is killed, other herbivores will soon migrate into the valley and fill the vacant ecological niche. The universe is so vast and so sparsely inhabited with intelligent life that the extinction of intelligent life originating on Earth would not leave a niche likely to be filled anytime soon, and so it is likely to reduce very substantially the number of intelligent beings who would ever live.

What are the major existential risks, and how likely are we to be able to reduce them? Apart from the risk of a large asteroid colliding with our planet, here are some other ways in which we might become extinct:

Nuclear war: Although the danger seems to have receded since the end of the Cold War, the nuclear powers still possess about seventeen thousand nuclear warheads, more than enough to cause the extinction of all large animals on the planet, including us.3

Pandemic of natural origin: The present century has already seen the emergence of several deadly new viruses for which there is no cure. Fortunately, none of them have been highly contagious, but that could change.

Pandemic caused by bioterrorism: Viruses could be deliberately engineered to be both deadly and highly contagious.

Global warming: The most likely predictions are that over the next century global warming will cause regional catastrophes, but not human extinction. The big unknown, however, is feedback loops, such as the release of methane from the thawing of the Siberian permafrost, which might go so far as to make the planet uninhabitable, if not in the next century, then within the next five hundred years. That kind of timescale may give us time to colonize another planet, but it is hard to be confident about that.

Nanotech accident: This scenario involves tiny self-replicating robots multiplying until the entire planet is covered in them. It’s also known as the “gray goo” scenario. Let’s hope it stays in the realm of science fiction.

Physics research producing hyperdense “strange matter”: There has been some speculation that the development of devices like the Large Hadron Collider could produce matter so dense that it would attract nearby nuclei until the entire planet becomes a hyperdense sphere about one hundred meters in diameter.

Superintelligent unfriendly artificial intelligence: Some computer scientists believe that at some point during the present century, artificial intelligence will surpass human intelligence and will then be independent of human control. If so, it might be sufficiently hostile to humans to eliminate us.

In some of these scenarios it isn’t easy to say how great the risk is. In others we may be able to estimate the risk but not know how to reduce it. I began this chapter with asteroid collision because we do have a rough idea of the odds in this case and of how to reduce them. If, as NASA says, a collision with an extinction-size asteroid is likely to happen “on average once per 100,000 years, or less often,” we can begin by asking what we ought to do if the upper bound is correct and then consider what difference the “or less often” makes. So, starting with the idea that such a collision is likely to happen once in every one hundred thousand years, the chances of it happening in the next century are 1 in 1,000. NASA is, as mentioned, searching for and tracking objects in space that could collide with us, but if it were to discover something big that was on course to hit us, we do not now have the technical capacity to do anything about it. Whether NASA’s tracking system would give us enough warning to develop that capacity—for instance, to build a rocket with a nuclear warhead that could intercept the asteroid and deflect it off its course—is not clear. We could, however, begin to develop the capacity now. Let’s say that bringing the project to fruition over the next decade would cost $100 billion. If we assume that it will have a useful lifetime of one hundred years, then there is only a 1 in 1,000 chance we will use it. If we don’t, we will have wasted $100 billion. For this expenditure to make sense, we have to value preventing human extinction at more than 1,000 × $100 billion, or more than $100 trillion.

How should we judge that figure? U.S. government agencies like the Environmental Protection Agency and the Department of Transportation make estimates of the value of a human life to determine how much it is worth spending to prevent a single death. Their current estimates range from $6 million to $9.1 million.4 If we suppose the collision will occur at the midpoint of the century, 2050, when the world’s population is estimated to reach ten billion, a figure of $100 trillion values each human life at only $10,000. On the basis of the U.S. government agencies’ estimates, $100 trillion does not cover even the value of the lives of the more than three hundred million U.S. citizens who would be killed. This suggests that if an extinction-size asteroid is likely to collide with us once in every one hundred thousand years, developing the capacity to deflect an asteroid would be extremely good value.
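
Put as a bare expected-value sketch, using only the rough figures given above (the labeling is mine):

\[
p = \frac{100 \text{ years}}{100{,}000 \text{ years}} = \frac{1}{1{,}000}, \qquad
V_{\text{break-even}} = \frac{\$100 \text{ billion}}{p} = \$100 \text{ trillion}, \qquad
\frac{\$100 \text{ trillion}}{10 \text{ billion lives}} = \$10{,}000 \text{ per life}.
\]

On these assumptions, any valuation of a life above $10,000 makes the $100 billion project worthwhile, and the agencies’ figures are several hundred times that.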

What if we bring back the “or less often” qualification that NASA put on the likely frequency of an extinction-size asteroid smashing into the Earth and reduce the odds of it happening within the next century from 1 in 1,000 to 1 in 100,000 (meaning that such a collision happens only once in every ten million years)? If we can eliminate even this much smaller risk for $100 billion, that still looks like good value because it is still valuing a human life at the relatively modest sum of $1 million.
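
The same sketch with the “or less often” reading:

\[
p = \frac{1}{100{,}000}, \qquad
V_{\text{break-even}} = 100{,}000 \times \$100 \text{ billion} = \$10 \text{ quadrillion}, \qquad
\frac{\$10 \text{ quadrillion}}{10 \text{ billion lives}} = \$1 \text{ million per life},
\]

still well below the $6 million to $9.1 million range the U.S. agencies use.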

So far we have taken account only of the loss of life of humans existing at the time the collision occurs. That is not all that is at stake. It leaves out both the extinction of other species on our planet and the loss of future generations of human beings. For the sake of simplicity, let’s focus on the loss of future generations of human beings. How much difference does that make? Derek Parfit raises the question we are now considering by inviting us to compare three possible outcomes for the planet:

1. Peace;

2. A nuclear war that kills 99 percent of the world’s existing population;

3. A nuclear war that kills 100 percent.

Parfit comments as follows:

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater. … The Earth will remain habitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history. The difference between (2) and (3) may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second.5

Bostrom takes a similar view, beginning his discussion by inviting us to “assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices.”6 This assumption implies that the value lost when an existing person dies is no greater than the value lost when a child is not conceived, if the quality and duration of the lives are the same. In practice, other factors, like the grief caused to the family of the existing person, would affect our overall judgment of how bad it is for someone to die, as compared to a new person not being conceived. Bostrom is discussing only the more abstract question of the value of a life and is not saying that nothing else matters; nevertheless to think like this about the value of a life takes the idea of impartiality discussed in chapter 7 a controversial step further.

If we accept Bostrom’s assumption and if we accept, as Parfit and Bostrom clearly do, that life is to be valued positively—either as it already is or as it is likely to become—then the value lost by human extinction would dwarf the loss of the ten billion lives that would be cut short if extinction occurs in 2050. Bostrom takes up Parfit’s statement that the Earth will remain habitable for a billion years and suggests that we could conservatively assume that it can sustain a population of a billion people for that period. That comes to a billion billion (that is, 10¹⁸, or a quintillion) human life-years. Even this very large number shrinks to almost nothing when compared with the figures that Bostrom arrives at in his book Superintelligence, in which he considers the number of planets that we may, in future, be able to colonize and the possibility that we will develop conscious minds that exist in computer operations rather than biological brains. Allowing for conscious computers gets Bostrom to 10⁵⁸ possible mind-lives. To aid our comprehension of so vast a number, Bostrom writes, “If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.”7 It isn’t necessary to accept these more speculative scenarios, however, to show that, if we accept Bostrom’s assumption about the value of a human life, reducing existential risk dominates calculations of expected utility. Even the more “conservative” figure of 10¹⁸ life-years is so large that the expected utility of a very modest reduction of the risk of human extinction overwhelms all the other good things we could possibly achieve.
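
The “conservative” figure, and the way it swamps everything else, can be made explicit. (The one-in-a-million risk reduction below is my illustrative assumption, not a figure from Parfit or Bostrom.)

\[
10^{9} \text{ people} \times 10^{9} \text{ years} = 10^{18} \text{ life-years}; \qquad
10^{-6} \times 10^{18} \text{ life-years} = 10^{12} \text{ life-years in expectation}.
\]

At, say, seventy years to a life, that expected gain is the equivalent of saving more than ten billion complete lives, more than the entire projected population of 2050.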

If reducing existential risk is so important, why has it received so little attention? Bostrom offers several reasons. There has never been an existential catastrophe, and so the prospect of one occurring seems far-fetched. It doesn’t help that the topic has been, as Bostrom puts it, “besieged by doom-mongers and crackpots.” Other reasons for neglect are familiar from discussions of the psychological barriers that militate against people giving more to reduce global poverty: the lack of identifiable victims and the diffusion of responsibility that occurs when no particular individual, agency, or nation is more responsible for dealing with the problem than any other.8 I have argued that effective altruists tend to be more influenced by reasoning than by emotions and thus are likely to give where they can do the most good, whether or not there is an identifiable victim. Existential risk, however, takes this abstraction a step further because the overwhelming majority of those who will benefit from the reduction of existential risk do not now exist and, if we were to fail to avert the risk, never would exist. This further step, some will say, isn’t just a step beyond our emotional capacity for empathy but one to which our reason can also object. It overlooks what is really so tragic about premature death: that it cuts short the lives of specific living persons whose plans and goals are thwarted. If people are never born, they have not formed any plans or set any goals and hence have less to lose. As this line of argument suggests, just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin.

The extraordinary implications of the view that every life counts equally, whether it is the life of someone who will exist whatever we do or a life that will exist only if we make certain choices, might make us keen to reject this view. It has, however, been shared by leading utilitarians. Sidgwick, for example, wrote, “It seems clear that, supposing the average happiness enjoyed remains undiminished, Utilitarianism directs us to make the number enjoying it as great as possible.”9 He then went on to ask what utilitarians should do if increasing the number of people reduces the average level of happiness, but not by enough to fully offset the increase that the additional people bring to the total quantity of happiness. His answer was that we should aim for the greatest total quantity of happiness, not the highest average. Bostrom does not, however, need to take a stance on this issue because he thinks it likely that if we can avoid extinction in the next century or two we will develop the means to make life much better than it is today, so both the total and the average will increase.
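
Sidgwick’s choice between maximizing the total and maximizing the average can be put schematically (the notation is mine, not his): with n people at an average happiness h, the total is n × h, and adding m further people who pull the average down to h′ still increases the total whenever

\[
(n + m)\,h' > n\,h .
\]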

The alternative to Bostrom’s assumption is what I have elsewhere called the prior existence view: that if people or, more broadly, sentient beings, exist or will exist independently of anything we choose to do, we ought to make their lives as good as possible; but we have no obligation to try to bring about the existence of people who, but for our actions, would not have existed at all.10 This view fits with the common belief that there is no obligation to reproduce, even if one could give one’s children a good start in life and they would be likely to live happy lives. It does, however, face powerful objections. Would it be right for us to solve our environmental problems by all agreeing to put a sterilizing agent in the water supply, thereby deciding to become the last generation on Earth? Assume this is really what, on balance, everyone wants, and no one is unhappy about not having children or troubled by the thought of our species becoming extinct. What everyone wants is the kind of luxurious lifestyle that requires burning vast amounts of fossil fuel but without the guilt that would be involved in handing on to future generations a blighted planet. (If you are wondering what will happen to nonhuman animals, assume that we can find a way of sterilizing them all too.) If ending the extraordinary story of intelligent life on Earth benefits existing beings and harms no one, is it ethically acceptable? If not, then the prior existence view cannot be the whole truth about the value of future beings.11

In practice, as the example of preventing a possible asteroid collision indicates, the case for reducing at least some of the risks of extinction is extremely strong, whether we do it to preserve the lives of those who already exist or will exist independently of what we do, or for the sake of future generations who will come into existence only if intelligent life continues on this planet. What is at stake in the perplexing philosophical debate about the value of merely possible future generations is the extent of the efforts we should make to reduce the risk of extinction. On the expected utility calculations that follow from even the more conservative of Bostrom’s calculations, it seems that reducing existential risk should take priority over doing other good things. Resources are scarce, and in particular altruistic dollars are scarce, so the more effective altruists donate to reducing existential risk, the less they will be able to donate to helping people in extreme poverty or to reducing the suffering of animals. Should the reduction of existential risk really take priority over these other causes? Bostrom is willing to draw this conclusion: “Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy.”12 To refer to donating to help the global poor or reduce animal suffering as a “feel-good project” on which resources are “frittered away” is harsh language. It no doubt reflects Bostrom’s frustration that existential risk reduction is not receiving the attention it should have, on the basis of its expected utility. Using such language is nevertheless likely to be counterproductive. We need to encourage more people to be effective altruists, and causes like helping the global poor are more likely to draw people toward thinking and acting as effective altruists than the cause of reducing existential risk. The larger the number of people who are effective altruists, the greater the likelihood that at least some of them will become concerned about reducing existential risk and will provide resources for doing so.

One obstacle to the conclusion that reducing existential risk is the most efficient form of philanthropy is that often we do not have a clear sense of how we can reduce that risk. Bostrom himself has written, “The problem of how to minimize existential risk has no known solution.”13 That isn’t true of all existential risks. We know enough about what it would take to prevent a large asteroid from colliding with our planet to begin work on that project. For many other risks, however, we do not. What will it take to stop bioterrorism? Today, scientists working with viruses are in a situation similar to that of scientists working in atomic physics before World War II. Physicists at the time discussed whether they should publish material that could show how to build a bomb far more lethal than any that had previously been possible. Some of this work was published, and German as well as British and American physicists were aware of it. We are fortunate that the Nazis did not succeed in building an atomic bomb. Now, for good or evil, it is no longer possible to put nuclear weapons back in the box. In the life sciences, to give just one of several examples, researchers at the State University of New York at Stony Brook synthesized a live polio virus. They published the results in Science, saying that they “made the virus to send a warning that terrorists might be able to make biological weapons without obtaining a natural virus.”14 Did the research and its publication therefore reduce the risk of bioterrorism causing human extinction? Or did it alert potential bioterrorists to the possibility of synthesizing new viruses? How can we know?

Some effective altruists have shown special interest in the dangers inherent in the development of artificial intelligence (AI). They see the problem as one of ensuring that AI will be friendly, by which they mean friendly to humans. Luke Muehlhauser, the executive director of the Machine Intelligence Research Institute, or MIRI, argues that once we develop a form of AI sophisticated enough to start improving itself, a cascade of further improvements will follow, and “at that point, we might as well be dumb chimpanzees watching as those newfangled ‘humans’ invent fire and farming and writing and science and guns and planes and take over the whole world. And like the chimpanzee, at that point we won’t be in a position to negotiate with our superiors. Our future will depend on what they want.”15 The analogy is vivid but double-edged. Granted, the evolution of superior intelligence in humans was bad for chimpanzees, but it was good for humans. Whether it was good or bad from (to use Sidgwick’s phrase) “the point of view of the universe” is debatable, but if human life is sufficiently positive to offset the suffering we have inflicted on animals and if we can be hopeful that in the future life will get better both for humans and for animals, then perhaps it will turn out to have been good. Remember Bostrom’s definition of existential risk, which refers to the annihilation not of human beings but of “Earth-originating intelligent life.” The replacement of our species by some other form of conscious intelligent life is not in itself, impartially considered, catastrophic. Even if the intelligent machines kill all existing humans, that would be, as we have seen, a very small part of the loss of value that Parfit and Bostrom believe would be brought about by the extinction of Earth-originating intelligent life. The risk posed by the development of AI, therefore, is not so much whether it is friendly to us as whether it is friendly to the idea of promoting well-being in general for all sentient beings it encounters, itself included. If there is any validity in the argument presented in chapter 8, that beings with highly developed capacities for reasoning are better able to take an impartial ethical stance, then there is some reason to believe that, even without any special effort on our part, superintelligent beings, whether biological or mechanical, will do the most good they possibly can.

If we have a clear understanding of what we can do to reduce some existential risks but not others, it may be better to focus on reducing those risks about which we do have such an understanding, while spending a modest amount of resources on doing more research into how we might reduce the risks about which we currently lack the necessary understanding.

Another strategy, for those who share Parfit’s and Bostrom’s view about the supreme importance of ensuring the preservation of at least some intelligent beings on this planet, even if 99 percent of them are annihilated, would be to build a secure, well-stocked refuge designed to protect a few hundred beings from a wide range of catastrophes that might otherwise cause the extinction of our species. The refuge could be kept populated by rotating groups of occupants, selected for genetic diversity and screened for infectious diseases before entry.16 Even so, the refuge could not protect its occupants against all the extinction scenarios mentioned above. Moreover, the existence of such a refuge poses the danger that political leaders may be more willing to take risks with the lives of others if they know they and their families would survive a nuclear war or some other disastrous consequence of a risky policy they may be tempted to implement. That danger could be avoided if national leaders were unable to enter the refuge, but with some of the political systems we have now, it is difficult to see how their entry could be prevented.

In this chapter I have been exploring the further reaches of conversations in which philosophers and some of the more philosophically minded effective altruists engage. If these discussions lead in strange directions, never mind. One common strategy on which we should all be able to agree is to take steps to reduce the risk of human extinction when those steps are also highly effective in benefiting existing sentient beings. For example, eliminating or decreasing the consumption of animal products will benefit animals, reduce greenhouse gas emissions, and lessen the chances of a pandemic resulting from a virus evolving among the animals crowded into today’s factory farms, which are an ideal breeding ground for viruses. That therefore looks like a high-priority strategy. Another strategy that offers immediate benefits while reducing existential risk is educating and empowering women, who tend to be less aggressive than men; giving them a greater say in national and international affairs could therefore reduce the chances of nuclear war. Educating women has also been shown to lead them to have fewer and healthier children, and that will give us a better chance of stabilizing the world’s population at a sustainable level.