HUMANITY’S FUTURE ON EARTH
2.1. BIOTECH
Robert Boyle is best remembered today for ‘Boyle’s law’, relating the pressure and volume of gases. He was one of the ‘ingenious and curious gentlemen’ who, in 1660, founded the Royal Society of London, which still exists as the United Kingdom’s academy of sciences. These men (and there were no women among them) would have called themselves ‘natural philosophers’ (the term ‘scientist’ didn’t exist until the nineteenth century); in the words of Francis Bacon, whose writings deeply influenced them, they were ‘merchants of light’, pursuing enlightenment for its own sake; but they were also practical men engaged in the problems of their time, and aiming (to quote Bacon again) at ‘the relief of Man’s estate’.
Boyle was a polymath. After he died in 1691, a handwritten note was found among his papers, with a ‘wish list’ of inventions that would benefit humankind.1 In the quaint idiom of his time, he envisaged some advances that have now been achieved, and some that still elude us more than three centuries later. Here is part of his list:
The Prolongation of Life.
The Recovery of Youth, or at least some of the Marks of it, as new Teeth, new Hair colour’d as in youth
The art of flying
The Art of Continuing long under water, and exercising functions freely there
Great Strength and Agility of Body exemplify’d by that of Frantick Epileptick and Hystericall persons
The Acceleration of the Production of things out of Seed
The making of Parabolicall and Hyperbolicall Glasses
The practicable and certain way of finding Longitudes
Potent Druggs to alter or Exalt Imagination, Waking, Memory, and other functions, and appease pain, procure innocent sleep, harmless dreams, etc
A perpetuall Light
The Transmutation of Species in Mineralls, Animals, and Vegetables
The Attaining Gigantick Dimensions
Freedom from Necessity of much Sleeping exemplify’d by the Operations of Tea and what happens in Mad-Men.2
Anyone from Boyle’s seventeenth century would be astonished by the modern world—far more than a Roman would have been by Boyle’s world. Moreover, many changes are still accelerating. Novel technologies—bio, cyber, and AI—will be transformative in ways that are hard to predict even a decade ahead. These technologies may offer new solutions to the crises that threaten our crowded world; on the other hand, they may create vulnerabilities that give us a bumpier ride through the century. Further progress will depend on findings that come fresh from research laboratories, so the speed of advance is especially unpredictable—a contrast with, for instance, nuclear power, which is based on twentieth-century physics, and with the nineteenth-century transformations wrought by steam and electricity.
A ‘headline’ trend in biotech has been the plummeting cost of sequencing the genome. The ‘first draft of the human genome’ was ‘big science’—an international project with a three-billion-dollar budget. Its completion was announced at a press conference at the White House in June 2000. By 2018 the cost had fallen below one thousand dollars. Soon it will be routine for all of us to have our genome sequenced—raising the question of whether we really want to know if we carry the genes that give us a propensity to particular diseases.3
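The scale of that fall is worth a moment’s arithmetic: from a three-billion-dollar project to under a thousand dollars per genome is a reduction of roughly three million times. A back-of-envelope sketch, using only the figures quoted above:

```python
# Back-of-envelope arithmetic for the sequencing-cost decline described
# in the text (figures as quoted there, not an official dataset).
first_draft_cost = 3_000_000_000   # dollars: the international project, circa 2000
cost_2018 = 1_000                  # dollars: per genome, by 2018

fold_reduction = first_draft_cost // cost_2018
print(fold_reduction)              # 3000000 — a roughly three-million-fold fall
```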
But now there’s a parallel development—the faster and cheaper ability to synthesise genomes. Already in 2002, the polio virus was synthesised—a portent of things to come. By 2018, the technique had advanced much further. Indeed, Craig Venter, the American biotechnologist and businessman, is developing a gene synthesiser that is, in effect, a 3D printer for genetic codes. Even if it is only able to reproduce short genomes, this could have varied applications. The ‘code’ for a vaccine could be electronically transmitted around the world—allowing instant global distribution of a vaccine created to counter a new epidemic.
People are typically uneasy about innovations that seem ‘against nature’ and that pose risks. Vaccination and heart transplants, for instance, aroused controversy in the past. More recently, concern has focused on embryo research, mitochondrial transplants, and stem cells. I followed closely the debate in the United Kingdom that led to legislation allowing experiments on embryos up to fourteen days old. This debate was well handled; it was characterised by constructive engagement between the researchers, the parliamentarians, and the wider public. There was opposition from the Catholic Church, some of whose representatives circulated pamphlets depicting a fourteen-day-old embryo as a structured ‘homunculus’. Scientists rightly emphasised how misleading this was; such an early-stage embryo is actually a microscopic and still undifferentiated group of cells. But the more sophisticated opponents would respond, ‘I know that, but it’s still sacred’—and to that belief science can offer no counterargument.
In contrast, the debate on genetically modified (GM) crops and animals was handled less well in the United Kingdom. Even before the public was fully engaged there was a standoff between Monsanto, a giant agrochemical corporation, and environmentalists. Monsanto was accused of exploiting farmers in the developing world by requiring them to purchase seeds annually. The wider public was influenced by a newspaper campaign against ‘Frankenstein Foods’. There was a ‘yuck’ factor when we learned that scientists could ‘create’ rabbits that glow in the dark—an aggravated version of the distaste many of us feel at the exploitation of circus animals. Despite the fact that GM crops have been consumed by three hundred million Americans for an entire decade without manifest damage, they are still severely restricted within the European Union. And, as mentioned in section 1.3, providing GM foodstuffs to undernourished children to combat dietary deficiencies has been impeded by anti-GM campaigners. But there are genuine concerns that reduced genetic diversity in crucial crops (wheat, maize, and such) might render the world’s food supply more vulnerable to plant diseases.
The new gene-editing technology, CRISPR/Cas9, could modify gene sequences in a more acceptable manner than earlier techniques. CRISPR/Cas9 makes small, precisely targeted changes in the sequence of DNA to suppress (or alter the expression of) damaging genes; unlike earlier transgenic methods, it need not ‘cross the species barrier’ by importing genes from another organism. In humans, the most benign and uncontroversial use of gene editing is to remove single genes that cause specific diseases.
In vitro fertilisation (IVF) already provides a less invasive way than CRISPR/Cas9 to weed out damaging genes. In this procedure, after hormone treatment to induce ovulation, several eggs are harvested, fertilised in vitro, and allowed to develop to an early stage. A cell from each embryo is then tested for the presence of the undesirable gene, and one that is free of it is then implanted for a normal pregnancy.
A different technique is now available that can replace a particular category of faulty genes. Some of the genetic material in a cell is found in organelles called mitochondria; these are separate from the cell’s nucleus. If a faulty gene is mitochondrial, it is possible to replace it with mitochondria from another female—leading to ‘three-parent babies’. This technique was legalised by the United Kingdom’s parliament in 2015. The next step would be to use gene editing on DNA in the cell nucleus.
In the public’s mind, a sharp distinction exists between artificial medical interventions that remove something harmful and deploying similar techniques to offer ‘enhancement’. Most characteristics (size, intelligence, and such) are determined by an aggregate of many genes. Only when the DNA of millions of people is available will it become possible (using pattern-recognition systems aided by AI) to identify the relevant combinations of genes. In the short term, this knowledge could be used to inform embryo selection for IVF. But modification or redesign of the genome is a more remote (and of course more risky and dubious) prospect. Not until this can be done—and until DNA with the required prescription can be artificially synthesised—will ‘designer babies’ become conceivable (in both senses of that word!). Interestingly, it is unclear how much parental desire there would be for offspring ‘enhanced’ in this fashion (as opposed to the more feasible single-gene editing needed to remove propensities towards specific diseases or disabilities). In the 1980s, the Repository for Germinal Choice was established in California with the aim of enabling the conception of ‘designer babies’. This sperm bank, dubbed the ‘Nobel prize sperm bank’, accepted only ‘elite’ donors, among them William Shockley, co-inventor of the transistor and a Nobel prize winner, who achieved notoriety later in life as a proponent of eugenics. The bank’s founder was surprised—though most of us were probably gratified—that there was no great demand.
The advances in medicine and surgery already achieved—and those we can confidently expect in the coming decades—will be acclaimed as a net blessing. They will nonetheless sharpen some ethical issues—in particular, they will render more acute the dilemmas involved in treating those at the very beginning and the end of their lives. An extension of our healthy lifespan will be welcome. But what is becoming more problematic is the growing gap between how long we will survive in healthy old age and how much longer some kind of life can be extended by extreme measures. Many of us would choose to request non-resuscitation, and solely palliative treatment, as soon as our quality of life and our prognosis dropped below a threshold. We dread clinging on for years in the grip of advanced dementia—a drain on resources, and on the sympathy of others. Similarly, one must question whether the efforts to save extremely premature or irreversibly damaged babies have gone too far. In late 2017, for instance, a team of UK surgeons tried—with immense commitment and dedication—to save an infant born with her heart outside her body.
Belgium, Holland, Switzerland, and several US states have legalised ‘assisted dying’—thereby ensuring that a person of sound mind with a terminal disease can be helped to a peaceful death. Relatives, or physicians and their helpers, can carry out the necessary procedures without being threatened with criminal prosecution for ‘aiding a suicide’. Nothing similar yet has parliamentary approval in the United Kingdom. The objections are based on fundamental religious grounds, on the view that participation in such acts is contrary to a doctor’s ethical code, and on worries that vulnerable people might feel pressured to take this course by their families or by undue concern about placing burdensome demands on others. This inaction in the United Kingdom persists, despite 80 percent public support for ‘assisted dying’. I am firmly in that 80 percent. Knowledge that this option was available would comfort many more than the number who would actually make use of it. Modern medicine and surgery obviously benefit most of us, for most of our lives, and we can expect further advances in the coming decades that can prolong healthy lives. Nonetheless, I expect (and hope) there will be enhanced pressure for legalising euthanasia under regulated conditions.
Another consequence of medical advances is the blurring of the transition between life and death. Death is now normally defined as ‘brain death’—the stage when all measurable signs of brain activity become extinguished. This is the criterion transplant surgeons use in deciding when they can properly ‘harvest’ a body’s organs. But the line is being blurred further by proposals that the heart can be artificially restarted after ‘brain death’, simply to keep the targeted organs ‘fresh’ for longer. This introduces further moral ambiguity into transplant surgery. Already, ‘agents’ are inducing impoverished Bangladeshis to sell a kidney or other organ that will be resold with a huge markup to benefit wealthy potential recipients. And we’ve all seen distressing TV footage of a mother with a sick child pleading that she is ‘desperate for a donor’—desperate, in other words, for another child to die, perhaps from a fatal accident, to supply the needed organ. These moral ambiguities, together with a shortage of organ donors, will continue (and indeed be aggravated) until xenotransplantation—harvesting organs for human use from pigs or other animals—becomes routine and safe. Better still (though more futuristically), techniques akin to those being developed in order to make artificial meat could enable 3D printing of replacement organs. These are advances that should be prioritised.
Advances in microbiology—diagnostics, vaccines, and antibiotics—offer prospects of sustaining health, controlling disease, and containing pandemics. But these benefits have triggered a dangerous ‘fight back’ by the pathogens themselves. There are concerns about antibiotic resistance, whereby bacteria evolve (via speeded-up Darwinian selection) to become resistant to the antibiotics used to suppress them. This has led, for example, to a resurgence in tuberculosis (TB). Unless new antibiotics are developed, the risks of (for instance) untreatable postoperative infections will surge to the level of a century ago. In the short term, it’s urgent to prevent the overuse of antibiotics—for instance, in cattle in the United States—and to incentivise the development of new antibiotics, even though these are less profitable to pharmaceutical companies than the drugs that control long-term conditions.
And studies of viruses, carried out in the hope of thereby developing improved vaccines, have controversial aspects. For instance, in 2011, two research groups, one in Holland and another in Wisconsin, showed that it was surprisingly easy to make the H5N1 influenza virus both more virulent and more transmissible—in contrast to the natural trend for these two features to be anticorrelated. The justification adduced for these experiments was that by staying one step ahead of natural mutations, it would be easier to prepare vaccines in good time. But, to many, this benefit was outweighed by the enhanced risks of unintentional release of dangerous viruses, plus the wider dissemination of techniques that could be helpful to bioterrorists. In 2014, the US government ceased funding these so-called gain of function experiments—but in 2017 this ban was relaxed. In 2018 a paper was published reporting the synthesis of the horsepox virus—with the implication that a smallpox virus could be similarly synthesised.4 Some questioned the justification for this research, carried out by a group in Edmonton, Alberta, because a safe smallpox vaccine already exists and is stockpiled; others argued that even if the research were justifiable, publication was a mistake.
As already mentioned, experiments using CRISPR/Cas9 techniques on human embryos raise ethical concerns. And the rapid advance of biotech will bring up further instances where there’s concern about the safety of experiments, the dissemination of ‘dangerous knowledge’, and the ethics of how it’s applied. Procedures that affect not just an individual but his or her progeny—altering the germ line—are disquieting. There has, for instance, been an attempt, with 90 percent success, to sterilise, and thereby wipe out, the mosquito species that spreads the dengue and Zika viruses. In the United Kingdom, a ‘gene drive’ has been proposed to remove grey squirrels—regarded as a ‘pest’ that threatens the cuddlier red variety. (A more benign tactic is to engineer the red squirrel so that it can resist the parapoxvirus that is spread by the grey squirrels.) Similar techniques are being proposed that could preserve the unique ecology of the Galapagos Islands by eliminating invasive species—black rats in particular. But it’s worth noting that in a recent book, Inheritors of the Earth, Chris Thomas, a distinguished ecologist, argues that the spread of species can often have a positive impact in ensuring a more varied and robust ecology.5
In 1975, in the early days of recombinant DNA research, a group of leading molecular biologists met at the Asilomar Conference Grounds in Pacific Grove, California, and agreed on guidelines defining what experiments should and should not be done. This seemingly encouraging precedent has inspired several meetings, convened by national academies, to discuss recent developments in the same spirit. But today, more than forty years after the first Asilomar meeting, the research community is far more broadly international, and more influenced by commercial pressures. I’d worry that whatever regulations are imposed, on prudential or ethical grounds, cannot be enforced worldwide—any more than drug laws or tax laws can. Whatever can be done will be done by someone, somewhere. And that’s a nightmare. In contrast to the elaborate and conspicuous special-purpose equipment needed to create a nuclear weapon, biotech involves small-scale dual-use equipment. Indeed, biohacking is burgeoning even as a hobby and competitive game.
Back in 2003 I was worried about these hazards and rated the chance of bio error or bio terror leading to a million deaths as 50 percent by 2020. I was surprised at how many of my colleagues thought a catastrophe was even more likely than I did. More recently, however, psychologist/author Steven Pinker took me up on that bet, with a two-hundred-dollar stake. This is a bet that I fervently hope to lose, but I was not surprised that the author of The Better Angels of Our Nature6 should take an optimistic line. Pinker’s fascinating book is infused with optimism. He quotes statistics pointing to a gratifying downward trend in violence and conflict—a decline that has been obscured by the fact that global news networks report disasters that would have been unreported in earlier times. But this trend can lull us into undue confidence. In the financial world, gains and losses are asymmetric; many years of gradual gains can be wiped out by a sudden loss. In biotech and pandemics, the risk is dominated by the rare but extreme events. Moreover, as science empowers us more, and because our world is so interconnected, the magnitude of the worst potential catastrophes has grown unprecedentedly large, and too many are in denial about them.
By the way, the societal fallout from pandemics would be far higher than in earlier centuries. European villages in the mid-fourteenth century continued to function even when the Black Death almost halved their populations; the survivors were fatalistic about a massive death toll. In contrast, the feeling of entitlement is so strong in today’s wealthier countries that there would be a breakdown in the social order as soon as hospitals overflowed, key workers stayed at home, and health services were overwhelmed. This could occur when those infected were still a fraction of 1 percent. The fatality rate would, however, probably be highest in the megacities of the developing world.
Pandemics are an ever-present natural threat, but is it just scaremongering to raise concerns about human-induced risks from bio error or bio terror? Sadly, I don’t think it is. We know all too well that technical expertise doesn’t guarantee balanced rationality. The global village will have its village idiots and they’ll have global range. The spread of an artificially released pathogen can’t be predicted or controlled; this realisation inhibits the use of bioweapons by governments, or even by terrorist groups with specific well-defined aims (which is why I focused on nuclear and cyber threats in section 1.2). So, my worst nightmare would be an unbalanced ‘loner’, with biotech expertise, who believed, for instance, that there were too many humans on the planet and didn’t care who, or how many, were infected. The rising empowerment of tech-savvy groups (or even individuals) by bio- as well as cybertechnology will pose an intractable challenge to governments and aggravate the tension among freedom, privacy, and security. Most likely there will be a societal shift towards more intrusion and less privacy. (Indeed, the rash abandon with which people put their intimate details on Facebook, and our acquiescence in ubiquitous CCTV [video surveillance], suggests that such a shift would meet surprisingly little resistance.)
Bio error and bio terror are possible in the near term—within ten or fifteen years. And in the longer term they will be aggravated as it becomes possible to ‘design’ and synthesise viruses—the ‘ultimate’ weapon would combine high lethality with the transmissibility of the common cold.
What advances might biologists bring us in 2050 and beyond? Freeman Dyson projects a time when children will design and create new organisms just as routinely as his generation played with chemistry sets.7 If one day it became possible to ‘play God on a kitchen table’, our ecology (and even our species) might not survive long unscathed. Dyson, however, isn’t a biologist; he’s one of the twentieth century’s leading theoretical physicists. But unlike many such people he’s a divergent and speculative thinker—often expressing a contrarian bent. For instance, back in the 1950s he was part of a group that explored the speculative concept ‘Project Orion’. The group’s aim was to achieve interstellar travel with spaceships powered by exploding H-bombs (nuclear pulse propulsion) behind the (well shielded) vehicle. Even in 2018, Dyson remains sceptical about the need for an urgent response to climate change.
Research on aging is being seriously prioritised. Will the benefits be incremental? Or is aging a ‘disease’ that can be cured? Serious research focuses on telomeres, stretches of DNA at the ends of chromosomes, which shorten as people age. It’s been possible to achieve a tenfold increase in the lifespan of nematode worms, but the effect on more complex animals is less dramatic. The only effective way to extend the life of rats is by giving them a near-starvation diet. But there’s one unprepossessing creature, the naked mole rat, which may have some special biological lessons for us; some live for more than thirty years—several times longer than the lifespan of other small mammals.
Any major breakthrough in life extension for humans would alter population projections in a drastic way; the social effects, obviously huge, would depend on whether the years of senility were prolonged too, and on whether the age of women at menopause would increase with total lifespan. But various types of human enhancement via hormonal treatment may become possible as the human endocrine system is better understood—and some degree of life extension is likely to be among these enhancements. As with so much technology, priorities are unduly slanted towards the wealthy. And the desire for a longer lifespan is so powerful that it creates a ready market for exotic therapies with untested efficacy. Ambrosia, a 2016 start-up, offers Silicon Valley executives a transfusion of ‘young blood’. Another recent craze was metformin, a drug intended to treat diabetes, but which is claimed to stave off dementia and cancer; others extol the benefits of placental cells. Craig Venter has a company called Human Longevity, which received $300 million in start-up funds. This goes beyond 23andMe (the firm that analyses our genome well enough to reveal interesting results about our vulnerability to some diseases, and about our ancestry). Venter aims to analyse the genomes of the thousands of species of ‘bugs’ in our gut. It is believed (very plausibly) that this internal ‘ecosystem’ is crucial to our health.
The ‘push’ from Silicon Valley towards achieving ‘eternal youth’ stems not only from the immense surplus wealth that’s been accumulated there, but also because it’s a place with a youth-based culture. Those older than thirty are thought to be ‘over the hill’. The futurist Ray Kurzweil speaks zealously of attaining a metaphorical ‘escape velocity’—when medicine advances so fast that life expectancy rises by more than a year in each year, offering potential immortality. He ingests more than one hundred supplements a day—some routine, some more exotic. But he’s worried that ‘escape velocity’ may not be achieved within his ‘natural’ lifetime. So, he wants his body frozen until this nirvana is reached.
I was once interviewed by a group of ‘cryonics’ enthusiasts—based in California—called the ‘society for the abolition of involuntary death’. I told them I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a ‘deathist’—really old fashioned. I was surprised to learn later that three academics in England (though I’m glad to say not from my university) had signed up for ‘cryonics’. Two have paid the full whack; the third has taken the cut-price option of contracting for just his head to be frozen. The contract is with a company called Alcor in Scottsdale, Arizona. These colleagues are realistic enough to accept that the chance of resurrection may be small, but they point out that without this investment the chance is zero. So they wear a medallion displaying instructions that, as soon as they die, their blood is to be replaced with a cryoprotectant and their bodies frozen for storage in liquid nitrogen.
It is hard for most of us mortals to take this aspiration seriously; moreover, if cryonics had a real prospect of success, I don’t think it would be admirable either. If Alcor didn’t go bust and dutifully maintained the refrigeration and stewardship for the requisite centuries, the corpses would be revived into a world where they would be strangers—refugees from the past. Perhaps they would be indulgently treated, as we feel we should treat (for instance) distressed asylum seekers, or Amazonian tribal people who’ve been forced from their familiar habitat. But the difference is that the ‘thawed-out corpses’ would be burdening future generations by choice; so, it’s not clear how much consideration they would deserve. This is reminiscent of a similar dilemma that may not always be science fiction, even if it should remain so: cloning a Neanderthal. One of the experts (a Stanford professor) queried: ‘Would we put him in a zoo or send him to Harvard?’
2.2. CYBERTECHNOLOGY, ROBOTICS, AND AI
Cells, viruses, and other biological microstructures are essentially ‘machines’ with components on the molecular scale—proteins, ribosomes, and so forth. We owe the dramatic advances in computers to the fast-advancing ability to manufacture electronic components on the nanoscale, thereby allowing almost biological-level complexity to be packed into the processors that power smartphones, robots, and computer networks.
Thanks to these transformative advances, the internet and its ancillaries have created the most rapid ‘penetration’ of new technology in history—and also the most fully global. Their spread in Africa and China proceeded faster than nearly all ‘expert’ predictions. Our lives have been enriched by consumer electronics and web-based services that are affordable by literally billions. And the impact on the developing world is emblematic of how optimally applied science can transform impoverished regions. Broadband internet, soon to achieve worldwide reach via low-orbiting satellites, high-altitude balloons, or solar-powered drones, should further stimulate education and the adoption of modern health care, agricultural methods, and technology; even the poorest can thereby leapfrog into a connected economy and enjoy social media—even though many are still denied the benefits of nineteenth-century technological advances such as proper sanitation. People in Africa can use smartphones to access market information, make mobile payments, and so forth; China has the most automated financial system in the world. These developments have a ‘consumer surplus’ and generate enterprise and optimism in the developing world. And such benefits have been augmented by effective programmes aiming to eliminate infectious diseases such as malaria. According to the Pew Research Center, 82 percent of Chinese people and 76 percent of Indians believe that their children will be better off than they themselves are today.
Indians now have an electronic identity card that makes it easier for them to register for welfare benefits. This card doesn’t need passwords. Instead, the unique pattern of each person’s iris allows the use of ‘iris recognition’ software—a substantial improvement on fingerprints or facial recognition. This is precise enough to identify individuals unambiguously among India’s 1.3 billion people. And it is a foretaste of the benefits that can come from future advances in AI.
Speech recognition, face recognition, and similar applications use a technique called machine learning, in particular ‘deep learning’ in multilayered networks. This operates in a fashion that resembles how humans use their eyes. The ‘visual’ part of the human brain integrates information from the retina through a multistage process. Successive layers of processing identify horizontal and vertical lines, sharp edges, and so forth; each layer processes information from a ‘lower’ layer and then passes its output to other layers.8
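The layered idea can be made concrete with a deliberately tiny sketch (a toy, not any production system): a first ‘layer’ with fixed, hand-chosen weights responds to horizontal and vertical strokes in a three-by-three image, and a second layer combines those responses into a classification. Real deep-learning systems learn their weights from data; here they are hard-coded purely to illustrate layer-upon-layer processing.

```python
# Toy two-layer 'network': layer 1 extracts stroke features,
# layer 2 turns those features into a decision.

def relu(x):
    # Standard rectifier: negative responses are clipped to zero
    return max(0.0, x)

def layer1(image):
    # image: 3x3 grid of 0/1 pixels, given as a list of rows.
    # One unit responds to a bright middle row, another to a bright middle column.
    horiz = relu(sum(image[1]) - (image[0][1] + image[2][1]))
    vert = relu(sum(row[1] for row in image) - (image[1][0] + image[1][2]))
    return horiz, vert

def layer2(features):
    # Combine the lower layer's outputs into a classification
    horiz, vert = features
    return "horizontal" if horiz > vert else "vertical"

horizontal_bar = [[0, 0, 0],
                  [1, 1, 1],
                  [0, 0, 0]]
vertical_bar = [[0, 1, 0],
                [0, 1, 0],
                [0, 1, 0]]

print(layer2(layer1(horizontal_bar)))  # horizontal
print(layer2(layer1(vertical_bar)))    # vertical
```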
The basic machine-learning concepts date from the 1980s; an important pioneer was the Anglo-Canadian Geoffrey Hinton. But the applications only really ‘took off’ two decades later, when the steady operation of Moore’s law—a doubling of computer processing power roughly every two years—had yielded machines about a thousand times faster. Computers use ‘brute force’ methods. They learn to translate by reading millions of pages of (for example) multilingual European Union documents (they never get bored!). They learn to identify dogs, cats, and human faces by ‘crunching’ through millions of images viewed from different perspectives.
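The ‘thousand times’ figure follows directly from the doubling rule: two decades contain ten two-year doubling periods, giving a factor of 2 to the power 10, or 1,024. As a one-line sanity check:

```python
# The doubling arithmetic behind the text's 'thousand times faster' claim
years = 20
doublings = years // 2   # one doubling per two-year period
speedup = 2 ** doublings
print(speedup)           # 1024 — roughly a thousandfold
```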
Exciting advances have been spearheaded by DeepMind, a London company now owned by Google. DeepMind’s cofounder and CEO, Demis Hassabis, has had a precocious career. At thirteen he was the world’s second-ranked chess player in his age group. He qualified for admission to Cambridge at fifteen but deferred entry for two years, during which time he worked on computer games, including conceiving the highly successful Theme Park. After studying computer science at Cambridge, he started a computer games company. He then returned to academia and earned a PhD at University College London, followed by postdoctoral work in cognitive neuroscience, studying the nature of episodic memory and how to simulate groups of human brain cells in neural net machines.
In 2016, DeepMind achieved a remarkable feat—its computer beat the world champion of the game of Go. This may not seem a ‘big deal’ because it’s been more than twenty years since IBM’s supercomputer Deep Blue beat Garry Kasparov, the world chess champion. But it was a ‘game change’ in the colloquial as well as literal sense. Deep Blue had been programmed by expert players. In contrast, the AlphaGo machine gained expertise by absorbing huge numbers of games and playing itself. Its designers don’t know how the machine makes its decisions. And in 2017 AlphaGo Zero went a step further; it was just given the rules—no actual games—and learned completely from scratch, becoming world-class within a day. This is astonishing. The scientific paper describing the feat concluded with the thought that
humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books. In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insight into the oldest of games.9
Using similar techniques, the machine reached Kasparov-level chess competence within a few hours, without expert input, and similar prowess in the Japanese game of Shogi. A computer at Carnegie Mellon University has learned to bluff and calculate as well as the best professional poker players. But Kasparov himself has emphasised that in games like chess humans offer distinctive ‘added value’ and that a person plus a machine, in combination, can surpass what either could accomplish separately.
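The leap from Deep Blue’s hand-coded expertise to AlphaGo Zero’s rules-only learning can be hinted at with a toy: given nothing but the rules of single-pile Nim (take one, two, or three stones; whoever takes the last stone wins), a short program can derive perfect play from scratch by exhaustively exploring its own lines of play. This sketch is vastly simpler than AlphaGo Zero’s reinforcement learning, but it captures the ‘tabula rasa’ idea: no game records or expert input, only the rules.

```python
from functools import lru_cache

# Rules-only perfect play for single-pile Nim: the program is told
# nothing except the legal moves and the winning condition.

@lru_cache(maxsize=None)
def wins(stones):
    # True if the player to move can force a win from this position.
    # A position is winning if some move leads to a losing position
    # for the opponent. (With no stones left, there are no moves,
    # so any() is False: the previous player took the last stone and won.)
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # Pick a move that leaves the opponent in a losing position, if one exists
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # lost position: every move loses against perfect play

print(wins(4))       # False — 4 stones is a losing position to move from
print(best_move(7))  # 3 — leave the opponent facing 4 stones
```

From the rules alone, the program rediscovers the classic result that multiples of four are losing positions, just as AlphaGo Zero rediscovered centuries of Go lore, though here by exhaustive search rather than learned intuition.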
AI earns its advantage over humans through its ability to analyse vast volumes of data and rapidly manipulate and respond to complex input. It excels in optimising elaborate networks, like the electricity grid or city traffic. When the energy management of its large data farms was handed over to a machine, Google claimed energy savings of 40 percent. But there are still limitations. The hardware underlying AlphaGo used hundreds of kilowatts of power. In contrast, the brain of Lee Sedol, AlphaGo’s Korean challenger, consumes about thirty watts (like a lightbulb) and can do many other things apart from play board games.
Sensor technology, speech recognition, information searches, and so forth are advancing apace. So (albeit with a more substantial lag) is physical dexterity. Robots are still clumsier than a child in moving pieces on a real chessboard, tying shoelaces, or cutting toenails. But here too there is progress. In 2017, Boston Dynamics demonstrated a fearsome-looking robot called Handle (a successor to the earlier four-legged BigDog), with wheels as well as two legs, that is agile enough to perform backflips. But it will be a long time before machines outclass human gymnasts—or indeed interact with the real world with the agility of monkeys and squirrels that jump from tree to tree—still less achieve the overall versatility of humans.
Machine learning, enabled by the ever-increasing number-crunching power of computers, is a potentially stupendous breakthrough. It allows machines to gain expertise—not just in game playing, but in recognising faces, translating between languages, managing networks, and so forth—without being programmed in detail. But the implications for human society are ambivalent. There is no ‘operator’ who knows exactly how the machine reaches a decision. If there is a ‘bug’ in the software of an AI system, it is currently not always possible to track it down; this is likely to create public concern if the system’s ‘decisions’ have potentially grave consequences for individuals. If we are sentenced to a term in prison, recommended for surgery, or even given a poor credit rating, we would expect the reasons to be accessible to us—and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans they have usurped.
Integration of these AI systems has an impact on everyday life—and will become more intrusive and pervasive. Records of all our movements, our interactions with others, our health, and our financial transactions, will be in the ‘cloud’, managed by a multinational quasi-monopoly. The data may be used for benign reasons (for instance, for medical research, or to warn us of incipient health risks), but its availability to internet companies is already shifting the balance of power from governments to the commercial world. Indeed, employers can now monitor individual workers far more intrusively than the most autocratic or ‘control freak’ traditional bosses. There will be other privacy concerns. Are you happy if a random stranger sitting near you in a restaurant or on public transportation can, via facial recognition, identify you, and invade your privacy? Or if ‘fake’ videos of you become so convincing that visual evidence can no longer be trusted?
2.3. WHAT ABOUT OUR JOBS?
The pattern of our lives—the way we access information and entertainment, and our social networks—has already changed to a degree that we would hardly have envisioned twenty years ago. Moreover, AI is just at the ‘baby stage’ compared to what its proponents expect in coming decades. There will plainly be drastic shifts in the nature of work, which not only provides our income but also helps give meaning to our lives and our communities. So, the prime social and economic question is this: Will this ‘new machine age’ be like earlier disruptive technologies—the railways, or electrification, for instance—and create as many jobs as it destroys? Or is it really different this time?
During the last decade the real wages of unskilled people in Europe and North America fell. So did the security of such people’s employment. Despite that, one countervailing factor has offered all of us greater subjective well-being: the consumer surplus offered by the ever more pervasive digital world. Smartphones and laptops have improved vastly. I value internet access far more than I value owning a car, and it’s far cheaper.
Clearly, machines will take over much of the work of manufacturing and retail distribution. They can replace many white-collar jobs: routine legal work (such as conveyancing), accountancy, computer coding, medical diagnostics, and even surgery. Many ‘professionals’ will find their hard-earned skills in less demand. In contrast, some skilled service-sector jobs—plumbing and gardening, for instance—require nonroutine interactions with the external world and so will be among the hardest jobs to automate. To take a much-cited example, how vulnerable are the jobs of three million truck drivers in the United States?
Self-driving vehicles may be quickly accepted in limited areas where they will have the roads to themselves—in designated parts of city centres, or maybe in special lanes on motorways. And there is a potential for using driverless machines in farming and harvesting, operating off road. But what is not so clear is whether automated vehicles will ever be able to operate safely when confronted with all the complexities of routine driving—navigating small, winding roads and sharing city streets with human-driven vehicles and cycles and pedestrians. I think there will be public resistance to this.
Would a fully autonomous car be safer than a car with a human driver? If an object obstructs the road ahead, could it distinguish between a paper bag, a dog, or a child? The claim is that it cannot infallibly do so but will do better than the average human driver. Is that true? Some would say yes. If the cars are wirelessly connected to one another, they would learn faster by sharing experiences.
On the other hand, we should not forget that every innovation is initially risky—think of the early days of railways, or the pioneering use of surgical operations that are now routine. Regarding road safety, here are some figures from the United Kingdom. In 1930, when there were only a million vehicles on the roads, there were more than 7,000 fatalities; in 2017 there were about 1,700 fatalities—a drop by a factor of four, even though there are about thirty times more vehicles on the roads than there were in 1930.10 The trend is due partly to better roads, but largely to safer cars and, in recent years, to satellite-based navigation systems (satnavs) and other electronic gadgetry. This trend will continue, making driving safer and easier. But fully automatic vehicles sharing ordinary roads with mixed traffic would be a truly disjunctive change. We are justified in being sceptical about how feasible and acceptable this transition would be.
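The per-vehicle improvement is even starker than the headline totals suggest. A quick check using the figures quoted above:

```python
# UK road-safety figures quoted in the text
deaths_1930, vehicles_1930 = 7_000, 1_000_000
deaths_2017, vehicles_2017 = 1_700, 30 * vehicles_1930   # about thirty times more vehicles

rate_1930 = deaths_1930 / vehicles_1930   # fatalities per vehicle per year
rate_2017 = deaths_2017 / vehicles_2017

improvement = rate_1930 / rate_2017
print(round(improvement))   # the per-vehicle fatality risk fell by a factor of roughly 120
```

So while total fatalities dropped by a factor of four, the risk per vehicle fell by two orders of magnitude, which is the trend that safer cars and electronic aids must continue.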
It may be a long time before truck and car drivers are redundant. As a parallel, consider what is happening in civil aviation. Although air travel was once dangerous, it is now amazingly safe. During 2017 there was not a single fatality, worldwide, on any scheduled airliner. Most flying is done on autopilot; a real pilot is needed only in emergencies. But the concern is that he or she may not be alert at the crucial time. The 2009 crash in the South Atlantic of an Air France plane en route from Rio de Janeiro to Paris demonstrates what can go wrong: the pilots took too long to resume control when there was an emergency and mistakenly aggravated the problem. On the other hand, suicidal pilots have actually caused devastating crashes that the autopilot couldn’t prevent. Will the public ever be relaxed about boarding a plane with no pilot? I doubt it. But pilotless planes may be acceptable for air freight.

Small delivery drones have a promising future; indeed, in Singapore, there are plans to replace delivery vehicles at ground level with drones flying above the streets. But even for these, we are too complacent about the risk of collisions, especially if they proliferate. For ordinary cars, software errors and cyberattacks cannot be ruled out. We are already seeing the hackability of the ever more sophisticated software and security systems found in automobiles. Can we confidently protect brakes and steering against hacking?
An oft-quoted benefit of driverless cars is that they will be hired and shared rather than owned. This could reduce the amount of parking space needed in cities—blurring the line between public and private transport. But what is not clear is how far this will go—whether the wish to possess one’s own car will indeed disappear. If driverless cars catch on, they will boost road travel at the expense of traditional rail travel. Many people in Europe prefer taking the train for a 200-mile journey; it is less stressful than driving and opens up time to work or read. But if we had an ‘electronic chauffeur’ who could be trusted for the entire journey, many of us would prefer to travel by car and get door-to-door service. This would reduce demand for long-distance train routes—but at the same time provide an incentive for inventing novel forms of transport, such as intercity hyperloops. Best of all, of course, would be high-grade telecommunications that obviate the need for most nonleisure travel.
The digital revolution generates enormous wealth for an elite group of innovators and for global companies, but preserving a healthy society will require redistribution of that wealth. There is talk of using it to provide a universal income. The snags to implementing this are well known, and the societal disadvantages are intimidating. It would be far better to subsidise the types of jobs for which there is currently a large unmet demand and for which pay and status are unjustly low.
It’s instructive to observe (sometimes with bemusement) the spending choices made by those who are not financially constrained. Rich people value personal service; they employ personal trainers, nannies, and butlers. When they’re elderly, they employ human caregivers. The criterion for a progressive government should be to provide for everyone the kind of support preferred by the best-off—the ones who now have the freest choice. To create a humane society, governments will need to vastly enhance the number and status of those who carry out caregiving roles; there are currently far too few, and even in wealthy countries caregivers are poorly paid and insecure in their positions. (It’s true that robots can take over some aspects of routine care—indeed, we may find it less embarrassing for basic washing, feeding, and bedpan routines to be handled by an automaton. But those who can afford it want the attention of real human beings as well.) And there are other jobs that would make our lives better and could provide worthwhile employment for far more people—for example, gardeners in public parks, custodians, and so forth.
It’s not just the very young and very old who need human support. When so much business, including interaction with government, is done via the internet, we should worry about, for instance, a disabled person living alone who must navigate websites to claim their rightful government benefits, or to order basic provisions. Think of the anxiety and frustration when something goes wrong. Such people will have peace of mind only if there are computer-savvy caregivers to help the bewildered cope with IT and to ensure that they can get help and are not disadvantaged. Otherwise, the ‘digitally deprived’ will become a new ‘underclass’.
It is better when we can all perform socially useful work rather than receive a handout. However, the typical working week could be shortened—to shorter even than France’s current thirty-five hours. Those for whom work is intrinsically satisfying are atypical and especially lucky. Most people would welcome shorter hours, which would release more time for entertainment, socialising, and for participation in collective rituals—whether religious, cultural, or sporting.
There will also be a resurgence of arts and crafts. We’ve seen the emergence of ‘celebrity chefs’—even celebrity hairdressers. We’ll see more scope for other crafts, and more respect accorded to their most talented exponents. Again, the wealthy, those who have the most freedom of choice, spend heavily on patronising labour-intensive activities.
The erosion of routine work and lifetime careers will stimulate ‘life-long learning’. Formal education, based on teaching done in classrooms and lecture halls, is perhaps the most sclerotic sector of societies worldwide. Distance learning via online courses may never replace the experience of attending a residential college that offers personal mentoring and tuition, but it will become a cost-effective and more flexible replacement for the typical ‘mass university’. There is boundless potential for the model pioneered by the United Kingdom’s Open University, a model that is now being spread widely via US organisations like Coursera and edX, where leading academics provide content for online courses. Teachers who do this best can become global online stars. These courses will be enhanced by the personalisation that AI will increasingly be able to provide. Those who become scientists often attribute their initial motivation to the web or media rather than to classroom instruction.
The lifestyle a more automated world offers seems benign—indeed enticing—and could in principle promote Scandinavian-level satisfaction throughout Europe and North America. However, citizens of these privileged nations are becoming far less isolated from the disadvantaged parts of the world. Unless inequality between countries is reduced, embitterment and instability will become more acute because the poor, worldwide, are now, via IT and the media, far more aware of what they’re missing. Technical advances could amplify international disruption. Moreover, if robotics renders it economically viable for wealthy countries to reshore manufacturing within their own borders, the transient but crucial developmental boost that the ‘tigers’ in the Far East received by undercutting Western labour costs will be denied to the still-poor nations in Africa and the Middle East, rendering the inequalities more persistent.
Also, the nature of migration has changed. A hundred years ago a European or Asian individual’s decision to move to North America or Australia required severing ties with his or her indigenous culture and extended family. There was therefore an incentive to integrate into the new society. In contrast, daily video calls and social media contacts now enable immigrants, if they so choose, to remain embedded in the culture of their homeland, and affordable intercontinental travel can sustain personal contacts.
National and religious loyalties and divisions will persist (or even be strengthened by internet echo chambers) despite greater mobility and less sentimentality about ‘place’. Nomads of the technocratic world will expand in numbers. The impoverished will see ‘following the money’ as their best hope—migrating legally or illegally. International tensions will get more acute.
If there is indeed a growing risk of conflicts triggered by ideology or perceived unjust inequality, it will be aggravated by the impact of new technology on warfare and terrorism. For the last decade at least, we’ve seen TV reports of drones or rockets attacking targets in the Middle East. They are controlled from bunkers in the continental United States—by individuals even more remote from the consequences of their actions than aircrews carrying out bombing raids. The ethical queasiness this engenders is somewhat allayed by claims that higher-precision targeting reduces collateral damage. But at least there is a human ‘in the loop’ who decides when and what to attack. In contrast, there is now the possibility of autonomous weapons, which can seek out a target—using facial recognition to identify individuals and then kill them. This would be a precursor to automated warfare—a development that raises deep concerns. Near-term possibilities include automated machine guns; drones; and armoured vehicles or submarines that can identify targets, decide whether to open fire, and learn as they go.
There is rising concern about ‘killer robots’. In August 2017, the heads of one hundred leading companies in this field signed an open letter calling on the United Nations to outlaw ‘lethal autonomous weapons’, just as international conventions constrain the use of chemical and biological weapons.11 The signatories warn about an electronic battlefield ‘at a scale greater than ever, and at timescales faster than humans can comprehend’. How effective any such treaty would be remains unclear; just as in the case of bioweapons, nations may pursue these technologies for allegedly ‘defensive’ motives, and through fear that rogue nations or extremist groups would go ahead with such developments anyway.
These are near-term concerns, for which the key technologies are already understood. But let’s now look further ahead.
2.4. HUMAN-LEVEL INTELLIGENCE?
The scenarios discussed in the last section are sufficiently near term that we need to plan for them and adjust to them. But what about the longer-term prospects? These are murkier, and there is no consensus among experts on the speed of advance in machine intelligence—and indeed on what the limits to AI might be. It seems plausible that an AI linked to the internet could ‘clean up’ on the stock market by analysing far more data far faster than any human. To some extent this is what quantitative hedge funds are already doing. But for interactions with humans, or even with the complex and fast-changing environment encountered by a driverless car on an ordinary road, processing power is not enough; computers would need sensors that enable them to see and hear as well as humans do, and the software to process and interpret what the sensors relay.
But even that would not be sufficient. Computers learn from a ‘training set’ of similar activities, where success is immediately ‘rewarded’ and reinforced. Game-playing computers play millions of games; photo-interpreting computers gain expertise by studying millions of images; for driverless cars to achieve this expertise, they would need to communicate with one another, to share and update their knowledge. But learning about human behaviour involves observing actual people in real homes or workplaces. The machine would feel sensorily deprived by the slowness of real life and would be bewildered. To quote Stuart Russell, a leading AI theorist, ‘it could try all kinds of things: scrambling eggs, stacking wooden blocks, chewing wires, poking its finger into electric outlets. But nothing would produce a strong enough feedback loop to convince the computer it was on the right track and lead it to the next necessary action’.12
Only when this barrier can be surmounted will AIs truly be perceived as intelligent beings, to which (or to whom) we can relate, at least in some respects, as we do to other people. And their far faster ‘thoughts’ and reactions could then give them an advantage over us.
Some scientists fear that computers may develop ‘minds of their own’ and pursue goals hostile to humanity. Would a powerful futuristic AI remain docile, or ‘go rogue’? Would it understand human goals and motives and align with them? Would it learn enough ethics and common sense to ‘know’ when these should override its other motives? If it could infiltrate the internet of things, it could manipulate the rest of the world. Its goals might be contrary to human wishes; it might even treat humans as encumbrances. An AI must be given a goal, but what is really difficult to instil is common sense: the judgement not to pursue that goal obsessively, and to be prepared to desist rather than violate ethical norms.
Computers will vastly enhance mathematical skills, and perhaps even creativity. Already our smartphones substitute for routine memory storage and give near-instant access to the world’s information. Soon translation between languages will be routine. The next step could be to ‘plug in’ extra memory or acquire language skills by direct input into the brain—though the feasibility of this isn’t clear. If we can augment our brains with electronic implants, we might be able to download our thoughts and memories into a machine. If present technical trends proceed unimpeded, then some people now living could attain immortality—at least in the limited sense that their downloaded thoughts and memories could have a life span unconstrained by their present bodies. Those who seek this kind of eternal life will, in old-style spiritualist parlance, ‘go over to the other side’.
We then confront the classic philosophical problem of personal identity. If your brain were downloaded into a machine, in what sense would it still be ‘you’? Should you feel relaxed about your body then being destroyed? What would happen if several ‘clones’ were made of ‘you’? And is the input into our sense organs, and physical interactions with the real external world, so essential to our being that this transition would be not only abhorrent but also impossible? These are ancient conundrums for philosophers, but practical ethicists may soon need to address them because they might be relevant to choices that real humans will make within this century.
In regard to all these post-2050 speculations, we don’t know where the boundary lies between what may happen and what will remain science fiction—just as we don’t know whether to take seriously Freeman Dyson’s vision of biohacking by children. There are widely divergent views. Some experts, for instance Stuart Russell at Berkeley, and Demis Hassabis of DeepMind, think that the AI field, like synthetic biotech, already needs guidelines for ‘responsible innovation’. Moreover, the fact that AlphaGo achieved a goal that its creators thought would have taken several more years to reach has rendered DeepMind’s staff even more bullish about the speed of advancement. But others, like the roboticist Rodney Brooks (creator of the Baxter robot and the Roomba vacuum cleaner) think these concerns are too far from realisation to be worth worrying about—they remain less anxious about artificial intelligence than about real stupidity. Companies like Google, working closely with academia and government, lead the research into AI. These sectors now speak with one voice in highlighting the need to promote ‘robust and beneficial’ AI, but tensions may emerge when AI moves from the research and development phase to being a potentially massive money-spinner for global companies.
But does it matter if AI systems are having conscious thoughts in the sense that humans do? In the view of the computer science pioneer Edsger Dijkstra, it’s a nonquestion: ‘Whether machines can think is about as relevant as the question of whether submarines can swim’. Both a whale and a submarine make forward progress through the water, but they do it in fundamentally different ways. But to many it matters deeply whether intelligent machines are self-aware. In a scenario (see section 3.5) where future evolution is dominated by entities that are electronic, rather than having the ‘wet’ hardware we have in our skulls, it would seem depressing if we’d been surpassed in competence by ‘zombies’ who couldn’t appreciate the wonders of the universe they were in and couldn’t ‘sense’ the outside world as humans can. Be that as it may, society will be transformed by autonomous robots, even though the jury’s out on whether they’ll possess what we’d call real understanding or whether they’ll be ‘idiot savants’—with competence without comprehension.
A sufficiently versatile superintelligent robot could be the last invention that humans need to make. Once machines surpass human intelligence, they could design and assemble a new generation of even more intelligent machines. Some of the ‘staples’ of speculative science that flummox physicists today—time travel, space warps, and the ultracomplex—may be harnessed by the new machines, transforming the world physically. Ray Kurzweil (mentioned in section 2.1 in connection with cryonics) argues that this could lead to a runaway intelligence explosion: the ‘singularity’.13
Few people doubt that machines will one day surpass most distinctively human capabilities; the disagreements are about the rate of travel, not the direction. If the AI enthusiasts are vindicated, it may take just decades before flesh-and-blood humans are transcended—or it may take centuries. But, compared to the aeons of evolutionary time that led to humanity’s emergence, even that is a mere blink of the eye. This is not a fatalistic projection. It is cause for optimism. The civilisation that supplants us could accomplish unimaginable advances—feats, perhaps, that we cannot even understand. I’ll scan horizons beyond the Earth in chapter 3.
2.5. TRULY EXISTENTIAL RISKS?
Our world increasingly depends on elaborate networks: electricity grids, air traffic control, international finance, globally dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns—real-world analogues of what happened in the 2008 global financial crisis. Cities would be paralysed without electricity—the lights would go out, but that would be far from the most serious consequence. Within a few days our cities would be uninhabitable and anarchic. Air travel can spread a pandemic worldwide within days, wreaking havoc on the disorganised megacities of the developing world. And social media can spread panic and rumour, and economic contagion, literally at the speed of light.
When we realise the power of biotech, robotics, cybertechnology, and AI—and, still more, their potential in the coming decades—we can’t avoid anxieties about how this empowerment could be misused. The historical record reveals episodes when ‘civilisations’ have crumbled and even been extinguished. Our world is so interconnected it’s unlikely a catastrophe could hit any region without its consequences cascading globally. For the first time, we need to contemplate a collapse—societal or ecological—that would be a truly global setback to civilisation. The setback could be temporary. On the other hand, it could be so devastating (and could have entailed so much environmental or genetic degradation) that the survivors could never regenerate a civilisation at the present level.
But this prompts the question: could there be a separate class of extreme events that would be ‘curtains’ for us all—catastrophes that could snuff out all humanity or even all life? Physicists working on the Manhattan Project during World War II raised these kinds of Promethean concerns. Could we be absolutely sure that a nuclear explosion wouldn’t ignite all the world’s atmosphere or oceans? Before the 1945 Trinity Test of the first atomic bomb in New Mexico, Edward Teller and two colleagues addressed this issue in a calculation that was (much later) published by the Los Alamos Laboratory; they convinced themselves that there was a large safety factor. And luckily, they were right. We now know for certain that a single nuclear weapon, devastating though it is, cannot trigger a nuclear chain reaction that would utterly destroy the Earth or its atmosphere.
But what about even more extreme experiments? Physicists aim to understand the particles that make up the world and the forces that govern those particles. They are eager to probe the most extreme energies, pressures, and temperatures; for this purpose, they build huge, elaborate machines—particle accelerators. The optimum way to produce an intense concentration of energy is to accelerate atoms to enormous speeds, close to the speed of light, and crash them together. When two atoms crash together, their constituent protons and neutrons implode to a density and pressure far greater than when they were packed into a normal nucleus, releasing their constituent quarks. They may then break up into still smaller particles. The conditions replicate, in microcosm, those that prevailed in the first nanosecond after the big bang.
Some physicists raised the possibility that these experiments might do something far worse—destroy the Earth, or even the entire universe. Maybe a black hole could form, and then suck in everything around it. According to Einstein’s theory of relativity, the energy needed to make even the smallest black hole would far exceed what these collisions could generate. Some new theories, however, invoke extra spatial dimensions beyond our usual three; a consequence would be to strengthen gravity’s grip, rendering it less difficult for a small object to implode into a black hole.
The second scary possibility is that the quarks would reassemble themselves into compressed objects called strangelets. That in itself would be harmless. However, under some hypotheses, a strangelet could, by contagion, convert anything else it encountered into a new form of matter, transforming the entire Earth into a hyperdense sphere about a hundred metres across.
The third risk from these collision experiments is still more exotic, and potentially the most disastrous of all: a catastrophe that engulfs space itself. Empty space—what physicists call ‘the vacuum’—is more than just nothingness. It is the arena for everything that happens; it has, latent in it, all the forces and particles that govern the physical world. It is the repository of the dark energy that controls the universe’s fate. Space might exist in different ‘phases’, as water can exist in three forms: ice, liquid, or steam. Moreover, the present vacuum could be fragile and unstable. The analogy here is with water that is ‘supercooled’. Water can cool below its normal freezing point if it is pure and still; however, it only takes a small localised disturbance—for instance, a speck of dust falling into it—to trigger supercooled water’s conversion into ice. Likewise, some have speculated that the concentrated energy created when particles crash together could trigger a ‘phase transition’ that would rip the fabric of space. This would be a cosmic calamity—not just a terrestrial one.
The most favoured theories are reassuring; they imply that the risks from the kind of experiments within our current powers are zero. However, physicists can dream up alternative theories (and write down equations for them) that are consistent with everything we know, and therefore can’t be absolutely ruled out, which would allow one or another of these catastrophes to happen. These alternative theories may not be frontrunners, but are they all so incredible that we needn’t worry?
Physicists were (in my view quite rightly) pressured to address these speculative ‘existential risks’ when powerful new accelerators came on line at the Brookhaven National Laboratory and at CERN in Geneva, generating unprecedented concentrations of energy. Fortunately, reassurance could be offered; indeed, I was one of those who pointed out that ‘cosmic rays’—particles of much higher energies than can be made in accelerators—collide frequently in the galaxy but haven’t ripped space apart.14 And they have penetrated very dense stars without triggering their conversion into strangelets.
So how risk averse should we be? Some would argue that odds of ten million to one against an existential disaster would be good enough, because that is below the chance that, within the next year, an asteroid large enough to cause global devastation will hit the Earth. (This is like arguing that the extra carcinogenic effect of artificial radiation is acceptable if it doesn’t so much as double the risk from natural radiation—radon in the local rocks, for example.) But to some, this limit may not seem stringent enough. If there were a threat to the entire Earth, the public might properly demand assurance that the probability is below one in a billion—even one in a trillion—before sanctioning such an experiment if the purpose was simply to assuage the curiosity of theoretical physicists.
Can we credibly give such assurances? We may offer these odds against the Sun not rising tomorrow, or against a fair die giving one hundred sixes in a row, because we’re confident that we understand these things. But if our understanding is shaky—as it plainly is at the frontiers of physics—we can’t really assign a probability, or confidently assert that something is unlikely. It’s presumptuous to place confidence in any theories about what happens when atoms are smashed together with unprecedented energy. If a congressional committee asked: ‘Are you really claiming that there’s less than a one in a billion chance that you’re wrong?’ I’d feel uncomfortable saying yes.
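The contrast drawn here can be made concrete. When the underlying model is fully understood, as with a fair die, odds of the required stringency are trivially computable; a minimal sketch (the one-in-a-trillion threshold is taken from the text, everything else is elementary arithmetic):

```python
# Probability that a fair die shows a six on 100 consecutive rolls.
# Because we understand the model completely, we can state this odd
# with confidence -- unlike probabilities at the frontiers of physics.
p_hundred_sixes = (1 / 6) ** 100

one_in_a_trillion = 1e-12
print(p_hundred_sixes)                           # roughly 1.5e-78
print(p_hundred_sixes < one_in_a_trillion)       # far below the threshold
```

The point of the passage is precisely that no such calculation is available for an unprecedented physics experiment: there, the probability attaches to our theories being right, not to a die we fully understand.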
But on the other hand, if a congressman asked: ‘Could such an experiment disclose a transformative discovery that—for instance—provided a new source of energy for the world?’ I’d again offer odds against it. The issue is then the relative likelihood of these two unlikely events—one hugely beneficial; the other catastrophic. I would guess that the ‘upside’—a benefit to humanity—though highly improbable, was much less unlikely than the ‘universal doom’ scenario. Such thoughts would remove any compunction about going ahead—but it is impossible to quantify the relative probabilities. So, it might be hard to make a convincingly reassuring case for such a Faustian bargain. Innovation is often hazardous, but if we don’t take risks we may forgo benefits. Application of the ‘precautionary principle’ has an opportunity cost—‘the hidden cost of saying no’.
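The weighing of two unlikely outcomes described above can be sketched as a toy expected-value comparison. All the numbers below are illustrative assumptions, not estimates from the text; the sketch only shows why the verdict is so sensitive to the (unquantifiable) probabilities and to how heavily an existential loss is weighted:

```python
# A toy expected-value comparison of the two unlikely outcomes: a hugely
# beneficial discovery versus a catastrophic 'universal doom' scenario.
# All probabilities and payoffs are hypothetical placeholders.

def expected_value(p_benefit, benefit, p_doom, doom_cost):
    """Net expected value of running the experiment."""
    return p_benefit * benefit - p_doom * doom_cost

# Suppose the upside is 1,000 times more probable than the doom scenario,
# and the doom costs 100 times the benefit: the balance favours going ahead.
print(expected_value(1e-6, 1.0, 1e-9, 100.0) > 0)   # True

# But if extinction forecloses a vast future -- Parfit's point -- its cost
# may dwarf any benefit, and the same probabilities flip the verdict.
print(expected_value(1e-6, 1.0, 1e-9, 1e7) > 0)     # False
```

The fragility of this calculation under different assumed stakes is exactly why, as the text says, a convincingly reassuring case for such a Faustian bargain is hard to make.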
Nonetheless, physicists should be circumspect about carrying out experiments that generate conditions with no precedent, even in the cosmos. In the same way, biologists should avoid creation of potentially devastating genetically modified pathogens, or large-scale modification of the human germ line. Cyberexperts are aware of the risk of a cascading breakdown in global infrastructure. Innovators who are furthering the beneficent uses of advanced AI should avoid scenarios where a machine ‘takes over’. Many of us are inclined to dismiss these risks as science fiction—but given the stakes, they should not be ignored, even if deemed highly improbable.
These examples of near-existential risks also exemplify the need for interdisciplinary expertise, and for proper interaction between experts and the public. Moreover, ensuring that novel technologies are harnessed optimally will require communities to think globally and in a longer-term context. These ethical and political issues are discussed further in chapter 5.
And, by the way, the priority we should accord to avoiding truly existential disasters depends on an ethical question that has been discussed by the philosopher Derek Parfit: the rights of those who aren’t yet born. Consider two scenarios: scenario A wipes out 90 percent of humanity; scenario B wipes out 100 percent. How much worse is B than A? Some would say 10 percent worse: the body count is 10 percent higher. But Parfit would argue that B might be incomparably worse, because human extinction forecloses the existence of billions, even trillions, of future people—and indeed an open-ended posthuman future spreading far beyond the Earth.15 Some philosophers criticise Parfit’s argument, denying that ‘possible people’ should be given as much weight as actual ones (‘We want to make more people happy, not to make more happy people’). And even if one takes these naive utilitarian arguments seriously, one should note that if aliens already existed (see section 3.5), terrestrial expansion, by squeezing their habitats, might make a net negative contribution to overall ‘cosmic contentment’!
However, aside from these intellectual games about ‘possible people’, the prospect of an end to the human story would sadden those of us now living. Most of us, aware of the heritage we’ve been left by past generations, would be depressed if we believed that there would not be many generations to come.
(This is a megaversion of the issues that arise in climate policy, discussed in section 1.5, where it is controversial how much weight we should give to those as yet unborn who will live a century from now. It also influences our attitude to global population growth.)
Even if we’d bet against an accelerator experiment or a genetic disaster destroying humanity, I think it is worth considering such scenarios as a ‘thought experiment’. We have no grounds for assuming that human-induced threats far worse than those on our current risk register can be dismissed. Indeed, we have zero grounds for confidence that we can survive the worst that future technologies could bring. It’s an important maxim that ‘the unfamiliar is not the same as the improbable’.16
These ethical questions are far from the ‘everyday’, but it’s not premature to address them—it’s good that some philosophers are doing so. But they also challenge scientists. Indeed, they suggest an extra reason for addressing questions about the physical world that may seem arcane and remote: the stability of space itself, the emergence of life, and the extent and nature of what we might call ‘physical reality’.
Such thoughts lead us from a terrestrial focus to a more cosmic perspective, which will be the theme of the next chapter. Despite the ‘glamour’ of human spaceflight, space is a hostile environment to which humans are ill-adapted. So, it’s there that robots, enabled by human-level AI, will have the grandest scope, and where humans may use bio- and cybertechniques to evolve further.