Chapter 9

Augmented wisdom?

APPLE IS THE largest company there has ever been, by market capitalisation. Its visual colophon, on every gadget, is an apple with a bite taken out of it. This is a reference to one of the central myths of the Christian era, the fall from grace in the Garden of Eden. In John Milton’s epic poem Paradise Lost (written at the time of the scientific revolution), the Devil tempts Eve to eat the forbidden fruit of the tree of knowledge, the ‘Sacred, Wise, and Wisdom-giving Plant,/Mother of Science’ because she will then not only be able to ‘discerne/Things in their Causes, but to trace the wayes/of highest Agents’. From the moment of original sin, when Adam and Eve discover their nakedness and their mortality, they begin to be tortured by the puzzles of science and self-consciousness, knowledge and wisdom. They are also thrown out of Paradise, a walled garden of the kind Apple is criticised for creating on their devices.

The digital ape has come a long way very quickly, and is still accelerating. We share 96 per cent of our genes with our nearest relative, the chimpanzee — and 70 per cent with a packet of fish fingers. The unique four per cent that makes us human will never completely outwit the ‘animal’ 96 per cent. Nor vice versa. But stupendous multipliers are now available that increase the effects of that four per cent, whose key ingredients are the genes that give us opposable thumbs, which cause us to be tool-makers, and makers of collective wonders like language, culture, and knowledge. Milton’s apple is also the one that mythically taught his contemporary Newton about gravity, and perhaps also the one eaten by a persecuted suicidal Turing, laced with poison. The tree of knowledge is still dangerous, and we are still fixated by the promise of immortality held by the tree of life.

As a species, our defining characteristic is that we know ourselves and about ourselves. Up to a point. We learn. Up to a point. So we understand our relationship over hundreds of thousands of generations with our tools. We see those tools for what they now are: super-fast, hyper-complex, immensely powerful. Digital instruments, entangled with our very nature, have come to dominate our environment, a new kind of digital habitat, emerging very rapidly. What do we do with our knowledge of this?

We have entered a new phase of companionship with robots, which are increasingly able to perform both social and industrial tasks once confined to humans. We have noted Luciano Floridi’s speculation that this could, paradoxically, lead us backwards to an animist world, where most people believe their environment is alive with spirits. Which in a limited sense it undoubtedly will be. But we doubt that the immensely sophisticated digital ape will fall for any crude misunderstanding of the status of the things around us. And, to repeat, gestation of a conscious non-biological entity is simply beyond our present capacity.

We made a declaration in our opening chapter, and have tried to illustrate it in the whole book: the digital ape has choices to make. There is a variety of extreme anti-Luddite thought, which amounts to the claim that, since technological change as a whole is inevitable, so is the full-scale implementation of every individual new technology, and there is nothing anybody can do about it. So it is pointless to think that the widespread use of robots and artificial intelligence can be tempered or managed. It’s just going to happen, uncontrolled. A moment’s thought should convince the reader that that viewpoint is simply wrong. For over 70 years, tens of thousands of people have had their fingers on nuclear triggers. At the time of writing, none has launched an attack in anger since 1945. This has been a global social and political choice. The automobile is a tremendous boon. It also kills. The populations of France and the UK are very similar in size. Total miles driven are similar. The death rate in France is twice that of the UK. For two main reasons. French roads, autoroutes in particular, are designed to a lower safety standard than UK roads. And seat belt laws and drunk driving laws are largely obeyed in the UK, less so in France. These are governmental and social choices, from populations with different priorities. Or a final example: here is a stark headline from The Onion (a web spoof newspaper) on the occasion of yet another US mass shooting, this one in Las Vegas: ‘“No Way to Prevent This,” Says Only Nation Where This Regularly Happens.’

Automobile and highway technology is a corpus of knowledge available to the whole world. Gunpowder and metal, and how to make them interact, have been known to all for centuries. Once Russian spies had passed on US and NATO nuclear secrets, that science was known to any nation rich enough to implement it. But different choices have been made in different places, and collective choices have been made about holding back from unacceptable risk. The same can, should, and will be true about artificial intelligence.

Is the digital ape, all senses augmented, swamped with information, aided by gangs of robots and algorithms, likely to make the right choices about how to live on this planet, about what kind of ape to be as the decades roll on? Let’s look at some specific choices we have now.

*

As a matter of fact, this book contains 113,000 words. Well, it’s not that simple. As a matter of fact, this book contains 112,665 words in Microsoft Word, and 114,982 words in Apple’s Pages. Believe it or not, two of the world’s largest corporations, Microsoft and Apple, disagree about what a word is. Specifically, Apple counts the words in a hyperlink; Microsoft thinks a hyperlink, however long, is only one word. And Apple thinks two words hyphenated or connected by a slash are two words; Microsoft thinks they are one. The phrase ‘Dallas/Fort Worth airport is a hyper-complex fact-free zone’ is 11 words long in Pages and eight words long in Word. No wonder lifelong specialists can disagree on the exact size of the US economy.
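The divergence is easy to reproduce. Here is a minimal sketch of the two counting rules as described above; the real tokenisers inside Word and Pages are not public, so the rules below are our reconstruction, not either company’s actual code.

```python
import re

phrase = "Dallas/Fort Worth airport is a hyper-complex fact-free zone"

# Word-style rule (as described above): whitespace alone separates words,
# so "Dallas/Fort" and "hyper-complex" each count as a single word.
word_style = len(phrase.split())

# Pages-style rule (as described above): slashes and hyphens also
# separate words, so the same phrase yields more tokens.
pages_style = len(re.split(r"[\s/-]+", phrase))

print(word_style, pages_style)   # 8 11
```

The same tiny definitional choices, multiplied across millions of documents, are why two careful counters of anything, words or dollars, can honestly disagree.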

A widespread and potent meme at present is that the western democracies are in a ‘post-fact’ phase. All those people who wrongly voted for the egregious Donald Trump. The majority of the electorate in the UK referendum who defied the conventional metropolitan wisdom and decided they would rather not be in the European Union any longer. They are all, obviously, fact deniers, anti-sense and anti-science. If they weren’t, they’d agree with us, the intelligent people, wouldn’t they? In the UK, much is made of the Out faction driving about in a campaign bus with, painted on the side, a declaration of the weekly cash equivalent of the lost sovereignty, which the In faction regarded as inaccurate. Therefore, Brexit happened because nobody cares about the difference between truth and lies any more.

Except that, if we move from transitory specifics to the general idea, it is difficult to see what kind of evidence, what kind of performance indicators even, would support or undermine the hypothesis that nobody cares about the views of experts, professionals, authority figures any more, and nobody is interested in the truth. When arguably the opposite is the case. Here are three broad swings at why:

First, let’s assume that the nearly 7.5 billion people on the planet today have at least as many things to disagree about as the 3.5 billion people on the planet 50 years ago. And their disagreements are more upfront, in their faces, discovered more quickly. Well-ordered societies and rational individuals settle their differences by discussion of fact, theory, moral precepts, inventive sarcastic abuse, not in armed conflict. Churchill, a lifelong and mighty warrior, was surely right, speaking at a White House dinner in 1954: it is better to jaw-jaw than to war-war. Harvard professor Steven Pinker, in The Better Angels of Our Nature, adduces masses of evidence for what is now the broadly accepted thesis: the rate of violence in the world declined throughout the nineteenth and twentieth centuries. It declined internationally, in wars between states, the total falling despite horrific exceptions in two World Wars and many smaller wars. It declined within nation states, despite horrific pogroms and holocausts. It declined at the domestic level: the murder rate in all the major countries has fallen consistently for decades. In short, the planet has decided to settle its differences by discussion, not by violence. Post-rational?

Second, the number of years of school attendance, and the proportion of young (and older) people attending higher education, are increasing in every developed country. There is a voracious appetite in the UK, for instance, to attend university. The number of universities has trebled over 50 years. The number of university students in 2016 was the highest it has ever been, and within those numbers, access for the poorest students is twice as good as it was 10 years before. This is broadly true in the US, too, and across all richer countries. Never has there been such a passion to become qualified, to have a profession, to train to understand. Post-expert?

Third, remember that people around the world make over 6.5 billion internet searches per day, around 2.4 trillion attempts per year to find stuff out. Never mind that many of them just want to know which celebrity is dating which other celebrity, or what time Walmart closes on Saturday. They also sustain the largest and most accessible encyclopaedia ever known, and more citizen science and more professional science than has ever before been available. The range of topics covers every aspect of life on the planet. None of that existed at all 50 years ago. How on earth can 2.4 trillion searches for information lead us to believe that nobody any longer wants somebody more expert than them to tell them the facts? Post-factual?

One interesting phenomenon, worthy of some exegesis in itself: virtually all sensible, digitally literate people now self-diagnose their illnesses, real and imagined. Before and after visiting a doctor. They self-analyse their legal problems. They self-advise how to deal with their financial problems. The vast extension of the individual’s ability to trawl the marketplace for goods and services has been paralleled by a vast extension of the individual’s ability to trawl the marketplace for information, and that inevitably overlaps with what might previously have been sourced at the premises of the doctor, the solicitor, the bank manager.

And, naturally, that leads to a credibility gap, on top of any that experts may have caused for themselves. After a stupendous economic crash that hurt a lot of people, and which went almost entirely unpredicted by the economics profession, why would a rational person believe any economist’s prognostications? There is an answer to that, but it does not rest with merely checking that the last economist to speak had a degree certificate from a fine old university. Any intelligent person must take their own counsel, first and last. And should always doubt themselves for it, though that is harder to do. Few of us purposefully buy a daily newspaper that we disagree with. For at least a century, and arguably before that, too, people have sought out sources for news and opinion that broadly coincide with their own view of the world, whilst also demanding that professional journalism be properly sourced and objective. A difficult balance, but not a contradiction. The new technology has hugely accelerated this. For every CBS or BBC, there are a thousand well-intentioned bloggers repeating what they think they heard from other well-intentioned bloggers, of every political and social stripe. Almost any theory or prejudice can easily be confirmed with a few keystrokes. The blogosphere is a vast archipelago of islands each with its own culture and values, disconnected from its neighbours, where it is possible to believe anything one likes. But does this make intelligent people more prejudiced? Readers can ask themselves whether they are smart enough to keep taking their own counsel as well.

What also seems a simple truth is that important political choices should, so far as possible, be based on the facts as far as they can be ascertained, or conscious assessment of the degrees of risk and ignorance about the facts involved. So-called ‘evidence-based policy’. This is both tritely true and an oxymoron. The digital ape thinks and acts within ethical, normative frameworks, and tautologically should always do so. Policy is the analysis and implementation of ethics, and no amount of facts can lead from what is to what ought to be. Bad facts and fake news add an extra dimension which, non-controversially, we could all do without; but good facts and hard news don’t remove the need for moral choices.

Scammers, spammers, and crooks lawful and unlawful rush to fill the credibility gap. Tim Berners-Lee is surely right both to campaign against ‘fake news’, and also to seek technical solutions:

Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And, they choose what to show us based on algorithms which learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on — meaning that misinformation, or ‘fake news’, which is surprising, shocking, or designed to appeal to our biases can spread like wildfire. And through the use of data science and armies of bots, those with bad intentions can game the system to spread misinformation for financial or political gain …

We must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while avoiding the creation of any central bodies to decide what is ‘true’ or not.

‘Three Challenges for the Web, According to its Inventor’, World Wide Web Foundation website, 12 March 2017

The present authors would certainly argue for respect for expertise, and a cool, hard look at anything that claims to be fact, in a cross-disciplinary world in which everyone has limited expertise and knowledge, as both are increasing exponentially. The Canadian science fiction author A. E. van Vogt coined the term ‘nexialism’ to describe an inter-disciplinary approach, the science of joining together, in an orderly fashion, the knowledge of one field of learning with that of other fields. The most basic rule of nexialism is this: a model in discipline A should not make assumptions in the field of discipline B that would not be respectable in that discipline. For instance, it is common practice in economics departments to create models with psychological postulates that would be laughed at in the psychology department a few doors down the corridor. When the Israeli-American Daniel Kahneman applied straightforward psychology to economic decision-making, he won the Nobel Prize.

*

We discussed in Chapter 3 the long history of upset about our cousinage with the apes. There are famous Victorian cartoons and paintings depicting Darwin as half man, half ape. Bishop Wilberforce demanded to know from Darwin’s champion, Thomas Henry Huxley, whether he was descended on his grandfather’s or grandmother’s side from a monkey. Yet the human form is an ape form, even though it does not much resemble our monkey or chimpanzee cousins. Darwin was an ape already, and a tool-using ape; but not yet a digital ape.

Bishop Wilberforce, an intelligent man, and a good enough scientist to be a Fellow of the Royal Society, misunderstood several points, including the fundamental one about time. It seemed impossible to sophisticated adults that Darwin’s summer hedgerow, teeming and humming with thousands of species of life, dependent on each other for food and for pollination, could have developed by accident. Even the most brilliant pigeon breeder, working consciously, would have taken thousands of lifetimes to breed such variety from some imaginary original rootstock. For it to happen via the slow road of fitness to the environment … The answer to that lay with Charles Lyell, a close friend and colleague of Darwin’s. Lyell’s influential Principles of Geology had established in the minds of many of his contemporary scientists both the very great age of the earth, and, moreover, that geological transformations happened by constant small events building up over immensely long periods of time. But what was the mechanism of change in life forms?

The answer to that is Gregor Mendel, the founder of modern genetics. And, here’s the rub: Darwin never read Mendel. (It is apparently not true, as sometimes claimed, that he had an uncut copy of ‘Experiments on Plant Hybridization’ in his library.) And so some of the fundamental implications of his own major work eluded him. At the time of the first edition of On the Origin of Species, he imagined, rather vaguely, that the child was a kind of blend of the characteristics of its mother and father. Mother has blue eyes; Father has brown; so their daughter’s might be green-ish. She’s tall; he’s small; so she might be medium-ish. Smart readers of the first edition wrote to him, pointing out that such a view sat very uneasily with the otherwise persuasive principles of natural selection. Interesting and useful new character traits, thrown up by whatever the method of variation was (unknown to Darwin), would just average away again in a couple of generations. Darwin corrected later editions of On the Origin of Species, but was never aware of Mendel’s genetics.
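A toy calculation shows the force of the objection Darwin’s readers raised. Under blending, a novel trait mated into an ordinary population loses half its distinctiveness in every generation. The sketch below is our illustration of the arithmetic, not anything Darwin or his correspondents wrote:

```python
# Blending inheritance: a child's trait value is the average of its parents'.
# Track one novel variant (trait value 1.0) in a population whose ordinary
# trait value is 0.0, mating into the general population each generation.
novel, ordinary = 1.0, 0.0

trait = novel
for generation in range(1, 8):
    trait = (trait + ordinary) / 2   # blend with an ordinary mate
    print(f"generation {generation}: {trait:.4f} of the original advantage left")

# After seven generations, under 1% of the advantage remains: natural
# selection has almost nothing left to act on. Mendelian particulate
# inheritance escapes the problem, because a gene is passed on intact
# or not at all; it does not dilute.
```

This is exactly why the blending assumption sat so uneasily with natural selection, and why Mendel’s particulate mechanism was the missing piece.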

Darwin’s vast intellectual correspondence was an immensely powerful network, arguably as effective as e-mail exchange would have been, and centred in a Victorian household run as a laboratory, largely happy, if troubled by morbidity and mortality. The key failure of one of the most influential and successful scientists of all time was that, for some reason, he baulked at spending some of that great pile of pottery money (he married a Wedgwood cousin) on an assistant on the premises. Paul Johnson, the British historian, is very good on this downside of Darwin’s methodology:

The truth is, he did not always use his ample financial resources to the best effect. He might build new greenhouses and recruit an extra gardener or two, but he held back on employing trained scientific assistants. A young man with language and mathematical skills, with specific instructions to comb through foreign scientific publications for news of work relevant to Darwin’s particular interest … would almost certainly have drawn his attention to Mendel’s work and given him a digest in English. There is no question that Darwin could have afforded such help … Darwin and Mendel, two of the greatest scientists of the epoch, never came into contact.

Darwin: portrait of a genius, 2012

In the Google universe we now inhabit, many new kinds of communication failure have been invented. But this old one should be extinct. The digital Darwin surely knows about the digital Alfred Russel Wallace and the digital Mendel. Academic life in western universities suffers from fashion and prejudice just as much as any other field of social endeavour. But peer-reviewed journals have well-established procedures for sorting much wheat from most chaff. New versions of this process now extend across many professional open access websites.

But this is nowhere near as strong a force for good as it could be. The staggeringly large companies that now own cyberspace are problematic. They do have immense piles of information, which form an integral part of the world’s digital infrastructure, yet these are managed and curated for private, monopolistic gain. Google has data about trillions of searches: what was asked first, what was asked next, where and at what time of the day the question arose. The spread of diseases can be tracked from the pattern of searches about headaches, or rashes in children. Sentiment analysis, across large populations, can reap a fine harvest of new understanding of society, of politics, of dangers. But not if we have no access to it. There are several strong arguments that could be made here, to only some of which, on balance, the present authors subscribe. First, that Google is simply too large, too powerful, too monopolistic, too unanswerable to the citizenry, to be allowed to continue as it is. In principle, it could be broken up into either its existing divisions — search, cars, advertising, Google Earth, server estate etc. — or into competing geographical baby Googles. Just as AT&T, originally Bell Telephone, which owned pretty much all of the US system, was broken into seven baby Bells in 1984. Difficult, right now, to see how that would work internationally, but we should bear it in mind. Second, governments could require regulated access to the massive research possibilities of the records that Google holds. We would favour that, if it were the only way forward, although frankly a corporation that prides itself that its motto is ‘Don’t be evil’ should just get off its backside and voluntarily do a lot of good. Third, Google should pay a full and proper amount of tax. No question about that one. It should happen right now, internationally.
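To make the research point concrete, here is a minimal sketch, with invented numbers, of the kind of syndromic analysis such records would support: testing whether search volume for a symptom leads recorded cases by a week. The figures and the simple lagged correlation are our own illustration, not any actual Google dataset or method:

```python
# Invented weekly data: searches for a symptom, and recorded cases.
searches = [120, 150, 210, 340, 520, 610, 580, 430]
cases    = [ 10,  12,  15,  24,  41,  66,  79,  70]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Do this week's searches track this week's cases, or next week's?
same_week = pearson(searches, cases)
one_week_lead = pearson(searches[:-1], cases[1:])   # searches a week earlier
print(f"same week: {same_week:.2f}; searches leading by a week: {one_week_lead:.2f}")
```

If the lagged correlation is consistently the stronger one, searches are an early-warning signal, which is precisely the kind of public-health insight that stays locked up while the search records remain private.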

We are not Google bashers; this argument applies to all of the largest platform internet companies. This isn’t anti-capitalist. We are taught that perfect information makes for perfect markets, but only if the information is equally accessible. Making suitable amendment for the different trades they are in, much the same is true for the other big internet empires. They all scandalously pay less tax than they should. They all control large estates of essential infrastructure, for instance servers, which need to be considered and treated for planning and protection purposes as public assets, whoever the legal owner may be. It is worth saying that old-fashioned commercial corporations are just as greedy and deceptive as newer ones. Volkswagen and other large automobile manufacturers rigged the software in the engine control systems under millions of hoods. The software recognised when the engine was being tested by government agencies for dangerous polluting emissions, and changed the behaviour of the engines for the duration of the tests. But they were caught and heavily fined: legal systems and legislatures remain fit for purpose in the face of new kinds of fraud on consumers and society.

There are other ways of behaving. The dozen or so huge digital empires are, as we have noted, owned by a small number of rich, white American men. Yet that has not been the path of at least two of the modern heroes of this story. Tim Berners-Lee legendarily refused to register any intellectual property in the World Wide Web or HTML, the substrate of the fortunes of those corporate founders. Rather than pre-emulate Tobias Hill’s cryptographer and become a quadrillionaire, as a matter of principle he wanted the web to be free to and for all. Jimmy Wales, the deviser of Wikipedia, took a similar view. Wikipedia has a unique form of governance. Although it is owned by a small charitable foundation, it operates as a collective of the Wikipedians, who are self-selected.

An important role has been played in the past two decades by young, technically very able financiers; and an elite of tremendously powerful digital capitalists has fashioned for us a hyper-complex world. The dangers of that hyper-complex world continue to emerge, with uncertain consequences, from the changes they have wrought. Changes have then spread, often beginning with young people, certainly with better-off and better-educated people, before changing the world for everybody. Those stepping stones do not have to be the pattern for the future. A higher proportion of women, artists, and grown-ups in computing would have led to a different web, different devices, a different approach to the present odd balance between secrecy and showing off; and should be encouraged vigorously.

*

A different danger lies in the hidden nature of much machine decision-making. Algorithms need to be accountable. This is a major new covert change in the relationship between individuals and big corporate and state bodies. The basis for official or commercial or safety-critical judgements, with direct implications for citizens and consumers, used to be visible, questionable. That seems to be increasingly untrue. Powerful machines making choices important to individuals can, as we have pointed out, powerfully and obscurely come to the wrong answer. Concern, for instance, about sentencing software in the US has been widely reported. The trouble is, to be honest, that it works. The stated aims are good. The huge prison population in the US is a shameful national scandal. The rate is the highest in the world: Americans are five times more likely to be locked up than Chinese or Brits in their homelands. We know that some convicted felons will re-offend when they emerge from jail, some won’t. Fine-grained data exists about the differences between the two broad groups, and there are dozens of identifiable sub-groups. Society jails criminals in part to punish, yes, but also to keep them from endangering the public. So it seemed to legislators in the US that a sensible start on prison reduction would be to give judges and magistrates well-grounded information about whether the person in front of them, about to be jailed, was likely to re-offend, and if not, reduce sentences accordingly.

The trouble is that, when computer-generated risk scores are used for sentencing and parole decisions, the designers of the software, in building up general principles to apply to individual cases, do not merely look at records of re-offending, but also give a value to whatever the re-offence was. There are, unfortunately, two elements of bias there. First, some classes of people, in the data examined, were more likely to be convicted than others, because judges and jurors were more likely to have previously thought they were the criminal type. African-Americans are far more likely to be incarcerated than their non-black peers. Second, the same categories of people were, over the years of analysis, more likely to have been given tougher sentences. Therefore they must have committed higher-value offences, mustn’t they? When a judge, on conviction, asks the software to help with awarding a proper sentence, the software examines all the relevant characteristics, pulls all its encoded prejudices out of its silicon bones, and reinforces the stereotype. It, in effect, accelerates the problem. The trial, unwittingly and with generally the best of intentions, has its own prejudices, which are then multiplied by the prejudice in previous trials. And because the computer-aided process has been running for over a decade in many parts of the US, the computer-aided bias is now reflected in the data which updates the software. Human rights activists, whilst applauding the desire to reduce needlessly long sentences, don’t like bias being hard-wired into the judicial process.
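The feedback loop can be made concrete with a toy model. The sketch below is our own construction, not the actual sentencing software: two groups have identical true offence severity, but sentencers are 20 per cent harsher on one of them, and the algorithm is retrained in each generation on the sentences that result:

```python
# Toy model of bias amplification through retraining. Both groups have
# IDENTICAL true offence severity; the only difference is human bias.
TRUE_SEVERITY = 1.0
human_bias = {"A": 1.0, "B": 1.2}    # courts are 20% harsher on group B
recorded = {"A": 1.0, "B": 1.2}      # the historical record already reflects it

for generation in range(20):
    for group in recorded:
        # The software recommends a sentence by blending the true severity
        # of the new offence with the recorded history for that group...
        recommendation = 0.5 * TRUE_SEVERITY + 0.5 * recorded[group]
        # ...the court applies its residual bias, and the outcome is
        # recorded, becoming the next generation's training data.
        recorded[group] = human_bias[group] * recommendation

print(recorded)   # ~{'A': 1.0, 'B': 1.5}
```

The per-trial human bias is only 20 per cent, but because each generation’s recommendations are built on the previous generation’s biased record, the recorded gap settles at around 50 per cent: the software has multiplied the prejudice, exactly as described above.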

There is a generalisable proposition here. There need to be clear rules for the transparency of algorithmic decision-making, the principles and procedures on which choices about the lives of individuals and groups are being made. Just as there are already laws about contracts and accounts, and, in some places, plain language in government and commerce. On the positive side, algorithms will soon remove many of the boring parts of the administrative functions of the modern state, both slimming down bureaucracy, and allowing policymakers, at the head of teams of algorithms, to concentrate on issues and solutions rather than process.

*

For all the phenomenal augmentation of the common digital ape’s everyday capacities, still many are excluded. Both in the poorer half of the world, and in the poorest (in information terms) tenth of the developed world. Digital pioneer Martha Lane Fox founded Doteveryone:

Making the Internet work for everyone … Almost half the world’s population is online. But while digital technology connects us as individuals more than ever before, it brings new divisions and inequalities we must face as a society.

Our visions of the future are dominated by a few large companies. Our citizens are engaging digitally with politicians who don’t understand the channels they’re using. Our end users are diverse, but our designers and developers are not. We need a deeper digital understanding that enables everyone to shape the future of tech. That’s why Doteveryone exists: to help make technology more accountable, diverse, useful, and fair. To make the internet work for everyone.

Following her 2015 Dimbleby lecture, Martha Lane Fox founded Doteveryone, a team of researchers, designers, technologists, and makers based in London. We explore how digital technology is changing society, build proofs of concept to show it could be better for all, and partner with other organisations to provoke and deliver mainstream change. We believe technology can change the world for the better — but it needs to be deployed responsibly. And we’re here to help make that happen.

Doteveryone website

So the next phase of the digital revolution simply must encompass everyone. The overwhelming mass of us did not make any direct choice about the old world being overtaken by this new one. In the round, we just happened to be here. We did, true, make lots of individual choices about, say, whether to have a Twitter or Facebook account, and if those two fascinating and positive means of communication had not been immensely attractive to millions of people they would not exist in the form and on the scale that they do. But those of us who do not have a Twitter account still live in a world in which those who do can spread gossip about us to millions at immense speed. Many of us can’t help but feel that the pace of transformation has been too great. All of us certainly should agree that government needs to carefully manage societal transition as robots and algorithms change our workplaces. Some think heavier resistance is called for.

Digital dissent is not a new phenomenon. It would be quite wrong to regard it as simply a new variant of Luddism. Much of it is a perhaps growing feeling, shared by many people, that we have rushed headlong into our new, uncertain, hyper-complex habitat. This dissent is understandable. The simple fact remains that the common people, the great washed, us, did not choose the extraordinary transition we and everyone dear to us are in. The digital ape may be about to become the first ever species to consciously change itself, ‘improve’ its own DNA. To most people, the decisions about the scope of new technology seem to be being made somewhere else. The objects and options themselves come out of sorcerers’ laboratories. The digital ape is bootstrapping itself not merely into the Anthropocene, the age when what we do is the only thing that matters to the planet, but also into a different version of itself, with control over its genes, the nature of physical reality, of place, of time and space. Perhaps a self-conscious species, bound up in language and symbolic thought, on every planet that this occurs, inevitably either blows itself or the atmosphere up when it discovers nuclear bombs and fridge chemicals; or takes over its own genes and chooses its own nature.

*

And to move to the sharp end of life and death … Cyber warfare takes several forms, all of them involving the weaponisation of artificial intelligence. First came the AI control of conventional weapons. Cruise missiles as far back as the first Iraq War used very smart GPS, navigation, and terrain-following software systems to home in on precise targets. The United States now has around 10,000 unpiloted aerial vehicles — or, at least, it has 2000 ground-based military pilots flying five times that many drones. The Predator craft, for instance, carries Hellfire missiles. In every year of the past decade, Predator and other marques have carried out hundreds of what are best described as executions of American enemies, in Afghanistan, Iraq, and Pakistan. Those supposedly friendly governments are no longer informed in advance of the executions. Since cheap drones are widely available and flown now by many private citizens, criminals already make use of them to fly contraband in and out of jails, and presumably across borders, too. ISIS used drones to drop explosives in the fight for Mosul.

Second, nation states now use digital techniques to attack each other. The Stuxnet virus badly damaged the Iranian nuclear programme in a series of assaults on centrifuges at its Natanz facility. Although the virus weapon may have been deployed by the Israeli intelligence agency Mossad, the security services of the United States are both its originator and main user in many other stealthy attacks. The United States, unsurprisingly, spends around $15 billion a year on cyber defences. Russia has been accused of interfering electronically in the 2016 US election, both to rig the vote-counting machines and to steal and leak documents embarrassing to Hillary Clinton. The former is far-fetched and unproven; the latter probably true. (Although … why shouldn’t voters know embarrassing facts if they exist?)

Russian and Chinese cyber attacks on western states and industry, to damage or to spy, are now legion. Sooner rather than later, someone will transgress unacceptably, and there will be a hue and cry for retaliation. What we need is, in effect, a tariff, spoken and unspoken, just as there is for conventional spying and military attacks. The US catches a Russian spy, they expel 20 ‘diplomats’ from the Russian embassy in Washington, and the Russians do the converse in Moscow. If North Korea fires a missile at the US, bigger missiles rain down on Pyongyang. Workable treaties on strategic arms limitation (SALT) have been effective.

*

Authoritarian regimes have existed for millennia, and were the predominant model in much of the world in the twentieth century. It is tempting to feel that the all-pervasive potential of new technology is Kafkaesque and Orwellian. The new technology adds hyper-complex twists to the armoury of repression, but it equally creates new ways in which liberal ideals can flourish. We should remember, though, that Kafka, a native of Prague, died in 1924, and Orwell in London in 1950, both in their forties and both from tuberculosis, well before the new technology was created. Their novels were brilliantly prescient satirical reactions to regimes that seemed already to approach total control: arbitrary, overcomplicated, and vicious.

The institutions of democracy have been fashioned around the simple fact that complex decisions have, until now, always been made by a number of people small enough to fit into a large room. Only the simplest decisions, where a yes/no answer, or pick a person from this list, will do, have been made formally by much larger groups. New technology removes these underlying constraints. It is now possible to have elections and public consultations of any complexity, if they are run online. The internet polling company YouGov ran a national budget simulator in spring 2011 in the UK. Budget simulators allow all citizens to give their opinion on the overall emphasis as well as the individual elements of the national budget. It is quite possible to build a device that not only says how we want our taxes spent, but also how we want our taxes raised. It would be a crucial addition to our democracy if the government, as a matter of routine, consulted the public in this new detailed way, and were obliged to explain the grounds for its (perfectly proper) divergence from the majority view of the public.
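Mechanically, a budget simulator needs only two core operations: validating that each citizen’s allocation still balances, and collating the submissions into an overall public preference. The spending heads, figures, and function names in this sketch are our invention, not those of the YouGov tool:

```python
# A minimal budget-simulator core: citizens reallocate a fixed total
# across spending heads; the tool enforces balance and averages the results.
BUDGET_TOTAL = 100.0   # percentage points to allocate

HEADS = {"health", "education", "defence", "transport", "welfare"}

def validate(allocation: dict) -> None:
    """Reject submissions that invent new heads or do not balance."""
    if set(allocation) != HEADS:
        raise ValueError("allocation must cover exactly the listed heads")
    if abs(sum(allocation.values()) - BUDGET_TOTAL) > 0.01:
        raise ValueError("allocation must sum to the fixed total")

def collate(submissions: list) -> dict:
    """The consultation result: the mean allocation across all citizens."""
    n = len(submissions)
    return {h: sum(s[h] for s in submissions) / n for h in HEADS}

# Two citizens with different priorities, both balancing to 100.
a = {"health": 35.0, "education": 25.0, "defence": 10.0,
     "transport": 10.0, "welfare": 20.0}
b = {"health": 25.0, "education": 30.0, "defence": 20.0,
     "transport": 15.0, "welfare": 10.0}
for submission in (a, b):
    validate(submission)
print(collate([a, b]))   # the averaged public preference, head by head
```

The same structure works for how taxes are raised as well as spent: swap spending heads for revenue sources, and the validation and collation are unchanged.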

New legislation could, in principle, be crowd-sourced. It is unlikely that great numbers of people will want to participate in drafting food safety regulations, or minor acts of parliament. But it is already beginning to happen amongst a restricted community of lawyers and legislators.

Whether we are thinking about developments in international finance, in cyber warfare and nuclear weaponry, or in the market for personal relationship networks exploited by Facebook and Ashley Madison, it is not obvious that we can trust selfless, robust common sense to prevail against the unpredictable results of hyper-complexity and institutionalised self-interest. We need a new framework to govern the innovations, one which might enable individuals, en masse, to temper the continued concentration of ownership and power. Decisions that affect a lot of people should involve a lot of people, and new technologies can enable this digital democracy to be a powerful factor in the new world.

Complex democracy should flourish. Social machines can and do exist in the public realm as readily as in the private and communal. The co-author of this book, Roger Hampson, devised and promoted the web-based budget simulators built in Redbridge, sold by YouGov, and used by over 50 public authorities around the world. Participation is also possible via citizen internet panels. A western democracy could establish a national panel scheme, run separately from government as an independent agency. The panel, ultimately, should comprise millions of people. Active citizenship would be as socially honoured as voting in elections is now. The panel would conduct surveys, perhaps with small cash payments or other benefits for participation. The identity card is a hated concept in the UK, but loyalty cards are ubiquitous. Some new hybrid might become attractive. The panel could engage in mass collaboration, using budget collaboration tools. It could use modern deliberative techniques to tease out the best new ideas. It could build consensus around both live and speculative national issues. The overall cost would be a small proportion of the costs of the thousands of surveys constantly conducted by old-fashioned, less accurate methods by every government department and agency. The new technology offers a whole host of new techniques for democratic engagement and participation.

It is equally possible for the tax authorities to inform each individual taxpayer what their overall income tax bill is, and ask them how they would prefer it to be distributed among the government’s objectives. A responsive government could then collate the results to inform policy. In principle, the pattern of all taxes, the pattern of all public expenditures, could be decided that way, with the total take decided by central government.

*

In sum, here are some bald assertions. We would hesitate philosophically to call them self-evident truths. But they are what we believe follows from our analysis. We have a right to make choices, enhanced by all the technical means available, about the delightfully and dangerously changing world we live in. Complex decisions can now easily involve large numbers of people. Equally, we urgently need descriptions of the world in terms we can comprehend. We need better presentation of ‘facts’, true and imaginary, by politicians and others. And we must take our own data back, dispersing the balance of power on the internet by moving information stores and processing capacity to ourselves at the edge, rather than leave them in the clutches of the massive corporate interests at the centre. We have a right to understand government policies in ways that enable us to make informed choices. We have a right to understand the meaning of technological innovations in advance of their impact. We have a right to access and examine all the facts in detail, using our own schema, our own devices. We should not collectively lower our expectations of each other about privacy. We should each take responsibility for data about ourselves, and people we look after, in much the same way as we take responsibility for personal bank accounts, and our governments should help us to do so. Academics, politicians, and citizens have an obligation to understand, and an obligation to explain. Or, more precisely, to compose explanations that ordinary people might reasonably be expected to understand. Social machines, such as wiki-based websites, are crucial to this.

Homo sapiens is the only known fully self-aware being in the universe, and the only one to be responsible for its own destiny. We have done well so far. Nuclear weaponry has not resulted in Armageddon; hunger, disease, and poverty are widespread, but dramatically less so than they were, and will not end the species. Neither will global warming and climate change, although there will be massive disruption. We should regard the present emergence of hyper-complex systems as an equal threat. To repeat the play on words, as an emergency.

For as long as human rights remains a useful concept, then digital rights — net neutrality, fair and equal access for all to the internet — should be encoded with them. Datasets should be much more open to the public, competing institutions, corporations, and groups, except where the data relates to individuals. The precise mechanisms, or exact phrases in digital rights documents, matter only to experts, as long as that clear policy objective is met.

We must stop allowing policy on topics important to the survival of the species to be made almost entirely in private by commercial organisations. Too many owners of technology businesses, whilst obsessed with impressing financiers and consumers, have no interest in ensuring that their users understand how their products work. The big corporations are at constant war over thousands of patents, most of which, despite the name, are deeply obscure. Only a small number of technical experts in the developed world are in a position to take a measured view of the choices being made on behalf of everyone. The ancient divide between what the high priests discuss with each other in their specialist (hieratic) language, and what we discuss in our ordinary (demotic) language has never been so great. It becomes greater every day, as magical devices become more and more powerful.

Discussion of the dangers and opportunities presented by the world of intelligent machines should be as central to our cultural life as argument about other global challenges. Governments and supra-national organisations need to take the lead. Self-designing and self-reproducing machines, particularly nano-machines, should be subject to the same moral and legal frameworks that we currently apply to medical research, cloning, and biological warfare. We must not make the same mistakes with the possibilities offered by human augmentation and enhancement as we have with our attempts to manage narcotics. Of course we need to mitigate possible damage to individuals, but not at the cost of a devastating black market, powered by widespread desire to participate. We must establish conventions that curb the continual weaponisation of the digital realm, even in the age of terror. We must define reasonable limits for the collection and analysis of information.

*

Humans cannot live without tools. We would already find it extremely difficult to function without sophisticated machines. The changes to us will soon be so ingrained, at first socially, then genetically, that we will find functioning without digital technology almost impossible.

Some simple, marvellous things will never happen. The many millions worldwide who still lack a reliable clean water supply can’t have it delivered by WiFi, spouting from a virtual tap on their smartphone. WiFi and digital communication are a great boon to the poorer places of the world, but they don’t in themselves solve fundamental issues. We in the rich countries can now talk to Alexa in the comfort of our own kitchen, ask her to tell Amazon to deliver bottled water this afternoon. We can communicate easily with the company that pipes our reliable water supply. We can do both of those because of the underlying infrastructure of dams and reservoirs and pipes, of bottling factories and Amazon fulfilment centres, of road networks and delivery trucks, which coexists with the marvellous new digital technology. The fact is that, although the smartphone is just as miraculous in Eritrea as it is in Edmonton, new technologies can and do coexist in many places with an awful lack of basic facilities, and will not in themselves outweigh that absence.

What is undeniable is that there is no going back. If desired, we could, in principle, pull the plug on Facebook, or the National Security Agency, or legislate against drones or self-driving cars. Or any other particular organisation or digital way of behaving or governing or policing. But we simply cannot remove the technical knowledge and experience on which those developments are based. So something like them is part of the modern landscape, and will remain so.

Emergence, hyper-complexity, and machine intelligence are primarily political and social problems rather than technology problems. The acute danger is that our tools evolve so quickly that they either bamboozle us, or, more likely, lead to a future in which the majority are diminished in favour of a small group of super-enhanced digital elites who make choices for the rest of us. There will not be an AI apocalypse in the strict sense; there will, for the foreseeable future, be humans with the ability to pull the plug if they choose to. But which humans? In the seventeenth and eighteenth centuries, scientists, philosophers, and poets, from Newton to Pepys to Milton, from Locke to Voltaire to Hobbes, imagined radical changes in how people understood the world and their place in it. Algorithms and hyper-complex data flows present the most extraordinary opportunities to us. They will continue to increase our wealth, and augment our minds and our sympathies. The digital ape, too, should ideally usher in a second Age of Enlightenment.

Desmond Morris, 50 years ago, ended The Naked Ape with this warning:

We must somehow improve in quality rather than in sheer quantity. If we do this, we can continue to progress technologically in a dramatic and exciting way without denying our evolutionary inheritance. If we do not, then our suppressed biological urges will build up and up until the dam bursts and the whole of our elaborate existence is swept away in the flood.

The Naked Ape, 1967

In the half-century since Morris wrote this, the world’s human population has doubled. Every other quantifiable account of what we do and what we are has also engrossed and magnified and multiplied. Most significantly, exponentially, in those aspects which have led us to title the present book The Digital Ape. Yet the dam has not burst. Far from it. The world is richer, less violent, and happier. In large part, that is precisely because our minds have been augmented by clever machines far more than our suppressed biological urges have been empowered. That will continue.

We will need all of our augmented wisdom to grasp and secure all the possibilities.