Chapter 2
Our hyper-complex habitat
STEPHEN HAWKING used to cut a familiar figure as he negotiated the streets of Cambridge, England, in his iconic motorised chair. The late director of research at the university’s Centre for Theoretical Cosmology was the most famous physicist since Einstein. His A Brief History of Time is one of the bestselling science books ever published. He presented successful television shows on the nature of the universe and appeared in Star Trek and The Simpsons. Yet the substance of his diverse intellectual contributions remains, to the many millions who would recognise him instantly, utterly opaque. What everyone knows about — not least from an Oscar-winning film — is his triumph over physical adversity. Diagnosed in his early twenties with an early-onset, slow-progressing form of motor neurone disease, for decades he used sophisticated electronic technology to compensate for his declining physical abilities. With a single cheek muscle he operated a speech synthesiser that endowed him with a rasping voice as well-known as any on the planet.
All of us depend on electronic tools these days. Even the rare person who has no smartphone or television or automobile still turns on the electric light and consumes groceries, both of which are distributed by smart systems. In this sense, Professor Hawking’s life is an extended metaphor for the contemporary human race’s relationship with its technological environment. The habitat of all primates consists of air; water; the earth and its vegetation; and other animals, whether friends, foes, or food sources. The habitat of Homo sapiens, over and above these features, has for thousands of years consisted in great part of objects we have fashioned with tools, and of accelerated social relationships with other humans, both of them products of our highly successful brains. The distinctive characteristic of the digital ape is the increasing proportion of our habitat defined by devices supercharged by highly complex mathematics.
In late 2014, engineers from the Intel Corporation, who had worked with Hawking for over 20 years, dramatically upgraded the system that enabled him to write and communicate, including adding prediction software from London start-up SwiftKey. It doubled the speed of his speech and allowed him to write 10 times more quickly. What the computer could do — just how smart it was — disturbed Hawking. It seemed to know what he wanted to write next. This set him thinking about the speed with which computers were becoming intelligent. His fear that super-smart computers could spell the end of humankind received worldwide publicity. He warned: ‘Our future is a race between the growing power of technology and the wisdom with which we use it.’ This was followed rapidly by the publication of an open letter, arguing for vigilance, from 150 AI scientists and technologists, including Hawking and the serial entrepreneur Elon Musk, maker of the Tesla electric automobile.
This threat might well be substantial. If so, it would take its place alongside well-established gross threats to human life. Only nuclear weapons stand a serious chance of obliterating our species entirely this century. A collision with a large asteroid would also pose a grave danger to human life, but is highly unlikely. Diseases are ever present. A major plague may yet kill a lot of people: perhaps a virus that leaps to us from another species, or a consequence of the diminishing effectiveness of over-used antibiotics. Climate change probably cannot now be prevented, or even significantly mitigated. Millions will have to change where they live and what crops sustain them. The world’s population approaches 8 billion, presenting acute pressure on resources, not least water. If the intelligent machines we have surrounded ourselves with are a threat, as Hawking thought, then is the vigilance he asked for different in kind from the scanning we do, and the safety measures we put in place, for all the other perils of the twenty-first century?
After all, a central principle of Darwinian biology is that all species are both fitted to their environment as it has been, and constantly threatened by changes to it. Non-human intelligence has been around much longer than Homo sapiens. We learned to outwit every other species. There are dangerous animals around, but none of them will destroy us. Artificial intelligence is a new problem, which grows out of the brains bequeathed to us by our flint-knapping forebears, and has so far been an adjunct to them. We cut ourselves accidentally or on purpose with our knives; houses we build fall down on our heads; equally, our machines may fail us or turn round and bite us. Digital apes need to assess all the challenges of our habitat, their scale and momentum. How good are we at that? It also helps to unpack some of the terms, like ‘digital’ and ‘network’ and ‘risk’, taken for granted by experts. The total danger may be on a grand scale, but we will also glance at some of the more personal risks.
Machines do share in every important aspect of digital apes’ lives. If the computers that manage our electricity supply fail, or fall to enemy action, within a week there will be no fuel, transport, food, heat, or light. With every year that passes, the machines become smarter, quicker, more deeply embedded. We can and do, on an everyday basis, crunch more and bigger numbers than could have been dreamed of a couple of decades ago.
Processing capacity has grown every year for over 50 years. A home computer bought today is roughly twice as powerful as one the same money could buy 18 months ago, and already obsolete compared with the latest from the research and development labs. All of us make daily use of devices that are a million times more powerful than any machine of the 1970s. If air travel had improved as rapidly, we would now be able to fly from London to Sydney in less than a tenth of a second. We also have free access to astounding digital tools, such as instant universal maps. Those who can afford them have an apparently infinite choice of goods and services. Everything around us seems intelligent: from the objects with which we work, to the machines in which we live and travel. We spend much of our increased leisure time consuming often banal entertainment on very sophisticated devices. We also use, and increasingly we wear, tools that augment nearly all the cognitive functions that distinguish us from chimpanzees.
These enhancements are entrancing. They are also, without doubt, dangerous. The processes and networks on which our lives depend make decisions on a scale and at speeds that are millions of times faster than any group of humans could make them. That makes them as opaque as Hawking’s physics to most of us. A major factor in the financial crash of 2008 was that the tools used by global financiers were sophisticated beyond the comprehension of all but a few. Gillian Tett is the US managing editor of the Financial Times and an acute critic of the behaviour of finance houses. She describes the scene on Wall Street before the crisis:
As the pace of innovation heated up, credit products were spinning off into a cyberworld that eventually even the financiers struggled to understand … The debt was being sliced and diced so many times that the risk could be calculated only with complex computer models. But most investors had no idea how the banks were crafting their models and didn’t have the mathematical expertise to evaluate them anyway.
Fool’s Gold: how unrestrained greed corrupted a dream, shattered global markets and unleashed a catastrophe, 2009
This was hyper-complexity out of control. The US, UK, and the European Community recognised after the 2008 financial crash that new, stringent, and technically sophisticated regulation of banks and finance houses was needed. Specific, thoughtful measures have been designed. Yet the publics of those countries are, largely, still awaiting broad safety measures, not least in the non-technical aspects: the culture in the finance houses that allowed supercharged stupidity to thrive in the first place.
The machines and the connections between them are everywhere. The internet is not just the World Wide Web. E-mail and office systems; domestic and business broadband; video and data services; smartphone applications; and server farms are all also vital. A crucial component of the digital ape’s habitat is the multiple layers of networks that sustain us. Many of them are so-called mesh networks, webs in which each crossover point, or node, can communicate with all the others, either by routing along the shortest channel to another node, or by flooding all the other nodes with the same message, or resource. The British term for portable phones — ‘mobile’ — has an obvious derivation: they do not require a landline wire, so can be carried about. ‘Cell’, the American term, is more instructive, since it derives from the underlying infrastructure. Their wireless radio contact is with the nearest points in a network of radio towers, widespread in at least the town areas of western countries, but now spreading rapidly across much of the world. A map of the network, if lines were drawn to connect the towers or dishes, would look like a honeycomb made by drunken bees. Each space bounded by the lines is what is called the cell. The phone knows its position in respect to the towers. Therefore the owners of the network can work out pretty much exactly where the phone is, and tell you and others. Even when it looks as if it is switched off.
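The two routing strategies are simple enough to sketch. Here, purely as an illustration and not a description of any real network, is a toy mesh of five nodes written in Python: a breadth-first search finds the shortest channel between two nodes, and a flood hands the message to every node.

from collections import deque

# A toy mesh: each node can talk directly to its neighbours.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_route(start, goal):
    """Route along the shortest channel: a breadth-first search for the fewest hops."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in mesh[path[-1]]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

def flood(start):
    """Flooding: keep handing the message to every neighbour not yet reached."""
    reached, frontier = {start}, [start]
    while frontier:
        frontier = [n for node in frontier for n in mesh[node] if n not in reached]
        reached.update(frontier)
    return reached

print(shortest_route("A", "E"))   # e.g. ['A', 'B', 'D', 'E']
print(flood("A"))                 # all five nodes receive the message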
Two-thirds of the UK population use a fixed computer every day, either at work or at home or, in most cases, both. In the UK, where smartphones are particularly popular, there are around 65 million people, living in about 27 million households. Between them they had, in 2016, over 90 million mobile phones. Over two-thirds of those were smartphones, and the share increases each year. There are about three times as many phones as households, and twice as many smartphones as households. That does not mean that every household has access to this way of life. About a fifth of homes don’t have fixed landlines, but even many of these have smartphones. Indeed, landlines are in decline across the richer nations. Only 60 per cent of US households now have them, down from 90 per cent just 10 years ago.
There is now a global infrastructure of intelligent machines, some owned by governments, some by large corporations, which are linked to devices in practically every household, handbag, and pocket. Several elites control almost all of that infrastructure, including the communications networks that underpin everything from the smartphone to electricity distribution. In principle, they are answerable to citizens and shareholders. In practice, Gates of Microsoft, Bezos of Amazon, Zuckerberg of Facebook, Page and Brin of Google — to name a few — are beholden only to their own decency. Many of the prime movers of the new technology have been geeks, nerds, hackers, gamers, coders, and scruffy (mostly white) boys. A few of them now own some of the most valuable corporations the world has ever known.
A crucial characteristic of the habitat of the digital ape is catalysed by the ubiquity of these networks. We have universal information; universal access; universal selection; universal geography. On the billions of smartphones, or on many other easily portable devices, digital apes, wherever they may be, can check any fact, bone up on any theory, see a picture of any notable person or place in the western world — and a high proportion of ordinary people and places — when the thought occurs to them. These billions of creatures can instantly contact almost anyone who cares to take their call or read their text message. They have universal maps in their hands, which know where they are, and can tell them how to get to anywhere else, and how long the journey would take, right now, by different forms of transport. The maps know the precise state of traffic congestion across whole countries, distributed from myriad phones and other sensors relaying the information. Digital apes can shop for nearly anything with a couple of clicks and a credit card. In big cities, those online stores deliver to homes all day long, in such quantities that traffic managers in big conurbations want deliveries moved to off-peak hours, and loads better organised.
Between them, Amazon, eBay, and the online supermarkets offer a range of goods which, in 1990, could not have been procured in, say, the whole of New York, even by a billionaire with a team of a hundred flunkies combing the yellow pages. Amazon claims to have something close to the aforementioned universal selection just in their own store, to be able to source any book in print anywhere in the world, and a high proportion of those out of print. Books were for Amazon just the first wave, now bringing in only about five per cent of their revenue. Yes, they sell around 5 million titles, but the US store alone sells a total of 488 million different products. There are more distinct items in the store than there are people in the country. Online stores can also both customise and personalise shopping: the store front changes what it looks like to match the preferences of the particular customer, and makes personal recommendations to them. As digital apes enter the virtual door, they are reminded of what they prefer, and of what people who are like them prefer. Recommendation engines steam cheerfully around, towing a collaborative, collective view of what it is to be a digital ape today.
*
The digital ape’s habitat has more sinister aspects than the astonishing availability of consumer goods. Governments have the power to police citizens in ways that the totalitarian regimes of the twentieth century could have only dreamed about. In the so-called ‘attention economy’ big business platforms use online surveillance to monetise our habits and preferences. But citizens, too, have unprecedented techniques to control bureaucracies and actively participate in government, as we will show in a later chapter. There has been an exponential growth of the digital data hoard held by the state and by corporations, and a parallel, equally exponential growth in our ability to analyse and exploit that data. The first steps have been taken in some countries to open up the information that governments and others collect. Traffic data has been unlocked, and apps built which allow individuals to plan their journeys better. The detail of government contracts has been exposed, giving new, small, hungry businesses the chance to bid. Health data from several sources mashed together shows which drugs work best. This has already stimulated some highly innovative new businesses. But it will also enable us to see what powerful interests are doing to us, or on our behalf.
The quantity of knowledge is unprecedented, and increases by the second. Shakespeare probably read a high proportion of all the books in print in England in his time. Fewer than a hundred were published in English in the course of 1600, the year he may have written Hamlet, along with several hundred in Latin, many of them reprints of existing texts. Scriveners still ran good businesses in manuscripts, too. The Bard had rare skills. He was possibly savant-like in his ability to collect and reprocess what was then known about the world. Nevertheless, the sum of information available was on a human scale. A group of intelligent people at Queen Elizabeth’s court, or writers meeting in a tavern, might between them have a good grasp of the whole European intellectual agenda, and most of its significant nuances. Being on the wrong side of a nuance could still lead to death at the stake. Now, more titles than existed in the whole world in 1600 are published every day. Approximately 2.2 million new titles come out each year, about 6000 a day. Ten times England’s total production in 1600, every 24 hours. In 2014, 448,000 titles (225,000 of them new) were published in China alone, the world leader in book production. Welcome avalanches descend: academic articles, magazines, news pages, diaries and blogs, new forms of micro-publishing like Facebook and Twitter. But even at a time when every individual can easily issue their own newspaper and broadcast their own video, Murdoch still owns more media channels than anyone else, and still has immense influence. Mega-brands and mega-corporations dominate the political economy: Microsoft and Apple are no different from Ford or Standard Oil in their day.
The networks, and the super-fast torrents of information that surge through them, interact with societies and burgeoning economies in a hyper-complex fashion. Given enough time, if our species survives, our tools, our culture, and our genes will evolve reciprocally with that environment. We don’t know and can’t know what happens when we multiply together hyper-complexity and magical tools and worldwide economic expansion and climate change. But we need now to start carefully building the best models we can, and implementing the lessons from them. It is impossible to do more than roughly predict the emergent product from these powerful vectors. We can however lay out the dangers, as well as the marvellous opportunities, and take informed positions on the best way to govern ourselves to live with both.
There is a play on the word ‘emergency’ to be made here, which may serve to illustrate this important point. In science, an emergent property is one that arises when a complex system produces a new characteristic or state or pattern which none of the original component parts had. It emerges, sometimes unexpectedly, from the combination.
The last hominin species to die out was the Neanderthals. They were probably not actively massacred by Homo sapiens. The archaeological evidence suggests that they were just crowded out by a better, faster species of ape that could sweep up the available food sources more efficiently than they could. We might, like the Neanderthals, be overwhelmed by an emergency which is beyond our capacity to manage. Let’s not take that chance.
There are serious dangers in the present confluence of digital information and lightning-fast processing which cannot be analysed fully until they emerge. That is obviously true of all social and historical trends; the future is not here yet. But there are vectors in the present situation which make it reasonable to suppose that when their product emerges we will think of that emergence as … an emergency. Now in two senses. The major disruptive vectors are the speed of change and the nature of the digital revolution.
As we said in the previous chapter, for most, the mathematically-driven technologies we now live with might as well be sorcery, for all we understand them. It was Arthur C. Clarke, the British science writer and inventor, perhaps most famous as the screenwriter of Stanley Kubrick’s epic 2001: A Space Odyssey, who in 1973 introduced the idea that ‘any sufficiently advanced technology is indistinguishable from magic’. The mystery of everyday magical objects has since deepened. A person of average intelligence in 1960 could, if they wanted to, understand the mechanics and physics of all the objects in their home. The same person today may have a working grasp of what the technology does, but has virtually no conception of how any of it really functions, and never will. Complexity is now so embedded that even expert technologists cannot hope to be familiar with the details of the myriad components and software in common objects. This creates a set of enormous, though not insuperable, challenges for democratic accountability and control.
*
Some of the time, we notice these dangers; some of the time, the digital habitat is just taken for granted. Few people now, on an everyday basis, comprehend the word ‘digital’ in the way that, say, the manufacturer of a digital television, or an electronics professor, understands it. A big proportion of the population habitually regards ‘digital’ as merely a buzzword of its time — like ‘hi-fi’ in the 1960s or ‘arterial’ in the 1930s — which, if it means anything, merely means ‘modern’. Or even simply ‘gadget assembled in China’.
In everyday parlance, a ‘digital’ object is in practice a box which does fascinating, baffling, endless tricks with music and pictures and coloured lights. In the early days of the present devices, they were called ‘digital computers’, to distinguish them from analogue computers. Now, analogue computers are generally only found in museums, so we just say ‘computer’. At the moment, we have digital radio, television etc. But soon that will be the only form of many things, and nobody will bother with the ‘digital’, any more than they say ‘digital iPad’. In the 1970s, people used to refer to a ‘colour television’. No longer; these days, the very occasional black and white set has its oddity remarked on. There are interesting exceptions to this rule. We don’t seem, yet, to refer to ‘analogue’ clocks and watches as though they were peculiar; and some technical functions are well and cheaply, sometimes better, performed by analogue devices. The idea though is important.
‘Digital’ has changed its main meaning over the past decades. It began as a seldom used word, originally Latin, meaning ‘pertaining to a finger’. The Shorter Oxford Dictionaries of 1972 and 1990, the large two-volume edition, both still confined themselves to that meaning. The technological version simply did not yet exist in the common culture they reflected. The Oxford English Dictionary’s website has a clear short article by Richard Holden, setting out how that usage was slowly overtaken by the alternative. You count on your fingers, so each single number can also be called a digit, a word that has been current for a long time. Then, only recently, the derived adjective, ‘digital’, used in this sense, came to mean ‘to do with numbers’. The word ‘computer’ originally meant a clerk in an office who added up — computed — accounts. The first computing machines were analogue, with whirling gear wheels. From the 1930s and 1940s onwards, they began to use numbers as proxies for anything, and very many useful processes began to be re-engineered as a manipulation of numbers. Then the ‘digital’ concept came into its own. It widened its scope to include machines and processes which use number-crunching as opposed to previous methods. And, by further extension (synecdoche to grammarians), it began to be used to denote whole areas of business which rely on numbers. Digital marketing, digital services.
Take as an example that smartphone that over half the population in the richer countries carries about with them wherever they go. Not only can the phone show pictures — still or moving — and play music or speech, it can also store lots of films and photographs, pieces of music, documents, and books. Hundreds of them, right there in your hand.
Well, it can’t possibly actually do that. What it does is store numbers. Early phones etched them, temporarily, by using electricity to make tiny magnetic marks on intricate metal and silicon plates. Such marks are so small that an immense quantity of them can be carried inside the phone. More modern flash drives alter the state of electrons. But the broad principle is the same: very small, very numerous, readable records. Numbers like that, if organised, can be reconstituted as book pages, pictures on the screen, or as sounds. What is stored can then be accessed by touching the appropriate keyboard or ‘button’ shown as a picture on the screen.
For about 30 years now, whenever musicians have gone into the studio to record a song or symphony, the recording has been made digitally. Microphones ascribe numbers to dozens of different aspects of the noise made by instruments and voices, and those digits are arranged in virtual boxes. Other machines can alter or improve the sound using only those numbers. A noise which measures 6 on a 10-point scale can be made into a noise that measures 9 just by changing the number. Musical pieces meld many different sound waves, each of which can have many statistical aspects. There are nigh infinite variations that can be made mathematically to the overall sound. A piece can easily be put into a different key, or poor ensemble playing corrected so that it is all in the correct tempo and relative volumes. It can be slowed down, reversed, all simply by rearranging or mathematically manipulating the numbers in a computer. In principle, they can be inscribed by machine onto practically any material. Lasers can put them on discs, or they can be collated into digital files, to be downloaded over the internet.
That recording is a pile of numbers, a digital description, of the original sound. Not a physical reproduction — an analogy, or analogue. The core of any old-fashioned physical reproduction had to be large enough to, for instance, wobble a record player’s needle; one of those ancient 78-RPM shellac discs, perhaps. Numbers have no size at all in themselves. In the decades since numbers assumed this role, material scientists have been finding new ways of making each one take up less space, increasing the amounts that can be stored. At present, a magnetic drive the size of a thumbnail can easily store the billions of binary digits which represent songs which would previously have needed some thousands of 7-inch single discs.
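To see how far mere arithmetic goes, here is a deliberately crude sketch in Python. Real studio software is vastly more sophisticated, but the principle, that a sound is a list of numbers and changing the sound means changing the numbers, is just this:

import math

SAMPLE_RATE = 8000  # numbers per second; real studios use 44,100 or more

def tone(frequency_hz, seconds):
    """A pure tone, described entirely as a list of numbers."""
    count = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * frequency_hz * t / SAMPLE_RATE) for t in range(count)]

note = tone(440, 1.0)             # the A above middle C, lasting one second

louder = [n * 1.5 for n in note]  # turn it up, simply by multiplying the numbers
faster = note[::2]                # keep every other number: double the speed (and the pitch)
new_key = tone(660, 1.0)          # a different key is just a different set of numbers

print(len(note), len(faster))     # 8000 numbers become 4000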
A photograph was originally made by letting a short burst of light on to a chemically treated surface — a metal plate to start with, then a roll of film — which reacted according to how many photons fell on each small part of the material. Cinema films were lots of these images strung in a row. But once devices had the capacity to instantly remember great quantities of numbers, cameras could be made that looked much the same on the outside as the old ones, but were very different on the inside.
It is easy to describe a point on a picture, using numbers. Say someone has a portrait of their grandmother in bright watercolours (gouache) four feet by two feet. They mark the frame with 1000 numbers up the side and 500 numbers along the bottom. Now every point on their granny’s picture is one of half a million squares, each about a twentieth of an inch on each side, with coordinates which say where it is. They could then spend many days looking at each of those squares through a magnifying glass, and for each one writing down on a list whether it is mostly green, mostly red, mostly blue. If they did that reasonably well, they could add a further number — one for red, two for green, three for blue — to the coordinates, and they would have a numerical description of the whole. Then they could mail a letter with the numbers to someone else. That person could sit, for an equally long time, with very small brushes and three pots of paint, and make a reasonably good copy of the original. From a distance it would look much the same. Close up, there would just be lots of blobs in one of three colours.
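The whole procedure, minus the magnifying glass and the days of squinting, fits in a few lines of Python. Here is a toy version, a grid of three squares by four rather than five hundred by a thousand, using the same colour codes; an illustration of the principle only.

# Granny's portrait, shrunk to a grid three squares wide and four tall.
# Colour codes as in the text: 1 for red, 2 for green, 3 for blue.
portrait = [
    [1, 1, 3],
    [1, 2, 3],
    [2, 2, 3],
    [2, 3, 3],
]

# The 'letter in the mail': every square reduced to three numbers.
numbers = [(column, row, colour)
           for row, squares in enumerate(portrait)
           for column, colour in enumerate(squares)]
print(numbers[:3])                # [(0, 0, 1), (1, 0, 1), (2, 0, 3)]

# The recipient, with three pots of paint, rebuilds a copy from the numbers alone.
copy = [[0, 0, 0] for _ in range(4)]
for column, row, colour in numbers:
    copy[row][column] = colour

assert copy == portrait           # from a distance, it looks much the same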
Digital pictures are built up in much the same way. A television screen can be told, not by a letter in the mail but by electronic transfer, the coordinates and the colours of half a million points symmetrical with those in granny’s portrait, and can reconstruct the original picture from the numbers. A film of granny walking about is simply the same process, repeated rapidly. (Films usually have 24 frames of photograph per second, which may each be refreshed on the video screen several times.)
A like principle can be applied to words, in several different kinds of way. One way is that the screen of the phone can be thought of as (say) half a million minuscule light bulbs. Each can be lit to look white or black. Just as, in the 1930s, thousands of light bulbs in Times Square in New York, or Piccadilly Circus in London, could show moving writing and advertisements, so can the screen of the phone show words. Give each of those tiny light bulbs a number, a coordinate on the map of the screen. Those numbers, if stored, can be retrieved and used as a recipe to light up the screen showing the text as it was first seen. In effect, a photograph of the original page as shown.
There is a different method. The words of any text, from a limerick to the Bible, can be transcribed by giving every letter in the alphabet a code number, and ascribing other numbers to punctuation marks, spaces, capital letters, and so forth. The list of those numbers is an accurate numerical description of the content of the book or article. It does not, plainly, accurately describe what the text looked like when first typed onto a screen, or might look like if it were printed out by such and such a printer on such and such a day.
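A minimal sketch of this second method, in Python. The code numbers below are the ones most modern machines happen to agree on, in which a capital T is 84, a space is 32, and a comma is 44.

text = "To be, or not to be"

numbers = [ord(character) for character in text]        # the text as a list of numbers
print(numbers[:6])                                      # [84, 111, 32, 98, 101, 44]

restored = "".join(chr(number) for number in numbers)   # and back again
assert restored == text                                 # an exact copy, not an approximation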
Any of these ways of making a record of texts or pictures or sounds, indeed almost any well-formed digital description, has a key feature: it does not degrade, or wear out. It can be altered or destroyed; as can the material on which it is encoded. But the nines and eights don’t gradually fade into fives and fours, in the way paint fades in the sunshine, goes pale and shifts towards the blue end of the spectrum. (The digital record is, perhaps, a theory or a notion, where a page of a traditional book is a physical fact.) It is easy to copy; it is easy to search; it is easy to fold into other digital descriptions; it is easy to use mathematical techniques to make physical changes to a picture, or analyse text, or change musical pitch or key.
(Our description here is also, of course, a gross simplification of how sophisticated devices actually store and process pictures and text, to show the very basic principles. In particular, take note of our discussion at the end of this chapter about error correction.)
A stunning trick is available as an app. What a camera sees and translates into numbers, if it includes text on a shop sign, can be translated into a different language. The general-purpose processor behind the camera searches the picture it receives for mathematical patterns which it has been trained to recognise as letters. Those letters are checked against a dictionary, probably online rather than in the gadget itself. The processor also checks with a database of typefaces. Words it finds are translated into the chosen different language, encoded again, and those numbers inserted in the strings of numbers describing what the camera sees, taking care to adopt the typeface of the original words. The shop sign, or a paperback book, or a bus ticket, looks exactly as it does in life, but on the screen is in a different language from the one the user actually sees in front of them.
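Reduced to a skeleton, the pipeline looks something like the Python below. Every function in it is an invented stand-in, named purely for illustration: a real app relies on trained pattern recognisers, online dictionaries, and careful typography, none of which is shown here.

def find_letter_patterns(camera_numbers):
    """Stand-in for a trained recogniser that spots letter-shaped patterns in the picture."""
    # Pretend the camera saw one French word on a shop sign.
    return [{"word": "BOULANGERIE", "typeface": "sans-serif", "box": (40, 10, 300, 60)}]

def translate(word, target_language):
    """Stand-in for the dictionary lookup, probably online rather than in the gadget itself."""
    tiny_dictionary = {("BOULANGERIE", "en"): "BAKERY"}
    return tiny_dictionary.get((word, target_language), word)

def redraw(camera_numbers, regions):
    """Stand-in for painting the translated words back in, matching the original typeface."""
    for region in regions:
        print("draw", region["word"], "in", region["typeface"], "at", region["box"])
    return camera_numbers

frame = []                                   # the numbers describing what the camera sees
regions = find_letter_patterns(frame)
for region in regions:
    region["word"] = translate(region["word"], "en")
screen = redraw(frame, regions)              # the sign on screen now reads BAKERY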
Arthur C. Clarke was right to call all this indistinguishable from magic. Okay, like all legerdemain, or Sherlock Holmes’ deductions, the trick is obvious enough when you know exactly how it was done. It only looks like sorcery. But it does look like sorcery, to most people, most of the time.
*
It can be argued that overall the general impact of fast technical change on the digital ape’s habitat has not been as great in the past 50 years as it was in the first half of the twentieth century. The era between 1890 and 1950 began with cannons and rifles, horses and ships; long distance communication via Morse telegraph, and the printing press. Just 60 years later there were atom bombs and jet aeroplanes; cinema, television, and radio. A rural time traveller from 1890 would find metropolitan life in 1956 astonishing. In contrast, very few of the machines in everyday life now would baffle a visitor from the 1950s. Meeting a smarter television is not the same as meeting a television for the first time.
The grand exception to this has been digital computers. Here, the rates of change have been exponential. In the 1950s, the war-winning achievements at Bletchley Park and elsewhere were still secret, with many of the devices smashed on purpose to keep them so. Now voice, moving pictures, text, and GPS, come in trillions of binary bits via wireless to an astonishing pocket-sized smartphone, while the device itself processes them with power significantly greater than the largest device owned by the Pentagon at the time of the Cuban Missile Crisis. Big government and big business have acquired huge devices of enormous capacity. And everyone has the internet, to run applications from big online content providers, exchange e-mails, and visit the World Wide Web. The growth of the World Wide Web alone is staggering. There was one website in 1991, on Tim Berners-Lee’s NeXT computer at CERN in Switzerland. By 2014, there were 1 billion websites, spread across the globe, with, at the last count, 4.77 billion web pages.
The principle behind the rapid expansion in the power of digital machines is often referred to as Moore’s Law. This followed an observation in 1965 by Gordon E. Moore, the co-founder of the Intel Corporation, whose semiconductor chips and microprocessors are found in many personal computers. He pointed out that, for some time, the number of transistors in an integrated circuit had doubled every year and, he guessed, from his knowledge of what was in the industry pipeline, would probably continue to do so for the foreseeable future. In 1975, he revised the forecast to every two years. This observation gradually became a ‘law’, with the two years eventually fine-tuned to 18 months. Moore’s Law is now taken as a loose paradigm for the pace of change in the quality and speed of computers, their memory capacity, and the fall in their prices. Repeatedly multiplying by two makes any number very big, very quickly. A field of corn in 1968 that yielded a thousand loaves of bread, if it doubled its productivity every two years, would have yielded a million loaves in 1988, and 33 billion loaves in 2018. A car that drove 300 miles on a tank of gas in 1968 would, 50 years later, travel 10 billion miles on one full tank, if fuel efficiency in automobiles had improved at the same rate. No single human being has ever travelled that far around or from the earth in a lifetime, by any mode of transport. Even fractions of such efficiency gains would have transformed agriculture, industry, energy, and transport, as well as the political map of the world. Yet the digital machines that dominate our lives have changed, and are continuing to change, at that pace. That jumbo jet, if it were configured like a data processor, really would set off from London at the start of this sentence and already be in Sydney before the end. Of course, air travel is not like that. Human bodies can only stand so much acceleration or deceleration, so however great the metal container’s capacity for speed, there is a limit, certainly not yet reached, to how fast we can travel long distances. There may well be final frontiers to information processing capacity, but the journey from abacus to the latest supercomputer is several order-of-magnitude leaps greater than the journey from horse to any earthly vehicle. In principle, a starship at half the speed of light would travel 10 million times as fast as a galloping horse. That hasn’t happened. A computer revolution of that scale has.
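The arithmetic behind those comparisons is short enough to check. A few lines of Python, doubling a starting figure every two years, reproduce the corn field and the petrol tank:

def after_doubling(value, years, period_years=2):
    """Repeatedly multiply by two, once every couple of years."""
    return value * 2 ** (years // period_years)

print(after_doubling(1000, 20))   # loaves by 1988: 1,024,000, about a million
print(after_doubling(1000, 50))   # loaves by 2018: 33,554,432,000, about 33 billion
print(after_doubling(300, 50))    # miles per tank after 50 years: roughly 10 billion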
*
The dimensions of our habitat are difficult for us to comprehend. In 1957, the Dutch reformist educator, pacifist, and Quaker Kees Boeke published Cosmic View. His book presented a description of the universe, from the very, very large to the very, very small. Boeke had always been interested in seeing things differently. He set up institutions that disrupted education by enabling children to make real decisions concerning their school. But it was Cosmic View that really caught the public imagination. Ten years after its publication, the Americans Charles and Ray Eames produced a film inspired by the book, and, in 1977, they made a second: Powers of Ten: A Film Dealing with the Relative Size of Things in the Universe and the Effect of Adding Another Zero. Both films became cult classics. In the pre-web world, they went viral. Twenty years later, the 1977 film was selected for preservation in the US National Film Registry as being ‘culturally, historically and aesthetically significant’.
What the book and films do brilliantly is illustrate the relative size of various things using a logarithmic scale in which objects get bigger, or smaller, by a factor of 10. They first expand outwards from the earth until much of the universe is in shot, and then reduce inwards until a single atom and its constituents are in frame. By the time of the second film, science had moved on so much that the Eameses added an additional two powers of ten at each end of the scale. The films depict a journey from 1 x 10^-16 to 1 x 10^24 metres — 40 orders of magnitude — and are gripping to watch. Adding a zero 40 times seems easy in principle to comprehend, but the results are mind-boggling.
We as humans live, perceive, and act within a very narrow band of this scale. We are around 1.6 x 10^0 metres tall. The Dunbar Number, the roughly 150 personal relationships we can maintain at any one time, translates as 1.5 x 10^2. A few of us can run 1 x 10^2 metres in 1 x 10^1 seconds. We can sense only a fraction of the electromagnetic and acoustic spectrums. Yet we have built tools and environments that go way beyond this. We have constructed digital storage machines that are able to make those minuscule magnetic etchings or polarity changes: nanometre-scale encodings (1 x 10^-9 metres) of vast amounts of new data, about 2.5 x 10^18 bytes newly minted per day, powered by machines that contain billions of components and run at blistering cycles of computation. The digital ape has created a new virtual universe that is still expanding by powers of 10. We now need imaginative help to understand the scale of what we have built and what it might portend. We need to meet the challenges both of great acceleration, and of hyper-complexity. We need to reflect on what this new habitat is, what its shape, structure, and organising principles are. It is vast, it is complex, and it is expanding. A new universe that we must explore and understand.
*
A significant part of this habitat has a very odd abstract geography. Much transactional business and collective memory occurs in what is termed ‘the cloud’, as if it were a notional floating nebulous space. In fact, this cloud bears little resemblance to cirrus or cumulonimbus. It looks more like hundreds of enormous warehouses full of inter-connected fridges. The cloud or clouds are often ‘offshore’, thousands of miles away from their owners, under different tax regimes and government aegis. Although in terms of privacy that is pretty meaningless, since any well-funded agency can burrow into almost anything offshore or on. Amazon, a leading provider of cloud services, sells large amounts of processing and memory cheaply for short periods. It enables all kinds of previously impossible analyses to be done in very short timescales, by the application of massive computational power. In this digital habitat, once enough is known about any particular ape, and her behaviour has been mapped against enough of the behaviour patterns of others who are pretty obviously similar, then what she is likely to do next in either the digital or physical habitats she inhabits becomes increasingly straightforward to predict. She has a pleasant frisson when Amazon guesses from the books she buys what kind of newly released DVD she also might like. But what about when algorithms predict the illnesses she is likely to have this winter, or they match her to digital apes in her area she might like to meet, or advise her local police force that she is just the kind of person who might commit crime, or have dissident political opinions? And what will she think when the filtering process is accurate enough for the people at the NSA or GCHQ to look at her online supermarket orders — a vast retrospective database already — match them with her travel purchases and the books that she buys, and give her a potential terrorist ranking? This kind of ranking is done already. It is just not, yet, all that effective or widespread, despite what we see in films.
Many of these developments will seem obvious to digital apes and even their offspring. And yet they are little understood, partly, as Kees Boeke demonstrated, because the sizes and speeds involved are immensely difficult for the human brain to grasp. Partly because the nature of the digital habitat in which so much of their lives is conducted is still poorly understood. Partly also because of the very breadth and depth of the range of technologies involved. In a limited sense, this is no different to the inability of a feudal villager to explain how a tree works, or even to understand the question, in the modern sense. Photosynthesis was not accurately described until the 1930s. But a small number of people do now understand and build technical objects, which others only understand how to use. Again, that feudal villager did understand how to use a tree. But there was no cadre of barons, monarchs, and priests who had cracked photosynthesis theory and used it to build enormous forests. A modern three-year-old can easily operate an iPad, and many do. Just press this button here, then touch the screen like this. YouTube, funny photos, games galore. They know how to work it, but have no understanding of how it works. And have been known to make comical pinching, spreading, and scrolling gestures at pictures in a magazine, a little surprised that what works on a screen does not work on paper. The three-year-old’s parent will have a more sophisticated view. Yet if asked by a passing alien to hand over the recipe for an iPad, the parent, even looking down the barrel of a ray-gun, could not come up to the mark. A sufficient description of the complexity and sophistication of the iPad itself, and its technological environment of microwave towers and server farms and software and micro-processors, is beyond practically all of us. The same has no doubt always been true for most people if asked to fully explain the fridge. The crucial difference is that most adults could easily elucidate the pros and cons of refrigeration. And, if push came to shove, could look the subject up on the World Wide Web, and gather the basic principles in 10 minutes.
There are two particularly intriguing aspects to this. The first is what experimental psychologists call ‘the illusion of explanatory depth’. Most people think they understand how the world works much better than they actually do. Ask them if they could explain how something as apparently simple as a zipper fits together, moves up and down, stays closed, and they say yes indeed they could. Ask them to actually do that — explain a zipper in detail — and they struggle. In part, this may be because they know how to make a zipper operate, just as they know how to get the milk out of the fridge and which icon to press on their mobile phone. In part, it seems to reflect the fact that we have a collective sense of knowledge: how zippers work is well-known to ‘us’ and therefore to me.
The second relates to the point we made in the last chapter: a modern urban citizen would struggle to fashion tools and coordinate a group to hunt, kill, and skin a bear. Not only do we not know how to make tools to skin a bear. (We could learn, as we shall see in the next chapter.) Neither do the overwhelming majority of us have the specialist skills needed to work in an abattoir. Our lives are dependent on things made by complicated multi-part processes in factories long distances away. This is a feature that digital devices share with many goods in the industrial era. No individual modern human could make a smartphone. Only a tiny number of people know in detail how it works. No individual, not Tim Berners-Lee nor Bill Gates, could sit down one afternoon, with a pile of all the materials, and just fit the jigsaw together. Like many modern objects, computing machines, and the software that animates them, derive from the systematic collective application of a wide range of specialist branches of knowledge in very large industrial plants in many locations. In particular, as we have noted, they run software that is the product of the work of many thousands of engineers — often chaotically reinventing the components, ingredients, and tools of the trade. The program stack of a modern device can resemble the geological strata in a cliff face. The marvellous work of large teams of expert software engineers in 1985, squashed by, but propping up, hugely expensive modifications financed by new mega-corporations in 1995. Each layer including patches stuck over problems and mistakes. And so on up to the great view from the cliff top today. Digital and virtual cathedrals that take decades to build on the grains of sand.
But the same is true of simpler artefacts, and has been for some centuries a truth about many manufactured goods. To illustrate the ironies of this, Thomas Thwaites of the Royal College of Art in London set himself a radical task. He took a cheap reliable modern electric machine — he chose a £4.99 toaster — and attempted to reproduce it himself. Not by buying the parts in a hardware store, but by building each part from scratch. The resulting quest is both hilarious and instructive. It led him to a copper mine in Wales to dig out material for the wiring, and to long discussions with professors of chemistry as to how he might cook up plastic for the shell. His toaster, after some months, was aesthetically a sorry mess. Polypropylene is, it turns out, rather difficult to brew at home. It did just about work, at the cost of much time, trouble, and travel. The toaster problem is that it is constructed out of parts which are all mass produced separately, even on different continents, to high standards which meet stringent protocols. Think of the conventions encoded in the three-pin mains plug Thwaites had to build himself: the exact configuration and size of the prongs, the colours of the wires, the strength of the electric current. Given the parts, many of us could have a go at assembling them. Anyway, we could easily survive in a world without toasters — stick the bread on a long fork and hold it over a flame. With a Samsung or Apple smartphone, not only could we not make the physical parts; not only could we not assemble them; but also the device is much more than the sum of its parts, in a way that is now a key feature of our habitat, not really replaceable by previous tools. It is a capsule for several gigabytes of operating system and application code, without which it will not function. Roughly speaking, the 100,000 words in this book you hold in your hands store as about half a megabyte of information. The code that sits inside a smartphone, its essential ingredient without which it is nothing, contains — again crudely speaking — the equivalent of a library of between 5 and 10 thousand books. Unlike most books, software has many authors. Several tens of thousands of technicians and coders will have had a hand in writing it over several years.
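Those figures are rough and ready, but easy to reproduce. Assuming six characters to a word and one byte per character, and taking ‘several gigabytes’ to mean four, the sums run like this:

words_in_this_book = 100_000
bytes_per_word = 6                                   # five or so letters plus a space
book_in_bytes = words_in_this_book * bytes_per_word  # 600,000 bytes: about half a megabyte

smartphone_code_in_bytes = 4 * 10**9                 # 'several gigabytes', taking four
print(smartphone_code_in_bytes // book_in_bytes)     # about 6,600 books' worth of software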
We should also add an interesting fact about being wrong.* It is simply not possible to type a book of 100,000 or so words into a MacBook without making mistakes. Mistakes of fingering. Mistakes of fact. Mistakes of judgement. Books are spell-checked and fact-checked and judgement-checked many times on their journey to publication to try to counteract this. But, well, sorry about the mistakes still in the published version. The 5 to 10 thousand books of numbers in the smartphone, written by thousands of people, inevitably contain hordes of mistakes. Unfortunately, a single mistake in a simple piece of software can crash it. So, for decades now, every computer program has had error correction built in. A smartphone has suites of dozens of different error correction techniques. Many involve redundancy: send a message several times, so that accidental gaps or mis-readings in one transmitted message can be corrected by the parallel one. Others involve adding self-checks into data: if sending a five-digit number, add the digits up and stick that on the end. If the receiver’s own sum of the five digits doesn’t match the figure stuck on the end, something is wrong. All the torrents of data travelling through the digital ape’s life assume that the world is full of error.
[* Seek out Pulitzer prize-winning New Yorker staffer Kathryn Schulz’s wonderful book with that very title, Being Wrong, or watch her TED talk. Embrace fallibility!]
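The five-digit example is a real, if rudimentary, error-detecting code, easy to sketch in Python. Modern devices use far stronger mathematics to the same end, but the idea is this:

def with_check_figure(digits):
    """Send the five digits, plus the sum of the digits stuck on the end."""
    return digits + [sum(digits)]

def looks_intact(message):
    """The receiver adds the digits up again; a mismatch means something went wrong."""
    *digits, check_figure = message
    return sum(digits) == check_figure

sent = with_check_figure([4, 0, 2, 7, 1])   # [4, 0, 2, 7, 1, 14]
print(looks_intact(sent))                   # True

garbled = sent.copy()
garbled[2] = 9                              # one digit corrupted on the journey
print(looks_intact(garbled))                # False: ask for the message to be sent again

A single corrupted digit is caught; subtler damage, such as two digits swapped, needs the stronger codes that real devices carry.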
*
The importance of networks in the digital ape’s habitat cannot be overemphasised. This has been true of social networks for a long time, but physical infrastructure has also come to determine how our lives are lived. Structured built networks have been important for millennia. The Romans, after all, are legendary for laying down strong and mostly straight roads that all eventually led back to Rome. Just as practically all of those roads now lie under macadamised asphalt, so do the partial remains of many other important networks lie, as in a palimpsest, under later technologies. Big, modern communication networks do get replaced by other big, even more modern, communication networks.
Networked systems based on new technologies became widespread in the nineteenth century. Already many of those then new-fangled wonders have long faded away, in successive waves. New, more efficient modes overwhelmed them. Steam railways, which replaced coastal sea traffic, have themselves been replaced in the West by diesel-electric locomotives, and even more by road systems and trucks and automobiles. And Brunel’s near perfect seven-foot gauge railway system was forced by an 1846 Act of Parliament to degrade to Stephenson’s standard four-foot-eight-and-a-half-inch gauge, which had grown out of the northern English coalmines to cover over half the country. Partly Stephenson won for cost reasons, mostly because of a classic network effect. As railways became more widespread, it became obvious that all lines in the same land mass needed to be the same width. Brunel banked on his better system prevailing. Why on earth would superior comfort and quality not win through? He kept building them, even though most others were using the narrower gauge. He failed to see that every mile of rail laid down by both sides made his defeat more certain. If an engineer in the nineteenth century needed to narrow a railway track, they could easily hire some navvies for a couple of weeks, and ask them to lift the rails up and move them a couple of feet closer to each other. (‘Navigators’ were labourers who dug the canal system that preceded the railways — another layer of the palimpsest.) If they needed to do the opposite and widen a railway, they would have to hire a very large workforce. And rebuild the railbed much larger, negotiate with hundreds of landowners to broaden every embankment and cutting, rebuild broader and taller every bridge and tunnel over and under the line … Even then, the first shiny big train that travelled along it would get jammed at the first tight bend. Hence, many miles of track would need to be re-laid on entirely different routes with bends gradual enough to round natural features. Stephenson was always going to win that comparison, in a network version of Gresham’s law that bad money drives out the good. (Why would I accept a probably perfectly sound dollar bill from you if I knew that 20 per cent of them were being forged?)
The Morse-code-based electric telegraph that wired up the Wild West, and the still Morse-based wireless telegraph that helped arrest the murderer Dr Crippen on a transatlantic vessel in 1910, were replaced by voice-based telephones, wired and wireless. One of the most poignant and pointed examples was the pneumatique, Paris’ version of something that became common in the centres of many cities in the developed world from the 1850s onwards.
Ernest Hemingway habitually wrote at cafe tables in Paris in the 1920s. A Moveable Feast, his memoir of the time, composed out of old notebooks 30 years later, describes how one sunny noon, after finishing his work, he opened his mail from Canada (he was a stringer for the Toronto Star) and found a request to look out for the boxer Larry Gains, who was fighting that day on the other side of town. (Hemingway does not mention that Gains should have become world champion, but was never allowed to compete for the title, being black.) How to contact Gains instantly? Not even discussed. He sends a pneu, via the bartender, and has his answer within the hour. A pneu was no electronic marvel, neither telegram nor telegraph and certainly not some prototypical e-mail. One wrote by pen on a blue paper letterform, which physically travelled along tubes, pulled by vacuum in front, propelled by several atmospheres of air behind.
We send letters now as e-mail. So overwhelmingly that many organisations are trying to cut back on the plague. Concern about how messages travel is scarcely new. In the 3000 years before the electronic revolution, and after the invention of portable writing in several places around the world in the millennium before Christ, national governments and administrators worried about how to move increasing piles of missives, quickly and securely, and invented many solutions. In the middle of the nineteenth century, many countries became interested in the power of pneumatic tubes. Brunel had experimented with a railway which had a big tube in the middle of the tracks. A mammoth piston under the traction carriages sat in the tube, and was pulled and pushed along by that combination of vacuum in front and air pressure behind. Great idea, mainly because it meant the immensely heavy iron steam-motor could stay put in a pretty brick engine-house, rather than have to pull its own weight along, a major constraint at the time. A failure, because the seals on the channel along the whole length of the top of the pipe, which allowed the piston entry, tended to rot. Even just a few holes in many miles would let out sufficient air to badly reduce the pressure. Other engineers speculated accurately enough that having the whole train inside the tube would resolve that problem. A couple of them projected passenger transport systems, which failed to prosper, not least because the passenger experience was a daunting prospect. Some freight and parcel services were built. (The occasional daring individual took a ride in those.) But thin tubes, carrying small capsules, really did catch on. Networks of pipes were constructed in business districts in particular. They remain common within institutions spread over large, but contained, sites — hospitals and department stores, for instance. London’s first was a 220-yard system constructed by Josiah Latimer Clark between the London Stock Exchange in Threadneedle Street, and the premises of the Electric Telegraph Company in Lothbury. The idea spread to many cities, usually first to the business districts.
Paris, in which the fashionable houses, the classy shops, and the government and financial areas are mostly all central in what is a small city given its historical importance, built a network hub in the centre and spokes out to the suburbs, with long pipes in sewers, the metro, and under roads. Buses had a post box on their rear which would drop messages off at the nearest network office. Classic French movies turn on the knock on the door from the PTT boy bringing the little blue pneu envelope, curled from the canister, arranging an illicit liaison later that day, or stabbing a political colleague in the back. Eventually, the pneumatique foundered on labour costs, in 1984. Messengers on bicycles were needed at both ends. There was a partly sentimental, but also hard-edged, fuss. There were suspicions that the pneumatique competed too keenly with the main nationalised postal services. Entrepreneurs offered to buy the still highly serviceable system at a fire-sale price, proposing to run it at least until the tubes wore out, but were turned down. Actually, traffic had declined by 90 per cent in its last 10 years. Virtually all western countries brought in mechanical sorting systems and postcodes in the 1970s. That both made the whole show more efficient and changed the capital/labour proportion in traditionally very labour-intensive services, in a world of rising labour costs. In other words, the pneumatique’s original massive capital advantage from comprehensive hard infrastructure was outflanked by the greater capital advantage of mechanised sorting.
The death of the pneumatique can’t be laid at the door of the World Wide Web, which was not then even a private pet project of that young Englishman at CERN in Switzerland. The internet of the mid-1980s was used primarily by academia and the military. Already, modern sophisticated transport and electronic systems, particularly communication networks, had become immensely powerful. And when the internet and the web came to be overlaid on top of them, the result was not simply a cheap good idea: there are massive capital costs to e-mail. Ubiquitous personal and workplace devices, fibre-optic cables under every street, microwave towers and server farms. All need to be expensively manufactured and installed. Huge factories in China, and laboratories in Seattle and Cupertino. There are lots of labour elements in there: designing cool bevel edges; doing the fiddly human bits of assembly; digging holes in the street for the cables. Typing the damn messages. But mostly it’s a highly capital-intensive technology.
As so often, these developments were foreseen, albeit inexactly, by visionaries decades beforehand. Vannevar Bush, the American engineer and science administrator, made a prediction of something rather like the World Wide Web in his essay ‘As We May Think’ in 1945. He prophesied the appearance of wholly new forms of encyclopaedia, containing a built-in mesh of associative trails that run through them, ready to be amplified in a sophisticated microfilm viewer he had imagined called the Memex.
*
The digital ape has created an important part of our own habitat: we have devices which augment our collective intelligence, the supercharging we alluded to in our first chapter. The changes of most of the twentieth century extended the strength and variety of our tools. The changes of the past few years are to the other dimension of our humanity: our collective culture, memory, and knowledge. For the world’s richest 25 per cent, the fulfilment of Bush’s prophecy has already transformed the abundance, the cheapness, and the ease of access to social and commercial transactions. Most — but not all — of these developments are extensions of how life had been in the preceding decades.
In the next decade, there will be widespread implementation of macro-, micro-, and probably nano-machines, the last operating on an extremely small scale. They will have not only mathematical and computational talent far greater than any person or group of people, but also the ability to perceive, analyse, and make very complex decisions. A start has been made on quantum computers, radically different in principle. If they can be implemented on anything other than the present, very limited, scale, they will change the speed of computing by orders of magnitude. If these technologies are used wisely, our personal and social abilities will be augmented in ways unprecedented since we began to use tools and have collective intelligence at all. This is not mere change. It is, we hold, progress.
The progress is more than technical. The technology, as always, is a catalyst which opens up new fields of operation to the extraordinary capacities of our species. Social machines, for example, to which we devote a later chapter, are aggregations of machines and humans. One such is Wikipedia, by far the largest and most widely used knowledge-base ever constructed. Another social machine is Galaxy Zoo, which harnessed the enthusiasm of thousands of professional and amateur astronomers to detect and classify various astronomical objects contained in millions of images from the Hubble space telescope. Foldit used the same technique — enlist a lot of enthusiasts for citizen science, give them simple training — to play an online puzzle game about protein folding. The object is to fold the structures of proteins selected by researchers at the University of Washington, who then assess their usefulness in medicine or biological innovation. And almost all computerised machine distribution systems — Amazon’s warehouse, whether in a huge, crinkly shed, or that patent airship — incorporate humans to do the actual picking of goods off shelves. One British internet shop, on making a sale, e-mails the automatic reply: ‘Thank you for your order. Our robots have begun doing what they do best — chasing humans around our warehouse with lasers until they have gathered the following items …’
There has been little attempt to put generic limits on what can be viewed on screens. The honourable exception to that is pornography, where gatekeeping options are built into search engines, for instance. Broadly though, a vast range of applications has been engineered on smart devices that seem capable of unlimited marvellous things: translate from one language to another, place a gamer in a virtually real landscape, find the best route through this afternoon’s traffic, chat about the weather with granny. There is little or no social policy framework yet around how these applications may be affecting our lives or our brains. There is not enough general understanding of the issues to begin to construct such a framework. Like Prospero’s sorcery in The Tempest, these magical transformations have just crept by us on the waters, and we have accepted them, without as yet sufficient policy response. We urgently need such a framework. (The Times certainly thinks elder care and robots and driverless cars each merit leaders.)
This matters: millions of digital ape minds combined constitute a tremendous, unprecedented creative power. A girl born in Milwaukee today can confidently expect to have the aggregate wisdom of millions of other women and men, young and old, alive and dead, on tap in her pocket for the rest of her life. She can ask her smartphone out loud where she is, what the weather will be like later, what is the best route to school or the office today, and Siri or another voice program will tell her. But she can also ask it for advice or information on any topic at all, and confidently expect a coherent, expert, and well-intentioned answer. That is a personal environment that has never before been available to any being on the planet.
*
We should not underestimate either, especially in the context of the new enlightenment, the important part that has been played in the new digital habitat by both artful, attractive design of products, and positively framed cultural pressure, manifest as fads, fashion, and norms. Take as an example the most iconic version of that ubiquitous smartphone, Apple’s iPhone. There are now many varieties of smartphone. They all do much the same things, but they look and feel slightly different, perhaps more to the connoisseur than to the civilian. The iPhone even has its own spelling, with a capital P as the second letter. This is because it is the third or fourth in a line of designer products with slightly silly trade names: iPod, iMac, and so forth. Much thought went into the trade names.
Apple was established in the 1970s and has been fashionable in a particular way more or less ever since, an exemplar of fine design, always thought of as classier than Microsoft, which for decades was larger and dominated the office desk and home kitchen table with its software, but not its hardware. Apple made both software and hardware, bound together. It was innovative, arty, edgy, expensive. But quirky. Some years ago, when it was a successful, but still niche, company, somebody said that Apple was the France of the technology world. Or did they say France was the Apple of nation states?
When cell or mobile phones were first introduced, the sight of somebody standing in the street talking to themself was widely regarded as comic. A conscious pose. Then people loudly talking on phones in, for instance, railway carriages became widespread. For some reason, again presumably related to the sociology of fashion, they found it necessary to act as if this astonishingly sophisticated gadget only worked if you shouted at it. Whilst the comedy nuisance of overheard one-sided conversations still exists, the culture has assimilated the device; people have generally learned how to use it in a quiet, casual way. The digital ape would find it difficult to conduct everyday life without it.
The iPhone is small, and feels heavy for its size. It measures about five inches by two and a half inches by a third of an inch. The four or five ounces it weighs therefore feel hefty, as if important and valuable. It is made of shiny metal and glass, with a few tiny inlaid switches and sockets. It is made like this on purpose, to be this size and weight, and look this way, because that is attractive as well as useful. It could functionally and practically be rather different, but its style is important to both the people who make them and the people who buy them. Its pleasing weight, caused by its battery; the sufficient tensile strength in its aluminium frame to stop it bending; and capacitive pressure sensors in its multi-layered screen, make it feel substantial and important, despite its small size.
Its smallness means it can be a constant companion. Young people in particular incessantly text or message each other, or are on social media. It has become an important part of growing up and making friends. Even quite young children carry phones, and even those who don’t carry them know how to use them from their pre-school years. The iPhone costs quite a lot to buy, or rent, or a mix of those. In western countries, about two per cent of annual household income would secure the purchase and upkeep of one phone. A household with a phone for each member would therefore be spending perhaps 7 or 8 per cent of its income on smartphones alone, and another 2 or 3 per cent on a fixed computer.
Smartphones are very fashionable, as well as very common. One of those fashions where non-participation is a rarity and, to some, upsetting. Apple, in the case of the iPhone, has made several versions over the past decade. The launch of each one has been carefully staged, so that the publicity makes it an even more desirable object. Queues form round the block for a new ‘limited edition’ white or rose-gold version.
All these kinds of emotions are carefully cultivated in product-oriented consumer societies, in which not only beautifully designed objects, but meticulously groomed ‘celebrities’, are turned into indispensable commodities. This will continue as a powerful lubricant and sales device as digital apes adjust to each next phase of technology.
*
How long will this wonderful hyper-complex habitat continue to nourish us? Will it implode, dragging us down with it, or turn into something more sinister? Ray Kurzweil — the polymath and currently a director of engineering at Google — and other very respectable scientists think they know how humanity in its present form might end. They call it ‘the singularity’, or ‘transcendence’. It has been popular as a science fiction concept, and recently on television and in Hollywood films. Kurzweil was the star of a documentary, Transcendent Man, in 2009, and Johnny Depp starred in a Hollywood version, Transcendence, in 2014.
Those who fear transcendence argue that, at some point in the relatively near future, the multitude of machines, linked across the world, will be comprehensively smarter than us. Thus far they agree, surely correctly, with Stephen Hawking and his many colleagues. Kurzweil, though, believes that the machines will combine with us to form complex life forms. That takeover would, Kurzweil argues, be a technical-social event unparalleled in human history, equivalent in impact to the arrival of a fleet of alien spaceships. Why would these intelligent machines put up with our pathetic response to global warming, which threatens them as much as us? They won’t value our human nature, least of all our vacillating emotions. They would also control our tools and our scope for learning. Since these are two of the things that make us human, humanity itself will have been diminished. Then our species, which continues to evolve via natural selection and mutation, will change to match this new environment. In films, this happens rapidly. Depending on who you read, it could take decades or centuries, or perhaps millennia, but if one accepts the premise that machines will soon be more powerful than us, and out of control, then this could be where the human race is headed.
The popular American TV series Person of Interest, for instance, is based on the premise that the US government persuades a maverick genius to build them an artificial super-intelligence. The machine is given or steals access to all CCTV cameras, all government and security service and local police force information, all the databases of big private corporations … and uses all that knowledge to pursue, initially, the wishes of its largely benign, democratically responsible owners, to fight terrorism and crime. But those interests become subverted by competitively vicious government agencies. And a rival private criminal organisation with an artificial super-intelligence emerges, and tries to kill off the original one. In parallel, there is the constant risk that either of these ASIs, or another, will simply start to pursue its own interests, ditching both its human masters, and the interests of humanity in general.
The maverick genius alone in an attic with a sonic screwdriver who endangers the world with his madcap invention is always good television, and always, in truth, absurd. A similar transformative secret scheme in real life was the Manhattan Project to build an atomic bomb in the Second World War. It took six years from Einstein’s letter to President Roosevelt of 1939 until the devastation of Hiroshima and Nagasaki, eventually employed 130,000 scientists, technicians, and soldiers, at Los Alamos and elsewhere, and cost the equivalent of $27 billion. Enterprises of that scale are not hidden any more. We know where the CIA, NSA, GCHQ, Apple, and Google live, and what it is in general terms that they do. Paradoxically perhaps, there is reasonably good democratic governance of the security agencies, and very poor democratic governance of the technology corporations.
The present authors are, frankly, a good deal more worried about old-fashioned natural stupidity than we are about deviant overweening artificial intelligence. The digital ape will remain a human adapted to use super-fast tools, and will be able to outwit or out-nasty a legion of artificial super-intelligences. It is worth putting the dangers in context. Let’s descend through Kees Boeke’s scale, noting some of the very tricky issues. When this present universe does come to an end, an extremely long time from now, when the last black hole has evaporated, will it renew itself and become the next universe? How will humans step from this one to the next? This is a very small subset of the problem of multiverses. Nobody knows. On a smaller scale, but still gigantic compared to the digital ape, astronomers are close to certain that, long before then, our own star, the sun, will turn into a red giant star, give out more light, and expand physically, unfortunately engulfing our neighbourhood. Earth has a few billion years at most. But humanity will need to leave much earlier, as the rising heat makes life impossible. If we survive the next (say) quarter of a billion years, it will be time to seek out another habitable planet, and work out how to travel there. Canada, Siberia, and other places near the poles may well go a lot earlier. Reading the pattern of weather over the past million years is becoming a tougher job. It used to be conventional wisdom that glaciation in the northern hemisphere went through 12,000-year cycles, and that the planet was 11,000 years through this one. Certainly a glance at the temperature chart for northern Europe over the past million years shows a roller coaster, with us at the latest acme and facing a vertiginous drop into permafrost winter sometime in the next thousand years. But if that is not close enough, 2016 was the warmest year Earth has had since modern records began. The mathematics and best data available now say that a key feature of the Anthropocene, the era in which we have dominated and radically changed our environment, is that we have heated the whole globe up. We face within a hundred years severe flooding of many coastal regions on every continent, and desertification of several large inland areas, at best. Louisiana, Bangladesh, and East Anglia will all be challenged. (At worst, there are respectable climatologists who fear that the process is close to unstoppable and Earth may turn into Venus.)
The point here is, how close to us do credible threats need to be, before we do something to mitigate or mend them? Anyone will put on warm clothing or take an umbrella if the weather looks poor in the morning. Nobody, however horrified by the human condition, spends their day trying to stop the end of the universe. Psychologist Daniel Kahneman won his Nobel Prize for his demonstrations of how poor almost all of us are at judgements about risks that standard economic textbooks assume we make easily and correctly. Humans need rules to live by. Many of the rules were established very early in our development. They include preferring a familiar pattern that has worked well so far to an unfamiliar one that might work better, and that preference makes us very bad at comparing the relative risk of different ways of doing things. For example, we live in a world of automobiles and trains. Automobiles kill many people all around the world. Trains are much safer. In the UK, where the roads are amongst the safest in the world, there were 1732 road deaths in 2015. In the 10 years up to and including 2015, there was a total of seven deaths in rail crashes, an average of less than one per year. Yet in 2001 a man fell asleep driving his Land Rover near Selby, Yorkshire, careered down an embankment onto the track, and was hit by a passenger train, which then collided with a freight train. Ten people died and dozens were injured. An immediate effect was a dip in passenger rail traffic, as horrified people decided to go by road instead. Roads which were, in that unusually bad year for rail, only killing over 300 times as many people. We become accustomed to different levels of risk for different activities or possibilities. When a particular risk worsens, we react negatively, even if the alternative is already much more dangerous.
There is a similar pattern with lifts and stairs. Lifts are extremely safe: in the US, around 30 people a year are killed in lift accidents. Just over 1300 people a year die in falls on stairs or steps, one of the most frequent types of accidental death. Stairs kill more than 40 times as many people as lifts. Yet very many of us are uneasy in lifts; very few have a phobia about stairs. We all know people who will habitually opt for the mass killer because it is ‘safer’. President Donald Trump, regarded as quaint for his rumoured bathmophobia, in this instance has the facts more on his side than the rest of us do. The digital ape really does need to lean more towards the digital and less towards the ape when assessing collective risks.
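For readers who like the arithmetic spelled out, here is a minimal sketch, in Python, using only the figures quoted in the two paragraphs above. (The ‘over 300 times’ figure earlier compares the ten Selby deaths with road deaths in that same year, 2001; the sketch uses the 2015 and ten-year-average numbers, so the ratio comes out even starker.) These are raw death counts, illustrative only, not risks adjusted for how often or how far people travel, or how many stairs they climb.

    # Back-of-the-envelope ratios from the figures quoted in the text.
    # Raw death counts, not exposure-adjusted risk statistics.
    uk_road_deaths_2015 = 1732                 # UK road deaths, 2015
    uk_rail_crash_deaths_2006_to_2015 = 7      # deaths in UK rail crashes over ten years
    rail_deaths_per_year = uk_rail_crash_deaths_2006_to_2015 / 10

    us_lift_deaths_per_year = 30               # approximate US deaths in lift accidents
    us_stair_deaths_per_year = 1300            # approximate US deaths in falls on stairs or steps

    print(round(uk_road_deaths_2015 / rail_deaths_per_year))          # roughly 2500 road deaths per rail-crash death
    print(round(us_stair_deaths_per_year / us_lift_deaths_per_year))  # about 43: 'more than 40 times'

The point is not the precision of any of these numbers, but the size of the gap between the statistical ranking of the dangers and our instinctive one.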
Present-day children take this habitat for granted. Yet it is without parallel in history, and does present significant dangers. When we do put our minds to it, we know how to collectively apprehend such risks. The UK government’s Office for Science analysed these behaviours, and made some good policy recommendations, which have sometimes been implemented. There has, for instance, been intense, well-informed debate about stem-cell research and cloning, and clear legal boundaries have been drawn. Other decisions have been made really badly: we still allow finance houses to operate and build hyper-complex and hyper-fast systems, with virtually no framework to govern them, despite the crash of 2008. There are, actually, many examples of good collective governmental scientific management of risks, real or apparent. That hole in the ozone layer is mending, thanks to coordinated international action around the use of CFCs. The British government acted promptly on exactly the right advice from the chief scientific advisor Professor Robert (now Lord) May on BSE in cattle. After initial inertia from politicians and bureaucrats, Oxford academic Lord Krebs, head of the Food Standards Agency, ensured that the right data and the best science prevailed in the 2001 foot-and-mouth epidemic. What could have been disasters were very much mitigated. And yet … there are many opposing instances where risk assessment has been mangled by governments, sometimes by accident, sometimes perhaps not. Start with the world-threatening weapons of mass destruction in Iraq used to justify the 2003 invasion.
The same has broadly been true of the dangers of gene manipulation, to which we will return. The Human Fertilisation and Embryology Authority and the Warnock review were exemplary in setting up structures and process before the genies were out of the bottle. And of course, the largest machine in the world, the Large Hadron Collider at CERN, has neither blown up the planet, nor opened a tear in the fabric of the universe to let aliens in …
We can’t put the technological genie back in the bottle. But we do need to make sure he stays on our team. As we said in our first chapter, these large choices inevitably bring us back to the fundamental questions of ethics. Human values are what limit the power of the magic typewriter. Let’s amplify that, list the dangers, and sketch out some answers.
Kurzweil describes his transcendence subtly. He is certainly right that artificial intelligence is a powerful force whose impact on every major aspect of our lives has been, and will be, profound. But at its core, his proposition is simple enough. Machines themselves might, somehow, become so sophisticated and fast that they are able to outmanoeuvre humanity, and gain control of their own destiny and their own off switch. And then use that fact to pursue selfish machine ends of their own, disregarding or countermanding human instructions. But the central plank in the machines’ strategy would be survival of their species. Not merely the survival of this machine or that one, but the survival of machines in general. To pursue the fantasy one step further, that might, of course, involve competing Darwinian struggle between kinds of machines. Humans have had the ability to completely destroy ourselves with chemical, biological, radiological, and nuclear weaponry for the past 70 years or so. The number of fingers on the many triggers, and the disparate and frightening variety of those fingers, state and non-state, increase every year. Never mind Keynes’ aphorism that in the long run we’re all dead. In the short run, we need to take care lest we become radioactive toast. Perhaps, think Kurzweil and others, the machines will collectively make a safety play. Their risk algorithms will show them that humans can’t be trusted to face up to these existential threats, and they will take over all important decisions.
Some of the machine goals might coincide with our own. A machine answer to a global epidemic might be superior to ours: we would all gain. Machines might see their best future as having us as close partners, just as we undoubtedly have for several decades seen our best future as having them as close partners. But because they would be smarter than us, the overall strategy would be in their hands. They would have transcended us. It is important to repeat: transcendence does not merely involve machines being better at many intellectual techniques than us: memory and calculation and judgement of situations. Nor does it merely involve us delegating lots of decisions to machines, under our overall aegis, or being utterly dependent on them to protect us from material damage … All of those are in train. It involves the elites who now collectively call the shots no longer being able to do so, being ousted from the driving seat. There is no sign of that happening, and no plausible description of how it could.
Now it is worth linking this to real dangers we can easily appreciate. We are already very dependent on electronic systems, and getting more dependent on expert systems, and the inevitable dark side of that dependency is that disruption spells trouble. The electricity supply in towns and cities is generated long distances away and its distribution is managed by smart machines. In a suburban house, a power outage might be fun for young families, like camping. One can live without electricity in a leafy avenue for some days, if absolutely necessary. In residential towers and city apartment blocks and downtown offices the water supply is pumped to the roof tank by electricity. No power means not only no lifts. It also means no washing or cooking water, and no sewage disposal. Much less fun, and not viable for more than a day or so. If an outage covers any significant area, whether state- or country-wide, then within days there is no fuel for delivery trucks, let alone private cars. The road tankers don’t get out of the refineries and anyway the petrol station pumps no longer work. Hospitals, elder care, schools cease to function. Panic sets in. We live in a just-in-time economy. Supermarkets have perhaps three days of stock, petrol stations about 24 hours. (This is probably, as those who run our supply chains claim, the most efficient way to organise them.) No food reaches the shops, which anyway have no refrigeration. Of course there are some back-up generators, and temporary power transfer can be rigged up from other regions. Nevertheless, widespread permanent power with smart control systems is essential for modern city and town life to function. The sensible consequence is that the electronic computing machines, which 24 hours a day manage vital infrastructure in the rich countries, are encased in hard shells, mostly steel and concrete, accessible only to a limited set of trusted keyholders. Collectively, we have been compelled for years to grant this authority to a select few. We could never let our lifestyle — our democracy — be vulnerable to just anybody barging in, running their own code, and either destroying or perverting systems. This overlaps, of course, with the need to also prevent physical attack or theft.
The network control systems, crucial to our habitat, vulnerable in this way, and therefore with access controlled by small groups, are already legion. The fuel networks include the power stations, generators, and storage and distribution conduits. Nuclear power stations and their switching mechanisms and gas pipelines. Military and security force installations, with particular reference to weapons facilities and very particular reference to chemical, biological, and nuclear weaponry. The water supply, from reservoir to tap. All our major transport modes: aeroplanes themselves, but also air traffic control. We are vulnerable to chaos as well as crashes and hijacks. The money supply and the banking system, from cash machines and supermarket tills, to the stock exchange. Access is equally, rightly, restricted to immense information stores — academic, commercial, private, and governmental. The all-important data infrastructure. This is where the information power of the big beasts lies, to which we will return.
These systems, and more, are protected, by business interests and institutionally, but also by the power of the state, ultimately by armed force. At the coalface they are managed by pass keys and gatekeepers, physically and digitally coded. So there are people — a lot of people, but a limited slice of the population — who are depended on by the rest of us to protect and manage our vital arteries. No single person can turn the world on and off, nor even one small group. For long, at any rate. Fuel tanker drivers with grievances came close to temporarily shutting down the UK in the autumn of 2000. The army was called to support the civil authorities, and the dispute ended. The networks of people, just like the networks of machines and wires and pipes, overlap and interconnect, but are also discrete. Equally, there are overlaps with commercial ownership, and with political and military status. Keyholders may well be ordinary working people, in no way a privileged sub-class, but answerable to the more powerful.
It is difficult, writing now, to see how this will ever change. Keyholder access to the smart control systems will always be restricted, and those gatekeepers will always be answerable to an equally small range of people with sway over infrastructure. The infrastructure would be too fragile otherwise. Their status may be commercial or political, bureaucratic or democratic or authoritarian. They will always be with us. The negotiable matter is, as it has been for a long time, how, if at all, do the rest of us call the powerful to account? How does the broad population control, or at least influence, the elites of decision makers and keyholders? The very positive converse of that difficult problem is that the idea of an all-conquering AI assumes that the machines somehow wrest multiple keys from the diverse elites. That bridge, thousands of years of experience with elites tells us, is not an easy one to cross. That is not how disparate meshing networks operate. An artificial super-intelligence can’t simply surround the TV station and broadcast martial music. Machines would need to be collectively and reciprocally organised — infinitely better than the tanker drivers in 2000 — and able to act against dozens of control systems simultaneously, without effective resistance. Most crucial junction gates are fail-safe in practice, have dead man’s handles, and can be overridden from different locations. Decades of struggle against malevolent viruses and spyware have led to considerable counter-measures already being in place against any invasion or insurgency, including one by a rogue artificial intelligence itself. Naturally, defences can be breached in one place or another. It is simply impossible to see how a self-directed non-human intelligence could overwhelm all the bulwarks before a counter-attack could be launched. Which could consist simply of pulling a lot of plugs out of sockets. The present authors are just less apocalyptic, more pluralist, more down to earth, or more cynical if you like, than Kurzweil, whilst wholeheartedly agreeing with Hawking that vigilance is vital. Machines are not going to march down the streets to storm our citadels. Transcendence is not inevitable: the requisite sequence of events is deeply unlikely. What has changed is human potential, thanks to our transformative new tools.
To put it even more bluntly, the problem is not that machines might wrest control of our lives from the elites. The problem is that most of us might never be able to wrest control of the machines from the people who occupy the command posts.
Hence the true dangers. First, in the rich capitalist nations it follows almost axiomatically that we would be exploited by those closest to the control systems, and we are. Mainly they demand and take old-fashioned stuff: money and social status. It can’t have escaped anybody’s notice that, whatever may have happened to broad income equality in the vast mass of western populations, digital elites pay themselves staggeringly well. This is so in the banks, where a percentage of money creation is creamed off for the managers. Even very junior people at keyboards earn multiples of ordinary wages. It is so in the artificial intelligence industry, where the technology giants are owned and run by young billionaires, surrounded by very rich cadres of researchers, designers, and marketeers. More equal societies are feasible, and some are more equal than others already, but getting there is not easy. The prime movers of the new technology claim that the web’s libertarian core values are built into the ecosystem. There is no one overarching regulator, and the internet does not, at least in the West, belong to governments. The wired world, they say, is anti-hierarchical, anti-authoritarian, part of the levelling counter-culture that began in the 1960s. But how accurate or simply self-serving is this? T-shirt-wearing billionaires who own monopoly mega-brands do have a certain accessible glamour, but they make strange harbingers of liberty, equality, and fraternity.
Elites plus machines are very powerful. It is certainly easy to see how a breakaway group, or a powerful corporation, could gain hegemony or intense sway over the others for a while. But only over part of the world, for a period of time. The Chinese ruling elite at present might arguably be such a group. They may be a danger to the rest of us. They are very unlikely to make a move to control the whole world, however intense rivalries may become. They would be successfully resisted by other entrenched and insurgent powers if they did. What is true is that great stores of potentially useful information are now available to too few groups. We return to that question later.*
[* Jonathan Fenby’s Will China Dominate the 21st Century? is superb. And his answer is ‘No’. But China is the second most powerful cyber presence, will continue to be so, and her leaders are seized of the need for ‘artificial intelligence with Chinese characteristics’, which perhaps decodes disturbingly.]
*
Secondly, we are in danger of accidental or unforeseen crashes, in finance yes, but also in transport and defence and energy utilities. All systems go wrong. A nuclear missile can be launched accidentally, perhaps because a flaw in a radar system misreads an errant holiday charter flight as incoming enemy action. A USAF B-52 bomber collided with its tanker in 1966 whilst refuelling in the air. Both planes were destroyed. Four hydrogen bombs carried on the B-52 hit the area around Palomares in Spain. They failed to detonate, but covered a wide area in radioactive material. In 1974, defective systems led to a US submarine carrying 16 nuclear missiles colliding with a Soviet vessel off the coast of Scotland just outside Holy Loch. These are only two of dozens of incidents. Nuclear power stations can and do leak physically, and the monitoring and alarms may not be effective. We rely heavily on the machines making good choices under our general control. Sometimes good or at least acceptable general principles, taken to a hyper-fast conclusion, can lead to bad results. Machines programmed to sell stocks rapidly if the price falls can make financial markets very unstable indeed. Unpleasant, but correctable in that case. Less so when applied to a rogue nuclear launch.
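To make that last mechanism concrete, here is a minimal sketch, in Python, of how identical ‘sell if the price falls’ rules can turn a small dip into a rout. Every number in it, the threshold, the number of machines, the price impact of each sale, is invented purely for illustration; real markets and real trading systems are vastly more complicated.

    # Toy illustration: many machines follow the same rule, 'sell everything
    # once the price has fallen more than 2 per cent from the start'.
    # Each wave of selling pushes the price down a little further.
    def simulate(initial_dip, n_machines=100, threshold=0.02, impact=0.001, steps=20):
        price = 1.0 - initial_dip
        sold = 0
        for _ in range(steps):
            if (1.0 - price) > threshold and sold < n_machines:
                sellers = n_machines - sold    # every machine still holding now sells
                price -= sellers * impact      # and that selling depresses the price further
                sold += sellers
        return price

    print(simulate(initial_dip=0.01))   # dip stays below the trigger: price ends near 0.99
    print(simulate(initial_dip=0.03))   # dip breaches the trigger: the cascade leaves the price near 0.87

A one per cent wobble is absorbed; a three per cent wobble, once it crosses everybody’s identical threshold, is amplified into a far larger fall. Hyper-fast execution means the difference between the two outcomes can play out in milliseconds, long before any human notices.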
Thirdly, there is the ever-present risk of purposeful external attack, targeted at control nodes to inflict the maximum damage. Cyber attacks by freelance hackers have become a permanent and well-known feature of twenty-first century life. State and terrorist attacks are now burgeoning, and criminals have seen the possibilities.
*
Machines have permeated our habitat, and that will intensify. There have already been distinct phases, starting with the agrarian and industrial revolutions of the last three centuries, and moving to the present supercharged digital landscape, which will surely last for many decades. The phase after that may perhaps be more nuanced, seductive even. Machines might, perhaps before the twenty-second century is very far gone, or earlier, simply become utterly reliable, and ever more responsive to the better parts of our nature. They won’t boss us about; they will lose all their rough edges. As we have demonstrated, already very few people understand them at all, and nobody has comprehensive knowledge. At least in principle, in a world of well-behaved technology, stable societies might reach the point that nobody bothers or cares about the infrastructure. Our transport will arrive on time, always. The screens will glow sharply, and new, inventive, satisfying films will be created by perfect CGI actors fast enough to keep us all amused. (The back catalogues are already enormous.) Food will be ordered up regularly according to our preferences from perfect supermarkets … So in that nirvana our descendants might lose interest in how it all works, whilst engaging in intense discussions about overall strategy and the distribution of wealth, and leave the friendly machines to just hum along with the day job.
Frankly, somebody else can worry about that because the first phase has a long time to run and we have real world problems to deal with. We urgently need to bring these dangers within a framework of public, preferably democratic, accountability.
*
Let us look at a couple of other risks in the digital ape’s habitat, of a different kind, also both interesting and real. Since at least the time of the ancient Egyptians we have known that faster or fatter or more beautiful creatures can be rapidly artificially selected and bred. Darwin spent years corresponding with pigeon fanciers, and breeding his own. He understood that the same broad principles operating in the natural environment over millions of years had led to everything from pigeons to penguins, eagles to earwigs. All evolved slowly by random mutation and were fitted by natural selection to their environment. These biological processes can be used as the model for new types of machine development. The twenty-first-century version of selective pigeon breeding is machines that can be ‘bred’ artificially, or allowed to ‘breed’ on their own. Homeostatic and autonomic self-repairing machines — machines that monitor their own state, and correct themselves — are already with us. Much work has been done over recent decades on the mathematics of genetics and biological evolution, and computer programs exist in which those evolutionary principles are applied to theoretical machines in test environments.
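Here is a minimal sketch, in Python, of the kind of program meant by that last sentence, under deliberately crude assumptions: each candidate ‘machine’ is reduced to a string of bits, its fitness is simply how closely it matches a fixed target, and evolution is nothing more than mutation plus selection in a loop. Real evolutionary-design systems score simulated behaviour in a test environment rather than matching a target, but the principle is the same.

    import random

    TARGET = [1] * 20                              # stand-in for 'well fitted to the environment'

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        # each bit has a small chance of flipping: random variation
        return [1 - g if random.random() < rate else g for g in genome]

    # start from thirty entirely random 'machines'
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                                  # a perfectly adapted variant has evolved
        survivors = population[:10]                # selection: only the fittest third breed
        population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

    print(generation, fitness(max(population, key=fitness)))

Nothing in the loop ‘knows’ what the target is; fitter variants simply leave more descendants. That is precisely the power of the technique, and precisely why the behaviour of such systems can surprise their own designers.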
All this should worry every digital ape as much as it scared Stephen Hawking. Any competent science fiction author can now invent nightmare scenarios which are also plausible. How about … an extremely small, fast-breeding nano-machine escapes from the laboratory. It lives by seeking out copper cables and eventually sucking them dry of electricity. Or it just dusts through the air intakes of vehicles. Or through the air intakes of digital apes … The world we now inhabit, where such an instant story cannot, technically, be equally instantly ruled out, is, plainly, in trouble. As Hawking advised, we should apply the same rules and moral frameworks to this research as we do to genetics laboratories that engage in cloning animals, and we must do it now. We simply cannot leave it to unaccountable private companies to shape this future.
There are myriad smaller-scale risks, inevitably, given the radical all-pervasive nature of the changes we have undergone to become digital apes. Are the new devices bad for our health? There is as yet little evidence that having a mobile phone next to your ear fries your brain, frightening though the thought may be. Use of bright screens, particularly late in the evening, is almost certainly another matter. Parents used to scold children that if they watched too much television they would get square eyes. It was intended more as a rueful witticism than a medical warning. The generations who have spent their lives sitting in front of video devices of several sorts don’t seem to be ophthalmologically different from their forebears. Many parents now worry about their children spending too much of their day looking at screens. They worry in part, of course, because they think young people should spend their time in idyllic safe outdoor pursuits, falling out of trees and off bicycles. Some eminent professors do now warn that the developing brains of young children may be warped by hours online. We think this is too pessimistic by far: the brain is exquisitely plastic, and adapts to the challenges and opportunities of the child’s environment. Even if there were some diminution, we would need to set that against the augmentation, and the preparation for the world they actually live in, which will, it seems at the moment, involve needing a brain that knows its way about screens. Our brains specialise in what brings the whole person better returns. Young people will transact increasing proportions of their lives in this way, and they need to learn how to do it. The new dangers are not in the first instance neurological, but in how the individual relates to their social and physical environment: a child who spends hours a day in front of online games is living a very different life from one who does not.
Whatever the truth, sensible parents the world over do seem to agree that their children should spend a limited part of their day staring at smart screens. That seems at the very least to be a sociological fact about modern families, of interest in itself. No doubt people generally believe all kinds of unfounded rubbish, and always have done, but there is surely something in this fear. Parents are right to be concerned, should probably put time restrictions on any activity which looks as if it has become obsessive, and certainly should prevent bright screen use before bedtime.
As far back as the 1980s, university administrations closed down campus computer halls every evening for half an hour, to enable them to prise undergraduate hands from the keyboards at the end of a prescribed working day. (Research staff were trusted to re-enter later.) This may have been partly a rationing of machine time, but it seemed to be mostly a feeling that students would be gripped endlessly around the clock unless an in loco parentis regime brought them back to ground. Universities varied in their approach. People now in their fifties who were undergraduates at the time do not seem to be differentially suffering from major brain diseases, depending on which university policy they studied under. Any more than anybody has shown that the minority of children in the 1950s and 1960s who grew up without television are, in their later years, in some way fitter for purpose than the rest of us.
China has a Cyberspace Administration. (How do they pace out their territory, distinctly from all the usual government departments and agencies, which all also have a cyber presence?) They have drafted regulations to ban all children from playing online games between midnight and 8am. That would seem straightforward enough, since in plain common sense children should be forbidden anything at all other than sleep and breakfast during those hours. A viable plan in China, where a national ID number can be demanded before anyone plays any online game. It is part, however, of a strong campaign by the Chinese government to root out ‘what it considers an unhealthy and unproductive obsession’, according to The Times’ man in Beijing, Jamie Fullerton:
The proposal has raised fears that more children could be sent to bootcamp-style internet addiction centres. The administration said that schools should work with institutions to help minors with internet addictions: a disorder that China became the first country to officially recognise in 2008. In July 2014, China’s 1.3 billion population included 632 million internet users and the government believed that up to 24 million of those were addicts.
‘Children face night time ban on playing computer games’, The Times, 8 October 2016
‘Internet addiction’ is not a classification that mainstream western psychology is yet quite content with, although many well-respected psychologists and academics certainly recognise at least two broad categories of issue. First, that time spent on the internet may become something that disrupts a person’s life disproportionately, or is being used as the means to fuel out-of-control gambling, sex, or shopping. Loneliness, like everything else, has been changed by the wiring up of the world. Second, there has been interest in the Chinese claims about research showing brain changes as a result of excessive enveloping online activity, mostly, but not only, gaming. Broadly speaking, it almost certainly is true that the brain, particularly of younger people, is shaped by gaming. The brain is, after all, shaped by most experiences, because it moulds itself to resource the activities most asked of it. Neuroscience has known for a long time that when a person loses, for instance, their sight, the relevant brain space will be devoted over time to other senses, which will thus grow stronger, and substitute as far as they can. Nevertheless, the concept of digital detox feels like it makes real sense, the idea that we could all do with switching off the high-tech screens from time to time. Tear the kids away from Minecraft and send them out to get some fresh air and do something healthy.
*
Our habitat is still developing, with many unexpected features. The new technology can disrupt, disintermediate, any aspect of daily life and business, for good or ill, planned or accidental. Scott McDonald, the sociologist and long-time research director and marketing expert for Time Warner Inc. and Condé Nast, now CEO of the Advertising Research Foundation, has closely studied impulse-buying for decades. A significant proportion of high-end magazines are bought at newsstands, or racks in supermarkets, by people who are only half contemplating a purchase, then something catches their eye. These are often located where people, to avoid talking to their neighbour in a checkout queue or a waiting room, will pick up their favourite titles, or venture into a new realm, while waiting to do something else. As not very engaged vision travels over the stand as a whole, spending a microsecond on any individual magazine, it stops from time to time. Why? What is it about this particular cover — the colour, the print, the size, the kind of picture, or the shape of the masthead logo — which draws attention, while its neighbour sits shyly unasked? Millions of dollars have rested on this research, which has enabled publishers to carefully fine-tune all those catchy aspects of their products, to become the naturally chosen one. And now? Scott McDonald’s research shows that many in line at the supermarket still want to avoid the eye of their neighbours. So they look for a polite displacement activity, a valid distraction. We used to pick up magazines, and candy bars. Now we take our smartphones out of our pockets, even pretend to have noticed an important message which deserves our full attention.
*
Here is another disintermediation. Why do some (mostly) young people want to spray paintings, personal logos, caustic comments about life and the world, in acrylic paint on public surfaces? Presumably the same drives that lie behind all art, all loud statements to the community. Okay, but why then is graffiti on the decline in major cities around the world? The urge to create has surely not diminished. Better, more technically adept, policing using widespread CCTV is one answer. Another is that the frenetic desire to express oneself, to leave a mark, to shout out to passing girls … now diverts through different channels. An article in The Economist in 2013 quoted an expert:
A generational shift is apparent, too. Fewer teenagers are getting into painting walls. They prefer to play with iPads and video games, reckons Boyd Hill, an artist known as Solo One.
‘The Writing’s on the Wall: having turned respectable, graffiti culture is dying’, The Economist, 9 November 2013
Not implausible. With Facebook and Snapchat on your ultra-smart gadget, why waste money on a spray can? When one of the authors of this book mentioned the theory to Boris Johnson, then Mayor of London, now a senior statesman, he pretended to consider launching a nostalgic Conservative party campaign to halt the terrible decline in the traditional crafts of the British graffiti worker.
The BBC reports that:
Behnaz Farahi, an architect and interaction designer at the University of Southern California, has created a 3D-printed garment, Caress of the Gaze, which detects when you are being stared at and moves in response. Another creation, Synapse, is a 3D-printed helmet which moves and illuminates in response to the wearer’s brain activity.
‘The 3D-Printed Clothing That Reacts to Your Environment’, BBC website, 3 August 2016
It is difficult to imagine that this art would actually turn into widespread fashion. The principle is fun, thought provoking even, in one garment, in a gallery. Weird, surely, in many garments on the street? Clothes that show what part of a body is being looked at? Again, an interesting occasional statement, difficult to live with if widespread.
*
One further characteristic of the digital ape’s habitat is crucial. The nature of what it is to be here has changed over centuries, but with increasing rapidity as technology becomes more capacious. Homo sapiens, in the nature of the beast, always had both detailed and abstract notions of other places, elsewhere from our present location. Ideas of there. That was closely related to the development of language and communication. Even perhaps before: many mammals, birds, fish, and insects seem to be able to remember locations and return to them. Squirrels hide nuts; pigeons home; other birds migrate; salmon return to spawning grounds; bees buzz back to their hives. That, of course, does not mean they represent, let alone conceptualise, places as we do. At some stage in the history of hominins, that ability did emerge. Something like the idea of ‘the hill we travel to at the full moon because the flints are good there’ must have existed at most times during the nearly 2 million years in which the various species of hominins were expert tool users, without language. Religious concepts of abstract other places seem to have been almost universal amongst Homo sapiens until the very recent atheist revolution following the Enlightenment in the western world. Where the dead go, where powerful spirits or gods live, were entangled with early ideas about knowledge and wisdom and where they come from, and future worlds after death, mediated by priests and shamans.
And every individual hominin alternated between communing with the world and other people, and internal dialogue of one kind or another. We will return to the work of Julian Jaynes in a later chapter: he built an intriguing theory of the history of the mind on this point. Certainly with self-knowledge comes the ability to notice one’s dreaming and scheming, and indeed one’s absence as well as one’s presence. There was practical communication with those other worlds always, of course: go and tell your dad his rabbit is cooked. Then with civilisation came the formal message, scraped or written, followed by postal systems. But leaving aside smoke signals and flashing mirrors, it was not till the telegraph in the nineteenth century that two people could talk in real time whilst not in each other’s physical presence. Even then, only a few professionals actually did this.
Then came the telephone, in parallel with the one-way broadcasters, radio, films, and television, gradually becoming universal in developed countries, and only recently via the mobile phone in poorer places. And thus began the extended self, with here in practice stretched to include (but not be confused with) the location of one’s interlocutor, and one’s collection of interlocutors. The geography of one’s location equally began to extend. Socially, one’s friends might mostly live in the same village or part of town. Work colleagues and business contacts might be further afield: a significant part of the direct contact with them was in the new, shared, extended space. The internet then brought e-mail and discussion groups; the web brought the ability to ‘go to’ abstract places, to be ‘on’ Facebook, to have an online life, with mutual broadcasting within abstract communities.
Many — not by any means all — people therefore now live in very extended, only partly geographical places, are constantly messaging, looking at news, talking, across several spheres. This is closely related to the odd notion we have already mentioned, of an abstract but geographical place, the cloud. The Swedish politician Gudrun Schyman, talking of the Feminist Initiative party she leads, said:
We have been good with social media, largely out of necessity. Of all the parties in Sweden we have the highest profile on social media, and that is where our members are, and that is their language.
Quoted in Dominic Hinde, A Utopia Like Any Other: inside the Swedish model, 2016
Schyman perceives, surely correctly, that here a large number of people commune together, with effective power and their own ways of talking to each other, at a non-existent, but utterly real, location.
Similarly, assortative mating has changed. Over only a few years in the past decade the concept of ‘meeting’ a new partner online has moved from being something the average citizen would regard as both risky and risqué, and a social embarrassment in polite company, to being an unremarkable everyday fact. Many people of all ages would consciously look for friendship and love on the web as readily as they would look for it to occur accidentally through their workplace or social circles.
This has happened at the same time as the globalisation of world trade and world finances, and the ubiquitous ‘offshoring’ of capital, ownership of businesses and property, tax avoidance, crime, in a world of complexity. The overall consequence for the digital ape is an untethering of many aspects of life, and even the self, from the here and now. The digital ape can and does easily, much of the day, choose to be there not here. Perhaps that is one reason for the popularity of ‘mindfulness’, the conscious striving to choose to be present.
In summary, a major element of the digital ape’s habitat is now hyper-complex and super-fast systems, entwined with all the old features we adapted to. This has already added an extra dimension to our way of life. It has also spawned considerable new risks: instability; cyber-attack; insurgent artificial intelligence, amongst others. Our response needs to be vigilant, intelligent, and inventive. So long as we are, we will remain in control of the machines, and benefit greatly from them, but the perennial danger from powerful humans will intensify. We need to develop policy frameworks for this. Beyond the dangers, a world of opportunity arises from our new relationship with the subset of the machines which can be perhaps loosely called robots, which is markedly changing how we live. We devote two chapters to the very largely positive aspects of that. To do that properly, we need first to understand more deeply our very fundamental and aboriginal relationship to tools, and the appearance on the scene of the digital ape.