TWO

Increasingly Integrated Technology

‘a new world has come into existence; but it exists only in fragments.’ Lewis Mumford, Technics and Civilization (1934)

In the digital lifeworld, technology will permeate our world, inseparable from our daily experience and embedded in physical structures and objects that we never regarded previously as ‘technology’. Our lives will play out in a teeming network of connected people and ‘smart’ things, with little meaningful distinction between human and machine, online and offline, virtual and physical, or, as the author William Gibson puts it, between ‘cyberspace’ and ‘meatspace’.1 This is what I call increasingly integrated technology.

It already feels like we can’t escape from digital technology. Take smartphones. It’s estimated that more than 90 per cent of us keep ours within 3 feet of our bodies twenty-four hours a day.2 Sixty-three per cent of Americans check their device every hour. Nearly 10 per cent check every five minutes.3 It’s hard to believe they’ve only been with us for a decade or so. Yet the quantity of digital technology in the world is set to grow massively in the next few decades. Tens of billions, and eventually trillions, of everyday objects, from clothing to household appliances, will be endowed with computer processing power, equipped with sensors, and connected to the internet. These ‘smart’ devices will be able to make their own decisions by gathering, processing, and acting on the information they absorb from the world around them.4 As technology and design improve, we may stop noticing that digital objects are even ‘technology’. David Rose describes a world of ‘enchanted objects’—‘ordinary things made extraordinary’.5 This phenomenon, or variants of it, has been variously called ‘the internet of things’, ‘ubiquitous computing’, ‘distributed computing’, ‘ambient intelligence’, ‘augmented things’, and perhaps most elegantly, ‘everyware’.6

There are five underlying trends. Digital technology is becoming more pervasive, more connective, more sensitive, more constitutive, and more immersive. Let’s look at each in turn.

Pervasive

First, technology is becoming increasingly pervasive. Although estimates vary, it’s predicted that by 2020 there will be somewhere between 25 and 50 billion devices connected to the internet.7 Almost inconceivably, the Internet Business Solutions Group at Cisco Systems estimates that 99 per cent of the physical objects in the world will eventually be connected to a network.8 In such a world, processing power would be so ubiquitous that what we think of as ‘computers’ would effectively disappear.9

At home, refrigerators will monitor what you eat and replenish your online shopping basket; ovens and washing machines will respond to voice commands; coffee machines will brew your beverage when you stir in bed. Sensors will monitor the heat and light in your home, changing the temperature and opening the blinds accordingly. Your house might be protected by ‘smart locks’ that use biometric information like handprint, face, and retina scans to control entry and exit.10

Although they’re not to everyone’s taste, apparently nearly half of consumers plan to buy wearable technologies by 2019.11 Pull on a Ralph Lauren ‘PoloTech’ shirt and it will monitor your steps, heart rate, and breathing intensity—providing you with personalized performance feedback.12 Snapchat Spectacles and similar early accoutrements, already on the market, can capture what you see in shareable ten-second clips.13 In the future, more sophisticated products will supersede the first generation of Nike Fuelbands, Jawbone fitness trackers, Fitbit wristbands, and Apple watches. ‘Epidermal electronics’—small stretchy patches worn on the skin—will be able to record your sun exposure, heart rate, and blood oxygenation.14 Meanwhile, when you toss a ball around the garden, the pigskin itself will record the distance, velocity, spin rate, spiral, and catch rate for post-game analysis.15

In public, smart waste bins will know when they are full, highways will know when they are cracked, and supermarket shelves will know when they are empty. Each will feed information back to the persons (or machines) responsible for fixing the problem. Smart signs, streetlamps, and traffic lights will interact with the driverless cars that pass by.16 ‘Smart cities’ are expected to grow in number. Authorities in Louisville, Kentucky, have already embedded GPS trackers inside inhalers to measure which parts of the city are hotspots of air pollution.17

Connective

As well as permeating the physical world, technology will continue to grow more connective, facilitating the exchange of information between people, between people and machines, and between machines themselves. Since the turn of the millennium, the number of people connected to the internet has grown radically, from around 400 million in 2000 to 3.5 billion in 2016.18 That number is expected to rise to nearly 4.6 billion by 2021.19 It looks like most of the earth’s population will eventually be connected to wireless internet-based networks, not just through desktop computers but via ‘smart’ devices, smartphones, tablet computers, games consoles, and wearables. Facebook now boasts more than 2 billion active users.20 Twitter has more than 313 million active users, four-fifths of whom access it on mobile devices.21 YouTube has more than 1 billion active users.22

Digital technologies have changed the nature of human connectivity as well as its extent. Perhaps the most profound change is the growth of decentralized modes of producing and distributing information, culture, and knowledge. Wikipedia is the most famous example. Together, tens of thousands of contributors from around the world have produced the greatest repository of human knowledge ever assembled, working cooperatively, not for profit, outside the market system, and not under the command of the state. Similarly, decentralized networks like Tor are increasingly popular, and in 2015 there were more than 1 billion uses of Creative Commons licences, a collaboration-friendly copyright system that encourages the use and adaptation of content by others without further permission from the originator. As Yochai Benkler argues in The Wealth of Networks (2006) and The Penguin and the Leviathan (2011), it’s not that human nature has changed in the last twenty years to make us more cooperative. Rather, it’s that this scale of cooperative behaviour would have been impossible in the past. Connective technology has made it possible.23

The last few years have seen the emergence of another technology with potentially far-reaching implications for connectivity and cooperation. This is ‘blockchain’, invented by the mysterious pioneer (or pioneers) Satoshi Nakamoto. It’s best known as the system underpinning the cryptocurrency Bitcoin, launched in 2009. The workings of blockchain are technically complex, but the basic premise can be described simply. Imagine a giant digital ledger (or spreadsheet) of the kind we would previously have put on paper. This ledger contains a record of every transaction that has ever taken place between its users. Every ten minutes or so, it is updated with a new ‘block’ of information containing the most recent transactions. Every new block refers back to the previous block, creating an unbroken chain of custody of all assets reaching back to their inception. The ledger is not stored in a single place. Instead, it is stored (‘distributed’) simultaneously across thousands of computers around the world. For security, it can only be added to, and not changed; it is public and can be scrutinized; and most importantly, transactions are secured by powerful ‘public key’ cryptography.
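To make the structure concrete, here is a minimal sketch, in Python, of a hash-chained ledger. It is an illustration only, with invented function and transaction names; a real blockchain adds mining, consensus among the thousands of distributed copies, and digital signatures, all of which are omitted here.

    import hashlib
    import json
    import time

    def hash_block(block):
        # Serialize with sorted keys so identical content always yields the same hash.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def make_block(transactions, previous_hash):
        # Each block records some transactions and the hash of the block before it.
        return {
            'timestamp': time.time(),
            'transactions': transactions,
            'previous_hash': previous_hash,
        }

    # A toy ledger: a 'genesis' block followed by blocks of later transactions.
    ledger = [make_block([], previous_hash='0' * 64)]
    ledger.append(make_block([{'from': 'alice', 'to': 'bob', 'amount': 5}],
                             previous_hash=hash_block(ledger[-1])))
    ledger.append(make_block([{'from': 'bob', 'to': 'carol', 'amount': 2}],
                             previous_hash=hash_block(ledger[-1])))

    def is_valid(chain):
        # The chain verifies only if every block still points to the true hash
        # of its predecessor.
        return all(chain[i]['previous_hash'] == hash_block(chain[i - 1])
                   for i in range(1, len(chain)))

    print(is_valid(ledger))                        # True
    ledger[1]['transactions'][0]['amount'] = 500   # tamper with a past transaction
    print(is_valid(ledger))                        # False: the tampering is evident

Because each block stores the hash of its predecessor, the final check fails the moment any historical entry is altered, which is what gives the ledger its unbroken chain of custody.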

Blockchain’s social significance is that it enables secure transactions between strangers without the help of a trusted third-party intermediary like a bank, credit card company, or the state. It purports to solve a longstanding problem in computer science (and politics), which is how to create ‘trust’, or something like it, between people with no other personal connection. Digital currency is perhaps the most obvious use for blockchain technology, but in theory it can be used to record almost anything, from birth and death certificates to marriage licences.24 It could also provide solutions to other problems of digital life, such as how to produce and retain control over secure digital ‘wallets’ or IDs.25 Looking further ahead, it’s plausible to imagine ‘smart’ assets managing themselves by combining AI and blockchain: ‘Spare bedrooms, empty apartments, or vacant conference rooms could rent themselves out . . . autonomous agents could manage our homes and office buildings . . . ’26

Blockchain also offers a potential means of regulating more complex legal and social relations beyond simple rights of property or usage. A ‘smart contract’, for instance, is a piece of blockchain software that executes itself automatically under pre-agreed circumstances—like a purchase agreement which automatically transfers the ownership title of a car to a customer once all loan payments have been made.27 There are early ‘Decentralized Autonomous Organizations’ (DAOs) that seek to solve problems of collective action without a centralized power structure.28 Imagine services like Uber or Airbnb, but without any formal organization at the centre pulling the strings.29 The developers of the Ethereum blockchain, among others, have said they want to use DAOs to replace the state altogether.
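The logic of a smart contract such as the car purchase described above can be sketched in ordinary code. Real smart contracts run on blockchain platforms such as Ethereum rather than on a single computer; the Python below, with invented class and field names, is only a conceptual illustration of a rule that executes itself once a pre-agreed condition is met.

    from dataclasses import dataclass, field

    @dataclass
    class CarPurchaseContract:
        # A toy 'smart contract': transfer of title is not a promise enforced
        # by courts but a rule executed automatically by code.
        buyer: str
        seller: str
        price: float
        payments: list = field(default_factory=list)
        title_holder: str = ''

        def __post_init__(self):
            # The seller holds title until the pre-agreed condition is met.
            self.title_holder = self.seller

        def make_payment(self, amount):
            self.payments.append(amount)
            # Pre-agreed circumstance: once payments cover the price,
            # ownership transfers itself, with no intermediary involved.
            if sum(self.payments) >= self.price:
                self.title_holder = self.buyer

    contract = CarPurchaseContract(buyer='alice', seller='dealer', price=300.0)
    for instalment in (100.0, 100.0, 100.0):
        contract.make_payment(instalment)
    print(contract.title_holder)   # 'alice'

Once the final instalment is recorded, title passes to the buyer automatically; no bank, registry, or court needs to act.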

Blockchain still presents serious challenges of scale, governance, and even security, which are yet to be overcome.30 Yet for a youthful technology it is already delivering some interesting results. The governments of Honduras, Georgia, and Sweden are trialling the use of blockchain to handle land titles,31 and the government of Estonia is using it to record more than 1 million patient health records.32 In the UK, the Department for Work and Pensions is piloting a blockchain solution for the payment of welfare benefits.33 In the US, the Defense Advanced Research Projects Agency (DARPA) is looking into using blockchain technology to protect its military networks and communications.34

Increasingly connective technology is not just about people connecting with other people. It is also about increasing connectivity between people and machines—through Siri-like ‘oracles’ which answer your questions and ‘genies’ that execute commands.35 In the future, when you leave your house, ‘the same conversation you were having with your vacuum cleaner or robot pet will be carried on seamlessly with your driverless car, as if one “person” inhabited all these devices’.36 Samsung is looking to put its AI voice assistant Bixby into household appliances, like TVs and refrigerators, to make them responsive to voice commands.37

Self-driving cars will communicate with one another to minimize traffic and avoid collisions. Within the home, Bluetooth Mesh technology could increasingly be used to connect ‘smart’ devices with one another, using every nearby device as a range booster to create a secure network connection between devices that would previously have been out of range.38 (It’s important to note, however, that one of the challenges for the ‘internet of things’ will be developing a unified protocol that enables devices to communicate seamlessly with one another.)39

Looking further ahead, developments in hardware could yield new and astonishing ways of communicating. In 2014, a person wearing an electroencephalogram (EEG) headset, which records brain activity, successfully sent a ‘thought’ to another person in France wearing a similar device, who was able to understand the message. This was the first scientific instance of ‘mind-to-mind’ communication, also known as telepathy.40 You can already buy basic brainwave-reading devices, such as the Muse headband, which aims to aid meditation by providing real-time feedback on brain activity.41 Companies such as NeuroSky sell headsets that allow you to operate apps and play games on your smartphone using only thoughts. The US army has (apparently not very well) flown a helicopter using this kind of technology.42 Brain–computer interfaces have been the subject of a good deal of attention in Silicon Valley.43

Overall, increasingly connective technology appears set to deliver the vision of Tim Berners-Lee, inventor of the world wide web, of ‘anything being potentially connected with anything’.44

Sensitive

In the future, we can expect a dramatic rise in the number of sensors in the world around us, together with a vast improvement in what they are able to detect. This is increasingly sensitive technology. Our handheld devices already contain microphones to measure sound, GPS chips to determine location, cameras to capture images, and several other sensors. Increasingly, the devices around us will use radar, sonar, lidar (the system used in self-driving cars to measure the distance to an object by firing a laser at it), motion sensors, bar code scanners, humidity gauges, pressure sensors, magnetometers, barometers, accelerometers, and other means of sensing, and hence interacting with, the physical world.

There are many reasons why we might want more sensors in our own homes and devices—for recovering lost or stolen items using GPS, for instance, or monitoring the security or temperature of our homes from afar.45 Industrial entities, too, benefit from real-time feedback on their machinery, whether in relation to humidity, air pressure, electrical resistivity, or chemical presence. Transit and delivery companies can monitor the workload and stress placed on their fleets. Engineering and architectural firms can measure corrosion rates and stress. Similarly, within water systems, ‘sensors can measure water quality, pressure and flow, enabling real-time management and maintenance of the pipes’.46 Automatic meter reading technology feeds back usage data to utility providers, allowing them to detect faults, match supply with demand, and send out bills automatically—with little or no human intervention.47

Municipal authorities already recognize the value of a ‘dense sensor network’ to enable ‘the monitoring of different conditions across a system or place’.48 Automated number plate recognition technology can be used to track vehicles as they cross a city and to levy penalties for traffic violations.49 The city of Santander in Spain has distributed 12,000 sensors in urban areas to measure ‘noise, temperature, ambient light levels, carbon monoxide concentration, and the availability and location of parking spaces’.50 Following its mission in Afghanistan, the US military left 1,500 ‘unattended ground sensors’ to monitor Afghan and Pakistani population movement.51 Researchers at the Senseable City Lab at the Massachusetts Institute of Technology are working on a cheap package of sensors to be put on top of street lights, which would make it possible to measure noise and pollution ‘almost house by house in real time’.52 More remarkable still are plans, presently on hold, for the development of ‘PlanIT Valley’ east of the city of Porto, in Portugal. This city would use an ‘Urban Operating System’ to gather information from more than 100 million embedded sensors, feeding the data back to applications that monitor and control the city’s systems.53

From the macro to the micro, ‘smart dust’ technology involves micro-electromechanical systems measuring less than 2 millimetres by 2 millimetres, equipped with tiny sensors capable of gathering a variety of data. One pilot study called ‘Underworlds’ seeks to harness the ‘data’ which ‘get flushed down the toilet’. It envisages small robots moving through sewers, collecting samples for analysis, and measuring patterns of food intake, infectious diseases, and gastric health.54

Sensors are also moving into sensory realms previously only experienced by living creatures. One company, for instance, is developing a mobile chemical sensor able to ‘smell’ and ‘taste’. (One hopes that the sewer-robots described in the previous paragraph are not endowed with this ability.) Helpfully, your smartphone will be able to test your blood alcohol level, blood glucose level, or whether you have halitosis, using about 2,000 sensors to detect aromas and flavours—far more than the 400 sensors in the human nose.55 Scientists at MIT recently developed a type of spinach—implanted with nanoparticles and carbon nanotubes—capable of detecting nitro-aromatics in the soil around it and sending live feedback to a smartphone. The result? Bomb-detecting spinach.56 (At last, someone has found a use for spinach.)

In the field of machine vision, AI systems are increasingly able to find the most important part of an image and generate an accurate verbal caption of what they ‘see’ (e.g. ‘people shopping in an outdoor market’),57 and computerized face recognition is now so advanced that it is routinely used for security purposes at border crossings in Europe and Australia.58 Less loftily, face recognition technology is used by toilet paper dispensers at Beijing’s Temple of Heaven park to make sure that no individual takes more than their fair share.59

Increasingly sensitive technology will prompt a change in how we urge machines to do our bidding. We are currently in the era of the ‘glass slab’—smartphones and tablet computers that respond mainly to our touch, along with other stimuli such as voice commands.60 Soon, machines will respond to other forms of command, such as eye movements61 or gestures: there are already robotic toys that ‘sit’ in response to a specific wave of the hand. Some interfaces will be of an entirely new kind, like the temporary tattoos developed by MIT which can be used to control your smartphone,62 or the Electrick spray-paint that turns any object into a sensor capable of reading finger presses like a touchscreen.63 In 2015, workers at the Epicenter hub in Stockholm had microchips implanted into their hands, enabling them to open secure doors and operate photocopiers by waving over a sensor.64

The most intimately sensitive technologies will gather data directly from our bodies. Proteus Biomedical and Novartis have developed a ‘smart pill’ that can tell your smartphone how your body is reacting to medication.65 Neuroprosthetics, still at an early stage of development, interact directly with nerve tissue. A chip implanted in the motor cortex of a paralysed patient enabled him to spell out words by moving a cursor on a screen with his thoughts.66 In a survey of 800 executives conducted for the World Economic Forum, 82 per cent expected that the first implantable smartphone would be available commercially by 2025.67 By then, smartphones will have truly become what US Supreme Court Chief Justice John Roberts called ‘an important feature of human anatomy’.68

Machines are becoming sensitive in a further important sense, in that they are increasingly able to detect human emotions. This is the field of affective computing. By looking at a human face, such systems can tell in real time whether that person is happy, confused, surprised, or disgusted. One developer claims to have built ‘the world’s largest emotional data repository with nearly 4 million faces’, from which the system has learned to interpret subtle emotional cues.69 Raffi Khatchadourian writes in the New Yorker that:70

computers can now outperform most people in distinguishing social smiles from those triggered by spontaneous joy, and in differentiating between faked pain and genuine pain. They can determine if a patient is depressed . . . they can register expressions so fleeting that they are unknown even to the person making them.

If Ludwig Wittgenstein is right that the face is the ‘soul of the body’,71 then affective computing will mark a spiritual upheaval in the relationship between humans and machines. And the face is not the only portal into our internal life:72

Emotion is communicated, for example, through . . . body movement that can be measured, for instance, by gyroscopic sensors; posture, detected through pressure-sensing chairs; and skin-conductance electrodes can pick up indicative changes in perspiration or in electrical resistance. It is also possible to infer emotional states from humans’ blinking patterns, head tilts and velocity, nods, heart rate, muscle tension, breathing rate, and, as might be expected, by electrical activity in the brain.

Machines are well placed to detect these signals. For instance, it’s possible to use the vocal pitch, rhythm, and intensity of a conversation between a woman and a child to determine whether the woman is the child’s mother.73 By bouncing ordinary WiFi signals off the human body, researchers at MIT claim to be able to determine, about 70 per cent of the time, the emotional state of a person they have never studied before. This rate improves with people known to the system.74 Another biometric is human gait (manner of walking), which AI systems can use to identify a known person from afar, or even to recognize suspicious behaviour in strangers.75

As well as reading our emotions, machines can increasingly adapt and respond to them. This is artificial emotional intelligence. Its uses are manifold—from ATMs able to understand if you are in a relaxed mood and therefore receptive to advertising76 to AI ‘companions’ endowed with ‘faces’ and ‘eyes’ that can respond in ‘seemingly emotional ways’.77 Technologists are already working to replicate the most intimate connection of all, with artificial romantic partners capable of sexy speech and motion.78

Constitutive

By increasingly constitutive technology, I mean digital technology that makes itself felt in the hard, physical world of atoms and not just the ‘cyber’ world of bits. In large part this is the province of robotics. The practice of building mechanical automata dates back at least 2,000 years to Hero of Alexandria, who built a self-powered three-wheel cart.79 The earliest reference to the idea of an autonomous humanoid machine is the Golem of Jewish lore. In the imagination of Jorge Luis Borges the Golem was ‘a mannequin shaped with awkward hands’, which:

raised its sleepy eyelids,

saw forms and colors that it did not understand,

and confused by our babble

made fearful movements

Modern robotics remains a challenging field, in part due to Moravec’s paradox, which is that (contrary to what might be expected) ‘high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computation resources.’80 Thus it has always been easier to design problem-solving machines than to endow them with balance or athletic prowess equal to that of a human or animal. We still don’t have robots we would trust to cut our hair.

Nevertheless, the world population of robots is now more than 10 million,81 of which more than 1 million perform useful work (robots, for instance, now account for 80 per cent of the work in manufacturing a car).82 Amazon’s robots, which look like roving footstools, number more than 15,000. They bring goods out of storage and carry them to human employees.83 Ninety per cent of crop spraying in Japan is done by unmanned drones.84 In 2016, some 300,000 new industrial robots were installed,85 and global spending on robotics is expected to be more than four times higher in 2025 than it was in 2010.86 A ‘reasonable prediction’ is that by 2020 many households will have one or more robots, used for transport, cleaning, education, care, companionship, or entertainment.87

We already trust robotic systems to perform complex and important tasks. Foremost among these is surgery. Using advanced robotics, a team of surgeons in the United States was able to remove the gall bladder of a woman in France, nearly 4,000 miles across the Atlantic.88 Perhaps the most commonplace robots in the future will be self-driving cars, able to navigate the physical world safely ‘without getting tired or distracted’.89 Google’s fleet of autonomous vehicles has driven more than 2 million miles with only a handful of incidents, just one of which is said to have been the fault of the vehicle itself.90 Since human error is the ‘certain’ cause of at least 80 per cent of all crashes, increased safety will be one of the principal advantages.91 We are likely to see, in the next decade, driverless trucks and boats, as well as airborne drones of varying autonomy: the Federal Aviation Administration (FAA) estimates that 10,000 civilian drones could be flying in the United States by 2020.92

Nature has been the inspiration for many recent developments in robotic locomotion. Some robots can ‘break themselves up and re-assemble the parts, sometimes adopting a new shape—like a worm (able to traverse a narrow pipe), or a ball or multi-legged creature (suitable to level or rough ground respectively).’93 At Harvard, researchers are working on RoboBees, measuring less than an inch and weighing less than one-tenth of a gram. They fly using ‘artificial muscles’ comprised of materials that contract when a voltage is applied. Potential applications include crop pollination, search and rescue, surveillance, and high-resolution weather and climate mapping.94 Work is underway on robotic cockroaches95 and ‘spiders, snakes, dragonflies, and butterflies that can fly, crawl, and hop into caves, cracks, crevices, and behind enemy lines’.96 Researchers in the field of ‘soft robotics’ have developed the ‘Octobot’, a thumb-sized autonomous mollusc made from soft silicone rubber without any rigid structures in its body.97

Companionship is an increasingly important function of robots. Toyota’s palm-sized humanoid Kirobo Mini is designed to provoke an emotional response similar to that evoked by a human baby.98 ‘Paro’ is a cuddly interactive baby seal with ‘charming black eyes and luxurious eyelashes’. It appears to be beneficial for elderly people and those with dementia. Future models will monitor owners’ vital signs, sending alerts to human carers where necessary.99 Zenbo, which costs the same as a smartphone, is a cute two-wheeled robot with a round ‘head’, equipped with cameras and sensitive to touch. It can move independently, respond to voice commands, and display emotions on its screen-face.100

The potential uses for robots are limited only by our creativity. In 2016, Russian authorities ‘arrested’ the humanoid ‘Promobot’, which was canvassing attendees at a rally on behalf of a candidate for the Russian parliament. After failing to handcuff the offender, the police eventually managed to escort it from the premises. Promobot reportedly put up no resistance.101

Nanotechnology, the field for which the 2016 Nobel Prize in Chemistry was awarded, is another burgeoning area of research. It involves the construction of devices so small that they are measured in nanometres—a nanometre being one-billionth of a metre. The nanoscale is 1–100 nanometres in size. A red blood cell, by comparison, is 7,000 nanometres wide.102 The possibilities of nanotechnology are mind-boggling: nanobots can already ‘swim through our bodies, relaying images, delivering targeted drugs, and attacking particular cells with a precision that makes even the finest of surgeons’ blades look blunt’.103 There are nanobots that can release drugs in response to human thought, potentially enabling them to detect and prevent an attack of epilepsy at the precise moment it occurs. Another less salubrious application of the same technology would be to ‘keep you at the perfect pitch of drunkenness, activated on demand’.104 Nanotechnology also has implications for data storage. Researchers at Delft University in the Netherlands have created an ‘atomic hard drive’ capable of storing 500 terabits of information in a single square inch. Put another way, it could store the entire contents of the US Library of Congress in a cube measuring 0.1 mm each way.105

Another constitutive technology is 3D printing, also known as additive manufacturing. It enables us to print physical things from digital designs. Some think it could herald an era of ‘desktop manufacturing’ in which many people have 3D printers in their home or office and can ‘print’ a wide range of objects.106 Or municipal 3D printers could allow people to print what they need using open-source online digital templates.107 So far, some of the most useful 3D-printed objects have been in medicine. Printing splints for broken limbs is now relatively common,108 and customized replacement tracheae (windpipes) can now be printed in fifteen minutes.109 Surgeons have printed stents, prosthetics, and even bespoke replacement segments of human skull.110 Researchers at Cornell University have printed a human ear.111 Human kidneys, livers, and other organs, as well as blood vessels, are in development.112 A 3D-printed exoskeleton embedded with bionic technology has restored mobility to people unable to walk.113

Outside medicine, 3D printers have been used to make full-sized replica motorbikes,114 bikinis,115 aeroplane parts,116 entire houses,117 synthetic chemical compounds (i.e. drugs),118 and replicas of sixteenth-century sculptures.119 Food is an area of growth, with 3D-printed chocolate, candy, pizza, ravioli, and chickpea nuggets all ‘on the menu’.120 Eventually, it is predicted, a plethora of materials will be used as ingredients for 3D-printing, including plastics, aluminium, ceramic, stainless steel, and advanced alloys. Producing these materials used to be the work of an entire factory.121 ‘4D’ printing is also in the works—intended to create materials programmed to change shape or properties over time.122

Immersive

Humankind, wrote T. S. Eliot, ‘[c]annot bear very much reality’. In the future, we won’t have to. Technology will become radically more immersive, as a result of developments in augmented and virtual reality.

In the mid-twentieth century, the computer was ‘a room’ and if we wanted to work with it, we had to ‘walk inside’. Programming often meant ‘using a screwdriver’.123 Later, the ‘desktop’ became the primary interface between humans and computers, where information on a screen could be manipulated by means of a keyboard and, later, a mouse.124 As noted earlier, we’re now in the era of the ‘glass slab’.125

‘Augmented reality’ (AR) enhances our sensory experience of the physical world with computer-generated input, such as sound, graphics, or video. Smart glasses, still in their infancy, allow the wearer to experience digital images overlaid onto the physical world. They might show directions to the park, or assembly instructions for a new wardrobe. They might identify a wild bird or flower. They could even provide facts about the person the wearer is talking to—helpful for politicians expected to remember thousands of faces and names. In the realm of audio AR, Google has already developed earbuds said to be able to translate forty foreign languages in almost real time.126 The most celebrated early AR application is Snapchat’s Lenses, which allow selfie-takers to edit their portraits with animations and filters.

Another prominent (though slightly faddish) application is the smartphone game Pokémon Go, which overlays the real world with fantastical beasts to capture and train. Victory in Pokémon Go doesn’t come from lounging at a digital terminal; the player must seek glory in the real world of physical space. In one unfortunate incident, a Koffing (a spherical beast filled with noxious gases) was found to be roaming the Holocaust Museum in Washington, DC.127 The game has provoked some unreasonably strong feelings, with Saudi clerics declaring it ‘un-Islamic’ and a Cossack leader saying it ‘smacks of Satanism’.128 Democracy protesters have used Pokémon Go as a pretext for holding illegal meetings in Hong Kong, where, otherwise, gatherings must be legally registered and authorized.129 Holograms are another form of AR. A controversial gangster-rapper based in California ‘performed’ in Indiana via hologram. (The concert was shut down.) Protesters in Spain staged a hologrammatic virtual protest in a public space from which they had been banned.130

In time, more advanced AR will make it almost impossible to distinguish between reality and virtuality, even when both are being experienced simultaneously. The secretive startup Magic Leap is working on a ‘tiny 3D technology that can shine images on your retinas’ which ‘blends the real world with fantasy’.131

More profound still is the emergence of virtual reality (VR). When you put on a VR headset, you enter and experience a vivid three-dimensional world. Touch controllers bring your hands in with you as well.132 ‘Haptic’ clothing gives you tactile feedback through tiny vibrating motors spread over your body—you really don’t want to get stabbed or shot.133 Once inside, you’re free to see, feel, explore, and interact with a new dimension of existence. While AR technologies operate within the real world, VR technologies create an entirely new one. Technology giants including Facebook (Oculus Rift), Microsoft (HoloLens), Samsung (Gear VR), Google (Daydream), and Sony (PlayStation VR) are already in fierce competition to develop the best VR hardware.

Virtual reality feels remarkably real. After a few moments of adjustment, even resistance, users’ senses begin to adapt to the new universe around them. As time passes, disbelief is suspended and sensory memory of the ‘real’ world as something separate begins to fade. I can affirm, from first-hand experience, that even a simple racing game can stimulate real feelings of exhilaration and fear. When I tested an early VR racing system, my ‘car’ spun off the track and hurtled toward a steel barricade. Momentarily, I believed I was about to die. (For what it’s worth, my life did not flash before my eyes. In fact, the whole episode was rather less dramatic than I might have expected for a final reckoning.) Reporters have spoken of what it is like to be sexually assaulted in VR: even though the groping is not ‘real’ in the physical sense, it can cause lasting feelings of shock and violation.134

While much of today’s focus on VR is on gaming, in due course VR will be used to experience a great deal of life. Workers will attend virtual meetings, shoppers will peruse virtual supermarkets, sports fans will frequent virtual stadiums, artists will create in virtual studios, political philosophers will pontificate in virtual cafés, historians will wander virtual battlefields, socialites will hang out in virtual bars, and punters will seek out virtual brothels. Importantly, the experience in each case will not be limited by the constraints of the ‘real’ world—VR can generate entirely new worlds where ordinary rules (whether law, norms, or even the rules of physics) do not apply. Imagine being, in VR, an astronaut ferociously battling alien craft, an antelope galloping across the Serengeti, or a squid swimming through the deep. This isn’t about making the real virtual; it’s about making the virtual seem real. Eventually we may live in a mixed reality, where VR and AR are so advanced that the digital and physical become indistinguishable. In such a world it will be hard, and possibly futile, to discern where ‘technology’ begins and ends.

This process is overseen by so-called ‘miners’, generally paid in cryptocurrency for their efforts, who turn the information in the latest ‘block’, together with some other information, into a short, seemingly random sequence of letters and numbers known as a ‘hash’. Each hash is unique: if just one character in a block is changed, its hash will change completely. As well as the information in the latest block, miners also incorporate the hash of the previous block. Because each block’s hash incorporates the hash of the block before it, the chain is very hard to tamper with: altering any earlier block would mean rewriting not just that block but every block that has been added since.
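The tamper-evidence described in this note rests on a property of cryptographic hash functions that is easy to demonstrate: change a single character of the input and the output changes beyond recognition. The short Python sketch below, using an invented transaction string, shows the effect with the standard hashlib module.

    import hashlib

    original = 'alice pays bob 5 coins'
    tampered = 'alice pays bob 6 coins'   # a single character changed

    print(hashlib.sha256(original.encode()).hexdigest())
    print(hashlib.sha256(tampered.encode()).hexdigest())
    # The two digests bear no resemblance to each other, so any alteration to a
    # block's contents (or to the hash it inherits from its predecessor) is
    # immediately detectable.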