He had not a minute more to lose. He pulled the axe quite out, swung it with both arms, scarcely conscious of himself, and almost without effort, almost mechanically, brought the blunt side down on her head. He seemed not to use his own strength in this. But as soon as he had once brought the axe down, his strength returned to him…. Then he dealt her another and another blow with the blunt side and on the same spot. The blood gushed as from an overturned glass, the body fell back. He stepped back, let it fall, and at once bent over her face; she was dead.1
Fyodor Dostoyevsky, Crime and Punishment
Our immediate reaction is emotional: anger, horror, disgust. And then reason sets in. A crime has been committed. A punishment must follow.
Now imagine the perpetrator is not a human, but a robot. Does your response change? What if the victim is another robot? How should society, and the legal system, react?
For millennia, laws have ordered society, kept people safe and promoted commerce and prosperity. But until now, laws have only had one subject: humans. The rise of artificial intelligence (AI) presents novel issues for which current legal systems are only partially equipped. Who or what should be liable if an intelligent machine harms a person or property? Is it ever wrong to damage or destroy a robot? Can AI be made to follow any moral rules?
Perhaps the best-known attempt at such rules comes from science fiction: Isaac Asimov's "Laws of Robotics".
First: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
Third: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Fourth: A robot may not harm humanity or, by inaction, allow humanity to come to harm.2
But Asimov’s rules were never meant to serve as a blueprint for humanity’s actual interaction with AI. Far from it, they were written as science fiction and were always intended to lead to problems. Asimov himself said: “These laws are sufficiently ambiguous so that I can write story after story in which something strange happens, in which the robots don’t behave properly, in which the robots become positively dangerous”.3 Although they are simple and superficially attractive, it is easy to conceive of situations in which Asimov’s Laws are inadequate. They do not say what a robot should do if it is given contradictory orders by different humans. Nor do they account for orders which are iniquitous but fall short of requiring a robot to harm humans, such as commanding a robot to steal. They are hardly a complete code for managing our relationship with AI.
This book provides a roadmap for a new set of regulations, asking not just what the rules should be but—more importantly—who should shape them and how they can be upheld.
There is much fear and confusion surrounding AI and other developments in computing. A lot has already been written on near-term problems including data privacy and technological unemployment.4 Many writers have also speculated about events in the distant future, such as an AI apocalypse at one extreme,5 or a time when AI will bring a new age of peace and prosperity, at the other.6 All these matters are important, but they are not the focus of this book. The discussion here is not about robots taking our jobs, or taking over the world. Our aim is to set out how humanity and AI can coexist.
1 Origins of AI
Modern AI research began with a summer programme at Dartmouth College, New Hampshire, in 1956, when a group of academics and students set out to explore how machines could be made to think intelligently.7 However, the idea of AI goes back much further.8 The creation of intelligent beings from inanimate materials can be traced to the very earliest stories known to humanity. Ancient Sumerian creation myths speak of a servant for the Gods being created from clay and blood.9 In Chinese mythology, the Goddess Nüwa made mankind from the yellow earth.10 The Judeo-Christian Bible and the Quran have words to similar effect: “And the Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul”.11 In one sense, humans were really the first AI.
In literature and the arts, the idea of technology being used to create sentient assistants for humans or Gods has been around for thousands of years. In Homer’s Iliad, which dates to around the eighth century BC, Hephaestus the blacksmith is “assisted by servant maids that he had made from gold to look like women”.12 In Eastern European Jewish folklore, there are tales of a rabbi in sixteenth century Prague who created the Golem, a giant human-like figure made from clay, in order to defend his ghetto from anti-Semitic pogroms.13 In the nineteenth century, Frankenstein’s monster brought to the popular imagination the dangers of humans attempting to create, or recreate, intelligence through science and technology. In the twentieth century, ever since the term “robot” was popularised by Karel Čapek’s play Rossum’s Universal Robots,14 there have been many examples of AI in films, television and other media forms. But now for the first time in human history, these concepts are no longer limited to the pages of books or the imagination of storytellers.
Today, many of our impressions of AI come from science fiction and involve anthropomorphic manifestations that are either friendly or, more usually, unfriendly. These might include the bumbling C-3PO from Star Wars, Arnold Schwarzenegger’s noble Terminator or the demonic HAL from 2001: A Space Odyssey.
On the one hand, these humanoid representations of AI constitute a simplified caricature—something to which people can easily relate, but which bears little resemblance to AI technology as it stands. On the other hand, they represent a paradigm which has influenced and shaped AI as successive generations of programmers are inspired to attempt to recreate versions of entities from books, films and other media. In the field of AI, first science then life imitates art. In 2017, Neuralink, a company backed by serial technology entrepreneur Elon Musk, announced that it was developing a “neural lace” interface between human brain tissue and artificial processors.15 Neural lace is—by Musk’s own admission—heavily influenced by the writings of science fiction authors, in particular the Culture novels of Iain M. Banks.16 Technologists have taken inspiration from stories found in faith as well as popular culture: Robert M. Geraci argues that, “[t]o understand robots, we must understand how the history of religion and the history of science have twined around each other, quite often working towards the same ends and quite often influencing another’s methods and objectives”.17
Although popular culture and religion have helped to shape the development of AI, these portrayals have also given rise to a misleading impression of AI in the minds of many people. The idea of AI as meaning only humanoid robots which look, sound and think like us is mistaken. Such conceptions of AI make its advent appear to be distant, given that no technology at present comes remotely close to resembling the type of human-level functionality made familiar by science fiction.
The lack of a universal definition for AI means that those attempting to discuss it may end up speaking at cross-purposes. Therefore, before it is possible to demonstrate the spreading influence of AI or the need for legal controls, we must first set out what we mean by this term.
2 Narrow and General AI
It is helpful at the outset to distinguish two classifications for AI: narrow and general.18 Narrow (sometimes referred to as “weak”) AI denotes the ability of a system to achieve a certain stipulated goal or set of goals, in a manner or using techniques which qualify as intelligent (the meaning of “intelligence” is addressed below). These limited goals might include natural language processing functions like translation, or navigating through an unfamiliar physical environment. A narrow AI system is suited only to the task for which it is designed. The great majority of AI systems in the world today are closer to this narrow and limited type.
General (or “strong”) AI is the ability to achieve an unlimited range of goals, and even to set new goals independently, including in situations of uncertainty or vagueness. This encompasses many of the attributes we think of as intelligence in humans. Indeed, general AI is what we see portrayed in the robots and AI of popular culture discussed above. As yet, general AI approaching the level of human capabilities does not exist and some have even cast doubt on whether it is possible.19
Narrow and general AI are not hermetically sealed from each other. They represent different points on a continuum. As AI becomes more advanced, it will move further away from the narrow paradigm and closer to the general one.20 This trend may be hastened as AI systems learn to upgrade themselves21 and acquire greater capabilities than those with which they were originally programmed.22
3 Defining AI
The word “artificial” is relatively uncontroversial. It means something synthetic, which does not occur in nature. The key difficulty is with the word “intelligence”, which can describe a range of attributes or abilities. As computer science expert and futurist Jerry Kaplan says, the question “what is artificial intelligence?” is an “easy question to ask and a hard one to answer” because “there’s little agreement about what intelligence is”.23
Curiously, the lack of a precise, universally accepted definition of AI probably has helped the field to grow, blossom, and advance at an ever-accelerating pace. Practitioners, researchers, and developers of AI are instead guided by a rough sense of direction and an imperative to “get on with it”.24
Defining AI can resemble chasing the horizon: as soon as you get to where it was, it has moved somewhere into the distance. In the same way, many have observed that AI is the name we give to technological processes which we do not understand.25 When we have familiarised ourselves with a process, it stops being called AI and becomes just another clever computer programme. This phenomenon is known as the “AI effect”.26
Rather than asking “what is AI?”, it is better to start with the question: “why do we need to define AI at all?” Many books are written on energy, medicine and other general concepts which do not start with a chapter on the definition of these terms.27 In fact, we go through life with a functional understanding of many abstract notions and ideas without necessarily being able to describe them perfectly. Time, irony and happiness are just a few examples of concepts that most people understand but would find difficult to define. Justice Potter Stewart of the US Supreme Court once said that he could not define hardcore pornography: “But I know it when I see it”.28
However, when considering how to regulate AI, it is not sufficient to follow Justice Stewart. In order for a legal system to function effectively, its subjects must be able to understand the ambit and application of its rules. To this end, legal theorist Lon L. Fuller set out eight formal requirements for a system of law to satisfy certain basic moral norms—principally that humans have an opportunity to engage with them and shape their behaviour accordingly. Fuller’s desiderata include requirements that law should be promulgated so that citizens know the standards to which they are being held, and that laws should be understandable.29 To pass Fuller’s tests, legal systems must use specific and workable definitions when describing the conduct and phenomena which are subject to regulation. As Fuller says: “We need to share the anguish of the weary legislative draftsman who at 2:00 a.m. says to himself ‘I know this has got to be right, and if it isn’t people may be hauled into Court for things we don’t mean to cover at all. But for how long must I go on rewriting it?’”.30
In short, people cannot choose to comply with rules they do not understand. If the law is impossible to know in advance, then its role in guiding action is diminished if not destroyed. Unknown laws become little more than tools of the powerful. They can lead ultimately to the absurd and frightening scenario imagined in Kafka’s The Trial, where the protagonist is accused, condemned and ultimately executed for a crime which is never explained to him.31
Most of the universal definitions of AI that have been suggested to date fall into one of two categories: human-centric and rationalist.32
3.1 Human-Centric Definitions
Humanity has named itself homo sapiens: “wise man”. It is therefore perhaps unsurprising that some of the first attempts at defining intelligence in other entities referred to human characteristics. The most famous example of a human-centric definition of AI is known popularly as the “Turing Test”.
In a seminal 1950 paper, Alan Turing asked whether machines could think. He suggested an experiment called the “Imitation Game”.33 In the original exercise, a human invigilator exchanges written questions and answers with two hidden players, a man and a woman, and must try to identify which of the two is the man pretending to be a woman. Turing proposed a version of the game in which the AI machine takes the place of the man. If the machine is able to succeed in persuading the invigilator not only that it is human but also that it is the female player, then it has demonstrated intelligence.34 Modern versions of the Imitation Game simplify the task by asking a computer program as well as several human blind control subjects to each hold a five-minute typed conversation with a panel of human judges in a different room. The judges have to decide whether or not the entity with which they are corresponding is a human; if the computer can fool a sufficient proportion of them (a popular competition sets this at just 30%), then it has won.35
A major problem with Turing’s Imitation Game is that it tests only the ability to mimic a human in typed conversation; skilful impersonation does not equate to intelligence.36 Indeed, in some of the more “successful” attempts at the Imitation Game, the programmers prevailed by creating a program which exhibited frailties we tend to associate with humans, such as spelling errors.37 Another tactic favoured by programmers in modern Turing tests is to use stock humorous responses so as to deflect attention away from their program’s lack of substantive answers to the judges’ questions.38
To avoid the deficiencies in Turing’s test, others have suggested definitions of intelligence which do not rely on the replication of one aspect of human behaviour or thought and are instead parasitic on society’s vague and shifting notion of what makes humans intelligent. Definitions of this type are often variants of the following: “AI is technology with the ability to perform tasks that would otherwise require human intelligence”.39
The inventor of the term AI, John McCarthy, has said that there is not yet “a solid definition of intelligence that doesn’t depend on relating it to human intelligence”.40 Similarly, futurist Ray Kurzweil wrote in 1992 that the most durable definition of AI is “[t]he art of creating machines that perform functions that require intelligence when performed by people”.41 The main problem with parasitic tests is that they are circular. Kurzweil admitted that his own definition, “… does not say a great deal beyond the words ‘artificial intelligence’”.42
In 2011, Nevada adopted the following human-centric definition for the purpose of legislation regulating self-driving cars: “the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings”.43 The definition was repealed in 2013 and replaced with a more detailed definition of “autonomous vehicle”, which was not tied to human actions at all.44
Although it is no longer on the statute books, Nevada’s 2011 law remains an instructive example of why human-centric definitions of intelligence are flawed. Like many human-centric approaches, this was both over- and under-inclusive. It was over-inclusive because humans do many things which are not “intelligent”. These include getting bored, tired or frustrated, as well as making mistakes such as forgetting to indicate when changing lanes. Furthermore, many cars already have non-AI features which could fall within this definition. For instance, automatic headlights which turn on at night would be mimicking the behaviour of a human being turning the lights on manually, but the behaviour would have been triggered by nothing more complex or mysterious than a light sensor coupled to a simple logic gate.45
The 2011 Nevada definition was also under-inclusive because there are various emergent qualities that computer programs can display which go well beyond human capabilities. The manner in which humans solve problems is limited by the hardware available to us: our brains. AI has no such limits. DeepMind’s AlphaZero program achieved superhuman capabilities in chess, Go and other board games. DeepMind CEO Demis Hassabis explained: “It doesn’t play like a human, and it doesn’t play like a program, it plays in a third, almost alien, way”.46 At a sufficient point of advancement, it will no longer be accurate to describe AI as duplicating or mimicking the behaviour of humans—it will have surpassed us.
3.2 Rationalist Definitions
More recent AI definitions avoid the link to humanity by focussing on thinking or acting rationally. To think rationally means that an AI system has goals and reasons towards these goals. To act rationally is for an AI system to perform in a manner that can be described as goal-directed.47 In this vein, Nils J. Nilsson says intelligence is “that quality that enables an entity to function appropriately and with foresight in its environment”.48
Although rationalist definitions are suitable to describe narrow AI systems which have a known set of functions or aims, later developments may come to pose problems.49 This is because rationalist definitions of AI are often premised, whether implicitly or explicitly, on the existence of external goals for the AI. The difficulty which may arise when applying such definitions to more advanced, general AI is that it is unlikely to have static goals by which its behaviour or computational processes can be assessed. Indeed, the existence of static goals is arguably anathema to the idea of all-purpose AI. Unsupervised machine learning by its nature does not have a set goal, except perhaps at a high level of abstraction—for instance to “sort data and recognise patterns”.50 The same can be said of AI systems which are capable of rewriting their own source code. Thus, whilst rationalist definitions of intelligence are adopted now by many in the AI community, they may not be appropriate to tomorrow’s technology.
Another type of rationalist definition for AI focusses on “doing the right thing at the right time”.51 This too is flawed. Having the quality of intelligence is not the same as selecting the option which is deemed the most intelligent in any given situation. First, it is likely to be impossible to know what the “right thing” is without (a) possessing an infallible moral system, which does not exist, and (b) having a perfect knowledge of the outcomes of a given action. Just as humans can be intelligent but also fallible, an entity which possesses the quality of AI may not always select the best outcome (whatever “best” might mean). Indeed, if AI were automatically imbued with an ability always to do the right thing, then there would be little need to regulate it.
Secondly, a test which relies on an entity doing the right thing at the right time tends to anthropomorphise the program or entity in question, by imposing human volitions and motivations onto it. This leads to the results of that test being over-inclusive. As the leading AI textbook authors Stuart Russell and Peter Norvig point out, a clock which is designed to update its time when its wearer changes time zone would be displaying “successful behaviour” (or doing the right thing), but nonetheless it seems to fall somewhat short of true intelligence. Russell and Norvig explain: “…the intelligence in question belongs to the clock’s designer, rather than to the clock itself”.52
3.3 The Sceptics
Sceptics doubt the possibility of a universal definition for intelligence. Robert Sternberg, a psychologist, is reported to have said “there seem[ed] to be almost as many definitions of intelligence as there were experts asked to define it”.53 Edwin G. Boring, another psychologist, wrote “[i]ntelligence is what is measured by intelligence tests”.54 At first glance, Sternberg and Boring’s points may seem glib. In fact, they contain important insights. Boring shows that the quality of intelligence can differ depending on what the person seeking to define it, or setting the test, is looking for. Sternberg made a similar observation: different experts look for different things, meaning that it is of little use comparing their tests side by side.
3.4 Our Definition
Unlike most of the examples above, this book does not seek to lay down a universal, all-purpose definition of AI which can be applied in any context. Its aim is much less ambitious: to arrive at a definition which is suited to the legal regulation of AI. One of the main principles of legal interpretation is to find out the purpose of the speaker.55 Our purpose is to regulate AI. In order to regulate AI, we must therefore ask: what is the unique factor of AI that needs regulation?
In this book, intelligence is used to refer to the ability to make choices. It is the nature of these choices—and their effect on the world—which is our key concern. Our definition of AI is therefore as follows:
Artificial Intelligence Is the Ability of a Non-natural Entity to Make Choices by an Evaluative Process
We will use the term “robot” to refer to a physical entity or system which uses AI. Although the word robot is frequently used to describe any type of automation of a process by a machine, here we add an extra requirement that the action is carried out by an entity using AI.56
As to the “artificial” part of the definition, “non-natural” is preferable to “man-made” because of the propensity of AI to design and create other AI. At some point, mankind may drop out of the picture. This is one of the emergent features of AI which means that it requires novel legal treatment, in that the chain of causation between AI and its original human “creator” can no longer be sustained.57
It is implicit in the definition’s reference to making choices that such decisions be autonomous: self-governing.58 Autonomy (from the Greek auto: self, and nomos: law) is different from automation, where a process is repeated by a machine. Autonomy does not require that AI instigates its own functioning; it can make an autonomous choice even if it has interacted with a human in taking that decision. For instance, if a human types a query into a search engine, she has clearly had a causal impact on the AI functioning, and indeed, the AI might take into account her preferences in returning search engine results (based on her past searches, as well as many other variables such as her age or location). But ultimately the choice of what results are displayed remains that of the search engine.59
Turning to the final aspect of this book’s definition, an “evaluative process” is one where principles are weighed against each other before a conclusion is reached. Principles can be contrasted with rules. Rules are applicable in an all-or-nothing fashion.60 When a valid rule applies in a given case, it is conclusive. If two rules conflict, then one of them cannot be a valid rule. Principles give justificatory support to various courses of actions, but they are not necessarily conclusive. Unlike rules, principles have “weight”. When valid principles conflict, the proper method for resolving the conflict is to select the position that is supported by the principles that have the greatest aggregate weight.61
To illustrate the difference between systems involving principles (requiring evaluation) and rules (which do not), it is necessary to describe in very brief terms two types of technologies which have traditionally been described as intelligent.
In “symbolic AI”, sometimes known as “Good Old Fashioned AI”,62 programs consist of logical decision trees (in the format: if X, then Y).63 The decision trees are a set of rules or instructions as to what to do with a given input. Complex examples are known as “expert systems”. When programmed with a set of rules, expert systems use deductive reasoning to follow the decision tree through a series of yes or no answers so as to arrive at a predetermined final output.64 The decision-making process is deterministic, meaning that each step can in theory be traced back to decisions made by a programmer no matter how numerous the stages.
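To make this concrete, the following is a minimal sketch in Python of a symbolic, rule-based system of the kind just described. The loan-approval scenario and its rules are invented purely for illustration; the point is that every branch is a fixed "if X, then Y" instruction written by a programmer, so each output can be traced back to a human decision.

```python
# Illustrative sketch of a symbolic ("Good Old Fashioned AI") expert system.
# The loan-approval scenario and its rules are hypothetical examples only.

def loan_decision(applicant: dict) -> str:
    # Each branch is a conclusive rule: once it applies, the outcome is fixed.
    if applicant["age"] < 18:
        return "reject"        # Rule 1: applications from minors are always rejected
    if applicant["income"] >= 30_000:
        return "approve"       # Rule 2: sufficient income leads to approval
    if applicant["has_guarantor"]:
        return "approve"       # Rule 3: a guarantor compensates for low income
    return "reject"            # Default rule if nothing else applies


print(loan_decision({"age": 25, "income": 20_000, "has_guarantor": True}))
# -> "approve": the result follows mechanically and deterministically from the rule tree.
```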
Artificial neural networks are computer systems made up of a large number of interconnected units, each of which can usually compute only one thing.65 Whereas the logic of a conventional program is fixed in advance by its designer, artificial neural networks use “weights” in order to determine the connectivity between inputs and outputs.66 Artificial neural networks can be designed to alter themselves by changing the weights on the connections, making activity in one unit more or less likely to excite activity in another unit.67 In “machine learning” systems, the weights can be re-calibrated by the system over time—often using a process called backpropagation—in order to optimise outcomes.68
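By way of comparison, here is an equally minimal sketch (again in Python, with made-up training data) of the weight re-calibration described above. It shows a single artificial neuron learning by gradient descent; real networks stack many such units and propagate error gradients backwards through every layer, but the principle that the system itself adjusts its weights, rather than following a hand-written decision tree, is the same.

```python
import math
import random

# Minimal sketch of weight re-calibration in a single artificial neuron.
# The training data (the logical OR function) are chosen purely for illustration.

random.seed(0)
data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # start with arbitrary weights
bias = 0.0
rate = 0.5  # learning rate: how far each weight is nudged per example

def predict(x):
    # Weighted sum of the inputs, squashed into (0, 1) by a sigmoid activation
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

for _ in range(2000):                      # training loop
    for x, target in data:
        out = predict(x)
        error = out - target               # how wrong was this prediction?
        grad = error * out * (1 - out)     # gradient of the squared error
        for i, xi in enumerate(x):         # nudge each weight against the gradient
            weights[i] -= rate * grad * xi
        bias -= rate * grad

print([round(predict(x), 2) for x, _ in data])  # outputs approach [0, 1, 1, 1]
```

The crucial difference from the rule-based sketch above is that no programmer specifies the final values of the weights: they emerge from the system's own adjustment in response to the training examples.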
Broadly, symbolic programs are not AI under this book’s functional definition, whereas neural networks and machine learning systems are AI.69 Like Russell and Norvig’s clock, any intelligence reflected in a symbolic system is that of the programmer and not the system itself.70 By contrast, the independent ability of neural networks to determine weights between connections is an evaluative function characteristic of intelligence.
Neural networks and machine learning are techniques which fall within this book’s definition of AI, but they are not the only technologies capable of doing so. This book’s definition of AI is intended to cover neural networks but also to be sufficiently flexible to encompass other technologies which may become more prevalent in the future—one example being whole brain emulation (the science of attempting to map and then reproduce the entire structure of an animal brain).
This functional definition may be under-inclusive from the perspective of those seeking a universal measure of intelligence. Unlike most other definitions, it does not attempt to encompass all the technologies which have traditionally been described as “intelligent”. However, as noted above, the intention is only to cover those aspects of technology which are salient from a legal perspective. Chapter 2 will discuss features of AI as defined in this book which make it unique as a phenomenon; expert systems would not meet this threshold.
In addition, the functional definition could also be seen as over-inclusive. Although there are debates as to whether general intelligence must include features such as imagination, emotions or consciousness, these capabilities are not relevant to the majority of aspects of AI which need to be regulated.71 Regulation is needed where AI has an impact on the world, and it can do so even without these additional features.72
The functional definition does not offer a simple “yes or no” answer as to whether any given piece of technology has AI or not. However, it is common for there to be some uncertainty at the outer boundaries of any legislative ordinance. This is the result of the inherently imprecise nature of language.73 For instance, a sign might stipulate “no vehicles are allowed in the park”.74 Most would agree that this prohibits cars and motorbikes, but it is unclear from the wording alone whether skateboards, bicycles or wheelchairs are also banned.75 Legislators can seek to avoid uncertainty by setting out a list of what is and is not allowed. The difficulty with using lists is that they ossify the law, and may be difficult to update or to apply to situations which were not contemplated at the time the list was drafted. The highly technical and fast-developing nature of AI renders the list-based approach unsuitable as a workable mechanism.
An alternative approach (and the one suggested here) is to set a core definition which captures the essence of a term, without delimiting its precise boundaries.76 Often the task of applying ambiguous legislation falls in the first instance to regulatory agencies, for example a park warden, and then in the second instance to a judge (if the decision of a warden to issue a fine is challenged).
As AI advances, its boundaries may well—at least using this book’s definition—become less difficult to draw. AI experts might point out that even deep learning systems, which involve multiple layers of neural networks, are far from being independent of human input and are instead constantly monitored and nudged by humans. However, it is suggested here that the further AI improves in terms of capability and the more it is deployed for use by non-experts, the more such human input is liable to decrease. The more remote the actual decision-making procedure becomes from the original designer, the clearer it will be that the entity is making choices.
José Hernández-Orallo has proposed a universal test of intelligence, capable of covering the entire “machine world”, which includes not just artificial entities but also animals, humans and any hybrids of these groups.77 Hernández-Orallo focusses on computational principles for the measurement of intelligence, which are capable of scoring an entity as to the degree of its intelligence. Relevant features include “compositionality”, namely the capability of a system of building new concepts and skills over previous ones.78 If AI does need to be regulated separately from merely automated machines and programs, then tests such as that proposed by Hernández-Orallo could become very significant in assisting authorities to delineate questions at the boundary of what is and is not intelligent, as well as in tracking the progress of the field through advances in AI capabilities.
4 AI, AI Everywhere
Armed with a definition of AI, it is now possible to identify its current uses and growing prevalence.
It might be objected that some of the examples of AI suggested below do not fulfil our functional definition. It is indeed true that certain of the outcomes could be achieved without using AI, either because the entities use deterministic rules or because humans are actually making the choices. This could be called the “Mechanical Turk” objection, after the chess-playing machine which astounded audiences in the late eighteenth and early nineteenth centuries. As the name suggests, it resembled a turban-wearing “Turkish” man, sitting at a desk. The Turk’s designer, Baron von Kempelen, claimed that it was able to use a mysterious form of mechanical intelligence to defeat opponents at chess. In fact, the Turk was merely a complex illusion. The Turk’s desk concealed a chamber in which a human chess player sat, directing the mechanical arms to move pieces.79 As with the Turk, in order to determine whether a process or a program uses real AI according to our definition, it is necessary to check under the bonnet and ascertain exactly how a decision is taken. More important than the outcome is how that outcome was reached.
The founding members of the Dartmouth College summer school expressed a desire to “find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”.80 Over 60 years later, we interact with such machines on a daily basis. The smartphone is an instructive example. The Pew Research Centre calculated in 2016 that 68% of adults in the world’s 11 most advanced economies owned a smartphone, a device which provides instant access to the power of both the Internet and machine learning.81 Smartphone applications (or “apps”) including music library recommendations based on past listening history, as well as predictive text suggestions for messaging, are all potentially examples of AI. The complex algorithms behind search engines improve themselves based on our searches and reaction to the results. Every time we use a search engine, that search engine is using us.82
Virtual Personal Assistants including Apple’s Siri, the Google Assistant, Amazon’s Alexa and Microsoft’s Cortana are now commonplace. This trend is connected to the growth of the “Internet of Things”, where household devices are connected to the Internet.83 Whether it is a fridge which learns when you need eggs and orders them for you, or a hoover which can tell which parts of your floor need the most cleaning, AI is coming to fulfil the roles once played by domestic servants.84
The uses of AI as an aid to, or even as a replacement for, human judgement and decision-making range from the trivial—selecting which song to play next—to the highly consequential. For instance, in early 2017, a UK police force announced it was piloting a program called the Harm Assessment Risk Tool to determine whether a suspect should be kept in custody or released on bail, based on various data.85
Self-driving cars are among the most well-known examples of AI. Advanced prototypes are now being tested on our roads by technology companies like Google and Uber, as well as by car makers such as Tesla and Toyota.86 AI has also caused its first fatalities: in 2016, a Tesla Model S driving on autopilot crashed into a truck, killing its occupant87; and in 2018, an Uber test car in autonomous mode hit and killed a woman in Arizona.88 They will not be the last.
From AI which kills accidentally to AI which kills deliberately: several militaries are developing semi-autonomous and even fully autonomous weapons systems. In the skies, AI drones are able to identify, track and potentially kill targets without the need for human input. A 2016 report of the US Department of Defense research division explored the potential for AI to become a cornerstone of US defense policy.89 A 2017 Chatham House Report concluded that militaries around the world were developing AI weapons capabilities “that could make them capable of undertaking tasks and missions on their own”.90 Allowing AI to kill targets without human intervention remains one of its most controversial potential uses. At the time of writing, the most lethal known use of autonomous ground-based weapons occurred in a friendly fire incident, when a South African artillery cannon malfunctioned and killed nine soldiers.91 It is unlikely to be long before enemies too are in the crosshairs.
Robots can care as well as kill. Increasingly sophisticated AI systems are being used to provide physical and emotional support to older people in Israel and Japan,92 a trend which is likely to grow, both in those countries and elsewhere as the richer world continues to adapt to ageing populations. AI is also being used in medicine as an aid to clinical decision-making. Other systems under development and in operation allow for diagnosis and treatment to be fully automated.93
In commerce, the US Congressional Research Service estimates that algorithmic programs account for roughly 55% of trading volume in the US equities market and around 40% of European equities markets.94 Under our definition, most algorithmic trading does not involve the use of AI as yet. However, AI’s capability of taking complex strategic decisions in a manner which surpasses human reasoning seems likely to make it particularly well suited to this task.95
Even the creative industries are taking advantage of AI. Music composition programs were among the first examples of this development.96 In 1997, the New Scientist reported that a computer in California had written Mozart’s 42nd Symphony, a feat not even Mozart himself could manage.97 A program called Mubert is able to compose entirely new tracks which, its creators say, are “based on the laws of musical theory, mathematics and creative experience”.98 In 2016, a director and a New York University AI researcher collaborated to create an AI system which wrote a new science fiction film script, after being “fed” dozens of successful scripts. The neural network highlighted the recurrent themes and produced a new work: Sunspring. The Guardian described it as “a weirdly entertaining, strangely moving dark sci-fi story of love and despair”.99
AI is now creating works of semi-abstract art. One of the most famous examples is Google’s DeepDream, a neural net which scans millions of images and can generate hybrid creations on demand.100 In early 2017, the Chinese company Tencent reported that it had successfully used deep learning techniques to identify fashion trends among millennials. Apparently, China’s post-1995 generation is particularly fond of “light black”.101
Even more ethically challenging uses of AI are in development or use. These include robots designed to satisfy human sexual desires (sexbots),102 as well as the potential for humans to physically augment themselves with AI capabilities, giving rise to hybrids or cyborgs.103
From this brief and by no means exhaustive survey of its impact, it is clear that AI is already in our homes, workplaces, hospitals, roads, cities and skies. The Dartmouth College group’s original funding proposal suggested that AI could “solve the kinds of problems now reserved for humans…if a carefully selected group of scientists work on it together for a summer”.104 The initial estimate may have been somewhat optimistic, but the scale of humanity’s achievements in AI in the past 60 years compared to the previous 200,000 of homo sapiens’ existence suggests that the Dartmouth group’s guess was not as wild as it may have seemed.
5 Superintelligence
In 1965, mathematician and former Second World War code-breaker I.J. Good predicted that “…an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind”.105 This remains the operating assumption of some AI experts today. In his influential book Superintelligence, Nick Bostrom describes the consequences of the AI explosion in dramatic terms, explaining that in some models it could be a matter of days between the development of the initial “seed” superintelligence and its spawn becoming so powerful that no human-controlled force is able to reassert control: “Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth”.106
The advent of fully general AI is associated by many writers with a phenomenon some have predicted, known as “the singularity”.107 This term is usually used to describe the point at which AI matches and then surpasses human intelligence. However, the conception of the singularity as a single discernible moment is unlikely to be accurate. Like the move from weak AI to general AI, the singularity is best seen as a process rather than a single event. There is no reason to think AI will match every human capability at once. Indeed, in many fields (such as the ability to undertake complex calculations), AI is already well ahead of humans, whereas in others such as the ability to recognise human emotions, it lags behind.
Proponents of superintelligence argue that AI has repeatedly surpassed expectations in recent years. In the mid to late twentieth century, many thought that a computer could never defeat a human Grandmaster at chess.108 Then, in 1997, IBM’s Deep Blue defeated former world champion109 Garry Kasparov in a best of six match. In the early 2000s, many thought that a computer could never defeat a human champion at Go, a vastly more complex board game popular in Asia. In fact, as late as 2013 Bostrom wrote “Go-playing programs have been improving at a rate of about 1 dan [a level of accomplishment]/year in recent years. If this rate of improvement continues, they might beat the human world champion in about a decade”.110 Just three years later, in March 2016, DeepMind’s AlphaGo defeated champion player Lee Sedol by four games to one—with the human champion even resigning in the final game, having been tactically and emotionally crushed.111 The killer move by AlphaGo was successful precisely because it used tactics which went against all traditional human schools of thought.112 Of course, winning board games is one thing but taking over the world is quite another.
The quality of intelligence to improve itself is separate from its capacity to solve other problems. Though humans have displayed general intelligence for hundreds of thousands of years, we have not yet managed to design programs with superior general intelligence to our own. We cannot be sure that AI technology will not meet a similar plateau, even after it achieves a form of general intelligence.113
Notwithstanding these limitations, in recent years there have been several significant developments in the capabilities of AI. In January 2017, Google Brain announced that technicians had created AI software which could itself develop further AI software.114 Similar announcements were made around this time by the research group OpenAI,115 MIT,116 the University of California, Berkeley and DeepMind.117 And these are only the ones we know about—companies, governments and even some independent individual AI engineers are likely to be working on processes which go far beyond what has yet been made public.
6 Optimists, Pessimists and Pragmatists
Commentators on the future of AI can be grouped into three camps: the optimists, the pessimists and the pragmatists.118
The optimists emphasise the benefits of AI and downplay any dangers. Ray Kurzweil has argued “… we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defence. Technology has always been a double edged sword, since fire kept us warm but also burned down our villages”.119 Similarly, engineer and roboethicist Alan Winfield said in a 2014 article: “If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable”.120 Fundamentally, optimists think humanity can and will overcome any challenges AI poses.
The pessimists include Nick Bostrom, whose “paperclip machine” thought experiment imagines an AI system that, asked to make paperclips, decides to seize and consume all resources in existence in its blind adherence to that goal.121 Bostrom contemplates a form of superintelligence which is so powerful that humanity has no chance of stopping it from destroying the entire universe. Likewise, Elon Musk has said we risk “summoning a demon” and called AI “our biggest existential threat”.122
The pragmatists occupy the middle ground. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty is not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
Combining optimism and pessimism, Stephen Hawking said that AI will be “either the best, or the worst thing, ever to happen to humanity”.124
The most prominent futurists tend to concentrate on the long-term impact of potential superintelligence, which may still be decades away. By contrast, many legislators concentrate on the extreme short term, or even the past. Often the time lag between the development of a new technology and its regulation means that the law takes several years to catch up. Overzealous regulation of technology can seem absurd in retrospect. We do not want to be in the position of the first automobile drivers in the nineteenth century, who were required to drive at no more than two miles per hour in cities and to employ someone to walk in front of their vehicle waving a red flag.125
Technology is not always adopted uncritically: progress for the majority can often conflict with vested interests. In the early nineteenth century, the “Luddites”—aggrieved agricultural workers supposedly led by Ned Ludd—rioted for several years, destroying mechanised power looms which threatened their employment.126 Today debates continue as to whether countries should harness nuclear technology to satisfy insatiable demands for energy.
We are in danger of oscillating between the complacency of the optimists and the craven scruples of the pessimists. AI presents incredible opportunities for the benefit of humanity and we do not wish to fetter or shackle this progress unnecessarily.
The problem with headline-grabbing predictions about the destructive or beneficial potential of superintelligence or the singularity is that they distract the public from the more mundane, but ultimately far more important issues of how humanity and AI should interact now. As Pedro Domingos put it in a 2015 book: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world”.127
7 If Not Now, When?
Some will say this book is premature: although AI might one day require a change in our laws, for the moment it is unnecessary. General AI does not yet exist, and until then, we should spend our time more productively, rather than speculating or even legislating idly about a technology which might never arrive.
This attitude is overly complacent and relies on two incorrect assumptions: first, it underestimates the penetration of AI technology in the world today, and secondly, it rests on a hubristic belief that somehow human ingenuity will be able to address any issues without extra cost or difficulty at some unspecified later stage.
It is not surprising that most people have failed to notice AI’s tightening grip. Incremental developments in technology mean that we often do not even register its improvement. The significant upgrade of Google Translate in 2016 using machine learning is a rare outlier in that it was actually picked up by the media.128 Companies carefully stagger the release of new technologies through software patches and upgrades, gradually immersing their users. Though barely noticeable at the time, the cumulative differences can be huge.129 Because of the natural psychological tendency not to notice a series of small changes, humans risk becoming like frogs in a restaurant. If you drop a live frog into a pot of boiling water, it will try to escape. But if you place a frog in a pot of cold water and slowly bring it to the boil, the frog will sit calmly, even as it is cooked alive.
What if 200 years ago, at the dawn of the Industrial Revolution, we had known the dangers of global warming? Perhaps we would have created institutions to study man’s impact on the environment. Perhaps we would have enshrined national laws and international treaties, agreeing for the good of humanity to constrain harmful activities and to promote sound ones. The world today could have been very different. We might be free from the scourge of rising sea temperatures and melting ice caps. We might have avoided decades of increasingly unpredictable weather cycles, bringing misery and destruction to millions of people. We might have achieved a fair and equitable settlement between richer and poorer nations, respected and honoured by all.
Instead, we are scrambling to legislate backwards to curb climate change. Relatively new innovations such as emissions trading130 and self-imposed greenhouse gas limits131 are projected to have only a limited effect on reducing global warming, and climate scientists generally agree that enormously damaging changes will occur to our atmosphere without far more drastic action.
Humanity is unlikely to have to wait two centuries to see the enormous consequences of AI. The consultancy McKinsey has estimated that compared with the Industrial Revolution “this change is happening ten times faster and at 300 times the scale, or roughly 3,000 times the impact”.132
8 Robot Rules
It may not be immediately obvious why law is relevant to the various industries and aspects of society affected by AI. In fact, legal regulation is as crucial to their smooth operation as it is to every other element in our lives. Just because we do not have daily interactions with lawyers, judges, courts or the police does not mean that our legal system is not having an effect.
Laws “work” even when they are not being used in courtrooms to convict criminals or to award damages to claimants. Indeed, laws are most effective when they are a silent background condition allowing parties to deal with each other in a fair and predictable atmosphere. The legal system is like oxygen. Day to day we do not notice it; in fact, many readers will not have given any thought to their own breathing before coming to this paragraph. However, if the amount of oxygen in the air drops even by a small amount, life quickly becomes intolerable.
The law plays a vital role in solving “coordination problems” which arise where agents can choose from several options, none of which is obviously right or wrong, but where the system as a whole will only function correctly if everyone acts in a similar manner.133 It would not make sense to say that it is better to drive on the right or the left as a general moral proposition, but the laws of traffic in England dictate that all must drive on the left, because if people were allowed to choose for themselves, there would be chaos.134
Although autonomous vehicles may lack some of the fallibilities of human drivers, if there were multiple different AI systems using the roads each with their own internal safety systems, this could lead to more fatalities rather than fewer. Two cars heading in opposite directions might crash head-on because one takes evasive action by steering to its right and the other takes evasive action by steering to its left.135
This book addresses three overarching questions raised by AI:
1. Responsibility: If AI were to cause harm, or to create something beneficial, who should be held responsible?
2. Rights: Are there moral or pragmatic grounds for granting AI legal protections and responsibilities?
3. Ethics: How should AI make important choices, and are there any decisions it should not be allowed to take?
The following chapters will expand on these themes, demonstrating the types of problems that are likely to arise and how they might be addressed by current legal systems. The latter part of the book will move on to examining how novel institutions and then rules could be designed, in order to solve these problems in a coherent, stable and politically legitimate manner.
Chapter 2 elaborates on why AI is unique as a legal phenomenon and calls into question certain fundamental assumptions across most if not all systems of law. Chapter 3 analyses various mechanisms for establishing who or what is responsible when AI causes harm or creates something beneficial. Chapter 4 discusses whether AI should at some point be granted rights from a moral perspective. Chapter 5 considers the pragmatic arguments for and against granting AI legal personality.136 Chapter 6 sets out how we can design international systems to create the types of new laws and regulations needed. Chapter 7 looks at controls on the human creators of AI, and finally, Chapter 8 discusses the possibility of building in or teaching rules to AI itself.
The biggest question in the next ten to twenty years is not going to be how to stop AI from destroying humanity, but how humanity should live alongside it. Today’s regulation is likely to influence how technology develops. In building structures for effective everyday legal regulation in the medium term, we can prepare ourselves far better for any existential threat.