Laws have been adapted over thousands of years to regulate many different phenomena.1 Some say the advent of AI is no different to other social and technological developments and can be addressed through established legal frameworks. This chapter explains why AI presents a unique difficulty for legal regulation. We do not need to do away with all existing laws and start afresh. However, certain fundamental principles will need to be reconsidered.
The chapter will begin by setting out arguments which have been raised against making major legal changes to accommodate AI. Next, it will analyse the concepts of agency2 and causation which underpin current legal systems. Finally, it will identify properties of AI which do not fit easily into established legal structures.
1 Sceptics of Novelty: Of Horses and HTTP
AI is not the first phenomenon to raise issues of legal responsibility for the acts of other intelligent beings. Some of the world’s oldest systems of law addressed responsibility for a semi-autonomous vehicle with more computational power and complexity than even the most advanced self-driving car. That vehicle was the horse.3 One policy response was to hold the animals themselves liable for harm caused. Other legal systems constructed means of ascribing liability for such intelligent (or somewhat intelligent) entities to humans. It is the latter which have proved more enduring.
The locus classicus of this scepticism is US Judge Frank Easterbrook’s 1996 lecture ‘Cyberspace and the Law of the Horse’, in which he dismissed the case for a dedicated law of cyberspace:
Lots of cases deal with sales of horses; others deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or with prizes at horse shows. Any effort to collect these strands into a course on ‘The Law of the Horse’ is doomed to be shallow and to miss unifying principles.5
Error in legislation is common, and never more so than when the technology is galloping forward. Let us not struggle to match an imperfect legal system to an evolving world that we understand poorly. Let us instead do what is essential to permit the participants in this evolving world to make their own decisions. That means three things: make rules clear; create property rights where now there are none; and facilitate the formation of bargaining institutions. Then let the world of cyberspace evolve as it will, and enjoy the benefits.6
As Harvard Law Professor and cyberlaw enthusiast Lawrence Lessig later put it, “‘Go home,’ in effect, was Judge Easterbrook’s welcome”.7 Following the same reasoning as Easterbrook, critics of this book’s thesis may argue that current legal concepts can be adapted to address AI. In part, this view stems from disagreement and uncertainty as to what is actually meant by the terms AI and robots.8
It may well be that when some commentators say that AI does not require any change in the law, they are talking about technologies which do not meet the test set out earlier in this book, namely that AI is the ability of a non-natural entity to make choices by an evaluative process. To the extent that an entity does not meet this threshold test, it will not require new legal principles.
Unhelpfully, some of those who argue for distinct laws to govern AI have avoided defining what needs to be regulated. For instance, Matthew Scherer wrote in a 2016 article which advocated new regulatory agencies for AI, that “[t]his paper will effectively punt on the definitional issue and ‘define’ AI for the purposes of this paper in a blissfully circular fashion: ‘artificial intelligence’ refers to machines that are capable of performing tasks that, if performed by a human, would be said to require intelligence”.9 Sceptics are unlikely to be convinced by this approach.
The technology law scholar Lilian Edwards has put the sceptics’ case in similar terms in a broadcast discussion of AI and the law:
I don’t think we need much new law [for AI]. I think the nature of the law which most people don’t really get who aren’t lawyers is that the law is informed by principles and… there are already a very large list of principles of liability regimes. So we have rules of negligence, we have rules of product liability, we have rules of allocating risks in insurance law… We have troubles every time a new technology comes along. We have troubles applying laws to ships… we have troubles applying laws to horses. So it’s not obvious that the law doesn’t need to be adapted and disputed and litigated but I don’t think we need much fundamental new law.10
Appearing on the same programme, technology lawyer Mark Deem agreed with Edwards and advocated incremental development as the solution, saying “…the law has this ability to fill the gaps, and we should embrace that”.11 The remainder of this chapter suggests why the incremental approach is problematic.
2 Fundamental Legal Concepts
2.1 Subjects and Agents
The first major legal concept to be challenged by AI is agency. The word agency can mean several things in law. In this context, we are not referring to a principal-agent relationship, where one entity (the principal) appoints another (the agent) to act on its behalf. Rather, and as explained below, we use “agency” in a wider philosophical sense.
Any system of law—whether common, civil, national or international, secular or religious12—tells humans what they should and should not do. In more formal terms, systems of law regulate behaviour by stipulating legal subjects: those whose behaviour is to be regulated. A legal subject is an entity which holds rights and obligations in a given system. The status of legal subject is something which is thrust upon a person, animal or thing.
A legal agent is a subject which can control and change its behaviour and understand the legal consequences of its actions or omissions.13 Legal agency requires knowledge of and engagement with the relevant norms. Agency is not simply conferred on passive recipients. Rather, it is an interactive process.14 All legal agents must be subjects but not all subjects will be agents. Whereas there are many types of legal subjects—both human and non-human—legal agency is at present reserved only to humans. Advances in AI may undermine this monopoly.
In order for something to exercise agency, there are several prerequisites. The laws in question must be sufficiently clear and publicly promulgated so as to allow humans to regulate their behaviour on the basis of such norms.15 Not all humans are legal agents. Young children are not capable of understanding laws and modifying their behaviour accordingly. The notional agency of humans who are not themselves capable of exercising it is generally imparted to another true agent, for example that human’s parents or their doctors.16 The same applies to those with cognitive impairments, in comas or similar. Agency is not a binary matter—it can exist to a greater or lesser degree. As children develop and learn, they gradually become aware of more of their legal rights and obligations, and at a certain (usually arbitrary) point, the law treats that human as being legally responsible for their own actions.17
Many legal systems have a concept of “personhood” or “personality”,18 which can be held by humans (natural persons) and non-human entities (legal persons). Although legal personality takes different forms across legal systems,19 it only entails the status of subject and not agent. The following subsections analyse various categories of non-human subjects and legal persons through history, in order to demonstrate why none of these meet the threshold for agency described above. Chapter 5 addresses the separate question of whether, in light of its legal agency (amongst other features), AI should be granted legal personality.
2.1.1 Corporations
Companies (also known as corporations) are some of the oldest and today most common examples of non-human legal persons.20 Companies are entities owned by shareholders (which can themselves be companies) and controlled by directors (which can also be companies). They can sue and be sued and can even, in some systems, be subject to criminal liability in their own right.21
Though it is common to talk of a corporation acting in its own name, as the English Lord Chancellor, Viscount Haldane, put it in 1915, the reality is that “…a corporation is an abstraction. It has no mind of its own any more than it has a body of its own…”.22 Historian Yuval Harari explains that limited liability companies are among humanity’s most ingenious inventions but only exist as a “figment of our collective imagination”.23
Although we act as if corporations can do things independently of their owners, directors and employees, in reality they cannot. Corporations like General Motors, Royal Dutch Shell, Tencent, Google and Apple certainly wield enormous power and hold vast amounts of assets but if we strip away the human input, nothing is left. True, these companies exist on paper and in electronic form, as holders of bank accounts, tax liabilities and in entries on property registers. But without humans there would be no one to take decisions that are then ascribed to the company, on which basis the rights and obligations may be altered, created and destroyed.
We should not confuse a company with its physical expression. Collective fictions including companies can have a secondary effect on the physical world. We can build towering corporate headquarters, magnificent temples and august courthouses, but these would just be empty edifices without the collective belief in whichever fiction we have used to justify their construction. The disparity between the company as a fiction and as a physical reality is illustrated by Ugland House, a building in the Cayman Islands, where over 18,000 companies are registered.24 President Obama once said of Ugland House: “That’s either the biggest building in the world or the biggest tax scam in the world”.25
Nineteenth-century legal scholar Otto von Gierke argued that corporations are not mere fictions but in fact real “group-persons”.26 This concept can account for the fact that companies often take decisions which do not result from the choice of one single person being imputed to the company but rather from some expression of collective will, such as a vote of board members. In this context, von Gierke’s arguments rely on metaphysical and social constructs which turn on the “reality” of such collective will. But shared belief is clearly not the same as objective reality. Even if many people in medieval times believed that the English King’s touch could cure the unpleasant disease of scrofula, this did not mean it was true.27 A full critique of von Gierke’s thesis is outside the scope of the present work,28 but for present purposes it is sufficient to note that group personality rests ultimately on the collection of individual human decisions. To this extent, von Gierke’s thesis does not solve the problem of how legal systems can accommodate non-human decision-making.29
2.1.2 Countries
For even longer than legal systems have recognised corporations, countries have been able to create and change legal relations, despite not having any independent directing mind of their own.30 As jurist F.A. Mann commented, prior to recognition “...the non-recognised State does not exist. It is, if one prefers so to put it, a nullity”.31 In much the same way as corporations, countries are accorded the legal status of personality and are consequently subjects of international law as well as national law. In his book Imagined Communities, historian and sociologist Benedict Anderson explained how countries have no objective reality beyond being a social construct. The nation, Anderson said, is “an imagined political community… imagined because the members of even the smallest nation will never know most of their fellow-members, meet them, or even hear of them, yet in the minds of each lives the image of their communion”.32
Like corporations, whenever a country is said to have taken a decision, it is not in reality the country which has acted but rather one or more humans who are deemed to have the appropriate authority—whether the King, Queen, President, Prime Minister, Ambassador and so on.33 Nations may act legally and politically through institutions such as governments and ministries but beyond headed notepaper and grand buildings, they too ultimately rely on human decision-makers. The same principles apply both to subnational entities, such as regions or districts, as well as to supranational entities and groupings, such as the European Union, or the Organization of Petroleum Exporting Countries.34
2.1.3 Buildings, Objects, Deities and Concepts
Buildings, objects, deities and concepts have been granted some legal rights. In the UK case Bumper Development Corporation Ltd v. Commissioner of Police of the Metropolis,35 the Court of Appeal held that an Indian temple, which had legal personality under Indian law, could assert rights and make claims under English law. Even though it would not be recognised as a litigant if based in England and Wales, the temple was nonetheless entitled, in accordance with the principle of comity of nations, to sue in England for the return of a statue allegedly looted from it. Similarly in the US case Autocephalus Greek Orthodox Church of Cyprus v. Goldberg,36 the US 7th Circuit Court of Appeals held that mosaics should be returned to a church which was deemed to be their legal owner.
In reaching this conclusion, the Court of Appeal drew on long-established Indian authority concerning the status of a Hindu idol:
A Hindu idol is, according to long established authority, founded upon the religious customs of the Hindus, and the recognition thereof by courts of law, a ‘juristic entity’. It has a juridical status with the power of suing and being sued. Its interests are attended to by the person who has the deity in his charge and who is in law its manager with all the powers which would, in such circumstances, on analogy be given to the manager of the estate of an infant heir.39
There are a few, possibly apocryphal, historical accounts of objects being punished. One tells of a statue erected by Athenians in honour of a famous athlete, Nikon of Thasos, which was pushed from its pedestal by his envious foes. As it fell, the statue crushed one of its assailants. Instead of laying blame on the other members of the mob, and perhaps on the unfortunate victim himself, the Athenians put the statue before a tribunal. The statue was found guilty and sentenced to be cast into the sea; history does not relate whether it was allowed to plead self-defence.40
It is also said that the eighteenth-century Japanese samurai and jurist Ōoka Tadasuke ruled that a jizo (statue) in a temple be bound with rope as punishment for having been the only witness to a crime (the theft of a piece of silk) and not doing anything to stop it.41 To this day, a statue—allegedly the same as punished by Ōoka Tadasuke—remains tied by a large number of ropes in Tokyo’s Narihira Temple.42
The Japanese example is instructive, given that in Shintō, a Japanese religion, or belief-structure (which remains the largest in that country),43 all things are said to possess kami, which translates as “spirit”, “soul” or “energy”. This includes people and animals, as well as inanimate objects or natural features, such as rocks, rivers and places.44 Consequently, to a Japanese audience the notion of an object holding rights and responsibilities is perhaps not as farfetched as it may appear to a Western observer.45
More recently, legal scholars and policy-makers have given serious consideration to the question of whether parts of the environment, such as plants, trees or coral reefs, might have legal standing.46 For instance, in 2010 Bolivia passed the “Law of the Rights of Mother Earth”, which included in Article 5 the following pronouncement: “For the purpose of protecting and enforcing its rights, Mother Earth takes on the character of collective public interest. Mother Earth and all its components, including human communities, are entitled to all the inherent rights recognized in this Law”.47 On the basis of a similar law, in 2011 a group of Ecuadorian citizens brought a successful legal action on behalf of the environment against the Provincial Government of Loja to halt expansion of a roadway which they claimed was damaging an important watershed.48
The endowment of non-human entities with rights will be considered further in Chapter 4, but for present purposes it is sufficient to note that even if natural entities such as trees, rivers, mountains or even the environment as a whole are granted standing to sue, in reality it is a human which must decide to pursue the claim.49 As Bryson, Diamantis and Grant say: “Nature cannot protect itself in a court of law”.50
2.1.4 Animals
This section considers the legal regimes applicable to animals, both through history and in the current day. It is suggested here that although some legal systems now recognise animal rights,51 and in the past animals were also thought to be subject to responsibilities, animals do not meet the threshold for legal agency.
Historic legal treatment of animals
Edward Payson Evans described various instances of animals being tried for crimes in his 1906 work The Criminal Prosecution and Capital Punishment of Animals.52 The punishment of animals for “wrongs” can be traced back at least as far as the Old Testament: “If an ox gore a man or a woman that they die, then the ox shall be surely stoned, and his flesh shall not be eaten; but the owner of the ox shall be quit”.53 The ox is punished, and its owner is spared. However, in an early example of foreseeability of harm giving rise to vicarious liability, the next verse provides: “But if the ox were wont to push with his horn in time past, and it hath been testified to his owner, and he hath not kept him in, but that he hath killed a man or a woman; the ox shall be stoned, and his owner also shall be put to death”.54
Evans lists an extraordinary array of animals against which judicial proceedings were instituted, described by one commentator as “a veritable Noah’s Ark of creatures”, including “horseflies, Spanish flies and gadflies, beetles, grasshoppers, locusts, caterpillars, termites, weevils, bloodsuckers, snails, worms, rats, mice, moles, cows, bitches and she-asses, horses, mules, bulls, pigs, oxen, goats, cocks, cockchafers, dogs, wolves, snakes, eels, dolphins and turtledoves”.55 Among the crimes for which animals were put to death, Evans notes that “[i]n 1394, a pig was hanged at Mortaign for having sacrilegiously eaten a consecrated wafer”.56
Evans suggests various reasons as to why different societies saw fit to hold animals legally responsible for their actions. One justification relied simply on the aforementioned section in Exodus and reasoned by analogy from there that all animals should be subject to punishment where they cause harm. It is unclear from the Old Testament itself what the reasoning behind such punishments was, but it seems they could be rationalised either on the basis of (a) protection of society from an animal which, having caused harm in the past, might do so again; or (b) retribution against the animal.
Another justification for the punishment of animals, suggested by Esther Cohen, is that medieval society considered that animals were inferior to humans in the cosmic hierarchy, having been created for the latter’s utility. Thus, any animal which killed a human had upset the cosmic order and thereby offended God.57 Piers Beirnes contends that “there is no solid evidence of a general belief that the volition and intent of animals was of the same order as those of humans”.58 Generally though, the justifications for holding animals liable will have varied from place to place and time to time.59 Moreover, the ostensible justification offered for bestial trial and punishment may well have differed from the underlying one. Reviewing Evans’ work, psychologist Nicholas Humphrey concluded: “Taken together, Evans’ cases suggest that again and again, the true purpose of the [animal] trials was psychological. People were living at times of deep uncertainty”.60
As with companies, it can be seen that the decision to hold animals legally liable for crimes—and thereby to make them legal subjects—was generally speaking divorced from any view that the animals actually were aware of their obligations and could have acted as agents.
Modern legal treatment of animals
It might be argued that because animals exhibit many of the same capabilities and tendencies as AI, we ought to apply the same legal principles to both.61 On the surface, there are certainly some similarities between animals and AI: both can be trained (at least up to a point), both can follow simple commands, both can learn new skills or techniques based on their environments and the thought processes of both can at times be somewhat inscrutable to a human observer.
Broadly speaking, a balance must be struck between liability assumed by an animal’s owner which is based in part on the tendencies of the animal, and the countervailing principle that “everyone must take the risks associated with the ordinary characteristics of animals commonly kept in this country. These risks are part of the normal give and take of life”.62 In the UK, the liability for animals is governed partly by judge-made common law, including negligence, and partly by legislation under the Animals Act 1971.63 The latter provides for strict liability for the “keeper”64 of an animal in certain defined circumstances.
Though mechanisms for accommodating responsibility for animals may provide some assistance in designing systems for AI, several factors render it difficult to apply to AI all of the laws on liability for animals, at least in the long term.
First, many laws maintain some form of distinction between wild and domesticated animals. This distinction is inexact even when it comes to animals, as was demonstrated by McQuaker v. Goddard,65 a case concerning a camel which bit the hand of a visitor feeding it apples at the Chessington Zoological Garden. The Court of Appeal of England and Wales held, after some debate, that the camel was to be treated as “domesticated”, with the effect that the zoo’s owner was not held to be liable for its violent actions. Lord Justice Scott explained: “Wild animals are assumed to be dangerous to human beings because they have not been domesticated. Domestic animals are assumed not to be dangerous”. Domesticated animals could accordingly be assumed to be safe unless it was shown that the owner or keeper had specific knowledge of dangerous tendencies. Unlike wild animals, AI does not (by definition) naturally exist in a state of freedom. This might perhaps be said to occur if an AI entity were somehow to “escape” from human control and develop independently. However, for the moment, this fundamental distinction in animal law remains difficult to apply across to AI.
Secondly, animals are limited by their natural faculties. Depending on the species, animals can be trained to perform a range of tasks, but there is a certain level of complexity at which further tuition becomes impossible.66 A dog may be taught to retrieve a ball, but it cannot be taught to fly an aeroplane or perform brain surgery. The eminent psychologist David Premack wrote: “A good rule of thumb is this: Concepts acquired by children after 3 years of age are never acquired by chimpanzees”.67 AI is not so limited. As discussed in Chapter 1 at Section 5, in recent years there have been significant advances in capabilities of AI systems. Even if there are further peaks and troughs of activity, it is reasonable to predict that in the coming decades the technology will continue to improve, and consequently to be delegated yet more important tasks by humans. Consequently, the legal and moral issues raised by the actions of AI are of a different order of complexity than those of animals.
Like AI, animals will not always act as expected. As the plaintiff in McQuaker v. Goddard discovered, a previously docile animal may suddenly lash out and bite a passer-by, or a trained horse may run into the middle of a road.68 But it is inconceivable that an animal might commit a securities fraud.69 The predictability of animals’ range of actions is connected to the next difference, namely the manner in which animals will go about achieving such actions.
Thirdly, the manner in which an animal will achieve a goal is broadly predictable and is more often attributable to evolution than to individual decision-making. Examples of animals “solving” problems are limited to fairly rudimentary tasks within narrow cognitive boundaries, such as a monkey using a stick to poke a termites’ nest, or a bird dropping a snail’s shell from a height in order to access the animal inside. These are hardly on a par with defeating human champions at poker.70
As the AI pioneer Marvin Minsky put it:
…it is only an illusion that animals can ‘solve’ … problems. No individual bird discovers a way to fly. Instead, each bird exploits a solution that evolved from countless reptile years of evolution. Similarly, although a person might find it very hard to design an oriole’s nest or a beaver’s dam, no oriole or beaver ever figured out such things at all. Those animals don’t ‘solve’ such problems themselves; they only exploit procedures available within their complicated gene-built brains.71
By contrast, AI can function not just by virtue of what it has been programmed to do but can learn and change of its own accord. It might be objected that Minsky’s quote above is over-simplistic, and that some animals are capable of learning and developing skills by themselves. This realisation may in turn require humans to re-think their relationship with animals and the rights which they are accorded.72 Such discussion is outside the remit of this book. Perhaps, then, it is more correct to say that the difference between AI’s decision-making and that of animals is one of degree rather than type.
2.1.5 Conclusions on Agency
Though in common parlance we often speak of a company or a country “deciding” to do something, in reality this is shorthand for saying that the humans in control of that entity made such a decision. Animals may in a limited sense choose to take one action rather than another, but they lack the crucial second part of legal agency, namely the ability to understand and interact with a legal system. The final section of this chapter suggests that AI may meet both these requirements, independent of human input. The question of what it is to be legally “independent” of humans is addressed further below, in relation to causation.
2.2 Causation
The second fundamental principle challenged by AI is causation: the apparent connection between one event and others which follow.
The traditional view of causation is that events may be characterised as linked through relationships of cause and effect. This is easy to express in simple terms. If a brick is thrown at a glass window which then shatters, the brick being thrown is the cause and the window shattering is the effect. Many philosophical73 and scientific74 objections have been raised to this account of events, but it nonetheless remains the basis for most legal systems.
Without the notion of cause and effect, legal agency would not function. Legal agency is the ability to understand the consequences of one’s actions in legal terms and to adapt one’s behaviour accordingly, so as to bring about or avoid certain events. Causation provides the connection between acts or omissions and their consequences.
In law, the deemed cause of an event is not simply a question of objective fact but rather of policy and value judgements. The key question for present purposes is whether the relationships which we have to date treated as being causal can withstand the intervention of AI.
2.2.1 Factual Causation
Causation in law, at least with regard to allocating liability for harm, encompasses two separate elements: factual and legal. Donal Nolan explains that factual causation is “the question of whether or not there is a historical connection between the wrongful conduct of the defendant and the damage suffered by the claimant”, which is “analytically different” from legal causation, or “proximate cause”, namely “whether the historical connection between the defendant’s wrongful conduct and the damage suffered by the claimant is strong enough to justify the imposition of liability”.75
The most common expression of factual causation is to construct a hypothetical counterfactual, by asking whether “but for” the relevant potentially causative event, the relevant effect would have occurred.76 As Wex Malone noted, the “but for” test is an artificial construct: “…this very announcement is a statement of legal policy. It marks an effort to point out the bare minimum requirement for imposing liability”.77
If a murderer stabs a victim to death, then at an extreme one might say that the murderer’s parents are the cause of the murder, because without them then the murderer would never have been born to go on and commit the murder. Indeed, applying this reasoning one could keep going back indefinitely, to say that the grandparents, great-grandparents and so on were the factual cause of the murder.
Factual causation may appear at first glance to simply be a question of scientific evidence (“did he jump or was he pushed?”), but two examples show that it is treated in practice by legal systems as a policy-based issue.
“Under-determination” refers to situations where there is insufficient evidence to know whether a given event was a “but for” cause.78 Strictly speaking, it is a question of the adequacy of evidence rather than a principled objection to the “but for” test. That said, in the real world principles of causation must still be used even where (as is often the case) humans lack perfect knowledge of what happened.
From the late twentieth century onwards, there has been much litigation concerning liability for illnesses caused by certain tiny carcinogenic particles.79 These carcinogens, principally asbestos, were present in mining and industrial processes for many years. Scientific evidence suggests that exposure to even a single fibre or particle of the relevant carcinogen can lead to the development of a fatal cancer. Victims have often worked for more than one employer over the course of their careers in the relevant industries. Many years later, when an unfortunate victim becomes ill, it is unclear which employer’s actions or omissions were the “but for” cause of the damage.80 In such circumstances, judges have departed from the usual “but for” test in order to provide a remedy for victims who would—under the normal principles—not have one.81
“Over-determination” occurs where there are two or more causes, each of which individually would have been sufficient to cause the effect in question. For instance, take a situation where person A lights a fire on one side of a house and person B independently lights another fire on the other side of the house, and the house burns down. But for the actions of either A or B then the house still would have burned down. In such circumstances, courts have recoiled from a strict interpretation of the “but for” test, according to which both would escape liability.82
2.2.2 Legal Causation
Once factual causation is established, the next stage is to inquire whether a particular factual cause was also a proximate or legal cause. Factual causation is a necessary, but not sufficient factor for legal causation.83 In legal causation, the question is not so much what was the cause of an event, but rather: what was the relevant cause?
To avoid the question of legal causation simply becoming a circular exercise (akin to saying “that which is legally relevant is selected because it is legally relevant”), there are certain meta-norms which inform many legal systems as to when a person will be held responsible for a given consequence.84 These vary across legal systems and between different contexts—such as criminal and private law.
The overarching ingredients of legal causation for harm include: (a) the free, deliberate and informed action or omission of a legal agent; (b) that the agent either knew or ought to have known of the potential consequences of such action or omission; and (c) that there has been no intervening act (sometimes referred to in Latin as a novus actus interveniens) splitting factors (a) and (b) from the eventual consequences.85
Part (b) is sometimes referred to as the “foreseeability” or “remoteness” of certain consequences. Applying this doctrine, where an unfortunate but unpredictable chain of events leads from one action to damage, the person who originally caused the damage may be excused liability. In a leading US case from 1928, Palsgraf v. Long Island Railroad Co,86 railway employees helping a passenger board a train dislodged a package he was carrying; the package, which contained fireworks, exploded, and the shock caused a coin-operated scale standing at the other end of the platform to fall over, hitting Mrs. Palsgraf and causing her psychological injury. The railway company was found not to be liable because the chain of events was too unlikely to have been foreseeable. Mrs. Palsgraf had not come within the group of people for whom “hazard was apparent to the eye of ordinary vigilance”, and therefore no actionable wrong had been committed towards her.
The three ingredients of legal causation in turn support the underlying tenets of legal agency, namely the ability of humans to understand the consequences of their actions and adapt their behaviour accordingly. If a person’s action is compelled, for example by force, then that person’s agency has been compromised. Likewise, if a person’s action was free but the consequences of it were unforeseeable, then the person cannot be said to have exercised full agency with regard to the result, because agency requires that the result at least could have been reasonably predicted. The emphasis on the free will of agents extends not just to the causation of damage but also to the ability to create legal agreements. Where a person’s freedom of choice is vitiated by duress, or even misrepresentation, then a contract to which they have apparently agreed may be set aside.87 Finally, the emphasis on intervening acts upholds agency in general because it gives legal effect to the third party’s free and deliberate action.88
The above analysis has focussed primarily on situations where injury and damage have been caused, but causation can also play a significant role in establishing who is responsible for beneficial events, such as the creation of intellectual property rights in inventions and designs—which often have several factual sources.89 For instance, where an AI system writes a best-selling book, or creates a valuable work of art, questions arise as to who owns the relevant property.90 The creation and ascription of intellectual property rights is partly a question of fact (“Is this painting by Matisse?”), but also to a significant degree a question of policy (“How far should the design of a fabric be protected if it has been heavily influenced by a Matisse painting?”).91
The legal system in question might promote a range of aims, from fostering creativity for its own sake to increasing economic output.92
2.2.3 Conclusions on Causation
Whether it is a question of determining liability for harm or responsibility for beneficial events, causation is not simply a question of objective fact but rather one of economic, social and legal policy.93 The analysis encompasses, whether overtly or covertly, judgements about what types of behaviour we want to promote or discourage, as well as issues of justice and distribution. Seen in this light, it should become clear that seeking a human or even a fictional corporate agent behind every AI act is just one of many policy responses that could be chosen.94
3 Features of AI Which Challenge Fundamental Legal Concepts
AI law expert Ryan Calo, in a paper which argues for “a moderate conception of legal exceptionalism for purposes of assessing robotics”,95 says that a technology is exceptional “when its introduction into the mainstream requires a systematic change to the law or legal institutions”.96
This section provides two reasons as to why AI is exceptional: it makes moral choices; and it can develop independently. As a result of these features, the fundamental legal concepts of agency and causation—at least in their present human-centric form—are likely to be stretched to breaking point.97
3.1 AI Makes Moral Choices
Given that the essence of AI is its autonomous choice-making function, AI is qualitatively unlike existing technologies in that it must sometimes take independent “moral” decisions. This is challenging to established legal systems because, for the first time, a piece of technology is interposing itself between humans and an eventual outcome. Rather than attempt to define what is meant by morality (which is itself the subject of much debate),98 it is sufficient for our purposes to say that AI makes choices which would be regarded as having a moral character or outcome if they were undertaken by a human.99
Life is full of moral choices, and where these are of a particularly serious or consequential nature, answers are often provided by the law—saving each individual citizen from the terrible dilemma of making a decision. For example, in most countries voluntary euthanasia (assisted suicide) is illegal. However, in the Netherlands, Belgium, Canada and Switzerland it is allowed under strictly controlled circumstances.
It might be thought that no new laws are needed for AI, because it can simply follow the promulgated laws that apply to humans in any given legal system.100 Thus, it might be permissible for a robot in Switzerland to administer a fatal dose of drugs to a consenting patient, but not across the border in France where this would be illegal. However, the law does not take away all moral choices.
First, in many circumstances the law leaves gaps for discretion where no right or wrong answer is mandated.
Secondly, even where the law does stipulate a moral outcome, there are some circumstances in which that law (or perhaps its enforcement) might be overridden by other concerns. For instance, although assisting suicide is illegal in the UK, the Crown Prosecution Service has published guidance which provides that a prosecution is less likely to be pursued if “the suspect was wholly motivated by compassion”.101
Moreover, AI need not reason about such questions in the way that a human would. As one commentator has observed:
AI is the continuation of intelligence by other means. … It is thanks to this decoupling that AI can colonise tasks whenever this can be achieved without understanding, awareness, sensitivity, hunches, experience or even wisdom. In short, it is precisely when we stop trying to reproduce human intelligence that we can successfully replace it. Otherwise, AlphaGo would have never become so much better than anyone at playing Go.103
In philosopher Philippa Foot’s famous “Trolley Problem”104 thought experiment, participants are asked what they would do if they saw a train carriage (a trolley) heading down railway tracks towards five workmen who are in its path and would not have a chance to move before being hit. If the participant does nothing, the train will hit the five workmen. Next to the tracks is a switch, which will move the trolley onto a different spur of tracks. Unfortunately, on the second spur is another workman, who will also be hit and killed if the train carriage is directed down that set of tracks. The participant has a choice: act, and divert the trolley so that it hits the one person, or do nothing and allow the trolley to kill five.105
The most direct analogy to the Trolley Problem for AI is the programming of self-driving cars.106 For instance: if a child steps into the road, should an AI car hit that child, or steer into a barrier and thereby kill the passenger? What if it is a criminal who steps into the road?107 The parameters can be tweaked endlessly, but the basic choice is the same—which of two (or more) unpleasant or imperfect outcomes should be chosen?
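To make concrete how such a trade-off migrates from a thought experiment into software, consider the following purely hypothetical sketch in Python. The harm weights, probabilities and function names are all invented for illustration; no real autonomous driving system is suggested to work in this explicit way, but the sketch shows that someone, at some point, must choose the numbers.

```python
# Purely hypothetical illustration: a moral trade-off expressed as explicit
# parameters. The weights and probabilities below are invented.
HARM_WEIGHTS = {
    "pedestrian": 1.0,   # who decides these values, and on what basis?
    "passenger": 1.0,
}

def expected_harm(outcome):
    """Weighted, probability-adjusted harm of one candidate manoeuvre."""
    return sum(HARM_WEIGHTS[person] * p_injury for person, p_injury in outcome)

def choose_manoeuvre(options):
    """Select the manoeuvre whose predicted outcome minimises expected harm."""
    return min(options, key=lambda option: expected_harm(option["outcome"]))

options = [
    {"name": "brake_in_lane", "outcome": [("pedestrian", 0.7)]},
    {"name": "swerve_into_barrier", "outcome": [("passenger", 0.4)]},
]
print(choose_manoeuvre(options)["name"])  # the "choice" simply follows from the weights
```

Change the weights and the "decision" changes with them: the moral question has not been removed, merely relocated into a parameter.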
Aspects of the Trolley Problem are by no means unique to autonomous vehicles. For instance, whenever a passenger gets into a taxi, they delegate such decisions to the driver. Moreover, vehicles are often designed in a manner which strikes a balance between protection for pedestrians and other road users, and the safety of the passengers within that vehicle. Design features of cars such as curved bonnets might protect pedestrians, at the expense of those within the vehicle.108 However, although it is true that decisions are sometime delegated to human service providers, and trade-offs exist in other areas of design, AI is unique in that it will engender the delegation of important trade-offs to non-human decision-makers.109
In 2017, an Ethics Commission appointed by the German Federal Ministry of Transport and Digital Infrastructure considered these questions in a report on automated and connected driving, asking:
What technological development guidelines are required to ensure that we do not blur the contours of a human society that places individuals, their freedom of development, their physical and intellectual integrity and their entitlement to social respect at the heart of its legal regime?110
The Commission set out 20 “Ethical rules for automated and connected vehicular traffic”, including a requirement that: “The protection of individuals takes precedence over all other utilitarian considerations”. In keeping with Germany’s attitude to human dignity, set down in Article 1(1) of its Constitution, the ninth rule provides: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another”.111
Moral issues arise not just for autonomous vehicles, but also in many other uses of AI. An AI system designed to assist with triage and prioritisation of patients in an accident and emergency ward may have to make moral choices as to which patient ought to be treated sooner, for instance when deciding whether to favour the elderly or younger patients. Indeed, any allocation of resources by AI between competing demands raises similar issues. An autonomous weapon may have to decide whether to fire at an enemy who is surrounded by civilians, taking the risk of causing collateral damage in order to eliminate the target.112
A common objection to the Trolley Problem or its variants being applied to AI is to say that humans are very rarely faced with extreme situations where they must choose between, for example, killing five schoolchildren or one member of their family. However, this objection confuses the individual example with the underlying philosophical dilemma. Moral dilemmas do not arise only in life and death situations. To this extent, the Trolley Problem is misleading in that it could encourage people to think that AI’s moral choices are serious, but rarely arise. In fact, all decisions involving choice and discretion will involve the weighing up of one or more values against others so as to arrive at an answer.113 Inevitably a decision to do one thing rather than another will involve the privileging of certain principles over others. For instance, an AI car might be programmed with a tendency to avoid certain areas when transporting its passengers from A to B—thereby leading to de facto social exclusion and marginalisation. This is a much more subtle aspect of the choices to which we delegate AI, but nonetheless one with profound consequences.
There may be a moral element involved when AI recommends news stories, books, songs or films—on the basis that these shape how we see the world and what actions we take. An angry or disaffected person who is repeatedly recommended violent films might be encouraged to commit violent acts; a person harbouring racist tendencies might find these exacerbated if she is shown sources which tend to support this world view.114 Recent controversy over political processes such as the 2016 US election and the UK’s “Brexit” referendum has demonstrated the potential power of the information on social media to create a feedback loop reinforcing various predilections and prejudices. Such information is increasingly chosen—and even generated—by AI.
In order to make the moral choices highlighted above, AI must necessarily engage with unclear laws and competing principles and be aware of their outcomes. This is the essence of acting as a moral (and more importantly for present purposes) a legal agent. As set out below, the increasing unpredictability of AI renders it ever more difficult to tether each decision AI takes to humans through a traditional chain of causation.
3.2 Independent Development
In this book, AI capable of “independent development” means a system which has at least one of the following qualities: (a) the capability to learn from data sets in a manner unplanned by the AI system’s designers; and (b) the ability of AI systems to themselves develop new and improved AI systems which are not mere replications of the original “seed” program.115
3.2.1 Machine Learning and Adaptation
A machine learns whenever it changes its structure, program or data, in such a manner that its expected future performance improves.116 In 1959, Arthur Samuel, a pioneer in AI and computer gaming, is said to have defined machine learning as the “[f]ield of study that gives computers the ability to learn without being explicitly programmed”.117
In the 1990s, an expert in the field of “evolvable hardware”, Adrian Thompson, used a program which foreshadowed today’s machine learning AI to design a circuit that could discriminate between two audio tones. He was surprised to find that the circuit used fewer components than he had anticipated. In a striking early example of adaptive technology, it transpired that the circuit had made use of barely perceptible electromagnetic interference created as a side effect between adjacent components.118
Today, machine learning can be categorised broadly as supervised, unsupervised or reinforcement learning. In supervised learning, the algorithm is given training data which contains the “correct answer” for each example.119 A supervised learning algorithm for credit card fraud detection could take as input a set of recorded transactions, and for each individual datum (i.e. each transaction), the training data would contain a flag that says whether it is fraudulent or not.120 In supervised learning, specific error messages are crucial, as opposed to feedback which merely tells the system that it was mistaken. As a result of this feedback, the system generates hypotheses about how to categorise future unlabelled data, which it updates based on the feedback it is given each time. Although human input is required to monitor and provide feedback, the novel aspect of a supervised learning system is that its hypotheses about the data, and their improvements over time, are not pre-programmed.
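A minimal sketch of such a fraud-detection system appears below. The transaction features, the data and the use of scikit-learn’s LogisticRegression are assumptions made purely for illustration, rather than anything drawn from the sources cited above.

```python
# Supervised learning sketch: each training example carries the "correct answer".
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one transaction: [amount, hour_of_day, foreign_merchant_flag]
transactions = np.array([
    [12.50,   9, 0],
    [980.00,  3, 1],
    [45.00,  14, 0],
    [1500.00, 2, 1],
    [23.75,  18, 0],
    [760.00,  4, 1],
])
labels = np.array([0, 1, 0, 1, 0, 1])   # 1 = fraudulent, 0 = legitimate

model = LogisticRegression()
model.fit(transactions, labels)          # learn from the labelled examples

# The trained model now forms hypotheses about transactions it has never seen.
new_transaction = np.array([[890.00, 3, 1]])
print(model.predict(new_transaction))        # e.g. [1] -> flagged as likely fraud
print(model.predict_proba(new_transaction))  # the model's estimated probabilities
```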
In unsupervised learning, by contrast, the program is given data without any labels and must find structure in it for itself. As one machine learning researcher explains:
It may seem somewhat mysterious to imagine what the machine could possibly learn given that it doesn’t get any feedback from its environment. However, it is possible to develop [a] formal framework for unsupervised learning based on the notion that the machine’s goal is to build representations of the input that can be used for decision making, predicting future inputs, efficiently communicating the inputs to another machine, etc. In a sense, unsupervised learning can be thought of as finding patterns in the data above and beyond what would be considered pure unstructured noise.122
A particularly vivid example of unsupervised learning was a program that, after being exposed to millions of images taken from YouTube videos, was able to recognise images of cat faces, despite the data being unlabelled.123 This process is not limited to frivolous uses such as feline identification: its applications include genomics as well as social network analysis.124
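By way of contrast with the supervised sketch above, the following example uses k-means clustering, a far simpler technique than the neural network behind the cat-face experiment (chosen here only for brevity), to show a program finding structure in data that carries no labels at all.

```python
# Unsupervised learning sketch: no labels are supplied, yet the algorithm
# discovers that the data falls into two distinct groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
group_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
data = np.vstack([group_a, group_b])     # mixed together, unlabelled

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)           # approximately [0, 0] and [5, 5]
```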
Reinforcement learning, sometimes referred to as “weak supervision”, is a type of machine learning which maps situations and actions so as to maximise a reward signal. The program is not told which actions to take, but instead has to discover which actions yield the most reward through an iterative process: in other words, it learns through trying different things out.125 One use of reinforcement learning involves a program being asked to achieve a certain goal, but without being told how it should do so.
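A minimal sketch of this idea is tabular Q-learning on a toy "corridor" task. The environment, rewards and parameters are invented for illustration; the point is that the program is never told which action is correct, only how much reward it received.

```python
# Reinforcement learning sketch (tabular Q-learning): the agent discovers by
# trial and error that moving right along the corridor leads to the reward.
import random

N_STATES = 5             # a corridor of 5 cells; the reward sits in the last cell
ACTIONS = [-1, +1]       # move left or move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit the best known action, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the value estimate using only the reward signal.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
# Expected: every state maps to +1, i.e. "move right towards the reward".
```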
In 2014, Ian Goodfellow and colleagues including Yoshua Bengio at the University of Montreal developed a new technique for machine learning which goes even further towards taking humans out of the picture: Generative Adversarial Nets (GANs). The team’s insight was to create two neural networks and pit them against each other, with one model creating new data instances and the other evaluating them for authenticity. Goodfellow et al. summarised this new technique as: “...analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles”.126 Yann LeCun, Director of AI Research at Facebook, has described GANs as “the most interesting idea in the last 10 years in [machine learning]”, and as a technique which “opens the door to an entire world of possibilities”.127
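The following toy sketch conveys this counterfeiter-and-police dynamic. It assumes the PyTorch library and uses a one-dimensional target distribution invented for illustration; it does not reproduce Goodfellow et al.'s actual code or architecture.

```python
# Toy GAN sketch: a generator ("counterfeiter") learns to mimic a target
# distribution while a discriminator ("police") learns to spot its fakes.
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 3.0   # "genuine articles": samples from N(3, 0.5)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator on real samples (label 1) and fakes (label 0).
    real, fake = real_data(64), G(torch.randn(64, 1)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2. Train the generator to fool the discriminator.
    g_loss = loss_fn(D(G(torch.randn(64, 1))), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

samples = G(torch.randn(1000, 1))
print(samples.mean().item(), samples.std().item())   # drifts towards roughly 3.0 and 0.5
```

Neither network is told what the target distribution looks like; each improves only by competing with the other.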
The above forms of machine learning—particularly those towards the fully unsupervised end of the spectrum—indicate AI systems’ ability to develop independently from human input and to achieve complex goals.128
Programs which utilise techniques of machine learning are not directly controlled by humans in the way they operate and solve problems. Indeed, the great advantage of such AI is that it does not approach matters in the same way that humans do. This ability not just to think, but to think differently from us, is potentially one of the most beneficial features of AI.
A striking example is DeepMind’s AlphaGo Zero, a version of the Go-playing program unveiled in 2017 which reached superhuman performance without learning from any human games. DeepMind describe how:
It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.130
AlphaGo Zero is an excellent example of the capability for independent development in AI. Though earlier versions of AlphaGo were able to create novel strategies unlike those used by human players, they did so on the basis of data provided by humans. Through learning entirely from first principles, AlphaGo Zero shows that humans can be taken out of the loop altogether soon after a program’s inception. The causal link between the initial human input and the ultimate output is weakened yet further.
DeepMind say of AlphaGo Zero’s unexpected moves and strategies: “These moments of creativity give us confidence that AI will be a multiplier for human ingenuity, helping us with our mission to solve some of the most important challenges humanity is facing”.131 This may be so, but with such creativity and unpredictability comes attendant dangers for humans, and challenges for our legal system.
3.2.2 AI Generating New AI
Some AI systems are able to edit their own code—the equivalent of a biological entity being able to change its DNA. One example of this is a program built in 2016 by a team of researchers from Microsoft and Cambridge University, which used neural networks and machine learning in order to augment its own ability to solve mathematical problems in increasingly sophisticated ways.132
The Microsoft/Cambridge program derived data from multiple sources, including other programs. Some commentators described this as “stealing” code.133 This approach has been used in other instances, such as Prophet, a patch generation system that “works with a set of successful human patches obtained from open source software repositories to learn a probabilistic, application independent model of correct code”.134 Where the source used by AI to learn and develop is other AI, the causation and authorship of any new code generated can become yet more obscure.
Several papers published in 2016 showed that it is possible to train an AI network to learn to learn, a process known as “meta-learning”. Specifically, AI engineers created neural networks which then learned independently to perform a complex technique: stochastic gradient descent (SGD).135 SGD is particularly useful in machine learning because it updates a model’s parameters using only a small, randomly chosen sample of the training data at each step, rather than attempting to review all the data available to it at once.136
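For orientation, the sketch below shows plain, hand-written SGD fitting a toy linear model one randomly chosen example at a time. It is the classical technique itself, not the "learned" optimisers described in the 2016 papers, and the data is invented for illustration.

```python
# Stochastic gradient descent sketch: each update uses a single random example
# rather than the whole training set.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=1000)
y = 3.0 * X + 0.5 + rng.normal(0, 0.1, size=1000)   # true weight 3.0, true bias 0.5

w, b, lr = 0.0, 0.0, 0.1
for step in range(5000):
    i = rng.integers(len(X))        # pick one training example at random
    error = (w * X[i] + b) - y[i]
    w -= lr * error * X[i]          # gradient of the squared error w.r.t. w
    b -= lr * error                 # gradient of the squared error w.r.t. b

print(w, b)                         # converges to approximately 3.0 and 0.5
```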
One commentator reacted to the meta-learning papers as follows:
So not only are researcher[s] who hand optimize gradient descent solutions out of business, so are folks who make a living designing neural architectures! This is actually just the beginning of Deep Learning systems just bootstrapping themselves… This is absolutely shocking and there’s really no end in sight as to how quickly Deep Learning algorithms are going to improve. This meta capability allows you to apply it on itself, recursively creating better and better systems.137
As noted in Chapter 1, various companies and researchers announced in 2017 that they had created AI software which could itself develop further AI software.138
In May 2017, Google demonstrated a meta-learning technology called AutoML. Google CEO Sundar Pichai explained in a presentation that “[t]he way it works is we take a set of candidate neural nets, think of these as little baby neural nets, and we actually use a neural net to iterate through them until we arrive at the best neural net”.139
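The sketch below illustrates the underlying idea of one program selecting another's architecture. It uses simple random search over candidate network shapes with scikit-learn; Google's AutoML used a learned controller network trained by reinforcement learning, so this is an analogy rather than a reproduction.

```python
# Architecture search sketch: propose candidate "baby" networks at random and
# keep whichever scores best, with no human choosing the final design.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
random.seed(0)

best_score, best_arch = 0.0, None
for _ in range(10):
    # Candidate architecture: one or two hidden layers of randomly chosen width.
    arch = tuple(random.choice([16, 32, 64, 128])
                 for _ in range(random.choice([1, 2])))
    candidate = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    score = cross_val_score(candidate, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_arch = score, arch

print(best_arch, round(best_score, 3))   # the best network found by the search
```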
Two final points on AI’s capacity for independent development: First, the list of achievements above comes with a “best before” date. Even within the field of machine learning, developments will no doubt continue after this book’s publication. Second, although the present section has concentrated predominantly on forms of machine learning, this is only because they are the dominant techniques at the time of writing. As mentioned in Chapter 1, in the future other AI technologies may bring about even greater independence. One constant remains: with each advance in the automation of AI’s development, it becomes yet further removed from human input.140
3.3 Why AI Is Not Like Chemicals or Biological Products
It might be objected that AI is not the only man-made entity capable of independent development. A bacterium created in a laboratory might be able to adapt to different environments (such as whether it is hosted in a human or an animal) and/or to change its form over time, in response to stimuli such as new antibiotics.
Current legal systems have strategies for dealing with liability arising from such chemical or bio-engineered products, even in situations where these might continue to develop once they have been released from the laboratory. In the EU, questions of liability arising from chemical or biological products that change once released into the environment may be addressed by the EU Product Liability Directive, which, broadly speaking, imposes a strict liability regime on the “producer”141 of a defective product.142 The EU also utilises a range of prophylactic and ongoing processes for monitoring the safe use and development of such products.143 More general rules such as negligence can penalise a legal person whenever they did something they ought not to have done, or failed to do something which they ought to have done (which might include the reckless release of a dangerous substance into the environment).
However, the main difference between AI and other products which can develop and change is AI’s ability to take into account and interact with laws and rules when it undergoes such changes. Bacteria and viruses are not legal agents because they cannot interact with rules: they respond to nothing more sophisticated than, perhaps, an imperative to reproduce. Though AI at present may operate using simplistic reward or error functions that are arguably closer to bacteria than to human-level reasoning, at a theoretical level it is possible for AI to engage with an unlimited number of aims and operate within various parameters and constraints. The combination of the ability to take decisions and to take those decisions based on their predicted effect within a system of rules and norms is what renders an entity an agent.
4 Conclusions on the Unique Features of AI
AI is unlike other technologies, which are essentially fixed and static once human input has ended. A bicycle will not re-design itself to become faster. A baseball bat will not independently decide to hit a ball or smash a window.
No legal system ascribes all responsibility for human actions—at least where they are carried out by adults with normal mental faculties—to their parents, their teachers or their employers. At a certain age or level of maturity, humans are treated as independent agents who are held responsible for their own deeds. A person’s tendency to undertake certain actions may have been shaped by their upbringing, but this does not mean that parents are forever tethered to their children. In developmental psychology, the threshold is called the “age of reason”. In law, it is known as the “age of majority”.144 We are approaching this point for AI.145