© The Author(s) 2019
Jacob Turner, Robot Rules, https://doi.org/10.1007/978-3-319-96235-1_3

3. Responsibility for AI

Jacob Turner
Fountain Court Chambers, London, UK

If AI is unique as a legal phenomenon, the next question is what we should do about it.

This chapter and the two which follow respond in three stages. Chapter 3 discusses responsibility for AI, in terms of both liability for harm and how to account for positive output such as the authorship of creative works. Chapter 4 addresses potential moral justifications for granting AI rights. Chapter 5 then brings together themes raised in Chapters 3 and 4, arguing that a legal person is formed from “a bundle of rights and obligations”, and proposing legal personality for AI as an elegant and pragmatic solution to the issues raised.

Various legal mechanisms could be used to determine who or what is responsible for AI when it causes harm or creates something of value. There is no single “silver bullet” answer. This chapter will explore how existing laws from legal systems around the world might be applied. Those which follow discuss what changes could be needed.

1 Private and Criminal Law Distinguished

Most legal systems distinguish between criminal and private law. AI can lead to consequences in both.

Private law refers to the legal relationship between people and involves the creation, alteration, and destruction of rights.1 Many private law relations are voluntary to begin with. For instance, people can usually choose whether or not to enter into a contract, but once they have done so it will be legally binding.

In private law, rights and obligations usually come in pairs: a liability for one party corresponds to a claim held by another.2 Deterrence of wrongful conduct is a common aim of both private and criminal law. Another purpose of private law is to ensure rights are vindicated and that parties are compensated for harm.3 The usual remedy in private law is a payment of money to the innocent party, although other remedies can require the defendant to undertake or desist from a particular act.4

Criminal law is arguably a society’s most powerful weapon against those who transgress. Criminal laws are usually enforced by the state and apply whether or not individual perpetrators have expressly agreed to be bound by them. Criminal laws have various purposes, including signifying the state’s disapproval of certain conduct, retribution, deterrence and the protection of society as a whole.5 If a person commits a crime, then they will usually be punished, typically by imprisonment and/or a fine. Some legal systems still practise “corporal punishment”, which involves the infliction of pain, mutilation or even death on a criminal.6

Designation as a crime is a community’s most emphatic denunciation of conduct.7 For this reason, the requirements for a finding of guilt tend to exceed those for private law in terms of individual culpability or blameworthiness. Criminal law may also demand a higher standard of proof before guilt is established. Unlike civil law liability in tort or contract, being found guilty of a crime will usually have lasting effects on an individual. A conviction can lead to both social stigma and permanent legal disabilities. In some jurisdictions, criminals are barred from voting and from exercising other basic civil rights.8 Indeed, when the USA imposed a general ban on slavery in the Thirteenth Amendment to the Constitution, slavery was nonetheless preserved “as a punishment for crime whereof the party shall have been duly convicted”.

2 Private Law

Private law obligations relating to AI are most likely to arise from two sources9: civil wrongs10 and contract.11 Civil wrongs occur where the legal rights of one party are infringed.12 If Damien throws a television out of a hotel room window and injures Charles, an unlucky pedestrian walking down the street outside, Damien has committed a civil wrong against Charles by interfering with his right to walk down the street in peace and/or his right to bodily integrity. Charles will be able to seek damages from Damien under private law.13 Contract is based on agreement. If Evelyn agrees to sell a new car to Frederica, but instead delivers a second-hand model, then Frederica may sue Evelyn for breaching their agreement. The corollary is that if Evelyn delivers a new car as promised but Frederica refuses to pay for it, then Evelyn may also sue Frederica based on their exchange of promises.

Within civil wrongs, liability can arise in a number of different ways. Important categories for present purposes include negligence, strict and product liability, and vicarious liability. We will discuss these in turn.

2.1 Negligence

Negligence is conduct which fails to conform to a required standard.14 In the famous UK case of Donoghue v. Stevenson, a producer of bottled ginger beer was required to pay compensation to a woman who fell ill after opening a bottle which contained a dead snail.15 The producer was held to owe a duty of care to whoever might reasonably be expected to open the bottle, even though there was no direct contract between them.16 The judgement explained: “…you must take reasonable care to avoid acts or omissions which you can reasonably foresee would be likely to injure your neighbour”. Neighbours were defined as “persons who are so closely and directly affected by my act that I ought reasonably to have them in contemplation”.17

Similar rules apply across many different types of legal systems, including those of France,18 Germany19 and China.20

2.1.1 How Would the Law of Negligence Apply to AI?

If harm is caused, the first question is whether anyone was under a duty not to cause, or to prevent that harm. The owner of a robot lawnmower might be under a duty towards anyone in the vicinity of that lawnmower. This would include, for example, a duty to take reasonable care to ensure that the AI lawnmower does not stray into the next-door neighbour’s garden and decapitate their prize-winning roses.

The second question is whether the duty was breached. If the owner of the lawnmower has taken reasonable precautions in the circumstances then he will be exonerated, even if the lawnmower caused harm. If the neighbour borrows the lawnmower without the owner’s permission and uses it on her own garden, where it causes damage, then the owner would have a strong argument that the damage was not caused by his breach of any duty.

The third question is whether the breach of duty caused the damage. Suppose the lawnmower was, through the owner’s negligence, rolling towards the neighbour’s garden, but immediately before it damaged any flowers a car ran off the road and destroyed the neighbour’s rosebed. The lawnmower owner might have breached his duty to keep the machine under control, but the damage would not have been caused by this breach because of the car driver’s intervening act.

A fourth question in some legal systems is whether the damage was of a type or extent which was reasonably foreseeable. The cost of replacing the roses is likely to be foreseeable, but a loss of prize money from a particularly lucrative rose-growing competition that the neighbour would otherwise have entered may not be.

The owner is not the only person who might be under a duty of care in the above situation. A duty might also apply to the designer of the AI, or to the person (if any) who taught or trained it. For instance, if the design of the AI contained a fundamental flaw (let’s say it interpreted children as weeds to be destroyed), then the designer might have breached a duty to design the robot safely.

2.1.2 Advantages of Negligence

Duty Can Be Adapted Depending on Circumstance

The level of duty can expand and shrink according to context.21 This means the law of negligence can take into account the shifting uses to which AI might be put.22 As we move along the spectrum from narrow AI which can only be used for one task to general AI which is multipurpose, this feature of the law of negligence will become increasingly useful.

As a rule of thumb, the chance of harm occurring can be multiplied by the gravity of potential harm to arrive at a calculation of what precautions should be taken.23 When transporting nuclear waste, a high level of precaution is justified because although the chances of a leak may be very low, the danger is extreme. Sometimes courts will also take into account the potential benefit to society from an activity: beneficial but risky activities are likely to be given more lenient treatment than a dangerous activity of no public benefit. For instance, the police are less likely to be held liable for negligent driving when pursuing a criminal than a joyrider would be, because there is a social advantage to the former activity but not the latter.24
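This balancing exercise is sometimes captured in shorthand by the “Hand formula” familiar from US negligence law, offered here purely as an illustration of the rule of thumb above: a precaution ought to be taken whenever its burden B is less than the probability of the harm P multiplied by the gravity of the resulting loss L, that is, where

$$B < P \times L.$$

On this approach, transporting nuclear waste justifies onerous precautions: P may be tiny, but L is so large that the product P × L still exceeds the burden of almost any precaution.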

These features are helpful in that the producers, operators and owners of AI systems which are capable of causing great harm would be required to take the most precautions. As such, negligence can (at least in theory) avoid creating restrictive rules which might dampen innovation and development unnecessarily.

Flexibility as to Whom Duty Is Owed

There is no set list of people who might claim in negligence. This is useful because the people with whom AI interacts may change over time and may not be predictable at the outset. Moreover, many of the people who are potentially affected by anything that the AI does will have no prior contractual relationship with the AI’s creator, owner or controller.25 For example, an AI-enabled delivery drone might come into contact with all sorts of people and things on the way to its destination, especially if it is able to design its own route and to adapt it without human input.

Duty Can Be Voluntary or Involuntary

A duty giving rise to potential liability might be undertaken deliberately or it might arise out of a person’s dangerous activities. If Juan wants to practise juggling knives whilst walking down the street, then he will come under a duty of care to passers-by regardless of whether he wants to or not.

As noted above, contractual liability requires that parties agree to be liable. If people were only liable for AI when they decided to be, this would lead to gaps in protection for third parties who stand to be affected by activities involving AI. The non-voluntary aspect of negligence is helpful in that it encourages subjects in any given legal system to have greater regard for all other participants than they otherwise might if they were pursuing a purely profit-maximising objective. In other words, the possibility of negligence liability can cause subjects to take into account the externalities of their actions and indeed to price these into their calculations (at least to the extent that such risk can be accurately calculated).

2.1.3 Shortcomings of Negligence

How Do We Set Standards for AI’s Behaviour?

The key question in negligence is generally whether the defendant acted in the same way as the average, reasonable person in that situation. In old English cases, judges illustrated this idea by asking whether a fictional “man on the Clapham Omnibus” might have done the same thing.26

However, problems arise when the reasonable person test is applied to humans using AI, and all the more so when it is applied to AI itself.

One option would be to ask what the reasonable designer or user of the AI might have done in the circumstances.27 For example, it may be reasonable to set a car to operate in a fully autonomous mode on a relatively clear motorway, but not in a hectic urban environment.28 Designers might supply AI with “health warnings” stipulating what is and is not advisable. This may be a workable short-term solution, but it runs into difficulties where there is no human operator of the AI on whom liability could be easily fixed.29 Moreover, using AI in the wrong way is only one source of potential harm. An AI entity designed for a specific purpose might still cause harm through some form of unforeseeable development even when it is used in that field. One example of AI causing harm through an attempt to carry out its stipulated goal is the intelligent toaster which burns a house down in a quest to make as much toast as possible.30 The more unpredictable the manner of failure, the more difficult it will be to hold the user or designer responsible without resorting to a form of strict liability.

In order to get around these issues, Ryan Abbott has proposed that if a manufacturer or retailer can show that an autonomous computer, robot or machine is safer than a reasonable person, then the supplier should only be liable in negligence rather than strict liability for harm caused by the autonomous entity.31 Abbott’s negligence test would focus on the AI’s “act instead of its design, and in a sense, it would treat a computer tortfeasor as a person rather than a product”.32 Abbott argues that negligence would be determined according to the standard of the “reasonable computer”, on the basis that “[i]t should be more or less possible to determine what a computer would have done in a particular situation”.33 Abbott contemplates establishing this standard by “considering the industry customary, average, or safest technology”.34

In practice, applying a “reasonable computer” standard may be very difficult. A reasonable human person is fairly easy to imagine. The law’s ability to set an objective standard of behaviour takes as its starting point the idea that all humans are similar. More precisely, the law assumes that we have a certain set of capabilities and limitations arising from our shared physiology. Some humans may be braver, cleverer or stronger than others, but when setting the negligence standard these variations do not matter. AI, on the other hand, is heterogeneous in nature: there are many different techniques for creating AI and the variety is likely only to increase in the future as new technologies are developed. Applying the same standard to all of these very different AI entities may be inappropriate.

Finally, certain applications of the reasonableness test in negligence are bound up with the way that humans operate in the world, in a manner which may not be applicable to artificial entities. For example, in UK law, a doctor will not be liable in negligence if she adopts a treatment accepted at the time as proper by a responsible body of medical opinion, even if other medical professionals would disagree.35 It is an open question whether this test would be applied to a medical AI, which one might reasonably expect to be not just as safe as a doctor, but even safer, much in the same way that we expect autonomous vehicles to be safer than those driven by humans.36

Reliance on Foreseeability

The law of negligence relies on the concept of foreseeability. It is used both in establishing the range of potential claimants, by asking “was it foreseeable that this person would be harmed?”, and in establishing the recoverable harm, by asking “what type of damage was foreseeable?” As noted in Chapter 2, the actions of AI are likely to become increasingly unforeseeable, except perhaps at a very high level of abstraction and generality.37 In consequence, holding a human responsible for any and all actions of AI would become less focussed on the human’s fault (usually the hallmark of negligence) and more like a system of strict or product liability, which are discussed further below.

2.2 Strict and Product Liability

Strict liability exists where a party is held liable regardless of their fault. It is controversial: by abandoning any mental requirements for liability, strict liability cuts against fundamental notions of human agency—namely the ability to understand consequences and plan for them.38 Justifications for strict liability include ensuring that the victim is properly compensated, encouraging those engaged in dangerous activities to take precautions,39 and placing the costs of such activities on those who stand to benefit most.40

“Product liability” refers to a system of rules which establish who is liable when a given product causes harm. Often, the party held liable is the “producer” of that product, though intermediate suppliers may be included as well.41 The focus is on the defective status of a product, rather than an individual’s fault.42 These regimes became popular in the second half of the twentieth century,43 particularly in response to increasingly complex supply chains, as well as highly publicised scandals involving mass-produced defective goods—most notably the “morning sickness” drug Thalidomide, which caused severe physical handicaps in children.44

The remainder of this section will focus on two of the most developed systems of product liability45: the EU’s Products Liability Directive of 198546 (the “Products Liability Directive”) and the US Restatement (Third) of Torts: Products Liability.47

In the EU, the test for defectiveness is somewhat open-ended. A product is defective when “it does not provide the safety which a person is entitled to expect, taking all circumstances into account, including (a) the presentation of the product; (b) the use to which it could reasonably be expected that the product would be put; (c) the time when the product was put into circulation”.48 The US Third Restatement adopts a slightly more structured approach.49 Defects subject to the regime must fall into at least one of three categories50: (a) design; (b) instruction or warnings; and/or (c) manufacturing.

These types of rules are by no means unique to the USA and Europe; for instance, the People’s Republic of China Product Quality Law 1993 (amended 2000) provides similarly that products shall be free from any unexpected dangers threatening the safety of people and property.51 Another example is Japan’s Product Liability Act (Law no. 85 of 1994).52

2.2.1 How Would Product Liability Apply to AI?

Suppose Alpha Ltd designs AI optical recognition technology for autonomous vehicles and supplies that technology to Bravo Plc, which uses it in its cars. Unknown to all parties, the technology cannot distinguish between certain shades of blue paint and the sky. When driving a new car he has purchased from Bravo Plc, Charlie engages the autonomous driving mode. A truck painted sky blue crosses the path of the vehicle, which does not recognise the obstacle. Charlie is killed instantly when his car hits the truck.53 Charlie’s family might be able to make a claim against Alpha Ltd, as the original producer of the AI (in addition to Bravo Plc, as the more immediate supplier). In fact, Charlie’s family could pursue a supplier at any level of the supply chain, including those of constituent parts or raw materials, so long as they were part of the faulty product.

2.2.2 Advantages of Product Liability

Certainty

Product liability regimes specify in advance which party is to be held responsible. This is especially helpful to victims. The victim does not have to pursue multiple different parties in proportion to their relative fault. Instead, once the supplier or producer of AI has been located, they are liable to the victim for 100% of the damages. The onus is then on the supplier or producer to seek out any other liable parties and to sue them for a contribution where appropriate.

From the perspective of the supplier or producer of AI, the certainty of their primary liability has a value in that it allows for more accurate actuarial calculations. The risk of damages can therefore be priced into the eventual cost of products, as well as provided for in the accounting forecasts of companies and in investor disclosure such as “risk factors” in a prospectus.

Encourages Caution and Safety in AI Development

Strict product liability could encourage AI developers to design products with rigorous safety and control mechanisms. Even in a situation where the AI will develop in unforeseeable ways, the designer or producer of the AI may still be identified as the best-placed person to understand and control risks.54

Michael Gemignani wrote the following in 1981 of computers. The same principles arguably apply with even more force to AI:

While the computer is still in its infancy, it may prove to be as beneficial, or as potentially harmful, as atomic power. If imposition of strict liability in tort would make the manufacturers of computer hardware and software more careful and more thoughtful in their race to develop an ultimate product, that alone would justify its application.55

2.2.3 Shortcomings of Product Liability

Is AI a Product or a Service?

Product liability regimes are so called because they relate to products, not services. Many commentators have assumed that product liability regimes will apply to AI without examining the important preliminary question of whether AI is a good or a service.56 In the EU, products are defined as “all movables” in Article 2 of the Products Liability Directive, which suggests that the regime applies only to physical goods. Consequently, a robot may be covered but some cloud-based AI may not be.

There have been debates in the past as to whether information contained in media such as books or maps is to be considered a “product” for the purpose of product liability. In the 1991 US case Winter v. G. P. Putnam’s Sons,57 the defendant published a book called The Encyclopedia of Mushrooms, which wrongly said that a poisonous mushroom was edible. Predictably, someone ate that mushroom and became critically ill. The US Court of Appeals for the 9th Circuit ruled that the information in the book was not a product for the purposes of the product liability regime. The court did say as an aside that “[c]omputer software that fails to yield the result for which it was designed” is to be treated as a product and therefore subject to product liability law. However, given that the judgement was from 1991, it seems reasonable to assume that the court was referring to traditional computer programs rather than those with AI capabilities.

The problem of bringing AI within product liability regimes applies outside the EU and USA. Fumio Shimpo, a member of the Japanese Government’s Cabinet Office Advisory Board on AI, writes “[for] an example of the current legal dilemma, I will refer the reader to an accident involving a robot which was caused by inaccurate information or software defect malfunction. At present, the questioning of the product liability of the information itself, which was the main cause of this accident, is outside the range of the current Japanese Product Liability Act”.58

To the extent that AI generates bespoke advice or output based on individualised input from a user, it would seem more closely to resemble the paradigm of a service rather than a product. In the light of this uncertainty, the European Commission (one of the three law-making bodies within the EU’s governing institutions) promulgated an Evaluation Project of the Products Liability Directive, which was completed in July 2017. The Evaluation’s aims included “[to]…assess if the Directive is fit-for-purpose vis-à-vis the new technological developments such as the Internet of Things and autonomous systems”.59 It investigated matters including “whether apps and non-embedded software or the Internet of things based products are considered as ‘products’ for the purpose of the Directive”; and “whether an unintended, autonomous behaviour of an advanced robot could be considered a ‘defect’ according to the Directive”.

Respondents included consumers, producers, public authorities, law firms, academics and professional associations.60 The results were published in May 2017.61 In response to the question “According to your experience, are there products for which the application of the Directive on liability of defective products is or might become uncertain and/or problematic?”, 35.42% of respondents said “yes, to a significant extent”, and a further 22.92% of respondents said “yes, to a moderate extent”. When asked to name the products which might give rise to such issues, 35.42% of respondents named both those “performing automated tasks based on algorithms and data analysis (e.g. cars with parking assistance)” and those “performing automated tasks based on self-learning algorithms (Artificial Intelligence)”.62 At the time of writing, the European Commission is still formulating a response to this issue63 but as matters stand it seems increasingly clear that the Products Liability Directive will need to be reformed if its coverage is to extend to AI in a predictable manner, or indeed at all.

Assumes Products to be Static Once Released

Product liability regimes operate on the assumption that the product does not continue to change in an unpredictable manner once it has left the production line. AI does not follow this paradigm.

Based on the assumption of products being static, the US and EU systems are subject to a number of defences which may prove overly permissive when applied to producers of AI. In the EU, the carve-outs from liability include:

… having regard to the circumstances, it is probable that the defect which caused the damage did not exist at the time when the product was put into circulation by him or that this defect came into being afterwards; or …that the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered….64

If products liability applies to AI at all, it is probable that producers will increasingly be able to take advantage of the above safe havens, thereby lessening the protections available to consumers.65

2.3 Vicarious Liability

Legal systems have a variety of mechanisms which make one person, the “principal”, responsible for actions undertaken by another person, the “agent”.66 In ancient times, several civilisations had highly developed criteria for determining the situations in which a master would be held liable for the acts of his slave.67 With the demise of slavery and the rise of industrial economies from the late eighteenth century onwards, at least some of the legal relationships originally developed for slavery came to be adapted and reapplied.68

Vicarious liability can arise today in employer–employee relationships (which, tellingly, are still sometimes called master–servant situations).69 Vicarious liability is also applied where one party, such as a parent or teacher, takes responsibility for the acts of others.70 The broad drafting of Article 1384 of the French Civil Code is particularly well adapted to both human and non-human relationships: “A person is liable not only for the damages he causes by his own act, but also for that which is caused by the acts of persons for whom he is responsible, or by things which are in his custody”.71

The paradigm situation of legal responsibility is that every person is responsible for their own free, willing and informed actions. Vicarious liability is an exception to this standard in that an agent can cause harm, but someone else (the principal) will be held responsible for them having done so. This does not mean that the agent will be completely exonerated. Usually, the agent will also be liable for their harmful acts, but the victim may choose to pursue a claim against their principal on the grounds that the latter has deeper pockets. After having paid the victim, the principal can usually go on to pursue the agent for damages by way of contribution.72

Though the two concepts are similar, vicarious liability differs from strict liability in that not every act of the agent will render their principal liable. For vicarious liability, first there has to be a relationship between the principal and agent which falls into the recognised categories set out above (e.g. employment). Second, the wrongful act must usually take place within the scope of that relationship.73 The UK Supreme Court recently held in Mohamud v. WM Morrison Supermarkets plc74 that a petrol station owner was vicariously liable for the actions of its employee who subjected a customer to a vicious and racist assault after the customer had asked to use a printer. Crucial to this liability for the supermarket was the fact that there was a “close connection” between the assault and the employee’s employment, despite the fact that the assault clearly breached the terms of the employee’s contract.75

In addition, some legal systems (such as Germany) also require that, for there to be vicarious liability, there has been a wrongful act by the agent. So, if the agent did not act wrongfully (e.g. for want of foreseeability), there is no vicarious liability on the part of the principal.

2.3.1 How Would Vicarious Liability Apply to AI?

A police force which uses a patrol robot might be vicariously liable in circumstances where that robot assaults an innocent member of the public during its patrol.76 Even if it did not create the AI system which the robot uses, the police force might be deemed most immediately responsible for the robot’s conduct and/or to derive a benefit from the robot. The assault may not have been desired or permitted by the police force, but it occurred within the scope of the robot’s assigned role. In a sense, the robot would be in a similar situation to a slave—namely an intelligent agent whose acts might be ascribed to a principal, without that agent being treated as a full legal person in itself.

2.3.2 Advantages of Vicarious Liability

Recognition of AI Agency

Vicarious liability strikes a balance between acknowledging the independent agency of AI and holding a currently recognised legal person liable for its acts. Whereas negligence and product liability tend to characterise AI as an object rather than an agent, vicarious liability is not so limited. For this reason, unilateral or autonomous actions of AI which are not foreseeable do not necessarily operate so as to break the chain of causation between the person held liable and the harm. The vicarious liability model is therefore better suited to the unique functions of AI which differentiate it from other man-made entities.

2.3.3 Shortcomings of Vicarious Liability

No Clarity on the Relationship Needed

The fact that vicarious liability is usually limited to a certain sphere of activities undertaken by the agent is both an advantage and a drawback. It means that not every act of an AI will necessarily be ascribable to the AI’s owner or operator. As such, the further AI strays from its delineated tasks, the more likely there is to be a gap in liability. In the short to medium term, whilst (predominantly narrow) AI continues to operate within tightly limited bands, this concern is less pressing.

AI could be treated as the “student”, “child”, “employee”, or “servant” and a human (or other legal person) as the “teacher”, “parent”, “employer” or “master”. Each of these models has particular nuances as to the scope and limits of responsibility of one party for the other. However, as noted at the end of Chapter 2, at some point, the primary offender (let’s say the child) is cut loose from being the responsibility of their potential principal (i.e. the parent). We would need to work out when, if ever, AI is to be cut loose from humans for legal purposes.

2.4 No-Fault Accident Compensation Scheme

A no-fault compensation scheme pays damages to victims of an accident, regardless of whether anyone else was at fault. The guaranteed nature of the damages means that as a corollary the victim will usually lose the right to sue anyone who might have caused the harm.77

New Zealand is the only country to operate such a scheme for all accidents.78 It has done so since 1974, thereby removing the tort system as a means of compensation for victims and deterrence of harmful conduct. The New Zealand scheme is funded by a series of dedicated levies held in different “accounts”: work, earners, non-earners, motor vehicles and (medical) treatment injuries. Money is raised from each relevant constituency by levies or taxes.79

New Zealand’s Accident Compensation Corporation, the government body which administers the scheme, explains: “Your levies pay for treatments, visits to health providers, rehabilitation programmes and equipment that may help in your recovery… We use levies to help you in your day to day life. This may be help with childcare, at home or transport to school and work”.

For people used to a system in which those who cause harm may be held liable to pay damages to the victim, the idea that no one would be subject to liability for personal injuries can seem counterintuitive or even perverse. However, in at least some industries where insurance is mandatory, the New Zealand scheme is not so far, in terms of economic effect, from jurisdictions which maintain the classical tort-based system. For example, in many countries some form of third-party insurance is required for drivers of motor vehicles. This means that if someone else is injured by a driver, then the driver’s insurance company will pay any relevant damages for which the driver would otherwise be liable. It is the insurer which pays out, rather than the individual driver. The insurers are in turn funded by all drivers in the country, thereby spreading the costs of accidents through the whole of society.

2.4.1 How Would No-Fault Accident Compensation Apply to AI?

In New Zealand, if AI caused or contributed to an accident, this would be treated in exactly the same way as any other accident: no claim would need to be made against a person associated with the AI. Instead, the victim would visit a healthcare provider for treatment. The Accident Compensation Corporation would provide support and compensation to the victim. As regards revenue generation, a system adopting no-fault compensation for AI might raise a special levy from the AI industry (though defining any such industry might present its own difficulties).

2.4.2 Advantages of No-Fault Compensation

Encouragement of Safe Practices

One major objection to New Zealand’s system is that it might not adequately discourage dangerous behaviour, given the disconnect between the causes of the harm and the paying party. Though this criticism has intuitive appeal, there is little evidence to support the idea that the New Zealand scheme leads to more tortious acts being committed.80 In practice, people are motivated to avoid causing harm to others by a range of social factors beyond the purely financial. In the “Haifa Kindergarten” experiment, a group of day care centres which suffered from the problem of parents failing to collect their children on time imposed a small fine each time that the parents were late. Parental lateness increased as soon as the fines were implemented. The reasoning for this surprising phenomenon is thought to be that a strong moral incentive to collect the children on time was replaced by a weaker financial one.81

The Accident Compensation Corporation seeks to shape behaviour so as to avoid harm on a prophylactic basis. Instead of using compensation and damages as deterrence, it engages in a range of preventative measures, including working with schools to teach children first aid and safety, as well as initiatives to improve health and productivity in the workplace.

Avoids Legal Questions of Liability

A no-fault compensation scheme sidesteps altogether the complicated legal issues highlighted in this chapter involving causation and foreseeability of the acts of AI. If no single person or entity needs to be held liable, then no legal theory is needed to link them to the accident. No-fault compensation combines the simplicity and certainty of a product liability or pure strict liability mechanism, but avoids their arbitrary nature by not requiring any single person to pay for the harm. Instead, society as a whole (or at least the relevant industry) pays collectively.

2.4.3 Shortcomings of No-Fault Compensation

Difficulty in Scaling up

By way of indication of the scale of the scheme in New Zealand (a country of only approximately 4.7 million people),82 in 2016 there were 1.7 million claims, which cost the scheme NZ$2.3 billion83 (approximately US$1.16 billion).

For a small country like New Zealand, such a system is manageable. Economies of scale are possible, and the advent of “big data” processing technology may make this task yet easier. Nonetheless, it is unclear how feasible it would be to increase the scheme to a country with tens or hundreds of millions of citizens.

Political Objections

Even if a no-fault compensation scheme were logistically and economically possible, those politicians and members of the public who are keen to see a smaller rather than a larger state on ideological grounds may well rail against the idea of such a large and powerful government-administered programme. Despite the example of New Zealand, only a handful of other countries have adopted a similar scheme in the more than 40 years since it was instigated.84

Whether to Limit Only to Compensation for Physical Injury

One major limitation of the New Zealand scheme is that it only covers physical (and some instances of psychological) harm to humans.

Two major areas are left out: first, harm to property is not covered. Secondly, the New Zealand scheme does not cover financial loss which is not directly related to physical harm (known as “pure economic loss”).

The vast and increasing range of AI’s applications means that harm which it causes will not be limited merely to physical accidents. If an AI trading program invests all of a company’s money in a volatile commodity/financial instrument like Bitcoin immediately before a crash, then under the New Zealand scheme there would be no compensation available to the victim. They would have to seek recourse through the various other mechanisms identified above and below, such as negligence, product liability or contract.

2.5 Contract

A contract is a legally binding agreement, or set of promises.85 Not all promises are enforceable in law: a promise to meet a friend for dinner is unlikely to have contractual force. In order to distinguish a mere promise from a contract, legal systems impose a series of requirements. These can range from formalities such as a need for contracts to be made in writing,86 to a requirement that something of value be exchanged.87

2.5.1 How Would Contracts Apply to AI?

Determining Who Is Responsible

In a paradigm situation, two or more parties would enter into a formal agreement to determine who would be legally responsible for the acts of the AI in question. Typically, in return for a payment the seller of a product or service will make a series of promises (sometimes called representations and warranties) about what it is selling.88

Contracts can decrease as well as increase a party’s liability. Clauses in an agreement may exclude liability for all or some types of harm, or put limits on what is payable. The seller of a medical AI diagnostic program may exclude liability to a hospital buying the software for harm caused where the AI misdiagnoses a patient. At the other end of the spectrum, a seller of AI could agree to pay any relevant debts incurred by the buyer (i.e. indemnify her) for any harm which that AI causes. In 2015, the CEO of Volvo announced that the company would accept all liability for harm caused by its cars when they are operating autonomously.89 It is hard to say whether the CEO’s statement was intended to have contractual effect. However, in a seminal English case, Carlill v. Carbolic Smoke Ball Company,90 a company’s boast on a promotional poster that it would pay £100 to anyone who used its product and was not cured of ’flu was held to be binding. Volvo might end up being held to its promise.

Can AI Conclude a Contract in Its Own Right?

Suppose you are buying a new sofa online. You see a sofa you like, being sold by a vendor called SOFASELLER1. You pay the purchase price and the sofa is delivered. Would it matter if SOFASELLER1 was an AI system?

Where an AI system is contracting on behalf of a further principal, in the capacity of an agent, then it seems likely in many situations that the contract will be effective. Indeed, this is how much trading occurs online, where automated programs are mandated to buy, sell and bid on behalf of people and companies. Fumio Shimpo points out that not all such contracts will be binding under Japanese law; if the AI fails to identify itself as such and entices a person to enter into a contract, then such contract might be deemed “equivalent to a mistake of an element (Article 95 of the Japanese Civil Code)”, and potentially rendered ineffective.91

There are many automated contractual systems operating today—from consumer sales to high-frequency trading of financial instruments. At present, these all conclude contracts on behalf of recognised legal people. That may not always need to be the case. Blockchain technology is a system of automated records, known as distributed ledgers. Its uses can include chains of “self-executing” contracts, which can be performed without any need for human input. This technology has already given rise to novel and uncertain questions as to liability arising from a particular blockchain system in which all parts are interconnected.92 In a situation where AI concludes a contract without direct or indirect instructions from a principal, it remains unclear how a legal system would address liability arising from such an agreement; the AI would require legal personality to be able to go to court to enforce such a contract—the possibility of which is discussed further in Chapter 5.

United Nations Convention on the Use of Electronic Communications in International Contracts

There have already been some attempts to create special laws to account for the role of computers in concluding contracts. Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts 2005 provides:

A contract formed by the interaction of an automated message system and a natural person, or by the interaction of automated message systems, shall not be denied validity or enforceability on the sole ground that no natural person reviewed or intervened in each of the individual actions carried out by the automated message systems or the resulting contract.

Legal commentators Čerka, Grigienė and Sirbikytė have contended that Article 12 “states that a person (whether a natural person or a legal entity) on whose behalf a computer was programmed should ultimately be responsible for any message generated by the machine”. On this basis, they argue that the Convention is an appropriate tool for determining responsibility for AI, in the absence of other direct regulation, because “[s]uch an interpretation complies with a general rule that the principal of a tool is responsible for the results obtained by the use of that tool since the tool has no independent volition of its own”.93

However, Article 12 does not stand for the proposition that the aforementioned academics suggest.94 Article 12 is expressed as a negative proposition: computer-generated contracts are not to be denied validity solely because of a lack of review. The academics reverse this by suggesting a positive proposition, requiring that every computer has a person responsible—thereby transforming the meaning of Article 12. Even if Article 12 did fix responsibility for AI on the “person on whose behalf it was programmed”, application of this provision is likely to become increasingly problematic the more that AI is able to learn and develop independently of its original inception, and thereby act as an agent in its own right.95

2.5.2 Advantages of Contractual Liability

Respect for Parties’ Autonomy

Contracts give legal expression to human agency and choice. For this reason, in many economies and legal systems, freedom of contract is treated as a paramount value.96

Unlike the various other schemes described above where policy decisions as to risk allocation are taken either by judges or legislators, contract allows parties to exercise their autonomy so as to allocate risk between them. It can be assigned a price, and that price can be reflected in the transaction. In theory, this should lead to resources being allocated most efficiently according to market forces.

2.5.3 Shortcomings of Contractual Liability

Contracts Only Apply to a Limited Set of Parties

The main disadvantage of relying solely on contracts to regulate liability for AI is that they are very limited in terms of to whom they apply (a feature sometimes referred to as “privity”). Contracts only create rights and obligations between the contracting parties or occasionally a limited class of third-party beneficiaries.97 Contracts are therefore of no use in determining liability where there was no prior contractual agreement. A pedestrian who is injured by a self-driving car whilst walking down the pavement next to a road will not have agreed a contract with the designers, owners or operators of any vehicles driving past.

Secrecy

Parties to a contract may agree that its content, and even its existence, is to be kept private as between them. This can be very helpful for commercial entities who wish to protect certain elements of their dealings from competitors or the public. However, where contracts are private, this can also have negative effects by minimising the signalling effect such agreements might otherwise have for other market participants.98 Without accurate information about what certain parties are doing, others will find it difficult to regulate their own behaviour. Secrecy might prevent consistent market behaviour from developing and thereby increase the cost to parties of negotiating each individual agreement on liability from scratch.

AI companies may have strong individual incentives to hide their agreements on responsibility for harm. Even the existence of such an agreement might be reported in the press as suggesting that the AI is somehow unsafe. Many systems require certain transactions to be recorded on a public register, such as those relating to land. One solution to the secrecy issue would be for contracts concerning liability for AI to be made public. The obvious objection to this is that it would be enormously bureaucratic to store such details on a public register, and commercial parties may well refuse to do so, on the basis of well-established legal principles including confidentiality and privacy. Distributed ledger technology such as blockchain offers one option as to how contracts relating to AI might be made a matter of public record. However, it seems unlikely that many market participants would agree to this level of public scrutiny unless they were required to by law.

Quasi-Hidden Contracts

Contractual arrangements concerning AI will work best where they are made between parties who are able to understand the obligations to which they are binding themselves, and are able to weigh up the benefits and disadvantages of the position they have taken. In reality, this often is not the case.

Members of the public enter into many different contracts on a daily basis without realising or consciously agreeing the terms. This can include accepting the conditions of carriage when we take a bus or a subway,99 or the End User License Agreement which mobile app users generally flick past before clicking a box to signify that they accept. Many apparently “free” services are provided on the basis of quasi-hidden contracts. Users might receive a utility such as online mapping services, and in return, they signify their consent by contract to the provider recording and using their location and search data. There is occasional disquiet when the extent of such agreements on personal data is brought to the attention of consumers—as occurred in 2018 when a scandal broke over Facebook’s data collection and use by third parties such as Cambridge Analytica.100 Despite the somewhat manufactured outrage in the press, the extent to which people were signing away rights to their data secrecy would in most cases have been discoverable to any user who had looked closely enough at the terms and conditions to which they agreed as a quid pro quo.

Even if average consumers do not have the time or inclination to pore through dozens of pages of tightly worded legalese, there are often “safety nets” which guarantee consumer rights against exploitative or unfair contracts. These can include legislation which bans unfair contractual terms101 or requires special attention to be drawn to particularly onerous terms.102 If contracts concerning AI are to become widespread among non-expert members of the public, it may be necessary for the law to impose limits or safeguards upon the rights that people can unwittingly sign away.

Limitations of Language

A further disadvantage of using contracts to manage responsibility for AI is that though such legal agreements are very useful for planning what should happen in circumstances predicted by the parties, they are less helpful for determining what should happen where the contract is vague or silent. Individually negotiated contracts often embody a compromise between the parties, with the result that neither agrees on the meaning of a contentious clause.

At least for written agreements, creative drafting may be able to cater for some uncertainty, but it remains likely that the rigid nature of contracts will have some difficulty in accommodating the unpredictability of AI. Moreover, the interpretation of words is an inherently uncertain exercise.103 Contractual disputes can be resolved by courts, but in advance of their decision any certainty will have been compromised.

2.6 Insurance

Insurance is a specific type of contract, under which one party (the insurer) agrees either to pay certain amounts of money or, more rarely, to undertake steps to otherwise compensate another party (the insured), if certain events occur. Typically, in exchange the insured will pay a sum known as the “premium” at specified intervals, for example, monthly or annually.

Insurance is a form of risk management, whereby the insurer adopts the risk of certain events occurring, in exchange for a fee.104 Insured parties will often pay a relatively small premium relative to the overall amount which is to be paid out. The less likely an event, the lower the ratio of premium to payout. A householder might pay $500 a year for buildings insurance, which might pay out $500,000 in the event that the building is destroyed by an insured risk, such as fire. The insurer benefits because—assuming it has got its calculations correct—the net amount of premiums it is paid will exceed the amounts of money it pays out to insured parties.105
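To put rough numbers on this example: suppose, purely for illustration, that the insured risk has a 1-in-2,000 chance of materialising in any given year. The insurer’s expected payout on the $500,000 policy is then

$$\frac{1}{2000} \times \$500{,}000 = \$250,$$

which is comfortably below the $500 annual premium, leaving the insurer a margin to cover its costs and the possibility that it has underestimated the probability of the risk.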

2.6.1 How Would Insurance Apply to AI?

US Judge and author Curtis Karnow has suggested that the best way of dealing with liability for artificial intelligence is to have an insurance scheme:

Just as insurance companies examine and certify candidates for life insurance, automobile insurance and the like, so too developers seeking coverage for an agent could submit it to a certification procedure, and if successful would be quoted a rate depending on the probable risks posed by the agent. That risk would be assessed along a spectrum of automation: the higher the intelligence, the higher the risk, and thus the higher the premium and vice versa.106

Insurers could sell “third-party” policies to potential defendants to protect against claims for harms caused to others by AI. They could also sell “first-party” policies to potential victims so as to ensure that they are compensated in the event that they are harmed by AI.

For most activities and industries, insurance policies are voluntary. As such, there can be gaps in coverage where an uninsured party causes harm and then disappears or is unable to satisfy claims for compensation made against it. There are some notable exceptions, such as mandatory automobile insurance,107 which is imposed by law on the basis of the high number of car users, the frequency of car accidents and a desire on the part of policy-makers to ensure that victims have a quick and certain recourse, particularly in the event that the driver at fault is impecunious.108 Similar policy considerations may well make it desirable for some form of AI insurance to be made mandatory, at least to cover risks to third parties.

2.6.2 Case Study: UK Automated and Electric Vehicles Act 2018

The UK Parliament enacted the Automated and Electric Vehicles Act in July 2018.109 This legislation extends the compulsory insurance scheme for normal road vehicles in the UK to cover automated ones. Section 2 of the Act provides:

(1) Where— (a) an accident is caused by an automated vehicle when driving itself..., (b) the vehicle is insured at the time of the accident, and (c) an insured person or any other person suffers damage as a result of the accident, the insurer is liable for that damage.

The point of section 2(1) of the Act is to make clear that an insurer will be required to provide coverage for accidents caused by a vehicle when driving in autonomous mode, where that vehicle is already insured. The Act also extends mandatory insurance from covering only harm to third parties to include the party insured (often the driver of the vehicle). This is helpful from the perspective of legal certainty and will likely encourage the development of the UK’s autonomous vehicle industry. However, the Act does not resolve underlying legal questions of ultimate responsibility. Section 5(1) provides: “any other person liable to the injured party in respect of the accident is under the same liability to the insurer or vehicle owner”. There is no indication as to who these other liable parties may be. The result is that difficult questions of ultimate responsibility for AI are simply “kicked down the road”.

2.6.3 Advantages of Insurance

Partial Solution to Unpredictability

The essence of insurance law is to cater for situations of uncertainty. Insurance policies cover parties against matters as diffuse as natural disasters, incurable or debilitating illness, as well as human-caused events such as sabotage or terrorism.110 The unpredictability of AI which makes it particularly problematic for other areas of law may not be such an issue for insurers. By passing on the cost of harm to insurers for a fixed price, parties can plan for unknown risks with much greater certainty. The cost of insurance policies can therefore be written into financial predictions for investors and passed on to the end user of a good or service in the price they pay, thus spreading the burden throughout market participants.

Behaviour Channelling

Insurance typically has a channelling effect as regards the behaviour of the insured because the insurer has an interest in minimising the risk of harm. Insurers may require certain behaviour of the insured parties in order that their policy remains valid. For example, insurers of contents in a property may insist that there are locks on the doors and windows. As regards AI, insurers could require that insured parties adhere to certain minimum standards in the design and implementation of the AI.111

2.6.4 Shortcomings of Insurance

Parasitic on Underlying Liability

Insurance does not alter underlying legal liabilities. Rather, it redirects the liability to pay damages away from the person who caused harm (if any) to the insurer.112

If the victim of harm caused by AI would not have a right of recourse against the insured party, then the insurer would have no reason to pay the victim any money. Insurance only operates via the liability stipulated under the various other private law theories of responsibility and compensation set out above (or otherwise by specific legislative intervention—as in the Automated and Electric Vehicles Act 2018). A party may be insured for harm they negligently cause or for which they are strictly liable. This means that, from a victim’s perspective, an insurance policy taken out by an AI owner/controller will only be helpful to the extent that the victim can assert a right against the insured party.

One option is for the various different candidates to each insure themselves separately. So for an autonomous vehicle, insurance might be taken out by the company which has produced the vehicle (which we will assume for the sake of this example also designed the AI), as well as by the owner of the vehicle. That way, if either a passenger or another road user is injured or suffers loss as a result of a crash caused by the AI, then there is at least some certainty for the victim that they will receive a payout, and there is further certainty for the insured parties that they will pay only the premium. However, this still does not stop the different insurers from fighting between themselves over liability if one pays the victim in full and then seeks contribution payments from the others—as might occur under section 5(1) of the Act.

Exceptions and Exclusions

A prudent insurer will set boundaries on its liability. It will exclude liability for harm caused by the deliberate or wilful act of the insured. A building’s insurer would not pay out if the owner of the building deliberately sets it alight.113

Insurers might seek to exclude liability where the AI undertakes an activity outside a set range (e.g. if a delivery robot is used as a concierge). The more unpredictable the insured AI, the more difficult it will be for the insurer to assess and ultimately set a price for the likelihood of damage. Whether or not this renders insurance prohibitively expensive remains to be seen. As recent US experience in medical insurance exchanges demonstrates, it can be extremely difficult for a government to compel insurers to enter markets which they do not consider to be economically viable.114

3 Criminal Law

There may be significant overlap between the conduct which can give rise to civil and criminal consequences. Generally speaking, the more stringent measures available under criminal law require a higher degree of fault. Criminal liability usually requires not just a culpable act (sometimes referred to as actus reus), but also a certain mental state on the part of the defendant: the guilty mind or mens rea. Unlike tort law, which usually uses an objective mental standard (asking what a reasonable person would have done), in criminal law the focus is generally on the defendant’s subjective state of mind: what the perpetrator actually believed and intended to do.

The mental requirements necessary for a crime to have been committed differ between legal systems and between different crimes themselves. Sometimes the mens rea required for guilt goes beyond the defendant having foreseen the consequences of her actions and requires that she actually intended, desired or willed the consequences to take place.115 Under English law, a person who throws a brick off a balcony is unlikely to be found guilty of murdering a person on whom the brick lands unless she intended either to cause death or serious harm.116

3.1 How Would Criminal Law Be Applied to Humans for the Actions of AI?

3.1.1 AI as an Innocent Agent

Where AI is deemed to have followed the instructions of a human and undertaken an act which, if carried out by a human, would be a crime, then the actions of the AI would normally be attributed to the human.117 Provided that the human had the requisite mental state, then she will be guilty. The AI would be legally irrelevant.118 It would be a mere tool in the hands of the perpetrator, like the knife used by a murderer. As the California Supreme Court found in People v. Davis: “Instruments other than traditional burglary tools certainly can be used to commit the offense of burglary… a robot could be used to enter the building”.119

Innocent agents need not be limited to inanimate objects. An entity which is considered to have some intelligence may still be an innocent agent. If an adult asks a child to pour a poisonous liquid into another person’s drink while that person is not watching, then the adult who provided the poison and directed the child is likely to be found guilty of a crime, even if the child would not be. This section concerns the criminal liability of humans for the acts of AI. Section 4.5 of Chapter 5 will cover the possibility of criminal liability for the AI itself.

3.1.2 Vicarious Criminal Liability of Humans

Vicarious liability in criminal law operates in a broadly similar manner to vicarious liability in private law and is subject to the same limitations set out above. One major difference between the two is that private law vicarious liability does not focus on the mens rea of the principal; rather, the question turns on the relationship between the principal and the agent. By contrast, in criminal law the principal must normally have the mens rea necessary for the relevant crime.120 If the mens rea requirement is merely that the principal was reckless as to harm (as opposed to intending harm), then this may not be a particularly difficult barrier for a prosecutor to overcome.

If a programmer creates an AI system for making toast and that machine then burns down a house, killing everyone in it, on the reasoning that this would ensure “all the bread would be toasted”, then the programmer may face criminal consequences for their reckless behaviour in creating such a program. Legal scholar Gabriel Hallevy describes this as “natural-probable-consequence” liability, explaining that it “seems legally suitable for situations in which an AI entity committed an offense, while the programmer or user had no knowledge of it, had not intended it, and had not participated in it”.121

3.2 Advantages of Humans Being Criminally Responsible for AI

Criminal law functions best where it accords closely with society’s moral precepts.122 An effective system of criminal law cannot be imposed without reference to what a given polity thinks ought to be criminal. Psychological studies suggest that humans are innately retributivist: if someone has caused harm, our natural response is to seek out a responsible person who deserves to be made to suffer.123

3.3 Shortcomings of Humans Being Held Criminally Responsible for AI

3.3.1 Retribution Gap

Given that a criminal conviction is such a serious and often enduring sanction, it ought to be reserved for situations in which the perpetrator’s wrongdoing is of a particularly blameworthy character. The big challenge as regards AI is that the more advanced it becomes, the more difficult it will be to hold a human responsible, let alone blameworthy, for its acts without stretching accepted notions of causation beyond recognition. Legal philosopher John Danaher has described the gap between humanity’s expectation that someone will be held responsible and our present inability to apply criminal law to AI as opening up a “retribution gap”.124

Though, as shown above, it is quite possible to split the function of assigning responsibility from the function of paying compensation in the private law context, splitting responsibility from punishment in criminal law is far more problematic. Retributive punishment is linked to moral desert and not just pragmatic considerations.125 Danaher cautions: “… I have noted how doctrines of command responsibility or gross negligence could be unfairly stretched so as to inappropriately blame the manufacturers and programmers. Anyone who cares about the strict requirements of retributive justice, or indeed justice more generally, should be concerned about the risk of moral scapegoating”.126

There are then two options: either to treat the actions of AI as “Acts of God” which have no legal consequences or to somehow find a “responsible” human. Unlike earthquakes or floods, the acts of AI are unlikely to be viewed as unfortunate but morally neutral natural disasters.

3.3.2 Over-Deterrence

The severity of criminal liability may have a chilling effect on the progress and development of new and more powerful AI if programmers are potentially subject to criminal sanctions. The financial burden of compensation payments to victims of harm caused by AI can be passed on to an employer or insurer—or may even be treated simply as a business risk. Criminal liability, by contrast, is usually personal, and it is difficult for an individual to avoid by saying that he or she was merely following superior orders. Moreover, criminality has a social cost that cannot necessarily be displaced or expunged in monetary terms. If this threat hangs over programmers, then they might be less inclined to invent or release otherwise helpful technology.127

4 Responsibility for Beneficial Acts: AI and IP

The foregoing sections of this chapter, and indeed the majority of academic debate, have focussed on liability for harm caused by AI. The present section will address responsibility for beneficial acts or creations. When a human paints a picture, writes a book, invents a new medicine or designs a bridge, most legal systems provide structures for determining ownership of that work and for protecting the author against unauthorised copying of their creation. Other laws protect commercial reputation. This body of law is called “intellectual property” (IP).

AI is already creating new and innovative products and designs, whether in technical fields such as engineering and architecture,128 or in industries such as art or music production.129

AI systems can go even further than replicating a person’s style. Researchers from Rutgers University, the College of Charleston and Facebook’s AI Research Lab have created AI capable of making abstract art so convincing that human experts could not tell which works were made by AI and which by human artists.130 Sceptics might argue that AI can never be truly “creative” in a philosophical sense, and that such programs merely synthesise and replicate existing work. The problem with this argument is that the same point could be made of virtually any human artistic or literary creation. Indeed, there is a good argument for saying that AI is even more creative than humans, in that humans are restricted by their biological faculties, whereas AI is capable of “thinking” and operating in an entirely different manner. Regardless of one’s philosophical position on the matter, there is already ample evidence of AI creating works which would qualify for protection under intellectual property law were they created directly by a human.131

Despite these advances in creative technologies, legal structures for protecting creations are lagging well behind.

4.1 Copyright

Copyright is a system of protection of original works which focusses on the creative activity of the creator when he or she composed the work in question. Most other intellectual property rights focus instead on the objective character of the subject matter regardless of how it was brought into existence. Thus, if Vincent paints a picture which does not copy from anyone else’s picture or design, then he is likely to be accorded copyright protection in whatever he has painted, even if it is the same as a picture that someone else has painted (unbeknownst to Vincent). The focus of the protection for copyright is more on the creative process and less on the objective novelty of the output.

Under EU law, original literary and artistic works are covered by various copyright protections, which provide certain rights to the author.132 A work or part of a work is regarded as original if it is the author’s own intellectual creation,133 reflecting his or her personality through an expression of free and creative choices, thereby stamping the work with his or her personal touch.134

Although individual words, figures or mathematical concepts as such do not qualify as an original work, a sentence or phrase may be protected if it constitutes an expression of the intellectual creation of the author through the choice, sequence and combination of words.135 As noted above, AI is capable of creating original work for the purposes of this definition. Under EU law, the first owner of copyright is the author.136 The relevant legislation and case law implicitly assume that the author is a legal person. The ownership of an original work can be adjusted by employment or another contractual relationship, but the point remains that, in legal terms, copyright ownership always assumes that the creator is also an entity capable of holding rights.137

Generally speaking, legal systems do not provide for copyright-protected works being created by non-humans. Andres Guadamuz wrote in the Magazine of the World Intellectual Property Organisation: “Creative works qualify for copyright protection if they are original, with most definitions of originality requiring a human author. The legislation of several jurisdictions, including Spain and Germany, appear to suggest that only works created by a human can be protected by copyright”.138 The US Copyright Office has declared that it will “register an original work of authorship, provided that the work was created by a human being”,139 citing the 1884 case, Burrow-Giles Lithographic Co. v. Sarony.140

In the US case Comptroller of the Treasury v. Family Entertainment Centers,141 a Maryland Court was asked to decide whether animatronic puppets that danced and sang at restaurants triggered a state tax on food “where there is furnished a performance”. The court decided that the animatronic puppets were not performing:

[A] pre-programmed robot can perform a menial task but, because a pre-programmed robot has no ‘skill’ and therefore leaves no room for spontaneous human flaw in an exhibition, it cannot ‘perform’ a piece of music … Just as a wind-up toy does not perform for purposes of [the statute,] neither does a pre-programmed mechanical robot.142

Although this was a tax case, its discussion of creativity in relation to a performance could be relevant to copyright. The puppets in Family Entertainment Centers were not robots in the sense used in this book; as the court found, they were deterministic, pre-programmed automatons. There was no discretionary or unpredictable aspect to their performance. Based on this reasoning, it appears that the outcome of Family Entertainment Centers might have been different if the puppets in question had used AI to adapt and perfect their performance over time.

Some legal systems have attempted to accommodate AI, or at least computer-generated works, within their provisions on intellectual property.143 For instance, the UK, Ireland and New Zealand acknowledge that different principles are required for AI than for direct human creators, but nonetheless seek to establish a causal link between the eventual creation and an initial human input. The UK Copyright, Designs and Patents Act 1988 (CDPA) provides at section 9(3):

In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.144

Section 178 of the CDPA provides that a computer-generated work is one that “is generated by computer in circumstances such that there is no human author of the work”. This provision does not allow for AI itself to be considered the author. Instead, it engenders a two-stage analysis: the first stage is to identify whether there is a human author. If a human author cannot be found, the second stage is to identify the person “by whom the arrangements necessary for the creation of the work are undertaken”. Where the work is generated by an AI entity, disputes may arise at both stages.

As to the first stage, there may be issues including how closely the inputs must be related to the outputs in order to classify the person who provided those inputs as the author. As to the second, it is unclear how one would identify the person who “made the arrangements”. It could be the person who built the system, the person who trained it or the person who fed it the specific inputs.145 Matters are complicated still further if one or more of these parties is itself another AI entity.

4.2 Case Study: The “Monkey Selfie” Case

In 2014, a crested macaque monkey (or rather a charity which claimed to be acting on behalf of that monkey) claimed copyright in a “selfie” (self-portrait) which it had taken using a professional photographer’s camera.146 The monkey, Naruto, was named as a plaintiff in a case in the Northern District of California against the photographer, David Slater.147 It was reported in late 2017 that the photographer had settled with the monkey’s representatives,148 after more than two years of costly legal battle,149 which Slater said had left him broke.150 Slater was reportedly required by the settlement agreement to donate 25% of the earnings from his book to charities “that protect the habitat of Naruto and other crested macaques in Indonesia”, as the animal charity described it.151

In April 2018, despite the parties having settled out of court, the US Court of Appeals for the 9th Circuit nonetheless chose to rule on the matter and concluded that the relevant Copyright Act made no provision for animals to sue. There the story ended for Naruto’s selfie rights claim. Interestingly, the Court of Appeals left open the possibility of animals “asserting” constitutional rights in other contexts, noting that animals still had constitutional standing to bring claims in a federal court, following a precedent set in an earlier case involving dolphins and whales.152

The Naruto case demonstrates the jurisprudential difficulties which arise when a “creative” act is carried out by a non-human entity. Although the eventual conclusion of the courts was that the relevant statute did not extend to protecting the intellectual property of animals or other entities without legal personality, the wider question is whether it should do so.

4.3 Patents and Other Protections

Copyright is not the only type of intellectual property law to be challenged by AI. Patents are a form of local monopoly granted over a particular invention. A classic example of an invention protectable by patent is a new pharmaceutical drug. In contrast to copyright’s emphasis on the state of mind of the creator, the criteria for patent protection vary between systems, but generally speaking a patent will be granted if an application is made for an invention which is new, non-obvious and of some potential use, regardless of the process by which it came into existence.153 However, as with the “creativity” issue, current laws do not accommodate AI as an inventor for the purposes of patent law.154

The difference between copyright and patent protection is particularly important where AI is involved. It may be easier for AI to create subject matter protectable by patents (albeit not hold them) than for AI to create subject matter protectable by copyright.

Other tests apply to the creation and enforcement of IP rights known as trademarks (which protect branding) and designs (which protect the appearance of products). Like patents, the conditions for protection of these two categories are objective. It is quite conceivable that, after being exposed to a data set featuring furniture from many other companies (as well as perhaps other sources of inspiration, such as nature or art), an AI system might create an entirely new design, let us say for a chair. The AI system might even acquire a reputation for making innovative furniture. Both of the above are in theory capable of protection under IP law, at least when created or developed by humans.

Without a new rule for ascribing AI’s works or discoveries to an existing legal person, such as a human or a company, current laws are manifestly unsuitable for accommodating and safeguarding AI’s creations. This lacuna in legal protection might in turn discourage the development of creative AI in circumstances where the original developers are unsure who, if anyone, would own its creations.

5 Free Speech and Hate Speech

The freedom to express ideas, within certain limits, is protected by many legal systems. In the USA , there is the First Amendment to the Constitution; in Europe, there is Article 10 of the European Convention on Human Rights . Similar protections exist under the constitutions of South Africa,155 India156 and other countries.

If AI can generate content which, if spoken or written by a human, would qualify for free speech protection, the question arises whether the AI’s speech should be granted the same protections. In order to address this question, it is first necessary to investigate the reasons underpinning legal protections for free speech. Toni Massaro and Helen Norton describe the compendium of reasons for protecting free speech (in the USA) as follows:

…there is no unifying theory of the First Amendment. The most influential theories have been clustered into arguments based on democracy and self-governance, a marketplace of ideas model, and autonomy.157

Motivations like “autonomy” seem to be linked to conceptions of individual human dignity, which do not at present apply to AI.158 However, as regards instrumentalist values such as the “marketplace of ideas”, there does not seem to be any reason why society would derive less benefit from a new idea generated by AI than it would from a new idea generated by a human.159

Not all speech is protected, and in most systems, some speech is prohibited. Where speech is deemed injurious to another person, then it can lead to private law liability in libel or slander. Where it is thought harmful to religion, it can lead to criminal blasphemy charges. Speech insulting to the royal family or head of state in some countries can lead to charges under lese-majeste rules.160 Other laws may prohibit speech which incites violence. In short, there is a myriad of complex legal principles across the world which both protect and constrain what a person may say. In some countries, these protections are not limited to human persons. The US Supreme Court has confirmed that corporations are entitled to have their freedom of speech protected.161 The question of how such rights and restrictions might apply to AI remains undetermined.

These are not just hypothetical problems. Comedian Stephen Colbert helped design a Twitter bot called “@realhumanpraise”: a program which pairs epithets from a film review website with the names of Fox News personalities, with sometimes scurrilous results.162 Though @realhumanpraise may not use AI, it is certainly conceivable that an AI-powered program might be used to similar (if not more offensive) effect. Where the relevant laws require some form of intent as well as the harmful speech itself, it seems difficult for a human to be held liable for the “speech” of the AI system. This is especially so where the combination of words and ideas used is not foreseeable.
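To see why the specific combinations may be unforeseeable, consider a minimal, purely hypothetical sketch of a pairing bot of this kind (the real program’s code is not public; all names and phrases below are invented for illustration):

```python
import random

# Hypothetical inputs: invented review phrases and placeholder names,
# standing in for scraped film-review epithets and media personalities.
epithets = [
    "a triumph of style over substance",
    "surprisingly tender and humane",
    "gloriously unhinged from start to finish",
]
personalities = ["Presenter A", "Presenter B", "Presenter C"]

def generate_praise() -> str:
    # The author writes only this pairing rule; the published sentence
    # is chosen at random, so no particular output is ever intended.
    return f"{random.choice(personalities)}: {random.choice(epithets)}"

if __name__ == "__main__":
    print(generate_praise())
```

Even in this toy version, the human author specifies only the pairing rule and the source lists; the particular sentence that is published is selected at random, which is precisely why a legal test turning on intent as to a specific statement is difficult to satisfy.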

Mr. Colbert’s program was intended as satire, but many have raised concerns about the potential for automatically generated Internet content to shape human opinions and even elections. One prominent example is the alleged use of “Twitter Bots” by individuals and organisations aligned with Russia to shape opinion in matters including the 2016 US election163 and the UK’s Brexit vote.164 It is not clear whether AI has yet played any role in the generation of messages apparently designed to polarise voters, but the possibility is obvious.

In November 2015, Victor Collins was found dead in the hot tub of another man, James Bates. Mr. Bates was accused of murder. His Amazon Echo, a home speaker device incorporating an AI virtual assistant, was potentially a key “witness” to the alleged crime, and the local police in Arkansas issued a warrant asking Amazon to divulge data from the relevant period. In a February 2017 court filing, Amazon cited US First Amendment freedom of speech protections—not just for the human voice commands which may have been heard by the device, but also for the device’s responses. Amazon abandoned this argument a month later, but the episode again called into question whether the AI was entitled to have its speech protected.165

As with protections for free speech, the intentions or even the identity of the speaker of “harmful” speech may be far less important than its content. Should a racist message be seen as any less problematic because it is generated by AI rather than by a human? In one notable public relations disaster, Microsoft’s flagship AI chatbot “Tay”, which was apparently modelled to speak like a “teen girl”, was rapidly decommissioned after it began sending racist, neo-Nazi, conspiracy-theory-supporting and sexualised messages.166

Because current rules protecting and prohibiting speech are focussed on shaping the actions of humans, there remains a gap as to how the speech of AI is to be regulated. One option for AI-generated hate speech is to penalise the publisher (e.g. public social networks such as Facebook, Instagram or Twitter) on a strict liability basis. A law enacted by Germany against hate speech (from any source) communicated via a social network has already been criticised by some as overstepping the mark.167 Moreover, it is not always certain that AI speech will be conducted via the medium of such a provider. In any case, until a solution is chosen the law will remain unclear, and potential loopholes for harmful speech will persist.168

6 Conclusions on Responsibility for AI

The aim of this chapter has been to demonstrate the ways in which established legal mechanisms might address responsibility for AI. Running through each is a tension as to whether AI should be treated as an object, a subject, a thing or a person. Current laws can and will in the short term continue to determine responsibility for AI in the ways set out above. The bigger question is whether society’s aims would be better served by reformulating our relationship with AI in a more radical fashion. The following chapters consider some of the changes we might make.