© The Author(s) 2019
Jacob Turner, Robot Rules, https://doi.org/10.1007/978-3-319-96235-1_6

6. Building a Regulator

Jacob Turner1  
(1)
Fountain Court Chambers, London, UK
 
 

1 Why We Must Design Institutions Before We Can Write Laws

Like many other discussions of how AI should be regulated, this book began by reciting Isaac Asimov’s Laws of Robotics.1 Over and above their deliberate gaps, vagueness and oversimplification, they have one overriding problem: Asimov was starting in the wrong place by writing laws. The question to ask first is: “Who should write them?”

In contrast to the preceding chapters, this one will step back from granular legal issues of responsibility and rights and address instead the more general questions of how we ought to design, implement and enforce new rules tailored to AI.

1.1 Philosophy of Institutional Design

Why start with the design of the system? Among legal philosophers, there are two popular schools of thought as to what gives laws binding authority over their subjects: Positivism and Natural Law.2 Legal philosopher John Gardner describes Positivism as the view that “[i]n any legal system, whether a given norm is legally valid, and hence whether it forms part of the law of that system, depends on its sources, not its merits”.3 Natural Law theorists on the other hand believe certain values inhere in nature or human reason, and that these ought to be reflected in the legal system.4 For Natural Lawyers, a law has binding authority only if it is good or just.5

Though the two approaches may not be incompatible,6 they lead to a difference in emphasis: Natural Lawyers focus more on ensuring the laws reflect a particular moral code and Positivists on creating institutions whose laws will be acceptable to their subjects.7

All this is significant to AI because if Natural Lawyers are correct, then there is only one set of rules which can be right in any given circumstances. Any legal scholarship becomes a search for eternal truths. Natural Lawyers would begin and end, like Asimov did, by writing rules.

Positivism has the advantage of not needing to take a position as to whether there is one single morally correct system of values.8 Moreover, the lack of consensus on many moral issues—whether in relation to AI or otherwise—means that even if one did somehow arrive at the optimal set of rules, securing their adoption and enforcement would likely be impossible without some mechanism for ensuring that the rules are accepted and respected by their subjects.

1.2 AI Needs Principles Formulated by Public Bodies, Not Private Companies

In the absence of concerted governmental efforts to regulate AI, private companies have begun to act unilaterally. In October 2017, DeepMind, the world-leading AI company acquired by Google in 2014 and now owned by the parent company Alphabet, launched a new ethics board “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”.9 Similarly, in 2016 the Partnership on AI to Benefit People and Society was formed by six major tech companies—Amazon, DeepMind, Google, Facebook, IBM and Microsoft, joined shortly afterwards by Apple—“to study and formulate best practices on AI technologies”.10

Interestingly, the words “to benefit people and society” were subsequently dropped from the majority of the Partnership’s branding on its website—it now styles itself merely as the “Partnership on AI”—although at the time of writing, the Partnership still describes those aims as the organisation’s “mission”. Though DeepMind states it is prepared to hear “uncomfortable” criticism from its advisors, rules formulated by corporate ethics boards will always lack the legitimacy that a government can provide.11

On the one hand, it might be argued that there is no need for government regulation because responsible industry figures can be relied upon to regulate themselves.12 Proponents of industry-led regulation might say that because the companies understand far better the risks and capabilities of technology, they are best placed to set standards. However, allowing companies to regulate themselves without any government oversight may be dangerous.

It is the purpose of governments to act for the common good of everyone in society.13 Of course, some governments are swayed by powerful lobbies or corrupt individuals, but these represent divergences from the core concept of how governments are supposed to operate. By contrast, companies are usually required by corporate law to maximise value for their owners. This is not to say that companies will always chase profit no matter what the consequences. Most jurisdictions permit companies to act for wider social goals in addition to profit-making, should they decide to do so, and accord a company’s officers wide discretion to act in the company’s best interests. Clearly, corporate social responsibility and ethical considerations can and do form part of companies’ business plans. However, considerations of doing good are often secondary to, or at the very least in tension with, the requirement to create value for shareholders.14

Under most legal systems, profit-making entities are accountable to their owners, who can challenge the actions of directors.15 In one infamous example, the automobile industry pioneer Henry Ford declared that “[m]y ambition is to employ still more men, to spread the benefits of this industrial system to the greatest possible number, to help them build up their lives and their homes”. The Michigan Supreme Court upheld a complaint against him by his co-owners, the Dodge brothers, saying Ford’s aims were improper: “A business corporation is organized and carried on primarily for the profit of the stockholders”.16

1.3 Impartiality and Regulatory Capture

In 1954, the tobacco industry published the notorious “Frank Statement to Cigarette Smokers” in hundreds of US newspapers. In the face of growing but not-yet-conclusive evidence that smoking was harmful, the industry announced:

We are establishing a joint industry group consisting initially of the undersigned. This group will be known as [the] Tobacco Industry Research Committee. In charge of the research activities of the Committee will be a scientist of unimpeachable integrity and national repute. In addition there will be an Advisory Board of scientists disinterested in the cigarette industry. A group of distinguished men from medicine, science, and education will be invited to serve on this Board.17

Researchers have since linked the success of the tobacco industry’s campaign for self-regulation to millions of extra deaths from smoking and its side effects.18

Some technology companies have been keen to emphasise that their AI oversight bodies include independent experts, and are not merely public relations tools. The ethics board of DeepMind features prominent commentators, and the Partnership’s coalition now includes non-governmental not-for-profit organisations such as the American Civil Liberties Union, Human Rights Watch and the United Nations Children’s Emergency Fund (UNICEF).19

These initiatives may sound promising, but there is a risk that if governments do not act swiftly to create their own AI agencies, a significant proportion of thought-leaders in the field will become aligned to one corporate interest or another. Though experts appointed to tech companies’ boards will in most cases aim to maintain their independence, the fact of their association inevitably raises the risk that either they will be influenced to some extent by the interests of the company in question, or they will be seen to be so influenced. Either way, the public trust in their impartiality is liable to be compromised.

In some countries, trust in traditional figures of authority is already diminishing. As UK Government Minister Michael Gove put it: “people in this country have had enough of experts”.20 This may have been a dangerous over-generalisation, encouraging a self-fulfilling prophecy of anti-intellectualism. That said, if experts are seen to compromise their impartiality, then it does seem likely that people will take their pronouncements less seriously.

To the extent that private companies are currently driving the agenda in AI regulation, any governmental body which eventually enters this arena risks the phenomenon of “regulatory capture”: a situation where a regulator is heavily influenced by private interests. As industry self-regulation becomes more developed, it will be increasingly difficult for governments to start afresh by designing new institutions. Instead, governments are likely to endorse the systems of regulation already adopted by industry, not least because the industry itself will by that point have been shaped by its own internal regulations. Those systems will probably favour the corporate interests, causing the government’s system to be hamstrung from its inception.

1.4 Too Many Rules, and Too Few

A further problem with industry self-regulation is that it lacks the force of binding law. If ethical standards are only voluntary, companies may decide which rules to obey, giving some organisations advantages over others. For example, none of the major Chinese AI companies, including Alibaba, Tencent and Baidu, have announced that they will join the Partnership.21

Without one unifying framework, multiple private ethics boards could lead to there being too many sets of rules. Hobbes observed that without a central authoritative lawgiver, life would be “nasty, brutish and short”.22 It would be chaotic and dangerous if every major company had its own code for AI, just as it would be if every private citizen could set his or her own legal statutes. Only governments have the power and mandate to secure a fair system that commands this kind of adherence across the board.

2 Rules for AI Should Be Made on a Cross-Industry Basis

To date, the majority of legal debate on AI has centred on two sectors: weapons23 and cars.24 The public, legal scholars and policy-makers have focussed on these areas at the expense of others. More importantly, though, it is misguided to approach the entirety of the regulation of AI solely on an industry-by-industry basis.

2.1 The Shift from Narrow to General AI

When seeking to create regulatory principles, it is not correct to think of narrow AI (which is adept at just one task) and general AI (which can fulfil an unlimited range of tasks) as being hermetically sealed from each other. Instead, there is a spectrum along which we are gradually moving.

As noted in Chapter 1, various writers have ruminated on how soon we might reach the end point on this spectrum: superintelligence,25 and some have raised powerful objections to the idea of superhuman AI ever being created.26 The observation that there is a continuum between narrow AI and general AI does not require one to take any position as to how soon (if ever) the singularity or superintelligence might appear. Rather, the spectrum analogy is merely a prediction that advancement in AI technology will involve iterative steps whereby programs individually and collectively become increasingly capable of mastering a range of techniques and tasks. This approach accords with the intelligence tests proposed by Ben Goertzel27 and José Hernández-Orallo,28 which focus on the creation of cognitive synergies measurable on a sliding scale, rather than a binary question of whether an entity is or is not intelligent.

Early and less-advanced AI systems were able to achieve the narrowest of tasks within a specific rule-based environment. TD-Gammon, a backgammon-playing program developed by IBM in 1992, learned entirely by reinforcement learning and eventually achieved a superhuman level of play.29
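To illustrate what learning “by reinforcement” involves, the sketch below shows the core temporal-difference update on which programs of TD-Gammon’s generation relied: the system nudges its estimate of a position’s value towards the reward it actually observes plus its own estimate of the following position. This is a minimal illustration only, not Tesauro’s implementation (which used a neural network rather than a value table); the function name and parameters are hypothetical.

```python
def td_update(V, state, next_state, reward, alpha=0.1, gamma=1.0):
    """One TD(0) step: move the value estimate for `state` towards
    the observed reward plus the estimated value of `next_state`."""
    v_s = V.get(state, 0.0)
    v_next = V.get(next_state, 0.0)
    V[state] = v_s + alpha * (reward + gamma * v_next - v_s)

# Illustrative use: after one move of self-play, update the value table.
values = {}
td_update(values, state="position_a", next_state="position_b", reward=0.0)
```

Repeated over millions of self-play games, updates of this kind allow a program to improve its evaluations without any human-labelled examples.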

DeepMind’s DeepQ sits yet further along the spectrum. Researchers demonstrated that DeepQ could play seven different Atari computer games, beating most human players at six and beating master human players at three.30 DeepQ was not allowed to view the games’ source code so as to manipulate them from the inside. It was limited solely to what a human player sees.31 DeepQ learned to play each game from scratch, using deep reinforcement learning—a series of neural layers connecting an input to an output through three hidden levels of reasoning32—as well as a new technique which the DeepMind researchers termed “experience replay”.33
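The idea behind “experience replay” can be sketched briefly: rather than learning from each frame as it arrives, the agent stores its transitions in a buffer and trains on randomly drawn batches of past experience, which breaks the correlation between consecutive observations. The fragment below is a schematic reconstruction from the published description, not DeepMind’s code; the capacity and batch size are illustrative.

```python
import random
from collections import deque

class ReplayBuffer:
    """Store past transitions and sample them at random for training."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # old entries drop off automatically

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform random sampling decorrelates consecutive game frames.
        return random.sample(self.buffer, batch_size)
```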

DeepQ was limited in that it needed to be reset between each game with the effect that its memory of how to play the previous one was wiped. By contrast, the human mind’s versatility is one of its greatest assets. We can derive inferences from one activity and are able to apply them to the fulfilment of another. Original experiences of certain phenomena from early childhood onwards34 create lasting mental pathways and heuristics, such that when we are faced with a relevantly similar but non-identical situation, we have a reasonable idea of what to do.35

A team from DeepMind and Imperial College London published a paper in 2017 entitled “Overcoming catastrophic forgetting in neural networks”, which demonstrated how an AI system could learn to play several games and, crucially, derive lessons from each individual game which could then be applied to the others:

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks.36
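In outline, the technique (which the authors termed “elastic weight consolidation”) augments the training loss for a new task with a penalty anchoring weights that mattered for earlier tasks near their old values, in proportion to an estimate of their importance. The fragment below is a simplified sketch assuming PyTorch; `fisher` and `old_params` stand in for precomputed importance estimates and stored weights, and the scaling factor `lam` is illustrative.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Penalise movement of weights important for an earlier task,
    weighted by their estimated (Fisher) importance."""
    penalty = torch.tensor(0.0)
    for name, param in model.named_parameters():
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return (lam / 2.0) * penalty

# Schematically, training on a new task B then minimises:
#   loss = task_b_loss + ewc_penalty(model, fisher_a, params_a)
```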

Other research projects have focused on AI’s ability to plan and “imagine” possible consequences of its actions under conditions of uncertainty—another step in the progression from narrower to more general AI.37

Leading technology companies are now focusing dedicated projects on multipurpose AI.38 Indeed, accomplishing many everyday tasks requires not just one discrete skill, but rather multiple skills in combination. Apple co-founder Steve Wozniak alluded to this point when he suggested in 2007 that we would never develop a robot with the numerous different capabilities needed to enter an unfamiliar home and make a coffee.39 Kazuo Yano, the Corporate Chief Engineer of the Research and Development Group at technology conglomerate Hitachi, has said:

Many new technologies are developed for many specific purposes… For example, mobile phones originated from phones specialized for car use. In many cases, a landmark change occurs in which the specialized technology is transformed into a multi-purpose technology… We thus decided to focus our efforts on multi-purpose AIs from the beginning, based on our forecast that AIs would need to gain such versatility very soon… Hitachi has a vast variety of connections with industries and customers all over the world, including electric utilities, manufacturers, distributors, finance companies, railway companies, transportation companies, and water supply companies.40

2.2 The Need for General Principles

Even if it is accepted that AI is becoming more multi-purpose, some might argue that in each individual industry the relevant rules should continue to apply, with no need for any additional layer of general regulation. There are several reasons why this approach would be problematic.

First, although most industries have their own technical rules and standards, some principles of the legal system apply generally to all human subjects. The shift from narrow to general AI means that attempting to keep each use of AI compartmentalised will become increasingly difficult. Civil wrongs, contracts and criminal law apply equally to a fireman and to a banker, in addition to individual sectoral regulation. Such general rules aid consistency and predictability for all participants in a legal system. Just as it would be confusing and detract from the rule of law to have a different law for each human profession, it would be equally so to attempt to do the same for each application of AI.

Secondly, AI raises various novel questions which apply across different industries. The Trolley Problem (discussed in Chapter 2 at Section 3.1) might apply equally to AI vehicles travelling by air as by land. Should a passenger’s life be valued differently depending on whether they are in a car or a plane? The answer may still be “yes”, but if each industry approaches such questions separately, then there is a risk that time and energy will be wasted in repeating the same exercises. However, at present governments are approaching AI cars and AI drones as completely separate issues. For instance, in 2015, the UK Government published a policy paper entitled “The Pathway to Driverless Cars: detailed review of regulations for automated vehicle technologies”,41 which did not mention any overlap with drone technology.42

Thirdly, differentiated sectoral regulation gives rise to boundary disputes or “edge problems”, where arguments abound as to whether a particular practice or asset should be treated as falling in one regime or another. Edge problems are particularly common in tax disputes, where authorities argue that something is classified in a higher tax band and taxpayers say the opposite. In one incident, the makers of a popular snack, the “Jaffa cake”, challenged their designation by the UK tax authorities as “biscuits”, which attracted value-added tax, and not as “cakes”, which were exempt. Eventually, the makers of Jaffa cakes prevailed: the court was persuaded by evidence including that on going stale a Jaffa cake becomes hard (like a cake) rather than soft (like a biscuit).43 The matter may sound flippant, but millions of pounds were at stake, and the Government spent significant amounts of public money on the litigation.44

Differential tax treatment for different assets and practices is justified on the basis that a government may wish to encourage and discourage various activities using economic incentives. All things being equal, the more complex a regulatory system is, the more time and energy companies will expend in trying to comply or to secure themselves the most favourable treatment. Likewise, governments will spend more resources on arguing against companies as to how complex systems of overlapping regulation should be enforced. Differential regulation only makes sense to the extent it can be justified by extraneous considerations, rather than being the default position. The public and private cost of disputing edge problems therefore presents a further powerful incentive for a regulatory regime which is as clear and consistent as possible across industries.

To be clear, it is not suggested here that every aspect of AI should be governed by entirely new regulations or a new regulator—this would be unworkable. Owing to their expertise and established rule-making infrastructures, individual sectoral regulators will continue to be a major source of governance in their own fields, from aviation to agriculture. Sectoral regulation is a necessary, but not a sufficient source of rules for AI. The key point is that individual regulators ought to be supplemented by governance structures which allow for overarching principles to be applied across different industries. This model might take the form of a pyramid: widest at the bottom where there will remain a multiplicity of individual industry regulators setting detailed rules, with each level of governance above that being responsible for a smaller and more refined set of principles. Rather than burdening companies with excessive rules, it is suggested here that building a coherent regulatory structure will enable them to operate in an efficient and predictable environment. A tentative suggestion for the top layer of guiding principles for AI is set out in Chapter 8.

3 New Laws for AI Should Be Made by Legislation, Not Judges

How should new laws for AI be created? There are several ways in which laws can be written, altered and adapted, some of which are more suitable for AI than others. In order to explain why this is so, it is first necessary to describe the two main categories of legal systems.45

3.1 Civil Law Systems

Civil law systems focus on legislation. In a paradigm civil law system, all rules are contained in a comprehensive written code. The main role for judges is to apply and interpret the law, but not usually to change it. Indeed, Article 5 of the French Civil Code prohibits judges from “pronounc[ing] judgment by way of general and regulatory dispositions”, theoretically discouraging judges from making law.

3.2 Common Law Systems

In a common law system,46 judges are entitled to make law as well as apply it. Judges exercise this role by deciding individual disputes brought before them by two or more opposing parties. Later courts are then bound by the earlier decision unless it is made by a court lower in the hierarchy, in which case it can be overturned.47 Judicial legal development takes place through judges drawing analogies between sufficiently similar circumstances. The first time a new situation comes before the courts is often described as a “test case”. Other courts will apply the same principles, allowing change to take place in incremental steps.

US judge Oliver Wendell Holmes Jr. summed up the common law approach by saying: “The life of the law has not been logic; it has been experience… The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics”.48

3.3 Writing Rules for AI

When judges change or apply rules in a common law system, this happens ex post facto—after the dispute has arisen. Although in some circumstances it is possible to seek interim relief from courts in the expectation of potential future harm, the damage has often already occurred by the time a matter comes before a judge.

Those who consider that no new laws are needed for AI often argue that the common law is well suited to drawing analogies between new and existing phenomena, for example by repurposing the law applicable to animals.49 Legal writer, politician and humorist A.P. Herbert parodied this tendency in the common law in the following satirical judgment, where an (imaginary) bench of the English Court of Appeal held in favour of a claimant who was injured by a motor car when crossing the road:

[The defendant’s] motor car should in law be regarded as a wild beast; and the boast of its makers that it contains the concentrated power of forty-five horses makes the comparison just. If a man were to bring upon the public street forty-five horses tethered together, and were to gallop them at their full speed past a frequented crossroad, no lack of agility, judgment, or presence of mind in the pedestrian would be counted such negligence as to excuse his injury.50

In written evidence provided for a UK House of Commons Science and Technology Committee Report on Robotics and artificial intelligence,51 the Law Society (the regulatory professional body for part of the UK legal profession) commented:

One of the disadvantages of leaving it to the Courts to develop solutions through case law is that the common law only develops by applying legal principles after the event when something untoward has already happened. This can be very expensive and stressful for all those affected. Moreover, whether and how the law develops depends on which cases are pursued, whether they are pursued all the way to trial and appeal, and what arguments the parties’ lawyers choose to pursue. The statutory approach ensures that there is a framework in place that everyone can understand.52

The Law Society’s assessment is correct. It is often said that “hard cases make bad law”, referring to the tendency that, when faced with compelling facts such as a victim of a tragic misfortune who might otherwise go uncompensated, a judge may strain to bend legal rules in order that justice be done to the individual litigants, to the detriment of overall coherence in the system.

Judges usually decide under significant time pressure with limited information as to the wider consequences of their decisions, whereas a legislator often has the freedom to deliberate for many years and undertake significant research when preparing rules.53

Attempting to create rules for AI through contested cases is vulnerable to misaligned incentives. In litigation, each party’s lawyers are compelled—both economically and as a matter of professional conduct—to achieve the best outcome possible for their client alone.54 There is no guarantee that either side’s objectives will align with societal goals. Although in a test case judges might be able to lay down certain principles which could apply in future situations, the problem is that those principles are likely to have been shaped by the arguments in the case that was in front of the judges on the day, and not the wider concerns which a legislature could take into account.

A legislature is usually formed from society as a whole, whereas case law systems are presided over predominantly by judges who, in most systems, are not elected and will represent only a small (usually privileged) stratum of the population. This is not to say that judges all suffer from incorrigible elitism, but leaving important societal decisions solely to the judiciary risks creating a democratic deficit. Indeed, it is in the light of concerns such as these that judges will sometimes decline to rule on a certain issue, where it touches on matters beyond their institutional or constitutional competence.55

Finally, many cases of harm do not reach the stage of judicial determination at all. First, AI companies are likely to want to settle disputes outside of the courtroom, so as to avoid any damaging publicity and disclosure associated with a drawn-out legal battle. They may well be willing to pay substantial settlements out of court so as to preserve secrecy. Indeed, it is notable that most of the private claims arising from the few known self-driving car fatalities to date appear to have been settled swiftly out of court, and presumably accompanied by robust non-disclosure agreements to keep their outcomes secret.56 Secondly, the costs and uncertainty of any litigation are likely to encourage parties to at least consider settling outside court. Thirdly, at least where there is some form of prior agreement in place between the victim and the potentially liable party (e.g. the manufacturer of an AI car and the owner), then that agreement may provide for a secret and binding arbitration as regards any civil liability. The combination of these trends is likely to stymie further the development of new laws for AI via judicial decisions.

In conclusion, judge-made law could be helpful to smooth out the rough edges of any new legislation, but it would be both risky and inefficient for society to delegate the big decisions on regulating AI entirely to the judiciary.

4 Current Trends in Government AI Regulation

Government AI policies generally fall into at least one of the following three categories: promoting the growth of a local AI industry; ethics and regulation for AI; and managing the problem of unemployment caused by AI. Sometimes these categories may be in tension; at other times, they can be mutually supportive. The focus in this section is on regulatory initiatives rather than economic or technological ones, though as will be seen the three are often interlinked. The brief survey below is not intended to be a comprehensive examination of all laws and government initiatives concerning AI regulation; matters are developing fast and any such information would soon go out of date. Instead, our aim is to capture some general regulatory approaches with a view to establishing the direction of travel for several of the major jurisdictions involved in the AI industry.

4.1 UK

At present, governmental bodies such as the UK’s House of Lords Select Committee on AI57 and the All-Party Parliamentary Group on AI58 seem to be in danger of attempting both too much and too little. They are attempting too much because their mandate tends to include economic questions such as AI’s impact on employment. This is an important issue, but it is distinct from the question of what new legal rules we might use to regulate AI.59

Conversely, UK Government initiatives are in danger of doing too little because there has been no concerted effort to develop comprehensive standards to govern AI. In a 2016 report, the UK Parliament’s Science and Technology Committee concluded:

… initiatives [for the regulation of AI] are being developed at the company level… at an industry-wide level … and at the European level …. It is not clear, however, if any cross-fertilisation of ideas, or learning, is taking place across these layers of governance or between the public and private sectors. As the Chief Executive of Nesta [a charitable foundation focussed on innovation] has argued, ‘it’s currently no-one’s job to work out what needs to be done’.60

In a speech to the World Economic Forum at Davos in 2018, Prime Minister Theresa May emphasised the importance of AI to the UK’s economy and signalled the country’s willingness to participate in international regulation:

… in a global digital age we need the norms and rules we establish to be shared by all.

This includes establishing the rules and standards that can make the most of Artificial Intelligence in a responsible way, such as by ensuring that algorithms don’t perpetuate the human biases of their developers.

So we want our new world-leading Centre for Data Ethics and Innovation to work closely with international partners to build a common understanding of how to ensure the safe, ethical and innovative deployment of Artificial Intelligence…. the UK will also be joining the World Economic Forum’s new council on Artificial Intelligence to help shape global governance and applications of this new technology.61

Despite these fine words, specific policy developments remain elusive. Rowland Manthorpe, writing in the influential technology magazine Wired, argued that “May’s Davos speech exposed the emptiness in the UK’s AI strategy”. He continued: “…there are only bland pronouncements about the promise of innovation, that brush aside difficult questions, elide compromises, and obscure the trade-offs made in the name of the national good”.62 Another journalist, Rebecca Hill, has wondered whether the vaunted Centre for Data Ethics and Innovation will turn out to be “[a]nother toothless wonder”.63 Likewise, AI policy expert Michael Veale has voiced concerns that this body “will descend into one of many talking shops, producing a series of one-off reports looking at single abstract issues”.64 So long as it continues to lack a clear mandate, leadership or programme of action, these concerns will remain.

The title of a report published in April 2018 by the House of Lords AI Committee asked whether the UK’s approach to AI was “ready, willing and able?”. It concluded that “[t]here is an opportunity for the UK to shape the development and use of AI worldwide, and we recommend that the Government work with Government-sponsored AI organisations in other leading AI countries to convene a global summit to establish international norms for the design, development, regulation and deployment of artificial intelligence”.65 In the light of the enormous upheavals caused by Brexit, both internally and in terms of the UK’s international relations, it remains to be seen whether the UK Government will have the resources, commitment or indeed the international clout to make good on this proposal.

4.2 France

In March 2018, France’s President Emmanuel Macron announced a major new AI strategy for his country, in a speech66 and an interview with Wired.67 Macron emphasised his aim for France and Europe generally to become leaders in the development of AI. He noted in this regard the crucial importance of rules, saying:

My goal is to recreate a European sovereignty in AI… especially on regulation. You will have sovereignty battles to regulate, with countries trying to defend their collective choices. You will have a trade and innovation fight precisely as you have in different sectors. But I don’t believe that it will go to the extreme extents Elon Musk talks about [in terms of a third world war, for AI superiority], because I think if you want to progress, there is a huge advantage in an open innovation model.68

President Macron’s announcement was followed in March 2018 by the Villani Report,69 a major study commissioned by the French Prime Minister and authored by mathematician and Member of Parliament Cédric Villani. The Villani Report was wide-ranging in focus, covering economic initiatives to grow the industry in France and Europe as well as AI’s potential impact on employment. Part 5 of the Report was dedicated to the ethics of AI. In particular, Villani proposed “the creation of a digital technology and AI ethics committee that is open to society”. He recommended that “[s]uch a body could be modelled on the CCNE (Comité consultatif national d’éthique - National Consultative Ethics Committee), created in 1983 for health and life sciences”.

These are certainly encouraging steps in terms of governmental action, but it is as yet unclear how Macron’s grand strategy will be implemented, or indeed whether Villani’s more detailed proposals will be adopted more widely.

4.3 EU

The EU has launched several initiatives aimed at the development of a comprehensive AI strategy, which include its regulation.70 The two key documents in this regard are the European Parliament’s Resolution of February 2017 on Civil Laws for Robotics and the General Data Protection Regulation (GDPR). Important provisions of both are addressed in some detail in the following two chapters. The February 2017 Resolution contained much interesting content, but it did not create binding law. Instead, it was a recommendation to the Commission for future action. The GDPR by contrast was not aimed specifically at AI, but its provisions seem likely to have some fairly drastic effects on the industry—potentially even beyond what its drafters might have intended.71

Taking forward the European Parliament’s call to bring forward binding legislation, in March 2018 the European Commission issued a call for a high-level expert group on artificial intelligence, which according to the Commission “will serve as the steering group for the European AI Alliance’s work, interact with other initiatives, help stimulate a multi-stakeholder dialogue, gather participants’ views and reflect them in its analysis and reports”.72 The work of the high-level expert group will include a mandate to “[p]ropose to the Commission AI ethics guidelines, covering issues such as fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination”.

In April 2018, 25 EU countries signed a joint declaration of cooperation on AI, the terms of which included a commitment to: “[e]xchange views on ethical and legal frameworks related to AI in order to ensure responsible AI deployment”.73 Despite these encouraging signs and worthy intentions, the EU’s regulatory agenda remains at an incipient stage.

4.4 USA

In its final months, the Obama administration produced a major report on the Future of Artificial Intelligence, along with an accompanying strategy document.74 Though these focussed primarily on the economic impact of AI, they also covered (briefly) topics such as “AI and Regulation”, and “Fairness, Safety and Governance in AI”.75 In late 2016, a large group of US Universities, sponsored by the National Science Foundation, published “A Roadmap for US Robotics: From Internet to Robotics”, a 109-page document edited by Ryan Calo.76 This included calls for further work on AI ethics, safety and liability. However, the subsequent Trump administration appears to have abandoned the topic as a major priority.77 Though extensive private sector AI development is taking place, as well as considerable government investment (especially via the Department of Defense), at the time of writing the US Federal Government does not appear to be pursuing major national or international regulatory initiatives in AI.

4.5 Japan

Industry in Japan has for many years had a particular focus on automation and robotics.78 The Japanese Government has generated various strategy and policy papers with a view to maintaining this position. For instance, in its 5th Science and Technology Basic Plan (2016–2020), the Japanese Government declared its aim to “guide and mobilize action in science, technology, and innovation to achieve a prosperous, sustainable, and inclusive future that is, within the context of ever-growing digitalization and connectivity, empowered by the advancement of AI”.79

In line with these goals, the Japanese Government’s Cabinet Office convened an Advisory Board on Artificial Intelligence and Human Society in May 2016 under the initiative of the Minister of State for Science and Technology Policy “with the aim to assess different societal issues that could possibly be raised by the development and deployment of AI and to discuss its implication for society”.80 The Advisory Board produced a report in March 2017, which recommended further work on issues including ethics, law, economics, education, social impacts and R&D.81

The Japanese Government’s proactive approach, driven by its national industrial strategy and aided by the strong public discourse on AI, is an excellent example of how governments can foster discussion nationally and internationally. The challenge for Japan will be to sustain this early momentum, something which will be assisted if other countries follow its approach.

4.6 China

In July 2017, China’s State Council issued “A Next Generation Artificial Intelligence Development Plan”,82 a document described by two experienced analysts of Chinese digital technology as “[o]ne of the most significant developments in the artificial intelligence (AI) world”83 that year.

Although its main focus was on fostering economic growth through AI technology, the Plan also provided that “[b]y 2025 China will have seen the initial establishment of AI laws and regulations, ethical norms and policy systems, and the formation of AI security assessment and control capabilities”. Jeffrey Ding of the Future of Humanity Institute at Oxford University has commented of this statement: “[n]o further specifics were given, which fits in with what some have called the opaque nature of Chinese discussion about the limits of ethical AI research”.84

In November 2017, Tencent Research, an institute within one of China’s largest technology companies, and the China Academy of Information and Communications Technology (CAICT) produced a book of 482 pages, the title of which translates roughly to: “A National Strategic Initiative for Artificial Intelligence”. Topics covered include law, governance and the morality of machines.

In a report entitled “Deciphering China’s AI Dream”,85 Ding hypothesises that “AI may be the first technology domain in which China successfully becomes the international standard-setter”.86 The report points out that the National Strategic Initiative for Artificial Intelligence book identified Chinese leadership on AI ethics and safety as a way for China to seize the strategic high ground. Tencent Research and CAICT wrote: “China should also actively construct the guidelines of AI ethics, play a leading role in promoting inclusive and beneficial development of AI. In addition, we should actively explore ways to go from being a follower to being a leader in areas such as AI legislation and regulation, education and personnel training, and responding to issues with AI”.87 Ding observes further:

One important indicator of China’s ambitions in shaping AI standards is the case of the International Organization for Standardization… Joint Technical Committee (JTC), one of the largest and most prolific technical committees in the international standardization, which recently formed a special committee on AI [SC 42]. The chair of this new committee is Wael Diab, a senior director at [Chinese multinational company] Huawei, and the committee’s first meeting will be held in April 2018 in Beijing, China - both the chair position and first meeting were hotly contested affairs that ultimately went China’s way.88

In furtherance of its policies, China established a national AI standardisation group and a national AI expert advisory group in January 2018.89 At these groups’ launch event, a division of China’s Ministry of Industry and Information Technology released a 98-page White Paper on AI standardisation.90 The White Paper noted that AI raised challenges in terms of legal liability, ethics and safety, and stated:

…considering that the current regulations on artificial intelligence management in various countries in the world are not uniform and relevant standards are still in a blank state, participants in the same AI technology may come from different countries which have not signed a shared contract for artificial intelligence. To this end, China should strengthen international cooperation and promote the formulation of a set of universal regulatory principles and standards to ensure the safety of artificial intelligence technology.91

China’s aim to become a leader in the regulation of AI may be one of the motivations behind its call in April 2018 “to negotiate and conclude a succinct protocol to ban the use of fully autonomous weapon systems”, made to United Nations Group of Governmental Experts on lethal autonomous weapons systems.92 In so doing, China adopted for the first time a different approach regarding autonomous weapons to the USA. The Campaign to Stop Killer Robots announced that China had joined 25 other nations in calling for such a ban.93

Paul Triolo and Jimmy Goodrich of the New America Institute, a think tank, state that “[a]s in many other areas, Chinese government leadership on AI at least nominally comes from the top. Xi has identified AI and other key technologies as critical to his goal of transforming China from a ‘large cyber power’ to a ‘strong cyber power’ (also translated as ‘cyber superpower’)”.94 This approach seems to be borne out by the White Paper. In its key recommendations, the authors suggested:

Development of key, urgently needed standards such as reference frameworks, algorithm models, and technology platforms; promotion of international standardization work on artificial intelligence, gathering domestic resources for research and development, participating in the development of international standards, and improving international discourse power.

Reference to China’s development of “international discourse power” (国际话语权) concerning AI is particularly significant.95 The postmodernist term “discourse”, popularised by the philosopher Michel Foucault, generally refers to “systems of thoughts composed of ideas, attitudes, courses of action, beliefs, and practices that systematically construct the subjects and the worlds of which they speak”.96 It is an example of “soft power”: the projection of influence through social, cultural and economic means.97 International discourse power was adopted as an official national policy aim in 2011.98 As Chinese analyst Jin Cai explained, “[t]o control the narrative, then, is the first step to controlling the situation”.99

4.7 Conclusions on Current Trends in Government AI Regulation

National AI policies are bound up with countries’ current positions in the global order, as well as where they are hoping to be in the future. Japan sees the development of AI regulation as part of its industrial strategy. For China, the issue is both one of economics and one of international politics. China’s efforts to develop a world-leading home-grown industry in AI are connected to but not the same as its efforts to influence the international discourse on AI; even if the first aim does not succeed, the second might. Recent indications suggest that China may now seek a leading role in shaping worldwide AI regulation, much as the USA did in numerous fields over the twentieth century. The US Government appears, temporarily at least, to have stepped back generally from a global rule-making role. Though the EU is now beginning to make moves towards the development of its own comprehensive AI regulatory strategy, it may find itself competing with China and Japan to be the main driver of any worldwide standards.

In the nineteenth century, the major European powers competed for influence over physical territory, in the “Great Game” for Afghanistan, and the “Scramble for Africa”. In the twentieth century, the USA and USSR competed against each other over technology in the “Space Race”. The twenty-first century may be characterised by a similar competition for power over AI—not just for developing the technology, but also in terms of writing the regulations.100

The following sections explore how international regulations could be designed and implemented for the benefit of all, notwithstanding divergent national interests.

5 International Regulation

5.1 An International Regulatory Agency for AI

The section above on current trends in government regulation described numerous proposals for national or even regional AI regulators.101 These bodies will play a vital role in shaping certain aspects of regulation to local demands, but ultimately such proposals are, on their own, too narrow. In addition to national and regional institutions, all countries stand to benefit from having a global regulator.

5.2 Arbitrary Nature of National Boundaries

Late on the night of 10 August 1945, two young US military officers, Dean Rusk and Charles Bonesteel, drew one of the most important lines of the twentieth century. In the final stages of World War II, the Allied Powers were deciding how Japan’s colonies ought to be divided between them following its likely defeat. For Rusk and Bonesteel, the task was to propose a division which would protect the interests of the USA, but also be acceptable to the USSR.102 They decided to draw a horizontal line tracking the “38th Parallel”—a circle of latitude measured by its distance from the Earth’s equator. One country ceased to exist, and in its place, two new countries were born: North Korea and South Korea. North Korea, which fell originally under Soviet control, is a brutal, secretive and repressive dictatorship beset by extreme poverty. South Korea is one of the world’s most economically developed and socially liberal countries. Though at the time of writing North Korea and South Korea may be moving towards an historic reconciliation, this potential rapprochement only serves to accentuate the absurdity of the original fissure. It is hard to think of greater differences between two nations resulting from so arbitrary a decision.

Though some borders follow physical divisions, such as a mountain range or a river, all such frontiers are ultimately human inventions. They can shift through war, gift or even sale.103 National systems of law are particularly effective when the subjects and objects of regulation have a tangible form which can be located in one place or another. The model begins to break down when the subject is not constrained by physical or political boundaries.

5.3 Cost of Uncertainty

If an AI entity operates in several jurisdictions, its designers will need to ensure that it is compatible with the rules in each of them. Where standards differ, barriers to trade and additional costs will arise, as AI that conforms to one country’s standards is barred from another. Because we lack rules which address the novel legal issues raised by AI, there is an opportunity to design a comprehensive set of principles which could be applied worldwide. This would save individual legislatures the costs and difficulty of regulating on their own, and it would save AI designers the costs of seeking to comply with multiple different codes.104 In turn, consumers and taxpayers would benefit from lower costs and more diverse AI products.

Unlike other products, where individual countries’ regulations are shaped by many years of cultural, economic and political differences, for AI we have a blank slate. A single code would be far more efficient than waiting for individual countries to each develop their own. If we fail to prepare international standards, then regulation for AI will likely become Balkanised, with each territory setting its own mutually incompatible rules. The sunk costs and entrenched cultural differences in the regulation of AI may well render any future consolidation of standards impossible.

5.4 Avoiding Arbitrage

It is common for companies to restructure or shift their corporate location from one territory to another so as to achieve tax or regulatory advantages. Companies are thereby able to provide their goods or services in one territory, yet avoid its taxes and at least some of its regulations. This practice is known as arbitrage.

There have been various—largely unsuccessful—initiatives to harmonise tax laws across the world in order to diminish the opportunities for this practice.105 Part of the reason why it is so difficult to do so successfully is that countries have strong incentives to cut their taxes in order to attract businesses to register there, leading to a “race to the bottom”. Similarly, there may be an economic advantage for some countries to seek to incentivise less scrupulous AI developers to establish in their jurisdiction by adopting minimal regulations. When operating a technology as powerful and potentially dangerous as AI, this would be a worrisome trend. An international system of regulation could avoid at least some of these differences by stipulating a single standard which would apply wherever the AI may be.

6 Why Would Countries Agree to a Global Code?

6.1 Balancing Nationalism and Internationalism

Despite their fictional and arbitrary nature, the continuing psychological importance of nation states cannot be denied. Predictions that national boundaries would melt away have proved unfounded; the early twenty-first century has in fact seen a resurgence in nationalism.106

Critics of international regulation are likely to argue that antagonistic national interests will prevent countries coming together to govern AI. Well-publicised splits and deadlocks in international bodies such as the UN Security Council would seem to support this pessimistic appraisal.

Even so, there are a number of less prominent examples of international regulation functioning efficiently and garnering wide support despite the many differences which separate nations otherwise.107 The solution to reconciling the urge for national self-determination with the need for international rules is to combine best practices.

6.2 Case Study: ICANN

The rather banal-sounding Internet Corporation for Assigned Names and Numbers (ICANN) means little to most people, yet billions each day make use of facilities which it maintains. ICANN is the organisation which administers, maintains and updates key infrastructure behind the internet. This includes assigning domain names and IP addresses. These “unique identifiers” are aligned with a standard set of protocol parameters which ensure computers can communicate on an agreed basis.108
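The practical effect of this infrastructure is visible in a single line of code: whenever software resolves a human-readable domain name into the numerical IP address which machines actually use, it relies on the naming system whose root ICANN coordinates. A minimal illustration follows (the domain chosen is merely an example, and the address returned will vary by network):

```python
import socket

# Resolve a domain name to an IP address via the DNS hierarchy,
# whose unique identifiers are coordinated by ICANN.
print(socket.gethostbyname("www.icann.org"))
```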

ICANN started with a single individual: Jon Postel, an academic who established its forerunner at the University of Southern California to administer the assignment of Internet addresses under a contract with the Defense Advanced Research Projects Agency, part of the US Department of Defense.109 Despite its origins as a national military project, the Clinton administration committed to the privatisation of domain name systems management in a manner that would increase competition and facilitate international participation in its management.110 Following a wide consultation which received over 430 comments from members of governments, the private sector and civil society around the world, in February 1998 the US Government announced it would transfer the management of domain names to a new non-profit corporation based in the USA but with global representation.111 Later that year, ICANN was created to fulfil this commitment.112

Since gaining independence, ICANN has introduced numerous changes which are crucial to the Internet as we now know it. These include the accreditation, from 1999, of private sector registrars to create and maintain domain names (now over 3000),113 and the expansion of top-level domain names, including, from 2012, domains in Chinese, Russian and Arabic scripts. Today ICANN’s mission remains to “organize the voices of volunteers worldwide dedicated to keeping the Internet secure, stable and interoperable”, as well as to promote competition and develop internet policy.114 ICANN explains its internal organisation as follows:

At the heart of ICANN’s policy-making is what is called a “multistakeholder model”. This decentralized governance model places individuals, industry, non-commercial interests and government on an equal level. Unlike more traditional, top-down governance models, where governments make policy decisions, the multistakeholder approach used by ICANN allows for community-based consensus-driven policy-making. The idea is that Internet governance should mimic the structure of the Internet itself – borderless and open to all.115

ICANN’s “At-Large” governance structure incorporates more than 165 local organisations, including professional societies (engineers, lawyers etc.), academic and research organisations, community networks, consumer advocacy groups, and civil society. These are grouped into five regions: Africa, Asia, Europe, Latin America and North America, thereby fostering global discussions.116

On 6 January 2017, the final formal agreement between ICANN and the US Department of Commerce expired, thereby ending the US Government’s authority to approve key changes for the internet. Lawrence Strickling, the US Assistant Secretary of Commerce for Communications and Information from 2009 to 2017, commented: “The successful transition of the IANA stewardship proves that the multistakeholder model can work”.117

6.3 Self-Interest and Altruism

In September 2017, President Trump addressed the General Assembly of the United Nations. He began by reiterating his campaign doctrine of “America First”:

As president of the United States, I will always put America first. Just like you, as the leaders of your countries, will always and should always put your countries first. All responsible leaders have an obligation to serve their own citizens, and the nation state remains the best vehicle for elevating the human condition.118

This appears to be a statement par excellence of the view in foreign policy that each nation should act solely in its own interests.119 Yet President Trump continued:

But making a better life for our people also requires us to work together in close harmony and unity, to create a more safe and peaceful future for all people.

This caveat is crucial and demonstrates how even the most powerful nation in the world, led by one of its most nativist leaders, still acknowledges the importance of international coordination on certain global issues.

A 2013 paper produced by the UN System Task Team—a large group of UN bodies—described a “global commons”, namely “the High Seas, the Atmosphere, the Antarctica and the Outer Space”, noting that “These resource domains are guided by the principle of the common heritage of mankind”.120 Though it is not a physical resource, AI also qualifies as an equivalent global issue with potential to affect the whole of humanity.

Some nations may already recognise the enormous potential of AI and its potential to serve the world as a whole, if its power can be harnessed. Such countries are more likely to support an international rules-based system as a matter of altruistic principle.121 There are also pragmatic reasons why even the most self-regarding state might wish to see international regulation for AI, in addition to the economic incentives identified above. Game theory explains why self-interested rational actors might cooperate and indeed establish rules on which basis future cooperation may take place.122

For less economically developed nations, one major barrier to international regulation in certain areas, such as climate change, is a feeling that more developed countries made unfettered use of technologies with harmful side effects to grow rich in previous decades, and that it is now unfair to impose constraints which could slow growth for those nations now attempting to catch up.123 Because AI technology remains relatively new even for developed countries, there are fewer structural disparities than in other industries. In consequence, there is an opportunity to forestall arguments against regulation based on historic injustice by instituting international principles now, rather than at a later juncture when the field is more mature.

Although the immediate prospect of superintelligence may be low, this does not mean that the chances of developing AI which humans are subsequently unable to control can be discounted altogether. Moreover, even if the existential risks to all of humanity appear minimal at present, there are very many less powerful and advanced AI technologies that could cause serious harm, short of the singularity. As such, our best chance of protecting against these is to pool resources and expertise in developing the technology within agreed parameters. An untrammelled international arms race in AI could lead some countries to develop it in an irresponsible manner, prioritising achievement of immediate goals over safety.

Given the arbitrary nature of international borders, there is no reason why AI’s impacts should be self-contained within the country in which it originates. Instead, much like a wildfire, tsunami or virus, AI’s impacts will cross man-made boundaries with impunity. The danger of a country being cross-infected ought to encourage its leaders to promote international standards as a matter of national self-preservation as much as anything else.

6.4 Case Study: Space Law

On 17 December 1903, on a windy beach in North Carolina, USA, Orville Wright piloted the first powered aeroplane flight. Fewer than 60 years later, the USSR propelled the first human into orbit around the Earth. During the Cold War—which was at its most intense in the early 1960s—space technology gave rise to a number of concerns, the most immediate of which was the possibility of nuclear and conventional weapons being used from space.

As well as the security element, space technology was significant also to the scientific and cultural competition between the West and the Soviets. Each sought to prove that it was the world’s dominant civilisation through feats including putting the first man in space or the first man on the moon.

Following the USSR’s launch of Sputnik 1 in 1957—the first artificial satellite—the Western powers made a series of proposals to ban the use of outer space for military purposes.124 The USA and the USSR were the main participants in discussions, given that they were the powers most advanced in this field.125 However, at an early stage the discussion was also internationalised to include the views of nations without space technology: the UN General Assembly passed a unanimous resolution entitled the “Declaration of Legal Principles Governing the Activities of States in the Exploration and Use of Outer Space” in October 1963, calling upon all states to refrain from introducing weapons of mass destruction into outer space.126 This was despite the lack of any provisions within the declaration for verifying whether states were adhering to its terms.

After successive draft treaties were submitted by the USA and USSR, their positions gradually aligned. The text of the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies (the Outer Space Treaty), was agreed on 19 December 1966. The Outer Space Treaty was put to a vote of the General Assembly, which approved it by unanimous acclamation. It entered into force in October 1967.

To date, the Outer Space Treaty has been ratified by 62 States, which include all those with space exploration capacities. Article I provides: “… exploration and use of outer space, including the moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind”. Key provisions include an undertaking not to place in orbit around the Earth, install on the moon or any other celestial body, or otherwise station in outer space, any weapons of mass destruction; and limiting the use of celestial bodies exclusively to peaceful purposes. Other clauses emphasise the need for “co-operation and mutual assistance”, as well as the importance of “appropriate international consultations before proceeding with any [potentially harmful] activity or experiment”.127

These norms have been successful both in terms of adherence to the various prohibitions on militarisation and in fostering a continuing spirit of international cooperation concerning outer-space activities. The International Space Station was launched in 1998 and operates as a joint project between five space agencies.128 It is highly unlikely that this achievement would have been possible were it not for the Outer Space Treaty. The development of space law contains a number of lessons for AI.

First, it stands as a rebuke to those who suggest that states will be unwilling or unable to agree to international regulation of AI as a matter of principle; indeed, when the Outer Space Treaty was agreed at the height of the Cold War, the use of space was far more intertwined with national and international security, as well as prestige and pride, than AI is at present.

Secondly, the process of negotiation and affirmation of the principles eventually enshrined in the Outer Space Treaty was carried out not only between the nations which were at the time most advanced in the relevant technology, but also on an inclusive basis, thus ensuring that the principles agreed had legitimacy not just as between scientifically advanced nations, but across the entire international community.

Thirdly, the framers of the Outer Space Treaty adopted an incremental approach, starting with broad propositions that could be agreed by all nations, whilst leaving some gaps to be filled by later instruments. The Outer Space Treaty articulates a small number of high-level principles and prohibitions. It was followed by four other major international treaties on the topic.129

Fourthly, an international regulatory body, the United Nations Office for Outer Space Affairs, contributes to the sharing of information between nations, as well as capacity-building for less developed states, enabling them also to benefit from developments in the field. In so doing, it works closely alongside individual countries’ and regions’ own space agencies.130

7 Applying International Law to AI: The Toolbox

7.1 Traditional Structure of Public International Law

Laws operate at different levels. Civil and common law systems govern matters within individual countries. A separate body of law operates to regulate relations between countries: public international law.131

The traditional sources of international law are set out in Article 38(1) of the Statute of the International Court of Justice: treaties; international custom, as evidence of a general practice accepted as law132; the general principles of law recognised by civilised nations133; certain judicial decisions134; and the teachings of well-respected legal academics.135 Additionally, certain resolutions of the United Nations Security Council are accepted as being legally binding.136 Though public international law has applied historically to govern relations between sovereign states, its subjects now include private individuals, companies and international bodies, and non-governmental organisations.137

Aside from a small category of “peremptory” (or fundamental) laws, such as the prohibition on slavery, much public international law is voluntary to start with, but binding once agreed.138 For instance, a country can decide whether or not to accede to a treaty, and even if it does, it can usually do so with reservations, such that certain provisions of that treaty do not apply.139 The main reason is that individual countries have traditionally been viewed as independent sovereigns which are able to act unconstrained as regards their internal affairs.140

Any international system of regulations for AI is likely to require, at least at a high level, some form of treaty agreement so as to create a basic structural framework from which other norms can easily develop, for instance by creating the international regulator. Beyond the traditional forms of international law outlined above, the following sections set out a number of additional methods and techniques which might be used to build an effective system of regulation for AI.

7.2 Subsidiarity

The Catholic Church has, for over a millennium, balanced a system of centralised law-making focussed predominantly on the Vatican, with an incredibly wide jurisdiction stretching across much of the world. The Church developed a principle known as “subsidiarity”, whereby decisions are taken as closely as possible to the smallest possible unit of administration whilst maintaining the coherence and efficiency of the system as a whole. As Catholic theologian and legal academic Russell Hittinger explains, “the principle does not require ‘lowest possible level’ but rather the ‘proper level’”.141

The EU has also adopted subsidiarity, requiring that decisions be taken as closely as possible to the citizen and that constant checks are made to verify that action at EU level is justified in light of the possibilities available at national, regional or local level.142 In particular, the EU offers a structured approach to this principle: action at the EU level is only justified if (a) the objectives of the proposed action cannot be sufficiently achieved by individual Member States (i.e. necessity); and (b) the action can, by reason of its scale or effects, be implemented more successfully by the EU (i.e. added value).143

An AI regulator should utilise subsidiarity as a guiding principle when deciding whether and to what extent to lay down international rules. As is the case in the EU, it would be sensible if the actions of a global AI regulator could be challenged and overturned if they are found to breach the subsidiarity requirement.144

7.3 Varying Intensity of Regulation

It is a widely held misconception that there is a binary choice between laws which are “hard” and binding, or “soft” and merely persuasive.145 In fact, there is a range of options which international organisations can use to maintain the efficacy of regulation whilst respecting national sovereignty. The EU has a particularly nuanced menu, which includes the following146:

Regulations

A ‘regulation’ is a binding legislative act. It must be applied in its entirety across the EU. For example, when the EU wanted to make sure that there are common safeguards on goods imported from outside the EU, the Council adopted a regulation.147

Directives

A ‘directive’ is a legislative act that sets out a goal that all EU countries must achieve. However, it is up to the individual countries to devise their own laws on how to reach these goals. One example is the EU consumer rights directive, which strengthens rights for consumers across the EU….148

Decisions

A ‘decision’ is binding on those to whom it is addressed (e.g. an EU country or an individual company) and is directly applicable. For example, the Commission issued a decision on the EU participating in the work of various counter-terrorism organisations…149

Recommendations

A ‘recommendation’ is not binding. When the Commission issued a recommendation that EU countries’ law authorities improve their use of videoconferencing to help judicial services work better across borders, this did not have any legal consequences.150 A recommendation allows the institutions to make their views known and to suggest a line of action without imposing any legal obligation on those to whom it is addressed.

In addition to the above, another mechanism for creating softer international law is the promulgation of “guidance” or “guidelines”, which provide how an institution considers a certain rule or result ought to be achieved or implemented, albeit without formally requiring subjects to comply.151

International regulation for AI ought to be formed by a combination of the above options. Regulations are the bluntest instrument because they allow nations no discretion whatsoever as to their implementation. Their use ought therefore to be restricted to only the most fundamental principles, from which no national derogation of any kind would be permitted.

Rules (such as the EU’s Directives) which are binding only as to the result to be achieved offer a good compromise between the desirability of international rules and the prevalent instinct towards nations being able to choose their own methods and structures. The other selectively binding or non-binding options can be used as appropriate. Complete harmonisation of all laws relating to AI need not occur immediately. One model may be to begin with non-binding recommendations in certain areas, with a view to gradually increasing the extent to which they are mandatory over a number of years or even decades.152

7.4 Model Laws

Model laws allow an organisation to create a piece of legislation which its constituent members can adopt entirely, partially, or not at all. The advantage of a model law is that it has the detail of a full mandatory regulation but it does not require adherence.

Particularly in technical areas of regulation, it may be too costly and time-consuming for certain less wealthy nations to devote the resources needed to design such laws independently. Model laws allow nations to pool and share expertise, creating a common good which reflects the input of each. After model laws are enacted, countries can then draw on each other’s experiences as an aid to implementation and interpretation.

There can be an economic advantage in terms of increased trade between countries which harmonise their laws. Model laws are thus especially useful in fields which feature interstate commerce. One example of a particularly successful model law is the United Nations Commission on International Trade Law’s Model Law on International Commercial Arbitration, versions of which have since 1985 been adopted by many nations.153

Model laws are popular in some federal countries where each individual state has discretion to set its own laws, yet there are demonstrable advantages to those laws being similar or the same. To this end, the US National Conference of Commissioners on Uniform State Laws154 was formed in 1892 for the purpose of promoting “uniformity in state laws on all subjects where uniformity is deemed desirable and practicable”. To date, the Commissioners have approved more than two hundred uniform laws, some of which have been adopted across all states.155

A global AI regulator could be a source of model laws, drawing on expertise from nations around the world. This option may well be more attractive to nations which are wary of giving up their freedom to legislate more generally. As with the use of non-binding recommendations and principles, model laws could form the first step towards greater harmonisation, depending on their uptake and effectiveness.

7.5 An International Academy for AI Law and Regulation

One major objection to the idea of international regulation for AI is that it might come to be dominated by personnel from the most powerful and developed nations, on the basis that these countries have more resources to train the relevant experts in computer science, law and other fields. If a supposedly international body were controlled by specialists from just a handful of countries, then its legitimacy would be severely diminished. Without sufficiently trained personnel, some nations may not be able to develop or articulate their own viewpoints and may therefore be more inclined simply to follow the regional leaders or bloc groupings to which they are aligned.

A lack of properly trained personnel distributed around the world would lead also to a diminution in the effectiveness of any AI laws. Passing global regulations is one thing; implementing them is another. Careful coordination and interaction will be needed between any global body and the national or regional mechanisms which are to enforce its directives. Absent local personnel who understand and are aligned with the goals of the global regulatory structure, the actual enforcement of any regulation in many areas will be impossible.

A partial solution is to create an International Academy for AI Law and Regulation, which would aim to contribute to the development and dissemination of knowledge and expertise in international AI law. There are certainly social benefits to having classes at one centralised location, through which participants from around the world could meet each other in person and thereby foster a sense of shared objectives and international camaraderie. However, it would also be possible to disseminate courses via an online platform, as has been achieved with several recent highly popular online courses run by universities including Harvard and MIT.

There is precedent for such a body: in 1988, the International Maritime Organisation, the body which oversees much of the implementation of the global Law of the Sea, created the International Maritime Law Institute (IMLI) in Malta. The IMLI website explains: “The Institute provides suitably qualified candidates, particularly from developing countries, with high-level facilities for advanced training, study and research in international maritime law. It also focuses on legislative drafting techniques designed to assist participants in the process of incorporating international treaty rules into domestic law”.156

8 Implementation and Enforcement of AI Laws

8.1 Coordination with National Regulators

Different structures are available for an international institution. At one extreme, under a complete “top down” model the AI regulator would have its own staff, who might establish local offices and operate without recourse to or discussion with any national governments. The benefit of this would be a high degree of uniformity of application and enforcement of the relevant norms. However, such an intrusive model would doubtless be objectionable to governments and indeed many citizens as an interference with their sovereignty. Disobedience and resentment may result.

A far better model would be for the AI regulator to work in conjunction with established—or yet to be established—national authorities. These need not be new bodies, but states may find it useful to create new agencies at a local level. The EU’s regime for financial regulation does not refer to specific authorities in terms of setting national requirements, but refers rather to the “competent authority”, which is appointed by each Member State, and can be split between different national bodies if the Member State so desires.157

The designated national regulators of AI should each be required to have a minimum set of powers and competencies. For instance under the EU’s regime for financial markets, the competent authority in each Member State must have powers including to: “(a) have access to any document or other data in any form which the competent authority considers could be relevant for the performance of its duties and receive or take a copy of it; (b) require or demand the provision of information from any person and if necessary to summon and question a person with a view to obtaining information; (c) carry out on-site inspections or investigations; … (e) require the freezing or the sequestration of assets, or both; (f) require the temporary prohibition of professional activity; … [and] (q) issue public notices…”.158

To the extent that the local regulators lack the ability to achieve any of the minimum requirements, the global AI regulator might facilitate local institution and capacity-building programmes so as to train personnel on the ground, or for example to provide loans of the software and hardware needed to achieve the task. Training of personnel at the AI Academy might also foster such local growth.

A national AI regulatory body might be required to have the above powers, as well as others specific to AI, such as the ability to demand to view the source code of an AI system, and perhaps to be able to insist on amendments to programs which breach its requirements. As well as these more punitive measures, national AI bodies could also provide facilitative services such as the “sandboxing” of new technologies, namely the ability to test them in safe environments, as well as the licensing and certification of individuals and AI systems for compliance with relevant standards. Section 3.4 of Chapter 7 elaborates further on the methodology of regulatory sandboxes.

In an example of the type of international co-operation which could be applied to AI, in August 2018 the UK Financial Conduct Authority (FCA) and 11 other organisations created the Global Financial Innovation Network (GFIN). The FCA explained that GFIN “...will seek to provide a more efficient way for innovative firms to interact with regulators, helping them navigate between countries as they look to scale new ideas. It will also create a new framework for co-operation between financial services regulators on innovation-related topics, sharing different experiences and approaches”. Notably, the GFIN’s initial members included not just national financial regulators (such as the Australian Securities & Investments Commission and Hong Kong Monetary Authority) but also an NGO: the Consultative Group to Assist the Poor.159

8.2 Monitoring and Inspections

In order to ensure consistency of implementation and enforcement, the national model could be supplemented by a regime of periodic monitoring and inspections, either by the global regulator itself or by bodies operating at a regional level. The principle of subsidiarity ought to be used to decide which is most appropriate, though all things being equal it would usually be best for a country to be inspected and rated by its peers rather than at the global level. Regional organisations which could play such a role include, for instance, the African Union, the EU and the Association of Southeast Asian Nations.

Various international bodies already utilise periodic inspections. The International Atomic Energy Agency monitors civilian and military use of nuclear power in this manner. Like AI, nuclear power is a highly technical field whose monitoring requires a significant degree of training and expertise. Inspectors from any AI regulator would likewise need to be expert in their field. In order to facilitate trust in their independence and legitimacy, it would be advisable for such personnel to be drawn from all around the world and to operate in teams which are diverse in terms of national origin. Unlike the physical inspection of individuals and sites required for controlling doping in sports, nuclear materials and chemical weapons, it is possible that AI inspections could occur by remote access or even via distributed ledgers. These features may render an international monitoring system for AI less prone to obstruction by recalcitrant regimes than is the case for other technologies.
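As an illustration only, the following Python sketch shows one way such remote, tamper-evident verification might work; every name and structure in it is hypothetical rather than a description of any existing inspection regime. A developer registers a cryptographic fingerprint of each AI system release in an append-only hash chain, and an inspector can later confirm, without a site visit, both that a deployed artefact matches a registered fingerprint and that the record has not been rewritten.

```python
# Hypothetical sketch of remote AI inspection via an append-only ledger.
# Nothing here reflects an actual regulator's protocol.

import hashlib
import json
import time

def fingerprint(model_bytes: bytes) -> str:
    """SHA-256 digest of a model artefact (weights, config, etc.)."""
    return hashlib.sha256(model_bytes).hexdigest()

class AuditLedger:
    """A minimal append-only chain: each entry commits to its
    predecessor, so past registrations cannot be silently rewritten."""

    def __init__(self):
        self.entries = []

    def register(self, system_id: str, model_bytes: bytes) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "system_id": system_id,
            "model_hash": fingerprint(model_bytes),
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Inspector's remote check: recompute every link in the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or recomputed != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

# Usage: a developer registers a release; an inspector verifies remotely.
ledger = AuditLedger()
ledger.register("traffic-ai-v1", b"...serialized model weights...")
assert ledger.verify_chain()
```

A scheme of this kind would not replace substantive review of an AI system’s behaviour, but it illustrates how an inspectorate might verify at a distance that the system in deployment is the one that was audited.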

8.3 Sanctions for Non-compliance

Once states have agreed to be bound by an international system of norms, sanctions for non-compliance are among the most difficult aspects of a regulatory scheme to design and enforce. Indeed, some international accords do not contain any form of sanction-based enforcement mechanism at all: the Paris Climate Agreement of 2015 creates a mechanism to secure compliance, but expressly provides that it shall be “non-adversarial and non-punitive”.160 Even where sanctions are available in theory, political considerations may render them impossible to implement. The UN Security Council is one of the most prominent bodies under international law empowered to order sanctions, and yet it is frequently unable to do so owing to structural deadlock—not least because of the power of the Permanent Five members (the USA, Russia, France, the UK and China) to veto any resolution with which they disagree.

Furthermore, some states have refused to become party to treaties such as the Rome Statute of the International Criminal Court out of a concern that its enforcement mechanisms might be used to target their citizens for political purposes, rather than those for which the institution was originally established.161 It will be important to ensure that any regulatory body for AI is not debased by political machinations, and instead remains true to its role as a regulatory and standard-setting body. One possible way to reduce the risk of politicisation of an AI regulator is to require that the membership of any body that has the power to recommend sanctions is properly qualified, and not filled by purely political appointees answerable to national governments.162

Rather than having to resort to direct sanctions, it would be preferable for parties to an international agreement on AI to adhere to its provisions through self-interest in maintaining the integrity of the system as a whole. However, there will be occasions where states choose not to comply, in which case a system of sanctions may be necessary as a last resort. Instead of economic penalties, it would be preferable in the first instance to develop sanctions which are self-contained within the structure of the global regulator itself. These might include suspending certain membership or voting rights of a member which is in persistent violation. If the system is well designed, states’ desire to be part of the international standard-setting body would be sufficiently strong as to provide its own incentive to comply. Failure to do so could see the relevant state lose its place at the table.

8.4 Case Study: The EU’s Sanctioning Method for Member States

The EU has a form of self-contained sanctions, which were invoked for the first time against Poland in late 2017, in response to judicial reforms in that country which were deemed to breach minimum standards required of EU Member States to safeguard the rule of law.163 In order for such sanctions to take effect, they must pass through a number of stages, at each of which the country in default of its obligations is encouraged to enter into dialogue with a view to rectifying the situation. The first stage of this process was the European Commission proposing to another EU body, the Council of Ministers, that sanctions be imposed. This triggered a three-month period for Poland to comply with the Council’s requests.164

The EU Member States considered that Poland’s actions constituted a “clear risk of a serious breach by a Member State of the values referred to in Article 2”, namely: “respect for human dignity, freedom, democracy, equality, [and] the rule of law and respect for human rights, including the rights of persons belonging to minorities”. Rather than fining Poland or seeking personal sanctions against its Ministers, the EU voted to begin the process of invoking Article 7 of the Treaty on the European Union, which provides that Member States “may decide to suspend certain of the rights deriving from the application of the Treaties to the Member State in question, including the voting rights of the representative of the government of that Member State in the Council”.165

The EU’s sanctioning method represents a somewhat useful precedent because: (a) it stipulates a limited number of high-level principles; and (b) there is a relatively high threshold required for their breach to have legal effects (“clear risk of a serious breach”). The EU’s sanctions are not perfect, however: Article 7.3 of the Treaty requires that Member States vote unanimously to move to the final stage of enforcement: something which is unlikely to be achieved save in the most extreme of cases (the above Polish example being a situation where further sanctions will likely be vetoed by the country’s regional allies). A better system might allow punishments based merely on some kind of super-majority.

8.5 Case Study: OECD Guidelines for Multinational Enterprises

The Organisation for Economic Cooperation and Development (OECD) is a forum whereby the governments of 30 democracies collaborate to address the economic, social and environmental challenges of globalisation.166 Originally articulated in 1976, the OECD’s Guidelines for Multinational Enterprises (the Guidelines) have undergone a number of revisions, most notably the addition of a human rights chapter in 2011.167 The Guidelines are a series of recommendations addressed by governments to multinational enterprises (in other words, a type of soft law). They provide voluntary principles and standards for responsible business conduct consistent with applicable laws.

The Guidelines are designed to apply to multinational enterprises (i.e. companies, organisations or groups) which operate in a number of jurisdictions and to secure a minimum degree of compliance with international best practices, especially in developing countries where the methods of protection and enforcement of such standards are otherwise weak.168

The principal means of enforcement of the Guidelines is a series of National Contact Points (NCPs), which are required by the OECD to be established in each state party. The role of the NCPs includes furthering the effectiveness of the Guidelines by undertaking promotional activities and handling enquiries. Governments have discretion as to how the NCP should be formed, for example whether it is part of the Executive, or independent of it. NCPs must, however, be “functionally equivalent” to each other, and to this end must all function in “a visible, accessible, transparent, and accountable manner”.169

As well as their educative function, a major feature of NCPs is to facilitate the resolution of complaints made against multinational enterprises for alleged breaches of the Guidelines. Where the NCP considers that there is a case to answer in respect of an alleged breach, it will attempt to establish dialogue between the complainant and the multinational enterprise with a view to resolving the matter to the satisfaction of both parties. If this is not possible and the breach is proved, then the NCP can issue a declaration of non-compliance against the party in breach. As at 2016, over 360 complaints had been handled by NCPs, addressing impacts from business operations in over 100 countries and territories.170

Although the Guidelines lack a specific punishment mechanism, the “naming and shaming” approach, as well as the facilitation of dialogue between parties, has been largely successful. Reasons for compliance despite the lack of punishment include the avoidance of bad publicity.171 Governments also consider fulfilment of the Guidelines when making economic decisions, including as to public procurement and the provision of diplomatic support to a company’s operations abroad. The OECD records that:

Between 2011 and 2015, approximately half of all specific instances which were accepted for further examination by NCPs resulted in an agreement between the parties. Agreements reached through NCP processes were often paired with other types of outcomes such as follow-up plans and have led to significant results, including changes to company policies, remediation of adverse impacts, and strengthened relationships between parties. Of all specific instances accepted for further examination between 2011-2015, approximately 36% resulted in an internal policy change by the company in question, contributing to potential prevention of adverse impacts in the future.172

Beyond the indirect economic and reputational risks for companies which breach them, the Guidelines may come to influence substantive law in some of the countries where they are implemented, particularly where local laws require compliance with international best practices.173

In summary, the Guidelines are an example of how a non-binding and non-punitive system of rules and norms can achieve a high degree of compliance and effectiveness through gradualist, behaviour-shaping activities, whilst at the same time respecting national differences.

9 Conclusions on Building a Regulator

According to ancient Mesopotamian legend, repeated in the Old Testament, at one time “the whole earth was of one language, and of one speech”.174 At Babel, the people decided to build a tower so high that it would reach the heavens. God saw this tower, and realised the extraordinary power that mankind was able to exercise through acting together:

And the Lord said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.175

God’s solution to this challenge was to “confound their language, that they may not understand one another’s speech”. People still had the physical tools to rebuild the tower, but without a shared language they lacked a common purpose. The legend of Babel is usually recounted as a cautionary tale of mankind’s overweening vanity, but it illustrates also the achievements humanity can make if we can overcome ethno-nationalism and instead learn to collaborate across cultures and borders.

Nations have not yet reached definitive positions on how AI should be governed. The clay of public opinion remains unformed. We have a unique opportunity to create laws and principles to govern AI from a shared basis, a new common language. If each country adopts its own rules for AI—or worse, none at all—we stand to bring upon ourselves the curse of Babel once again.