1 Creators and Creations
Questions of responsibility address legal liability for harm after it has occurred. The final two chapters of this book consider how we can prevent undesirable consequences arising in the first place.1 In so doing, we will engage with the third major issue raised by AI: “How should ethical standards be applied to the new technology?”
This chapter and Chapter 8 distinguish between those rules which apply to “creators” and those which apply to “creations”. The term creators refers to the humans who (at present) design, programme, operate, collaborate with and otherwise interact with AI. Creations means the AI itself.
Technology journalist John Markoff writes in Machines of Loving Grace: “[t]he best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems”.2 This is true up to a point, but the answers to “hard questions” will also be shaped by what is technologically possible. Although human input is needed to write both sets of rules, the distinction relates to the addressees of standards rather than their source: rules for creators tell humans what to do; rules for creations do this for AI.
Rules for creators are a set of design ethics. They have an indirect effect in that the potential benefit or harm that they are seeking to promote or restrain happens via another entity: the AI. As a rule of thumb, rules for creators will be expressed in the following form: “When designing, operating or interacting with AI, you should…”. By contrast, the usual formulation of rules for creations will be simply: “You (the AI) should…”.
The remainder of this chapter will focus on rules for creators. Building from the ground up, it will first discuss how to create appropriate institutions for setting ethical rules; secondly, it will assess various codes proposed to date; and thirdly, it will consider how rules for creators might be implemented and enforced.
Most of the bodies writing moral codes for AI engineers3 have started—like Asimov did in writing his Laws of Robotics—at the second stage, with little consideration for the first and third. If any effective regulation for AI is to be possible, these other elements will be just as important as the rules’ substance, if not more so.
2 A Moral Regulator: The Quest for Legitimacy
Some choices in industrial and technical regulation are important but arbitrary. For example, people are more interested in being able to listen to their favourite radio stations than in choosing the delineation between radio frequencies allocated for civilian use and those allocated to public services such as the police or military. By contrast, most members of the public will have an opinion on the ethics and legality of “moral” matters such as euthanasia, abortion or gay marriage. Chapter 2 showed how AI is now engaging in these important moral choices.
It is suggested below that far-reaching decisions on moral questions should not be left to a technocratic elite. When designing ethical standards for AI, the first task is to ensure that ethical regulations take into account input from an appropriate range of sources.
2.1 “…of the People, by the People, for the People”
At Gettysburg, Abraham Lincoln spoke of “government of the people, by the people, for the people”. In their discussion of how to legislate for new technology, Morag Goodwin and Roger Brownsword use this quotation to illustrate the point that: “…procedure lies at the heart of what we understand political legitimacy to be”.4
The UK’s decision to leave the EU—“Brexit”—is an example of what can happen if a legal system is perceived to lack legitimacy by all or some of its subjects. British citizens benefited from a range of social and economic protections under EU laws,5 but this did not stop 52% of voters rejecting these so-called “foreign” and “undemocratic” laws in order to “take back control”6 of their laws in a 2016 referendum.
One major source of frustration for “Leave” voters seems to have been a notion that the EU institutions lacked legitimacy to make laws binding on the UK.7 This visceral feeling of distance and disenfranchisement from the EU serves as a cautionary tale of how a benign system of laws might be rejected by even a relatively prosperous and well-educated population if those laws do not garner sufficient public support.8
Public attitudes towards AI are at a crossroads: a consumer survey in Germany, the USA and Japan suggested that most people are comfortable with robots being part of their daily life.9 Likewise, an IPSOS Mori poll in April 2017 found that 29% of the UK public considered the benefits of machine learning to outweigh the risks, 36% viewed them as balanced, and 29% considered machine learning to be more risky than beneficial (the remaining 7% said they didn’t know).10 It appears that whilst many have misgivings, people are as yet relatively open to AI having a greater role. It is possible that public opinion could tip either way, in favour of or against AI.11
Chapter 5 discussed at Section 3.6.4 the potential development of a dichotomy between Technophiles who embrace AI wholeheartedly and neo-Luddites who are fearful of or even hostile towards AI based on a combination of economic and social concerns. Unless there is a process of public consultation, the void in public discourse is liable to be filled by pressure groups advocating a dogmatic view that a new technology be banned from some or perhaps even all applications. The following sections suggest how governments might avoid such a situation.
2.2 Case Study: GM Crops and Food Safety
The varying reactions to genetically modified (GM) crops demonstrate the importance of public consultation on new technology.12 In the early 1970s, scientists developed a technique to transfer DNA from one organism to another. The prospects for improved agriculture were clear from the outset: if selected strands of DNA from one organism could be transferred to another, the second organism might then acquire a new trait.13 In a famous example, DNA from a fish able to survive in very cold water was added to a tomato plant, enabling the latter to better resist frost.14
It might be expected that the ability to produce plants which can resist common scourges such as extreme weather, parasites and disease would be celebrated. Instead, there was a significant public backlash in the EU. Although there is little evidence of GM crops being dangerous to humans or environmentally harmful, a fear of unknown risks and a sense of scientists “tampering with nature” led many to refuse to purchase GM crops or even to support campaigns that they be banned altogether.15 In April 2004, various major biotech companies abandoned GM field trials in England, citing concerns raised by British consumers.16 In 2015, more than half of the EU Member States banned their farmers from growing GM crops.17 In fact, this ban made very little practical difference: prior to 2015, only one GM crop had ever been approved and grown in the EU.18
Commentators have contrasted the European experience with that of the USA, where a central regulator was able to approve the technology and allay public fears:
In contrast [to the US], Europe had no central regulator to greenlight the technology and allay public fears and biotechnology was dealt with as a novel process requiring novel regulatory provisions… European field tests in the early 1990s failed to engage discussions between the public and governmental agencies23
It might be objected that consultative rule-making is a Western conceit suited only to liberal democracies. However, even in countries which have a non-democratic system of government, regulators have recognised the importance of public trust. Although the Chinese state exercises a significant degree of control over citizens’ lives, a 2015 survey indicated that 71% of Chinese consumers considered food and drug safety to be a problem, following repeated scandals concerning tainted milk, oil, meat and even counterfeit eggs.24 Many of the opportunities for adulteration arise from increased use of technology in the production and processing of food. In a traditional agrarian society, people purchase their food from a known food source, such as the farmer or perhaps through a single intermediary such as a market seller. Abandoning this model in favour of industrialised mass-production of food requires great trust in the integrity of the new system. In direct response to these issues, the Chinese state has in recent years attempted to tighten standards, for instance by passing a Food Safety Law in 2015.25
The Chinese example indicates that regardless of the culture or type of society involved, public confidence in rule-making standards for new technology is paramount to successful implementation and adoption.26 Without this, those people who have a choice whether or not to use the new technology may elect not to do so (as is the case with European consumers and GM foods). Where people do not have a choice (as is the case with Chinese food consumers), then they will be forced to engage with a technology they do not trust and in consequence social cohesion and trust in the institutions of government stand to be eroded.
3 Collaborative Lawmaking
The above examples illustrate how important it is to include subjects in the development of rules for new technology. Even if the legislation would have looked the same without public consultation, the important point is that the lawmakers should be seen to involve citizens and stakeholders. Doing so allows the public, and particularly those groups most affected by any new technology, to feel that they are part of the process and thereby to take greater ownership of any eventual regulations created. This is likely to precipitate a virtuous circle where collaborative regulation leads to greater uptake of the technology, which in turn leads to better feedback and adjustment of the rules.27
Rousseau wrote in The Social Contract that “the general will, to be really such… must both come from all and apply to all”.28 The fundamental right of citizens to participate in public affairs received international recognition in the twentieth century. It is enshrined in Article 25 of the International Covenant on Civil and Political Rights of 1966, a treaty with 169 States Parties: “Every citizen shall have the right and opportunity… to take part in the conduct of public affairs”.29
When putting this lofty rhetoric into practice, there is no “one size fits all” solution to involving the public in decision-making on AI. Instead, each country and, if appropriate, region ought to define its position in accordance with local political traditions. Although this book supports deliberative and responsive lawmaking, its aim is to suggest regulation for AI which is compatible with all existing legal and political systems. In order to achieve this balance, a degree of flexibility is necessary. Nonetheless, there are certain important tools and techniques which governments of all kinds can use to achieve the aim of civic participation.
Two of the most important factors in the success of public engagement with regulation will be the provision of information and education concerning the new technology. These prerequisites encourage people to make informed decisions as and when their opinion is sought.30 As shown later in this chapter, public education is also deeply important to the effective enforcement of norms. Another background condition for effective public participation is the freedom of speech for individuals and groups to voice their opinions and create a marketplace of ideas.31
The public does not speak with one voice. When a regulator has to take a decision between competing regulatory options, one or more parts of the population may well be left dissatisfied with the outcome. The solution for policy-makers is to engender a sense of what the political philosopher John Rawls called “public reason” among subjects. This describes the notion that in a just society, rules which regulate public life should be justifiable or acceptable to all those affected. This does not mean that each individual citizen needs to agree with every rule, but they should at least consent to the system. There should be some common ideal which is accepted by all and which forms the basis for the legitimacy of the relevant lawmaking institutions.32
An AI regulator should take care to ensure that participation includes so far as possible a representative sample reflecting the entirety of society, adjusted, for example, by features including gender, geographic distribution, socio-economic background, religion and race. If one or more groups are deliberately or accidentally excluded from the consultation process, then policy decisions will lack legitimacy among those parts of the population, and future social fissures may result.33 Diversity is an issue particularly pertinent to AI, where many have already voiced fears that AI systems are likely to reflect the inherent biases of predominantly white, male engineers.34
A range of methods should be used to solicit opinions. For instance, a government or legislature could hold public consultations to gauge public views. Such consultations could be augmented by methodologies popular in the private sector, including targeted focus groups to solicit the opinions of key segments of the population who might otherwise not be reached. The legislature might also invite interest groups and experts to a series of open forums. The UK’s All Party Parliamentary Group (APPG) on AI held meetings over the course of 2017 and 2018 where experts were questioned by members of the legislature and the public on various issues. These meetings were streamed live and made available online.35 In the USA, a proposed rule is published in the Federal Register and then opened to public discussion, a procedure known as “notice and comment”.36
Between February and June 2017, the European Parliament undertook an online consultation on public attitudes to AI, with a particular emphasis on civil law rules. This was open worldwide to anyone who wished to respond, and published in all EU official languages. It included two separate questionnaires, adapted to their audience: a shorter version for the general public and a longer one for specialists.37 The survey provides empirical support for some of the policy solutions suggested in this book; among its key findings, a large majority of respondents expressed “the need for public regulation in the area” and considered “this regulation should be done at EU and/or International level”.38
Though it was theoretically a worldwide survey, the number of respondents was very small: there were only 39 responses from organisations and 259 from private individuals. Of the private individuals, 72% were male, and 65% had a master’s level or more advanced degree, putting them into a tiny minority of the world’s population. More meaningful surveys will need to do a better job of ensuring that a broad spectrum of the world’s population participates.
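One technique for mitigating such skew, familiar from opinion polling, is post-stratification: respondents from over-represented groups are down-weighted so that the weighted sample matches known population shares. The sketch below is a minimal illustration of the idea using a single demographic dimension; the figures, group labels and responses are invented and are not drawn from the European Parliament's consultation.

```python
# Minimal sketch of post-stratification weighting (illustrative figures only):
# respondents from over-represented groups are down-weighted so that the
# weighted sample matches assumed population shares.
population_share = {"male": 0.50, "female": 0.50}   # assumed population figures
sample_share = {"male": 0.72, "female": 0.28}        # e.g. the kind of skew reported above

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# (group, answered "AI needs public regulation"? 1 = yes, 0 = no) -- invented data
responses = [("male", 1), ("male", 0), ("female", 1), ("female", 1)]

weighted_support = sum(weights[g] * ans for g, ans in responses) / sum(
    weights[g] for g, _ in responses
)
print(f"weighted support: {weighted_support:.2f}")
```

A more serious exercise would weight across several dimensions at once (gender, age, region, education), but the principle is the same: the analysis corrects, at least partially, for who chose to respond.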
The Open Roboethics Institute (ORI) is perhaps a superior example of how an organisation can take an inclusive “bottom-up” approach to ethical questions. Founded in 2012, it has explored ethical questions on topics including self-driving vehicles, care robots and lethal autonomous weapons systems, with a particular emphasis on involving stakeholders from different groups.39 The ORI’s methods include questionnaires and surveys which are accompanied by neutral and balanced explanations of the technology and issues involved. Importantly, it uses simple and accessible language as opposed to the dry technical wording sometimes favoured by experts in AI or law; for instance an ORI poll on the role of AI in social care was snappily titled “Would you trust a robot to take care of your grandma?”.40
MIT’s “Moral Machine” simulator is another interesting method of a bottom-up approach to ethical standard setting. This is a website operated by the MIT Media Lab, available at the time of writing in ten languages, which operates as “a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars”.41 The website explains: “[w]e show you moral dilemmas, where a driverless car must choose the lesser of two evils… As an outside observer, you judge which outcome you think is more acceptable”. In other words, the simulator is a practical example of various iterations of the Trolley Problem described in previous chapters. The Moral Machine project has been successful in gathering responses from a wide range of participants: by the end of 2017, 1.3 million people had responded. It has generated scientific papers of significant interest.42 Results also revealed certain regional variations in opinions.43
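To illustrate how responses of this kind might be analysed, the sketch below tallies pairwise dilemma judgments by region and reports the share of respondents choosing each outcome. The records, scenario labels and regions are hypothetical; this is not the Moral Machine dataset or its published methodology.

```python
# Minimal sketch: aggregating crowd-sourced dilemma judgments by region
# (hypothetical records, not Moral Machine data).
from collections import defaultdict

# Each record: (region, scenario, option the respondent judged more acceptable)
responses = [
    ("Europe", "pedestrians_vs_passengers", "spare_pedestrians"),
    ("Europe", "pedestrians_vs_passengers", "spare_passengers"),
    ("Asia", "pedestrians_vs_passengers", "spare_pedestrians"),
    ("Asia", "pedestrians_vs_passengers", "spare_pedestrians"),
]

def preference_shares(records):
    """Return, per (region, scenario), the share of respondents choosing each option."""
    counts = defaultdict(lambda: defaultdict(int))
    for region, scenario, choice in records:
        counts[(region, scenario)][choice] += 1
    return {
        key: {choice: n / sum(opts.values()) for choice, n in opts.items()}
        for key, opts in counts.items()
    }

for (region, scenario), shares in preference_shares(responses).items():
    print(region, scenario, shares)
```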
This type of crowd-sourced research is only one aspect of the type of consultative exercise needed in developing regulation. Care would still need to be taken to ensure that those taking the tests are representative, and governments ought also to avoid succumbing to the tyranny of the majority. Nonetheless, the Moral Machine project represents a valuable example of how public engagement can be encouraged in novel ethical issues to which AI gives rise.
3.1 Multidisciplinary Experts
Regulating AI requires expertise from a variety of fields. Clearly, computer scientists and designers of AI ought to be involved. As noted above, any guidance would need to have a firm grounding in what is technically achievable.
Within the field of computer science, there are many different approaches to developing AI. Though deep learning and the use of neural networks are perhaps the most promising techniques at present, there are several other methods, including whole brain emulation and human–computer interfaces, which may also be capable of generating AI (and indeed perhaps even more powerful AI than can be generated by neural networks). For these reasons, it will be important to ensure that the range of technical AI experts is internally diverse.
Lawyers would be needed to draft the relevant guidance and also to explain how it would interact with existing laws. Though there is no closed list of those areas affected by AI, other professions to be represented in any consultation (and indeed in any regulator) would include ethicists, theologians and philosophers, medical personnel and specialists in robotics and engineering. Discussions are already underway within many of these professional groups as to how to react to AI, but the greater challenge will be to bring about a cross-fertilisation of ideas between them.
3.2 Stakeholders, Interest Groups and NGOs
Consultation should include those with a special interest in the regulation of AI and its particular applications. For instance, when designing any rules pertinent to medicine and care, it would be appropriate to consult with medical organisations, such as professional doctors’ bodies, as well as patient representative groups.
Those collating the information should bear in mind that NGOs, interest groups and other stakeholders are likely to have particularly strong and pronounced views. In the light of the GM example above, it will be important for AI regulators to ensure that they are not swayed by a small but noisy minority. In a notorious example, when the Netherlands government attempted a public consultation on GM foods, a coalition of anti-GM NGOs attempted to influence the evidence that could be made available to the public in order to shape the debate in their favour.44
3.3 Companies
Companies which produce AI technologies will clearly be an important determinant of regulation, as they are likely to be its most immediate subjects. The largest companies already have highly developed policy teams who are very experienced in liaising with regulators and governments—particularly in the fields of antitrust and, increasingly, in data privacy. Smaller companies often join industry associations such as the Confederation of British Industry, which form powerful lobbies for their members’ interests.
One growing source of industry-led regulation for AI is collectives of companies as well as other interest groups, such as the Partnership on AI which, as discussed in Chapter 6, was formed originally by US tech giants45 Google, DeepMind, IBM, Facebook, Microsoft, Amazon and Apple.46 Organisations such as the Partnership should certainly play a role in the regulation of AI, but for the reasons given in Chapter 5 (including their focus on shareholder profit rather than public benefit, as well as the voluntary nature of self-regulation), they are inappropriate as the sole source of rules on AI.
A further problem with the Partnership is that it is formed, funded and at least partially controlled by major tech powers (albeit that its Board of Directors now contains an equal split between for-profit and not-for-profit entities). It does not include the many small to medium-sized enterprises which are also developing AI. If the major tech powers are able to play a significant role in shaping AI regulatory policy, then they might tend to do so in a way which is harmful to competition and innovation from smaller rivals.
3.4 Case Study: The FCA FinTech Sandbox
The UK Financial Conduct Authority (FCA) operates a regulatory “sandbox” for FinTech firms. The sandbox seeks to provide firms with:
- the ability to test products and services in a controlled environment
- reduced time-to-market at potentially lower cost
- support in identifying appropriate consumer protection safeguards to build into new products and services
- better access to finance
The sandbox also offers tools such as restricted authorisation, individual guidance, informal steers, waivers and no enforcement action letters.47
The FCA sandbox is not tailored to AI in particular, but many of its techniques could be applied in this field. For instance, the FCA sandbox allows FinTech companies to test their products on real consumers; for non-consumer facing (or back end) AI, the relevant challenge might be testing how the programme interacts with existing technology or even other AI still under development.
For AI, sandboxes will work particularly well in circumstances where current laws require that a human always be in control of a particular decision or process, such that absent a sandbox, using the AI at all might be illegal. A sandbox could be used to demonstrate the safety and efficiency of the AI system on a small scale, precipitating its wider legalisation for the rest of the jurisdiction (accompanied of course by appropriate safety standards, which the government will have also tested out in the sandbox).
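By way of illustration, a minimal sketch of such a gate is set out below. The cohort, scoring function and threshold are all invented: the point is simply that the AI decides only for a small, identified test group, every sandboxed decision is logged for review by the firm and the regulator, and everyone else continues to receive a human decision.

```python
# Minimal sketch of a regulatory-sandbox gate (hypothetical names and thresholds).
import logging

logging.basicConfig(level=logging.INFO)
SANDBOX_COHORT = {"user-0017", "user-0042"}     # consenting test users only

def ai_decision(application: dict) -> str:
    # stand-in for the AI system under trial
    return "approve" if application["score"] > 0.7 else "refer"

def decide(application: dict, user_id: str) -> str:
    if user_id in SANDBOX_COHORT:
        outcome = ai_decision(application)
        logging.info("sandbox decision user=%s outcome=%s", user_id, outcome)
        return outcome
    return "refer_to_human"                     # outside the sandbox a human decides

print(decide({"score": 0.9}, "user-0017"))      # AI decides, decision logged
print(decide({"score": 0.9}, "user-9999"))      # human remains in control
```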
A similar sandbox-type approach has been used in other countries and industrial sectors. The Monetary Authority of Singapore also operates a regulatory sandbox for FinTech companies and products.48 The Spanish region of Catalonia provides test bed facilities for autonomous vehicles, linking car manufacturers (Seat, Nissan), industry representatives (Ficosa, which produces automobile parts), telecommunication companies, academia and legislators (such as the transportation service and the Mayoralty of Barcelona).49
Data collected could be shared between both government and industry, thereby allowing each to benefit from the improved information. Sandboxing has an advantage over traditional industry consultation in that governments should be less reliant on slick presentations prepared by expensive lobbyists and public relations consultants and can instead focus on the empirical results of a trial. Developing technology in virtual reality simulations allows for yet further scope in terms of being able to model the behaviour of AI in an extremely complex system.
Early indications appear positive. In the FCA’s own assessment:
A number of indicators suggest that the sandbox is beginning to have a positive impact in terms of price and quality… As more firms with better products and services enter the market, we expect competitive pressure to improve incumbent firms’ consumer propositions50
The sandbox has enabled a variety of tests from firms with innovative business models that look to address the needs of more vulnerable consumers who may be particularly at risk of financial exclusion. The House of Lords Select Committee on Financial Inclusion published a report in March 2017 which cited the FCA sandbox as a positive way of encouraging fintech solutions to aspects of financial exclusion.51
Promoting the inclusion of the whole of society is essential to creating a sustainable environment for AI regulation and growth in the longer term. As discussed in Chapter 6 at Section 8.1, the FCA FinTech sandbox is now part of a global collaboration of financial regulators—demonstrating that this type of flexible and responsive governance technique presents multiple lessons for future AI regulation.
3.5 Industry Standards Bodies
Another type of industry-led regulation comes from standard-setting bodies. At the national level, these include the British Standards Institution,52 the US National Institute of Standards and Technology53 and the Japanese Industrial Standards Committee. Some operate internationally, such as the International Organisation for Standardisation (ISO).54 The Institute of Electrical and Electronics Engineers (IEEE) is a professional body rather than a standards organisation, but in AI (as well as some other fields) it plays a standard-setting role. The Association for Computing Machinery is another professional body which promulgates non-binding standards of best practices in this field.55
At both the national and international levels, standards bodies play a crucial role in both setting and updating standards, as well as ensuring interoperability between different products and technologies.56 Standards bodies are usually formed by a large number of members; as at January 2018, the IEEE website listed over 420,000 members across 160 countries.57 The ISO is the umbrella body for national standards bodies, again incorporating over 160 countries.58 This diffuse membership means that they are likely to be less easily dominated by a small group of powerful corporate interests than organisations such as the Partnership, which have far fewer members and less transparent decision-making processes.
The international standards bodies provide a good example for how worldwide regulation of AI might function, in terms of their wide membership, scope of coverage and technical expertise. It might therefore be argued that industry standard-setting bodies ought to be the sole source of regulation for AI. This would be going too far; the ISO, IEEE and organisations like them are well-suited to the formulation of technical standards which do not have an ethical or societal dimension. Technical standards bodies are adept at dealing with arbitrary or uncontroversial, as opposed to moral choices.
The following case study explores the type of extra elements which an ethical regulator ought to incorporate.
3.6 Case Study: The UK Human Fertilisation and Embryology Authority
The Human Fertilisation and Embryology Authority (HFEA) is the UK’s independent regulator of fertility treatment and research using human embryos. The process by which it came into being and now operates contains many lessons for developing a similar regulator for AI.
In the late 1970s and early 1980s, scientists made significant advances in the field of biological reproduction, including in particular fertilisation and embryology. The first child resulting from in vitro fertilisation (popularly known as a “test-tube baby”) was born in 1978. Though much of this technology was at a theoretical stage, it seemed likely that the new developments would allow far greater scope for detecting and potentially remedying defects in embryos at an early point. Matters previously in the realm of science fiction, such as animal and human cloning, were no longer out of the question.
In 1982, the UK Government commissioned a report by a Committee of Inquiry into Human Fertilisation and Embryology chaired by Dame Mary Warnock (the Warnock Inquiry). The panel included a judge, consultant obstetricians and gynaecologists, a professor of theology, professors of psychology and directors of research institutes.59 Its mandate was: “[t]o consider recent and potential developments in medicine and science related to human fertilisation and embryology; to consider what policies and safeguards should be applied, including consideration of the social, ethical and legal implications of these developments; and to make recommendations”.60
What is common (and this too we have discovered from the evidence) is that people generally want some principles or other to govern the development and use of the new techniques… But in our pluralistic society it is not to be expected that any one set of principles can be enunciated to be completely accepted by everyone… The law itself, binding on everyone in society, whatever their beliefs, is the embodiment of a common moral position… In recommending legislation, then, we are recommending a kind of society that we can, all of us, praise and admire, even if, in detail, we may individually wish that it were different.61
… this is not exclusively, or even primarily, a medical or scientific body. It is concerned essentially with broader matters and with the protection of the public interest. If the public is to have confidence that this is an independent body, which is not to be unduly influenced by sectional interests, its membership must be wide-ranging and in particular the lay interests should be well represented.62
In accordance with the Committee’s proposals, the HFEA was created in 1990. Today, clinics and research centres in the field of reproductive technology must be inspected at least every two years by the HFEA to make sure they are continuing to operate safe, legal and quality services and research. Mindful of its public-facing role, the HFEA seeks to educate and inform not just those involved in the embryology industry but also the general public, for instance by maintaining a website with clear explanations of its role.63
3.7 A Minister for AI?
The HFEA is a successful model, but the same aims can be achieved through different institutional structures. Another option is to create a dedicated ministry for AI within the government, as the UAE did in October 2017 when it appointed the world’s first Minister of State for Artificial Intelligence.
At this point, it’s really about starting conversations — beginning conversations about regulations and figuring out what needs to be implemented in order to get to where we want to be. I hope that we can work with other governments and the private sector to help in our discussions and to really increase global participation in this debate. With regards to AI, one country can’t do everything. It’s a global effort67
In its 2017 findings, a key recommendation of the UK’s APPG on AI was for the Government to create a new UK Minister for AI.68 Of course, there is a significant gap between rhetoric and action, but the UAE’s move is nonetheless significant; it may not be long before other countries follow suit. Once the institution has been created, the next question is what output it might have.
4 Proposed Regulatory Codes
4.1 The Roboethics Roadmap
An early attempt to map the ethical issues raised by robotics was the Roboethics Roadmap, which emerged from the EURON Roboethics Atelier. Its authors were careful to explain what the document was not:
It is not a list of Questions & Answers. Actually there are no easy answers and the complex fields require careful consideration.
It cannot be a Declaration of Principles. The Euron Roboethics Atelier, and the sideline discussion undertaken, cannot be regarded as the institutional committee of scientists and experts entitled to draw a Declaration of Principles on Roboethics.70
The Roboethics Roadmap did not seek to create regulations, but rather laid down a challenge to others to do so. Since then, various organisations have taken up the mantle. What follows is a non-exhaustive selection of some of the most influential proposals to date.
4.2 The EPSRC and AHRC “Principles of Robotics”
In September 2010, a multidisciplinary group of UK academics including representatives from technology, industry, the arts, law and social sciences met at the joint Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) Robotics Retreat to design a set of Principles of Robotics.71 The authors made clear that their principles are “rules for robotics (not robots)”, applicable to “designers, builders and users of robots”, putting them firmly within the scope of the present chapter.
Rule | Semi-legal | General audience |
---|---|---|
1 | Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security | Robots should not be designed as weapons, except for national security reasons |
2 | Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws, fundamental rights and freedoms, including privacy | Robots should be designed and operated to comply with existing law, including privacy |
3 | Robots are products. They should be designed using processes which assure their safety and security | Robots are products: as with other products, they should be designed to be safe and secure |
4 | Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent | Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users |
5 | The person with legal responsibility for a robot should be attributed | It should be possible to find out who is responsible for any robot |
The simplicity of the project’s output is laudable. But rather like Asimov’s Laws, with brevity come hidden dangers of under-specification and over-generalisation.
For each principle, one part of the rule is aimed at a technical audience, with another more user-friendly version directed towards the general public. Though this approach may be intended to advance public understanding, great care will need to be taken to ensure that in rendering the technical rule more digestible its meaning is not changed. Otherwise, there is a risk that there will be a clash between the two norms, leading to uncertainty as to which is binding.
Rule 2 is an example of where the transposition between the two versions is somewhat incomplete. The technical rule includes the sentence “Humans, not robots, are responsible agents”, whereas that for the general audience does not. This raises the question whether that aspect of the rule is intended to be binding for all. Most likely, it was felt by the authors that questions of “responsibility” and “agency”, which are complex legal and philosophical terms, were too esoteric for a general audience. However, the authors’ failure to even attempt to describe them is problematic. The notion of providing simplified explanations breaks down if the authors omit to mention key parts of their rules in the public version.
4.3 CERNA Ethics of Robot Research
L’Alliance des Sciences et Technologies du Numérique (Allistene) is a major French academic and industry think tank focused on science and technology.73 Within Allistene, La Commission de Réflexion sur l’Éthique de la Recherche en sciences et technologies du Numérique d’Allistene (CERNA) is a sub-committee dealing with questions of ethics.74
CERNA’s recommendations addressed to researchers working on robotics include the following:
1. Maintaining control over transfers of decision-making: The researcher must consider when the operator or the user should take back control of a process (from a robot) and the roles that the robot can perform (at the expense of the human being), including in what circumstances such transfers of power should be permitted or obligatory. The researcher must also study the possibility for a human to “disengage” the autonomous functions of the robot.
2. [Decisions] Outside of the knowledge of the operator: The researcher must ensure that robot decisions are not made without the knowledge of the operator so as not to create breaks in his understanding of the situation (i.e. so that the operator does not believe that the robot is in a certain state when in fact it is in another state).
3. Influences [of Robots] on the behaviour of the operator: The researcher must be aware of the phenomena of [i] confidence bias, namely the tendency of an operator to rely on robot decisions and [ii] of moral distancing (‘Moral Buffer’) of the operator in relation to the actions of the robot.
4. Programme limitations: The researcher must be careful to evaluate the robot’s programmes of perception, interpretation and decision-making, and to clarify the limits of these powers. In particular, programmes that aim to confer moral conduct to the robot ought to be subject to such limitations.
5. [Robot] Characterisation of a situation: With respect to robot interpretation software, the researcher must evaluate how far the robot can characterise a situation correctly and discriminate between several apparently similar situations, especially if the decision of action taken by the operator or by the robot itself is based solely on this characterisation. In particular, we must evaluate how uncertainties are taken into account.
6. Predictability of the human–robot system: More generally, the researcher must analyse the predictability of the system taken as a whole, taking into account the uncertainties of interpretation and possible failures of the robot and those of the operator, and analyse all the states achievable by this system.
7. Tracing and explanations: The researcher must integrate tracing tools as soon as the robot is designed, which should enable the development of at least limited explanations addressed to robotics experts, operators or users.77
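Recommendations 1 and 2 lend themselves to a simple design pattern: the robot acts autonomously only while the operator has expressly delegated control, the operator can disengage at any moment, and every transfer of control is recorded. The sketch below, with invented class and method names, is one way a researcher might prototype this; it is an illustration of the recommendations, not CERNA's own specification.

```python
# Minimal sketch of CERNA-style control transfer and disengagement (hypothetical names).
from datetime import datetime, timezone

class SupervisedRobot:
    def __init__(self):
        self.autonomous = False
        self.control_log = []                      # every change of control is recorded

    def _record(self, event: str):
        self.control_log.append((datetime.now(timezone.utc).isoformat(), event))

    def delegate_control(self):
        self.autonomous = True
        self._record("operator delegated control to robot")

    def disengage(self):
        self.autonomous = False
        self._record("operator disengaged autonomous functions")

    def act(self, task: str) -> str:
        if not self.autonomous:
            self._record(f"task '{task}' referred to operator")
            return "awaiting operator"
        self._record(f"robot performed '{task}' autonomously")
        return "done"

robot = SupervisedRobot()
print(robot.act("fetch medication"))   # control not delegated: referred to the operator
robot.delegate_control()
print(robot.act("fetch medication"))   # robot acts; the transfer is visible in the log
robot.disengage()
```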
The CERNA recommendations are directed towards identifying issues, both moral and technical, arising from AI. This limited and modest approach is helpful, in that it seeks to identify potential problems first before charging headlong into an attempt at laying down definitive commands.
4.4 Asilomar 2017 Principles
In 1975, leading DNA researcher Paul Berg convened a conference at Asilomar Beach, California, on the dangers and potential regulation of Recombinant DNA technology.78 Around 140 people participated, including biologists, lawyers and doctors. The participants agreed principles for research, recommendations for the technology’s future use, and made declarations concerning prohibited experiments.79 The Asilomar 1975 Conference later came to be seen as a seminal moment not just in the regulation of DNA technology but also the public engagement with science.80
In January 2017, another conference was convened at Asilomar by the Future of Life Institute, a think tank which focusses on “Beneficial AI”. Much like the original Asilomar conference, Asilomar 2017 brought together more than 100 AI researchers from academia and industry, as well as specialists in economics, law, ethics and philosophy.81 The conference participants agreed 23 principles, grouped under three headings82:
1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics and social studies, such as:
   - How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
   - How can we grow our prosperity through automation whilst maintaining people’s resources and purpose?
   - How can we update our legal systems to be more fair and efficient, to keep pace with AI and to manage the risks associated with AI?
   - What set of values should AI be aligned with, and what legal and ethical status should it have?
3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust and transparency should be fostered among researchers and developers of AI.
5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
6. Safety: AI systems should be safe and secure throughout their operational lifetime and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyse and utilise that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20. Importance: Advanced AI could represent a profound change in the history of life on earth and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity rather than one state or organisation.83
The authors of the Asilomar Principles would probably admit that they need much further detail and specification if they were to form the basis for any eventual laws. However, the shortcomings of Asilomar lay not so much in the content of its proposals as in the process. The participants were hand-picked from a fairly small group of AI intelligentsia. Moreover, they were predominantly Western-based. Jeffrey Ding notes that “…out of more than 150 attendees, only one was working at a Chinese institution at the time (Andrew Ng, who has now left his role at Baidu)”.84 Another participant expressed surprise at being one of the small minority of non-native English speakers invited.85
The new Asilomar principles are a starting point. But they don’t dig into what is really at stake. And they lack the sophistication and inclusivity that are critical to responsive and responsible innovation. To be fair, the principles’ authors realize this, presenting them as ‘aspirational goals’. But within the broader context of a global society that is faced with living with the benefits and the perils of AI, they should be treated as hypotheses – the start of a conversation around responsible innovation rather than the end. They now need to be democratically tested.86
4.5 IEEE Ethically Aligned Design
The IEEE Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2 (EAD v2)87 was published in December 2017. Its authors describe EAD v2 as “the most comprehensive, crowd-sourced global treatise regarding the ethics of autonomous and intelligent systems available today”.88 The EAD papers are written by committees comprising several hundred multidisciplinary participants.89 EAD v2 was opened for public comment, with responses invited by the end of April 2018. A final version is due in 2019.
Among the recommendations made in EAD v2, grouped under its general principles, are the following:
- Human Rights
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of Autonomous and Intelligent Systems (A/IS) does not infringe upon human rights, freedoms, dignity and privacy and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.90
- Prioritise Well-Being
1. A/IS should prioritise human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point.91
- Accountability
1. Legislatures/courts should clarify issues of responsibility, culpability, liability and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist because A/IS-oriented technology and their impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS….92
- Transparency
1. Develop new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency. (The mechanisms by which transparency is provided will vary significantly, for instance:
   - for users of care or domestic robots, a why-did-you-do-that button which, when pressed, causes the robot to explain the action it just took;
   - for validation or certification agencies, the algorithms underlying the A/IS and how they have been verified; and
   - for accident investigators, secure storage of sensor and internal state data, comparable to a flight data recorder or black box.)93
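The transparency mechanisms listed above can be made concrete with a small amount of code. The sketch below, using hypothetical names and not taken from EAD v2 or any IEEE standard, combines a user-facing “why did you do that?” explanation with append-only, flight-recorder-style storage of inputs and internal state for later investigation.

```python
# Minimal sketch of a decision "flight recorder" with a why-did-you-do-that explanation
# (hypothetical names; illustrative only).
import json

class DecisionRecorder:
    def __init__(self, path: str = "decision_log.jsonl"):
        self.path = path
        self.last = None

    def record(self, inputs: dict, state: dict, action: str, reason: str):
        entry = {"inputs": inputs, "state": state, "action": action, "reason": reason}
        self.last = entry
        with open(self.path, "a") as f:            # append-only "black box" store
            f.write(json.dumps(entry) + "\n")

    def why_did_you_do_that(self) -> str:           # user-facing explanation
        if self.last is None:
            return "No action has been taken yet."
        return f"I chose to {self.last['action']} because {self.last['reason']}."

recorder = DecisionRecorder()
recorder.record(
    inputs={"obstacle_distance_m": 0.4},
    state={"speed_m_s": 0.0},
    action="stop",
    reason="an obstacle was detected within the 0.5 m safety margin",
)
print(recorder.why_did_you_do_that())
```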
EAD v2 is clearly the result of much thoughtful reflection.94 However, for the reasons given above, international standard-setting bodies remain ill-equipped to form the sole source of standards for AI. Notably, EAD v2 itself suggests at various points that national governments will need to set appropriate regulations to address issues identified therein.95
4.6 Microsoft Principles
In a 2016 article, Microsoft’s Chief Executive Satya Nadella set out six principles and goals which, he argued, should guide the development of AI:
A.I. must be designed to assist humanity: As we build more autonomous machines, we need to respect human autonomy. Collaborative robots, or co-bots, should do dangerous work like mining, thus creating a safety net and safeguards for human workers.
A.I. must be transparent: We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines. People should have an understanding of how the technology sees and analyses the world. Ethics and design go hand in hand.
A.I. must maximise efficiencies without destroying the dignity of people: It should preserve cultural commitments, empowering diversity. We need broader, deeper, and more diverse engagement of populations in the design of these systems. The tech industry should not dictate the values and virtues of this future.
A.I. must be designed for intelligent privacy—sophisticated protections that secure personal and group information in ways that earn trust.
A.I. must have algorithmic accountability so that humans can undo unintended harm. We must design these technologies for the expected and the unexpected.
A.I. must guard against bias, ensuring proper, and representative research so that the wrong heuristics cannot be used to discriminate.
Technology writer James Vincent argues that “Nadella’s goals are as full of ambiguity as Asimov’s own Three Laws. But while loopholes in the latter were there to add intrigue to short stories… the vagueness of Nadella’s principles reflect the messy business of building robots and AI that deeply affect peoples’ lives”.97
In a 2018 publication, The Future Computed: Artificial Intelligence and Its Role in Society, the Microsoft Corporation set out its official manifesto on the societal issues raised by AI. Without explicitly mentioning Nadella’s article, the Corporation echoed its content, declaring that there are: “…six principles that we believe should guide the development of AI. Specifically, AI systems should be fair, reliable and safe, private and secure, inclusive, transparent, and accountable”.98
Interestingly, in the official Microsoft Corporation list, two of Nadella’s most far-reaching and altruistic principles, “A.I. must be designed to assist humanity”, and “A.I. must maximise efficiencies without destroying the dignity of people”, were replaced with more limited, technical aims: that AI be “fair”, “inclusive” and “reliable and safe”. One is led to wonder to what extent the concerns of the Corporation’s shareholders might have had an impact on this small but significant shift.
4.7 EU Initiatives
As one of the three legislative bodies of the European Union (alongside the Council and the Commission) and the only one which is directly elected by citizens, the European Parliament plays an important role in scrutinising and enacting EU laws.99
[The European Parliament] [r]equests, on the basis of Article 225 TFEU, the Commission to submit, on the basis of Article 114 TFEU, a proposal for a directive on civil law rules on robotics, following the recommendations set out in the Annex hereto101;
The Resolution also sets out ethical rules addressed to the designers of robots, including the following:
– You should take into account the European values of dignity, autonomy and self-determination, freedom and justice before, during and after the process of design, development and delivery of such technologies including the need not to harm, injure, deceive or exploit (vulnerable) users.
– You should introduce trustworthy system design principles across all aspects of a robot’s operation, for both hardware and software design, and for any data processing on or off the platform for security purposes.
– You should introduce privacy by design features so as to ensure that private information is kept secure and only used appropriately.
– You should integrate obvious opt-out mechanisms (kill switches) that should be consistent with reasonable design objectives.
– You should ensure that a robot operates in a way that is in accordance with local, national and international ethical and legal principles.
– You should ensure that the robot’s decision-making steps are amenable to reconstruction and traceability.
– You should ensure that maximal transparency is required in the programming of robotic systems, as well as predictability of robotic behaviour.
– You should analyse the predictability of a human-robot system by considering uncertainty in interpretation and action and possible robotic or human failures.
– You should develop tracing tools at the robot’s design stage. These tools will facilitate accounting and explanation of robotic behaviour, even if limited, at the various levels intended for experts, operators and users.
– You should draw up design and evaluation protocols and join with potential users and stakeholders when evaluating the benefits and risks of robotics, including cognitive, psychological and environmental ones.
– You should ensure that robots are identifiable as robots when interacting with humans.
– You should safeguard the safety and health of those interacting and coming in touch with robotics, given that robots as products should be designed using processes which ensure their safety and security. A robotics engineer must preserve human wellbeing while also respecting human rights and may not deploy a robot without safeguarding the safety, efficacy and reversibility of the operation of the system.
– You should obtain a positive opinion from a Research Ethics Committee before testing a robot in a real environment or involving humans in its design and development procedures.102
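The “kill switch” requirement above is, at its simplest, a stop signal that the robot’s control loop must check on every cycle. The sketch below is a minimal illustration of that pattern, with invented names; it is not taken from the Resolution or any particular robot platform.

```python
# Minimal sketch of an opt-out ("kill switch") checked on every control cycle.
import threading
import time

kill_switch = threading.Event()      # could be wired to a physical button or remote command

def control_loop():
    while not kill_switch.is_set():
        # one cycle of sensing, planning and acting would go here
        time.sleep(0.1)
    print("kill switch engaged: actuators halted, robot placed in a safe state")

worker = threading.Thread(target=control_loop)
worker.start()
time.sleep(0.5)
kill_switch.set()                    # the user or operator opts out
worker.join()
```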
The Resolution further proposes rules addressed to the users of robots:
– You are permitted to make use of a robot without risk or fear of physical or psychological harm.
– You should have the right to expect a robot to perform any task for which it has been explicitly designed.
– You should be aware that any robot may have perceptual, cognitive and actuation limitations.
– You should respect human frailty, both physical and psychological, and the emotional needs of humans.
– You should take the privacy rights of individuals into consideration, including the deactivation of video monitors during intimate procedures.
– You are not permitted to collect, use or disclose personal information without the explicit consent of the data subject.
– You are not permitted to use a robot in any way that contravenes ethical or legal principles and standards.
– You are not permitted to modify any robot to enable it to function as a weapon.103
It remains to be seen though whether and to what extent the European Parliament’s ambitious proposals will be adopted in legislative proposals by the Commission.
4.8 Japanese Initiatives
Japan has proposed the following draft principles for the research and development of AI:
1) Principle of collaboration—Developers should pay attention to the interconnectivity and interoperability of AI systems.
2) Principle of transparency —Developers should pay attention to the verifiability of inputs/outputs of AI systems and the explainability of their judgments.
3) Principle of controllability—Developers should pay attention to the controllability of AI systems.
4) Principle of safety—Developers should take it into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.
5) Principle of security—Developers should pay attention to the security of AI systems.
6) Principle of privacy—Developers should take it into consideration that AI systems will not infringe the privacy of users or third parties.
7) Principle of ethics—Developers should respect human dignity and individual autonomy in R&D of AI systems.
8) Principle of user assistance—Developers should take it into consideration that AI systems will support users and make it possible to give them opportunities for choice in appropriate manners.
9) Principle of accountability—Developers should make efforts to fulfill their accountability to stakeholders including users of AI systems.105
Japan emphasised that the above principles were intended to be treated as soft law, but with a view to “accelerate the participation of multistakeholders involved in R&D and utilization of AI… at both national and international levels, in the discussions towards establishing ‘AI R&D Guidelines’ and ‘AI Utilization Guidelines’”.106
Non-governmental groups in Japan have also been active: the Japanese Society for Artificial Intelligence proposed Ethical Guidelines for an Artificial Intelligence Society in February 2017, aimed at its members.107 Fumio Shimpo, a member of the Japanese Government’s Cabinet Office Advisory Board, has proposed his own Eight Principles of the Laws of Robots.108
4.9 Chinese Initiatives
In furtherance of China’s Next Generation Artificial Intelligence Development Plan,109 and as mentioned in Chapter 6, in January 2018 a division of China’s Ministry of Industry and Information Technology released a 98-page White Paper on AI Standardization (the White Paper), the contents of which comprise China’s most comprehensive analysis to date of the ethical challenges raised by AI.110
Because the achieved goals of artificial intelligence technology are influenced by its initial settings, the goal of artificial intelligence design must be to ensure that the design goals of artificial intelligence are consistent with the interests and ethics of most human beings. So even in facing different environments in the decision-making process, artificial intelligence can make relatively safe decisions.116
1) Define the scope of needed artificial intelligence research. Artificial intelligence has turned from laboratory research to practical systems in various fields of application, taking on a fast-paced growth trend. This needs to be defined through a unified terminology, clarifying the core concepts of the connotation, extension and demand of artificial intelligence, and guiding the industry to correctly recognize and understand artificial intelligence technology, making it easier for the widespread use of artificial intelligence technology;
2) Describe the framework of the artificial intelligence system. When faced with the functions and implementation of artificial intelligence systems, users and developers generally regard artificial intelligence systems as a “black box”, but it is necessary to enhance the transparency of artificial intelligence systems through technical framework specifications. Due to the wide range of applications of artificial intelligence systems, it may be very difficult to provide a generic artificial intelligence framework. A more realistic approach is to give particular frameworks in particular scopes and problems…;
3) Evaluate the intelligence level of the artificial intelligence system. Differentiating an artificial intelligence system by level of intelligence has always been controversial, and providing a benchmark to measure its intelligence level is a difficult and challenging task…;
4) Promote the interoperability of artificial intelligence systems. Artificial intelligence systems and their components have a certain complexity, and different application scenarios involve different systems and components. System-to-system, component-to-component interaction and sharing of information needs to be ensured through interoperability. Artificial intelligence interoperability also involves interoperability between different smart module products to achieve data interoperability, that is, different intelligent products require standardized interfaces…;
5) Conduct assessments of artificial intelligence products. As an industrial product, an artificial intelligence system needs to be evaluated in terms of functions, performance, safety, compatibility, interoperability, etc. in order to ensure the quality and availability of the product and provide safeguards for the industry’s sustainable development…. According to standardized procedures and methods, scientific assessment results can be obtained through measurable indicators and quantifiable evaluation systems, and at the same time, coordinate training, promotion, and other means to promote the implementation of standards;
6) Begin standardization of key technologies. For key technologies that have already formed a model and are widely used, they should be standardized in a timely manner to prevent the fragmentation and independence of versions and ensure interoperability and continuity. For example, the user data bound to a deep learning framework should be clearly defined by the neural network’s data representation method and compression algorithms, in order to ensure data exchange while not being bound by the platform, and protect the user’s rights to the data….;
7) Ensure safety and ethics. Artificial intelligence collects a large amount of personal data, biological data, and data on other characteristics from various devices, applications, and networks. It is not necessarily possible to organize and manage properly and take appropriate privacy protection measures for these data from the very start of system design. Artificial intelligence systems that have a direct impact on human security and human life may pose a threat to humans. Before such artificial intelligence systems are widely used, they must be standardized and evaluated to ensure safety;
8) Standardization of the features of industry application. Apart from common technologies, the implementation of artificial intelligence in specific industries still has individualized needs and technical characteristics….117
The issues of safety, ethics and privacy covered in this section are challenges to the development of artificial intelligence. Safety is a prerequisite for sustainable technology. The development of technology poses risks to social trust. How to increase social trust and let the development of technology follow ethical requirements, especially, is an urgent problem to be solved to ensure that privacy will not be violated.119
This admission illustrates how one of the world’s most secure and powerful governments is nonetheless taking into account the desiderata identified earlier in this chapter concerning the need for legitimacy when regulating for new technology.
As explained in Chapter 6, the proposals in China’s standardisation White Paper are part of a coordinated effort by its Government to become a leader in both AI technology and its regulation. China’s findings and areas of priority in AI regulations do not differ radically from those suggested elsewhere, but the fact that such proposals are coming from an official, state-sanctioned source is significant.
5 Themes and Trends
In the light of the numerous proposals set out above, it is possible to draw together some broad themes and commonalities, as summarised in the table below. The four most common themes to emerge from this brief survey are the need for rules as to who is liable if AI causes harm, safety in the design of AI, transparency and explainability, and a requirement for AI to operate consistently with established human values.
Rules | Control of “Killer robots” | Safety in Design | Rules for attribution/liability | Explainability/Transparency | Benefits shared with all humanity | Act consistently with human rights | Ability to reassert human control | Privacy | Unbiased
---|---|---|---|---|---|---|---|---|---
EPSRC/AHRC | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | | |
CERNA | ✔ | ✔ | | | | | | |
Asilomar | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
IEEE EAD v2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Satya Nadella/Microsoft | ✔ | ✔ | ✔ | ✔ (Nadella but not Microsoft) | ✔ | ✔ | ✔ | |
European Parliament Resolution | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | | |
Japan Ministry of Communications | ✔ | ✔ | ✔ | ✔ | ✔ | | | |
China White Paper | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
6 Licensing and Education
Once we have arrived at a set of rules, the final question is how they can be implemented and enforced. In addition to the national and international regulatory bodies discussed in the previous chapter, another important aspect is the creation of structures to harmonise and improve the quality of education, training and professional standards for those involved in creating AI.
6.1 Historic Guilds
At least as far back as the late Roman period, skilled artisans and craftsmen formed associations which came to be known as guilds. The guilds imposed controls on the provision of services and the production of various goods, both by upholding standards and by acting as cartels which restricted competition in their local area.120 Stripping away the anti-competitive aspects, which are now largely precluded by antitrust laws, the guilds played an important role in training, quality control and assurance long before these standards were enshrined in national law.121
Guilds were not just a set of internal rules: they were a way of life, a self-contained social system with customs, hierarchies and guiding norms. As economists Roberta Dessi and Sheilagh Ogilvie note, “many economists… regard the merchant guild as an exemplar of social capital: these guilds fostered shared norms, transmitted information effectively, punished deviants swiftly, and organized collective action efficiently”.122 Guilds’ trade-restricting function may have been curtailed, but their standard-setting role continues today in the form of modern professional associations, sometimes referred to simply as “the professions”.
6.2 Modern Professions
Modern professions are generally said to share four elements: (1) they have specialist knowledge; (2) their admission depends on credentials; (3) their activities are regulated; and (4) they are bound by a common set of values.123
These four elements are interlocking: the initial (and sometimes ongoing) training inculcates a sense of common professional standards among the cohort of participants. The “common set of values” also provides a sense of shared identity for those engaged in the profession. The best-known example of such a principle is the Hippocratic Oath taken by physicians, first recorded between the fifth and third centuries BC.124 Though no longer recited in its original form (beginning as it does with an invocation of various Ancient Greek gods), many of its lessons remain a core part of the principles imparted to medical professionals as part of their training: confidentiality, abstaining from corruption and always acting for the benefit of the sick.125
In terms of regulation, detailed rules usually govern day-to-day practice in a modern profession. Finally, a disciplinary system acts as the stick of enforcement: it signals expected standards to other participants, deters conduct which falls below the specified guidelines, and contributes to a sense of shared pride in the integrity of the profession. This provides professionals with security in the knowledge that they are competing for business and collaborating only with other individuals who will sustain the same ethical and quality standards. It also benefits the public, who are assured of a certain level of competence, expertise and probity when they deal with a member of a regulated profession.
Several features indicate whether a field calls for professional regulation.
Technical complexity: The most heavily regulated professions are often those which are particularly impenetrable to the average person. Fields such as the law, medicine or airline piloting are often difficult for a non-specialist to assess. Consequently, the public have little option but to trust the opinions of practitioners, which they are usually unable to second-guess.
Public interaction: The more engagement that members of the public have with a profession directly, the more it needs internal regulatory standards. The significance of a high level of technical knowledge is relative to the knowledge and training of those who interact most directly with the profession, as well as to any established regulatory systems for its control. Nuclear physicists have an extremely high degree of technical knowledge, but because they work alongside various other professionals who are able to check and verify their output, imposing professional regulatory standards on those physicists is less pressing. By contrast, a profession whose practitioners deal directly with members of the public, without other bodies of professionals acting as checks and balances, is much more in need of regulation. Doctors and lawyers are good examples of the latter.
Societal importance: The more fundamental a given profession is, whether from a commercial or social perspective, the more essential it will be for regulation to be put in place. Thus, musical instrument makers might fulfil the above two criteria, but it would be difficult to describe their role as being of such vital importance that it demands industry regulation. If an instrument maker creates a defective violin, then the violinist (and her audience) might be disappointed, but if a medical professional acts negligently, the consequences could be fatal.
The development of AI fulfils all of these requirements.
6.3 A Hippocratic Oath for AI Professionals
In computer science, will concerns about the impact of AI mean that the study of ethics will become a requirement for computer programmers and researchers? We believe that’s a safe bet. Could we see a Hippocratic Oath for coders like we have for doctors? That could make sense. We’ll all need to learn together and with a strong commitment to broad societal responsibility. Ultimately the question is not only what computers can do. It’s what computers should do.126
Writing of Google’s informal corporate motto, “Don’t be evil”, the company’s former executives Eric Schmidt and Jonathan Rosenberg claim that it:
…genuinely expresses a company value and aspiration that is deeply felt by employees. But “Don’t be evil” is mainly another way to empower employees… Googlers do regularly check their moral compass when making decisions.127
Whether or not the above is true is a matter of some debate,128 but it is nonetheless significant that one of the major technology giants has consciously limited itself through the adoption of such an overarching principle. Schmidt and Rosenberg described it as “a cultural lodestar that shines over all management layers, product plans and office politics”.129
Google’s stance was tested in 2018, when thousands of its employees signed an open letter protesting against the company’s involvement in “Project Maven”, a US Department of Defense programme which applied AI to the analysis of drone footage. The letter argued:
The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.132
The disgruntled Google employees prevailed. In June 2018, Google announced that it had abandoned Project Maven.133 Around the same time, Google released a set of ethical principles, which included that it would not design or deploy AI in “[w]eapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.134
A motto, oath or principle is a useful starting point, but to achieve the more complex aims of the various ethical codes set out above, professional regulation will need to include mechanisms for standard-setting, training and enforcement. The final sections of this chapter expand on this idea.
6.4 A Global Professional Body
As with global regulations (discussed in Chapter 6), a single worldwide body regulating AI professionals would encourage the maintenance of standards and avoid creating costly barriers between countries.135 If professional regulation is undertaken only at a national level, this may lead to significant barriers to the movement of services across borders. For instance, physicians in the USA must obtain a US Medical License in order to practise,136 meaning that foreign doctors with equal qualifications may be frozen out of practising for many years.137
An EU law on the recognition of professional qualifications provides for the mutual recognition of qualifications in certain industries as between EU Member States.138 However, the system is cumbersome and contains many carve-outs designed to placate local interest groups. Rather than resorting to these byzantine workarounds, it would be far better to begin with a single standard applicable across all countries.
6.5 AI Auditors
As an alternative, or perhaps in addition, to regulating designers and operators directly, we might create a group of “AI auditors”. In the same way as companies and charities in many countries are required to be audited on an annual (or even more frequent) basis by professional financial auditors, organisations using AI might be required to submit their algorithms to professional auditors who could independently assess their compliance with an external set of principles and values. AI inspection or auditing may itself become a profession of the future, with its own worldwide standards and disciplinary processes (much like, for instance, the International Federation of Accountants, which maintains the International Standards on Auditing).
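To make the idea concrete, the sketch below illustrates one simple, hypothetical check that an AI auditor might automate: comparing a model’s rate of favourable decisions across groups against an agreed threshold. The sample data, the threshold and the pass/fail rule are illustrative assumptions only, and are not drawn from any existing auditing standard.

```python
# A minimal sketch of one automated check an AI auditor might run: comparing a
# model's approval rates across a protected attribute against an agreed threshold.
# The data, groups and threshold below are hypothetical illustrations.

def approval_rate(predictions, groups, target_group):
    """Share of positive predictions received by members of target_group."""
    relevant = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(relevant) / len(relevant) if relevant else 0.0

def demographic_parity_gap(predictions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

if __name__ == "__main__":
    # Hypothetical audit sample: 1 = application approved, 0 = refused.
    predictions = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
    groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    MAX_PERMITTED_GAP = 0.2  # threshold assumed to be set by the audit standard
    gap = demographic_parity_gap(predictions, groups)
    verdict = "PASS" if gap <= MAX_PERMITTED_GAP else "FLAG FOR REVIEW"
    print(f"Demographic parity gap: {gap:.2f} -> {verdict}")
```

A real audit would of course go further, covering accuracy, robustness, data provenance and documentation, but even simple automated checks of this kind could be standardised and repeated across organisations.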
Depending on the dangers involved, AI auditing may not need to apply to every instance of AI use, just as some professional regulation applies only where an activity is carried out in a commercial or public setting. For instance, people are entitled to cook food and eat what they cook, including with friends and family, but when a person cooks and sells food for profit, many governments require independent inspections to be carried out.
6.6 Objections and Responses
6.6.1 “Who Is an AI Professional?”
In order to regulate, we need to know who we are regulating. There are many roles in computer science, including programmers, engineers, analysts, software engineers and data scientists. New ones are constantly being created as the field develops. Further, none of these are terms of art, meaning that an “engineer” in one organisation might be a “programmer” in another. We would adopt the following definition, which focusses on functions rather than labels: “professional regulation should include all those whose work consistently involves the design, implementation, and manipulation of AI systems and applications”.
The meaning of the term “consistently” could vary depending on the circumstances, but engaging in the above tasks at least once a week would likely be sufficient in most cases. Drawing a line between professionals and lay-users may become increasingly difficult as AI systems become easier for non-experts to manipulate. A data manager might design a form to collect data, which is then fed into a primary algorithm (for instance, logistic regression) and adapted through a modular system for some application by another employee of the organisation in question. Depending on the consistency of their activities, both could be deemed AI professionals, as would the engineer who designed the AI system in question. The same pattern may well be repeated, particularly in circumstances where AI is trained in situ rather than in a laboratory.
In order to avoid uncertainty, the professional regulatory organisation could publish guidance and maintain a helpline or web-based chatroom for individuals and organisations unsure of whether they are covered. Matters of cost and proportionality will, of course, come into play when determining who should be regulated but, all other things being equal, the greater the capacity for harm caused by the use of an AI system, the more cost in terms of training time will be justified.
AI professional regulation need not be an all-or-nothing exercise. It is suggested below that lay-users of AI (which might eventually include the majority of a population) be given a minimum level of basic training.139 Even within the class of professionals, there might be a system of different classifications of license: occasional operators of non-safety-critical AI systems might only be required to hold an entry-level qualification, whereas operators of the most dangerous and complex systems might be required to undergo far more extensive training. One example of such a graduated system is that operated by the UK Financial Conduct Authority (FCA), which authorises or approves individuals to carry out certain controlled functions. The authorisations and types of training required vary depending on the activity in question, for instance advising clients or trading derivatives.
6.6.2 “Professional Regulation Will Not Stop Wrongdoing”
Just as there are still rogue doctors and lawyers who negligently, recklessly or even deliberately break the rules, making AI engineering a regulated profession will not avoid all misfeasance and malfeasance.
The urge to break the rules can occur at a corporate as well as an individual level. Governments and/or companies may lean on AI professionals to create technology which serves national or corporate interests, and in so doing supplants whatever professional regulations are otherwise imposed on the sector. One notorious example is that of the Nazi physicians who, especially under Josef Mengele, carried out horrific experiments on Jewish and other prisoners, notwithstanding their supposed fealty to the Hippocratic Oath.140
Nonetheless, there are reasons to be hopeful that professional regulation will have some effect on AI. On an individual level, professional standards offer the chance to inculcate in those regulated a system of norms superior even to political orders. Where a political order would compromise the professional code, especially one of its overarching norms, this could become a source of conscientious objection for the individuals in question. So long as the system of professional norms gives individuals a reason to question orders which violate it, it will have had a positive impact.
Psychiatrist Dr. Anatoly Koryagin campaigned, first within the Soviet Union and later as an émigré, against the imposition of regulations for doctors which defined mental illness as “disrupting social order or infringing the rules of the socialist community”.141 Before escaping the Soviet Union, Koryagin was imprisoned and tortured, but he refused to yield his professional standards to political exigencies. As the New York Times put it: “Dr. Koryagin’s crime was to believe in the Hippocratic Oath”.142
6.6.3 “There Are Too Many AI Professionals to Regulate”
A further related argument against making AI a regulated profession is that there are simply too many AI professionals, even under the description above. The argument runs that it would be impossible in practical terms to secure the training and enforcement of such a large and diverse group stretching across the world.
However, though it may be growing relatively fast, the number of AI professionals should not be overstated: a recent study by the Chinese company Tencent estimated that there were just 300,000 AI researchers and practitioners worldwide at the end of 2017, of whom two-thirds were in employment and a further one-third were studying.143 Many AI professionals are clustered around a fairly small number of universities, private sector companies or government programmes (and occasionally overlap across all three). Consequently, these three groups of institutions operate as bottlenecks through which AI researchers must pass, either to acquire their initial training or to gain access to the funding and wider resources necessary to progress their research. Provided that professionalism can be incorporated into one or more of these gateways, its coverage of the industry will be considerable.
6.6.4 “Professional Regulation Would Stifle Creativity”
Critics might also argue that imposing a code of professional ethics would hold back developments. Necessarily, if an ethical code is to have any effect it will mean that certain practices are controlled or prohibited. The question then is whether this is a worthwhile trade-off.
Constraints are already accepted in other areas of scientific research. Many of the proposals made at the 1975 Asilomar Conference on recombinant DNA research have since been adopted as a matter of law or professional practice.144 In many countries, certain types of experiments on humans and animals are prohibited, or at least require special licenses. Arguably, all of these constraints stand in the way of scientific progress, but that is a moral balance which society is willing to strike.
Adopting standards of professional regulation does not require complete homogeneity in training, which would dampen innovation unnecessarily. In order to become publicly accredited, training courses for AI designers ought to fulfil certain minimum criteria, much as is already the case with degrees in professions such as law and medicine. One example of a minimum criterion for accreditation would be a compulsory module on ethics as part of an AI course. In fact, many programming curricula already cover ethics as a specific topic.145
… we still need to develop and adopt clear principles to guide the people building, using and applying AI systems…Otherwise people may not fully trust AI systems. And if people don’t trust AI systems, they will be less likely to contribute to the development of such systems and to use them.146
7 Regulating the Public: A Driver’s License for AI
7.1 Automatic for the People
Every day, members of the public take control of a powerful machine capable of doing great harm both to its users and to others: the car. In addition to the general civil law (a driver who crashes can be liable for negligence) and some specialised criminal laws (such as a dedicated offence in some countries of causing death by dangerous driving),147 most countries also require drivers to be licensed. Similar licensing regimes are used in various countries to regulate the public’s engagement in activities such as flying aeroplanes and owning guns.
The same observations apply to AI. As it becomes more widely used, and utilities such as the AI software library TensorFlow and the machine learning simplification tool AutoML become more available and easier to operate, it is possible that manipulating AI will become as easy and natural as training a dog. Dogs may be trained to fetch and sit still, but they can also be trained to attack and kill. Like owning a gun, driving a car or flying a plane, AI has the potential to be helpful, neutral or harmful. Much of that effect will depend on human input.
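By way of illustration only, the sketch below shows how little code such high-level tools now require: a few lines of TensorFlow’s Keras API are enough to train a small classifier. The data and model are synthetic and purely hypothetical.

```python
# A minimal sketch of how accessible high-level AI tooling has become: a few
# lines of TensorFlow/Keras suffice to train a small classifier. The data is
# randomly generated and purely illustrative.
import numpy as np
import tensorflow as tf

features = np.random.rand(200, 4)                  # 200 synthetic examples, 4 features each
labels = (features.sum(axis=1) > 2.0).astype(int)  # toy binary label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, verbose=0)
print("Training accuracy:", model.evaluate(features, labels, verbose=0)[1])
```

The ease of such tools is precisely why questions of training and licensing arise: the same accessibility that lowers the barrier to beneficial uses also lowers the barrier to careless or malicious ones.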
7.2 How Might a Public AI License Function?
There is a threshold question as to whom a citizen code of AI ethics should cover. In short, the answer is that people ought to be required to adhere to certain minimum moral and legal standards whenever they are in a position to exert some causal influence over the choices made by AI. This might range from hobbyist AI engineers undertaking advanced programming to mere users of products and services containing AI, whose interactions with that AI will shape its future behaviour.
Substantive requirements for vehicle driving licenses often include compulsory training courses and assessments, both practical and theory-based. Ongoing periodic assessments might also be required. Within licensing, there could also be a number of categories: a license to drive a car might not qualify a person to drive an 18-wheel truck. The European Parliament’s draft rules for users of AI provide one example of how such a consumer-focused code might look at a high level.
As with professional AI engineers, there may well be a number of bottlenecks through which members of the public are likely to pass, each of which provides an opportunity for AI skills and ethics to be taught. The first such bottleneck is the education system, which in most countries is mandatory at least up to a certain age. As AI grows in importance, the ethics and civic values associated with its use and design might be added to compulsory high school courses. Secondly, at least in countries which adopt compulsory military or community service as a civil rite of passage for young adults, AI ethics might again be taught at that stage. Thirdly, for more advanced amateur programmers and AI engineers, there are opportunities to impart ethical values and training via open source programming resources such as TensorFlow.148
Though amateurs may soon be able to manipulate and shape ever more complex AI, this does not necessarily mean that programmes created by amateurs will achieve global uptake. Just as we would be more likely to trust medical advice from a registered doctor and legal advice from a qualified attorney, companies and other consumers are more likely to trust AI which has been created by a licensed professional. Even though anyone with the right equipment and a little knowledge can ferment and distil their own alcohol, in many countries only licensed producers are permitted to sell it commercially.149 That might not stop people breaking the law and providing unregulated alcohol for money or for free, but most people would hesitate before sampling “moonshine” spirits from an unlicensed source. In order to avoid a market becoming tainted with unregulated AI programmes, a digital accreditation system, similar in effect to the “Kitemark” quality assurance logo used by the British Standards Institution, might be used to help users of AI systems determine whether those systems come from a reputable or licensed source. Distributed ledger technology could potentially support such quality assurance by providing an immutable record of a programme’s origin and subsequent changes.
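As an illustration of the general idea only (not of any particular distributed ledger platform), the sketch below shows how provenance entries for an AI programme might be hash-chained, so that any later tampering with the recorded history becomes detectable. The entry fields and author identifiers are hypothetical.

```python
# A minimal sketch of a hash-chained provenance record for an AI programme.
# Each record's hash covers the previous record, so editing an earlier entry
# breaks every subsequent link. Illustrative only; not a production ledger.
import hashlib
import json

def add_entry(chain, entry):
    """Append an entry whose hash covers the previous record, forming a chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"entry": entry, "prev_hash": prev_hash}, sort_keys=True)
    chain.append({
        "entry": entry,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(chain):
    """Recompute every hash; returns False if any entry has been altered."""
    prev_hash = "0" * 64
    for record in chain:
        payload = json.dumps({"entry": record["entry"], "prev_hash": prev_hash}, sort_keys=True)
        if record["prev_hash"] != prev_hash or \
           record["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_entry(chain, {"event": "model released", "author": "licensed-dev-001", "version": "1.0"})
add_entry(chain, {"event": "model retrained", "author": "licensed-dev-002", "version": "1.1"})
print("History intact:", verify(chain))       # True

chain[0]["entry"]["author"] = "someone-else"  # simulated tampering
print("History intact:", verify(chain))       # False
```

In practice such records might be shared among many parties rather than held by a single organisation, which is where distributed ledger technology would add resilience.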
In many countries, it is illegal to drive without a license, but it is also illegal to drive without adequate insurance to cover damage which the driver might cause to third parties. The two systems are interlinked: insurers will not be willing to issue insurance to drivers who do not possess a valid license. Those who have had to make claims on their insurance for causing damage to themselves or others are likely to pay higher insurance premiums, adding an economic incentive to drive safely.
A similar model of compulsory insurance may one day be adopted for members of the public who use, design and influence AI—not just for cars but for all types of AI use. There may also be grounds for limiting the extent to which minors are able to use AI outside of parental supervision; again, this is no different from driving, gun ownership and many other potentially dangerous activities.
8 Conclusions on Controlling the Creators
Once developed, social norms are difficult to shift. In Europe, states have monopolised the legitimate use of force for several hundred years150—with the corollary that private ownership of weapons has been tightly restricted. As a result, most European states only permit individuals to own weapons subject to a rigorous licensing process.151 In the UK, almost all handgun ownership was prohibited following an infamous massacre of schoolchildren in 1996.152 There was widespread public support for this change, and no serious attempts have been made to challenge it since.153 By contrast, in the USA the right to bear arms was enshrined in the Second Amendment to the Constitution, adopted in 1791 as part of the Bill of Rights. Large parts of the population consider the ability to purchase and own weapons with minimal constraints as one of their basic Constitutional and cultural rights. In consequence, gun control remains a highly politicised issue and mass-shootings continue.
Most AI systems are certainly not as harmful as guns, and the foregoing paragraph is not intended to suggest otherwise. Chapter 6 expressed some concerns as to whether the public might reject AI, but it is equally possible that matters will go the other way, and that people will embrace AI to such an extent that they become resistant to its regulation. The above examples show that there is a strong case for imposing restraints at an early stage, before social norms protecting the untrammelled use of AI have crystallised.
Setting and enforcing ethical constraints for AI design and use are not just problems for one area of society: they challenge all parts. These issues require a multifaceted response, which should involve governments, stakeholders, industry, academics and citizens. All of these different groups should contribute to the grand bargain: a right to participate in designing ethical controls, in exchange for themselves being regulated. Only this way can we create a culture of responsible AI use before dangerous habits develop.