1 The Missing Link?
In October 2017, Saudi Arabia granted “citizenship” to a humanoid robot named Sophia.1 This move was derided by commentators as a cynical media stunt and a particularly hypocritical act for a country which grants only limited rights to human women.2 Even so, the episode is significant because it was the first time that a country purported to grant a robot or AI entity any form of legal personality in its own right. Just days after the Saudi announcement, Tokyo’s Shibuya district announced that an AI system had been granted “residency”.3
In a seminal 1992 article, Lawrence B. Solum proposed a form of legal personhood4 for AI.5 When that paper was written, the world was still in the midst of the second “AI Winter”: a period in which setbacks in AI development, coupled with a lack of funding, led to relatively slow growth.6 For the following two decades, Solum’s ideas remained a mere thought experiment. Given the recent developments in the capabilities of AI and its growing use, it is now an appropriate time to reconsider this proposal.7
More recently, in its 2017 Resolution on Civil Law Rules on Robotics, the European Parliament called on the European Commission to consider:
…creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.9
Chapter 3 demonstrated that present laws will struggle to assign responsibility for AI. Chapter 4 showed that there may be moral arguments for AI to be given some rights. The present chapter considers whether an elegant solution to one or both of these issues might be to grant AI legal personality. It asks, first, whether legal personality for AI would be possible and, secondly, whether it would be desirable. The chapter closes by considering what further questions would need to be resolved if AI were to be granted this status.
2 Is Legal Personality for AI Possible?
2.1 A Bundle of Rights and Obligations
In the 1819 case of Trustees of Dartmouth College v. Woodward, US Chief Justice John Marshall described the corporation in the following terms:
A corporation is an artificial being, invisible, intangible, and existing only in contemplation of law. Being the mere creature of law, it possesses only those properties which the charter of its creation confers upon it, either expressly or as incidental to its very existence. These are such as are supposed best calculated to effect the object for which it was created. Among the most important are immortality, and, if the expression may be allowed, individuality; properties by which a perpetual succession of many persons are considered as the same, and may act as a single individual. They enable a corporation to manage its own affairs and to hold property without the perplexing intricacies, the hazardous and endless necessity of perpetual conveyances for the purpose of transmitting it from hand to hand. It is chiefly for the purpose of clothing bodies of men, in succession, with these qualities and capacities that corporations were invented and are in use.11
Instead of being a single notion, legal personality is a technical label for a bundle of rights and responsibilities.12 Joanna Bryson,13 Mihalis Diamantis and Thomas Grant write that legal persons are “fictive, divisible, and not necessarily accountable”.14 They observe “legal personality is an artifice”, with the effect that “[l]egal people need not possess all the same rights and obligations, even within the same system”.15
As shown in Chapter 4, legal protections for humans have changed over time and continue to shift. By way of brief examples: 2000 years ago, in Roman law the paterfamilias or head of a family was the subject of legal rights and obligations on behalf of the whole household, including his wife and children16; 200 years ago, slaves were considered to be non-persons and only subsequently granted partial rights; even today, women continue to be denied full civil rights in various legal systems across the world.17
The rights and obligations of non-human legal persons can also undergo development. The US Supreme Court recently (and controversially) extended constitutional freedom of speech protections to companies, enabling them to play a greater role in election campaigns.18 There remain limits to the protections we give to legal persons compared to natural ones: in an earlier case, the US Supreme Court denied that corporations had the same right to avoid self-incrimination enjoyed by human citizens.19
2.2 Legal “Housing” of AI Within Existing Corporate Structures
Hermit crabs are known for their ability to find empty mollusc shells and adopt these as their home. Some legal scholars have suggested in effect that an AI entity might do the same with existing legal structures.
The US legal scholar and computer programmer Shawn Bayern has argued that the US law of limited liability companies (LLCs) could be utilised to bestow legal personality on any type of autonomous system.20 Bayern’s proposal seeks to take advantage of an apparent loophole under both New York’s LLC Law and the Revised Uniform LLC Act. Bayern considers it would be possible to create an LLC whose operating agreement placed it under the control of an AI system and then have every other member of the LLC withdraw, leaving the system unsupervised by humans. Beguiling though Bayern’s hypothesis might be, Matthew Scherer has launched a convincing counter-argument to the effect that the relevant statutes would not be construed by courts as leaving power to control the LLC in the hands of AI because this would be contrary to legislative intention.21
Given that at present most AI is “narrow” in nature and fairly limited in its range of abilities, any autonomous system granted legal personality through Bayern’s method is likely to lack the basic acumen necessary to take many business decisions. Even if an AI entity could acquire personality, it might be subject to the same default laws as apply in the case of an LLC whose single human member suddenly became mentally incapacitated, such that they were no longer fit to manage the entity. The relevant LLC laws have not yet been tested on this point, so it remains unclear whether Bayern’s proposal would be endorsed by the courts.
Nonetheless, Bayern’s paper sparked discussion in several other countries as to whether their laws would operate in a similar manner to the US LLC provisions he describes. Along with Bayern, a group of legal experts from the UK, Switzerland and Germany wrote a further paper which considered how the legal systems in those countries might achieve the housing of AI within existing corporate structures.22 Their conclusions were that although under UK law it might be possible to house an unsupervised AI within a legal entity, German and Swiss law did not easily accommodate AI legal personality without any other controlling party. Even so, Thomas Burri, one of the authors of that study, subsequently wrote: “…given the present capacities of existing forms of legal entities, companies of various kinds can serve as a mechanism through which autonomous systems might engage with the legal system”.23
It is important to distinguish between the assets of a company and its corporate form. Another way of putting this is to say that the corporate form is the container, and its assets are the contents. It is certainly already possible for a person (whether human or corporate) to own the rights to an AI system, whether through proprietorship of its software or otherwise. Indeed, such an AI system might be the only asset of that company. However, this does not mean that the AI itself has legal personality; in the same way, if a company had as its sole asset a racehorse, this would not mean that the horse was a legal person in and of itself. The problems raised in Chapter 3 in terms of assigning responsibility for AI would not be solved by creating an entity whose sole asset was the AI, because there would still be difficulty in ascribing the AI’s actions to its owner.
Bayern aims to jump across the gap between container and contents by replacing the (existing) person in control of an LLC with an AI entity. However, it is questionable whether the AI entity in control of the LLC would be treated as having all of the LLC’s liabilities. Decision-making on behalf of an entity is not the same as having the same legal personality as that entity. A human controller of an LLC does not thereby become personally liable for the LLC’s debts, and, presumably, neither would the AI.
2.3 New Legal Persons
Notwithstanding the above, debates on whether existing corporate law can be stretched to accommodate AI are ultimately something of a sideshow. Regardless of whether legal systems could at present allow legal personality for AI, the other option is for countries to create new bespoke corporate structures.
[U]ltimately, the autonomy of robots raises the question of their nature in the light of the existing legal categories or whether a new category should be created, with its own specific features and implications.24
When a state grants legal personality to an entity, the new legal person is recognised as a “national” of that state.25 Under EU law, the power to create legal personality is vested in Member States, which are entitled to “lay down the conditions for the acquisition and loss of nationality”.26 Accordingly, it is a matter of national sovereignty which entities a state decides to grant personality. As a general matter, the freedom to award nationality is one of the basic powers of all sovereign states.27 Even if the laws of nationality are not invoked, a state is free to do anything which is not prohibited under public international law28; there are no international rules prohibiting legal personality for AI.
The extent of countries’ freedom to recognise legal persons is illustrated by the breadth of entities accorded this status around the world—from temples in India29 to eingetragene Vereine (registered voluntary associations) in Germany. Burri illustrates the point as follows: “[j]ust as national legislature could determine that great apes or certain rivers are persons within the domestic legal order, it could also state so, for instance, for webpages”.30
2.4 Mutual Recognition of Foreign Legal Persons
As soon as one country grants AI legal personality, this may have a domino effect on other nations.31 Many countries operate a doctrine of “mutual recognition” in their conflict of law provisions whereby they will accord the same status for legal persons recognised in other countries even where that entity would not be considered a person under the local law. This occurred in the UK case, Bumper Development Corp. v. Commissioner of Police for the Metropolis,32 where the Court of Appeal held that an Indian temple that was “little more than a pile of stones” could be treated as a legal person in England because it held that status in India, even though English law had no equivalent standing for religious buildings.
Within the EU, this principle finds expression in Article 54 of the Treaty on the Functioning of the European Union (TFEU), which provides:
Companies or firms formed in accordance with the law of a Member State and having their registered office, central administration or principal place of business within the Union shall, for the purposes of this Chapter, be treated in the same way as natural persons who are nationals of Member States.
‘Companies or firms’ means companies or firms constituted under civil or commercial law, including cooperative societies, and other legal persons governed by public or private law, save for those which are non-profit-making.
If AI is housed within an EU company in the manner contemplated by Bayern et al. above, then it may only take one country to recognise the validity of that legal structure within the EU to cause the entire bloc to do so.34 Indeed, the breadth of the definition of “companies or firms” in Article 54 of the TFEU suggests that even new forms of AI-specific legal persons would be covered, provided that they are “profit-making”.35 If the EU were to grant such recognition, it seems plausible that other major world economies would follow suit, so as to make themselves appear as attractive as possible to footloose AI designers and entrepreneurs who may wish to take advantage of the new legal status.
2.5 Robots in the Boardroom
… the Corporation itself is onely in abstracto, and resteth onely in intendment and consideration of the Law; for a Corporation aggregate of many is invisible, immortal, & resteth only in intendment and consideration of the Law; and therefore cannot have predecessor nor successor. They may not commit treason, nor be outlawed, nor excommunicate, for they have no souls, neither can they appear in person, but by Attorney. A Corporation aggregate of many cannot do fealty, for an invisible body cannot be in person, nor can swear, it is not subject to imbecilities, or death of the natural body, and divers other cases.36
Because holding rights and taking decisions about those rights are separate functions, it would be possible for an AI to have its own legal personality but remain under the control of humans, just like any other special purpose corporate vehicle.
Once AI is capable of taking sufficiently complex decisions, it is conceivable that the need for human decision-making on company boards could be reduced or even eliminated altogether. Florian Möslein, an expert on corporate governance, predicts that “[d]ue to its rapid technological development, artificial intelligence will enter corporate boardrooms in the very near future”, and that “technology will probably soon offer the possibility of artificial intelligence not only supporting directors, but even replacing them”.37 After surveying current corporate law, Möslein concludes that changes will be needed to allow AI to take major corporate decisions absent human oversight. Directors usually have a wide power to delegate some of their duties but nonetheless remain ultimately responsible for the management of the company.38
In 2014, an AI system was reportedly “appointed” to the board of directors of a Hong Kong venture capital firm to assist in decision-making.39 Particularly in industries where quantitative analysis and data science are paramount, AI may have significant advantages over humans. At the moment, AI acts as a decision-making aid to humans, but in the future the roles might be reversed.40 Indeed, in various industries, human-gathered intelligence and data are already fed into an AI system, which then generates a recommendation for humans to execute.41
As to whether human directors could be replaced by AI, Möslein explains that “[o]n a more general level, corporate laws usually presuppose that only ‘persons’ can become directors”.42 As such, the question of whether AI is entitled to take decisions for a legal entity depends in turn on whether AI has its own legal personality.
3 Should We Grant AI Legal Personality?
In 2007, legal scholar and sociologist Gunther Teubner made the provocative argument that “[t]here is no compelling reason to restrict the attribution of action exclusively to humans and to social systems. Personifying other non-humans is a social reality today and a political necessity for the future”.43 In order to assess this claim, we need to begin by stipulating criteria by which the merits of legal personality for AI can be assessed.
3.1 Pragmatic Justifications: Setting the Threshold
We noted at the outset of Chapter 4 that there are two potential justifications for granting AI rights: moral and pragmatic. Legal personality is neither necessary nor sufficient to protect an entity’s moral rights. A company does not have any “moral” rights—just a legitimate expectation (held in reality not by the “company” but by its officers, employees and shareholders) that its legal entitlements will be respected. Conversely, animals can be said to have various moral claims, but generally speaking they lack the legal personality to advocate for these in their own name.44
Chapter 4 addressed several moral reasons why we might wish to grant AI rights. One vehicle for the protection of those rights could be legal personality. At least in the short to medium term, it does not appear that either technology or society is at a stage where moral rights for AI are widely recognised. Therefore, the remainder of the present chapter concentrates solely on pragmatic justifications.
Bryson, Diamantis and Grant posit that a legal system exists:
1. to further the material interests of the legal persons it recognizes, and
2. to enforce as legal rights and obligations any sufficiently weighty moral rights and obligations, with the caveat that
3. should equally weighty moral rights of two types of entity conflict, legal systems should give preference to the moral rights held by human beings.45
It is not clear whether the above authors consider “moral rights” to include economic interests. If not, then the statement is inaccurate because economic rights often trump moral ones: a hungry person is not at liberty to rob a supermarket.
A slightly improved version of the formula against which granting legal personality to AI is to be measured would be as follows: (a) maintaining the integrity of the legal system as a whole and (b) advancing the interests of humans. For the avoidance of doubt, the term “interests” refers to economic as well as moral claims. Advancing the interests of humans is less narrow than “giving preference to the moral rights held by humans”, because in many circumstances the interests of humans generally will be served by giving preference to a legal entity over a human, thereby upholding the institution of separate legal personality that is fundamental to most advanced economies.
3.2 Filling the Accountability Gap
So long as we can conceive of these machines as ‘agents’ of some legal person (individual or virtual), our current system of products liability will be able to address the legal issues surrounding their introduction without significant modification. But the law is not necessarily equipped to address the legal issues that will start to arise when the inevitable occurs and these machines cause injury, but when there is no ‘principal’ directing the actions of the machine. How the law chooses to treat machines without principals will be the central legal question that accompanies the introduction of truly autonomous machines, and at some point, the law will need to have an answer to that question.47
Without legal personality for AI, our two pragmatic aims might pull in different directions: on the one hand, in order to advance the interests of humans, we might want to find a legal person responsible for harm; on the other, seeking a human or corporate party to hold responsible for AI might come at the expense of the integrity of the legal system as a whole.
In Greek mythology, the bandit Procrustes was famous for placing victims on a wooden bed, then cutting off the ends of their limbs if they were too tall or stretching their limbs out of joint if they were too short. In a similar way, by seeking always to find an existing legal person responsible for all AI actions, we risk damaging the coherence of the legal system. Koops, Hildebrandt and Jaquet-Chiffell comment: “For tomorrow’s agents, however, applying and extending existing doctrines in these ways may stretch legal interpretation to the point of breaking”.48
Where the chain of causation between a recognised legal person and an outcome has been broken, interposing a new AI legal person provides an entity which can be held liable or responsible. AI personality allows liability to be achieved with minimal damage to fundamental concepts of causation and agency, thereby maintaining the coherence of the system as a whole.49
3.3 Encouraging Innovation and Economic Growth
The rights and liabilities of a company are usually separate from those of its owners or controllers.50 Creditors of a company only have recourse to that company’s own assets, a feature known as “limited liability”. The limited liability of companies is a powerful tool in protecting humans from risk and thereby encouraging innovation.51 A single human can create a company, which she owns and of which she is the sole director. She can take every decision for the company, and she alone can reap the benefits in terms of the increased value of her shareholding if the company is successful. In practice, there may be almost nothing to distinguish the company from its human owner. And yet, despite all this, corporate law allows for the company to be treated as separate from her. Even if the company goes into bankruptcy, absent any fraud or personal guarantees of its liabilities, the owner can walk away entirely unscathed.
Granting AI legal personality could create a valuable firewall between existing legal persons and the harm which AI could cause. Individual AI engineers and designers might be indemnified by their employers, but creators of AI systems—even at the level of major corporates—may become increasingly hesitant to release innovative products to the market if they are unsure what their liability will be for unforeseeable harm. We return to this point below when considering some of the objections raised against separate personality for AI.
Arguably, the justifications for providing such legal personality to AI are even stronger than for protecting human owners from the liability of companies. AI systems can do something that existing companies cannot: take decisions without human input. Whereas a company is merely a collective fiction for human volitions, AI by its nature has its own independent “will”.52 For this reason, legal academics Tom Allen and Robin Widdison suggest that when an agent is capable of developing its own strategy, it then makes sense for that agent to be held responsible for its independent actions.53
3.4 Distributing the Fruits of AI Creativity
Alongside issues arising from harm caused by AI, Chapter 3 also demonstrated that current legal systems are poorly suited to addressing how the fruits of AI creativity should be distributed. Existing institutions such as intellectual property protection as well as free speech and hate speech laws have not been adapted to cover situations where the creator of a meaningful output is not a recognised legal person.
Allowing AI to hold property would resolve the issues raised concerning the ownership of new intellectual property of which AI is the creator. To the extent that the relevant acts of creativity were split between humans and AI, the intellectual property rights could be shared accordingly, much as they are shared between multiple human creators at present.
Granting AI other civil rights, including perhaps to freedom of speech, may be justified in circumstances where that AI’s speech is a valuable contribution to the marketplace of ideas, which deserves to be protected for its benefit to human society. Without such protections, the powerful could simply restrict AI’s ability to generate important output and there may be no legal person in a position to complain on the AI’s behalf. A corollary effect is that AI could become subject to hate speech laws, which might then be used to prevent it from engaging in harmful discourse.
3.5 Skin in the Game
In his 2017 book Skin in the Game, Nassim Nicholas Taleb argues that in any given social system all participants should have some kind of a vested interest to encourage them to think properly and learn from their mistakes.54
More than just creating a pool for compensation, personality for AI could provide an AI system with the motivation to adhere to certain rules which, otherwise, it might abandon or eschew on the grounds that they conflict with its motives. Assuming that AI is trained to value its own assets, providing AI with personality could therefore give it skin in the game. Deterrence is a major feature of both civil and criminal law: shaping a rational actor’s behaviour by signalling that undesirable consequences will follow if a particular norm is transgressed. Humans agree to live in societies and submit to various laws because on the whole it serves our interests to do so. Provided AI could be imbued with a sufficiently nuanced model of the world, the same motivating structures might in theory be applied to it.
Though an AI entity may not be swayed by the psychological and emotional aspects of wanting to be seen by its peers to act lawfully, it seems easier to conceive of an AI system acting rationally to avoid diminution of its assets. F. Patrick Hubbard has termed this justification a “prudential grant of personhood”.55
3.6 Arguments Against Personality for AI
3.6.1 The “Android Fallacy”
The simplest—and least tenable—objection to AI personality rests on a mistaken conflation of the idea of personality with humanity.
Legal academic Neil Richards and roboticist William Smart argue “… one particularly seductive metaphor for robots should be rejected at all costs: the idea that robots are ‘just like people’… We call this idea ‘the Android Fallacy.’”56 Richards and Smart are correct to caution against the Android Fallacy, but this does not mean that one needs to abandon the concept of AI personality. Proposals for AI legal personality rarely go as far as giving robots all of the same rights as humans, and there is no logical or legal reason why they would need to.
Arguments against AI personality often shade into the Android Fallacy. Jonathan Margolis, writing in the Financial Times, declared that “Rights for robots is no more than an intellectual game”, and that though “AI could exceed our capabilities… its personhood is illusory”.57 However, this is more of an assertion than an argument.
In 2018, a group of experts in AI, robotics, law and ethics published an open letter to the European Commission, attacking the proposal for electronic personhood in the following terms:
From a technical perspective, [the proposal for electronic personhood] offers many [sic] bias based on an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and, a robot perception distorted by Science-Fiction and a few recent sensational press announcements….
A legal status for a robot can’t derive from the Natural Person model, since the robot would then hold human rights, such as the right to dignity, the right to its integrity, the right to remuneration or the right to citizenship, thus directly confronting the Human rights. This would be in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms.59
When considering whether to grant legal personality to AI, the point is not whether the potential legal person understands the meaning of its actions. A temple or river cannot be said to be conscious of its legal personality. Indeed, we recognise the legal personality of humans who are unaware that they have it, including young children and people in permanent comas.60 Even though children and those of diminished mental faculties can usually only act via other representatives, nonetheless they are still legal persons. Seen in this light, there is no magic to granting AI legal personality. We are not declaring it is alive.
3.6.2 “Robots as Liability Shields”
There is nothing objectionable in itself about actors pursuing selfish ends through law. A well-balanced legal system, however, considers the impact of changes to the rules on the system as a whole, particularly so far as the legal rights of legal persons are concerned. We take the main case of the abuse of legal personality to be this: natural persons using an artificial person to shield themselves from the consequences of their conduct.61
The liability shield criticism assumes that any instance of separate legal personality will be abused by humans on a habitual basis. To the contrary, and as noted above, it has been recognised for centuries that separate legal personality plays a valuable economic role in enabling humans to take risks without sacrificing all of their own assets.62 Indeed, exactly the same liability shield objection might equally be raised against limited liability for companies. Surely even the most trenchant critics of AI personality would not advocate abolishing all companies, yet this is the logical conclusion of some of their arguments.
Bryson et al. cite a well-known international law case, JH Rayner (Mincing Lane) Ltd. v. Department of Trade and Industry63 as “foreshadow[ing] the risk that electronic personality would shield some human actors from accountability for violating the rights of other legal persons, particularly human or corporate”.64 In JH Rayner, a number of parties had made contracts with an international organisation, the International Tin Council (ITC), whose members included various individual states. In 1972, the UK had recognised the ITC as having its own legal personality, which enabled it to, among other things, enter into contracts. Various private parties contracted directly with the ITC, which eventually defaulted on some of these agreements. It transpired that the ITC itself did not have any assets. Some of the disappointed parties therefore attempted to sue one of its members, the UK (via its Department of Trade and Industry), arguing that the ITC was not a legal person separate from its members. The UK’s House of Lords rejected the claimants’ case, holding that the ITC was separate from its members.
The real difficulty for the claimants in JH Rayner was that they had contracted with an entity that collapsed without any assets. This is hardly a problem unique to AI; it can arise in any situation where a party incurs a liability but lacks the means to satisfy its debts. Where a company goes into liquidation, unsecured creditors may be left out of pocket. Where an impecunious person causes harm to others, the victims may not be able to seek financial recourse against the responsible party. In short, the problems complained of by Bryson et al. are nothing new. There are various ways of addressing them, including insurance, taking adequate security and economically prudent behaviour—such as not entering into a contractual relationship with a party that may be unable to satisfy its obligations for want of assets. Simply put, if the claimants in JH Rayner had wanted to avoid financial risk, then they should not have contracted with the ITC.
Finally, in answer to the concern that AI might be used cynically by humans to harm others with impunity, well-established rules exist to prevent this from happening in corporate law. The same principles could be applied to AI.65 Where a company is used as a cloak for wrongdoing and its owners are seeking to exploit its corporate personality to shield themselves, the separate legal personality of the company can be ignored and liability can be fixed directly on its owners.66 This is known as “piercing the corporate veil”.67 In these situations, the law acknowledges that the fiction of a company is useful only up to a point.
Indeed, the idea of a human being able deliberately to exploit an AI system’s separate legal personality presupposes that the human is in sufficient control of the AI to know what it is going to do. AI personality, by contrast, is designed predominantly to cater for situations in which such control or foreseeability from a human perspective is no longer present. If a person intentionally employs AI as a vehicle to do harm to others or recklessly uses AI to achieve some other end, realising that harm to others is a likely result of such use, then the person in question may be held liable under existing regimes—both criminal and private law.
3.6.3 “Robots as Themselves Unaccountable Rights Violators”
A further argument raised by some critics against AI personality is that the robots themselves are unaccountable: “[a]dvanced robots would not necessarily have further legal persons to instruct or control them”.68 Bryson et al. argue that “[g]iving robots legal rights without counter-balancing legal obligations would only make matters worse”. This may be true, but why then not give robots legal obligations?
In order to make legal personality for AI useful from the perspective of settling liability, the AI will need to be given some way of holding funds, or at least having access to a pool of assets which can be used to satisfy creditors (such as a compulsory insurance scheme). One of the shortcomings of the European Parliament proposal was that it did not make clear enough how AI personality could be used to fill any responsibility gap. An ability for AI to hold property could have been explicitly linked to both its personality and to the ability of that AI to settle debts or pay compensation. Rights for AI in this sense (and legal protections for those rights) are a means to an end rather than an end in themselves.
3.6.4 Social Dislocation and Disenfranchisement
In recent years, various commentators have observed that in addition to the traditional “right/left” economic and political divide (pursuant to which people and groups are seen as being, broadly, against or in favour of government intervention), a new gulf has emerged particularly in developed economies between groups who are “anywhere/somewhere”, “open/closed”69 or “drawbridge down/drawbridge up”.70 These categories refer to the difference in attitudes between people who favour globalisation and multiculturalism versus people who value their own local culture and economy and may be more resistant to what they perceive to be a loss of identity.
David Goodhart, the journalist who coined the “Anywheres/Somewheres” distinction, describes the two groups as follows:
Anywheres dominate our culture and society. They tend to do well at school… then usually move from home to a residential university in their late teens and on to a career in the professions… Such people have portable ‘achieved’ identities, based on educational and career success which makes them generally comfortable and confident with new places and people.
Somewheres are more rooted and usually have ‘ascribed’ identities - Scottish farmer, working class Geordie, Cornish housewife - based on group belonging and particular places, which is why they often find rapid change more unsettling. One core group of Somewheres have been called the ‘left behind’ - mainly older white working class men with little education. They lost economically with the decline of well-paid jobs for people without qualifications and culturally, too, with the disappearance of a distinct working-class culture and the marginalisation of their views in the public conversation.72
Why are these trends relevant to the question of whether to grant legal personality to AI? Though this book is not about the economic impact of AI and technological unemployment, this is undeniably a major concern for world economies and populations. White collar jobs may be increasingly threatened by AI, but nonetheless it remains likely that jobs requiring less skill and training will be replaced first, not least because those taking the relevant decisions are often skilled individuals who will not be keen to cannibalize their own jobs or those of their immediate friends and family.
Putting the two issues together, the somewhere/closed/drawbridge up group of the population may well consider it adding insult to injury to be told not only that an AI entity has taken their job, but also that the AI entity is going to be granted some form of legal rights. A new social fissure might be added to the growing list of descriptors: technophiles versus neo-Luddites. The latter term, which refers to the nineteenth-century bands who smashed machinery fearing its impact on their jobs, is not intended pejoratively. Technology writer Blake Snow describes (and advocates) “reformed Luddism”, saying: “to be a reform Luddite, all you have to do is recognize the many benefits of personal technology, but do so with an untrusting eye”.73
I am told repeatedly in the tech startup bubble that unemployed truckers in their 50s should retrain as web developers and machine-learning specialists, which is a convenient self-delusion. Far more likely is that, as the tech-savvy do better than ever, many truckers or taxi drivers without the necessary skills will drift off to more precarious, piecemeal, low-paid work.
Does anyone seriously think that drivers will passively let this happen, consoled that their great-grandchildren may be richer and less likely to die in a car crash? And what about when Donald Trump’s promised jobs don’t rematerialise, because of automation rather than offshoring and immigration? Given the endless articles outlining how “robots are coming for your jobs”, it would be extremely odd if people didn’t blame the robots, and take it out on them, too.74
Striking this balance is an ongoing challenge. Although the economic benefits to be gained from AI might first be enjoyed by those who are already highly fortunate, in turn it is to be hoped that AI will bring benefits for the whole of society. These questions of equity and distribution are outside the scope of the present work. Nonetheless, it is suggested here that the trade-off between granting AI some rights and also ensuring that the technology remains socially acceptable can be overcome, or at least managed effectively. The techniques for consultative rule-making set out in Chapters 6 and 7 aim to go some way towards bridging this gap.
4 Remaining Challenges
Notwithstanding the theoretical possibility of legal personality for AI, there are still some significant challenges and issues which remain to be resolved if AI is to be granted this status.
4.1 When Does AI Qualify for Legal Personality?
We may wish to set some minimum criteria for AI personality. F. Patrick Hubbard has suggested granting legal personality to an entity if it has the following capacities: (a) an ability to interact with its environment and to engage in complex thought and communication, (b) a sense of being a self with a concern for achieving its plan for its life, and (c) the ability to live in a community with other persons based on, at least, mutual self-interest.75 Hubbard’s second criterion resembles what this book would term “consciousness”.76 If legal rights for AI are viewed from a purely pragmatic perspective, then consciousness would not be necessary. Either way, at least Hubbard’s first and third criteria seem to be a good starting point as a threshold for AI personality.
The precise boundaries at which we determine that AI can or should be granted personality are a matter of legitimate debate, which can be addressed through the law-making mechanisms described in the following chapters. Further questions arise as to whether, when an entity meets the relevant tests, legal personality should be optional or compulsory. Ultimately, these are moral and political issues, rather than ones which can be resolved by legal reasoning alone.
4.2 Identification of the AI
Personality for AI or a robot presupposes that it is possible to identify that entity with reasonable certainty. This is an empirical question.
Because AI can change and adapt, it might reasonably be asked whether it is the same program from one instance to the next.77 However, we should not forget that the same question could be asked of humans, namely whether our identity persists as we change and develop throughout our lifetimes.78
The philosopher A.J. Ayer argued that human identity through time consists in the identity of our bodies.79 However, unlike a human mind, which is (at least for now) inextricably linked to a body, a robot mind could be held in any number of different repositories—and indeed more than one simultaneously.80
Although robots have a physical form, at present this is merely a vehicle for the operative AI, which could in most cases be transferred to other storage or operating systems. This problem may not apply if we were to develop embodied technologies via whole brain emulation, human-computer interfaces or similar—where the intelligent software and hardware are inextricably linked. Notwithstanding such future developments, it would not currently make sense to conflate an AI system with the physical hardware on which it functions. Accordingly, we must look to something inherent and identifiable in the nature of the non-physical AI as its method of identification.
The issue is particularly acute where the AI entity in question is a unit of a greater whole, as part of a “swarm” or network of AI systems. Some consumer-facing AI already has these elements. For instance, since 2009 the search algorithm used by Google has learned both from individual users and from wider data drawn from the entire community.81 When a user signs into her unique Google account, her search results will reflect a combination of personalised data, such as her past searches and location, as well as general updates provided to all users of the platform.82 Difficult questions of identification arise when seeking to determine whether it is the same Google AI algorithm operating on person A’s smartphone as on person B’s computer.
In order for entities to take advantage of the benefits allowed by legal personality, it might be made a requirement that AI be registered83 and marked with an indelible and immutable electronic identifying “stamp” corresponding to that registry, such that identification is always possible.84 A distributed ledger or block chain system might be used to verify any register of AI and to prevent tampering with entries.
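To make the registration idea concrete, the following minimal sketch in Python illustrates one way an identifying “stamp” might be derived from an AI system’s serialised parameters and recorded in an append-only, tamper-evident register. All names and details are illustrative assumptions, and a simple hash chain stands in for the distributed ledger mentioned above; this is a sketch of the general technique, not a proposal for any existing scheme.

```python
import hashlib
import json
import time

def fingerprint_model(model_bytes: bytes) -> str:
    """Derive an identifying "stamp" from an AI system's serialised
    parameters; any change to the model yields a new fingerprint."""
    return hashlib.sha256(model_bytes).hexdigest()

class AIRegistry:
    """Append-only register of AI systems. Each entry embeds the hash
    of the previous entry, so later tampering with any record is
    detectable. A simplified stand-in for a distributed ledger."""

    def __init__(self):
        self.entries = []

    def register(self, fingerprint: str, operator: str) -> str:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "fingerprint": fingerprint,
            "operator": operator,        # responsible party, if any
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["entry_hash"]

    def verify_chain(self) -> bool:
        """Confirm that no entry has been altered since registration."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or recomputed != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

# Hypothetical usage: stamp and register a model snapshot before deployment.
registry = AIRegistry()
stamp = registry.register(fingerprint_model(b"serialised-model-weights"),
                          operator="Example Corp")
assert registry.verify_chain()
```

On this sketch, any change to the AI’s parameters produces a new fingerprint, so a materially updated system would need to be re-registered; whether an update should create a new legal identity or amend an existing one is precisely the kind of question that regulators would need to settle.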
One option is for an AI system to be registered in more than one place: if an AI system takes updates from a centralised source, but is also personalised to its user, then it might be sensible for the AI to be regulated individually and collectively. Similar principles of overlapping duties apply to humans: if a lorry driver crashes and causes damage, she may be held liable in her capacity as an individual person, but she may also be held liable in her capacity as an employee of a larger entity. In the circumstances, a person harmed is likely to pursue whoever has deeper pockets (generally the employer).
If AI is to hold substantive economic rights, such as the ability to own property or hold funds, then there would need to be some way in which to link the given AI system to the method of ownership. This is one of the reasons why many countries require companies to be registered so that their identity can be verified and thereby linked to certain rights. A bank account containing money would need to be in the name of someone or something. To this extent, some form of registry for AI may be an inescapable requirement for AI to hold rights.
A human may be able to just about survive “off-grid”, without a social security number and outside of the knowledge of authorities, but in developed economies this is increasingly difficult. In order to access many basic goods and services, some form of local, federal or national identity verification is required. The same bottlenecking principles could be applied to AI. Thus, though not every AI entity would need to be registered, this could be made an essential prerequisite in order for that AI system to avail itself of certain pieces of legal and economic infrastructure, such as insurance, the banking system or even perhaps the Internet. As such, in order for AI systems to participate in activities in which legal liability is likely to arise, registration and licensing could be made mandatory.
4.3 What Legal Rights and Responsibilities Might AI Hold?
4.3.1 Potential Rights and Obligations
Based on the various justifications for AI personality set out in the preceding section, the rights and obligations which we may wish to grant AI include: separate legal personality and the corporate veil, the ability to own and dispose of assets, the rights to sue and be sued and the freedom to have certain expression or speech protected/prohibited.
First, by the recognition of an autonomous consent— which is not a fiction at all—it would solve the question of consent and of validity of declarations and contracts enacted or concluded by electronic agents without affecting too much the legal theories about consent and declaration, contractual freedom, and conclusion of contracts. Secondly, and also quite important, it would “reassure the owners-users of agents”, because, by considering the eventual “agents” liability, it could at least limit their own (human) responsibility for the ‘agents’ behaviour.85
In addition to the registration of AI on a distributed ledger such as a blockchain, that ledger could also display the assets of the AI, with the result that any potential counterparty would know exactly how creditworthy the AI is. The international regulatory framework for banks (the current iteration of which is known as “Basel III”) requires that banks hold a minimum amount of regulatory capital which can be called upon in the case of an emergency.86 Similar requirements might be imposed on AI in order for it to be permitted to take advantage of the various benefits that personality brings.87 If an AI’s assets or credit rating drop below a certain level, then it could be frozen automatically out of certain legal and economic rights.
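By way of illustration only, the following sketch extends the hypothetical registry above with a Basel-style capital adequacy gate. The threshold figure and all names are invented for the example; real thresholds, and the rights to which they attach, would be matters for regulators.

```python
from dataclasses import dataclass

# Invented figure: a minimum capital requirement for registered AI
# legal persons, loosely analogous to bank regulatory capital.
MINIMUM_CAPITAL = 10_000.00

@dataclass
class RegisteredAI:
    fingerprint: str   # the registry "stamp" described above
    assets: float      # pool of funds available to satisfy creditors
    frozen: bool = False

def review_solvency(ai: RegisteredAI) -> None:
    """Suspend the AI's transactional rights when its asset pool falls
    below the minimum, mirroring the capital-adequacy logic above."""
    ai.frozen = ai.assets < MINIMUM_CAPITAL

def may_contract(ai: RegisteredAI) -> bool:
    """Gatekeeping check a counterparty or platform might run before
    dealing with the AI."""
    return not ai.frozen

agent = RegisteredAI(fingerprint="ab12...", assets=9_500.00)
review_solvency(agent)
assert not may_contract(agent)  # below the minimum, so frozen out
```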
4.3.2 What Are the Limits?
The same bundle of rights accorded to companies in different legal systems need not be directly transposed on to AI. We are unlikely to want AI to hold various “civil” rights which are deeply entwined into shared concepts of human society, such as the right to vote, or to marry.88
There is also no need for AI legal rights to be absolute or indefeasible. Most human rights—including even the right to life—can be restricted or overridden in appropriate circumstances: police can shoot a dangerous assailant where necessary. Any AI rights must sit alongside other legal rights and norms, which occasionally clash and are subject to regulatory or judicial adjudication. This realisation also goes to answering the simplistic objection to AI personality which assumes that we will thereby create some kind of master race capable of always defeating the rights of humans. Balancing AI rights against those of existing legal persons will be a complex exercise and (as with many of the unanswered questions) is best answered through societal deliberation.
4.4 Would Anyone Own AI?
It might be thought that an entity cannot be both a person and property. This is incorrect; though today it may be repugnant to think of any human person as being owned by another, we experience no cognitive dissonance in viewing a company as both a legal person and the property of its shareholders.
At present, most corporate structures, no matter how complicated, end with humans as the ultimate beneficial owners.89 Just as no one “owns” a human, could we have a situation where no one “owns” AI? In theory, this would be possible, but we would need to decide as a society whether or not this would be desirable. For example, Koops et al. predict a three-tiered progression in terms of AI personality: in the short term, they predict “interpretation and extension of existing laws”. In the middle term, they predict “limited [AI] personhood with strict liability”,90 involving “introduc[tion of] strict liability for electronic agents if their unpredictable actions are felt to be too risky for business or consumers”.91 In the long term, they consider that we may develop “full AI personhood with ‘posthuman rights’”.92 The latter, they suggest, should only arise if and when machines develop self-consciousness.
4.5 Could a Robot Commit a Crime?
Chapter 3 addressed human criminal responsibility for the acts of AI. Granting AI legal personality, the subject of the present chapter, opens the door to AI having criminal responsibility for its own acts.
Even though a company has no “soul to damn or body to kick”,93 criminal liability for corporations has existed since the Middle Ages,94 and a similar concept might perhaps be extended to AI if and when it is granted legal personality. On the one hand, this might fill the “retribution gap” identified by John Danaher, namely the psychological expectation that there will be a criminally responsible agent to be punished for causing harm.95 However, a major difficulty remains in that it is hard to reconcile AI criminality with the general requirement in criminal law that a guilty party must “intend” to commit the criminal act.
4.5.1 Case Study: Random Darknet Shopper
In Switzerland, an artistic collective created a piece of software called Random Darknet Shopper, which was enabled once a week to access the deep web, a hidden portion of the Internet, and purchase an item at random.96 Random Darknet Shopper purchased items including a pair of fake Diesel jeans, a baseball cap with a hidden camera, 200 Chesterfield cigarettes, a set of fire brigade-issued master keys and 10 ecstasy pills.97
The ecstasy purchase came to the attention of the local St Gallen Police Force, which seized the physical computer hardware from which the Random Darknet Shopper was run, as well as the various items it had purchased. Interestingly, both the human designers and the AI system were formally charged with the crime of making an illegal purchase of a controlled substance. Three months later, the charges were dropped and all property was returned to the artistic collective (apart from the ecstasy, which was destroyed).98
4.5.2 Locating Intent
The key question for AI criminal liability is whether it has the relevant mens rea (guilty mind). There are two elements: first, ascertaining the AI’s decision-making process on a factual basis, and secondly, the social and policy question of how this should be treated under criminal law. Because mens rea criteria apply only to humans at present, they are tailored to (what we perceive to be) human thought processes. AI does not function in the same way, and in seeking to apply anthropomorphic concepts to it we risk incoherence and confusion.
It is possible to distinguish between situations in which AI has made a mistake as to a fact and those in which AI has applied the “wrong” rule to a known fact. When a factory robot thinks that a human operator’s head is a component in the manufacturing process and decides to crush it—killing the human—this is akin to a mistake of fact.99 However, where an AI supplants human instructions in some unexpected way—for example the toaster burning a house down to cook all the bread—then this might seem closer to a concept of a criminally guilty mind. Similarly, if AI was to develop the capability to deliberately disobey clear human instructions, then this might also be considered criminal.
Even if AI’s “mental state” could be measured and ascertained, we still need to ask whether it would be appropriate from a social and psychological perspective to apply criminal law tenets to a non-human entity. On one view, the notion of mens rea is something which is by its very nature only appropriate to humans. If correct, this seems to militate against AI ever having criminal intent in the sense currently recognised. A system of law might define a new culpable “mental” state applicable to AI, but then labelling it mens rea may no longer be appropriate.100
The designation of an act as criminal is usually linked to some form of penalty. If AI was held criminally responsible, there is a further question as to how AI might be punished. The final part of Chapter 8 explores further sanctions which might be used against AI.
5 Conclusions on Legal Personality for AI
Chopra and White write in the introduction to their book on AI personality: “The artificial agent is here to stay; our task is to accommodate it in a manner that does justice to our interests and its abilities”.101
It may be hard to imagine a world without separate legal personality for companies, but Paul G. Mahoney has pointed out in a historical study of the institution: “[h]ad property and contract law been permitted to keep evolving in the field of business operations, the set of asset partitioning rules that reside in the law of property, contract and tort (rather than the law of partnership and corporations) might be much larger”.102 Seen in this way, legal personality is neither special nor inevitable in any area; it is simply one tool available to humans to help achieve our extra-legal aims.
Even if separate legal personality for AI is accepted in theory, there remain various difficult unanswered questions as to how it should be structured. The following chapters suggest how we can build institutions capable of resolving these issues.
Eventually, an intelligent computer will end up before the courts. Computers will be acknowledged as persons in the interest of maintaining justice in a society of equals under the law. We should not be afraid that that day may come soon.103