4

Machines at the Helm

For most citizens in liberal democracies, life depends on the provision of good-quality public services in education, health care, retirement, policing, taxation, and the courts—to name but a few. These valuable public services are becoming ever more costly, contributing to swelling government deficits and ballooning national debts. Adverse demographics make this trend worse: many liberal democracies are trapped in a vicious circle of diminishing tax receipts, due to the scarcity of young, taxable workers, and an aging population that demands more public services. This situation is clearly unsustainable. Artificial intelligence is a technology that could help alleviate some of these imbalances—for example, by optimizing the distribution of scarce public resources, improving decision-making based on intelligent predictions, and preventing tax fraud. Shouldn’t we therefore welcome the computer automation of government and public services using AI?

The relationship between computers and governments goes back a long way. In The Government Machine,1 historian Jon Agar argues that the ideological roots of computers are to be found in public administration. According to Agar, the mechanization of government started in the late eighteenth century, when the public administration of the United Kingdom, entrusted with running a global empire, invested in efficient operations as well as in the collection and processing of information from across the world. In a liberal system of government, where civil servants run everyday government affairs, the computer is the reflection of a technocratic vision of efficiency and process management. As Agar says, “The general purpose computer is the apotheosis of the civil service.”2 Until recently, computer systems have acted as the “peripheral nervous system” of government organizations: automating some processes, collecting data, and serving citizens over the web through simple interfaces. Extending the brain metaphor, the “central nervous system” of government has remained the cadre of top-level human administrators: the departmental directors, directors general, and ministers. For it is at that level that the ultimate responsibility for decision-making and action still resides. Nevertheless, complex government decisions require not just the data and information fed upward by administrative processes but expert advice too. Indeed, the rise of expert advisors in public administration—statisticians, health scientists, economists, and so on—is essential for “evidence-based” politics, or what the German philosopher Jürgen Habermas calls the “scientization of politics.”

Enter AI, which could automate the central nervous system of government as well, delivering an efficient, mechanized “organizational brain” capable of making complex decisions autonomously by accessing vast amounts of diverse knowledge and data. By replacing human decision makers in public administration with informational processes controlled by AI algorithms, one could get the perfect government: impartial, efficient, and effective. But what would be the consequences of turning the metaphorical “government machine” into a reality?

Automating the Government

Claiming that the combination of AI and big data is a better way to run government decision-making and services may be both sensible and undesirable at the same time. By extracting human empathy from government and transforming human subjectivity into mechanical objectivity, we may end up with something worse than we bargained for. Virginia Eubanks, in her book Automating Inequality,3 presents a number of case studies of the use of automation and algorithms by public service agencies in the United States. In Indiana, an automated benefits system categorized its own errors as “failure to cooperate,” and, as a result, wrongful denials of food stamps soared from 1.5% to 12.2%.4 More worryingly, Eubanks notes, in the name of efficiency and fraud reduction, automation replaced human caseworkers who had exercised compassion and common sense in helping vulnerable people. Many other examples point to the unintended consequences of replacing humans with algorithms. COMPAS is an AI algorithm widely used in the United States to guide sentencing by predicting the likelihood that a criminal will reoffend. In May 2016 the US organization ProPublica reported that the system systematically overestimates the recidivism risk of black defendants.5 A similar case is PredPol, an algorithm used by police in several US states to predict where crimes will take place. In 2016 the Human Rights Data Analysis Group found that the system led the police to unfairly target neighborhoods with a higher proportion of people from racial minorities.6

The problem with automation is linked to the data-driven nature of machine learning. Algorithms “learn” by adjusting internal numerical weights over thousands of successive training iterations on data sets. But those data may carry inherent biases. Safiya Umoja Noble, in her book Algorithms of Oppression,7 recounts her experience when, looking for inspiration on how to entertain her preteen stepdaughter, she searched for “black girls” on Google. To her horror, instead of information that might interest this demographic, the engine produced results awash with pornography. Algorithms that feed on user citations pose a major threat to the human rights of marginalized groups: results for Black teenage boys, Noble notes, appear next to criminal-background-check products. Gender equality is also affected. A search for “professor” returns almost exclusively white men, as does one for “CEO.” Because of that, Google’s online advertising, which feeds on search results, shows high-income jobs more often to men than to women, thus perpetuating the gender imbalance. Bias in data reinforces societal inequalities and prejudices. Moreover, a weakness of deep neural networks is that they cannot explain the reasons for their outputs. The combination of data bias and algorithmic inexplicability can be highly problematic when AI systems have an impact on citizens’ lives. From a classical liberal perspective, it is politically intolerable: it alienates citizens from the state and transforms the latter into an authoritarian, oppressive machine.
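To make the mechanism concrete, here is a minimal sketch, in Python with entirely synthetic data, of how a “neutral” learning algorithm absorbs bias from historical records. The group labels, feature names, and numbers are all invented for illustration; nothing here reflects any real system.

```python
# Minimal sketch: a model trained on biased historical data reproduces
# that bias, even though the algorithm itself is "neutral".
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A sensitive attribute (group A vs. group B) and a genuinely
# predictive feature ("merit"), drawn identically for both groups.
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
merit = rng.normal(0, 1, n)     # the legitimate signal

# Historical decisions were biased: group B was approved less often
# than its merit warranted. The label encodes that human prejudice.
historical_approval = (merit - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train a "neutral" classifier on the biased record.
X = np.column_stack([group, merit])
model = LogisticRegression().fit(X, historical_approval)

# The learned weights show the model has absorbed the bias:
# it penalizes group membership directly.
print("weight on group:", model.coef_[0][0])  # markedly negative
print("weight on merit:", model.coef_[0][1])  # positive, as expected

# Approval rates still differ by group, despite identical merit.
pred = model.predict(X)
print("approval rate, group A:", pred[group == 0].mean())
print("approval rate, group B:", pred[group == 1].mean())
```

The point of the sketch is that nothing in the training procedure is malicious; the disparity enters entirely through the historical labels, which is why “the algorithm is objective” is no defense.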

AI algorithms can also shape public opinion, sway electoral results, and directly threaten the liberal system of government. The algorithms used in social media platforms optimize the content citizens receive by “profiling” them. In doing so, they reinforce our biases while excluding us from debates, information, and dialogues that lie outside our narrow interests or are at odds with our political ideology. With two-thirds of American adults getting their news from social media,8 this “algorithmic segregation” played a crucial role in the 2016 US election. As social media platforms gain enormous influence over public opinion, all kinds of possibilities open up for nefarious interference in national politics by powerful interest groups or hostile countries. Cambridge Analytica, a company based in the United Kingdom, used data from Facebook and its own AI algorithms to personalize messages and “microtarget” voters in the US election, to great effect.9 The fact that the company was exposed by the media and driven to bankruptcy does not change the fact that what it did is what digital advertising agencies do every day. We live in a world where AI algorithms modulate content around our personal views and prejudices, constantly reinforcing them and rarely challenging them. That is how advertising works, and advertising happens to be at the heart of the business models of content-based platforms such as Google, Facebook, and Twitter. The undesirable side effect of AI-powered personalization in politics is polarization, wherein everyone becomes convinced that they are right and all who hold a different opinion are wrong. This leads to a breakdown of consensus within our societies and contributes to the rise of illiberal populism and the establishment of Internet echo chambers in place of a democratic agora.

Nevertheless, given the huge benefits of AI, we must find a way to embed this revolutionary technology in our system of government without jeopardizing liberal values. How we achieve this is fundamentally a question of ethics, and many initiatives across the Western world are grappling with issues such as algorithmic explainability, data bias, reinforced social exclusion, and the limits of system autonomy. The debate on AI ethics is revealing just how difficult it is to define universal standards for AI. The Institute of Electrical and Electronics Engineers (IEEE) launched a global initiative on the Ethics of Autonomous and Intelligent Systems in order to develop consensus “on standards and solutions, certifications and codes of conduct . . . for ethical implementation of intelligent technologies.”10 The initiative chose to examine ethics not just from a “Western,” Judeo-Christian perspective but also from other cultural perspectives, such as African and Chinese ones. The IEEE’s work is arguably the most comprehensive compendium to date on how to mitigate the societal risks of intelligent systems. But as the authors aimed for a standard of the “common good,” they discovered that the idea was inconsistent with a pluralistic society.11 Efforts to impose a specific notion of the common good would violate the freedom of those who do not share that goal and inevitably lead to paternalism, tyranny, and oppression. For example, the individualism of European and American societies often clashes with the communitarian values of African societies. The authors of the IEEE report, in agreeing that human values differ across cultures, highlight one of the most important aspects of the geopolitics of AI. Given these differences, we may indeed end up with different AIs in the future—for instance, African AIs that prioritize collaboration versus European and American AIs that prioritize competition.

Meanwhile, as debates on AI ethics continue, regulation is already being put in place on both sides of the Atlantic to limit the discriminatory and biased tendencies of machine learning, as well as the tremendous influence that AI systems already exert on democratic processes. The European Union’s General Data Protection Regulation (GDPR), which came into force in 2018, requires that AI systems be capable of explaining their logic and demands transparency in how personal data are handled. In a similar vein, the State of California passed the California Consumer Privacy Act on June 28, 2018, protecting the privacy rights of consumers within the state. At the federal level, in 2019 Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) proposed the Algorithmic Accountability Act, with a House companion sponsored by Rep. Yvette Clarke (D-NY).12 The act would direct the Federal Trade Commission to create rules for evaluating “highly sensitive” automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.13 And while we grapple with the many challenges that AI poses to our liberal institutions, democratic system of government, and values, a different story is unfolding in communist China.

AI and Communism

In the late twentieth century the command economies of the communist countries of Eastern Europe and the Soviet Union imploded because they were unable to compete with free and open markets. The latter were able to allocate capital more efficiently, by quickly discovering prices on the basis of supply and demand, and thus to produce better products and services. In the planned economies of communist countries it was not possible to use markets to discover prices. Instead, central planners were burdened with solving the so-called socialist calculation problem: data about production capacity and consumer demand had to be collected using surveys, and an enormous number of mostly manual calculations had to be performed in order to discover prices that reflected the true value of economic exchanges. Those prices were then used to set production targets. The free market economist Ludwig von Mises thought that solving the socialist calculation problem was a fool’s errand.14 He argued that the complexity of an economy is so great that only a free market mechanism can discover true prices, and that without true prices any economy is doomed to failure, wasting valuable resources on producing useless goods. For von Mises, free markets are more efficient than central planning because they are natural calculators of prices.

Nevertheless, the dream of a centrally planned economy never died.15 The Polish economist and father of market socialism, Oskar Lange (1904–1965), claimed that solving the socialist calculation problem was theoretically feasible: an economy, he proposed, could be described as a series of simultaneous equations. Leonid Kantorovich (1912–1986), the Soviet mathematician who invented linear programming and won the Nobel Prize in economics in 1975, tried to solve Lange’s equations, spending six years gathering data and running calculations to optimize Soviet steel production. At the time, the Soviet Union produced around 12 million types of products. Kantorovich failed not because his results were poor but because they always arrived too late to be useful. Arguably, his failure was due to the slowness of his calculations, constrained by the computational power available to him at the time.
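To get a feel for what Kantorovich was attempting, here is a minimal sketch, in Python, of a two-product, two-resource planning problem solved with linear programming. The products, values, and resource figures are invented; the real problem had millions of products, which is precisely why his calculations came too late.

```python
# A toy version of Kantorovich's planning problem: choose production
# quantities that maximize value, subject to scarce resources.
# All products, values, and resource figures are invented.
from scipy.optimize import linprog

# Two products (say, steel and tractors) worth 3 and 8 per unit.
# linprog minimizes, so we negate the objective to maximize value.
value = [-3.0, -8.0]

A_ub = [[2.0, 4.0],   # tonnes of ore needed per unit of each product
        [1.0, 3.0]]   # person-days of labor needed per unit
b_ub = [100.0, 60.0]  # total ore and labor available

res = linprog(value, A_ub=A_ub, b_ub=b_ub, method="highs")
print("optimal production plan:", res.x)  # -> [30. 10.]
print("total value produced:", -res.fun)  # -> 170.0

# The dual values of the resource constraints are the "shadow prices"
# central planners were trying to discover: the marginal value of one
# more tonne of ore or one more day of labor. We negate them because
# we negated the objective above.
print("shadow prices (ore, labor):", -res.ineqlin.marginals)  # -> [0.5 2.]
```

With two products this solves in microseconds; with 12 million products, interdependent constraints, and data collected by telex and survey, the calculation outran the planning cycle it was meant to serve.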

The 1970s saw one more effort by a socialist government to solve the socialist calculation problem, this time in Chile. As soon as Salvador Allende became the first Marxist president of Chile in 1970, he embarked on a massive program of nationalization and collectivization. His vision was to transform his country into a socialist utopia and to succeed where the Soviet Union and Eastern Europe were failing, by creating an egalitarian society whose economy was managed scientifically from the center for the benefit of every citizen. To his aid came the British cybernetician Stafford Beer, a pioneer of organizational cybernetics. Beer designed and helped build Cybersyn, a cybernetic system that would be the “nervous system” of the new Chilean socialist state.16 The system ran on a mainframe computer and collected data from factories via 500 telex machines. The data were then fed into a simulator that ran various mathematical models in order to predict the possible outcomes of different decisions and policies. A central control room was built in Santiago, the Chilean capital, where the managers of the socialist Chilean economy could meet and control production levels across the country. This was supposed to be the future, the end of history, the way the world ought to be run. Except that Cybersyn was never put to the test. Following a military coup in 1973 the Allende government was ousted, and Chile became a military dictatorship under General Pinochet. Cybersyn was mothballed, and Milton Friedman, the University of Chicago guru of free market economics, arrived in Chile to nudge the country in a completely different economic direction.

Fast-forward to today, when machine-learning algorithms are capable of discovering correlations in massive, unstructured, and disparate data sets, and of using them to make predictions and classifications that far surpass the capability of any human. Not even in their wildest dreams could Lange, Kantorovich, Beer, and Allende have imagined the current state of computer technology. Could AI, big data, and supercomputers provide the technological means to solve the socialist calculation problem? The Chinese communists certainly think so.

In May 2018, Professor Feng Xiang, one of China’s most prominent legal scholars, published an op-ed in the Washington Post entitled “AI Will Spell the End of Capitalism.”17 He argued that in China’s socialist market economy AI could rationally allocate resources through big data analysis and robust feedback loops—practically echoing the cybernetic model of a socialist economy imagined in Cybersyn. Given that work automation will cause mass unemployment and demand for universal welfare, Feng suggested that AI and big data should be nationalized. Private companies would coexist in this cybernetic communist scenario but would be closely monitored by the state and kept under social control. Instead of corporate bosses serving the needs of shareholders, as in capitalism, Chinese business leaders would serve the needs of the worker-citizens. Feng’s op-ed was a window into how the political elite of China thinks about the future of the Chinese economy and political system in the age of intelligent machines. After all, the ultimate goal of communism is the elimination of wage labor; the automation of work should therefore be celebrated and embraced. Kantorovich and Lange are being revisited in earnest, as evidenced by the work of the Chinese economists Binbin Wang and Xiaoyan Li, who demonstrated how a combination of machine learning and low-cost sensors could optimize production in real time, as well as personalize it to the needs of citizens.18

However, for this proposition to become reality, the Chinese government must have unimpeded access to citizen data. The AI solution to the socialist calculation problem requires citizen surveillance. China’s social credit system should be seen as part of this grand vision for the future of communism, and not simply as a way for the Chinese government to spy on citizens’ lives, although the latter is clearly one of its goals. The system is most fully developed in Xinjiang Province, where it is used to monitor and control the Uighur population; many citizens deemed unsafe are shut out of everyday life or sent to reeducation centers in the province.19 For the rest of China, the social credit system turns citizen surveillance into an—almost amusing—game whereby citizens are incentivized to behave in specific ways and, if successful, receive rewards. The reward scheme is based on a point system, with every citizen starting off with 100 points. Citizens can earn bonus points, up to a total of 200, by carrying out “good deeds” such as doing charity work, recycling rubbish, or donating blood. A high social credit score opens doors to a better job; to health care, education, travel, and leisure; and even to being matched with the right partner. But citizens can also lose points through acts such as failing to show up at a restaurant without cancelling the reservation, cheating in online games, leaving false product reviews, and jaywalking. Having too few social credit points carries costs, such as being banned from taking flights or boarding trains. In early 2017 the country’s Supreme People’s Court announced at a press conference that 6.15 million Chinese citizens had been banned from taking flights for social misdeeds. For Chinese communists, inspired by Confucian ideals wherein the whole is prioritized over the individual, such top-down interventions are both ethical and legitimate. Social trust must be preserved at any cost, even if a great number of citizens are penalized forever. As President Xi Jinping has stated, rather bluntly, “once untrustworthy, always restricted.”20
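As a purely hypothetical illustration of the point mechanics just described—a 100-point base, a cap on bonus points, rewards for good deeds, and deductions for misdeeds—consider the following sketch. Every deed and point value here is invented; the real system’s rules have never been published in this form.

```python
# Hypothetical sketch of the point mechanics described above: a base of
# 100 points, a total capped here at 200 (one reading of the scheme),
# and listed deeds that add or subtract points. All deeds and values
# are invented for illustration only.
BASE, CAP = 100, 200

DEEDS = {
    "charity_work":        +5,
    "blood_donation":      +8,
    "recycling":           +2,
    "missed_reservation":  -5,
    "false_review":       -10,
    "jaywalking":          -5,
}

def score(history):
    """Tally a deed history into a score, clamped to [0, CAP]."""
    total = BASE + sum(DEEDS.get(deed, 0) for deed in history)
    return max(0, min(CAP, total))

print(score(["charity_work", "blood_donation"]))           # -> 113
print(score(["jaywalking", "false_review", "jaywalking"])) # -> 80
```

The mechanics are trivial; what matters politically is who defines the deed table, and that thresholds on the resulting number gate access to flights, trains, jobs, and schools.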

The social credit system is also regarded by the Chinese government as an alternative to the democratic process of elections. By scraping citizen data from social media feeds and other digital channels and analyzing them with AI algorithms, the communist rulers of China can “listen” to their people and understand what they want, think, and feel. This idea of replacing democracy with computer analysis echoes “Franchise,” a science fiction short story written by Isaac Asimov in 1955.21 In that story a single citizen, selected to represent an entire population, responds to questions generated by a computer named Multivac. The machine takes this data and calculates the results of an election, so that the election itself never needs to happen. Asimov’s story was set in Bloomington, Indiana, but an approximation of Multivac is being built today in China.22

Sharing Power with AIs

The contrast between communist China and the liberal democracies in the debate around algorithmic accountability and data privacy could not be starker. What our liberal values find shocking and abhorrent—such as government systems dictating outcomes for citizens without any power to appeal—the Chinese implement with enthusiastic zeal. Nevertheless, both liberal democracies and Chinese communists aspire to use AI and data to improve human lives, protect the planet, and give more opportunities to future generations. We therefore have a common interest in fully understanding, and wisely deciding, the degree and level of AI automation that we should allow in our societies. So let us imagine an extreme future scenario in which intelligent machines have replaced humans at the helm of government: automating most decision-making processes, crunching massive data sets in a fraction of the time it would take humans, solving the socialist calculation problem, predicting criminal offenses and loan defaults, and controlling every aspect of citizen behavior in general.

Let us also imagine—this time in a liberal democracy context—that we have solved the problems of data bias and algorithmic oppression, and that our algorithms are now ethical: they can explain their reasoning, and an appeals process is in place to protect citizens from algorithmic error. The running of government is now largely outsourced to these ethical intelligent systems—our new agents—which “know” everything better than we do because they can learn everything faster and can be everywhere, at any time, even while we sleep. The machines make the big and small decisions of running a country; they distribute resources according to citizens’ needs, enforce the laws, and defend the realm. Perhaps there is some kind of democratic oversight of this algorithmic government; let’s say there is an elected parliament that regularly checks the performance of the algorithms and has the power to order their removal or improvement. Meanwhile, we citizens can enjoy our lives, all watched over by machines of loving grace,23 without worrying about corrupt politicians, special interest groups, corporations, bankers, or boom-and-bust cycles. The machines are objective, have no emotions, have no vested interests, and will make sure that none of the past shortcomings of politics ever harm us. They will protect and defend liberal values and ensure the protection of citizen and human rights. Their interests will align perfectly with the interests of the many, and the principal-agent problem of representational governance will be all but solved. From a liberal perspective, such a fully automated future may seem ideal. From an economic perspective, the AIs would deliver maximum efficiency. It all sounds great, but would such a future be desirable?

The key to exploring the desirability of a fully automated government lies in the concept of “autonomy”—that is, in how the responsibility for an action or a decision is shared between a human operator and a machine. The degree of autonomy also determines the degree of influence that humans have over the application, further development, and evolution of AI systems. In the futuristic scenario under discussion, the machines are fully autonomous and there is no need for a human to be involved. One level below that, a machine is largely autonomous but a human must intervene in a preselected set of decisions, usually those involving extreme events and circumstances; the human is a “supervisor” of last resort. Let us examine and compare these two extreme scenarios of autonomy, not in politics, but in real cases where the outcome was either life or death.
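Before turning to those cases, here is a minimal schematic, in Python, of the two autonomy levels just described. All names, thresholds, and callbacks are invented for illustration; this is a sketch of the “supervisor of last resort” pattern, not a model of any real control system.

```python
# Schematic of two autonomy levels: FULL autonomy acts unconditionally;
# SUPERVISED autonomy defers to a human for a preselected class of
# extreme decisions. All names and thresholds are hypothetical.
from enum import Enum, auto

class Autonomy(Enum):
    FULL = auto()        # system acts on its own
    SUPERVISED = auto()  # human must confirm extreme actions

def decide(action, severity, mode, ask_human):
    """Return the action to execute, or None if a human vetoed it.

    severity:  0.0-1.0 estimate of how extreme/irreversible the action is.
    ask_human: callback that presents the action and returns True/False.
    """
    EXTREME = 0.8  # hypothetical threshold defining "last resort" cases
    if mode is Autonomy.SUPERVISED and severity >= EXTREME:
        return action if ask_human(action) else None
    return action  # full autonomy, or a routine decision: no human check

# Petrov's situation, schematically: the system recommends retaliation,
# but in SUPERVISED mode the human can refuse to confirm it.
veto = lambda action: False  # the human declines
print(decide("launch retaliation", 1.0, Autonomy.SUPERVISED, veto))  # None
print(decide("launch retaliation", 1.0, Autonomy.FULL, veto))        # acts
```

The two cases that follow show each branch of this logic playing out in reality: one where the human could not override the machine, and one where he could.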

To optimize aerodynamics and fuel efficiency, Boeing designed its new 737 MAX 8 aircraft with a fully autonomous system that took independent action to correct pitch and prevent stalls when the plane climbed too steeply. Unfortunately, this system—the Maneuvering Characteristics Augmentation System (MCAS)—was implicated in two crashes: Lion Air Flight 610 in Indonesia in October 2018 and Ethiopian Airlines Flight 302 in Ethiopia in March 2019, with a combined loss of 346 passengers and crew.24 In both cases the pilots failed to counteract the actions of the autonomous system; they fought it, trying to pull the nose back up, but did not succeed. By making the system fully autonomous, Boeing had virtually cut the pilots out of the loop. The full automation of flight has long been an aspiration for many in the industry. There used to be a saying that in the future airplanes would need only one pilot and a dog: the pilot would feed the dog, and the dog would make sure that the pilot did not fly the plane. MCAS was the dog.

The second case, in which the intelligent system was not fully autonomous but allowed for human supervision, had a happier ending. On September 26, 1983, just a few minutes after midnight, Lt. Col. Stanislav Petrov was sitting in the commander’s chair inside a secret nuclear early-warning bunker outside Moscow when the alarms went off: satellite data indicated that the United States had launched nuclear missiles. The computers were certain that the Soviet Union was under attack and called for immediate retaliation in kind. It is a testament to common sense and moral integrity that Lt. Col. Petrov decided to act against the advice of the computers that night: he judged the alert to be a false alarm and did not pass the warning up the chain of command. Had he done otherwise, we would probably not be alive today. Indeed, if the Soviet Union had developed fully autonomous AI systems for its nuclear arsenal that did not need the supervision of officers like Petrov, mutual nuclear annihilation would probably have happened already. The RAND Corporation examined such cases in detail in its Security 2040 project25 and found that fully autonomous AI would significantly increase the risk of tensions escalating into a full-blown nuclear war.26

Nevertheless, we must not regard Lt. Col. Petrov’s case as conclusive evidence that keeping humans as supervisors of autonomous systems is the better approach to automation. In fact, having a human in the loop can often be catastrophic. The fatalities recorded so far with driverless cars have mostly occurred when a not-yet-fully-autonomous car handed control back to the human driver with too little time for the human to react. It seems that we humans are not good at overriding complex autonomous systems with considered action in time-critical situations. When working with such systems, we develop “automation complacency” and trust the machines too much, until it is too late. Notably, insurers assessing the risk of driverless cars regard this uncertain handover between human supervision and system autonomy as highly risky.

Faced with the dilemma of full versus partial automation, we seem to be caught between Scylla and Charybdis: both options can lead to catastrophic outcomes. So what should we do to capture the opportunity of AI while mitigating the risk? Perhaps, to extricate ourselves from this conundrum, we need to revisit what AI means. Our current idea of AI is one in which humans are set in opposition to algorithms: artificial intelligence systems are like aliens invading our world, and we are forced to find a way to accommodate them and, hopefully, to share power and responsibility with them. So maybe we have gotten this whole thing wrong. Maybe we need to rethink AI not as something we must adopt while reducing the risk of bad outcomes through regulation and safety checks, but as something to be embedded in human systems symbiotically, and only when it serves our personal and collective goals. Such an approach would require a more dynamic relationship between AI and humans, and more decentralized, bottom-up, democratic processes that empower citizens to influence the further development and evolution of AI algorithms. In a later chapter I will propose a theoretical framework that can help us rethink AI in those democratic, human-centric terms. But before delving into its details, let us take a high-altitude view and examine how AI and its accompanying dilemmas are reshaping the world order, as well as what options are available to liberal democracies for absorbing the impact of automation while remaining true to liberal values.