‘Is a democracy, such as we know it, the last improvement possible in government? Is it not possible to take a step further towards recognizing and organizing the rights of man?’
Henry David Thoreau, Resistance to Civil Government (1849)
We turn now to the future of democracy. The digital lifeworld will offer some interesting opportunities for those who want their system of government to combine the values of liberty, equality, human flourishing, epistemic superiority, stability, and protection from tyranny. But it will also present some challenges to democracy as we have traditionally understood it. This chapter is structured around five distinct conceptions of self-rule, some old and some new: Deliberative Democracy, Direct Democracy, Wiki Democracy, Data Democracy, and AI Democracy.
We begin with Deliberative Democracy, an ancient but increasingly fragile form of self-government.
Deliberation is the process by which members of a community rationally discuss political issues in order to find solutions that can be accepted by all (or most) reasonable people. In an ideal process of deliberation, everyone has the same opportunity to participate on equal terms, and anyone can question the topic or the way it is being discussed.1 Political debate has always been messy, but deliberation is seen as an important part of the process because it pools knowledge and information, encourages mutual respect, allows people to change their views, exposes who is pursuing their own interest, and increases the hope of consensus rather than simply totting up the ayes and noes. Supporters of deliberation say it is the only grown-up way of accommodating reasonable moral disagreements.2 ‘Deliberative Democrats’ go further, arguing that deliberation is not just one part of democracy but an essential part of it: only decisions taken with the benefit of genuine public deliberation can claim the mantle of democratic legitimacy. The classical Athenians would have agreed.
The arrival of the internet prompted a great deal of optimism about the future of Deliberative Democracy. Cyberspace would become a vibrant forum for political debate. Great multitudes of dispersed individuals, not just a few mass media outlets, would create and exchange reliable political information. Rather than passively absorbing information, citizens would participate in discussion, debate, and deliberation.3 Unfortunately, it hasn’t really turned out that way. Though there are more opportunities than ever for ordinary citizens to have their say, the result hasn’t been an increase in the quality of deliberation or political discourse more generally. On the contrary, politics feels as divisive and ill-informed as it did in the past, possibly even more so.4 Without a change in course, there is a risk that the quality of our deliberation could wither still further in the digital lifeworld. This is the result of four threats: perception-control, fragmented reality, online anonymity, and the growing threat posed by bots.
We’ve already seen that in the future, how we perceive the world will be increasingly determined by what is revealed or concealed by digital systems. These systems—news and search services, communication channels, affective computing, and AR platforms—will determine what we know, what we feel, what we want, and what we do. In turn, those who own and operate these systems will have the power to shape our political preferences. The first threat to Deliberative Democracy, therefore, is that our very perceptions are increasingly susceptible to control, sometimes by the very institutions we would seek to hold to account. It’s hard to contribute rationally when your political thoughts and feelings are structured and shaped for you by someone else.
The second threat comes from the disintegration and polarization of public discourse.5 People tend to talk to those they like and read news that confirms their beliefs, while filtering out information and people they find disagreeable.6 Technology increasingly allows them to do so. If you are a liberal who uses Twitter to follow races for the US House of Representatives, 90 per cent of the tweets you see (on average) will come from Democrats; if you are a conservative, 90 per cent of the tweets you see will typically come from Republicans.7 In the early days of the internet it was predicted that we would personally customize our own information environment, choosing what we would read on the basis of its political content. Increasingly, however, the work of filtering is done for us by automated systems that choose what is worthy of being reported or documented, and decide how much context and detail is necessary. Problematically, this means that the world I see every day may be profoundly different from the one you see.
‘You are entitled to your own opinion,’ said the US Senator and ambassador to the United Nations Daniel Patrick Moynihan, ‘but you are not entitled to your own facts.’8 In the digital lifeworld, the risk is that rival factions will claim not only their own opinions but their own facts too. This is already becoming a problem. When deliberation takes place over digital networks, truth and lie can be hard to distinguish. As Barack Obama put it, ‘An explanation of climate change from a Nobel Prize-winning physicist looks exactly the same on your Facebook page as the denial of climate change by somebody on the Koch brothers’ payroll . . . everything is true and nothing is true’:9
Ideally, in a democracy, everybody would agree that climate change is the consequence of man-made behavior, because that’s what ninety-nine per cent of scientists tell us . . . And then we would have a debate about how to fix it . . . you’d argue about means, but there was a baseline of facts that we could all work off of. And now we just don’t have that.
The term fake news was initially used to describe falsehoods that were propounded and given wide circulation on the internet. Now even the term fake news itself has been drained of meaning, used as a way to describe anything the speaker disagrees with. Although some social media platforms have taken steps to counter it, the nature of online communication (as currently engineered) is conducive to the rapid spread of misinformation. The result is so-called post-truth politics. Think about this for a moment: in the final three months of the 2016 US presidential campaign, the top twenty fake news stories on Facebook generated more shares, reactions, and comments than the top twenty stories from the major news outlets combined (including the New York Times, Washington Post, and Huffington Post).10 A poll in December 2016 found that 75 per cent of people who saw fake news headlines believed them to be true.11
Two further factors exacerbate the problem of post-truth politics. The first is that, in addition to filtering, individualized political messaging from political élites will mean that the information you receive from a given candidate or party will not be the same as the information I receive. Each will be tailored to what we most want to hear.12 Second, our innate tendency toward group polarization means that members of a group who share the same views tend, over time, to become more extreme in those views. As Cass Sunstein puts it, ‘it is precisely the people most likely to filter out opposing views who most need to hear them.’13 I refer to the dual phenomenon of polarization and post-truth politics as fragmented reality. (It’s related to the idea of fragmented morality discussed in chapter eleven.)
If the digital lifeworld falls victim to fragmented reality we’ll have fewer and fewer common terms of reference and shared experiences. If that happens, rational deliberation will become increasingly difficult. How can we agree on anything when the information environment encourages us to disagree on everything? ‘I am a great believer in the people,’ Abraham Lincoln is supposed to have said. ‘If given the truth, they can be depended upon to meet any national crisis. The great point is to bring them the real facts.’
Who will bring us the real facts?
The cause of deliberation is not helped by a third threat, that many online platforms allow us to participate anonymously or pseudonymously. This encourages us to behave in a way that we wouldn’t dream of in face-to-face interactions. The sense that our actions can’t be attributed to us, that nobody knows what we look like, that these are not real people, and that this is not the real world, combine to cause many of us to behave atrociously.14 It’s like slipping on J. R. R. Tolkien’s One Ring (or if you’re being posh, Plato’s Ring of Gyges)15 that makes us invisible and free to do as we please. The fact that it’s often technically possible for authorities to discover who we are doesn’t help the quality of deliberation.
The Athenians would have scoffed at the idea that deliberation could take place without revealing who you were. In the Assembly you couldn’t possibly conceal your identity, allegiance, or interests. That would defy the whole point. In the digital lifeworld, though, more and more discourse will take place in a digital medium—with no meaningful distinction between online and offline. This raises an important question: should deliberation be seen as a private act, done anonymously by individuals in pursuit of their own self-interest; or should it be treated as something public, done in the open by members of a community in pursuit of the common good? If the latter, then we need to code our digital platforms in a way that reflects that ideal. Some work is already being done on this, discussed in a few pages’ time.
If you want people to hate your enemies, one strategy is to masquerade as those enemies and say repulsive things. On Twitter, so-called ‘minstrel accounts’ do this by impersonating minority groups and spewing out stereotypes and invective. To counter them, accounts like @ImposterBuster track down minstrel accounts and reveal them as the frauds they are. Impersonating the enemy is a venerable political technique. Forgeries like the Protocols of the Elders of Zion have circulated for centuries as a means of rousing hatred against Jews. But there’s one critical difference today: neither the minstrel accounts on Twitter nor @ImposterBuster is human.16 Both are AI bots that have ‘learned’ to mimic human speech.
Bots have only just begun to colonize online discourse, but they are swiftly rising in significance. One 2017 study estimates that as many as 48 million Twitter accounts (9 to 15 per cent of the total) are bots.17 In the 2016 US presidential election, pro-Trump bots using hashtags like #LockHerUp flooded social media, outgunning pro-Clinton bots by five to one and spreading a whopping dose of fake news. It’s estimated that around one-third of all traffic on Twitter in the buildup to the EU Brexit referendum came from bots. Almost all were for the Leave side.18 Not all bots are bad for deliberation: so-called HoneyPot bots distract human ‘trolls’ by using provocative messages to lure them into endless futile online debate.19 But by and large, bots’ impact so far has not been benign.
Can Deliberative Democracy survive in a system where deliberation itself is no longer the preserve of human beings? It’s possible that human voices could be crowded out of the public sphere altogether by bots that care little for our conversational norms. In the future, these won’t just be disembodied lines of code: they could look and sound like humans, endowed with faces and voices and extraordinary rhetorical gifts. How can we, with our feeble brains and limited knowledge, participate meaningfully in deliberations if our views are instantaneously ripped to shreds by armies of bots armed with a million smart-ass retorts? Advocates of bots might put it differently: why spend time deliberating when increasingly sophisticated bots can debate the issues faster and more effectively on our behalf?
The bots that are used to fix errors on Wikipedia may offer a small indicator of what the world might look like if bots were constantly arguing among themselves. Behind the scenes, it turns out that many of these simple software systems have been locked in ferocious battle for years, undoing each other’s edits and editing each other’s hyperlinks. Between 2009 and 2010, for instance, a bot called Xqbot undid more than 2,000 edits made by another called Darknessbot. Darknessbot retaliated by undoing more than 1,700 of Xqbot’s own changes.20
In due course we could see the automation of deliberation itself. It’s not an especially appetizing prospect.
The prospects look a little bleak for Deliberative Democracy, and not just for its claim to epistemic superiority. Think of the argument from liberty: can we really claim to govern ourselves as free citizens if the laws we choose are based on lies and fabrications? Or the argument from equality: how can we have an equal chance of influencing the decisions that affect our lives if the deliberative process is at the mercy of whoever has the most sophisticated army of bots? Would Aristotle and John Stuart Mill really think we were ennobling or improving ourselves by fighting like animals over the truth of the most basic facts? Would a regime elected on the basis of fake news really be stable if its survival depended on the masses never discovering the truth?
Importantly, the problems described in this section don’t need to be part of the digital lifeworld. We can find technical solutions. Social network proprietors are slowly taking steps to regulate their discussion spaces. Software engineers like those at loomio.org are trying to create ideal deliberation platforms using code. The Taiwanese vTaiwan platform has enabled consensus to be reached on several matters of public policy, including online alcohol sales policy, ridesharing regulations, and laws concerning the sharing economy and Airbnb.21 Digital fact-checking and troll-spotting are rising in prominence22 and the process of automating this work has begun, albeit imperfectly.23 These endeavours are important. The survival of deliberation in the digital lifeworld will depend in large part on whether they succeed. What’s clear is that a marketplace of ideas, attractive though the idea sounds, may not be what’s best. If content is framed and prioritized according to how many clicks it receives (and how much advertising revenue flows as a result) then truth will often be the casualty. If the debate chamber is dominated by whoever has the power to filter, or unleashes the most ferocious army of bots, then the conversation will be skewed in favour of those with the better technology, not necessarily the better ideas. Deliberative Democracy needs a forum for civil discussion, not a marketplace of screaming merchants.
That said, we shouldn’t think that the challenges facing Deliberative Democracy are purely technical in nature. They raise philosophical problems too. One is the problem of extreme speech. In most democratic societies it’s accepted that some limits on free speech are necessary when speech poses an unacceptable threat to other freedoms or values. Words of violence that have nothing to do with politics, incitement to criminality, and threats fall into this category. But there’s no consensus on where the line should be drawn. The First Amendment to the US Constitution, for instance, grants an unusual amount of protection to speech that would be illegal elsewhere. To deny the Holocaust is a crime in Austria, France, and Germany, but lawful in the US. In Europe there are strict laws prohibiting hate speech against racial, religious, and ethnic groups, while in the US neo-Nazis can happily wave swastikas in a town of Holocaust survivors. In Britain you can be arrested for recklessly publishing a statement that’s likely to be understood by some of the public as an indirect encouragement to commit an act of terror. In the US, the same speech would have to be directed to inciting or producing imminent lawless action and likely to incite or produce such action.24
In the digital lifeworld, as we’ve seen, those who control digital platforms will increasingly police the speech of others. At present, tech firms are growing bolder about restricting obviously hateful speech. Few among us will have shed a tear, for instance, when Apple removed from its platform several apps that claimed to help ‘cure’ gay men of their sexuality.25 Nor when several content intermediaries withdrew their services from right-wing hate groups after white supremacist demonstrations in Charlottesville in mid-2017. (The delivery network Cloudflare terminated the account of the neo-Nazi Daily Stormer.26 The music streaming service Spotify stopped providing music from ‘hate bands’.27 The gaming chat app Discord shut down accounts associated with the Charlottesville fracas. Facebook banned a number of far-right groups with names like ‘Red Winged Knight’, ‘White Nationalists United’, ‘Right Wing Death Squad’, and ‘Vanguard America’.)28
But what about when Facebook removed the page belonging to the mayor of a large Kurdish city, despite it having been ‘liked’ by more than four hundred thousand people? According to Zeynep Tufekci, Facebook took this action because it was unable to distinguish ‘ordinary content that was merely about Kurds and their culture’ from propaganda issued by the PKK, a group designated as a terrorist organization by the US State Department.29 In Tufekci’s words, it ‘was like banning any Irish page featuring a shamrock or a leprechaun as an Irish Republican Army page’.30
My purpose is not to critique these individual decisions, of which literally millions are made every year, many by automated systems. The bigger point is that the power to decide what is considered so annoying, disgusting, scary, hurtful, or offensive that it should not be uttered at all has a significant bearing on the overall quality of our deliberation. It’s not clear why so-called ‘community guidelines’ would be the best way to manage this at a systemic level: the ultimate ‘community’ affected is the political community as a whole. To pretend that these platforms are like private debating clubs is naïve: they’re the new agorae and their consequences affect us all.
So as a political community—whether we personally use a particular platform or not—we need to be vigilant about the way that speech is policed by digital intermediaries. That does mean accepting that some restrictions on speech (wise restraints, if you will) will be necessary for deliberation to survive in the digital lifeworld. The idea of unfettered freedom of speech on digital platforms is surely a non-starter. Some forms of extreme speech should not be tolerated. Even in the nineteenth century John Stuart Mill accepted that certain restrictions were necessary. In his example, it’s acceptable to tell a newspaper that ‘corn-dealers are starvers of the poor’ but not acceptable to bellow the same words ‘to an excited mob assembled before the house of a corn-dealer’.31 Nor should we be squeamish about rules that focus on the form of speech as opposed to its content. Just as it’s not too burdensome to refrain from screaming in a residential area at midnight, we can also surely accept that online discourse should be conducted according to rules that clearly and fairly define who can speak, when, for how long, and so forth. In the digital lifeworld this will be more important than ever: Mill’s ‘excited mob’ is much easier to convene, whether physically or digitally, using the technologies we have at our disposal.
Another difficult issue of principle relates to fragmented reality. It would be easy to blame post-truth politics on digital technology alone. But the truth (!) is that humans have a long and rich history of using deceit for political purposes.32 Richard Hofstadter’s 1963 description of the ‘paranoid style’ in public life—‘heated exaggeration, suspiciousness, and conspiratorial fantasy’—could have been meant to describe today.33 So could Hannah Arendt’s observation in ‘Truth and Politics’ (1967): ‘No one has ever doubted that truth and politics are on rather bad terms with each other.’34 So too could George Orwell’s complaint, in his diary of 1942, that:35
We are all drowning in filth. When I talk to anyone or read the writings of anyone who has any axe to grind, I feel that intellectual honesty and balanced judgment have simply disappeared from the face of the earth . . . everyone is simply putting a ‘case’ with deliberate suppression of his opponent’s point of view, and, what is more, with complete insensitiveness to any sufferings except those of himself and his friends.
Today’s problems have developed because of technology, no doubt, but also in part because of a political and intellectual climate that is itself hostile to the idea of objective truth. Influential postmodernist and constructivist thinkers have long argued that the notion of truth is nonsense. Beliefs are just plain beliefs; no more, no less. What we think is true is what we agree with, or what ‘society’ tells us to believe, or the product of language games without any basis in objective reality.36 Or if there is an objective reality out there, it’s so vaporous and elusive that there’s no point in trying to catch it. Foucault even argued that truth itself was an instrument of repression:37
we must speak the truth; we are constrained or condemned to confess or to discover the truth. Power never ceases its interrogation, its inquisition, its registration of truth: it institutionalises, professionalises and rewards its pursuit.
This isn’t the only school of thought in the academy, but it has a vocal following.
I suggest that if you introduce technologies capable of rapidly disseminating falsehoods into a political ecosystem in which truth is not seen as a paramount political virtue—or is even seen in some quarters as a vice—then there are going to be grave consequences for the quality of public debate. We need to confront some hard questions about the relationship between democracy and truth.
First, is democracy’s purpose to pool our collective wisdom so that we might find our way toward a discoverable ‘truth’ (an instrumentalist perspective)? Or for practical purposes, is what is ‘true’ and ‘false’, ‘right’ or ‘wrong’, simply what the multitude decides at any given time? This question has troubled many people since the election of Donald Trump and the Brexit referendum in the United Kingdom. The people have spoken—but could they be wrong? A related question is whether democracy should be seen as the process by which the community determines—on the basis of agreed underlying facts—what to do, or whether it should be seen as the process by which the community determines what the underlying facts themselves are? The writer Don Tapscott suggests that Holocaust denial could be countered by means of ‘algorithms that show consensus regarding the truth’.38 But what if an ignorant majority at a certain time didn’t believe that the Holocaust happened? Would that mean that, for political purposes, it didn’t happen? Surely not.
These are (sigh) questions on which people can disagree, but I don’t accept that they are questions of pure theory, of interest only to philosophers—and they certainly can’t be left to tech firms to answer. I know which side I favour. As Matthew D’Ancona puts it, truth is a ‘social necessity . . . a gradual and hard-won achievement’ that acts as a ‘binding force’ not just in politics, but in science, law, and commerce.39 It’s one thing to say that the politician whose ‘truth’ is accepted by the public is likely to be the winner of a democratic election. It’s another to say that such a victory was legitimate or desirable—if it turns out that that so-called ‘truth’ was not true at all.
In a Direct Democracy, the people vote directly on the issues rather than electing politicians to decide for them. A show of hands, a heap of ballots, a roar of acclamation: it’s democracy at its purest.
And yet for most of history it has been a fiction.
As we have seen, the vast majority of democracies have been indirect or representative, operating within a competitive elitist framework. There have been many reasons for this, but foremost among them is the problem of practicality: ‘One can hardly imagine,’ says Rousseau, ‘that all the people would sit permanently in an assembly to deal with public affairs.’40 In the digital lifeworld, however, the people will not need to ‘sit permanently’ in order to sustain Direct Democracy. It will be technically feasible for citizens to do as much real-time voting as they like on a wide range of issues.
It’s not hard to conceive of a daily notification on your smartphone (or whatever replaces it) listing the issues up for decision each week—whether a new building development should go ahead, whether a new school curriculum should be adopted, whether we should commit more troops to a conflict—accompanied by a brief AI-generated introduction to the issues with punchy summaries of the arguments for and against. You could cast your vote from your bed or on the train. Voting apps for private use already exist41 and although internet voting is not yet secure or transparent enough for general use, it’s reasonable to expect that it might be in the future. Some think that the eventual solution might come from Blockchain technology, with its promise of unhackable encryption.42
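To see why Blockchain technology appeals to advocates of internet voting, consider what ‘chaining’ actually does. The toy sketch below (a deliberately simplified illustration with hypothetical data, not a real voting system or any existing platform’s design) records each ballot together with a hash of the previous entry, so that tampering with any recorded ballot invalidates every subsequent link:

```python
import hashlib
import json

def ballot_hash(ballot: dict, prev_hash: str) -> str:
    # Hash the ballot together with the previous entry's hash, so that
    # altering any earlier ballot changes every hash that follows it.
    payload = json.dumps(ballot, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_ballot(ledger: list, ballot: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    ledger.append({"ballot": ballot, "hash": ballot_hash(ballot, prev)})

def verify(ledger: list) -> bool:
    # Recompute every link; any mismatch reveals tampering.
    prev = "genesis"
    for entry in ledger:
        if entry["hash"] != ballot_hash(entry["ballot"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append_ballot(ledger, {"voter": "A", "issue": "school-curriculum", "vote": "yes"})
append_ballot(ledger, {"voter": "B", "issue": "school-curriculum", "vote": "no"})
assert verify(ledger)

# Quietly altering an earlier ballot breaks the chain.
ledger[0]["ballot"]["vote"] = "no"
assert not verify(ledger)
```

A real voting system would also need voter authentication, ballot secrecy, and distributed consensus, none of which this sketch attempts; the point is only that a hash chain makes after-the-fact tampering detectable.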
So Direct Democracy may become practicable, but is it desirable?
The arguments in favour are well known. Direct Democracy would allow genuine self-government on an equal basis. Everyone could perform meaningful public service and thereby enhance their moral faculties. It would bring the unmediated wisdom of the crowd to a vastly greater array of public policy decisions. And because the people would buy into political decisions, knowing them to be truly their own, the system would be stable and secure. Perhaps best of all, Direct Democracy would mean no more need for politicians. Marx would have been overjoyed. He once wrote that electoral democracy was no more than deciding once every three or six years ‘which member of the ruling class was to misrepresent the people in Parliament’.43 Good riddance to them!
And yet.
In our heart of hearts, do we really trust ourselves to make unfiltered decisions on complex matters of public policy? Is that a burden we wish to bear? We all have our areas of interest, to be sure, but is my deep and rich knowledge of Monty Python really going to be useful in deciding between different schemes of environmental regulation?
It’s not just that most of us have a limited knowledge of public policy, or even that we sometimes vote irrationally. On one level, it’s actually irrational to vote at all: since elections are usually decided by many thousands or millions of votes, each of us only has a tiny say in the eventual outcome. Why bother? This has been called the ‘problem of rational ignorance’ and it’s a big challenge for Direct Democracy.44 In practical terms, it means that people may not actually end up participating. Is an exhausted working parent really going to set aside precious minutes in the day to consider the merits of a new regulation on financial derivatives? Should he or she have to?
We should also reflect carefully before consigning politicians to the scrapheap. Perhaps there is something to be said for having a class of professional politicians to do the day-to-day work of governing for us, sparing us the hassle and worry. Sunstein argues that democracy in America was never based on the idea that Direct Democracy was desirable but merely infeasible. On the contrary, for the founding fathers, ‘good democratic order’ involved ‘informed and reflective decisions, not simply snapshots of individual opinions’.45 That’s why James Madison favoured ‘the total exclusion of the people in their collective capacity from any share’ in the government.46
A possible middle way lies in a system of partial Direct Democracy. We need not ask all the people to vote on all the issues all the time. Citizens could whittle down the issues on which they wish to vote, by geography (I want to vote on issues affecting London, where I live), expertise (I want to vote on issues relating to the energy industry, which I know a lot about), or interests (I want to vote on agricultural issues which affect my livelihood as a farmer). (I know that few farmers live in London but you get the point.) Such a system would radically fragment the work of national government, but in pursuit of the broader ideal of a more authentic democracy. On the other hand, the risk would remain that particular areas of public policy could be hijacked by local or specialized interest groups. And as ever, the apathetic or disconnected might be left behind.
A more unorthodox system would be one in which we could delegate our votes on certain issues, not just to politicians but to anyone we liked. Instead of abstaining on issues we don’t know or care about, we could give our proxy vote to people who do know or care about them. On matters of national security, for instance, I might want a serving army officer to vote on my behalf; on questions of urban planning, I might want a celebrity architect to cast my vote; on healthcare I might delegate my say to a consortium of nurses, doctors, and patient groups. A digital platform for this model of democracy has already been pioneered by the creators of DemocracyOS47 and used by various political parties in Europe. It’s called liquid democracy.48 The idea has a long pedigree. Back in the nineteenth century, John Stuart Mill observed in ‘Thoughts on Parliamentary Reform’ (1859) that there is no one who, ‘in any matter which concerns himself, would not rather have his affairs managed by a person of greater knowledge and intelligence, than by one of less’.49 A suitably constituted system of liquid democracy in the digital lifeworld could balance the need for legitimacy, stability, and expertise.
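The mechanics of liquid democracy can be sketched in a few lines of Python (a toy model with hypothetical names, not the DemocracyOS implementation): each citizen either votes directly on an issue or names a proxy, and chains of delegation are followed until they reach a direct vote or collapse into a cycle, in which case the vote simply lapses.

```python
from collections import Counter

def resolve_vote(citizen, direct_votes, delegations):
    # Follow the delegation chain until it reaches someone who voted
    # directly, or until it loops or dead-ends (in which case the vote lapses).
    seen = set()
    while citizen not in direct_votes:
        if citizen in seen or citizen not in delegations:
            return None
        seen.add(citizen)
        citizen = delegations[citizen]
    return direct_votes[citizen]

def tally(citizens, direct_votes, delegations):
    votes = (resolve_vote(c, direct_votes, delegations) for c in citizens)
    return Counter(v for v in votes if v is not None)

# On a healthcare question, two citizens vote directly and three delegate:
# alice trusts the doctor, bob trusts alice, carol trusts the sceptic.
direct_votes = {"doctor": "yes", "sceptic": "no"}
delegations = {"alice": "doctor", "bob": "alice", "carol": "sceptic"}
citizens = ["doctor", "sceptic", "alice", "bob", "carol"]
print(tally(citizens, direct_votes, delegations))  # Counter({'yes': 3, 'no': 2})
```

Note that bob’s vote passes through alice to the doctor: delegation is transitive, which is precisely what distinguishes liquid democracy from a one-step proxy system.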
Direct Democracy would mark a radical break from Schumpeterian competitive elitism—by eliminating or radically reducing the need for elected politicians. Another model would achieve the same result by involving the populace in the work of drafting legislation: Wiki Democracy.
Imagine that instead of sending delegates to constitutional conventions, the entire eighteenth-century population of the United States had tried to write the Constitution together. How would they have done it? Perhaps they would have gathered in a massive stretch of countryside somewhere. The noise would have been cacophonous. Even the finest orators would have been drowned out by the din. Few attendees would have known what was going on at any given time. There would probably have been a festival atmosphere, with much taking of drink, revelry, occasional stampedes, and perhaps some copulation and outbursts of brawling. Rival drafts of the document would have circulated at the same time. No doubt some would have been defaced, torn up, and lost as every citizen tried to have his or her say. Chaos.
Until fairly recently it was not feasible for a large group of strangers to collaborate efficiently or meaningfully in the production of content, let alone to draft a precise and sensitive set of rules to govern their collective life. This has now changed. The internet has given rise to a new way of producing content in which individuals who have never met can cooperate to produce material of great sophistication. Although there are fewer successful examples than some predicted, the most famous is Wikipedia, the online encyclopaedia whose content is written and reviewed by anyone who wishes to contribute. The other often-cited exemplar is open-source (or ‘free’) software, including the operating system Linux that runs on tablets, televisions, smartphones, servers, and supercomputers around the world. The code is curated by nearly 12,000 contributors, each working on the premise that any technical problem—no matter how difficult—can be solved if enough people are working on it. Where it is undertaken without top-down control, this kind of activity has been called commons-based peer production or open-source production.50 Where there’s more central direction and control, it tends to be called crowdsourcing.
In the digital lifeworld it will be possible, using commons-based peer production or crowdsourcing, to invite the citizenry directly to help set the political agenda, devise policies, and draft and refine legislation. Advocates of this sort of democracy, or variants of it, have called it wiki-government, collaborative democracy, and crowdocracy.51 I refer to it as Wiki Democracy.
Small experiments in Wiki Democracy have already been tried with some success. As long ago as 2007, New Zealand gave citizens the chance to participate in writing the new Policing Act using a wiki.52 In Brazil, about a third of the final text of the Youth Statute Bill was crowdsourced from young Brazilians and the Internet Civil Rights Bill received hundreds of contributions on the e-Democracia Wikilegis platform.53 These were carefully planned exercises within closely confined parameters. There is potential for more as digital platforms grow increasingly sophisticated.
Like Direct Democracy, Wiki Democracy would reduce the role of elected representatives. And in a Wiki Democracy we would not merely be asked to say yes or no to a set of pre-ordained questions decided by someone else; instead we would have the chance to shape the agenda ourselves in a richer and more meaningful way. Wiki Democracy also enjoys some of the same epistemic advantages of Direct Democracy, in that it would draw on the wisdom of crowds generally and (where appropriate) experts in particular.
In a full-blown Wiki Democracy, as in a Direct Democracy, there would have to be flexibility in how and to what extent individuals contributed. The policymaking process could be broken down into various parts (diagnosis, framing, data-collection, drafting and refining legislation, and so forth) and each part could be guided by the groups and individuals most willing or best placed to contribute.54 In a world of code-ified law (see chapter six) the code/law could theoretically be reprogrammable by the public at large, or by persons or AI systems delegated to undertake the task for them.
However, the idea of full-blown Wiki Democracy is beset with difficulties. More than any other model of democracy, it places serious demands of time and attention on its participants. Not everyone would feel comfortable editing a law. Even fewer would feel comfortable tinkering with code. The result could be a rise in apathy and a decline in legitimacy, as Wiki Democracy slid into a Wiki Aristocracy of the learned and leisured classes.
It could also give rise to delay and gridlock without an obvious mechanism for taking and sticking to decisions. Unlike a Direct Democracy vote, which is inherently decisive, a collaborative process has no clear end-point to its growth and evolution. Linux and Wikipedia are constantly changing. ‘Discourses,’ as Jürgen Habermas put it in the twentieth century, ‘do not govern.’55 The same might be said of wikis—at least if they are left open-ended.
It’s also unclear how well a wiki could function in circumstances where the basic aims of collaboration were themselves contested. At least on Wikipedia the overall goal is clear: to produce verifiable encyclopaedic content. It’s apparent when contributors are trying to advance that goal and when troublemakers are seeking to undermine it. But when it comes to laws, there will always be reasonable disagreement on the goal. How do I contribute to a new wiki law legalizing drugs if I don’t believe that drugs should be legalized at all? By deleting the whole statute?
That a wiki can be refined and adapted over time, much like the common law, is desirable. But the common law moves at a stately pace while a wiki may change thousands of times each second. Jaron Lanier rightly invites us to imagine ‘the jittery shifts’ of wiki law: ‘It’s a terrifying thing to consider. Superenergized people would be struggling to shift the wording of the tax code on a frantic, never-ending basis.’56
The practical problems with Wiki Democracy seem overwhelming. But they are only fatal if we try to defend a model of pure Wiki Democracy without any checks or balances. To do that would be nonsensical. With a proper constitution (perhaps not one that can be altered by anyone at the click of a button) a Wiki Democracy could be built on the basis of clear rules about which laws may be edited, when and by whom, what they may or may not contain, and so forth. This would not be the first time that humans have had to consider how the chaos of unbridled democracy can be harnessed into something stable and useful. Fluctuating and unstable majorities, the tyranny of a dominant class, uncertainty of the law: these were exactly the sorts of problems that animated John Locke in the seventeenth century and those in the liberal democratic tradition. The fact that Wiki Democracy would need brakes, controls, checks, and balances, does not make it illegitimate or impossible.
We have seen that one of the main purposes of democracy is to unleash the information and knowledge contained in people’s minds and put it to political use. But if you think about it, elections and referendums do not yield a particularly rich trove of information. A vote on a small number of questions—usually which party or candidate to support—produces only a small number of data points. Put in the context of an increasingly quantified society, the amount of information generated by the democratic process—even when private polling is taken into account—is laughably small. Recall that by 2020 there will be 40 zettabytes of data in the world—the equivalent of about 3 million books for every living person. It’s expected that we’ll generate the same amount of information every couple of hours as we did from the dawn of civilization until 2003.57 This data will provide a log of human life that would have been unimaginable to our predecessors. This prompts the question: if by 2020 there will be about 3 million books’ worth of information for every living person, why would we govern on the basis of a tick in a box every few years? A new and better system of synthesizing information in society must be possible. Drawing on the work of Hiroki Azuma and Yuval Noah Harari,58 we can call such a system Data Democracy.
In a Data Democracy, ultimate political power would rest with the people but some political decisions would be taken on the basis of data rather than votes. By gathering together and synthesizing large amounts of the available data—giving equal consideration to everyone’s interests, preferences, and values—we could create the sharpest and fullest possible portrait of the common good. Under this model, policy would be based on an incomparably rich and accurate picture of our lives: what we do, what we need, what we think, what we say, how we feel. The data would be fresh and updated in real time rather than in a four- or five-year cycle. It would, in theory, ensure a greater measure of political equality—as it would be drawn from everyone equally, not just those who tend to get involved in the political process. And data, the argument runs, doesn’t lie: it shows us as we are, not as we think we are. It circumvents our cognitive biases. We are biased, for example, toward arguments that favour our own special interests. We tend to dismiss things that are inconsistent with our worldview. We view the world through frames drawn by élites. We dislike being inconsistent, even when changing our minds would be the rational thing to do. We are overly influenced by others, especially those in authority. We like to conform and be liked by others. We prefer our intuitions to reason. We favour the status quo.59
Machine learning systems are increasingly able to infer our views from what we do and say, and the technology already exists to analyse public opinion by processing mass sentiment on social media.60 Digital systems can also predict our individual views with increasing accuracy. Facebook’s algorithm, for instance, needs only ten ‘Likes’ before it can predict your opinions better than your colleagues, 150 before it can beat your family members, and 300 before it can predict your opinion better than your spouse.61 And that’s on the basis of a tiny amount of data compared to the amount that will be available in the digital lifeworld.
In short, the argument in favour of Data Democracy is that it would be a really representative system—more representative than any other model of democracy in human history.
It’s true that governments already use data in order to make policy decisions.62 The rise of ‘civic data’ is a welcome development. A Data Democrat, however, would say that there is a difference between using data sporadically as a matter of discretion and using it all the time as a matter of moral necessity. We wouldn’t be happy with an electoral democracy where elections were held on an ad hoc basis when ruling élites felt like it. By the same token, if democracy is about taking into account people’s preferences, then using data is something that must happen, and not merely a sign of good governance. A government that ignores data, on this argument, is as bad as one that ignores how the people vote. The more data in government, the more ‘democratic’ the system.
Pausing for a moment, there are some clear problems with a pure model of Data Democracy. On a practical level, the system would depend on data of a decent quality, uncorrupted by malfeasance or bot-interference. This isn’t something that can be guaranteed.
On a more philosophical plane, we know that democracy is not just about epistemic superiority. Those who see democracy as being based on liberty would argue that Data Democracy reduces the important role of human will in the democratic process. A vote is not just a data point: it is also an important act of consent on the part of the voter. By consciously participating in the democratic process, we agree to abide by the rules of the regime that emerges from it, even if we occasionally disagree with those rules. A Data Democrat might respond that human will could be incorporated into a system of Data Democracy—perhaps through the conscious act of agreeing (or refusing) to submit certain data to the process. A more strident retort might be that if Data Democracy produced dramatically better outcomes than electoral democracy then it would have its own legitimacy by virtue of those outcomes.
Another argument against Data Democracy is that by making our entire lives an act of subconscious political participation, the system deprives us of the benefits of conscious political participation. Which ‘noble actions’, to return to Aristotle, does it allow us to perform? In what way does it help us to flourish as humans? Democracy is about more than the competent administration of collective affairs.
The most powerful argument against Data Democracy is that data is useless in making the kind of political decisions that are often at stake in elections. A democratic system needs to be able to resolve issues of reasonable moral disagreement. Some of these are about scarce resources: should more be spent on education or on healthcare? Others concern ethics: should the infirm be allowed the right to die? It’s hard to see how even the most advanced systems—even those that can predict our future behaviour—could help us answer these questions. Data shows us what is, but it doesn’t show what ought to be. In a country where the consumption of alcohol is strictly prohibited, data revealing a low rate of alcohol consumption reflects only the fact that people obey the law, not that the law itself is right.
This difficulty would not have troubled Auguste Comte, who believed that all human behaviour was pre-determined by ‘a law which is as necessary as that of gravity’.63 But most of us would not see prediction as a substitute for moral reasoning. A system of Data Democracy would therefore need to be overlaid with some kind of overarching moral framework, perhaps itself the subject of democratic choice or deliberation. Or to put it more simply, Data Democracy might be more useful at the level of policy rather than principle.
Data Democracy is a flawed and challenging idea but democratic theorists cannot sensibly ignore it. The minimal argument in favour is that by incorporating elements of it we could dramatically improve the democratic processes we already have. The stronger version of the argument holds that Data Democracy could ultimately provide a more desirable political system than electoral democracy. The question is: which aspects are worthwhile and which are not?
What role will artificial intelligence come to play in governing human affairs? What role should it play? These questions have been floating around since the earliest computing machines. In the twentieth century, reactions to the first question tended to involve dark premonitions of humankind languishing under the boot of its robotic overlords. Reflection on the second question has been somewhat limited and deserves more careful thought.
We know that there are already hundreds, if not thousands, of tasks and activities formerly done only by humans that can now be done by AI systems, often better and on a much greater scale. These systems can now beat the most expert humans in almost every game. We have good reason to expect not only that these systems will grow more powerful, but that their rate of development will accelerate over time.
Increasingly, we entrust AI systems with tasks of the utmost significance and sensitivity. On our behalf they trade stocks and shares worth billions of dollars, report the news, and diagnose our fatal diseases. In the near future they will drive our cars for us, and we will trust them to get us there safely. We are already comfortable with AI systems taking our lives and livelihoods in their (metaphorical) hands. As they become explosively more capable, our comfort will be increasingly justified.
Aside from tech, in recent decades we have also become more interested in the idea that some political matters might be best handled by experts, rather than dragged through the ideological maelstrom of party politics. ‘Experts’ are sometimes derided and frequently ignored, but the increased prominence of central bankers, independent commissions, and (in some places) ‘technocratic’ politicians is testament to the fact that we don’t always mind them taking difficult, sober, long-term decisions on our behalf. Since Plato, in fact, countless political theorists have argued that rule by benevolent guardians would be preferable to rule by the masses.
In the circumstances, it’s not unreasonable, let alone crazy, to ask under what circumstances we might allow AI systems to partake in some of the work of government. If Deep Knowledge Ventures, a Hong Kong-based investor, can appoint an algorithm to its board of directors, is it so fanciful to consider that in the digital lifeworld we might appoint an AI system to the local water board or energy authority? Now is the time for political theorists to take seriously the idea that politics—just like commerce and the professions—may have a place for artificial intelligence.
What form might AI Democracy take, and how could it be squared with democratic norms?
In the first place, we might use simple AI systems to help us make the choices democracy requires of us. Apps already exist to advise us who we ought to vote for, based on our answers to questions.64 One such app brands itself as ‘matchmaking for politics’,65 which sounds a bit like turning up to a blind date to find a creepy politician waiting at the bar. In the future such apps will be considerably more sophisticated, drawing not on questionnaires but on the data that reveals our actual lives and priorities.
As time goes on, we might even let such systems vote on our behalf in the democratic process. This would involve delegating authority (in matters big or small, as we wish) to specialist systems that we believe are better placed to determine our interests than we are. Taxation, consumer welfare, environmental policy, financial regulation—these are all areas where complexity or ignorance may encourage us to let an AI system make a decision for us, based on what it knows of our lived experience and our moral preferences. In a frenetic Direct Democracy of the kind described earlier in this chapter, delegating your vote to a trusted AI system could save a lot of hours in the day.
A still more advanced model might involve the central government making inquiries of the population thousands of times each day, rather than once every few years—without having to disturb us at all.66 AI systems could respond to government nano-ballots on our behalf, at lightning speed, and their answers would not need to be confined to a binary yes or no. They could contain caveats (my citizen supports this aspect of this proposal but not that aspect) or expressions of intensity (my citizen mildly opposes this but strongly supports that). Such a model would have a far greater claim to taking into account the interests of the population than the competitive elitist model with which we live today.
In due course, AIs might also take part in the legislative process, helping to draft and amend legislation (particularly in the far future when such legislation might take the form of code itself: see chapter six). And in the long run, we might even allow AIs, incorporated as legal persons, to ‘stand’ for election to administrative and technical positions in government.
AI systems could play a part in democracy while remaining subordinate to traditional democratic processes like human deliberation and human votes. And they could be made subject to the ethics of their human masters. It should not be necessary for citizens to surrender their moral judgment if they don’t wish to.
There are nevertheless serious objections to the idea of AI Democracy. Foremost among them is the transparency objection: can we really call a system democratic if we don’t really understand the basis of the decisions made on our behalf? Although AI Democracy could make us freer or more prosperous in our day-to-day lives, it would also rather enslave us to the systems that decide on our behalf. One can see Pericles shaking his head in disgust.
In the past humans were prepared, in the right circumstances, to surrender their political affairs to powerful unseen intelligences. Before they had kings, the Hebrews of the Old Testament lived without earthly politics. They were subject only to the rule of God Himself, bound by the covenant that their forebears had sworn with Him.67 The ancient Greeks consulted omens and oracles. The Romans looked to the stars. These practices now seem quaint and faraway, inconsistent with what we know of rationality and the scientific method. But they prompt introspection. How far are we prepared to go—what are we prepared to sacrifice—to find a system of government that actually represents the people?
Back to the tentative thesis from the introduction: when a society develops new technologies of information and communication, we might expect political changes as well. This applies even to a concept as venerable as democracy. The classical model in ancient Greece, the liberal and competitive elitist models in the modern era: these were all models of ‘democracy’ but tailored to the conditions of their time. The digital lifeworld will challenge us to decide which aspects of democracy are most important to us. Do we care enough about deliberation to save it from the threats that face it? If we value liberty and human flourishing, then why not a system of Direct Democracy or Wiki Democracy? If we want the best outcomes and equal consideration of interests, then is there a place for Data Democracy and AI Democracy?
I started this chapter by observing how much has been written about democracy. As the digital lifeworld comes into view, it turns out that there is still a great deal more thinking and debating to do. In a future thick with new and strange forms of power, we’ll need a form of democracy that can truly bring our masters to heel.