‘Thinking too has a time for ploughing and a time for gathering the harvest.’
Ludwig Wittgenstein, Culture and Value (1970)
It has been a long journey through new and sometimes strange conceptual terrain. Let’s take a moment to reflect on what we’ve seen. If you’ve leapt to this chapter from the beginning, be warned: it draws on concepts and arguments from throughout the book.
We stand at the edge of the digital lifeworld, a world populated by digital systems that will come to rival and surpass humans across a wide range of functions. Over time they’ll grow more integrated, permeating the world around us in structures and objects that we never previously regarded as technology. Human life will be increasingly quantified, with ever more of our actions, utterances, movements, relationships, and emotions captured and recorded as data, then sorted, stored, and processed by digital systems.
Although we will enjoy dazzling new opportunities in the digital lifeworld, certain technologies will emerge as formidable instruments of power. Some will be able to force us to do things. Others will achieve the same result by scrutinizing us. Still others will exert power by controlling our perception of the world. The code animating those technologies will be highly adaptable and sophisticated, capable of shaping our lives in a dynamic and granular way. Those who control these technologies will be powerful in a broad sense, meaning they’ll possess a stable and wide-ranging capacity to get others to do things of significance that they would not otherwise do, or not to do things they might otherwise have done. Any entity (public or private) that aspires to power will seek to control these technologies.
The most immediate political beneficiaries will be the state and tech firms, who will vie for control of the new technologies of power. The state will gain a supercharged ability to enforce the law, and certain powerful tech firms will be able to define the limits of our liberty, determine the health of our democracy, and decide vital questions of social justice. The economic structure of the digital lifeworld, left to itself, could cause wealth to become concentrated in the hands of a few mighty entities.
Almost a century ago, Max Weber wrote in The Profession and Vocation of Politics (1919) that the task of politics is to ‘look at the realities of life with an unsparing gaze, to bear these realities and be a match for them inwardly’.1 In this penultimate chapter I offer some unsparing thoughts on what may lie ahead.
At its heart, this is not a book about technology, or even about political theory. It is a book about people.
Many of the problems referred to in these pages, past and future, can be attributed to the choices of individuals. The Airbnb host who refuses to host black guests; the troll who posts defamatory statements on social media; the anti-Semite who ‘games’ Google so that articles denying the Holocaust get more prominence;2 the joker who teaches Microsoft’s Tay to say ‘Fuck my robot pussy daddy’; the hacker who holds people’s medical data for ransom; the engineer who fails to train his systems to recognize women—these aren’t problems with technology. They’re problems with us.
The digital lifeworld will demand more of us all. From the shiniest CEO to the most junior programmer, those who work in tech will assume a role of particular influence. ‘In the long history of the world,’ said John F. Kennedy in 1961, ‘only a few generations have been granted the role of defending freedom in its hour of maximum danger.’ The future of politics will depend, in large part, on how the current generation of technologists approaches its work. That’s their burden whether they like it or not. Indeed, whether they know it or not. Plato wrote in The Republic that ‘there will be no end to the troubles of states . . . till philosophers become kings in this world, or till those we now call kings and rulers really and truly become philosophers.’3 Now it is technologists who must become philosophers, if we are to preserve our liberty, enhance democracy, and keep the arc of history bent toward justice. We need philosophical engineers worthy of the name.
Those of us who don’t work in tech can perhaps be guided by the principle of Digital Republicanism, which holds that a freedom that depends on the restraint of the powerful is no kind of freedom at all. We must be able to understand and shape the powers that govern our lives. It falls to us to design, theorize, and critique the world that’s emerging. Vigilance, prudence, curiosity, persistence, assertiveness, and public-spiritedness: without them, the digital lifeworld will slip into darkness.
We have to develop systems and structures that encourage our best instincts while constraining our worst ones. In doing so, we’ll face a number of systemic challenges. The first is what to do about the power of the supercharged state. In chapter ten I asked rhetorically whether we might be devising systems of power that are too complete and too effective for the flawed and damaged human beings they govern. In truth, I believe we are. The impending threats to liberty are unprecedented. We have to get working on the ‘wise restraints’ that could mitigate the oppressive consequences. In chapter eleven I sketched out a range of alternative perspectives on this issue, which I won’t rehearse here; suffice it to say that this is an area that aches for further theory, not just from political theorists but also from lawyers, sociologists, psychologists, criminologists, technologists, and more.
Against the grain of recent history, I also suggest that democracy will be even more important in the future than it was in the past. Growth in the state’s power will demand a corresponding growth in the people’s capacity to hold it to account. Fortunately, as we’ve seen, new and interesting forms of democracy are emerging just at the time that they are needed most. ‘[H]istory’, wrote Rosa Luxemburg, ‘has the fine habit of always producing along with any real social need the means to its satisfaction, along with the task simultaneously the solution.’4 The social need is clear: to protect people from servitude. The task is also clear: to keep the power of the supercharged state in check. The solution, I hope, will be a new and more robust form of democracy: not the tired competitive elitist model, but one that combines the most promising elements of Deliberative Democracy, Direct Democracy, Wiki Democracy, Data Democracy, and AI Democracy.
The second systemic challenge lies with tech firms. As we’ve seen, the issue isn’t (just) one of big companies getting richer and the rest of us getting poorer. It’s about the power they’ll wield through the technologies they control.
We don’t typically let others dominate us without good reason, or at least without our permission. If tech firms are to acquire such power, then that power ought to be legitimate. For some this will sound odd. If an economic entity creates a product that consumers engage with, then why shouldn’t it enjoy the power that comes with its success? This type of thinking is sensible up to a point, but ultimately it confuses the logic of economics with the logic of politics. In the marketplace, investment, risk-taking, and hard work will often lead to the legitimate acquisition of wealth. But to say that the legitimacy of a tech firm’s political power should be judged by market standards because it originated in the market is like saying that the legitimacy of a military junta should be judged by military standards because it originated in a coup d’état. On any faintly liberal or democratic view, what matters for the purpose of legitimacy is the perspective of the people subject to the power in question. The political realm is not a marketplace. The principles that justify political power are different from those that justify commercial success. Riches, fame, and even celebrity can legitimately be won in the marketplace. But not great power. The acquisition of power must be justified by reference to political principles, not economic ones.
That being said, some of the principles historically used by the strong to justify their power have been unconvincing. Divine Rule: the king was appointed by God; we must obey him. Original Sin: our despicable urges must be restrained. The Great Chain of Being: we must all submit to our place in the natural hierarchy. Tradition: it’s always been this way. Patriarchy: as the father justly rules his family, so too does the prince rule his subjects. Might is right: in politics there’s no right and wrong, only strong and weak. Pragmatism: it’s better not to rock the boat. Charisma: he’s extraordinary, let’s follow him! Apathy: no one really cares anyway.
These are all justifications, or excuses, made in the name of power. You may doubt whether they are good ones.
The citizen in a modern democracy has a solid idea of what makes the power of the state legitimate. It’s that with the consent of the governed, the state guarantees a certain amount of liberty and enacts laws that are consistent (as far as possible) with the shared beliefs and interests of the governed. Call this the liberal-democratic principle of legitimacy. Because the power of tech firms will differ in scope and extent from that enjoyed by the state, the liberal-democratic principle may be too high a standard by which to judge the legitimacy of their power. You don’t need to hold elections for the Board of Facebook to make its power legitimate.
Some will say that we should simply nationalize these new tech behemoths and be done with it. This would be a mistake, and not just because the market is a vital engine of innovation. The lesson of liberalism is that the state is itself a massive concentration of power that must be kept in check. We know that even democratic states can abuse their power to the detriment of citizens. Already our governments are moving to control the technologies of power without the hassle of owning them, by co-opting the help of tech firms, legislating to bring them under their control, and hacking their systems when other methods fail. Every increase in the state’s power must be justifiable, and it’s naïve to suppose that handing the technologies of power to the state wouldn’t generate its own threats to liberty, democracy, and social justice.
A more sensible approach would be to regulate private tech firms in a way that gives their power an authentic stamp of legitimacy.5 We already regulate private companies in various ways: consumer protection rules, employee rights, food standards, and so forth. The state aggressively regulates utilities to ensure that their power is exercised responsibly. Tech firms too are subject to regulation, but a growing number of scholars and commentators argue that we need to go further. Think of Google. As we’ve seen, its search results can be unjust. If you search for an African-American-sounding name, you’re more likely to see an advertisement for websites offering criminal background checks than if you enter a non-African-American-sounding name. If you type ‘Why do gay guys . . . ’, Google offers the completed question, ‘Why do gay guys have weird voices?’ (See chapter sixteen.) Legal scholar Frank Pasquale proposes that Google should face regulation to counter this sort of injustice. Among other measures, he would force Google to ‘label, monitor, and explain hate-driven search results’; allow for ‘limited outside annotations’ on defamatory posts; ‘hire more humans to judge complaints’; and possibly even ‘ban certain content’.6 Elsewhere, Pasquale and Danielle Keats Citron propose a system of procedural safeguards, overseen by regulators, to allow people to challenge algorithmic decisions that significantly affect their lives (like their credit scores).7 This is sensible stuff. But the very idea of regulation will make some people queasy, precisely because it means handing even more power to the state. That queasiness will have to be suppressed, at least in part. Some of the power accruing to tech firms—the power, for instance, to control our perception of the world—is so extraordinary that it rivals or exceeds the power of any corporate entity of the past.
This is a book about principles and ideas. It isn’t intended to offer specific regulatory proposals. But political theorists should be able to say, in broad terms, what would be needed to make tech firms legitimate in their exercise of power. I offer three suggestions.
The most obvious source of legitimacy is consent: the power of tech firms can be justified by the fact that people consent to what they do. The idea of consent is useful in situations where power is exerted in a small and ad hoc way.8 For instance, if you type a single search query into Google, you tacitly accept Google’s right, on that occasion, to filter the world for you as it sees fit. The same goes for inferred consent, where a system determines what a person ‘would have’ consented to in a particular instance.9 To adapt Gerald Dworkin’s example, if a robotic surgeon is repairing a broken bone and discovers a cancerous tumour, it may legitimately infer that the patient would have consented to it removing the tumour immediately.10
The use of consent to govern longer-term power relationships with tech firms, however, is more problematic. It’s sometimes said in defence of big tech firms (usually by their lawyers) that when a person expressly agrees to a set of terms, he or she gives the tech firm permission to do whatever those terms provide.11 But who in their right mind actually reads the terms and conditions before they use an app or a device? Scholars estimate it would take seventy-six working days to read all the privacy policies that we encounter in a year.12 The timeworn practice of thrusting a huge legal document under someone’s nose and saying ‘sign here’ is from another age. And even if we could and did read the terms and conditions, ticking a box once a decade is not a satisfactory means of surrendering important rights and freedoms.
Another difficulty with the consent principle is that often we won’t have a choice whether or not to engage with a particular technology. This might be because of ubiquity (like public surveillance systems) or, equally, of necessity (in a cashless economy, we’ll have no choice but to use the standard payment technology). To confer legitimacy, consent must be free and informed. Consent given out of necessity is not really consent at all. Finally, to return to the Google example, the consent principle doesn’t protect us at a systemic level from abuses, even if it can justify an ad hoc exercise of power over a single search. Consent alone—at least consent of the kind given in the market economy—may not be enough.
An alternative source of legitimacy comes from the principle of fairness: if you accept benefits from a digital platform, it is said, then you have a duty of fairness to accept the reasonable burdens too.13 This is similar to the consent principle but differs in one important respect. Whereas the consent principle derives from the idea of freedom (if you freely consent to power, the loss of liberty you suffer is legitimate), the fairness principle comes from the idea of justice (if you receive X, it is just that you surrender Y in return).
The fairness principle is intuitively sensible, but it runs into difficulties similar to those of the consent principle. If a tech firm extracts our data without making it clear that it is doing so, it can scarcely be said that fairness requires us to let it do as it pleases. It’s only legitimate if we know about it. Similarly, it’s not exactly fair if people have no choice but to engage the services of a digital platform. When I walk through the streets of a ‘smart’ city I am watched by surveillance cameras. But I didn’t choose to be the subject of surveillance. It’s something that was done to me. First I’m given no choice but to engage with the technology, then I’m told that because I engaged with it I somehow have a duty of fairness to accept its rules and let its controllers use the data gathered about me for other commercial ends. That doesn’t seem fair at all.
Finally, it might be said that a tech firm’s power is legitimate to the extent that it reflects or embodies the shared values of its users.14 According to this view, for instance, what would give a news platform the legitimacy to filter the news in a certain way is that its algorithms reflect or embody the shared values of its users as to how that filtering should be done. This means, however, that the power in question must always be sufficiently transparent that users can see whether it does indeed reflect or embody those values. If the news gathering, editing, and ranking algorithms are hidden from sight, the platform’s users can’t possibly say whether or not they are engineered in a way that reflects their shared values. They might seem to be, but it would be pretty difficult to know otherwise until something went wrong, as it did for Facebook in 2016 when Russian operatives purchased more than 3,000 targeted advertisements to interfere in the US presidential election.15 Without transparency, the shared values principle is an article of faith and not a principle of legitimacy.
It follows from these principles that at least two types of regulation are likely to be necessary in the digital lifeworld. The first is regulation to ensure transparency; the second is regulation to break up massive concentrations of power.
It would be unacceptable for humans to become subject to significant powers that are utterly opaque to them. For this reason there must be transparency, not only in relation to algorithms but in relation to data use and the values (if any) that are consciously coded into technology. There should also be requirements of simplicity, compelling tech firms to explain their systems in basic terms that the rest of us can understand. (EU authorities already plan a legal right to explanation, although only in relation to fully automated decisions.)16
The case for transparency has secure philosophical foundations, the first of which is the concept of legitimacy just discussed. If we consider that people have effectively consented to be the subject of a tech firm’s power (the consent principle), or that those who accept its benefits have a duty to accept the burdens (the fairness principle), or that its exercise of power reflects or embodies the shared values of its users (the shared values principle), then yes, the power of that tech firm may be said to be legitimate. But so long as tech firms keep their algorithms hidden under lock and key, their data policies obscure, and their values undefined, they cannot possibly claim a single one of these forms of legitimacy. How can we freely consent when we don’t know to what we’re consenting? How can we be said to have accepted burdens if we don’t know what they are? How can we know if a tech firm shares our values if we don’t know how its algorithms work, or what the firm does with our data? The case for transparency is reinforced by the principle of liberty. We are not truly free if we are subject to rules we cannot see, and that are determined by the whim and fancy of people other than ourselves.
Progress is being made toward developing principles of algorithmic audit.17 One scholar suggests that there should be ‘an FDA for algorithms’ with a ‘broad mandate to ensure that unacceptably dangerous algorithms are not released onto the market’.18 This is an interesting idea. But tech firms will object that revealing their commercially sensitive algorithms would do irreparable harm to their business or enable malevolent parties to game and abuse the system (an argument favoured by Google). One way to address these concerns would be to entrust the work of algorithmic audit to discrete third-party professionals who ‘would take a vow of impartiality and confidentiality, much as accountants and certain other professionals do now. They would evaluate the selection of data sources, the choice of analytical and predictive tools, and the interpretation of results’ before issuing a certificate of good health.19 On this model, rather than citizens holding tech firms to account, we’d leave it to regulatory authorities and independent auditors who would be better-placed, financially and technically, to serve as a check and balance. This idea is attractive but also a little regressive. It leaves us with yet another class of rulers—those who write the code and those who audit it—who understand the workings of power while the rest of us remain in the dark and rely on their benevolence and competence. A system of algorithmic audit would shift power away from tech firms, to be sure, but only to another technocratic élite of auditors, albeit one that has pledged to serve the public interest.
We’ll also, I believe, need structural regulation. By this I mean political intervention to ensure that the technologies of power don’t become too concentrated in the hands of a small number of firms and individuals. A thriving market economy might make this unnecessary, but if the analysis in Part V is correct then the trend will be toward greater concentration. Big tech firms may need to be broken up.
Again, there are sound philosophical reasons for structural intervention. The simplest is that it would prevent the accrual of dangerous levels of power. Imagine, for instance, that all or most political speech was channelled through a single digital platform. Users would have no choice but to accept its rules: they couldn’t go elsewhere. And that platform would provide a single chokepoint for the stifling of speech altogether. That’s too much power for one entity. Similarly, if a single platform (like the imaginary Delphi described in chapter eight) controlled all the apparatus of perception-control within an area, its capacity to shape the behaviour of the population would be unacceptably large.
Structural regulation could also be justified on grounds of legitimacy. You can’t be said to owe a duty of fairness to a tech firm, or to have consented to its power, in circumstances where there is only one provider (or a few similar ones) and you have no other choice. A system with one tech firm that operated transparently might be able to satisfy the shared values principle—but even then we’d still be relying on its benevolence to keep things that way. From the Digital Republican perspective, this would be unacceptable. Digital Confederalists would go further, saying we need, at all times, a plurality of available digital systems and the opportunity to move between them without adverse consequences.
We already have some legal machinery to deal with concentrations of economic power. In the US it’s called antitrust law and in Europe it’s known as competition law. Both are aimed at the promotion of economic competition, that is, restricting cartels, preventing collusion, overseeing mergers and acquisitions, and preventing large economic entities from abusing their dominant position in the market. There have been some recent antitrust victories against big tech firms, like the €2.4 billion fine imposed by the EU on Google in 2017 for manipulating search results to promote its own shopping-comparison service over others’. (The US system is less robust. Google was not fined for the same conduct there.)
Important though it is, antitrust regulation was designed for a different set of problems from those we will confront in the digital lifeworld. There will be plenty of tech firms in the digital lifeworld, for example, that wield considerable power but are not unlawful monopolies for the purposes of antitrust law. Moreover, tech firms could routinely exercise power in ways that are wrong-headed, foolish, or unprincipled, but which don’t properly fall into the category of ‘abuses’. For now at least, the core aim of antitrust regulation is to prevent economic abuses in the form of price discrimination, predatory pricing, and the like, rather than to shape and constrain political power. But as we saw in chapter eighteen, the Data Deal means that many services are provided for free. No issue of economic abuse necessarily arises. The risk, therefore, is that the power of tech firms will fall outside the antitrust regulatory framework altogether. And even if an antitrust regulator was able to break up a tech monopoly, it still wouldn’t necessarily guarantee the plurality of options that would be needed for true Digital Confederalism.
What we need from structural regulation is to provide citizens with some choice over the powers that govern them, not (just) fair prices for consumers. The philosopher Baron de Montesquieu is credited with developing the idea of a separation of powers to distribute political power across three branches of the state: the legislature, the executive, and the judiciary.20 Montesquieu believed that the best way to prevent any one person or entity from assuming too much power was to keep the three branches independent from each other, together with a system of ‘checks and balances’. We can learn from Montesquieu. Arguably the most serious difficulty with antitrust regulation is that its regulatory domain is structured by reference to markets, not forms of power. What I mean by this is that antitrust regulators begin by identifying a particular market or markets—telecoms, road transportation, and so forth—within which a firm might be said to be abusing its dominance. But for political purposes, market dominance is not what matters. What ultimately matters are the forms of power themselves—force, scrutiny, perception-control—not the economic arenas in which they originate.
The idea should be to create a political system in which two conditions obtain. The first is that no firm is allowed a monopoly over any of the means of force, scrutiny, and perception-control. The second is that no firm is allowed significant control over more than one of the means of force, scrutiny, and perception-control together. Structurally, that’s the best (and possibly only) way to ensure liberty and legitimacy. Instead of trying to bend antitrust law to serve a political function—hammering a legal peg into a political hole—we need to think in terms of a new separation of powers.
A final way for citizens to hold on to control would be to give them a direct say in the rules that govern them. Make power accountable, not just transparent. Bring the values of democracy to the private sector. This could mean more ‘free software’. Free software, as Lessig explains, is not free in the sense that ‘code writers don’t get paid’ but rather free in the sense that anyone has the right to ‘modify’ the code ‘as he or she sees fit’.21 It also could mean more flexible operating systems, that is, fewer Apple-style systems that allow for little or no meaningful customization, and more Android-style systems that can be customized much more readily. Contrast a self-driving car that is programmed to kill the child rather than the trucker with one that lets you decide, in advance, what it ought to do in situations of that kind (see chapter five). It could mean the right to challenge or appeal a particular decision, prediction, or action of a digital system.22 Engineers working on important new technologies could face public scrutiny before they unleash the consequences of their inventions.23 And tech firms should be more answerable to their users. According to this view, Facebook shouldn’t be able to change functionalities that affect liberty, democracy, and justice without the permission of the people affected by that change.
These are radical ideas but in philosophical terms they’re not new. Many distinguished thinkers have rejected the idea that democracy begins and ends with the ballot box. Instead, they say, democratic principles should be present in as much of day-to-day life as possible, and at the very least should inform all decisions ‘that concern the common activities which are among the conditions for self-development’.24
Think for a moment about the finest political speech you’ve ever heard. It probably inspired you in some way or convinced you of a cause. Perhaps it moved you to tears. For many, Martin Luther King Jr’s ‘I Have a Dream’ speech from August 1963 represents the pinnacle of rhetorical force:
I have a dream that one day this nation will rise up and live out the true meaning of its creed: ‘We hold these truths to be self-evident, that all men are created equal.’
I have a dream that one day on the red hills of Georgia, the sons of former slaves and the sons of former slave owners will be able to sit down together at the table of brotherhood.
I have a dream that one day even the state of Mississippi, a state sweltering with the heat of injustice, sweltering with the heat of oppression, will be transformed into an oasis of freedom and justice.
I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.
I have a dream today!
In the very first chapter of this book I noted that engineers have now built an AI system capable of writing political speeches. How would you feel if you found out that the speech you so admire had been generated by that system? That it was the product of an algorithm that ‘learned’ to compose great speeches by analysing thousands of them from the past? Would knowledge of this fact diminish the value of that speech in your eyes?
For me, it would.
When we hear a great political speech, we hear words but we also hear the authentic voice of the speaker. We bear witness to their moral courage. We glimpse their soul. It’s why we like to know if politicians write their own speeches or get someone else to draft them on their behalf. Once you know that a speech was written by a machine, its words sound a little more empty and formulaic.
At this point in the book (well done on making it this far, by the way) you’ve probably had quite enough of principles, but there’s room for one more. It’s this: there are some things that digital systems shouldn’t do, even if they can (technically) do them better than humans. Such a principle might apply to things that have value precisely because they are the product of the human mind, the human hand, or the human heart. To borrow from Michael Sandel, these might be inherently ‘corrupted or degraded’, or at the very least, diminished, by the fact that they are the product of machines.25 It might even be said that we are degraded in some way by having an algorithm for a boss, a VR system for a lover, a robot for a carer, or an AI system for a government.
As I said in the Introduction, the great debate of the last century was about how much of our collective life should be determined by the state and what should be left to market forces and civil society. In the future, the question will be how much of our collective life should be directed and controlled by powerful digital systems—and on what terms. We cannot, through inactivity, allow ourselves to become ‘the plaything of alien powers’, subject always to decisions made for us by entities and systems beyond our control and understanding.26 That is the challenge of Future Politics.