ELEVEN

Freedom and the Tech Firm

‘Let every nation know, whether it wishes us well or ill, that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe to assure the survival and the success of liberty.’

John F. Kennedy, Inaugural Address (1961)

Niccolò Machiavelli wrote in his Discourses that ‘the peoples of ancient times were greater lovers of liberty than those of our own day.’1 That was 500 years ago. Back then, the greatest threats to ‘common liberty’ were kings, colonization, and conquest. In the future, there will be a fourth: code.

As well as supercharging the state, digital technology will increasingly concentrate power in the hands of the tech firms that control the technologies of power. Recall the line from Alexis de Tocqueville’s Democracy in America (1835) that opened this Part of the book: ‘Every individual lets them put the collar on, for he sees that it is not a person, or a class of persons, but society itself which holds the end of his chain.’ This won’t always be true in the digital lifeworld. The ‘collar’ of digital power, where not held by the state, will often be controlled by a very particular ‘class of persons’, that is, the firms that control the technology. This chapter is dedicated to understanding what that means for freedom.

My view is simple: if tech firms assume the kind of power that affects our most precious liberties, then they must also understand and respect some of the rudimentary principles of liberty. Humans have been developing these for centuries. Under what circumstances is it permissible to restrict a person’s freedom? Should we be free to harm ourselves? Should we be prevented from behaving immorally? These can’t be treated as merely corporate or commercial concerns. They are fundamental questions of political theory.

Liberty and Private Power

One of the curious traits of digital technology, as we’ve seen, is that it can enhance and restrict our freedom at the same time. It frees us to do things that we couldn’t do previously. But it restricts us according to the constraints of the code. Think for a moment about using an Apple device. It’s usually a thing of beauty: smooth, seamless, and intuitive. It offers a universe of applications. But it’s a universe closely curated by Apple. You can’t reprogram the device to your tastes. You can only use the applications chosen by Apple, whose Guidelines for app developers say:

We will reject apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, ‘I’ll know it when I see it.’

Despite the somewhat arbitrary power of this clause, it feels churlish to complain. Apple devices offer plenty of choice and the system works well. The legal scholar Tim Wu, referring to this example, observes that ‘consumers on the whole seem content to bear a little totalitarianism for convenience.’2 He’s right. We intuitively understand that Apple devices enhance our overall freedom even if we can’t do everything we’d like with them. The question is whether the same trade-off will still make sense in the digital lifeworld, where code’s empire will extend to almost every freedom we currently take for granted.

Take free speech, a freedom of the utmost sanctity. Free speech permits authentic self-expression. It protects us against powerful interests, exposing them to criticism and ridicule. It allows for the ‘collision of adverse opinions’, a process essential to the pursuit of truth.3 With some exceptions, most of us would be horrified by the idea of the state censoring what we say or how we say it. We venerate the idea of the Greek agora, where the citizenry spoke freely, fearlessly, and on equal terms (see chapters twelve and thirteen).

Now think about this. In the digital lifeworld, almost all of our speech will be mediated and moderated by private technology firms. That’s because we will come to rely almost entirely on their platforms for communication, both with people we know and people we don’t. That means tech firms will determine the forms of communication that are allowed (for example, images, audio, text, hologram, VR, AR, no more than 140 characters, and so forth). They will also determine the audience for our communication, including who can be contacted (members of the network only?) and how content is ranked and sorted according to relevance, popularity, or some other criterion. They’ll even determine the content of what we say, prohibiting speech that they deem unacceptable. This involves some tricky distinctions. According to leaked documents obtained by the Guardian, Facebook will not remove a post saying ‘To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat’ but it will remove one saying ‘Someone shoot Trump’ because the President is in a protected category as a head of state. Videos of abortions are OK, apparently, unless they involve nudity.4

It won’t just be social media platforms that have the capacity to constrain speech. There will usually be several other technical intermediaries between speakers and their audiences, including the firms that control the hardware through which the information travels.5 With more or less precision, each will be able to control the flow of information.

We are, in short, witnessing the emergence of a historic new political balance: we are given wholly new forms and opportunities for speech, but in exchange we must accept that such speech is subject to the rules set by those who control the platforms. It’s as if the agora had been privatized and purchased by an Athenian oligarch, giving him the power to dictate the rules of debate, choose who could speak and for how long, and decree which subjects were out of bounds. The main difference is that algorithmic regulation of these platforms means that thousands upon thousands of decisions like this, each affecting our freedom of speech, will be taken every day, decided automatically and executed seamlessly with no right of appeal. Facebook, Microsoft, Twitter, and YouTube, for instance, have recently teamed up to announce the Global Internet Forum to Counter Terrorism. Among other measures, they’ll ‘work together to refine and improve existing joint technical work’, including using machine learning techniques for ‘content detection and classification’.6
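To see what ‘automated and seamless’ means in practice, consider a minimal sketch of a threshold-based moderation pipeline. Everything in it is hypothetical: the classifier, the threshold, and the policy are invented for illustration, not drawn from any firm’s actual system.

```python
# A sketch of threshold-based automated content moderation. Everything
# here is hypothetical: the classifier, the threshold, and the policy
# are invented for illustration, not drawn from any real platform.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def violation_score(post: Post) -> float:
    """Stand-in for a trained ML classifier: returns a score in [0, 1]
    estimating how likely the post is to breach the content policy.
    A crude keyword check substitutes for a real model here."""
    banned_phrases = ("banned phrase one", "banned phrase two")
    return 1.0 if any(p in post.text.lower() for p in banned_phrases) else 0.0

REMOVAL_THRESHOLD = 0.9  # a policy choice made by the firm, not the user

def moderate(post: Post) -> bool:
    """Return True if the post is published, False if suppressed.
    Note what is missing: no notice, no explanation, no appeal."""
    return violation_score(post) < REMOVAL_THRESHOLD
```

The point of the sketch is structural rather than technical: every judgement in it, from the training of the classifier to the choice of threshold, belongs to the firm, and the speaker appears nowhere in the loop.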

Now, plenty of folk will be happy to let tech firms get on with the task of regulating speech. But since this is a chapter about liberty, it’s worth recalling the republican principle of freedom: that a freedom which depends on the restraint of the powerful is no kind of freedom at all. Each firm that controls the platforms for speech could reduce or refine our freedom of speech at any time, if it so desired. Like the pre-revolutionary English and Americans, we depend on their whim and fancy for the survival of our freedom of speech. Politically speaking, is this satisfactory?

It’s not just speech. Consider freedom of thought generally. Already we trust tech firms to find and gather information about the world, choose what is worthy of being reported, decide how much context and detail is necessary, and feed it back to us in a digestible form. With few strings attached, we give them the power to shape our shared sense of right and wrong, fair and unfair, clean and unclean, seemly and unseemly, real and fake, true and false, known and unknown. We let them, in short, control our perception of the world. That’s a pretty big deal for freedom of thought.

Now consider yet another basic freedom: freedom of movement. Self-driving cars will obviously generate valuable new affordances. Non-drivers will be able to make use of the roads. Road transportation will be safer, faster, and more energy efficient. Passengers will be able to work, eat, sleep, or socialize while in transit. In return for these affordances, however, we’ll necessarily sacrifice other freedoms. The freedom (occasionally) to drive over the speed limit. The freedom (occasionally) to make an illegal manoeuvre or park on a double yellow line. The freedom to make a journey with no record of it. Perhaps even the freedom to make moral choices, like (in the case of the trolley problem described in chapter six) whether to kill the child or the trucker. Again, I don’t seek to suggest that this isn’t a deal worth striking. But I do suggest that we see it for what it is: a trade-off in which our precious liberties are part of the bargain.

From the perspective of freedom, there are four important differences between the power wielded by the state and that wielded by tech firms.

The first and most obvious is that the democratic state is answerable to the people, and citizens have a meaningful say in the rules that govern them. Power can be held to account. The same can’t be said of most tech firms operating in the private sector. They make the rules; we live with them. Recall that even ownership of a device doesn’t necessarily mean control over it. Most technology in the digital lifeworld will be reprogrammable from afar. Our property could be repurposed under our very noses, without our consent or even our knowledge.

Second, the state exists (at least in theory) to serve the general interest. A well-functioning government generates laws and policies aimed at the common good. By contrast, tech firms, like all private companies operating in a capitalist paradigm, exist for the commercial benefit of their owners.

A third difference is that mature legal systems develop in a systematic way over time according to clear rules and canons. Private code, by contrast, develops in an ad hoc and inconsistent way. Different companies take different approaches: Facebook might censor content that Twitter deems acceptable. One app may gather your personal data; another might not. Your self-driving car might kill the child; mine might swerve and kill the trucker. Code’s empire is not a unified realm, but rather a patchwork of overlapping jurisdictions. This isn’t necessarily a bad thing: it could enable a kind of Digital Confederalism, where people move between systems according to whose code they prefer. More on that later.

Fourth, technology in the digital lifeworld will be mind-bogglingly complex, and therefore even more inscrutable than the workings of government. This is an important point. As Samuel Arbesman observes, a Boeing 747-400 aeroplane—already a pretty vintage piece of kit—has 6 million individual parts and 171 miles of wiring.7 But that’s child’s play compared with what’s to come. The future will teem with components and contraptions, devices and sensors, robots and platforms containing untold trillions of lines of code that reproduce, learn, and evolve at an ever-increasing pace. Some systems will function entirely ‘outside of human knowledge and understanding’.8 The fact that machines don’t function like humans makes them inherently hard to understand. But often they don’t even function according to their design either. Like parents baffled by their child’s decision to get a tattoo, software engineers are often surprised by the decisions of their own AI systems. As algorithms grow more complex, they grow more mysterious. Many systems already run on thousands of lines of self-generated ‘dark code’, whose function is unknown.9 In the future, even the creators of technology will increasingly be heard to say: why did it do that? How did it do that? For the rest of us, the technical workings of the digital lifeworld will be utterly opaque.

It’s not just complexity that makes technology inscrutable. We’re often deliberately prevented from knowing how it works. Code is often commercially valuable and its owners use every available means to keep it hidden from competitors. As Frank Pasquale argues in The Black Box Society (2015), we’re increasingly surrounded by ­‘proprietary algorithms’ that are ‘guarded by a phalanx of lawyers’, making them ‘immune from scrutiny, except on the rare occasions when a whistleblower litigates or leaks’.10

In addition, there will often be times in the digital lifeworld when we aren’t even aware that power is being exerted on us. Many technologies of surveillance operate quietly in the background. And if a news algorithm subtly promotes one narrative over another, or hides certain stories from view, how are we supposed to know? The best technology in the future won’t feel obtrusive. It won’t feel like technology at all. The risk, as Daniel Solove puts it, is that we find ourselves in a world that’s as much Kafka as Orwell, one of constant ‘helplessness, frustration, and vulnerability’ in the face of vast, unknowable, and often unseen power.11

In light of the power tech firms will have to shape and limit our freedom in the future, it’s worth going back to fundamental principles about what should and shouldn’t be permitted in society. These principles should inform the work of those entrusted with our precious liberties.

The Harm Principle

John Stuart Mill (1806–1873) was a singular figure in the history of ideas. The son of James Mill, a well-known Scottish philosopher, the young John Stuart was deliberately isolated from other children except his siblings. His education was intense. ‘I have no remembrance of the time when I began to learn Greek’, he records in his Autobiography (1873), but ‘I have been told that it was when I was three years old.’12 He began Latin when he was eight.13 He grew up debating with ‘Mr Bentham’, a friend of his father and one of the most important thinkers in western philosophy. Young John Stuart was evidently a prodigy, but Mill Senior didn’t let him know it. With ‘extreme vigilance’ he kept his son from hearing himself praised.14

Mill developed into a thinker of extraordinary subtlety and range, dedicated above all to the ideal of individual liberty. As Isaiah Berlin observes, what the adult Mill feared most was ‘narrowness, uniformity, the crippling effect of persecution, the crushing of individuals by the weight of authority or of custom or of public opinion’. He rejected ‘the worship of order or tidiness, or even peace’ and he loved ‘the variety and colour of untamed human beings with unextinguished passions and untrammelled imaginations’.15 Mill was way (way) ahead of his time. In an era defined by strict Victorian moralism16 he fearlessly advocated individualism over ‘conformity’ and ‘mediocrity’.17 As befitted a man who started learning ancient Greek when he was three, he believed that the main danger of his time was that ‘so few now dare to be eccentric’.18

Mill was a liberal, not a libertarian. He accepted that there had to be more-than-minimal restrictions on individual freedom (wise restraints, if you will) in order for society to survive. But to restrict the liberty of others, he believed, there must always be a good reason. He came to think that only one reason could ever justify such a restriction: to prevent harm to others. This is the harm principle, one of the most influential ideas in western political thought. It is the centrepiece of Mill’s On Liberty (1859):

That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others . . . Over himself, over his own body and mind, the individual is sovereign.19

Subsequent liberal thinkers have refined the harm principle. In Joel Feinberg’s formulation, for instance, only those acts that cause ‘avoidable and substantial harm’ can rightly be prohibited.20

Unfortunately, the harm principle has been wantonly violated throughout history. Since the very beginning, people have been persecuted for holding the wrong convictions, punished for making love to people of the wrong gender, and pogromed for praying to the wrong god—none of which caused harm to anyone else. Tech firms in the digital lifeworld must do better than the powerful of the past. This is our chance to structure human freedoms in a way that emancipates people rather than crushing them.

Harm to Self

Imagine four scenarios.

In the first scenario, Eva writes an insulting email about Laura and then accidentally sends it to Laura herself rather than its intended recipient. (We’ve all been there.) Luckily for Eva, an automated alert immediately pops up: ‘Our system detects that this may not be the intended recipient. Proceed/Change?’ Eva gratefully corrects her error and resends the email. The email system imposed a constraint on Eva’s freedom by withholding the message once she had hit send. But that constraint was temporary, minor in nature, and ­capable of immediate override. It rescued her from considerable embarrassment. Most of us, I reckon, would welcome this kind of interference with our freedom.
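A minimal sketch of how such an alert might work, under loose assumptions: a real system would use a trained model, and the heuristic, names, and prompt below are invented purely for illustration.

```python
# A sketch of Eva's email alert. The heuristic is deliberately crude
# and purely illustrative: a real system would use a trained model.

def possibly_misdirected(body: str, recipient_name: str) -> bool:
    """An email that talks *about* its recipient in the third person
    may have been misaddressed, as Eva's insulting email was."""
    return recipient_name.lower() in body.lower()

def send_email(recipient_name: str, address: str, body: str) -> None:
    if possibly_misdirected(body, recipient_name):
        # The constraint is temporary, minor, and immediately overridable.
        answer = input("Our system detects that this may not be the "
                       "intended recipient. Proceed/Change? ")
        if answer.strip().lower() != "proceed":
            return  # Eva corrects the address and resends
    print(f"Sending to {address}")  # stand-in for actual delivery
```

Notice that the system never refuses outright: it interposes a single question and then defers entirely to the user, which is what makes this the gentlest form of interference.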

Next consider James, who is a few pounds overweight. It’s late at night and, feeling peckish, he tiptoes down to the kitchen, opens the ‘smart’ refrigerator and removes a generous slab of pie. Salivating, and standing over the sink to prevent crumb spillage, he readies himself for the first delicious bite. Suddenly, the fridge pipes up in a loud and scornful tone: ‘Do you really need that pie, James?’ Shocked and ashamed, he drops the pie and scarpers back to bed. Like Eva’s email alert, James’s sassy fridge caused him to change his course of action (or, to put it in terms of power, caused him to refrain from doing something he would otherwise have done). True, the fridge didn’t force James not to eat the pie, but the disciplinary effect of its scrutiny was just as strong. I suspect that most of us would be uncomfortable with this kind of intrusion from any person, let alone one of our kitchen appliances. It’s a little too personal, a little offensive, even if it’s in our own best interests. Of course, the situation would be different if James had asked his fridge to police his eating habits in an act of Odyssean self-restraint (see chapter ten).

Next imagine Nick, who instructs his food preparation system (call it RoboChef)21 to make a curry according to a recipe he has prepared. Nick’s recipe, however, contains a measurement error that would result in an excessively high amount of capsaicin (the compound that gives chilli its heat) in the dish—enough to give Nick an evening of acute discomfort. RoboChef detects the disproportionate figure, chooses to ignore it, and instead prepares a tasty jalfrezi with just the right amount of spice. Nick enjoys the meal. How should he feel about RoboChef’s intervention? It can’t be denied that RoboChef acted according to (what it correctly recognized as) Nick’s interests. But there’s something mildly disconcerting about the fact that it disobeyed Nick’s instructions without telling him and assumed his interests without consulting him. What if the capsaicin figure had not been an error? Perhaps Nick is someone who enjoys the sensation of overwhelming spice—and to hell with the gastric consequences! If he directly instructed RoboChef to deliver a truly, ridiculously, spicy curry—one that would be genuinely harmful to the majority of the population—would it be right for RoboChef to refuse to cook it? Put another way: should a digital system be able to deny Nick, a competent adult, the consequences of his voluntary and informed choice?
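The disagreement here reduces to a single design choice: whether the system silently substitutes its own judgement or surfaces the anomaly and defers to the user. A minimal sketch makes the choice vivid (the safe limit, recipe format, and prompt are all hypothetical):

```python
# A sketch of RoboChef's dilemma when a recipe value looks dangerous.
# The safe limit, recipe format, and prompt are all hypothetical.

SAFE_CAPSAICIN_G = 0.05  # illustrative 'safe for most people' ceiling

def prepare(recipe: dict) -> dict:
    amount = recipe["capsaicin_g"]
    if amount <= SAFE_CAPSAICIN_G:
        return recipe  # nothing unusual; cook as instructed

    # Option 1 (what RoboChef did): silently substitute its own judgement.
    #   recipe["capsaicin_g"] = SAFE_CAPSAICIN_G

    # Option 2: surface the anomaly and honour an informed choice.
    answer = input(f"{amount}g of capsaicin is far above the usual level. "
                   "Cook anyway? (yes/no) ")
    if answer.strip().lower() == "yes":
        return recipe  # the competent adult gets his fiendish curry
    recipe["capsaicin_g"] = SAFE_CAPSAICIN_G
    return recipe
```

Option 2 treats the out-of-range figure as information to be shared rather than an error to be corrected; the political question is who gets to decide which option is coded in.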

Finally, picture an elderly gentleman—call him Grahame—who suffers from a painful chronic disease. Lonely but at peace, he sincerely wishes to end his life. To that end he instructs his RoboChef to prepare a lethal cyanide broth. RoboChef refuses. Frustrated, Grahame lies down in his driveway and orders his self-driving car to reverse over his skull. The vehicle politely declines. Desperate, Grahame then tries to commit suicide by ingesting too many pills. But his ‘smart’ floor system detects his unconscious body on the floor and places an automated call to emergency services.22 Against his will, Grahame’s life is again saved.

In each of these scenarios a digital system intervenes to restrain the will or conduct of a human being, albeit in a way that is essentially benign. The aim is to protect the human from self-harm. We can call this Digital Paternalism. To what extent should Digital Paternalism become a feature of the digital lifeworld? Should our digital systems coddle and protect us? We know what John Stuart Mill might say. He believed that a person’s ‘own good, either physical or moral’ is never ‘a sufficient warrant’ to exercise power over him. Only harm to others could justify that.23 But Mill’s principle feels a little narrow in a world where tiny restrictions on freedom could lead to real benefits. At the other extreme from Mill, Isaac Asimov’s well-known First Law of Robotics provides that a robot may not injure a human being or, through inaction, allow a human being to come to harm. But, as Asimov knew well, the First Law, by itself, is not a sufficient guide to action. It doesn’t tell us how far a robot must go in defence of humanity. And it doesn’t say what counts as harm. Personal embarrassment? The cardiac implications of an unhealthy slice of pie? A fiendishly spicy curry? Death?

Of the four scenarios, Grahame’s is the simplest but also the most vexing. In most developed countries, suicide has long ceased to be a crime. What is generally illegal is for one person to assist in the death or mutilation of another; which is why, in most countries, you can’t take part in euthanasia, inject someone with heroin, or seriously wound them in a loving act of sadomasochism. Consent on the part of the harmed person is no defence. But Grahame isn’t exactly asking for the assistance of any other human being. On the contrary, his death need not morally implicate any person other than himself. He has one desire—to die—and he thinks he has the technical means to make it happen. Yet the technology either refuses to assist in that goal or actively thwarts it. Isn’t this a serious intrusion into Grahame’s liberty? Perhaps, but it might be a mistake to think that liberty is the only value at play here. Allowing machines to assist in self-harm (or to stand by while it takes place) could threaten ‘one of the great moral principles upon which society is based, that is, the sanctity of human life’.24 Grahame’s suicide, on this view, may not be a crime against himself but it is an offence ‘against society as a whole’ that cannot be tolerated, even if that means making Grahame and others in his position less free.25

My view is that a fruitful balance could be struck between the freedom espoused by Mill and the benefits of Digital Paternalism. As the scenarios demonstrate, at least eleven considerations could factor into any decision about whether freedom should be restricted to prevent self-harm. How significant is the freedom being denied? By comparison, how great is the benefit being sought? Is what’s being restricted a voluntary, conscious, informed choice or an accidental, reflexive, or otherwise involuntary act? Is the constraint imposed openly or covertly? Is the constraint imposed according to our express interests or just our perceived interests? Does it involve force, or influence, or manipulation (that is, does it properly leave room for freedom of choice)? Is the constraint an omission or an act? Can it be overridden? How long does it last? Is the person whose freedom is being restricted an adult or a minor? Of sound mind?

The line has to be drawn somewhere, but the question is political and not technical.

Immoral but Not Harmful

Every society has some shared sense of what counts as evil, wrong, sinful, and immoral. One longstanding question is whether we should be permitted to do things that are immoral even if they’re not harmful. It’s possible, for instance, for a person to have paedophilic fantasies without ever harming a child (and indeed being repulsed by the very idea of doing so). It’s possible for a man to have consensual sex with his father, in private, with no risk of harm to anyone else. Should such things be forbidden? If discovered, should they be punished? These questions were previously the sole province of law and ethics. In the digital lifeworld they will also be matters of code.

I want you to imagine a VR platform that provides an immersive experience for its users. Sight and sound are provided by a headset and earphones. Smells are synthesized using olfactory gases and oils. Physical touch is simulated by way of a ‘haptic’ suit and gloves, together with customized props in the appropriate form. Sexual stimulation, when needed for the experience, is provided by ‘teledildonic’ apparatus designed for the task.26 The system is even able to respond directly to brainwaves it detects using electroencephalography (EEG).27 It is used in the privacy of the home. Only the user is shown their virtual adventures: not even the manufacturer receives that information.

What would you choose to do with this kind of technology? Throw the winning pass in the Super Bowl? Go toe-to-toe with Muhammad Ali? Dance with Fred Astaire?

What if, instead, you wanted to ‘experience’ what it was like to be a Nazi executioner at Auschwitz? Or to recreate, in first person, the final day of Mohammed Atta, one of the 9/11 hijackers? Should users be able to simulate, in virtual reality, the act of drowning a puppy or strangling a kitten? Should they be allowed to torture and mutilate a virtual avatar of their neighbour? What about the ‘experience’ of raping a child? Should it be possible to know, in virtual reality, what it felt like to nail Jesus to the cross?

For most, these scenarios are horrendous even to contemplate. They grossly offend our shared sense of morality. But that’s why we have to think about them. The test of a society’s commitment to liberty is not what it thinks about conduct within the moral mainstream, but rather what it has to say about activities considered unspeakable, obscene, or taboo.

We might well agree that choosing to experience these things in VR is itself likely to corrode people’s moral character. That’s a kind of harm to self. But no one else is actually mutilated, raped, or violated. If you believe that society has no business policing people’s private morality then you can logically conclude that in VR anything should be permitted. So long as no harm is caused to others, people should be left to do as they please. It’s not forbidden, after all, to fantasize about these things in the privacy of one’s own mind. Is VR so different?

Some philosophical context is helpful in thinking this through.

It’s long been seen as bad form to punish people for what goes on inside their heads. In the nineteenth century, as we’ve seen, John Stuart Mill argued that society has no business prohibiting conduct that affects no one other than the individual in question. One of Mill’s contemporaries, a judge called Sir James Fitzjames Stephen, disagreed, arguing that we have every reason to be ‘interested not only in the conduct, but in the thoughts, feelings, and opinions’ of other people.28 A century later, the same disagreement resurfaced in the famous Hart–Devlin debate of the 1960s. Herbert Lionel Adolphus Hart was the quintessential liberal law professor, a genial man with a colossal intellect and a gentle manner. Lord Devlin cut an altogether sterner figure. Like Sir James Fitzjames Stephen, he too was a judge. The Hart–Devlin debates were prompted by the publication in 1957 of the Report of the Committee on Homosexual Offences and Prostitution, generally known as the Wolfenden Report. The Report’s famous conclusion was that ‘It is not the duty of the law to concern itself with immorality as such’: ‘there must remain a realm of private morality and immorality which is, in brief and crude terms, not the law’s business.’29

Devlin disagreed. He believed that a society is a ‘community of ideas’, not just political ideas, but ‘ideas about the way its members should behave and govern their lives’. That is to say, every society must have shared morals.30 Without them ‘no society can exist’.31 To let immorality go unpunished, even immorality that causes no identifiable harm to other people, is to degrade the moral fabric that holds people together. ‘A nation of debauchees,’ he wrote in 1965, ‘would not in 1940 have responded satisfactorily to Winston Churchill’s call to blood and toil and sweat and tears.’32

H. L. A. Hart accepted that some shared morality was needed for the existence of any society, at least in order to limit violence, theft, and fraud. But he dismissed Devlin’s notion that the law has any business in regulating morality. There is no evidence, Hart suggested, that deviation from ‘accepted sexual morality’ by adults in private is ‘something which, like treason, threatens the existence of society’: ‘As a proposition of fact it is entitled to no more respect than the Emperor Justinian’s statement that homosexuality was the cause of earthquakes.’33 For Hart, our personal choices, especially those made in private, have no bearing on whether we are loyal citizens. (Hart himself had no problem answering Churchill’s call to service, having worked in military intelligence for most of the Second World War. Nor had the great mathematician and codebreaker Alan Turing, who also worked at Bletchley Park, and who was later the subject of criminal prosecution for homosexual acts.)

So how would the Hart–Devlin debate play out today?

We might firstly argue that VR is actually pretty different from pure fantasy. Its realism and sensual authenticity bring it closer to actually doing something than merely thinking about it. The trouble with this argument is that if you believe on principle (as Mill and Hart did) that mere immorality should never be made the subject of coercion, then to say something is very immoral, as opposed to merely quite immoral, doesn’t take you much further.

Another argument is that if we let people do extreme things in virtual reality then they might be more likely to do them in ‘real reality’. Violent sexual experiences in VR could encourage acts of sexual violence against people. This is an empirical argument that can be tested through experiment. And early research suggests that it might well be right.34 If it is true that virtual behaviour causes real behaviour, then prohibition of certain VR experiences could be justifiable under the harm principle. That said, if my obscene VR experience involves virtual harm to me alone—say, a fantasy about being violently dominated—then it doesn’t obviously follow that I would go out and inflict harm on others.

A third, more Devlinesque objection is that untrammelled liberty in VR could be simply too corrosive to shared morality or to our traditional way of life.35 Another way of putting it is to say that our liberties should be structured so as to ‘elevate or perfect human character’ rather than debasing it.36 Amusingly, one early study suggests that the experience of being able to fly in VR encourages altruistic behaviour in the real world, apparently by triggering associations with airborne superheroes like Superman.37 Another variant on this argument is that no system of morality is inherently better than another, but that multiple moralities within the same community are a problem. As Devlin put it, in the absence of ‘fundamental agreement about good and evil’ society will ‘disintegrate’. ‘For society is not something that is kept together physically; it is held by the invisible bonds of common thought.’38 In modern coinage, we might call this the problem of fragmented morality.

A final argument against untrammelled virtual liberty, with broader implications for all digital technology, is that those who manufacture the VR hardware and software ought not to be able to profit from such grotesquery. In its strong form, the argument is that tech firms should be legally prohibited from making such technologies at all, or required to code them so that certain virtual experiences are made impossible. It might, alternatively, be said that manufacturers should be given the discretion to choose the functionality of their VR systems—and if that means the facilitation of obscenity, then on their moral heads be it. There are two difficulties with the second approach. It doesn’t solve the problem of fragmented morality. And it entirely delegates important questions of human liberty to private firms. Is it wise for matters of public morality to be reduced to questions of corporate strategy? It’s questionable, to say the least, whether our freedoms should be determined by the tastes of managers, lawyers, and engineers at tech firms. How would we feel about a platform that permitted virtual violence against only one racial group; or which allowed straight sex but not gay sex? Or (thinking more broadly) a self-driving car that refused to take its passengers to a fast food drive-through because its manufacturers oppose factory farming? Or a general-purpose surgical robot that refused to perform legal abortions because its manufacturers were religious Christians?

Technology will bring new affordances, it’s true, but that also means new opportunities for immoral conduct and deviancies we can’t even yet imagine. Some will rejoice in the prospect. Others will shudder. Will we be free to do as we please so long as others aren’t harmed? Tech firms can’t be left to decide these questions on their own.

Digital Liberty

Let’s pause to think through the implications of the last two chapters. What we need, I suggest, is a new collection of concepts that can explain different approaches to the future of freedom. I set out a modest selection here. See which most appeals to you.

Digital Libertarianism is the belief that freedom in the future will mean freedom from technology. Every line of code that exerts power is contrary to liberty. Freedom begins where technology ends. This doctrine supports the reduction of all forms of digital power, whatever their nature or origins. No one should ever be forced to use digital systems where another means would do. If I don’t want ‘smart’ appliances in my home, I shouldn’t have to install them.

Digital Liberalism is the more nuanced belief that technology should be engineered to ensure the maximum possible individual liberty for all. This is the ‘wise restraints’ approach. To work, it requires that code should, as far as possible, be neutral between different conceptions of the good. It should not aggressively promote one course of life over another. It should leave individuals the largest possible sphere of personal freedom to determine their paths, perhaps through individual customization of owned devices.

Digital Confederalism is the idea that the best way to preserve liberty is to ensure that people can move between systems according to whose code they prefer. If I consider one speech platform, for instance, to be unduly restrictive, I should be able to find another one. Digital Confederalism requires that for any important liberty—communication, news-gathering, search, transport—there must be a plurality of available digital systems through which to exercise that liberty. And it must be possible to move between these systems without adverse consequences.39 In practice, a private or state monopoly over any given platform or technology could be fatal to Digital Confederalism (chapter eighteen).

By contrast, Digital Paternalism and Digital Moralism hold that technology should be designed, respectively, to protect people from the harmful consequences of their own actions and steer them away from lives of immorality. How far they do this is a matter of taste. A refrigerator that warns you about the health consequences of eating another slice of pie would constrain you less than one that physically prevented access to any more pie until you’d digested the previous helping. A VR system that placed limits on extreme obscenity would be less restrictive than one that only allowed its users to experience wholesome activities like going to virtual church or attending a virtual lecture.

Finally, Digital Republicanism is the belief that nobody should be subject to the arbitrary power of those who control digital technologies. At the very least, it means we must be helped to understand how the technologies that govern our lives actually work, the values they encode, who designed and created them, and what purpose they serve. More radically, authentic Digital Republicanism would require not only that we understand the digital systems that exert power over us, but that we actually have a hand in shaping them. It’s not good enough to rely on the benevolence and wisdom of tech companies to make important decisions about our freedom. So long as they can arbitrarily change the rules, making technology work in their interests and not ours (the argument runs) we must consider ourselves unfree. Even in technological terms, this is actually an old idea, not a new one. As Douglas Rushkoff explains, in the early days of personal computing, ‘there was no difference between operating a computer and programming one.’ Computers were ‘blank slates, into which we wrote our own software’.40 We controlled our technology, not the other way round. That’s not to say that Digital Republicanism means everyone should retrain as a software engineer. But it is an activist doctrine. It requires citizens to cultivate the civic virtues that will be needed to hold the state and tech firms to account: technical understanding where possible, but also vigilance, prudence, curiosity, persistence, assertiveness, and public-spiritedness. It’s about ensuring that we do not become subject to rules that we can’t understand, that we haven’t chosen, and that could be altered at any time. Demand transparency, the slogan may run. Demand accountability. Demand participation. ‘Program, or be programmed.’41

Liberty and Democracy

An idea has haunted the last two chapters without ever becoming explicit. It’s that there is an important bond between liberty and democracy. For liberals, the nature of this bond is simple: only in a democracy can the people make sure that their liberties are not trampled on and take part in crafting the ‘wise restraints that make men free’. For republicans in the Roman tradition, the bond is even closer. They believe that a restriction imposed by a democratic process is less inimical to freedom than an identical restriction imposed by a private body, precisely because it was decided democratically. As Rousseau puts it, ‘What man loses by the Social Contract is his natural liberty . . . what he gains by the social contract is civil liberty.’42 Combining the liberal and republican approaches, what becomes clear is that democratic accountability will become more important than ever in the digital lifeworld. It will be an indispensable weapon when it comes to protecting ourselves against the growing power of the state and private firms. I said in chapter nine that the digital lifeworld would be thick with power and drenched with politics. If we care about liberty, there must also be a corresponding increase in the ability of the citizenry to hold that power to account.

And so we turn to democracy.