‘We can hardly touch a single political issue without, implicitly or explicitly, touching upon an issue of man’s liberty. For freedom . . . is actually the reason why men live together in political organization at all; without it, political life as such would be meaningless.’
Hannah Arendt, ‘Freedom and Politics’ (1960)
Humans have fought about freedom for centuries. Countless screeds have been published, orations delivered, constitutions enacted, revolutions fought, and wars waged—all in the name of some freedom or other. But the rhetoric of freedom conceals a more ambiguous reality. The twentieth century taught us that certain types of freedom are not inconsistent with ignorance, war, and disease. ‘Liberty does not mean all good things’ writes Friedrich Hayek: ‘to be free may mean freedom to starve’.1 The great philosopher and psychoanalyst Erich Fromm, who fled Germany in the 1930s, wrote in The Fear of Freedom (1942) that freedom can bring unbearable anxiety, isolation, and insecurity. In his native land, he wrote, millions ‘were as eager to surrender their freedom as their fathers were to fight for it’.2
Visions of the future of freedom tend to take two forms. Optimists say that technology will set us free, liberating our bodies and minds from the shackles of the old ways. Pessimists predict that technology will become yet another way for the strong to trample on the freedoms of the weak. Which vision is right? Why should we care? My own view is that there are causes for both optimism and pessimism, but what the future requires above all is vigilance, if we are to ensure what John F. Kennedy called the ‘survival and success of liberty’.3 This chapter looks at the relationship between individuals and the state; chapter eleven looks at the relationship between individuals and big, powerful tech firms. At the end of Part III, I offer a set of new concepts designed to help us think clearly about freedom’s future: Digital Libertarianism, Digital Liberalism, Digital Confederalism, Digital Paternalism, Digital Moralism, and Digital Republicanism. But for now, don’t worry about these monstrous terms. We begin simply by trying to understand what people mean when they talk about freedom.
For career political theorists, liberty is a gift that never stops giving. It is a concept of breathtaking elasticity with a dazzling array of acceptable meanings. Entire working lives have been spent happily trying to define it. Heretically, I suggest that freedom can be distilled into three categories: freedom of action, freedom of thought, and freedom of community (the last also known as the republican conception of freedom).
Freedom of action means the ability to act without interference: to gather with others, to travel the land, to march and demonstrate, to engage in sexual activity, to study, write, and speak unmolested. It’s a physical conception of freedom, concerned with actions, activities, motions, and movement rather than the inner workings of the mind.
A different type of freedom lies within.
The ability to think, believe, and desire in an authentic way, to develop one’s own character, to generate a life plan, and to cultivate personal tastes are all fundamental aspects of freedom of thought. It’s closely associated with another concept, autonomy, derived from the Greek words autos (self) and nomos (rule or law): mastery over the inner self.
For some thinkers, it’s not enough for people to be guided by their own desires. A truly free mind, they say, is one that can ‘reflect upon’ those desires and change them in light of ‘higher-order preferences and values’.4 The fact that a person craves a stiff drink means little if she’s unable to reflect on why she so craves it—and perhaps adapt her desire accordingly. Jean-Jacques Rousseau argued that ‘to be governed by appetite alone is slavery’5 and contemporary philosophers like Tim Scanlon say that true autonomy requires us to weigh ‘competing reasons for action’.6 Freedom of thought, for Scanlon, really means freedom to think in a particular way: consciously, clearly, rationally.
When we think about freedom of mind and body, it always pays to remember the myth of Odysseus. Odysseus ordered his sailors to lash him to the mast of his ship and to refuse all his subsequent orders to be let loose. He did this to protect his men from the deadly Sirens, who he knew would try to lure the vessel onto the rocks with their song. Coming within earshot of the Sirens, Odysseus writhed in his bonds and demanded to be set free. But the loyal men, who had plugged their ears with beeswax, refused to obey. The ship sailed on unharmed.

While tied to the mast, Odysseus was certainly unfree in one sense. His immediate conscious ‘self’ was prevented from doing what it wanted, as was his body. But this unfreedom was the result of a free choice made by his earlier, rational self. The myth demonstrates something we already know: that sometimes we act intuitively, instinctively, and spontaneously, while other times we act rationally, carefully, and consciously. Sometimes we act consistently with our moral convictions, while other times we violate our own deepest ethics. It’s as if within each individual there’s not one unitary self but rather an assortment of competing selves that struggle and vie for supremacy. In the moment that the addict injects himself, his conscious ‘self’ (in the realm of action) is surely free, but his deeper, rational ‘self’, which longs to kick the habit, is shackled by the ravages of addiction.

Three points follow. The first, as Berlin put it, is that ‘conceptions of freedom directly derive from views of what constitutes a self, a person, a man.’7 To say ‘I am free’ is therefore to prompt the question: ‘which me?’ Second, it’s possible to be free and unfree simultaneously, depending on which ‘self’ is calling the shots. Third, it’s possible to will ourselves into partial or temporary unfreedom, as Odysseus did. The same is true when a person voluntarily submits to the martial regimen of the armed forces or asks to be tied up by a lover.
I think of Odysseus when I see people using the Freedom app on their smartphones. At the user’s request, this app temporarily blocks access to the internet and social media, allowing them to work without distraction.8 Like Odysseus, users can’t break their digital chains until the specified time.
Before turning to republican freedom, it’s worth noting that free thought and free action (typically aspects of a liberal view of freedom) complement each other. There’s not much point in being able to do loads of things if we only want to do them because someone else told us to. The ‘right to express our thoughts,’ Dr Fromm reminds us, ‘means something only if we are able to have thoughts of our own.’9 Hence the power of perception-control, discussed in chapter eight. Likewise, it’s all very well being able to think for ourselves, but without the ability to put those thoughts into action they’d remain forever imprisoned in the mind.
While free action and free thought go hand in hand, republican freedom is a rather different beast.
While actions and thoughts are generally attributed to individual persons, there is a broader way of thinking about liberty, which holds that to be free is to be an active member of a free community. This has been called the republican view of freedom. Its leading modern proponents are Philip Pettit and Quentin Skinner.10 There are three dimensions to it.
First, a free community is one that governs itself by pursuing the will of its citizens without external interference. The poet and polemicist John Milton (1608–1674), author of Paradise Lost, called this ‘common liberty’.11 It’s a very old idea. Indeed, the word autonomy was used to describe city-states long before it came to describe people.12
Second, true freedom comes from participation in politics and the development of what Cicero called virtus, ‘which the Italian theorists later rendered as virtù, and which the English republicans translated as civic virtue or public-spiritedness’.13
Third, freedom is undermined not only by actual interference but by the mere existence of arbitrary power: a crucial distinction that underpinned both the English and American Revolutions. In the first half of the seventeenth century, the ‘democratical gentlemen’ of the English Parliament began to complain about the power of King Charles I. It wasn’t just that the king was behaving badly (although he was). The bigger problem was that the king’s constitutional supremacy meant that he could behave badly any time he wanted to. His power, as Thomas Paine put it, was arbitrary, meaning that his subjects relied on his whim and favour for the survival of their freedom.14 If the king became insane, or vindictive, or despotic, he could snatch away the people’s freedom with impunity. In time, enough of the king’s subjects decided that ‘mere awareness of living under an arbitrary power’ was itself an unacceptable constraint on their liberty.15 The king was removed from the throne and his head was removed from his shoulders.
The English republic did not survive, but late in the next century the fledgling American colonies also won their independence from the Crown. That revolution, too, was justified by the belief that ‘if you depend on the goodwill of anyone else for the upholding of your rights’ then ‘even if your rights are in fact upheld—you will be living in servitude.’16
The idea is simple: a freedom that depends on the restraint of the powerful is no kind of freedom at all.
The republican ideal can ultimately be traced back to Roman thinkers—Cicero in particular—who saw the Roman Republic as the paradigm of a free community. Unlike freedom of thought and action, the republican conception of freedom places less emphasis on individuals doing as they please and more on the freedom of the community as a whole. Paradoxically, this means that in the name of freedom, individuals might sometimes be forced to act in a more public-spirited way than if the choice had been left to them.
The challenges to freedom in the future will be many and severe. The future state, armed with digital technologies, will be able to monitor and control our behaviour much more closely than in the past. But before the doom and gloom, let’s consider some of the extraordinary new possibilities for freedom that the digital lifeworld might hold.
It’s useful to think of technologies in terms of the affordances they offer. The term describes the actions made possible by the relationship between a person and an object.17 For an adult, a chair offers a place to sit. For a child, it might offer a climbing frame, a hiding place, or a diving board. Same object, different affordances. There’s no doubt that technologies in the digital lifeworld will offer many new affordances. Thanks to digital technology, we already enjoy ways of working, travelling, shopping, creating art, playing games, protecting our families, expressing ourselves, staying in touch, meeting strangers, coordinating action, keeping fit, and improving ourselves that would have been unimaginable to our predecessors. Still newer forms of creation, self-expression, and self-fulfilment should all be possible in the future. And much of the dross that consumes our time—cleaning, shopping, admin—will increasingly be automated, freeing up our days for other pursuits. These aren’t trivial gains for freedom.
To take one specific set of affordances: many people currently considered disabled will find their freedom of action significantly enlarged in the digital lifeworld. Voice-controlled robots will do the bidding of people with limited mobility. Self-driving vehicles will make it easier to get around. Those unable to speak or hear will be able to use gloves that turn sign language into writing.18 Speech recognition software embedded in ‘smart’ eyewear could allow all sounds—speech, alarms, sirens—to be captioned and read by the wearer.19 Brain interfaces will allow people with communication difficulties to ‘type’ messages to other people using only their thoughts.20
As for freedom of thought, in the short time we’ve lived with digital technology we’ve already witnessed an explosion in the creation and communication of information. This ought to pose a threat to the enemies of free thought: ignorance, small-mindedness, and monoculture. We can imagine a future in which more and more people are able to access the great works of human culture and civilization with increasing ease. I call this something to imagine, however, because it isn’t inevitable. As I argue throughout this book, how we choose to structure the flow of information in the digital lifeworld will itself be a political question of the first rank.
Some futurists predict that people in the future will be able to augment their intellectual faculties through biological or digital means.21 This would obviously represent a transformation in freedom of thought, rolling back the frontiers of the mind itself. But wisely deployed, technology can free our minds in more subtle ways. Little prods, alerts, prompts, reminders, bulletins, warnings, bits of advice—together these digital offerings could combine to make us much more organized, informed, sentient, thoughtful, and self-aware.
Of course, the line between technology that influences us and technology that manipulates us is not always clear. In the future it may be difficult to tell whether a particular form of scrutiny or perception-control is making us more autonomous or whether it’s actually exerting control in a way that’s too subtle to see. How far can a person’s thoughts be subject to outside influences before they cease to be ‘free’? The eccentric philosopher Auguste Comte (1798–1857) believed that the key to good thinking was insulation from the ideas of others. He called this ‘cerebral hygiene’.22 (Students: next time you haven’t done the required reading for class, blame ‘cerebral hygiene’.) I prefer the view of Helen Nissenbaum, that to be ‘utterly impervious to all outside influences’ is not to be autonomous but to be a ‘fool’.23 Harvard professor Cass Sunstein has done some interesting thinking on this subject, most recently in The Ethics of Influence (2016). For Sunstein, reminders, warnings, disclosures of factual information, simplification, and frameworks of ‘active choosing’ are nudges that influence people but preserve their freedom of choice. Influence only becomes manipulation ‘to the extent that it does not sufficiently engage or appeal’ to people’s ‘capacity for reflection and deliberation’.24 Or as Gerald Dworkin puts it in The Theory and Practice of Autonomy (1989), we need to distinguish methods that ‘promote and improve’ people’s ‘reflective and critical faculties’ from those that really ‘subvert’ them.25
You’ve heard the case for expanded freedoms in the digital lifeworld. Now let’s hear the other side. As we’ve seen, technology will make it easier for political authorities to enforce the law. This is no small claim, because humans have never known a more effective, wide-ranging, or stable system of control than the one we currently live with. The modern state already enjoys the formidable power to enforce laws using the threat or application of force. And yet its powers are weedy in comparison to those that will be enjoyed by states in the digital lifeworld. This could well mean a lessening of liberty. Here I identify four areas of possible concern that could flow from the arrival of the supercharged state.
Have you ever grabbed a shopping bag at the supermarket without paying for it? Or paid someone cash-in-hand knowing they won’t declare it for tax? Perhaps you’ve streamed an episode of Game of Thrones without paying, dodged a bus fare, nabbed an illicit refill from the soda dispenser, ‘tasted’ one too many grapes at the fruit stand, lied about your child’s age to get a better deal, or paid the lower takeaway tax rate on a meal and then eaten it in the restaurant. These are all illegal acts. But according to one poll, as many as 74 per cent of British people confess to having done them.26 That’s hardly a surprise, and not because the British are scoundrels. People aren’t perfect. It would be absurd to suggest that the world would be better if every one of these indiscretions were routinely detected and punished.
All civilized legal systems offer a slender stretch of leeway in which people are sometimes able to get away with breaking the law without being punished. A surprising amount of liberty lives in this space. Its existence is a pragmatic concession to the fact that most of us can be trusted to obey the law most of the time. One of the dangers in the digital lifeworld is that the law will further colonize this precious hinterland of naughtiness. This would be the natural consequence of pervasive and intimate scrutiny of our conduct, and of digital law that enforces itself and adapts to different situations. There’s a big difference between the world we live in now and one in which digital rights management (DRM) makes it impossible to stream Game of Thrones; the bus fare is automatically deducted from your smart wallet when you board; the soda dispenser recognizes your face and refuses additional service; the ‘smart’ grape stand screeches when its load is lightened by theft; and your child’s age is verified by an instantaneous retina scan. The difference is one of day-to-day freedom.
A more serious concern is that using technology to predict and prevent crime will have troubling implications for freedom. We saw in chapter seven that machine learning systems are increasingly being used to predict crimes, criminals, and victims on the basis of largely circumstantial evidence. As Blaise Agüera y Arcas and others explain, there’s nothing new in trying to predict deviant behaviour. In the nineteenth century criminality was associated with physical traits. The bandits tormenting southern Italy, for instance, were said to be ‘a primitive type of people’. A dimple at the back of the skull and an ‘asymmetric face’ were taken to demonstrate their innate predisposition toward crime.27 Criminologists and a fair few quacks since have sought more ‘scientific’ ways of predicting crimes before they happen. For a while the dominant sub-field in criminology was psychiatry. It was later joined by sociology.28 In the second half of the twentieth century, trying to predict crime became an ‘actuarial’ endeavour, involving the use of statistical methods to determine patterns and probabilities.29
A recent paper claims that machine learning systems can predict the likelihood that a person is a convicted criminal with nearly 90 per cent accuracy using a single photograph of the person’s face.30 This paper has received a great deal of criticism, but it’s not exactly an outlier. An Israeli startup with the unforgivable name of Faception claims to be developing a system that can profile people and ‘reveal their personality based only on their facial image’. That includes potentially categorizing them as ‘High IQ’, ‘White-Collar Offender’, ‘Paedophile’, and ‘Terrorist’—based on their face.31 This too may turn out to be far-fetched, but we already know that AI systems can learn a huge amount about what John Stuart Mill called ‘the inward domain of consciousness’32 just by observing our expression, our gait, our heartbeat, and the many other clues we give away just by being alive (chapter two). The ability to detect deception through mere observation would fundamentally alter the balance in the relationship between authorities and the people.
The upshot is that the predictive powers of digital systems are growing significantly stronger, and they’re increasingly being used to enforce the law. Why does this matter for freedom? Well, if you’re going to restrict people’s liberty on the basis of a prediction about their future behaviour, then you should be able to explain that prediction. But often we can’t. Predictive Policing systems frequently tell police to direct their resources to a particular area for reasons that are quite unclear.33 But in a free society you should probably be able to ask the policeman (or police droid) standing outside your house why he (or it) is there, and get a better answer than ‘because the system predicted this is where I should stand’. The same is true when it comes to the use of predictive sentencing. We already use algorithms to predict how likely an offender is to commit another crime. If the system suggests a high likelihood, then the judge may choose to impose a longer sentence. As with Predictive Policing, however, it is often unclear why a particular offender has scored well or badly. In the US case of Wisconsin v Loomis, the defendant’s answers to a series of questions were fed into Compas, a predictive system used by the Wisconsin Department of Corrections. The Compas system labelled him ‘high risk’ and he went down for a long time. But he didn’t know why and was not legally permitted to examine the workings of the Compas algorithm itself.34 Obviously when the defendant was incarcerated, his freedom of action was curtailed. But we can also think about this case from the perspective of republican freedom: the idea that a person’s liberty could be restricted on the basis of an algorithm whose workings are utterly opaque is the very antithesis of that ideal.35
There’s also something philosophically problematic about restricting people’s freedom on the basis of predictions about their future conduct. To see why, it helps to place predictive criminology within a broader intellectual tradition. For centuries, political thinkers have tried to discover general formulae, laws, and social forces that can explain the unfolding of human affairs. ‘Race, colour, church, nation, class; climate, irrigation, technology, geo-political situation; civilisation, social structure, the Human Spirit, the Collective Unconscious’—all have been said, at one time or another, to be the forces that ultimately govern and explain human activity.36 The holy grail for thinkers in this tradition was to predict the future of politics perfectly:37
with knowledge of all the relevant laws, and of a sufficient range of relevant facts, it will be possible to tell not merely what happens, but also why; for, if the laws have been correctly established, to describe something is, in effect, to assert that it cannot happen otherwise.
Probably the most famous exponent of this way of thinking was Auguste Comte (the one who also advocated cerebral hygiene). Comte believed that all human behaviour was pre-determined by ‘a law which is as necessary as that of gravity’.38 Politics, therefore, needed to be raised ‘to the rank of sciences of observation’ and there could be no place for moral reflection. Just as astronomers, physicists, chemists, and physiologists ‘neither admire nor criticise their respective phenomena’ so too the role of the social scientist was merely to ‘observe’ the laws governing human conduct and obey them purposefully.39 Comte, who invented the term ‘sociology’, would have been fascinated by machine learning systems that could predict human behaviour. I suspect that his approach to politics might still find some sympathy among the more mathematically minded today.
But here’s the problem. To predict the likelihood of future offending is to accept that human conduct is, to a significant extent, governed by general laws and patterns that are nothing to do with the free choices of the individual. If a machine can take personal, sociological, and historical facts about a person and make accurate predictions about their future activity, then that suggests that humans have less freedom of thought and action than the criminal justice system might like to assume. And if people are less free, doesn’t that mean they’re less morally responsible too? This is the paradox at the heart of predictive sentencing. In many penal theories, punishment is premised on the notion that individuals can be held morally responsible for their choices. But prediction is premised on the notion that individuals’ choices are often determined by factors outwith their immediate control. Instead of incarcerating people, therefore, shouldn’t our response to predictive policing technology be to work harder at understanding why people commit crimes, and, as far as possible, to tackle those causal factors instead?
It’s said that the act of freely obeying the law teaches us to behave ethically. The point can be traced back at least to Aristotle, who taught that the state exists ‘for the sake of noble actions’.40 In the Nicomachean Ethics he argued that ‘virtue of character results from habit . . . Correct habituation distinguishes a good political system from a bad one.’41
There are many reasons why you or I might obey the law: habit, morality, convenience, prudence, fear of punishment. But whatever the dominant reason, the conscious act of obedience teaches us to behave ethically. Part of becoming a good citizen is learning to think about the rights and wrongs of what we do. That is liberty’s gift. In a world where many moral decisions are made for us, however, because many options simply aren’t available in the face of self-enforcing laws, or because the inevitability of punishment means it’s not worth the risk, we won’t be called upon to hone our character in the same way. In such a world, as Roger Brownsword puts it, the question will usually be what is practical or possible rather than what is moral.42 Children born into a society in which misdeeds are made impossible or extremely difficult will not learn what it feels like to choose not to break the law.
In 1911 the mathematician Alfred North Whitehead wrote that ‘Civilization advances by extending the number of important operations which we can perform without thinking about them.’43 Again, Silicon Valley Comteans might applaud this kind of dictum but it demands philosophical interrogation. Is the state’s business to prevent crime at all costs? Or would the automation of morality not diminish us in some way, stripping away an important part of our humanity—the freedom to do bad things if we wish, but also the freedom to choose not to?
So far in this chapter the tacit assumption has been that we’re dealing with a state that governs broadly in the interests of its citizens and doesn’t seek actively to oppress them. It’s also been assumed that the state has not taken complete control over the means of force, scrutiny, and perception-control. But we must now consider the threat to liberty that would be posed by an authoritarian regime that was able to harness the technologies of power for the purposes of repression. To illustrate what this kind of society might look like, imagine an individual—call him Joseph—who plans to attend a forbidden protest.
Joseph was never meant to hear about the protest at all. An email from a comrade containing details of the venue was censored en route and never arrived. The AI systems that reported, filtered, and presented Joseph’s news didn’t mention it in their bulletins. Online searches yielded no results. Somehow, against the odds, he knows about it.
What Joseph doesn’t know is that the regime’s crime-prediction systems are already interested in him. Various snippets of personal data have suggested that he might be engaged in subversive activity. Three months ago he foolishly used the word ‘demonstration’ in a message to a friend. Picked up by surveillance systems, this prompted an automated review of his historic social media activity, which revealed that, ten years ago, he used to associate with two of the known protest ringleaders. To worsen matters, just two weeks previously he downloaded the design for a balaclava and manufactured one at home using his 3D-printer. This pretty much guaranteed him the wrong kind of attention.
On the morning of the protest, Joseph’s digital assistant emphasizes that there’s going to be awful rain and traffic all day: better to stay indoors and work from home. Joseph, however, is aware that the device is unencrypted. Has it been hacked by the state authorities? Is this a subtle form of repression?
Disquieted but undeterred, Joseph sets off for the protest site, a plaza in the centre of town. He notes with cold amusement that there is no rain and little traffic. But at his first attempt he’s unable to reach the vicinity of the protest. The public transport system, a municipal fleet of self-driving cars, refuses to drop passengers within a hundred yards of the plaza where the protest is taking place. (In our time, New York’s Metropolitan Transportation Authority has been known to force trains to pass through stations near protests, ‘to keep people from assembling’.)44
Joseph instead proceeds on foot.
Conscious of the facial-recognition cameras now ubiquitous in the city centre, he pulls the balaclava over his face. Also aware that cameras can identify him by his normal gait, he wears shoes that are a size too small in order to change the way he walks. But he can’t escape attention. His posture gives him away. The way he holds his body is suggestive of furtive behaviour. Several miles above his head, unseen, a drone quietly records his progress. (The US Army and the Defense Advanced Research Projects Agency (DARPA) have developed a ‘surveillance platform that can resolve details as small as six inches from an altitude of 20,000 feet’.)45
As he approaches the plaza, Joseph clocks with concern that his comrades are nowhere to be seen. He won’t know until later that they’ve been placed under house arrest, remotely confined to their homes by state-controlled ‘smart locks’ installed on their apartment doors and windows.
Reaching the outskirts of the protest area, Joseph sees that a dispiritingly small group of protesters has been penned into the middle of the plaza by a phalanx of robotic bollards. A swarm of airborne riot-control drones circles overhead, blaring out orders and occasionally spraying paintballs into the throng. (Something like this, the Skunk Riot Control Copter, has already been developed in South Africa. It’s an airborne drone armed with strobe lights, cameras, lasers, speakers, and four paintball guns that can fire, every second, 20 paintballs made of dye, pepper spray, or solid plastic.)46
Suddenly, and without any introduction, a holographic image appears vividly in the air in front of Joseph’s eyes. He freezes. The image is of him. It shows him knelt in a stress position in a prison cell, weeping and in agony, his hands cuffed behind his back. This image, Joseph realizes, has been telegraphed directly onto his smart contact lenses.47 It’s a warning. We see you. Go home or face the consequences.
The blood drains from Joseph’s face. He turns and runs.
What this little vignette shows is the imposing range of powers that digital technology could lend to an authoritarian government in the future. What’s striking is that only a few of the means of power involve physical brutality; the rest are softer and less obtrusive. Most of them could readily be automated. That’s what makes them so dangerous.
The tale of Joseph raises the question of the future of dissent. Even where the state purports to act in the best interests of its citizens, no legal system is perfect. It’s why many great thinkers have argued that obedience to one’s conscience is more important than obedience to the law, and that breaking the law can be justified as a means of encouraging a change in it. ‘It is not desirable to cultivate a respect for the law,’ writes Henry David Thoreau, ‘so much as for the right.’48 The doctrine of civil disobedience holds that laws may be deliberately broken in a public, non-violent, and conscientious way, with the aim of bringing about change.49 As Martin Luther King Jr wrote from Birmingham City Jail in 1963, where he had been imprisoned for marching for civil rights:50
There are just and there are unjust laws . . . I submit that an individual who breaks a law that conscience tells him is unjust, and willingly accepts the penalty by staying in jail to arouse the conscience of the community over its injustice, is in reality expressing the very highest respect for law.
Civil disobedience of the kind practiced by Dr King requires a certain minimum of liberty in order to be possible, but like all forms of lawbreaking, it will become increasingly difficult in the digital lifeworld. New forms of civil disobedience are likely to come to the fore. If power is to be exerted through digital technology, then resistance will increasingly take a digital form too. Hacking is likely to be foremost among these new forms.
Hacker culture has existed for a while and admits a number of definitions. At its broadest, hacking refers to a ‘playful’ and ‘pranking’ attitude among programmers and coders, although governments and corporations engage in it as well. We are concerned here with situations where a person gains unauthorized access to a digital system for political ends. Such hacking, to borrow Gabriella Coleman’s artful phrase, will usually be ‘either in legally dubious waters or at the cusp of new legal meaning’.51 I call it political hacking. Its purpose might be to access information, to expose the functioning of a system, or even to alter or disable a particular system—perhaps for the sake of liberty.
A lot of hacking already has a political flavour, albeit one marked by differences in approach. According to Coleman, hackers from Europe and North and South America have typically been more ‘antiauthoritarian’ than their counterparts elsewhere, while hackers in southern Europe have typically been more ‘leftist, anarchist’ than those from the north. Chinese hackers are ‘quite nationalistic’ in their work.52
If power subsists in the control of certain digital systems, then hacking diminishes that power by reducing the efficacy of those systems. It’s a serious business. As hacking grows in political importance, it would be quite naïve to entrust it entirely to the judgment of hackers themselves, particularly (as appears to be the case) if the work of hacking itself is becoming increasingly automated.53 As with all important political activity, we’ll need to develop an acceptable ethical framework by which the work of political hackers can be judged. Drawing on the ideas of John Rawls, for instance, we might say that:54
Hacks can only be justified in the public interest, not the interests of the hacker or any other party.
Hacks affecting policies that were decided democratically should be limited to cases of ‘substantial and clear injustice’.
Hacks should be proportionate to the injustice they seek to remedy, going no further than necessary to achieve their aim.
Hacks should never cause physical harm to persons.
As far as possible, the consequences of hacks should be public and visible rather than ‘covert or secretive’.
Hacks should be used sparingly, even when justified.
In a democracy, hacks should be a last resort after efforts made in ‘good faith’ through the proper procedural channels.
In a democracy, hackers must accept peacefully the consequences of their conduct, including the possibility of arrest and punishment. (And the punishment for hacking should be proportionate to the crime and take into account its political function.)
These are just some possible principles. There will be others.
As well as hacking, an increasingly important form of digital resistance will be cryptography, ‘the use of secret codes and ciphers’.55 Encryption is already an indispensable part of digital systems. It protects the integrity of our online transactions, the security of our databases, and the privacy of our communications. It defends our public utilities, communications networks, and weapons systems. Its most obvious function is to repel scrutiny from those who seek to gather data about us. But it has a bearing on the means of force too because it prevents platforms and devices from being hijacked or reprogrammed from afar.
In fact, encryption is the most important defence against malevolent hacking, where a person gains unauthorized access to a digital system for reasons that are not in the public interest. Some of the hacks we hear about today are reasonably funny, like when a ‘smart’ toilet was reprogrammed to fire jets of water onto the backside of its unfortunate user.56 Others, however, are more sinister, like the ‘smart’ doll that could be reprogrammed to listen and speak to the toddler playing with it.57 Still others are deeply troubling: in 2016, ‘ransomware’ held people’s medical records hostage until insurance companies paid $20 million.58 The scale of the problem is serious. A study of ‘critical infrastructure companies’ in 2014 revealed that in the previous year nearly 70 per cent of them had suffered at least one security breach leading to the loss of confidential information or disruption of operations. One divulged that it had been the ‘target of more than 10,000 attempted cyber attacks each month’.59 In 2014 about 70 per cent of the devices connected to the internet of things were found to be vulnerable to hacking.60 With so many more connected devices, the possibilities for malevolent hacking in the digital lifeworld will be radically greater than today. If weapons or large machines were hacked by criminals or hostile foreign powers, the consequences could be devastating to our liberty. Imagine a terrorist in Syria remotely hijacking a weaponized drone in New York City, unleashing hell on its inhabitants; or a geopolitical adversary accessing a country’s missile systems. ‘Increasingly,’ as William Mitchell put it, ‘we can do unto others at a distance, and they can do unto us.’61 Cryptography offers some protection.
The political function of cryptographic methods will be one of the most important issues of the digital lifeworld. Interestingly, the recent trend has been toward greater encryption of digital platforms. To take a well-known example, the messages you send on WhatsApp are now ‘end-to-end’ encrypted, meaning they can’t easily be intercepted or inspected either by WhatsApp or any other third party. Increased encryption by social media and news platforms has made it harder for authoritarian regimes to selectively filter the flow of information. Previously, they could remove individual ‘accounts, web pages, and stories’.62 Increasingly, encryption has forced them to choose between blocking the entire platform and blocking none of it. Some regimes have taken the more open option: the whole of Wikipedia is now accessible in Iran, as is Twitter in Saudi Arabia. Both were previously the subject of selective blocking. On the other hand, in 2017 Turkey’s government blocked Wikipedia in its entirety and Egypt did the same to the online publishing platforms Huffington Post and Medium (and many others).63
Encryption isn’t always the friend of freedom. For every plucky dissident or journalist who uses cryptography as a shield against tyranny, there’s a terrorist organization, human-trafficking syndicate, drug cartel, or fraudster who uses it to conceal criminality. End-to-end encryption naturally causes concern in the intelligence community because it makes it harder for state agencies to detect terrorist plotting. It’s plainly not in the interests of liberty for dangerous groups to flourish unmolested. And there’s a risk that private encryption will encourage states to develop their own ‘home-grown platforms’ that can be more easily controlled. Iran has developed its own version of YouTube, and Turkey is making its own Turkish search engine and email platform.64
A recent study by Harvard’s Berkman Klein Center for Internet and Society predicted that encryption probably won’t become a ubiquitous feature of technology in the future, mainly because businesses themselves will want to retain easy access to the platforms that we use, partly for commercial reasons (that is, data harvesting) and partly because excessive encryption can make it harder to detect and correct problems.65
Even the powerful can’t agree on the politics of encryption. In late 2017, the British government indicated that it might crack down on end-to-end encryption, while, in complete contrast, the European Parliament was mulling a prohibition on member states securing ‘backdoor access’ to encrypted technologies.66 To preserve our liberty there will have to be a balance: individuals, corporations, and governments all have competing and overlapping priorities, but at their heart should be freedom of thought, freedom of action, and freedom of community.
In the stairwell to the main library of Harvard Law School there’s a plaque that reads: ‘You are ready to aid in the shaping and application of those wise restraints that make men free.’ I passed this plaque many times during my year as a Fellow at Harvard’s Berkman Klein Center for Internet and Society, and it intrigued me every time. The same phrase—in a more gender-balanced form—is still used when individuals are pronounced graduates of Harvard Law School.
Restraints that make us free. It sounds paradoxical, but it’s not.
A well-designed legal system, like any good system of rules, can enhance the overall sum of human freedom even as it places restrictions on people. It does this, on the liberal conception, by guaranteeing each person the space in which to flourish unharmed by others. This is what Hobbes meant when he said that a ‘common Power’ was needed to keep people ‘all in awe’.67 On a republican conception of freedom, the constraints of the law can make us freer by shaping our lives in a more purposeful and civic direction. Whether your tastes are liberal or republican, the future of freedom won’t just depend on the new affordances that technology will offer. It’ll depend on whether we can together shape and apply the wise restraints that make us free. Those restraints will often be in code.
There are at least four points that can be made in defence of the supercharged state. The first is that if the system is one of wise restraints, then, self-evidently, enforcing those restraints should work in the interests of liberty. It’s no bad thing to enforce good laws. (An unenforced law is just a collection of words on a piece of paper. A law that’s both unwritten and unenforced—to borrow an old political theory joke—is not worth the paper it’s not written on. Political theorists are not known for being hilarious.) The question becomes what those good laws should be. We might, for example, think that in a world in which the powers of enforcement are greater, the rules should be correspondingly weaker or fewer. Assuming that the state will use all the means at its disposal to enforce the law, we’ll need to tailor our laws for the digital lifeworld, not for the past.
Second, as digital technologies grow in speed, complexity, and significance, we’ll need to use digital methods of enforcement just to keep them in check. Recall the example from the financial sector in chapter six: trading algorithms are now best regulated by other algorithms.
Third, entrusting more power to digital systems will also mean relying less on human officials. Yes, such officials can be kind and sympathetic. But they can also be selfish, myopic, greedy, vain, cruel, even evil. The German philosopher Immanuel Kant, not known for his jollity, believed that a ‘complete solution’ to human self-government was ‘impossible’ since ‘from such crooked timbers as man is made of, nothing perfectly straight can be built.’68 Silicon, however, is not crooked. It is predictable and consistent in its operation. At least in theory, digital systems can execute code, and therefore apply the law, in an even-handed way with less room for human prejudice or fallibility. (Some of the challenges to this argument are discussed in chapter sixteen, which looks at algorithmic injustice.)
Finally, although digital law may control us more finely than written law, it will often do so in a less obtrusive way. Compare the airport experience of walking through a contactless body scanner, as against being physically patted down by a stranger in surgical gloves. Likewise, a discreet biometric authentication system may be an improvement on the security guard who gruffly demands that you show him your documents.
Are we devising systems of power that are too complete, too effective, for the people they govern? Although we’ll enjoy many new affordances, this chapter has shown the risks that flow from the supercharged state. A system of precise and perfect law enforcement may not be well-suited to the governance of flawed, imperfect, damaged human beings. Perhaps some system of ‘wise restraints’ can help us to maintain a satisfactory degree of overall freedom in the digital lifeworld, but there is no room for complacency. And this is only half the story. We turn now to freedom’s fate in circumstances where it is private firms, and not the state, setting the limits of our liberty.
Side point: the independent site ProPublica obtained more than 7,000 risk scores and compared the system’s predictions of recidivism to actual outcomes. The system falsely predicted recidivism among black defendants at almost twice the rate it did among white defendants.35 See chapter sixteen on algorithmic injustice.