‘One does not make a child powerful by placing a stick of dynamite in his hands: one only adds to the dangers of his irresponsibility.’
Lewis Mumford, Technics and Civilization (1934)
Lenin is said to have distilled politics into two words: ‘Who? Whom?’1 If the previous four chapters are close to the mark, then we’ll need to think hard about the who and whom of power in the future. That’s the purpose of this short chapter.
Turning first to whom, it seems clear that most of us will become subject to technology’s power in two main ways. The first is when we engage technology for a particular purpose. That might be when we use a social media, communications, or shopping platform, or ride in a self-driving car. Almost everything we do will be mediated or facilitated by digital platforms and systems of one sort or another. Most of the time we won’t have a choice in the matter: in a fully cashless economy, for example, we’ll have no option but to use the digital payment platform or platforms. The second is as passive subjects—when, for instance, surveillance cameras track our progress down a street. Just going about our lives we’ll necessarily, and often unconsciously, engage with technology. Even when we try to avoid it by switching off our personal devices, then technology integrated into the world around us will often act on us in the background.
The question of who is a little more vexed. In modern times we’ve typically seen the distinction between the state and everyone else as the central cleavage in political life. This was the result of four assumptions. First, that only the state could force you to do things. Second, that it was the state (and not private firms) that did most of the work of scrutiny. Third, that the media (and not the state) properly enjoyed the power of perception-control. Fourth, that the power of perception-control was ultimately a less potent form of power than force and scrutiny. As we’ve seen, none of these assumptions are likely to hold in the digital lifeworld.
In Spheres of Justice (1983) the political philosopher Michael Walzer argues that ‘[d]omination is always mediated by some set of social goods’ known as ‘dominant goods’.2 In a capitalist society, he explains, capital is the dominant good because it can be ‘readily converted’ into other desirable things like power, prestige, and privilege.3 In the digital lifeworld, I suggest, the dominant good will be digital technology, because for those who control it, it won’t just bring convenience, amusement, or even wealth: it’ll bring power. Note that power will lie with those who control the technology, not necessarily those who own it. Your personal computer, your smartphone, your ‘smart’ thermostat, locks, and meters, your self-driving car and your robotic assistant—you may well own these things in the future, but if today’s system is anything to go by, you’ll very rarely control the code inside them. Tech firms have control over the initial design of their products, determining their ‘formal and technical’ properties4 as well as their ‘range of possibilities of utilisation’.5 And they’ll obviously retain control over platforms—like social media applications—that remain under their direct ownership. But they’ll also control the code in devices they sell.6 That means that technology we buy for one purpose can be reprogrammed without our consent or even our knowledge.
For tech firms, code is power.
But the state will muscle in too. Its ability to use force against us, for instance, would be greatly enhanced if it also had access to broad means of scrutiny. That’s why although the state doesn’t own the technologies that gather data about us, it’s already tried to establish control over them—sometimes with the blessing of tech firms, sometimes against their will, and sometimes without their knowledge. To take a couple of examples, law-enforcement authorities don’t need to scan the emails of Gmail users for evidence of child pornography because Google does it for them and reports suspicious activity.7 Similarly, the state doesn’t need to compile public and private records of all the data collected about individuals (in the US the Constitution partly prevents it from doing so) but is perfectly able to purchase that information from data brokers who have undertaken the scrutiny themselves.8 Big Brother, it’s said, has been replaced by a swarm of corporate ‘Little Brothers’.9 In 2011 Google received more than 10,000 government requests for information and complied with 93 per cent of them.10 Tech firms comply with the government for various reasons: sometimes because they agree with the government’s aims, sometimes because they’re well paid, sometimes because they want to collaborate on cutting-edge technologies, and sometimes because it makes business sense to stay on the state’s good side.11 In this context, Philip Howard, professor of Internet Studies at the University of Oxford, has identified what he calls a ‘pact’ between big tech firms and government: ‘a political, economic, and cultural arrangement’ of mutual benefit to both sides.12
As well as asking permission, the state will sometimes use the law to help it gain control over the means of scrutiny. Many European countries and the US have enacted laws requiring Internet Service Providers (ISPs) to adapt their networks so that they can be wiretapped.13 Sometimes, however, tech companies push back, as when Apple refused to accommodate the FBI’s demands that it unlock the iPhone of one of the San Bernardino terrorists.14 But where the state wants information that it can’t buy, legislate for, or demand, it still has the illicit option of hacking the databases of those who hold it. One of the revelations made by Edward Snowden was that the National Security Agency (NSA) project MUSCULAR had compromised the cloud storage facilities of both Google and Yahoo, harvesting a vast trove of emails, text messages, video, and audio for its own purposes.15
As we saw in chapter eight, in jurisdictions such as China the state has gained control over not only the means of force and scrutiny, but also perception-control, in its ability to censor the news people receive, what they can find when they search for information, and even what they are able to say to each other using digital platforms. In the western hemisphere, too, the state has tried to muscle in on the means of perception-control, albeit in a more indirect way (people are wary of anything that looks like state control of the media). Think, for instance, of Google’s agreement to adjust its algorithm to demote sites that infringe copyright. This reduces the need for scrutiny or force by the state.16
As well as the state and tech firms, less stable forms of power will also lie with hackers who temporarily assume control of given technologies. That could mean foreign governments, organized criminals, angry neighbours, naughty schoolkids, and industrial spies. More on this in chapter ten.
Looking past the old assumptions enables us to see clearly how much power could accrue to corporations in the digital lifeworld. Firms that enjoy both the means of scrutiny and the means of perception-control, for instance, will be able to monitor and manipulate human behaviour in a way that would have been envied by political rulers in the past. Imagine them being able to control our perceptions and scrutinize our responses in real time, reacting to our behaviour in a constant interactive cycle. They’ll have the power to target each of us individually, promoting certain behaviours and enforcing them with a gaze, or even with force.
These tech firms won’t be like corporations of the past. They’ll possess real power: a stable and wide-ranging capacity to get others to do things of significance that they would not otherwise do, or not to do things they might otherwise have done. This is a new political development—so new, in fact, that our vocabulary isn’t rich enough to describe it. Some commentators compare mega-companies like Google to a government or a state, but this is conceptually sloppy. A tech firm is a private entity that operates in a market system in pursuit of limited economic interests. It answers not to the public at large but to its owners and shareholders. It has interests of its own, separate from those of its users. The state, by contrast, is not supposed to have interests of its own. In theory at least, it exists for the sake of the public. What Google and the state have in common, of course, is that they exert power. But the nature and scope of that power is different. Our use of language should be able to accommodate that difference.
A more satisfactory analogy is that the most important tech firms are increasingly like public utilities, that is, like the organizations that maintain the infrastructure for amenities like electricity, gas, sewage, and water.17 When privately owned, utility companies are generally subject to state regulation requiring them to act in the public interest (as are other public service providers like healthcare and education companies). The physical infrastructure underpinning the internet is already seen as a kind of public utility. That’s why there has long been such passionate support for the idea of network neutrality, that is, that private network providers should not be able to speed up or slow down connectivity depending on the user—or for that matter, block content they didn’t like. As I write, the principle of net neutrality, supported by successive US governments, is being revisited by the Trump administration.18
The utility analogy is apt for technologies that are set to become vital public goods: municipal fleets of self-driving cars, drone-based national postal services, cloud-computing services essential to the economic life of the country, and so forth. But the analogy is imperfect. Our relationship with utilities tends to be one of reliance rather than power: we need them, to be sure, but they don’t often get us to do things we wouldn’t otherwise do. And unlike public utilities, most of the digital technologies we encounter in the digital lifeworld won’t exist to serve a collective need. The means of scrutiny, for instance—the technologies that gather data from us in public and private—exist chiefly for the benefit of those who control them.
In essence, when we talk about powerful tech firms, we’re talking about economic entities that are politically powerful. Not all tech firms, however, will be equal in power. They’ll only be truly powerful if their power is stable and wide-ranging, touching on matters of significance. So a platform that provides an important forum for political debate, for instance, will be more powerful than one that offers a funky way to swap and edit images of fluffy cats. The most powerful firms will be those that control the technologies that affect our core freedoms, like the ability to think, speak, travel, and assemble (chapters ten and eleven); those that affect the functioning of the democratic process (chapters twelve and thirteen); those with the power to settle matters of social justice (chapters fourteen, fifteen, and sixteen); and those that attain a position of market dominance in any one of these areas (chapter eighteen).
This isn’t the first time that economic entities have grown politically powerful. In the feudal system, the economic ownership of land also gave landowners political control over the people who lived and worked on it. The powerful could tax their serfs, conscript them into their private militias, put them to work, discipline and punish them, and prevent them from leaving.19 The medieval guilds were another type of economic entity that exerted political power. They issued precise regulations relating to the pricing, quality, and trade of goods and their private judicial systems fined and imprisoned members who refused to comply.20 Even the Anglican Church, an economic as well as spiritual powerhouse, exercised considerable political power. It taxed and tithed its parishioners. It punished them for breaking ecclesiastical rules. It was entitled to censor publications it saw as ‘heretical or blasphemous’.21 One of the defining features of modernity, for better or worse, was the emergence of the state as the supreme political body, a sovereign distinct from and ‘above’ the market, society, and the church. In our time it’s often said, correctly, that the separation between money and politics is not as clear as it should be. But whereas today’s corporations mostly acquire political power through lobbying, networking, PR, and campaign finance, in the future they’ll enjoy their own kind of power—the kind that comes with controlling digital technology.
My claim is not that tech firms will rival states in the nature or extent of their power. Indeed, far from predicting the death of the state, I’ve argued that much of the power of digital technology could be co-opted by the state, supercharging its power. But we must shed the dangerous view that a body has to be as powerful as a state before we start taking it seriously as an authentic political entity. The rise of tech firms with great political power is itself a development of major significance. It deserves its own theory.
In the last five chapters we’ve looked at the future of power—and already we can see that it will be very different from the past. In the realm of force: a shift from written law to digital law, a rise in the power of private entities able to use force against us, and the emergence of autonomous digital systems without human oversight and control. In the realm of scrutiny: a dramatic increase in the scrutiny to which we are (or may be) made subject, in the intimacy of what may be seen by others, in the capacity of third parties to rate and predict our behaviour and then remember everything about us for a long time. In the realm of perception: the capacity to control with ever-increasing precision what we know, what we feel, what we want, and therefore what we do.
In the future, mighty entities—public and private—will try to wrest control of these new instruments of force, scrutiny, and perception-control. Anyone who seeks great power will dream of holding the full house of all three. The digital lifeworld will be thick with power and drenched with politics.
Can we bring these powerful and complex new technologies to heel? What hope is there for ordinary people to have a share in the powers that govern them? These questions are the subject of the next two Parts of this book, on the future of liberty and democracy. A famous author once wrote of an impressive new form of power in human affairs, one ‘unlike anything that ever before existed in the world’:22
It covers the whole of social life with a network of petty complicated rules that are both minute and uniform, through which even men of the greatest originality and the most vigorous temperament cannot force their heads above the crowd. It does not break men’s will, but softens, bends, and guides it; it seldom enjoins, but often inhibits, action . . . it is not at all tyrannical, but it hinders, restrains, enervates, stifles, and stultifies so much that in the end each nation is no more than a flock of timid and hardworking animals with the government as its shepherd.
This passage could be about the power of technology in the digital lifeworld. In fact, it was written in 1837, nearly two hundred years ago, by the young French nobleman Alexis de Tocqueville. His subject? Democracy in America.