‘For you created my inmost being’ Psalm 139:13
The final way to exert power over people, without subjecting them to force or scrutiny, is to control what they know, what they think, and what they are prepared to say about the world. A good way to get someone to refrain from doing something is to prevent them from desiring it in the first place, or to convince them that their desire is wrong, illegitimate, shameful, or even insane.1 I refer to this as perception-control.
If it’s possible to create an environment that is deeply hostile to the airing of particular political grievances, then people are unlikely to bring those grievances to the fore.2 If you know that criticizing a particular politician will bring a menacing silence to the bar where you’re sitting, or cause a Twitterstorm of personal abuse to erupt against you on social media, then you might think twice about voicing that criticism in the first place. Those who seek power do well to mobilize this kind of bias against their opponents.3 There is power in the ability to keep certain issues off the political agenda altogether.4 What better way to maintain the status quo than to create an environment in which mere criticism of it is unacceptable?
Another way to exert power through perception-control is to prevent people from having certain grievances in the first place.5 This may simply be a matter of persuasion. But it can also be the fruit of skulduggery, such as censorship, which prevents issues from coming to the attention of the population at all. People can’t get angry about things they don’t know about.
More subtly, if the powerful can manufacture a widespread belief or conventional wisdom that what is in their interests is in everyone else’s interests too, then there will be no need for force or scrutiny to ensure compliance.
To know about matters beyond our immediate experience, we rely on others to (a) find and gather information, (b) choose what is worthy of being reported or documented, (c) decide how much context and detail is necessary, and (d) feed it back to us in a digestible form. This is the work of filtering. How we perceive the wider world depends heavily on the filters that make it available and comprehensible to us. We know that filtered content will usually only be a fraction of the whole picture, but we hope and trust that the information we receive is true and that the most important aspects have been prioritized.
Filtering is an incredibly powerful means of perception-control. If you control the flow of information in a society, you can influence its shared sense of right and wrong, fair and unfair, clean and unclean, seemly and unseemly, real and fake, true and false, known and unknown. You can tell people what’s out there, what matters, and what they should think and feel about it. You can signal how they ought to judge the conduct of others. You can rouse enthusiasm and fear, depression and despair. You can shape the norms and customs that define what is permitted and what is forbidden, which manners are acceptable, what behaviour is considered proper or improper, and how shared social rituals like greeting, courtship, ceremonies, discourse, and protest ought to be performed; what may be said and what should be considered unspeakable; and the accepted boundaries of political and social conduct. If I breach a norm or custom, I may not face a sanction in the form of force, as I would with a law, but the consequences could feel much worse: ridicule, shame, ostracism, isolation, even excommunication. ‘The way we feel and think,’ as Manuel Castells says, ‘determines the way we act.’6 Those who control the means of perception will increasingly determine the way we feel and think, and therefore the way we act.
Understanding the role of perception-control can help to explain one of the most enduring issues in politics: why the ‘scrofulous, overworked, and consumptive starvelings’ of the world have so seldom risen up against their rich and powerful overlords.7 The answer given by Karl Marx and his apostles, notably the Italian thinker Antonio Gramsci, is that ordinary people have been psychologically conditioned to accept their fate passively.8 They are held in place by illusions and delusions that make change seem impossible or even undesirable. A host of theories with cool-sounding names have been advanced to explain this phenomenon, including hegemony, ideology, false consciousness, and fetishism (not the naughty kind). Marx believed that, given the right historical conditions, the industrial working class (the ‘proletariat’) would rise up and overthrow capitalism once the realities of the world were revealed to it. The role of intellectuals, Marx believed, was to help ordinary folk shed their misconceptions: ‘As philosophy finds its material weapons in the proletariat, so the proletariat finds its spiritual weapons in philosophy.’9 (We look at post-truth politics and fake news in more depth in chapter thirteen.)
In the last century, the daily business of filtering was mostly done by the mass media: substantial corporations broadcasting information to millions, even billions, of consumers through print, radio, and television. The power entrusted to mighty media corporations and their mogul owners was frequently the subject of concern, with fears that they might manipulate or brainwash consumers with information geared toward their own interests and prejudices. Nevertheless, the political culture in the English-speaking world has strongly favoured self-regulation of the media other than in times of extreme national peril.
The invention and widespread adoption of the internet signalled the end of the traditional mass-media monopoly over the means of filtering. Alongside the old system emerged a ‘networked information economy’ in which social media and digital news platforms enabled people to be producers and critics of content as well as its consumers.10 In the noughties, this development was generally treated with enthusiasm. Harvard Professor Yochai Benkler predicted in The Wealth of Networks (2006) that the result of the networked information environment would be ‘space for much more expression, from diverse sources and of diverse qualities’:11
the freedom to speak, but also to be free from manipulation and to be cognisant of many and diverse opinions . . . radically greater diversity of information, knowledge, and culture.
To some extent this prediction has been vindicated, mainly because the structure and culture of the pre-mobile internet lent itself to a free and pluralistic flow of information.12 But as we’ll see in chapter thirteen, the new information environment has also brought its own plagues and problems, with serious difficulties for rational deliberation.
It’s important to see that the internet has also been used for more precise and extensive control over the information we transmit and receive, and thus over how we perceive the world. Sometimes this has simply been a matter of controlling the physical infrastructure—the transmission towers, routers, and switches—through which information travels. A repressive state that wants to censor the content available on the internet within its jurisdiction, or even to create a separate walled-off network, can use this infrastructure to do so.13 So it is with the Great Firewall of China. And as Benkler acknowledges, the infrastructure of the later wireless internet, used on smartphones, is engineered to enable more control over content by manufacturers and service providers.14
In the future, how we perceive the world will be determined more and more by what is revealed to us by digital systems—and by what is concealed. When we only experience a tiny fraction of the world, which fraction we are presented with will make a big difference. It will determine what we know, what we feel, what we want, and therefore what we do. To control these is the essence of politics.
The first means of perception to come under the control of digital technology will be news. Already we are increasingly reliant on social media to sort and present our news. The main difference from today will be that the human reporters, writers, editors, and moderators who have traditionally done the work of filtering will gradually be replaced by automated systems programmed to do that work in their stead. Algorithmic filters will be able to serve up content in your favourite format: spoken aloud to you while you shower, played in short holographic clips, brought to life in AR or VR, or even laid out in good old prose. You’ll be able to request just the right amount of detail and context. The main promise of algorithmic filtering is an information environment uniquely tailored to each of our needs. If you trust the code.
The process of news automation is already underway, with automated article-generation, automated comment moderation, and automated editors making decisions about what news you see and what you don’t. Just as Amazon and Netflix recommend the books and television you should consume, Facebook’s News Feed was until recently said to determine the news you saw by balancing roughly 100,000 factors, including clicks, likes, shares, comments, the popularity of the poster, your particular relationship with the poster, the subjects that interested you generally, and how relevant and trustworthy the item appeared to be.15
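The inner workings of such systems are proprietary, but the basic mechanism can be sketched in a few lines. The fragment below is a purely hypothetical illustration, in Python, of how a feed-ranking filter might blend engagement signals into a single score; the signal names and weights are invented, and a real system would weigh vastly more factors.

```python
# Hypothetical sketch of an algorithmic news filter. The signal names and
# weights are invented for illustration; real systems reportedly combine
# vastly more factors.

NEWS_WEIGHTS = {
    "clicks": 1.0,
    "likes": 0.8,
    "shares": 1.5,
    "comments": 1.2,
    "poster_popularity": 0.5,
    "affinity_with_poster": 2.0,   # how often you interact with this poster
    "topic_interest": 1.5,         # how interested you are in the subject
    "trustworthiness": 1.0,        # an estimated credibility score
}

def score_story(signals: dict) -> float:
    """Combine a story's signals into one ranking score."""
    return sum(weight * signals.get(name, 0.0) for name, weight in NEWS_WEIGHTS.items())

def build_feed(stories: list, limit: int = 10) -> list:
    """Return only the top-scoring stories; the rest are simply never shown."""
    ranked = sorted(stories, key=lambda s: score_story(s["signals"]), reverse=True)
    return ranked[:limit]
```

The structural point is simple: whatever falls below the cut-off never reaches you at all.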
Second, filters kick in when we go out searching for information. Search engines decide which results or answers should be prioritized in response to our queries. The precise details of Google’s page-ranking methods are not public, but it is generally thought that pages are ranked according to their relevance and importance to the particular query, determined in part by how often a page is linked to and clicked on by other searchers looking for the same information. It’s difficult to overstate the commercial and political importance of ranking highly on Google searches: ninety per cent of clicks are on the first page of search results. If you or your business rank too low, then in informational terms you might as well not exist.16 In the future, search systems will be better able to parse questions asked in natural language, so that when you ‘look up’ something, it will feel less like scanning a massive database and more like consulting an all-knowing personal assistant.17 These oracular systems will decide what you need to know and what should remain unknown. As with news, there’s no guarantee that the information you receive will be the same as what I receive; it could well be tailored according to what the systems perceive to be most relevant for us.
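Again, the real ranking methods are not public. The sketch below simply illustrates the general idea described above, with invented field names and weights: each page is scored on a blend of textual relevance to the query and a popularity signal drawn from links and past clicks.

```python
# Hypothetical sketch of search ranking: relevance to the query blended with
# normalised popularity. Everything here is illustrative, not a description
# of any real search engine.

import math

def relevance(query: str, page_text: str) -> float:
    """Crude relevance: the fraction of query terms that appear in the page."""
    terms = query.lower().split()
    text = page_text.lower()
    return sum(term in text for term in terms) / len(terms)

def rank_results(query: str, pages: list) -> list:
    """Order pages by a weighted blend of relevance and popularity."""
    max_pop = max(math.log1p(p["inbound_links"] + p["past_clicks"]) for p in pages) or 1.0
    def score(p):
        popularity = math.log1p(p["inbound_links"] + p["past_clicks"]) / max_pop
        return 0.7 * relevance(query, p["text"]) + 0.3 * popularity
    # Only the top of this ordering is likely ever to be seen by the searcher.
    return sorted(pages, key=score, reverse=True)
```

Even in this toy form, the design choices are visible: whoever sets the weights decides how much popularity counts against relevance, and therefore which pages effectively exist.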
Third, whenever we communicate using digital means—which in the digital lifeworld will be very often indeed—we open ourselves up to filtering in that realm too. To take a basic example, our email messaging systems already use algorithms to determine what is spam and what isn’t. (It’s always vaguely upsetting to learn that someone’s email system decided that your message was ‘junk’.) Alarmingly, users of the Chinese version of Skype are literally prohibited by the application’s code from sending certain terms to each other, including ‘campus upheaval’ and ‘Amnesty International’.18 This reflects a broader trend in which communications technologies are made subject to real-time censorship based on prohibited terms. The Chinese system WeChat, the fourth largest chat application in the world with nearly 900 million monthly users, is censored according to keywords. If you send a message that includes a prohibited word, the remote server through which it passes will simply refuse to send the message (and you are not told that this has happened).19 Human officials of the past could only have dreamed of censoring conversations in real time. Our communication can be shaped and moulded in more subtle ways too. For example, Apple’s decision to remove the gun emoji from the messaging applications on its devices was an interesting effort to police people’s speech and thereby their behaviour.20
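The details of these censorship systems are not public, but the reported behaviour, silently refusing to relay messages that contain blacklisted terms, is easy to picture. The sketch below is a minimal, hypothetical illustration; the blocked terms are taken from the Skype example above, and the function names are invented.

```python
# Minimal sketch of a server-side keyword filter of the kind described above.
# A message containing a blocked term is silently dropped and the sender is
# never told. Illustrative only; real implementations are not public.

BLOCKED_TERMS = {"campus upheaval", "amnesty international"}  # illustrative list

def deliver(recipient: str, text: str) -> None:
    """Hypothetical stand-in for actually delivering a message."""
    print(f"-> {recipient}: {text}")

def relay_message(sender: str, recipient: str, text: str) -> None:
    """Forward a message unless it contains a prohibited term."""
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return  # silently dropped: no error or warning reaches the sender
    deliver(recipient, text)
```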
Fourth, digital technologies will increasingly affect how we feel as well as what we know. A recent study conducted by Facebook showed that it could influence the emotions of its users by filtering their news content to show either ‘positive’ or ‘negative’ stories. (Controversially, this study took place without the knowledge or consent of its subjects.)21 And this is just the beginning: increasingly sensitive technologies will be able to sense and adapt to our emotions with great efficacy. Surrounded by AI ‘companions’ with ‘faces’ and ‘eyes’ capable of responding sensitively to our needs, and perhaps provoking our sentiments in other ways, our means of perception will be even more in the grip of technology. We’ll also grow increasingly subject to technogenic norms and customs that encourage us to perform in a certain way. Writers already feel pressured to create content that is likely to generate traffic or go viral. For younger people, there’s social reward in revealing one’s inner thoughts on Twitter, exposing one’s life and body on Instagram, and registering one’s likes and dislikes on Facebook.
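The Facebook study was more carefully controlled than anything shown here, but the underlying mechanism is simple to illustrate. The toy sketch below assumes each story already carries a sentiment score; by skewing the mix of positive and negative items, a platform can tilt the emotional tenor of a feed. All names and figures are invented.

```python
# Toy illustration of emotion-skewed filtering. Each story is assumed to carry
# a precomputed 'sentiment' score in [-1, 1]: positive for upbeat stories,
# negative for downbeat ones. Field names and the bias figure are invented.

def skew_feed(stories: list, mood: str = "positive", bias: float = 0.8, size: int = 20) -> list:
    """Build a feed in which roughly `bias` of the items match the target mood."""
    matching = [s for s in stories if (s["sentiment"] > 0) == (mood == "positive")]
    remainder = [s for s in stories if s not in matching]
    n_matching = int(bias * size)
    # The user sees a mostly-positive (or mostly-negative) world without ever
    # being told that the mix has been adjusted.
    return matching[:n_matching] + remainder[:size - n_matching]
```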
Finally, in the past, external filters only really came into play when we sought to know things beyond our gaze, but in the future we’ll increasingly submit our immediate sensory experiences to filters as well. As I explained in chapter one, augmented reality technology (referred to as AR) supplements our experience of the physical world with computer-generated input such as sound, graphics, or video. Smart glasses (and eventually retina-based AR) may provide a visual overlay for what we see; earbuds for what we hear. As the technology grows more advanced, it will become hard (or fruitless) to distinguish between reality and virtuality, even when both are being experienced simultaneously. If VR or AR systems do come to replace the ‘glass slab’ computing paradigm, this type of filtering will rise in importance.
What we see and what is blocked out, which emotions are stimulated and which are left unmoved—we will entrust these decisions to the devices that filter the world for us. A world in which the homeless are literally removed from view is one in which the political importance of homelessness is low.22 The outcome of a drinks date in which your smart retinas feed you real-time information about your companion based on the signals emitted by her body—whether her laugh is genuine, how nervous she is, whether she is attracted to you—is likely to be quite different because of the presence of that technology. The power to control our perceptions will be an extraordinary addition to the arsenal of those who seek to control us.
It’s sometimes said that ‘to govern is to choose’. The opposite is also true: to choose is to govern. Every time an algorithm chooses which news story to show or which search results to prioritize, it necessarily leaves some information out. The choice to prioritize news or search results based on clicks and popularity, for instance, excludes and marginalizes less mainstream perspectives. It encourages sensationalism. Failure to deal with fake news is a choice that allows those who purvey it to wield influence (chapter thirteen). As we see throughout this book, in the context of our liberty, our democracy, and the demands of social justice, what seem to be technical decisions are often in fact political ones.
The other side of the coin, of course, is that social media and social networking platforms have also provided ordinary people with a means to have their voices heard. Political movements like the Arab Spring and Occupy relied heavily on such technologies to mobilize and organize. But these examples only emphasize the deeper point that when we use social media to communicate, we are at the mercy of those who control those platforms. We communicate with their permission and on their terms. Their code, their rules.
What are the deeper implications of living in a world where perception-control is increasingly delegated to digital technology and those who control it? There are certainly problems of societal fragmentation when we all see the world differently because of how it is filtered. Some of these problems are discussed in Part IV.
There is also the question of legitimacy, which ripples throughout this book. We seem to trust tech firms to filter the world in a fair and unbiased way—but what if they don’t? There are already some reasons for disquiet, and the digital lifeworld is only in its infancy. Apple, for instance, has blocked or refused to support applications critical of the working conditions of its manufacturers.23 With search engines, it can be hard to tell whether results rank highly because corporations have paid for the space or otherwise gamed the algorithms.24
One of the problems with giving others the capacity to control our perceptions is the risk of extreme outcomes. In 2009, after a dispute with a publisher, Amazon reached into every Kindle device and, without permission, deleted copies of a particular book—a feat made possible because the e-readers were networked devices over which Amazon retained remote control. The title of the book in question was apt: George Orwell’s Nineteen Eighty-Four.25 Needless to say, it would not have been possible, in 1995, for a print bookseller to recall instantaneously thousands of books it had sold to customers. It’s not hard to imagine an insecure President seeking to prevent people from accessing particular information about his past dealings.
Now imagine that your news and search filters, as well as your AR devices, were all run by a technology company—call it Delphi. One day a politician decides that Delphi has become too rich and powerful, and stands for election on the basis that the corporation should be broken up and taxed at a higher rate. Delphi’s executives see the politician’s proposals as a threat to the company’s survival and decide to take radical action to protect their position. Over time, the politician in question is blotted out of the public consciousness. People receive almost no news of her campaign events, and when they do, the news is unflattering. When they ask their Delphi search-oracles about ‘election candidates’, her candidacy is either overlooked, mentioned as an afterthought, or garnished with an unpleasant factoid. Those who hear her speak in person find that their AR filters make her sound unattractive. In due course, the politician loses and the status quo remains unchanged. This stylized example of what Harvard professor Jonathan Zittrain calls ‘digital gerrymandering’ combines a number of concerns about the power of the means of perception in the digital lifeworld.26 It demonstrates that although power may be used for positive ends, like keeping us well-informed, it may also be used to create an environment hostile to particular ideas, to remove certain issues from public consciousness or downgrade their prominence, and to foster norms and customs that are not in the public interest.
If an algorithm skewed our perception of the world in a significant way, causing us to hold beliefs we might not otherwise have held, or have feelings we might not otherwise have felt, or do things we might not otherwise have done, it might be hard even to realize the nature of the power being exerted over us. We don’t know what we don’t know. Filters cloud the very perspective needed to keep an eye on the powerful.