The culture of surveillance, seen through the lens of the imaginaries and practices of mundane, everyday life, troubles more common visions of surveillance today. The mention of surveillance often summons images of spooks and spies, of video cameras recording what goes on in the street or perhaps the personal profiles held by corporate or government bodies in some data centre. This book urges that we look beyond such images to consider people’s everyday experiences of surveillance, how they think about them and act on what they know. And this includes a variety of responses, from accepting and even adopting surveillance practices to assessing and sometimes opposing surveillance at any scale.
As we have seen, today’s is a much more complex cultural landscape than a them-and-us binary, where ‘they’ watch ‘us’, intrude on our privacy, violate our rights. These things happen, of course, and vigilance about them is paramount in ever more intensively surveillant situations. But they sometimes happen because ordinary citizens have acquiesced in the use of new technologies or because social media users are turning the tools of surveillance on each other. Not that security agency overreach is the fault of complacent citizens or that social surveillance is somehow equivalent to police surveillance. Rather, to engage with surveillance culture is to ask about hearts and minds, everyday attitudes and actions, as well as to analyse technologies, profits or policies.
These tensions inform the debate in this final chapter, about what sorts of imaginaries and practices might develop, and which ones might be fostered in the future. The paths taken in the book bring together the worlds of surveillance capitalism and surveillance culture, showing how they grow symbiotically. I stress the ways that, in the twenty-first-century conditions of digital modernity, there is a need to go beyond the Orwellian frame of Nineteen Eighty-Four and find suitable depictions of surveillance that capture today’s realities. From the many possible options I chose to focus on The Circle, just because it captures so clearly and cleverly today’s social and cultural realities.1
The Circle is also a fine foil for Nineteen Eighty-Four because, like the latter, it touches on the deeper questions raised by the enlargement of surveillance. No single novel has had a fraction of the impact in the West that Orwell’s has enjoyed, but equally, no contemporary novel I have encountered other than Eggers’s encapsulates so sympathetically, yet in such scorching satire, the themes that I have endeavoured to expose and explain in this book. If I knew a film that did a better job, for instance, I might have used that. Beyond this, The Circle also offers some memorable characters who may be used to typify different elements of surveillance culture.
As well, within its fictional format, The Circle addresses simultaneously the questions of surveillance culture and surveillance capitalism. On the one hand there is the deftly described experience of Mae’s ‘going transparent’, with her initial uncertainty, emotional turmoil and eventual embrace. She would lead the way, become a celebrity employee. Her creative kayaking, unreported and, she thought, off-camera, was behaviour that could be rectified, levelled to the norm. Which brings us to the other hand – the way that the Silicon Valley corporation demands and delights in transparency because it makes the machine work more smoothly, operationalizing and accelerating all processes.2 The Circle seeks the standardized style, disliking the different, the foreign, the Other.
To recap. The three central chapters in Part II of the book examine performance and compliance, the normalization of surveillance and how selves are formed in surveillance culture. Chapter 5, on The Circle, shows how the same sorts of themes are treated in a novel. No single social or cultural analysis could hope to catch so engagingly the spirit of surveillance culture, or to display so convincingly its downsides and dilemmas. In a sense, while social-cultural analysis attempts to diagnose today’s world of surveillance saturation, smartphone and social media use, a novel like The Circle diagnoses us. If ‘diagnosis’ sounds critical – suggesting there is something not quite right about this world – this is not a mistake. Some aspects of the emerging surveillance culture are less welcome than others.
This chapter responds to the diagnosis, to the assessment of today’s social-cultural condition. Earlier, I affirmed Jonathan Finn’s assertion that, having grasped that ‘surveillance is no longer the purview of police, the state and corporations but that it is a constitutive element of life …’ then this ‘requires a self-reflexive look at our own willingness and desire to watch, record and display our lives and the lives of others’.3 So this chapter suggests two key areas for focus that could be construed technically as ethics and politics but that, in keeping with the book’s central theme, lean towards imaginaries and practices.
Under the heading of ‘recognition and responsibility’, I suggest that the real issue lying behind other important questions such as visibility and exposure is that of persons; how they are recognized and how they take responsibility. Following from this, under ‘rights and regulations’, I propose that power be understood as something within reach rather than simply over against ordinary people. And that a notion of ‘good gazing’ relating to fairness could underlie approaches to those rights and regulations.
‘There is a crack in everything,’ growls Leonard Cohen, ‘that’s how the light gets in.’4 The ‘cracks’ producing surveillance culture are evident in this book, but where might one hope for light to break through? The hidden hope advertised in the chapter title draws the book to a close or, better, to an opening. The hidden hope refers in part to what the previous sections show – that while some forms of analysis produce a sense that nothing can be done about massive global forces, in fact, human agency is never restricted to the codes of the dominant culture. The quest for alternatives or even for outright resistance is not futile. But the hidden hope is also a humble reminder that hope’s fulfilment is never fully understood or experienced by those seeking it. We cannot know in advance how things will work out, but it is possible to articulate more clearly and strive towards some desirable features of the hoped-for world.
The Circle holds up a mirror in which we see ourselves. Maybe you are like Mae, increasingly at home and at ease with the wise men, having been initiated into the importance of being always on, always available, always open to the world. If so, you have become accustomed to the metrics-and-performance-driven world of work or you have fallen prey to the seductive sirens of social media. In either case, performance is the purpose. In Mae’s story we recognize others we know or know about. They, with Mae, represent the dominant elements of surveillance culture, those most closely aligned with the techno-political-economic status quo of surveillance capitalism, who live by the nothing-to-hide-nothing-to-fear mantra and are okay with checking the profiles of strangers.
Or perhaps you identify with Mae’s ex, Mercer, for his nagging doubts about transparency and being ‘always on’. And, of course, in her kayak, don’t forget that Mae herself once relaxed into a comfortable switching off for a while. Is Eggers hinting that such sporadic disengagement would help to keep being online in perspective? Absolutely, as I see it! Mercer had no intention of succumbing to the belief that life online exhausts the meaning of life itself. There is so much more than onlife! Mercer could be seen as representing a residual element in surveillance culture. He is a Luddite in the historical sense of someone who has a reasoned and ethical scepticism about new technologies. After all, he drives a truck, listens to the radio. His attempt to escape was prompted not by technophobia but by Mae’s turning a fugitive-finding program on him.
A third possibility is that you feel like the shadowy figure, Kalden, with whom Mae has an uncertain romantic attachment. She shares freely with him but does not know clearly where she stands with him. Kalden, revealed as Ty Gospodinov, one of the Circle’s founders, has apparently seen the light, recognized the totalitarian tendencies of his own creation and is ready to dismantle what he has made before more harm is done. Kalden is an emergent element in surveillance culture. Despite his association with the dominant culture, his increasingly radical questioning leads not merely to seeking alternatives but to outright opposition, from within.
Throughout the book, I have stressed how surveillance culture may be seen as a shadow of mainstream surveillance strategies, now, arguably, understood as surveillance capitalism. Such mainstream activities include controlling crime, managing cities, securitizing transport and communication systems – all of which are, in Michel de Certeau’s sense, ‘strategies’ of power. In his stimulating analysis, turning to everyday life is to find ‘tactics’ of power that contrast with and sometimes contest the dominant codes of control and efficiency. From trying to understand what some of those tactics might be, through exploring imaginaries and practices, we discover new kinds of creativity as people identify and pursue meanings that are important to them. Some may be normalizing and complacent. Others offer fresh ways of seeing and being, in digital environments.
Of course, tactical activities may be found, on occasion, within the sphere of strategies. Think of Kalden. Or they may be mixed, or just negatively disposed towards the internet, as in the case of Mercer. Or apparently innocent and healthy tactics, such as switching off for a while, could be squeezed by pressures – even pleasures – of conformity, which of course was what happened with Mae. The sorts of tactics that we discuss in what follows are informed by situations and initiatives described and explored in the rest of this book, coloured by the kinds of priorities that seem appropriate for confronting pressing questions such as security agency overreach or corporate obsessions with valorizing personal data.
If this book is to encourage a stepping back to yield a wider range of vision, three issues should be borne in mind. The first is the need to notice the wider contexts of small-scale experiences of surveillance. Giving them names, such as the Internet of Things, Smart Cities, or Wearables, even if these are promotional labels, indicates what sorts of processes prompt the micro-level experiences of, for example, being asked by an employer to wear a tracking device. How might these experiences undermine trust, generate disadvantage for some, or simply be an instance of treating others only in terms of disembodied, abstract data?
Secondly, how may emerging surveillance imaginaries and practices relate to actual ethical and political responses that could be shared responsibilities? After all, what may at first feel like personal troubles often turn out to be public issues.5 Is there room for alliances or coalitions with others of like mind and purpose? How can online connections be used for this activity? While today’s surveillance sorts people into social categories, it provides no means for those finding themselves in the same categories to join forces in a democratic way to propose alternatives to current developments.
Thirdly, it is worth recalling a point made repeatedly here, that the attitudes and actions of users make a difference. One of the indirect ways in which the Snowden revelations became relevant to internet users was through demonstrating how profoundly security agencies as well as internet companies depend on data gleaned from social media and other internet use. User-generated surveillance, in the sense of everyday online activities yielding constant data streams, is critical to the successful operation of numerous platforms and systems, from GCHQ – the UK partner of the NSA – to Amazon or Uber. Surveillance imaginaries and practices affect how well the systems that construct subjects as commodities or suspects actually work. The elements for surveillance imaginaries and practices described here are personal and local as well as having globalized corporate and governmental scope.
In this section I push the argument about visibility a little further. Having noted how important visibility is within surveillance culture, I now want to accent an aspect of visibility often referred to as recognition. For Mae, being seen online was a feature of her very existence. She wanted recognition as having a distinct identity, something she shares with millions of internet and social media users. What I wish to tease out of this are some possibilities for what I call good gazing, in which an ethics of care linked with human flourishing informs surveillance imaginaries.
While I often refer to those engaging with the internet and social media as users, this could be thought of as a way of ignoring the many other dimensions of their lives, despite my actual desire to do justice to their humanity. In discussing the use of data analysis, earlier in the book, I acknowledged that the critical issue is the assumptions made about human beings as persons, and how their identities and relationships are affected by Big Data practices.6 The same applies to surveillance imaginaries of those commonly using social media and the internet. Personhood is crucially important and needs to be asserted and struggled for at every level. Visibility and recognition are one aspect of this.
Another way of saying this relates to what Nicholas Mirzoeff calls ‘looking’.7 His superb history The Right to Look focuses on large-scale cultural modes of visuality from the plantation to the colony and to the military-industrial complex, but he begins with the ‘right to look’ at a personal level, ‘with the look into someone else’s eyes to express friendship, solidarity or love’. It must be a mutual look, too, says Mirzoeff, and it implies something much larger: ‘It means requiring the recognition of the other in order to have a place from which to claim rights and to determine what is right.’8 What I have to say follows a parallel trajectory, from the personal to the political in an unbroken line.
The desire for visibility is a vital component, conveniently choreographed by new media consumerism, but it is sometimes tempered by anxieties from a different source: national security strategies and their paradoxical insecurities. Wishing to be seen in Instagram photos or Facebook posts may simultaneously be linked with fear of potential consequences of exposure. Imaginaries and practices relating to exposure betray a complex array of emotions and activities. But which of them feed into positive responses to issues of recognition, relationships and responsibility?
Recall that in chapter 4 the issue was aired of how the presentation of online selves is imagined. As we saw, some security experts think in terms of ‘narcissism’, a self-absorbed approach to self-presentation where the unknown online audience has to be presented with a persona that will be more or less artificial. Narcissism is not usually used in a complimentary fashion, however. That it may be present in some cases is not a good reason for using it in a blanket fashion. Kate Hawkins suggests that something like Foucault’s ‘confession’ is closer to the mark than ‘narcissism’.9 In this view, online subjects are not merely subjects of powers that draw confessions from them, but also subjects of confession, seeing themselves as knowledgeable, thinking subjects. The former is disempowering, the latter, empowering. In this sense, says Hawkins, online confession could be seen as a search for accuracy of representation or of truth in determining how you are made visible.
When Goffman wrote in the mid-twentieth century, he thought that self-presentation performances could sometimes be relaxed, for example after a job interview. But online life does not really allow such relaxation. Maintaining the persona is constant hard work. At the same time, internet companies’ use of multiple sources of data means that their sense – and probably with them, security agencies’ sense – of who people are is the one that counts behind the scenes, as it were.10 What internet users think of as their mode of visibility, their self-presentation, works on different criteria from the corporate and state systems, which make them visible for other purposes. Internet users are much more likely to present themselves to known than unknown others.
Visibility is an all too real aspect of daily life, which demands constant negotiation.11 It is definitely relational, in that seeing and being seen are each involved. Moreover, our ways of seeing are socially crafted and varied. Without being seen in some way, we cannot be recognized or identified. Online, as in offline life, with some effort, people can be relatively invisible, at least in relation to other ordinary users, almost to the point of disappearance. Others, however, notably social media celebrities, become supervisible.12 Building on this, Eric Stoddart speaks of negotiating visibility by invoking a notion of ‘in/visibility’.13
For Stoddart, in/visibility is active, perhaps performative, in that it is able to evaluate the conditions of being seen or not, as well as the resources on which people draw to make themselves more and less visible for specific strategic14 purposes. In relation to social media in particular, this negotiating skill can inform attitudes to large internet companies as well as what to disclose, or not. Given that data are both in the foreground – phones, tablets – and the background – financial information, traffic monitoring – of daily life, how one interacts with data has to be considered at every level. Under what conditions, consciously or unconsciously, do people give access to their data?15
As The Circle or Black Mirror reflects daily realities, in satire or caricature, further questions arise of how new online media affect relationships at the most basic level. Negotiating visibility occurs in contexts where being drawn together with others or pushed apart are constantly in flux. For Roger Silverstone, a key issue of all contemporary media is their capacity to bring people together and simultaneously to keep them apart.16 Proximity, not distance, he argues, is required for properly moral responses. Of course, there does also have to be some separation of self and other, as Emmanuel Levinas insists, to provide a context for respect and responsibility. Levinas stresses that subjects are brought into being, at a profound level, by responsibility for the other. The ‘face’, for him, is crucial.17
Paradoxically, perhaps, the ethics of the ‘face’ offers hope for reconsidering today’s surveillance practices. For Levinas, the face of the other calls us to a basic human responsibility for the welfare, the flourishing, of the other. But while it is true that technologies of data processing distance one person from another and may contribute to forgetting that responsibility, one has to wonder whether literal proximity is required for humanity to be asserted. The face is a reminder that seeing surveillantly, whether done by organizations or just curious individuals, depends on a data image or online persona, not the presence of the flesh-and-blood person. Are there really no ways of maintaining responsibility for the other at a distance, using internet technologies?
Some, such as Bauman, say modernity has created social spaces with no moral proximity. Craig Calhoun, for example, traces how relationality is affected by modern conditions, especially those dependent on information technologies.18 If primary relations are face-to-face and secondary ones are mediated, for example through bureaucracy, then the third and fourth levels take this further. There may be no co-presence in tertiary relationships – say, an email to the bank about an error on a statement – and not even any direct contact at all in quaternary ones. These latter are largely the product of surveillance, in which, for example, a sociotechnical system such as WhatsApp, prompted by someone sending a text message, also analyses the data thus generated. Thus a further communication is created from the original text, of which the original author is unaware and with consequences for her that she can probably not even guess.
Some implications of this are discussed dramatically in Sherry Turkle’s critical social-psychological analysis Alone Together. The second half of the book is about young people on the internet,19 which for many users is not merely compelling but compulsive (Turkle is also a psychotherapist, which helps explain some of her language). Their digital absorption is scarcely comprehended by older adults, particularly when teens might find a phone call threatening in its real-time immediacy, as the live performance requires unedited spontaneity. Other users, according to Turkle, may be thought of by these same young people as resources to be used or problems to be managed, which sounds, to me, ominously like the approach of surveillance capitalism. It denies that the other, above all else, should be seen as a person to be cared about.
Of course, one could carp about Turkle’s rather psychologized approach or the limited range of respondents in her study. But she is not attempting to investigate the political economy of the internet or to provide some representative sample of internet users. Two factors make her work worthy of serious attention. One is that she has been examining human–computer relations for decades, starting with a very optimistic account of the computer as a ‘second self’.20 Turkle’s conclusions are much more measured today. The other factor, linked with the first, is that her work unavoidably strikes an ethical note. She asks whether a line is being crossed, not so much in people’s relations with ‘technology’ but fundamentally in how people see each other.
This, I believe, is why recognition is an important category. Negotiating visibility is indeed what dealing with surveillance is about. But seeking visibility is not the end of the story. Persons wish to be recognized for a sense of who they are and the value accorded to them. Charles Taylor, who initiated current debates over recognition, sees recognition as related to identity, a sense of what is human.21 And identity develops through interaction with others; it is not something generated individually. It also relates to dignity, a vital plank on which democracy rests.
It may sound like an impossible goal in a world of surveillance capitalism, and even an almost unattainable aspiration for surveillance culture, but if surveillance imaginaries had at their heart a sense that such recognition is desirable, then this would contribute to some very different kinds of surveillance. Indeed, this sort of surveillance could be construed as good gazing, something that reaffirms the human rather than reducing humanness to data images. To see the other not as a competitor or a component of one’s own ladder to success, but as someone to be cared about, even to take responsibility for, would contribute to human flourishing rather than the shrinking of humanity. Needless to say, this speaks to practices as well as to imaginaries.
The benefits of recognition spill over from surveillance imaginaries to practices. Our identities are shaped in part by recognition, but that recognition, in turn, is an essential dimension of democracy, where being recognized on equal terms with others is a constant goal. Just as The Circle’s Mae finds herself trying out a new identity as she accepts and models transparency, so also she propels herself into a role of leadership in democratic development. As the three wise men declare their desire to promote democracy by making every Circle member a voter, Mae caps it with a further step: make voting compulsory. Pure, direct democracy! Quickly, they arrange a beta version and try the first test: ‘Should we have more veggie options at lunch?’
For Circle leaders, full knowledge and participation could be assured through democracy, or ‘demoxie’ in their words. For Bailey, knowledge, and ‘equal access to all human experience’, are basic human rights. Everyone has access; everyone has a say. The reader might be thrown for a moment – where is the satire? Pressures for equal inclusion on the basis of race, gender and sexuality are contemporary political priorities. And attempts at exploiting the political possibilities of social media sound suspiciously like what is already occurring in European and Brazilian populist groups that use ‘liquid feedback’ to obtain direct comment on government policy.22
The satire is in Mae’s naive dreams of the outcome of the Circle’s ‘completion’. It ‘would bring peace, and it would bring unity, and all that messiness of humanity until now, all those uncertainties that accompanied the world before the Circle, would be only a memory’. It is the persistent messiness of humanity and those nagging uncertainties to which most Circlers seem oblivious. While equal access and equal say may be good goals, they are neither uncontroversial nor easy to implement. As well, other aspects of technological innovation in the political process, such as voter surveillance that appears to undermine some vital aspects of democracy, could clash with liquid feedback.23
So what sorts of surveillance practices might actually support democratic development in quest of human flourishing and the common good? Is there some hopeful realism that both sees the ongoing potential of human agency and also admits that messiness and uncertainty are an unavoidable part of the human condition? Several questions are considered, briefly, that offer ways to develop fresh surveillance practices. We start with the question of agency, before commenting on data justice and fairness, context and care with data, and lastly, digital citizenship.
Raymond Williams notes that agency is never limited by the dominant code. In all but the most extreme cases, alternatives are possible. It is all too easy to imagine that, given the power of states and corporations, no action is possible. But if, as I have proposed throughout, knowledge of feedback loops can be encouraged, then it becomes easier to see how the activities of everyday internet users – and others engaged with new technologies – can be shown to make a difference.
These are the local tactics that engage with the global strategies of large-scale organizations. The latter are geared to control, efficiency and profit; the former are concerned with the meanings, for them, of their interaction with everything from street cameras to online gaming. In everyday life, people interpret their uses of technology in the light of their imaginaries, out of which practices grow. Even in the case of one of the largest entities, Facebook, the corporation does not have everything its way. We have seen that its users may object to new developments such that the behemoth has to back down.
To assume that resignation is the dominant mood or the only available option is to accept a lopsided view of subjects and of power. Such negative and defeatist views are fed by notions of internet users merely as ‘subjects to power’, as Balibar and Isin and Ruppert put it,24 in the thrall of global corporations supported by government, and not also, simultaneously, as ‘subjects of power’. Fed by images of Big Brother as a feature of global security agencies and internet corporations, it is easy for surveillance imaginaries to be dominated by the sense of remote, out-of-control organizations. After all, they are hugely powerful and often resist moves to make their activities more open or accountable.
Of course, citizens may well participate in their own submission by accepting terms of service without reading them or going along with security theatre at the airport. But even within those situations, opportunities for questioning, negotiating and even subversion may open up. Such participation offers chances for making rights claims that can contribute to the overall shaping of the internet or for contributing to the mitigation of unnecessary security procedures. These are small but significant acts that could stimulate shared practices.
It is true that the challenges are tremendous; we do not downplay them here. Surveillance is opaque to those with no technical background; few know how it works; surveillance is invisible because it is usually digital – increasingly miniature or hidden cameras or the use of plastic cards with barcodes and other sensors are the tip of the iceberg – and algorithmic; and surveillance is often covert or inadequately publicized, especially in relation to policing or national security. At the same time, because personhood is played down and life-chances and opportunities are directly or indirectly affected, issues such as these call for public scrutiny and participation, attuned to data justice and fairness.
Social sorting is a feature of all surveillance and represents a key social and political challenge that should prompt appropriate practices. Interestingly, Charles Taylor’s work speaks to this via what he calls the politics of recognition. Resources ought to be justly distributed, in this view, and recognition is a way to ensure that this happens, over against opaque criteria such as the marketers’ ‘lifetime value’ of consumers. Others have tweaked Taylor’s position: Nancy Fraser, who says that distribution and recognition cannot be reduced to each other, and Axel Honneth, who insists that recognition is enough to ensure that matters of distribution can be dealt with justly.25 When a key aspect of large-scale surveillance is social sorting, with all too real consequences materially and in terms of life-chances for those involved, the significance of recognition becomes very evident.
Having stressed that the culture of surveillance affects everyone, its politics, too, is an everyday affair, in shopping, social media use, how to handle others’ data and our own in the workplace and myriad other sites. If surveillance is embedded within, brought about by and generates cultural imaginaries and practices, then it is plainly important to see surveillance within the frames of reference of ordinary people; the insider, not just the outsider view. So the big questions have to do not only with algorithms and encryption – where technical expertise as well as sound wisdom are required – but with a more general ‘data justice’26 and with the everyday means of making a difference, of exercising agency. For instance, Helen Kennedy and Giles Moss imagine ‘conditions in which data mining is not just used as a way to know publics, but can become a means for publics to know themselves’.27
As noted earlier, Helen Kennedy and her colleagues found, in studying user responses in the UK, Norway and Spain, that a key question is ‘fairness’.28 Rather than rely on quantitative studies of opinions on using social media, they chose a different method. They asked social media users in focus groups what they thought about data mining, based on their internet use. It turned out that their varying viewpoints on data-mining practices had in common an expressed concern about fairness. Kennedy et al. found that the context is crucial29 and that fairness was salient in relation to more general concerns with well-being and social justice. By not framing the focus groups in terms of surveillance or privacy, the researchers were able to let the users offer their own responses in their own words.
If we are to understand the issues for surveillance practice in relation to fairness, then it is appropriate to conceive these issues in terms of ‘data justice’. Using loyalty cards or engaging with social media, or even walking freely along city streets watched by video cameras, means that data pertaining to our everyday lives are available to others. And the organizations that hold those data analyse them in ways that affect our opportunities for better or for worse. In the world of service provision in American cities, the practice of red-lining is seen as a means of reproducing social inequalities: services are denied or made inaccessible to certain areas based on their racial make-up. How much more does this apply to automated surveillance practices that affect people across a whole range of life-chances and choices? This is even further accented when the very nature of capitalism is undergoing change as it depends increasingly heavily on the manipulation of data.
Several commentators and analysts note that, following Snowden, there has been a resurgence of political activism in the digital realm.30 In Canada, for instance, OpenMedia.ca has become a leading commentator and activist group in relation to the internet. A number of sophisticated technical forms of political activism and resistance involve encryption, more secure networks or other specialized responses to data-driven surveillance. But these tasks are often outsourced to expert groups. Unfortunately, such experts often expect users to protect themselves using tools provided by developers.
To address this apparent divide, Lina Dencik, along with others,31 suggests that security engineers and others learn more of the language of collective action, rather than merely individual assistance, to try to expand the scope of potential action. Encouraging users to understand the outlook of technical experts would also help the situation. Beyond the potential for opposition to unwarranted and unwanted surveillance, however, is a further issue of how internet users might themselves benefit from data mining, rather than it merely being a tool used by powerful corporations.
Wider concerns with personal data, expressed in terms of fairness and data justice, are the context within which all everyday data issues arise. Data are highly valued by corporations as well as by government departments. While a key problem is that fully living persons may be reduced to their data doubles by marketers or security agents, the other side of the coin is seen when too little care is given to personal data. These data are not merely significant because they spell profit for companies or profiles for police. Their real significance, however limited or inconsequential they may appear, is that they refer to persons. The care with which someone is treated in face-to-face presence should be extended to the handling of personal data, in whatever context. Of course, such radically countercultural surveillance practices have to be worked out practically in varying contexts, but the person-oriented starting point is of the essence.
Contexts are crucial. Surveillance practices require sensitivity to the situation in question. Nissenbaum’s insights on ‘contextual integrity’ are fruitful and widely cited in this field, informing and prompting a number of studies that try to engage more directly with how issues of personal information handling are understood by ordinary users of the internet.32 For Nissenbaum, context cannot safely be ignored. People disclose information in specific contexts and expect that it will be used appropriately in that context. To ignore this is to violate their rights, she argues. Not limiting herself to the circulation of personal data on the internet, Nissenbaum also discusses this in relation to loyalty cards used in consumer surveillance, to public and private video cameras, and the use of biometrics such as fingerprints and facial recognition.
Indeed, Nissenbaum’s initial studies of contextual integrity appeared just as social media were taking off, which meant that her proposals were available to be applied in that area, too.33 Her work, in other words, is highly pertinent to surveillance culture. She provides evidence of how surveillance imaginaries are nuanced and how different norms of practices circulate within each area.
Because the internet is so important today, it is essential that emerging practices be seen for what they are. Not only are issues of surveillance intelligence-gathering and analysis primarily about the internet; arguments relating to them are also frequently conducted on and through it. Most of what is discussed as the culture of surveillance relates to the internet in some way: not only national security surveillance but also social media, the internet of things – the ways that our devices and environments are increasingly interlinked – and, of course, self-recording and sharing data. This again prompts questions of how people communicate online and what differences are made to their everyday practices by so doing.
Daily life involves a whole bundle of roles that have to be performed and negotiated constantly. Some of them refer to close relationships of domestic and familial kinds, some to relationships with friends, workmates, team members and neighbours, and yet others to those that connect us with institutions and authorities, whether banks or bars, churches or charities, city councils or national governments. Each set of relationships is now, at least partially, mediated digitally, and that digital mediation, as should now be quite clear, is by definition surveillant. Although surveillance is not their exclusive concern, Engin Isin and Evelyn Ruppert’s Being Digital Citizens helps explain why the politics of surveillance takes on some new and different characteristics today.
The idea of ‘digital citizens’ is not limited to some famous or notorious activists such as Edward Snowden, Julian Assange or Chelsea Manning. Digital citizens appear wherever the internet is the medium through which things are done. While legal or technical documents often describe internet users as ‘data subjects’, this suggests that they are simply created or at least controlled by the data. On the other hand, some activists put the shoe on the other foot, describing users as ‘sovereign subjects’ – in other words, people capable of more but constrained by the internet. Isin and Ruppert remark that, while digital citizens are in some ways compliant, they may also contest what is happening by claiming their rights.
The central issue is that, while ‘subjects’ is a good word, it has to be thought of in two ways at the same time. People are both ‘subjects to power’, in that their lives are profoundly affected in positive and negative ways by data and the internet, and ‘subjects of power’, in that they may demonstrate subversive as well as submissive behaviours in online life. Digital citizens come into being, in part, as data politics begins to form in recognizable ways, generating ‘worlds, subjects and rights’.34 Our very relationship as citizens with states is now mediated by the internet and by data, and as we make rights claims about those data, we are prompted and provoked into governing ourselves and others through such claims.
After all, these authors hint, we are not talking about mere ‘dead data’ as if data do nothing. It is those data that actually help to generate certain kinds of everyday realities such as whether or not we are eligible for some government benefit or, as we have been discussing, who gets detained in airport security. As well, debates about data are all too often a bit like shooting stars. Seen momentarily, as bright, newsworthy flashes of light, they create controversies about security breaches that leave personal data vulnerable to misuse, or about corporations like Google claiming that they can predict flu outbreaks better than centres for disease control. Then they fade in the firmament and we wait for the next to flare. But in fact these issues are ongoing and may form patterns – of data politics.
Regrettably, data politics is often reduced to an individual level rather than being seen as something much broader. The recommended self-protection by encryption or even by taping over one’s computer camera lens is symptomatic of this ‘atomism’ that deflects attention from the possibility of concerted, collective activities and practices that could be crucial aspects of data politics. Digital citizens, in this view, engage in data politics especially when they make claims to rights. These may be fairly familiar claims, as in demanding that established civil liberties or human rights be maintained or developed in relation to the internet and data. Or they may be rights claims that are only just coming into view, starting with a fuzzy focus and then turning clearer, about how and with whom to share data, for example.
Under contemporary conditions of rapidly augmented government and corporate surveillance practices, the politics of surveillance, as an aspect of data politics, becomes a vital activity. In a critical vein, Colin Bennett’s fine work on privacy advocates is a good example, based on the idea that controlling information about ourselves is a fundamental right.35 He interviewed many players globally in the advocacy enterprise, so his work gives a good all-round sense of the issues, although it would be interesting to see what has changed since Snowden. He comments on the practices of various players and on how the different agencies work together – or not!
If surveillance is embedded within, brought about by and generative of cultural practices, then it is important to see surveillance within the frames of reference of ordinary people. In line with our overall theme, this is not just the operator perspective, but the user perspective – the insider, not just the outsider, view. So who questions surveillance today and why? Internet users, travellers, workers, citizens and consumers raise questions about privacy and surveillance. It is likely that many more, who may not even consider those categories, do discern some relevant questions about fairness. So the question is, what sorts of emerging practices speak appropriately – even if not yet adequately – to the questions now confronting us? Responses will vary depending on age, gender, context and other factors, but there are green shoots worth nurturing.36
Among the most interesting developments are those of today’s post-Snowden context, in which many players have appeared, some forming new alliances, for example between journalists and activists for freedom of the press. Debates about national security intelligence and its reliance on Big Data prompt new practices, ones that are taking place over a much broader terrain than was visible in recent years. Coalitions form around commitments to a free and open internet, which also make extensive use of the same media and platforms available online, evidencing a fresh phase of citizenship that seems to recognize the risks of being ‘subjects to power’ while simultaneously embracing new opportunities as ‘subjects of power’. Several approaches to being ‘subjects of power’ are taken.
One is to seek fixes for the perceived problems. Much in the new world of surveillance is a product, direct or indirect, partial or complete, explicit or implicit, of new technologies. Plenty of fixes are available that would at least mitigate the worst difficulties. Often known as privacy-enhancing technologies, or PETs, these use encryption or other devices to increase the security of the systems in question, so that they are less data leaky. A more subtle and systematic approach is to try to foster a climate for socially responsible technology, to find ways of designing in features that reduce the risks of surveillance to those affected. Either way, one limit of these kinds of approaches is that they may be seen merely as technical fixes or as economically viable solutions.
Another approach is to address the issues by way of regulation: to use privacy or data protection law and fair information practices – FIPs – to limit the use of certain kinds of data, and to ensure that safeguards such as informed consent are in place, that data are used only for the purposes for which they were collected and that they are destroyed within a certain period. Of course, the very notion of consent has become problematic in a Big Data era. Also, the collection of data is one thing; crucially, their analysis for specific purposes is another.
If FIPs were indeed taken seriously by organizations processing personal data, many difficulties could in principle be overcome. But unfortunately it is relatively easy for organizations to pay lip service to ‘privacy’ while pursuing policies that – legally – discriminate negatively or simply profit from personal data as if they had no connection with those from whom they were extracted. Yet regulation and law still hold promise for long-term limits to egregious or inappropriate surveillance – which includes the modes of analysis of the data and not only the kinds or amounts of data garnered.37 At the same time, ‘privacy’, defined broadly, includes the very issues of fairness and of social values such as the importance of relationships and democratic participation.38
A third approach is to mobilize in quest of more careful treatment of personal data, whether in consumer groups, such as CASPIAN (Consumers against Supermarket Privacy Invasion and Numbering) in the US, or anti-identification campaigns, such as the NO2ID campaign against the UK identity card scheme, abandoned in 2010, or civil liberties organizations, particularly those concerned with border and security issues, such as the International Civil Liberties Monitoring Group in Canada. Plenty of such groups exist, though they tend to be inadequate to the task before them. They may also have self-imposed limits due to their focus on specific, short-term issues or circumscribed sectors. Still, ‘privacy advocacy’, claims to rights in data politics and user-level surveillance practices are all on the increase.39 Struggling for rights, especially relating to freedom, is a profoundly worthwhile surveillance practice.
A fourth approach raises questions about contemporary surveillance in a more radical way. This book approaches the familiar world of contemporary living, with its everyday reliance on technical mediation, using the internet and connecting through social media, with a key aim: to make the familiar unfamiliar. To discern surveillance imaginaries and practices. To show how, in some important ways, everyday life shadows the global world of surveillance capitalism. And to ask how appropriate this is as a way of seeking human flourishing and the common good. The defamiliarizing process obliges one to see things differently and consider how things might be done differently, too.
Too often, surveillance substitutes ‘customer relationship management’ for knowing your customers, or attending to the immigration data screen for hearing the story of the asylum seeker. And the surveillance culture may easily substitute a coded text for actually speaking with someone, or being online 24/7 for taking breaks to play – holding no phone – with your children, to smell the roses or to feel the wind in your hair.
There is also a massive imbalance of access to information that could be beneficial to those to whom it refers, but from whom it is hidden through arcane algorithms or sheer proprietary secrecy. The ‘control society’ is all about ‘managing’, not about responsibility, except among those who tend to be most vulnerable,40 who are expected to handle their own fate. Encouraging access, and open data, is another way forward. These outlooks and actions are constantly expanding and making themselves more and more indispensable to the way that today’s society works.41
Modern bureaucracy denies the morally pregnant effects of human actions; ethics is ‘not its department’. How much more this applies to today’s surveillance capitalism, whose practices are in so many ways cut off from ethical considerations. Bauman suggests that ‘ethical tranquillizers’ are in use – short-cut solutions and fixes, a ‘technology fetishism’ – that morally deskill actors and give them the sense that they have no responsibilities.
Yet the issues raised by contemporary surveillance are manifold and profound. They need ethical seriousness, not technical fixes. And such an ethics would start, for example, with putting care before control and with placing the person above the data image. Is it too much to hope that emerging surveillance imaginaries and practices might ignite a counter-movement, from below and in everyday contexts of home and workplace?
Claiming digital rights already happens in regard to expression, access, privacy, openness and innovation. These are emerging practices that are already visible in digital modernity. The results of these new struggles are by no means clear, but what does become clearer every day is that they are necessary. If the gains made in earlier centuries and decades towards greater political participation, the growth of freedom, human security and the reduction of inequalities are not to be lost in an era of consumerist and self-absorbed pseudo-priorities and of Big Data fetishism, then the nurturing of new imaginaries and practices appropriate to a digital modernity is an urgent and eminently worthwhile task.
The emergence of the culture of surveillance, of watching as a way of life, is both something that may be readily observed and, by definition, something to which constructive contributions are being made. They will be truly constructive if the emerging surveillance imaginaries and practices connect with the large canvas of the common good, human flourishing and the care of the other. This, of course, is easy to say, but it is a risky path to take. Actually to put human flourishing first or to care properly for the other is to put yourself out there, to make yourself vulnerable, to make a sacrifice and to let go.
This chapter does not close, it opens. Mine is not to prescribe but to propose some open questions about where surveillance culture is heading and how some emerging trends might channel it in some fresh ways.42 I have pointed to ways that contemporary cultural developments may foster human flourishing43 and the kinds of fairness that are sought by some over against tendencies towards a colder and more calculating surveillance capitalism. This accords with the notion of utopia-as-method.44 Here, beyond fictional accounts of idealized worlds, sociology works not only as a critical account of current cultural directions but also as a means of proposing and promoting alternative futures that embody holistic, reflexive and democratic imaginaries and practices.
The Circle offers some sobering scenes about a possible future, foreclosed by internet giants. It is dystopia dressed as utopia. But there is a role for utopia too, not as a means of escape, but as a method for seeking alternatives. A means of getting back on track and avoiding distracting detours. So what aspects of surveillance culture’s emergent imaginaries should be nurtured? What actual examples are available of alternative imaginaries that might inform our own?
You can ponder such questions on your own, but how much more constructive – not to mention convivial – to discuss them with others, in the pub, with friends and family, within your religious community or your book club (start with The Circle!). Of course this is not for a moment an exercise pitting some ‘new’ and wholesome approach over an ‘old’ and deleterious one. After all, as Torin Monahan observes, the tensions between care and control will continue in the surveillance world.45 The point is to bring these issues to the surface, be mindful of them and to raise them as social issues, not just personal troubles.
The challenge is to see the culture of surveillance for what it is, a development beyond the surveillance state and surveillance society, which nonetheless is imbued with the all too familiar features of each. To take surveillance culture seriously means several things. For a start, while keeping a focus on how surveillance works, how it operates in a surveillance capitalist context, driven by Big Data, it is also vital to keep an eye on the ways that surveillance is experienced. The future cannot be read from political economy any more than from the latest techno-gadgets and gizmos. How surveillance is experienced varies tremendously and those variations make a difference. And ultimately, the struggle is for the deep wells of culture, the ways people think, the direction their hearts take them and what is done in the routines of everyday life.
This is true on a spectrum from those who may enjoy the experience of surveillance, who find it entertaining or who can play with it, through those who are cautious and compliant, to others who are the most vulnerable, for whom it is anything but entertaining. Each dimension needs to be understood and each has its own imaginaries and practices associated with it, that help to determine how far we are able to see ourselves only as subjects to surveillance power or also as ‘subjects of power’ in this context.
But understanding surveillance culture also means recognizing ways in which people not only experience but also engage with surveillance themselves. This too has two dimensions, in that surveillance may be done both on others and on ourselves. The ways in which these are appropriate, and contribute to human flourishing, urgently need to be worked out. If in the twentieth century our understanding of surveillance could so profoundly be affected by fictional literature – Orwell’s Nineteen Eighty-Four – then in the twenty-first it would be worth allowing all such work to hold up a mirror so that we may recognize our own world for what it is today. Even a Black Mirror.
Recognizing our world for what it is is a vital first step. Realizing that things do not have to continue as they are at present is the second. The doctrine of technological inevitability is false because doing technology is a human endeavour and is socially shaped. Those who insinuate that technology is an unstoppable juggernaut usually have an interest in preventing resistance or denying the role of human agency. ‘Common good’ and ‘human flourishing’ alternatives are worth working for; another world is possible.
If the work of novels and of movies mirrors effectively the culture of surveillance, then we shall also be in a better position to evaluate and assess our own roles within this emerging world, as well as what our fears and hopes for it might be. As I have more than hinted, however new and committed to change the culture of surveillance turns out to be, and however unfamiliar the issues it must face, the ways of wisdom for imagining and acting within it are likely to be familiar, even ancient. They just await realistic, sensitive and practical retrieval.