2
From Convenience to Compliance

‘He showed devotion to all the rules, held at least one other security job, and went to night school. When he wasn’t working or studying, he was watching cop shows, preparing himself for the latest threats. In other words, he was a true believer with big aspirations in the security field.’ This is Lance, a transportation security officer (TSO) at Albany International Airport in New York State. He was assigned to tutor a new TSO, who had not yet perfected his performance, especially of pat-downs, those peculiarly intimate forms of surveillance-by-feel required since 9/11.1

Lance has a role to play and watching cop shows provides part of his preparation. For all that passengers may complain about the delays or even dehumanization during the security check, it is worth recalling that security officers are also playing a required role. In keeping with the theme of the growth of surveillance culture, however, this chapter explores the roles played by those experiencing surveillance. This yields rich clues about surveillance imaginaries and practices, from the ground up.

The focus here is on people experiencing fairly conventional forms of surveillance, first in the security field and then in marketing. The imaginaries and practices of those negotiating airport security or deciding whether or not to use certain search terms that could be incriminating vary according to many factors. National origin, gender and, of course, previous negative experiences make a difference. Similarly, choices about whether or not to use a loyalty card or even where to shop will differ depending on knowledge and experience along with gender, class and the like. This chapter offers some relevant illustrations without pretending to give comprehensive or systematic coverage.

Returning to the airport, then, consider a comment from a passenger describing how it feels to go through security, and how personal responses might be muted, at Pearson International Airport, Toronto: ‘I feel like they have all the control. If they don’t want me to pass through then I won’t pass through, and if they want to be rude to you they can be rude to you, and I can’t say anything, because I want to go on my trip.’2

This aspect of a surveillance imaginary is a familiar one to many who travel by air, especially since 9/11. And the surveillance practice, in this case, is to say and do nothing that could be construed as suspicious. This even extends, as one researcher found,3 to struggling to stay silent when a whole young family, not English-speakers and not ‘white’, are pulled aside not just for questioning, but for treatment quite different from that accorded to pale-skinned anglophones. Here we see caution and compliance writ large. Awareness of surveillance prompts particular kinds of attitudes and actions.

Because air travel for business or pleasure is such a familiar aspect of surveillance and one that often hits the headlines with stories about new systems such as automated biometric border controls or the use of full-body scanners, it is worth starting here, at the airport. I shall return to it in the next section, too. Compliance seems widespread. Are passengers simply governed by the gaze? What exactly is this gaze, and what does it mean to be governed? Recall that the gaze may generate anxiety.4 And as passengers know from the airport experience, such anxiety may resurface all too easily as you divest yourself of your laptop, jacket and sometimes shoes onto the X-ray conveyor under the eye of the attending security workers.

However, parallel effects may be produced, or not, by a gaze that is not literally visual and that silences us in quite different ways. Passengers may find the full-body scanner at the airport threatening or potentially intrusive on their privacy, but in a broader context, what of journalists, bloggers and texters who discover that they are traced or tracked because of trigger words they have used? Such ‘chilling effects’ may be devastating. They affect not only the individual but also touch on the prospects for the journalistic profession, and indeed for democracy itself.

In each case, passenger or internet-user surveillance, emotions are aroused. Fear is a powerful means of obtaining compliance or of silencing voices. And because anxiety, fear and uncertainty are so easily and frequently associated with the term surveillance, they have to be brought properly into the picture. Academics in particular often tend towards dispassionate and distanced analyses but risk failing to recognize realities of life beyond the cool and rational. One colleague, reading an early draft of my book Surveillance after Snowden, exclaimed, ‘where’s the outrage?’ An apt reminder that attempts to get the facts straight may forget that the facts are sometimes intolerably in your face. Needless to say, I revised the text.

Of course, not all surveillance automatically triggers fear-filled performances or debilitating chilling effects. There may be ‘warming effects’ too. Surveillance is not necessarily sinister. The purposes it serves may be positive ones. Not long ago I spent time in hospital and after surgery it was important that my vital signs be monitored. I noticed one day, when the nurse came in to my ward, that rather than asking how I was feeling she simply said she was pleased to see my healthy heart rate. ‘But you haven’t checked it yet,’ I replied. ‘Oh yes,’ she responded, ‘there’s a remote feed to the screen by the nurses’ station. We monitor you the whole time.’

Surveillance is not only a constant and familiar fact of life, it is also in flux. The faces of surveillance alter over time and in different settings, and as surveillance expands our imaginaries and practices mutate as well. At the airport, to return there for a moment, the performances vary considerably for different passengers. Part of the script is given by the fact that travellers are already sorted and ranked by how much they paid for the flight, placing them in different queues and then in different seats in the aircraft cabin. But the perceived position of the passenger within that hierarchy will also lead to improvisation and even pre-security practising of roles. Again, at Pearson Airport, families that consider themselves to have a ‘Middle Eastern’ or ‘Muslim’ appearance take pains to tidy their beards, to speak no Arabic and to reduce the amount of conspicuous clothing.5

People know, in other words, that they are watched and modify their moves in ways that fit their imaginaries. At the same time, there are also contexts in which people try to watch back, or at least watch as well. Such practices will again depend in part on the kinds of imaginaries within which constantly shifting experiences are placed. The use of smartphones to record police activities at traffic stops or protests is now answered by the rapidly rising use of police body-worn cameras. One could be forgiven for seeing this as a game in which each side uses developing devices to outsmart the other in surveillance strategies.

At the close of this chapter, threads are drawn together by examining that frequently heard slogan, a routine phrase from many surveillance imaginaries, ‘if you have nothing to hide, you have nothing to fear’. This seems to be taken by many as a sound basis for compliance, and its media-amplified repetition by government and corporations reinforces its apparent plausibility, not to mention its prominent place in many surveillance imaginaries. Is it really true at the airport or when you are pulled over by police? If not, then not only is its place in surveillance culture factually questionable, it is also ethically and politically so.

In a moment, I shall return to the airport and its security surveillance to consider more fully what happens there, from the passenger perspective. But first I want to highlight the significance of a term that has already cropped up several times in the introductory section: emotion. Ideas such as performance depend on a sense of how situations feel to those experiencing them. If you feel fearful, threatened or overwhelmed, this affects how you see yourself in a given situation – perhaps powerless, subordinate – and thus how you respond. Those surveillance imaginaries and practices are bound up with your ‘performance’.

The emotional life of surveillance

The cases of airport performances and chilling effects discussed in this chapter should give us pause. They suggest that the impact of security-related surveillance is to create insecurity and, with it, anxiety and fear. These could, of course, be unintended consequences of augmented surveillance. But it is also possible that the so-called shock doctrine strategies of some governments play into this,6 in which case they may be rather more intentional. According to the shock doctrine, exploiting a natural or constructed crisis to introduce a new or controversial government measure is a deliberate strategy practised by many around the world.

Not only this. If experiences of airport security and of internet use are sources of uncertainty, anxiety and fear, then they also engage human emotions in profound ways. Needless to say, the scope of emotions is huge and the study of emotions sometimes overwhelming. Social scientists have not always been good at including emotion in their analyses; too often they are ‘all mind and no heart’.7 Yet surely human agency has to be thought of in terms of the heart and not merely the mind? More particularly, in the realm of surveillance studies, with some exceptions there is a surprising disconnect between descriptions of how people’s rights might be ‘trampled’ or their privacy ‘invaded’ and the emotional content of how those people feel about such surveillance trampling and invading.

This disconnect is perhaps all the more surprising when one considers how emotions themselves have been placed under surveillance. One thinks, for instance, of post-9/11 efforts to use facial recognition technology in video surveillance to capture the micro-expressions – such as lip twitches when answering questions – on passenger faces in the airport. This is intended to yield clues about unusual emotional behaviour that might indicate a threat.8

Or, more recently, consider the 2013 experiment in which Facebook was accused of emotional manipulation for adjusting the positive and negative content of users’ newsfeeds to demonstrate what was said to be a contagion effect. Many users were upset, saying that they were ‘creeped out’ by the experiment. Some cancelled their Facebook accounts; others angrily commented: ‘you can’t mess with my emotions; it’s like messing with me!’9

The issue of trust also rings bells here. The breaking of trust is an emotional matter par excellence, whether mediated by security surveillance or social networking sites. Trusting relationships, as both Georg Simmel and Erving Goffman taught, depend in part on the person’s ability to manage their visibility, however partially, before others, whether individuals or organizations.10

It is important not to forget that, while expressions of anger or irritation come from the lips of individuals, emotions are not only a psychological – still less merely a physical or chemical – matter. A sociological understanding places emotion in a social context. Of course emotion is subjective, but in this view it is also intersubjective, arising from encounters with others. Emotion is an aspect of daily life, as we interact with friends, associates, neighbours, family. In her classic The Managed Heart, Arlie Hochschild sees emotion as being rather like language and thus ‘best understood in relation to its social context’.11 In other words, emotion may helpfully be thought of in ‘interactionist’ fashion.

Thankfully, several scholars have addressed the question of emotion in relation to surveillance, from studies of how surveillance operators are affected by their work,12 through to why questions of privacy should not only be examined in abstract, legal ways. On the latter point, the studies of Julie Cohen stand out, in which she stresses the need for considering how people under surveillance are embodied, subjective persons, in relationships with others and themselves. They are socially constructed selves who continually engage in efforts to manage the ‘boundaries’ between themselves and the surveilling eyes of government, corporation and, we shall suggest, other internet users.13

So-called boundary management is the way of limiting the visibility of others.14 But this does not occur in universally similar ways. Both Valerie Steeves and Priscilla Regan take this up in discussing the ways that young people value opportunities to limit access to their online activities.15 In this case, the emotional aspects of online users are threatened by unwanted surveillance because it exacerbates the sense of vulnerability to others within the general stresses of social interaction.

Negotiating airport security

There is little doubt that you are under scrutiny when you go to an airport. You expect to have your details checked at the airline desk or electronic check-in. You know you have to go through security. There, your boarding card is verified and your hand luggage goes through the scanner while you walk through the electronic archway and possibly get wanded as well. If you are at London Heathrow or some similarly equipped site, a biometric check is added to the mix. You have to stare into a camera for a second or two. And if your destination is in another country, customs and immigration services will also require more data. No one will be surprised to learn that you are also watched by video cameras, even though you may have been too preoccupied with the other searches to notice where they are.

In Canada, those video screens may be viewed by operators at the airport, but the images from major international airports are also available in the headquarters of the Canadian Air Transport Security Authority in Ottawa. So while you struggle through security in, say, Vancouver, someone 3,550 kilometres away may be watching. Not only that, they can also check their screen to see the X-ray of suspicious objects in your bag.16

In the US, successful tests of ‘whole body imagers’ led the Transportation Security Administration to decide in 2009 that these be phased in, in preference to walk-through metal detectors. They provide X-ray images of the unclothed body, seen only, passengers were told, by operators at a remote location who were unable to connect the image with the person being scanned.17 These machines were later modified to produce clearer images of inanimate objects and less distinct ones of bodies. The same kind of equipment is in use in Canada as well, although across the Atlantic the European Parliament rejected whole-body scanners in 2008.18

The airport experience is very much one of performance. Not only is there a heavy emphasis on acting, but for some, notably so-called visible minorities, the performance must be practised beforehand to ensure that suspicions are not inadvertently aroused. Skin colour is a prominent issue here, with many passengers worrying that their brown skin or ‘Middle Eastern’ appearance will impede their progress through security checks.

Some families approaching the X-ray machines at Pearson International Airport in Toronto warn each other – ‘OK, speak only English until we’re on the other side.’19 As Rachel Hall notes, these everyday surveillance rituals are part of a broader, coercive performance of risk management. Potential suspects are obliged to perform their innocence by showing that they cannot be construed as a threat to other passengers or to the aircraft itself.20

Hall’s work shows how important performance is to how airport security works. What she dubs the transparent traveller exudes the imaginary and embodies the practice of submitting to surveillance. This is how it works. Airport security logic may be seen as what Michel Foucault calls biopolitics, where people are governed by determining if they fall into one or another population category. This is not the discipline of individuals, which Foucault also explores, but politics expressed as administration at the level of the population.21

The exceedingly small chance of death by terrorism is played up by the ‘security theatre’ logic of the airport such that passengers, in turn, are all encouraged to play their parts appropriately. Those who, for whatever reasons, but especially due to their racial differences, citizenship, age, ability or religion, are less adept at producing a convincing performance experience delays, detainment or even refusal to board an aircraft.

In terms of surveillance culture categories, surveillance imaginaries prompt practices that are not merely suited to the context but that also keep that context – airport security – functioning smoothly. Thus so-called security theatre is not merely a show for spectators. Hall shows that performance helps to constitute what is known of security measures. It is not enough for people to prove only once that they are trusted travellers; they have to perform each time they arrive at the airport. If imaginaries include a phrase like ‘I guess they have to do it to keep us safe’, then passengers are likely to continue with the idea that attacks can be prevented, even though proving that prevention occurred is ultimately impossible. In this way, says Hall, ‘we become co-creators of a shared reality.’22

In other words, airport travellers become complicit in the security theatre with which all have become familiar. And for all the talk of ‘clearing security’, it is not obvious that this is ever really possible. Security is an ongoing performance, for the whole cast. Everything hangs on what Hall calls the ‘aesthetic of transparency’. To appear transparent, passengers perform their innocence and show that they are willing to open themselves to inspection and checking. Who has not observed passengers who routinely remove their shoes and belts, even when there is no instruction to do so? Woe betide those who are less than transparent, for whatever reason. The cloudiness of their performance singles them out to wait longer in line, to be searched again.

Chilling effects

In the years after 9/11, many murmured about the chilling effects of ramped-up security surveillance. But that murmur became much more internationally audible after Edward Snowden’s exposure of the NSA’s intrusive surveillance, often carried out on people who were suspected of nothing. Among other things, it led directly to a lawsuit against the US Department of Justice and the NSA, in March 2015. The suit cited a chilling effect that stifled freedom of speech and the free exchange of ideas on Wikipedia, the collaborative online encyclopaedia, but was dismissed for lack of evidence of objective harms, just as earlier suits had been ignored for their ‘speculative’ character.23

Since then, however, it has become clearer that a chilling effect does indeed accompany the rapid expansion of pervasive surveillance of the kind that Snowden uncovered. Against the supposition that intensified government surveillance would have no significant impact, given public accommodation to surveillance generally, Jon Penney sought evidence that self-censorship does indeed occur.24 He asked whether search traffic for articles on privacy-sensitive topics decreased after the shock of discovering the nature and extent of the NSA’s surveillance activities. He found a significant drop in such searches, and a lasting general shift in what Wikipedia users were willing to look up, following Snowden’s disclosures. In a sense, this could be read as a pulling away from performance, so this section is more about imaginaries and practices cooled with caution.
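
The intuition behind such a study can be caricatured in a few lines of code. What follows is only a toy sketch of the underlying idea, a before-and-after comparison of article traffic around a disclosure date; Penney’s actual study used far more careful interrupted time series analysis of Wikipedia pageviews, and the dates and counts below are invented purely for illustration.

```python
from datetime import date
from statistics import mean

def chilling_drop(views, cutoff):
    """views: list of (date, count) pairs; cutoff: date of the disclosures.
    Returns the fractional drop in mean views after the cutoff."""
    before = [n for d, n in views if d < cutoff]
    after = [n for d, n in views if d >= cutoff]
    return 1 - mean(after) / mean(before)

# Hypothetical monthly view counts for privacy-sensitive articles,
# around June 2013 (the start of the Snowden disclosures).
series = [
    (date(2013, 3, 1), 1000), (date(2013, 4, 1), 1040),
    (date(2013, 5, 1), 980),  (date(2013, 7, 1), 760),
    (date(2013, 8, 1), 740),  (date(2013, 9, 1), 720),
]

drop = chilling_drop(series, date(2013, 6, 1))
print(f"estimated drop: {drop:.1%}")  # prints "estimated drop: 26.5%"
```

A real analysis would, among other things, control for seasonal trends and test whether the drop is statistically distinguishable from ordinary fluctuation, which is precisely what the interrupted time series design is for.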

The idea of chilling effects has been around for decades, especially since the Cold War years of the mid-twentieth century. It assumes that people may be deterred from expressing certain views in public for fear of punishment. Legal scholar Daniel Solove weighs in here to propose that chilling may not only be related to the possibility of punishment but also to being embarrassed, labelled or being further tracked by authorities.25 Either way, such chilling would in principle have an impact both on individuals and on the democratic process in general, dependent as it is on open access to information of all kinds. Wikipedia is an ideal site for such research, given its huge popularity and widespread, everyday use.

Research like Penney’s and Solove’s contributes much, not only to the legal process of determining whether or not a ‘chilling’ challenge can be mounted in relation to self-censorship, but also to our knowledge of how government surveillance affects how people obtain and share online information. This is a crucial question for the health of any democracy. It has been explored in several contexts since the Snowden disclosures began, not least in relation to the writers’ group PEN, and more generally among ordinary internet users. PEN – ‘Poets, Essayists, Novelists’ – was founded in 1921 to promote literature, literary cooperation and freedom of expression, as well as the freedom to write and to read.

First, in 2015, PEN International launched a report on post-Snowden chilling, based on feedback from almost 800 authors living in fifty countries. Because of the importance of free expression to their craft, writers have been described by the novelist E. L. Doctorow as the ‘canaries in the coalmine’ when it comes to the impact of surveillance on privacy and free expression in society writ large.26 The report showed that concern about surveillance is nearly as high (75 per cent) in democracies as in non-democracies (80 per cent). Self-censorship reported by writers – now including bloggers and other internet users – in democratic countries approaches levels reported by writers in authoritarian or only semi-democratic countries. And writers in many countries say that mass surveillance has deeply damaged American credibility as a global champion of free expression. As one British writer noted, ‘most UK citizens are now under levels of surveillance that would make the Stasi seem amateurish.’27

Accepting this as part of their surveillance imaginary, many writers now engage in self-censorship for fear that their communications will be monitored by a government authority. As one example of this surveillance practice, 33 per cent of US writers reported deliberately steering clear of certain topics in personal phone conversations or email messages, or seriously contemplating doing so. Similarly, the proportion of writers in free countries equalled that of non-free countries in limiting their internet searches or website visits for ‘controversial or suspicious’ topics. These are striking findings, which the authors of the PEN report hope might lead to a curtailing of mass surveillance in the US and elsewhere.

Also in 2015, the Pew Research Center made an extensive study28 of American attitudes that showed a nation deeply divided on the question of whether the US had gone too far in restricting civil liberties or, as was previously the view, not far enough in protecting the country from terrorism. In 2013–14, however, the balance of concern switched towards civil liberties for several months, following the Snowden revelations. Curiously, at that time, Republicans and Democrats were equally concerned about the impact on civil liberties. After this, however, with the emergence of ‘Islamic State’ or ISIS and actual attacks in the US, the proportion of those saying not enough is being done to counter terrorism rose again, especially among Republicans.

Interestingly, a large proportion of internet users (86 per cent) were taking steps to remove or mask their digital footprints, and many wished to do more but were not sure which tools to use. They clear their cookies, avoid using their real names, mask their internet protocol (IP) address or, in some cases, encrypt their email. And quite a sizeable group (55 per cent) sought ways to avoid observation by specific people, organizations or the government. This included worries about social surveillance as well as what corporations or government might be doing with their personal data. Generally, there was a lack of confidence in the security of everyday communication channels and a lack of trust in all kinds of organizations to protect their data.

The chilling effect may also be seen through the lens of ethnographic studies of particular groups, for example Muslim Canadians. In Tabasum Akseer’s study of security surveillance and this minority group, young people freely expressed their sense of being chilled. The kinds of experiences they report reflect the unevenness of treatment that exacerbates the chilling effect on some segments of the population, but they also drive home both the gut feeling involved and the surprising buoyancy of some responses:

You know you just have to watch yourself. Don’t be too loud, don’t hang out with more than 3–4 Arabs, [laughs again] and don’t grow your beard too long! Those are typical things that scream – I’m a Muslim, look at me so watch out, and control the image that you give up.29

All of which suggests that the chilling effects of so-called mass surveillance are real and widespread and lead to uncertainties and fears that disturb the normal routines of everyday life. Canadian Muslims certainly worry about how their internet use might erroneously incriminate them:

You don’t know who is watching you [on the internet] … Now that you know that someone’s listening to you, or someone’s recording you, how are you going to react? Now people have to think of what they post, they can’t just be free … I try not to be one of those people …30

The clear majority of Americans polled by Pew were keen to be in control of what information is collected about them. At the same time, most also admitted that they struggled to understand what information is collected on them or how it is used, although this is less a feature of younger respondents. The latter also do more than others to restrict their online visibility. In general, the chilling effect seems to promote a state of fear that is hard to pin down and appears to ebb and flow in unpredictable ways.

The data dragnet and surveillance culture

So, passengers expect to be watched at the airport; this is a commonplace aspect of surveillance imaginaries. The airport is a peculiarly focused surveillance zone. You are obliged to go through security and you cannot but be aware that you are under observation at several levels simultaneously. And it is not surprising to hear of ‘chilling effects’ of surveillance, especially among writers and journalists. After all, their very livelihoods and often their sense of identity are bound up with how they work with words, seeking their stories and crafting them for their audience. Surveillance does elicit emotional responses, and it does provoke performances of various kinds.

But some people might be surprised to discover both how much is seen and the ways in which ordinary people are seen, and how they – wittingly or unwittingly – allow themselves to be seen, playing a role in what is actually seen. If it were not for the fact that novelist Franz Kafka’s The Trial is about a mysterious police probe into the life of its protagonist, for reasons that he cannot fathom and that leaves him increasingly frustrated and frightened, it would be a good image for consumer surveillance in the twenty-first century.31 Consumers may have a vague sense that they are profiled and tracked but generally lack inside knowledge of how their data are collected, compared and calculated, let alone why and with what effect.

Thus what can be known about consumers’ surveillance imaginaries and practices is more limited than what is known about those of either airline passengers or journalists. What evidence there is often comes from sensational stories about massive data breaches – even though these occur more often in public services – or highly intimate disclosures that embarrass family members or those with romantic attachments. I shall refer to one of these later, but the 2015 case of Ashley Madison, in which hackers published details of 30 million users of a dating site aimed at people who were already married, is a case in point.32

When security agencies defend their strategy of vacuuming up data and ‘collecting it all’, the notion of a data dragnet sounds plausible. Yet you never need to go near an airport or complain publicly about the government to be under scrutiny. Surveillance is ubiquitous. Some is literally visual, done using cameras, but much surveillance does not involve literal watching at all. You are ‘seen’ in your bank records, cellphone calls, bus passes, workplace IDs, loyalty cards at the supermarket, passports, credit cards, healthcare and social security numbers, and on Google, Facebook and Twitter, only some of which have any visual dimension. But a lot of personal data can be seen. Some, as from airport security cameras, are fine-grained; others, as from whole-body scanners, quite intimate.

However, the data-dragnet metaphor applies just as much – if not more – to the corporate sphere of consumer surveillance as to security agencies. In order to grasp how the emerging surveillance culture develops, it is vital to have some sense of how personal data became so valuable, so avidly sought after and so difficult to imagine or respond to. Today’s watching is driven by new dynamics. In part, it is an economic logic: personal data are assiduously sought by consumer corporations and marketers, making them valuable as never before. An organizational logic also plays into the mix, for example, moving management from risk to precaution. Plus, of course, a techno-logic; many organizations as well as, now, individuals have access to the means to amass and process personal data that was never available to the legendary little old lady behind her net curtains. Behind all these lies a cultural reliance on watching and visibility; to see – and, as discussed in chapter 4, to be seen – is to be sure. Supposedly.

In a world of much-hyped Big Data, fine-grained fragments of information are sucked up, stored, combined, analysed and interpreted in ways that do reveal patterns of life but are also subject to much potential error, especially when they are used in an attempt to predict behaviour. A classic example is when the megastore Target inferred from purchases of lotions, supplements and cotton balls that a Minnesota teen was pregnant. Her father, displeased, complained, but it turned out that Target knew better than he did. She was indeed pregnant.

However, as maths professor Jordan Ellenberg points out,33 the creepier scenario is when the algorithms get it wrong. As he argues, Netflix and Amazon do not have a very high rate of success, but it is enough to satisfy, usually. If similar algorithms are deployed by predictive policing programs or ‘homeland security’, the dangers are obvious. At the same time, as we shall see, the customers’ own imaginaries and practices play into this, especially but not exclusively as people perform on social media.

Supermarket surveillance

The same kind of social sorting logic applies in supermarkets and superstores as in airports. To get a sense of this, I look at a British, an American and a Swiss supermarket chain and a Canadian hardware chain, before turning to how consumer surveillance is experienced, understood and responded to by customers.

Tesco, a major British supermarket chain, runs a database, through a subsidiary, called Crucible that profiles every consumer in the UK with personality details, travel habits, shopping preferences, degrees of greenness and levels of charitable generosity.34 Crucible says that in a ‘perfect world, we would know everything we need to about consumers … attitudes, behaviour, lifestyle. In reality we never know as much as we would like.’

The Tesco subsidiary uses a software system called Zodiac for ‘intelligent profiling and targeting’ so that, together with Crucible, a map is generated of how individuals think, work and shop. Consumers are classified into one of ten categories: wealth, promotions, travel, charities, green, time poor, credit, living style, creature of habit and adventurous. The Clubcard is used as one data source, but added to that are huge international data brokers such as Experian, Claritas and Equifax, and public sources such as the electoral roll, Land Registry and the Office of National Statistics.

Why is all this worthwhile? Simply because the company knows better where to focus its energies, which customers to court with offers and which to ignore. Data analytics has become a watchword at Tesco, with an increasingly complex set of categories into which customers are placed, and the company is widely viewed as a leader in the field. It organizes the databases carefully, to avoid conflict with the UK’s Data Protection Act, but it is remarkable just how fine-grained the data are.35 And curious how such an inscrutable scheme is constructed in the name of ‘care for the customer’.

In Canada, an executive at Canadian Tire, the electronics, sports, automotive and kitchenware chain, decided back in 2002 to make a precise inventory of all credit card sales.36 What could this show? Well, people buying generic motor oil were less likely to pay debts than those buying brand names. People buying carbon monoxide monitors or felt feet for furniture to protect floors would also pay back quickly, unlike those buying chrome-skull car accessories. Drinking in a particular bar in Montreal was associated with bad risks, while buying premium birdseed was not.

Psychological profiling was the next step: folk who buy felt feet protect their belongings and their credit scores as much as their hardwood floors. This way, the company knows whom to spend time on when they register for baby showers or weddings, and whose credit lines to cut when their cards turn up in pawn shops or pay for marriage therapy. Such uses could easily flag customers as bad risks, especially if they were also to lose their jobs.

The use of personal data for all manner of purposes, often well beyond what might be imagined, shows that it makes sense to speak of this as surveillance. Personal lifestyles can be ‘seen’ in fine-grained and intimate detail through the profiles constructed not only by police or security officials but also by corporations. Of course, the customer may not agree that hanging out in a particular bar is associated with not paying credit card bills, but to the company that shopper is part of a statistical set that cannot fully be trusted. Here, too, we can discern those three logics at work, the economic, the organizational and the technological. Those personal data are surprisingly valuable (there is an extremely big market for them), especially to organizations keen to reduce the risks they face (like defaulting credit card holders), and smart software and statistics are available to help maximize their use (this is part of the ‘techno-logic’).

Customers of all kinds inhabit a closely watched world, both visually – think of those ubiquitous video surveillance cameras, frequently in stores – and virtually – think of the digital profiles by which customers are seen metaphorically. The video image has its consequences but so, equally, does the digital image. Today’s surveillance imaginaries are often only hazily aware of the fact of ongoing surveillance, and fuzzy about how it actually operates. Interestingly, however, when chain stores started involving customers in their own monitoring, this opened a crack in the door through which those who took up such opportunities could see better what was happening. So why do companies use these kinds of surveillance?

Along with security agencies, database marketing was perhaps the most obvious site for surveillance to be found from the 1990s onwards. One data source for this was loyalty card programmes, a domain where data appetites mushroomed.37 Such surveillance appears as a form of biopower, where data mining is utilized to reveal patterns of consumption, as in the Canadian Tire example. Recall that biopower is Michel Foucault’s term for the means used by institutions of all kinds to regulate people’s lives. As he explains, it is the ‘administration of bodies and the calculated management of life’.38 It is how populations and groups are dealt with, beyond the individual disciplining power of surveillance of which he wrote elsewhere. What is more, this marketing biopower is strengthened by emerging projects which aim to use consumers’ data for purposes beyond pure marketing, such as obesity prevention or monitoring the intake of food additives.

In 2008, for instance, the American retail store Safeway launched the now discontinued Foodflex programme, an initiative enabling consumers to monitor their own nutrient consumption.39 The technology was able to make personalized product suggestions so that participants could improve their diets and consumption profile. These recommendations were in accordance with US Department of Agriculture dietary guidelines. In a similar way, Migros, the major retail chain in Switzerland, runs what it calls the Famigros programme.40 This scheme offers advice on how to achieve things such as consuming healthily, losing weight or feeding a newborn baby. These recommendations are derived from official governmental sources. In these ways and others, biopower, based on data derived from both government and corporate sources, may be coupled with data from those enrolled in performance-improvement schemes to enhance that ‘calculated management of life’.

All such schemes, now also seen in various available apps, depend increasingly on consumer interaction with the analyses. Feedback loops are created in which the data collected may in turn be appropriated by those who are the subjects of the data, thus leading to fluctuations in the datasets themselves. As consumers discover how the corporations work, so they join the game. A curious twist to this, from the point of view of the social sciences, is the way that those whose behaviour is analysed may alter their practices to the extent that the original analysis must be modified. In the later part of the twentieth century, this kind of process was used to show how the social sciences differ from the natural sciences. In the human, rather than natural, sciences, research subjects have the chance to learn of the research and modify their behaviour accordingly.41

Despite some legal and technical limits, personal data streams went from a trickle to a flood such that it is now impossible to follow all the conduits and rivulets back to their source. As well, new techniques of data capture and, especially, analysis were developed, often under the ‘Big Data’ banner. ‘Metadata’, a term not noticeably in the public domain until Snowden’s leaks, became a crucial source not only of security-related intelligence but also of commercial surveillance. Where some might once have worried about the content of specific photos, videos, texts, messages or calls being publicly displayed in contexts where they might be compromising or embarrassing, now the data dragnet gathers all kinds of apparently trivial fragments of information, including dates, times, locations or call durations relating to communication or transactions, that can be analysed for patterns and trends more easily and sometimes more accurately than combing through content itself.

We know you’re watching

Surveillance imaginaries only exist insofar as there is some awareness of being watched and what that might mean for one’s place in the world, one’s opportunities or one’s limits. While some watching is surely surreptitious and while there may be some spheres of surveillance about which internet users are unaware, few have missed at least some signs of surveillance culture. One does not need that sixth sense that alerts us to a hidden watcher; the cameras’ tinted domes decorate the ceiling and advertisements pop up on the screen relating directly to some keywords in the freshly sent email. And, as in the case of weight-watching or healthy eating, some surveillance data are modified in real time through the data subject’s conscious responses to it.

If, for instance, you drive onto the toll highway ramp, like Highway 407 in Ontario, without going through tollbooths (because there are none), the cameras will capture your licence plates. Even though the car carries no windshield transponder, the bill will drop into your mailbox anyway. You may order a pizza from a big chain but be aware that they know your topping preferences just from the phone number. Indeed you may have to make it quite clear that this time you do not want ‘the usual’. There is, in other words, some knowledge of being under surveillance that prepares you for a bill from the toll company or reminds you to be specific in ordering pizza.

So knowing that you are watched is an important aspect of surveillance culture. Surveillance today may in some cases be covert but in many situations the subjects of surveillance are aware of what is happening. The data-entry clerk or call-centre operator knows that keystrokes are counted and that calls are monitored for quality control purposes. The trucker is aware that the truck carries a ‘How’s my driving?’ sign on the back, by which passing motorists may call the employer when some particularly bad – or good – driving is observed. The consumer wandering the street, window-shopping, can clearly see the sign saying that the store is under video surveillance, just as the internet surfer is aware that the sites visited often include their privacy policies that explain what personal data gleaned by the owner may be used for.

Among those who are aware they are watched, changed behaviours may result. My local drug-store clerk never fails to ask if I have an Optimum Card – a loyalty programme membership – when I make a purchase, to which I invariably reply, no, and then, no thank you, when I am asked if I would like one. Other stores often ask for telephone numbers and postal codes, and again, one can hear varied responses. Some simply comply, while others ask why it is necessary or just refuse. Such refusal depends, of course, on prior knowledge – an element in a surveillance imaginary – that such apparently innocent data may become a key for connecting with other personal data. And questioning such everyday surveillance – a surveillance practice – is, from a privacy perspective, a commendable response.

However, such acts of questioning or resistance, while important, are unlikely to be very effective in themselves, given the huge imbalance of power between the organizations that carry out surveillance and the solo refusenik. Without doubt, some acts of citizen surveillance may be significant on a larger scale, as when photos or video are shot by protesters during demonstrations, such as those following the disputed election in Iran in 2009, or in the spate of killings of black men in the US that became notorious from 2012 when a neighbourhood-watch volunteer killed teenager Trayvon Martin.

But, more often, acts of self-protection – which are also signs that people are aware of surveillance – such as putting privacy settings on ‘high’ on Facebook, covering the laptop lens with tape or having the phone camera at the ready downtown ‘just in case’, are unlikely to have much effect. While citizens are often encouraged by many means to protect themselves, the larger, truly political question is whether or not ordinary citizens can challenge surveillant organizations to care appropriately about the personal data they handle. The accountability for doing surveillance is far more significant than our personal preferences and practices relating to whether or not we are surveilled.

We have more to say about resistance to surveillance, but at this point it is worthwhile examining another phenomenon: the ways in which surveillance may be not so much avoided as adopted. Surveillance cultures seem to spawn other levels of watching. Rather than eschewing surveillance, some people seemingly embrace it.

You can watch too

You are watched and you know it but do you care? Another dimension of surveillance cultures is that people respond intentionally to surveillance. As we shall see, some even do surveillance for themselves. Perhaps as a response to the question of why large organizations should have a monopoly on monitoring, but more likely for mundane reasons, ordinary people start to watch too. The technologies for so doing are already at hand. This includes using smartphones to find others using GPS technology, checking through social media networks such as Facebook to discover details on neighbours, colleagues or friends, installing nannycams to keep an eye on the babysitter, or snooping into your children’s internet surfing.

Extreme examples of reversing the gaze, or watching back, include Steve Mann of the University of Toronto, who has a camera secreted in his glasses, and, even more imaginatively, Rob Spence, also in Toronto, who dreamed of having a digital video camera installed in his artificial eye. Mann is a walking history of prosthetic technologies for extending his visual and audio capacities; he has spent more than twenty years perfecting the craft. He will turn his wearable camera lens on, say, overhead shopping mall cameras and record his consequent exchanges with retail staff. Rob Spence, originally from Belleville, Ontario, is a filmmaker who aspires to be what he calls an ‘eyeborg’, watching and recording those in his field of vision in order to make people more aware of the surveillance that is everywhere.42 He has had several upgrades to the original video ‘eyeball’ such that it now resembles any prosthetic eye.43

I suspect that few aspire to be eyeborgs, but this does not mean that people never engage in do-it-yourself surveillance. If you google the word surveillance, and even more, if you add the word camera, plenty of suggestions come up about why you need your own surveillance system and which company will sell you the equipment for installing your own. ‘Protect your home and business from vandals,’ offers one. ‘Free shipping, free tech support,’ sirens another. And of course, for bargain hunters a ‘discounted selection of new and used surveillance cameras at low prices’ is promised on eBay. The idea is to be able to check on who is at the door or on the property, or alternatively to keep an eye on those you know are there, such as your children’s nanny or your pets. Harold Hurtt, former police chief in Houston, Texas, thinks that domestic surveillance cameras are such a good idea that they should become part of the building code.44

While Harold Hurtt may have public safety as a priority, in North America most people deploying their own surveillance systems do so from simple self-interest or to protect family members from harm. There is much activity in the realm of what might be called snooping or spying on others, especially where parents are concerned. According to a Pew internet and American life study, more than half the parents surveyed said they used either monitoring software to inform themselves about their children’s online activity or an internet filter to block access to inappropriate sites.45

In Brazil, however, setting up one’s own surveillance system may have much to do with your opinion of public surveillance organized by the police or the city. Many Brazilians opt for private security systems because they do not trust the public ones. They are suspicious of the motives and operations of the police and the judiciary. Doing one’s own surveillance seems better suited to the interests of the family and the community.46

So the surveillance culture not only becomes visible in the large corporations using high-tech means to track and monitor the daily activities of consumers, but may also be seen in consumers’ awareness of and responses to surveillance. Even more strikingly, the culture of surveillance is evident when ordinary people start to use surveillance themselves in order to organize their lives, protect their homes or family members, or to check up on what their partners or children may be doing. Or parents. Some families in the US have agreed to have aged relatives who are suffering from Alzheimer’s or some other memory-inhibiting disease implanted with a radio-frequency identification (RFID) chip to try to prevent them from wandering too far or to find them when they are lost.47

At the same time, popular media may ring some warning bells about the limits of aping the collect-it-all mentality of government agencies or consumer corporations. In ‘The Entire History of You’ episode of the Black Mirror series, it is precisely the exploitation of the capacity to record and play back everything that makes relationships unravel, devastatingly.

Today’s surveillance culture is unprecedented. Never before has so much time, energy and money been invested in watching others, and never before has such watching been so consequential.

Now we have surveyed some of the key dimensions of surveillance culture, what may we conclude? In our everyday lives, people are watched in an extraordinary number of ways and contexts. But they are also increasingly aware that they are watched and in some respects appear to have made their peace with this. Indeed, inspired perhaps by the prevalence and availability of new surveillance devices, many are even prepared to adopt some surveillance strategies for themselves. This is clearly not merely about ‘us and them’ where the watching is all top-down. This is why we can speak of a surveillance culture. Surveillance has become a way of life, a key aspect of how we think about the world and operate within it on an everyday and sometimes almost unconscious basis.

Nothing to hide, nothing to fear?

At the start of this chapter, I promised to comment on a common phrase – nothing-to-hide-nothing-to-fear. Probably originating in a biblical text – rendered in contemporary English as ‘Decent citizens should have nothing to fear’48 – it features prominently as government propaganda in many countries, encouraging the ‘innocent’ to believe that however surveillance systems are reinforced, their lives will not be affected. And judging by opinion polls and anecdotal evidence from everyday conversations, many accept this as part of their surveillance imaginary. It follows that their surveillance practices will also reflect such ideas.

The nothing-to-hide-nothing-to-fear phrase is intrinsically attractive; people have become accustomed to surveillance, may have come to terms with aspects of the new reality, are complicit in user-generated surveillance, and many even set up their own private surveillance systems. None of this, however, means that some basic questions are not in order. The idea of a surveillance society was once associated with police states, with repression, and was rightly seen as repugnant. Within these, having nothing to hide certainly did not keep fear at bay. Orwell’s classic novel Nineteen Eighty-Four begat the language of Big Brother to describe a state of affairs in which innocence itself was impugned, one to be resisted at all costs.

But, Orwell notwithstanding, the surveillance society arrived, not wearing the heavy boots of brutal repression, but the cool clothing of high-tech efficiency. It came not from an authoritarian state49 but from companies claiming to know their customers better so that they could provide just the goods and services they wanted. It showed up not as a looming telescreen with the fearsome face of Big Brother, but on a million screens of social networking sites and of handheld devices marketed as convenient, cost-effective and customized. Surely in this world, having nothing-to-hide-nothing-to-fear makes sense?

The fact is, today’s surveillance is deeply ambiguous. Efficiency, convenience and customization are not generally seen as enemies. Nor are they just a Trojan horse, a gift designed to conceal an enemy. No, to some degree genuine social benefits are available from the new surveillance technologies that also promote and extend questionable modes of monitoring and tracking. This is true of surveillance systems run by both government agencies and consumer corporations. The crucial question to ask about the new visibility is how the system actually works and how it operates in different contexts.

Once, in societies that accepted the rule of law, in which the vital presumption of innocence was upheld, it was fairly safe to assume that if you had nothing to hide, you had nothing to fear. Authorities had the right to search out and make visible only those who might be hiding something. Of course, no system is perfect, but in general and in many countries this rule could be trusted. Today, the assumption that if you have nothing to hide you have nothing to fear is being systematically undermined by the new surveillance.

Of course, even such undermining is not a uniform process. If we return to the kind of Canadian experiences that informed the earlier discussion of airport passengers, it is clear that opinions about surveillance differ. While more open and flexible outlooks have grown in Canada generally over several decades, more than isolated pockets of the population now worry about the risk of terrorism and associate such risks with newcomers.50 A third of Canadians now say they change their travel plans because of potential attack risks, and one in four think that police should be allowed to listen in on phone calls or read texts without a special warrant. The same people are more deferential to traditional authority.

On the other hand, two-thirds of Canadians are more open to immigration, less accepting of police snooping, and believe that crime-and-justice responses to terrorism are more appropriate than augmenting police powers of surveillance without increased accountability or turning to violent measures. They are also more willing to learn from diversity and do not automatically defer to authority. Clearly such shifts in what pollsters call values also lie behind evolving surveillance imaginaries and practices.

Remember the case of the bar and the birdseed? Where you choose to drink in Montreal may flag you as a bad risk in seeking credit. You are simply caught in the wrong company according to Canadian Tire. Remember Tesco-Zodiac’s ‘intelligent profiling and targeting’? The whole point of this is to place people in consequential categories so that they can be treated differently. The same goes for the ‘webwise’ software that classifies people by browsing habits, not just by their visits to one page. You are where you surf. From such automated assessments, decisions are made about everything from creditworthiness to levels of after-sales service, and from internet speed to the ability to keep a bank account. And if you are already marginal or disadvantaged, the system will ensure that these vulnerabilities are magnified, through the effects of what Oscar Gandy calls ‘cumulative disadvantage’.51

But it does not stop there. These kinds of classifications are also used by police, intelligence services and other authorities. After 9/11, the agency that would become Homeland Security made a priority of calling customer relationship management companies to help them locate and identify not potential customers but would-be terrorists.52 Classifying, clustering and excluding certain groups work here too. And the same strategies trigger the inclusion of innocents on no-fly lists and watch lists, ordinary homes within police hotspots in the city, and peaceful pedestrians as ‘possible suspects’ as they walk under cameras, shopping or simply coming home from work. This is how Canadian high-school students like Alistair Burt have been refused permission to board planes when on family holidays, and innocent citizens like Canadian ‘suspects’ Maher Arar, Ahmad El Maati, Muayyed Nureddin and Abdullah Almalki ended up in Syrian torture chambers in 2002 and 2003.53 Even without reasonable grounds, and with nothing to hide, you can still end up as a suspect.

Yet, because there may be apparent benefits of some kinds of surveillance and because it is so hard to know how some kinds of seemingly trivial data – birdseed? – could possibly make a difference, there is relatively little outcry against surveillance overreach, allowing it to grow almost unimpeded. As well, the positive effects tend to benefit some people, while the negative effects are felt predominantly among those who, because of their economic situation, ethnic background or gender, are already disadvantaged. On the other hand, as the examples of Montreal drinkers or teenage holidaymakers show, anyone may be affected negatively. There are some real safeguards, such as privacy and data protection laws, but they tend to be effective only in extreme cases, when there is some obvious or well-known violation or intrusion.

Most of the time, the real risks of this new surveillance affect people negatively when those systems are working correctly, for their intended purposes, and within the limits of law. Social sorting, especially when it uses searchable databases and networked communications, works through automatically categorizing people’s data so that different groups may be treated differently. Simply being in a statistical group sets you up for inclusion or exclusion, access or denial. Having nothing to hide simply does not help to protect you from any negative effects that surveillance might have.

Conclusions

Starting with performances at airport security and the chilling effects of state surveillance for internet users and journalists, this chapter has explored some facets of surveillance culture in relation to some rather familiar forms of convenience, caution and compliance. Clearly, it is not just state agencies but also corporations that are involved in surveillance of rather extensive and intensive kinds. The latter, corporations, may well be more, not less, significant than surveillance by state agencies. Many understand this, too, but also comply, this time for the sake of convenience and the possibility of reward.

In the following chapter, I examine the ways that surveillance is far less obvious than in airport security or loyalty card systems. In much-hyped ideas such as the internet of things or smart cities, surveillance seems to change character to become much more taken for granted. It becomes, as it were, part of the furniture, and as it does so, aspects of the culture of surveillance mutate as well. This is not just because the part played by the machines has become less perceptible. It may be because the very contexts within which surveillance occurs are metamorphosing, in ways that it would be unwise to ignore.

Notes