Conclusion

By some lights, the conclusion to a book tracking the history of privacy in modern America should be brief: an epitaph for something that was always going, even as it arrived, and is now definitively gone. By the early twenty-first century, privacy was finished, many said. Or at least, its historical arc was by that point clear, more than a century of efforts to carve out a sphere for individual solitude and sovereignty from the insistent demands of modern social organization at an end. Experts in a wide range of fields, including technology, business, law, media, and behavioral science, subscribed to this view. Privacy was quaint, outmoded, or dead, the victim of a relentlessly knowing society’s practices of governing, selling, reporting, and discovering—coupled with the citizenry’s willingness to go along. Yet, the very statement, laced with regret, betrayed a strong allegiance to a realm where the person might be free of scrutiny. To decry privacy’s end, as had been true in the late nineteenth century, was an investment in the concept. And not everyone was convinced that privacy’s clock had run out.1

The early decades of the twenty-first century arrived with their own distinctive brand of privacy talk. Instead of instantaneous photography or brainwatching or the record prison, the conversation teemed with clouds, mosaics, and algorithms. Rather than eavesdropping or diary snooping, Americans worried about geolocation tracking and Facebook account tampering. If the particulars were new, the themes were familiar: Who should know us and how?—along with its corollary: What might be altered by others’ possession of that knowledge?

The questions are as fundamental today as they were in 1890 when Samuel Warren and Louis Brandeis penned their famous article. Indeed, in the age of big data, when many parties traffic in “personally identifying information” and all are alert to its value, the problem of the known citizen has moved to the center of public consciousness as never before.2 Commentators warn that we are nearing the tipping point to a completely “transparent” or “post-privacy” society; others, that we have already tipped.3 CEOs and political leaders openly question the feasibility of personal privacy, and scholars and technologists ponder whether there is any return from a world where citizens are already known so well.4 Privacy, at least in this form, is palpably present in American public life.

The explanations proffered for the current privacy crisis often center on the poor choices made by individual users of social media and their array of connected devices—as if the only choice for citizens facing a “fishbowl life” is that between “excessive caution or foolhardy fearlessness.”5 Conducting one’s life on the web has been billed as a kind of hoodwinking: the great “privacy give-away” or the “offer you cannot refuse.”6 Others name the culprit as corporations’ rapacious profit-seeking drive to know us, or the state’s urge to make citizens legible in order both to punish and protect. The question of whether Americans’ valuations of privacy were changing or were being forced to change under technological, governmental, and corporate pressure, however, was difficult to unravel. A recognition of privacy as a fundamentally subjective concept was built into the Supreme Court’s language in Katz v. United States specifying a standard of “reasonable expectation.” With inexpensive aerial drones available for purchase and copious information about just about anyone available in a quick Google search, would those expectations need to shift, and if so, how much?7

Those with a vested interest in unlimited information sharing, notably the new social media behemoths, declared that Americans’ choice to conduct their lives online was proof positive that privacy norms were relaxing, that citizens cared less than they once did about releasing personal details out into the world.8 They, along with retailers and marketers, also worked to adjust those norms, “teaching people what they have to give up in order to get along in the twenty-first century.” The most important of these lessons, one scholar suspected, involved “opening spigots to their personal information.”9 Less interested parties found plenty of anxiety lurking in Americans’ attitudes toward the security of their “private” information, not to mention their status as the raw material awaiting commodification in an emerging “political economy of informational capitalism.”10 According to a 2014 study, citizens were well aware of their lack of control over commercial and governmental uses of personal data—with Social Security numbers considered the single most sensitive item.11 Law and technology scholars turned to the distinctly unscholarly word “creepy” to capture instances of the mismatch between older norms around privacy and newfound capacities to scour through others’ lives with the click of a button. Practices labeled “creepy”—ambient social apps, personalized analytics, and data-driven marketing—noted two scholars, “rarely breach any of the recognized principles of privacy and data protection law. They include activity that is not exactly harmful, does not circumvent privacy settings, and does not technically exceed the purposes for which data were collected.”12 And yet such practices jostled against respect for individual privacy, as well as the civility that depended on not knowing everything you could about another.13

The picture is complicated by the fact that Americans’ desire for privacy today is seemingly matched only by their quest for self-disclosure. Dave Eggers’s dystopian novel of 2013 slyly probed the dilemma through the workings of a powerful Internet corporation, “The Circle,” and its quest to render people fully transparent to themselves and others. Its slogans, an homage to Orwellian double-speak, are embraced by the novel’s heroine, a tech worker. “Secrets are lies,” “sharing is caring,” and “privacy is theft,” she earnestly proclaims. In protest, another character is finally driven to scrawl (on paper) a list of “The Rights of Humans in a Digital Age,” including the following: “We must all have the right to anonymity”; “Not every human activity can be measured”; “The ceaseless pursuit of data to quantify the value of any endeavor is catastrophic to true understanding”; “The barrier between public and private must remain unbreachable”; and finally, “We must all have the right to disappear.”14 But, the reader had to ask, how could vanishing be the goal, when Americans devoted so much time to publicizing their daily lives in the form of photos and videos, blogs and tweets?

Since the 1990s, commentators had in large numbers worried about Americans trading away privacy of their own accord, their desire for recognition or attention besting the right to be let alone. In the Age of Confession 2.0, these new norms of disclosure encountered social networking. Those seeking to know citizens in finer detail benefited immensely from people’s desire for platforms by which to share their personal histories, habits, preferences, and movements. The humor magazine The Onion skewered the impulse, outing the social media leader Facebook as a brilliant CIA operation: a mother lode for government spies, offering up caches of free data on every U.S. citizen, voluntarily divulged and conveniently uploaded for viewing.15 One technology scholar provides a snapshot of this ever-growing data bank. During every minute of 2012, she writes, “204,166,667 emails were sent, over 2,000,000 queries were received by Google, 684,478 pieces of content were shared on Facebook, 200,000 tweets were sent, 3,125 new photos were added to Flickr, 2,083 check-ins occurred on Foursquare, 270,000 words were written on Blogger, and 571 websites were created.”16

Right now, the collision (or collusion?) between the outflow of personal information and the technological capacity to capture, analyze, and harness it looks like the defining feature of the twenty-first-century privacy landscape. The struggle for control over how one would be known—what of oneself would be revealed and what should be concealed—was an old dilemma. But individuals’ ability to exercise some determination over their own public-private boundaries seemed to be receding in this new context. People were being made “borderless,” speculated a sociologist, turned “inside out,” a product both of the “piercing abilities of the new surveillance” and the increasingly valuable returns from knowledge.17 Former U.S. government contractor Edward Snowden’s exposés of sweeping National Security Agency surveillance on American citizens in the summer of 2013 illuminated the scope of the problem. A vast apparatus of electronic surveillance came to light, including the NSA’s immense database of mobile phone location data, its ability to crack the encryption methods used by individuals and organizations to protect their email and e-commerce communications, its ability to install malicious software (“malware”) in millions of computers around the world, its collection of telephone content and metadata from every call made in target countries, and its collaboration with major tech companies such as Apple, Google, and Microsoft to evade privacy controls.18

Spectacular as Snowden’s leaks were, they came on the heels of other revelations. There was the federal government’s post-9/11 Total Information Awareness program, which pledged to “collect it all,” “process it all,” “exploit it all,” “partner it all,” “sniff it all,” and, finally, “know it all.”19 There were CCTV cameras watching every corner of America’s cities; by 2009, an estimated 30 million of them were recording 4 billion hours of footage a week.20 There was the monitoring of users’ photographs and posts by the company Facebook, described as early as 2007 as “one of America’s largest electronic surveillance systems,” tracking “roughly 9 million Americans, broadcasting their photographs and personal information on the Internet”—that number skyrocketing to 800 million users worldwide just a half-decade later.21 There were 8.3 million victims of identity theft in 2005 alone.22 There were reports of massive data breaches and widely publicized search tracking by giant corporations like Google and Amazon.23 There was dawning awareness of the tremendous power over individuals’ futures and fortunes exercised by data aggregators such as Acxiom, ChoicePoint, Experian, and Equifax—billed as the “little-known overlords of the surveillance society” by two scholars—as well as the black market for sensitive information such as credit card and Social Security numbers on the “darknet.”24

If self-broadcasting ever seemed like a way out of the record prison, a means for asserting control over one’s own image and information, that hope dimmed considerably in the new century. Current privacy debates in the United States are shadowed by the knowledge that social media, the Internet of Things, wearable technologies, cloud computing, spyware, phone record metadata, radio frequency identification (RFID) systems, drones, electronic monitoring software, biometric readers, and predictive algorithms—key words of our time—intersect fatefully with the twin imperatives of corporate profit and national security.25 Most citizens are aware that they can be tracked by their “electronic footprint” as well as their DNA, by phrases in their email correspondence as well as purchases on their credit cards, by their Social Security number as well as their GPS coordinates. But few can discern precisely when or how they are known or to what end. The phrases coined to capture the traces we leave behind in a networked age—“digital dossier,” “data exhaust,” “data shadows”—highlight their inadvertent nature.26

As in the nineteenth century, when wiretapping and fingerprinting were new, technological advances today furnish the most visible platform for privacy fears. Innovations that were the stuff of science fiction for past Americans—facial recognition programs and remote desktop viewing, a cashless society and fully personalized advertising—have a not-at-all-fictional potency in the present.27 More than any one particular invention or device, it is the arrival of “big data” that has crystallized today’s debate. The term refers both to the “exponential increase and availability of data” in an age when large volumes of information stream rapidly from electronic transactions, social media, audio and video files, sensors, and “smart metering” and to the capacity of powerful computers to sort and dissect it.28 Not just the amount but the kind of information collected has changed. New data mining technologies, writes sociologist Gary Marx, hold “the potential to reveal and analyze the unseen, unknown, forgotten, withheld, and unconnected,” able to “surface bits of reality that were previously hidden, or did not contain informational clues.”29 Once scattered and undecipherable, data on individual purchases, searches, and communications can now be accessed and digested, conferring on state and commercial actors potent powers of divination and prediction.30

The prospect of being known in this fashion, we will not be surprised, carries promise as well as peril for individual citizens. Big data’s value for epidemiologists as well as marketers is that they might see social life freshly in the patterns derived from vast stores of medical, financial, genetic, and location information.31 That same clarity and precision, employed differently, can do real harm to specific people.32 As one reporter pointed out by way of example, subjects participating in a genomic study might “help advance science” only to “find themselves unable to obtain life insurance.”33 The specter of an information net so vast, and yet nimble enough to pinpoint an individual person, threatens to undo common conceptions of privacy defined as control over one’s accessibility to others.34 In an era of algorithmic knowledge, this sort of privacy, many fear, will be ever scarcer.

Already, being watched feels different in the twenty-first century.35 A recent study found that the nation’s top fifty websites installed an average of sixty-four pieces of tracking technology, allowing them invisibly to “scan in real time” their users and to “access location information, income, shopping interests, and health concerns.”36 Through their browsing, their buying, and their posts, individuals are profiled: pegged with psychological or medical ailments, identified as sexually assaulted, characterized as impulse buyers, or allotted “pregnancy prediction scores.” Indeed, one of the viral stories of 2012 concerned a corporation (Target) that “knew” a woman was pregnant ahead of her intimates based only on its compilation of her online search patterns.37 Recommendation software and canny computer algorithms, able to select for you what you had not even realized that you wanted, have posed the question of the known citizen anew.

Drones, Fitbits, and smart refrigerators alike make clear that who or what is capable of “knowing” individual citizens is itself shifting. Mundane objects of everyday life have become monitoring instruments and often reporting instruments as well—mobile phones, of course, but also cars, credit cards, televisions, household appliances, thermostats, wristwatches, and eyeglasses. One report found that 20 percent of U.S. residents owned a wearable device in 2014, not including smartphones.38 These devices are prized by sellers and marketers because their close proximity to the body means that people are less likely to be without them, “the intimate, always-connected nature of the wearable device” facilitating “continuous tracking across time and space.”39 Surveillance scholars John Gilliom and Torin Monahan point out that Americans would never agree to a government program requiring that they carry a device providing live-streamed data on their physical location, communications, and personal interactions, archiving it all in data banks—and permitting, when deemed necessary, the monitoring of specific conversations and messages. Yet this is precisely what the nearly universal use of cell phones—now owned by upward of 90 percent of Americans—allows. Like credit cards, mobile phones are a surveillance technology “gladly, even fervently, adopted” by consumers, who more and more are “enmeshed in surveillant relationships just by moving through the world, even without an explicit gaze from above.”40

Privacy sensibilities and rights have themselves created incentives for unobtrusive forms of surveillance—an echo of debates that opened up in the 1960s about how the “right to research” might coexist with the rights of the research subject. Today, those who wish to know “gather what is voluntarily radiated, unwittingly left behind, or silently and effortlessly made available by breaking borders that traditionally protected information.”41 Full-body scanners caused a public ruckus when introduced at airports in 2010, Gilliom and Monahan suggest, mostly because they were so obvious at a time when so much tracking is inconspicuous and easily missed.42 The fact that nonhuman observers are doing much of the watching paradoxically accounts for the surprisingly intimate, integrated feel of today’s privacy invasions. As a recent study of “beacon surveillance” trained on retail shoppers has it, it is as if “the aisles have eyes.”43 Individuals abet their own observation, if often unwittingly. Even minor actions they take—a click or a keystroke—put them at risk of revealing themselves to an expanding and unknowable number of parties. The reason they go ahead, of course, is that these actions also bring all kinds of social goods, ranging from tailored recommendations to breaking news.44

The fact that personal information divulged for social connection or entertainment could as easily be put to use for commercial or political surveillance significantly eroded the distinction between different kinds of authorities in U.S. society: agents of the state and big business in particular.45 The collaboration between the National Security Agency, law enforcement, large corporations, and data aggregators in the wake of the September 11, 2001, terrorist attack exposed the wide-open information channel between the public and private sector.46 It was true that from the 1960s onward, Americans had often not distinguished between state and commercial data banks. And stretching back into the nineteenth century, the U.S. government had worked in tandem with private companies in matters of policing and national security—to intercept telegrams, for example. But these once fairly distinct realms seemed to be blurring further, even merging.47 The implication was that autonomous decision making would be less and less available to the twenty-first-century data subject.

Today’s particular information constellation, where profit, security, and self-definition mingle indiscriminately, is often hailed as unparalleled. But we gain more clarity by placing its development in a longer history of knowing citizens. A sense that Americans are known too well by government and corporate entities alike is not unprecedented. It was a discovery of the 1890s and then again of the 1930s and the 1960s. What gives the current moment its special urgency is a uniquely combustible combination: a deluge of volunteered or solicited personal information, on the one hand, and the increasingly sophisticated capacities of other parties for linking, sharing, and acting on it, on the other.

The problem of secret knowledge about citizens—of not knowing who knows you—was a fear sparked in the 1970s by silent record-keeping systems and hidden gatekeepers. The arrival of big data has given it a new and uneasy shape in the twenty-first century. Today’s analog is the algorithm: the set of rules, often generated by machine learning, whereby individuals are systematically ranked and rated by a host of commercial, financial, and government agencies. Citizens’ fates are being shaped by proprietary formulas that they could not decipher even if they had access to them.48 The coupling of “increasingly enigmatic technologies” and their own carefully protected opacity means that “corporate actors have unprecedented knowledge of the minutiae of our daily lives,” writes legal scholar Frank Pasquale, “while we know little to nothing about how they use this knowledge to influence the important decisions that we—and they—make.” This fundamental asymmetry between the knowers and the known makes the contemporary world less a “peaceable kingdom of private walled gardens,” he argues, than “a one-way mirror.”49

In response, legal scholars and technologists have turned their attention to algorithmic discrimination, including predictive harms. Some have proposed a novel sort of privacy right for the twenty-first century—a “right to quantitative privacy”—buttressed by codes of “big data ethics” and “fair reputation reporting.”50 The newfound volume and value of raw personal data, suggests one computer scientist, will require a whole new bundle of individual entitlements, including rights to access one’s own data, to inspect data companies, and to amend, blur, experiment with, or port one’s own data to other holders. Only then might “digital citizens” regain some semblance of autonomy and take charge of their own information in a “post-privacy economy.”51 In these renewed calls for access and transparency as a check on powerful institutions that control citizens’ data, a legacy of the data bank debate of decades past, we spy the persistent appeal of civil liberties solutions. And yet, the entangled nature of data and disclosure with new technologies of divination places both the efficacy of transparency and the hope for personal autonomy in serious doubt.

The Quantified Self movement—also referred to as lifelogging, personal informatics, and personal analytics—is emblematic. Founded by two former editors of Wired magazine in 2007, it brought together enthusiasts around the project of harnessing “self-knowledge through numbers.” Whether to track and optimize personal health, fitness, emotional well-being, or productivity, lifelogging is intended as a form of self-supervision, offering legibility and insight to the person who chooses it.52 As an industry was built on the promise of “technologically assisted self-regulation,” however, new notes of control have crept in.53 Apps and devices are now available to help consumers stop smoking, track their fertility, curb their appetite, get more exercise or sleep, and monitor their blood pressure, stress, and hydration levels—indeed, to know themselves better. But the potential for others to extract information from this rich trove of detail about a consumer’s pulse, REM sleep, skin temperature, and mood is ever clearer.54

Although billed as a voluntary and self-motivated practice, self-tracking has thus presented opportunities for others to follow along. In some contexts, explains sociologist Deborah Lupton, this kind of monitoring is “being encouraged, or even enforced on people.”55 Insurance companies, workplaces, and schools—sites that have always had plenty of incentives to know individuals better—have all grasped at the new devices and the information they provide, whether to track students’ physical movements or employees’ adherence to corporate wellness policies. In other contexts, the data streaming from tracking devices are monitored silently by the developers of the software they use, by third-party purchasers, data-mining companies, or government agencies.56 As these practices take hold, the “quantified self” can become an extremely well-known citizen. As Lupton sees it, self-trackers through their own devices may come to resemble the involuntarily monitored: those on probation, on parole, or serving at-home sentences.57

Indeed, some wondered if the main thrust of new tracking and surveillance practices, imbued with a participatory ethos and full of advertised rewards for the user—convenience, efficiency, health, and happiness—was softening the very edges of what was once named and recognized as a privacy invasion.58 Was the real goal cultivating pro-surveillance dispositions in the population at large? And was it working? Lupton observes that practices “once considered coercive and imposed forms of state surveillance, such as biometric facial recognition for security purposes, are now routinely used in social media sites such as Facebook for the purposes of tagging others in images.”59 Others point to the fact that fingerprints, once bitterly resisted as a technique for tracking criminals, now regularly open a phone or laptop. Indeed, a host of technologies, not just facial recognition but also retina scanning, voice spectrometry, and DNA typing, have migrated from criminal justice into the society at large in recent decades, serving as convenient forms of identification or security.60

Finally, in a variant of worries that first bloomed in the postwar era about society infiltrating individual psyches via brainwashing and subliminal advertisements, commentators worried in the early twenty-first century about the influence that came with new commercial tools for shaping and “nudging” users’ behaviors.61 It was not just that an array of companies sought to burrow into the “lives and psyches” of their current and prospective customers in order to better tailor their pitches. It was not even that such firms further “sought to draw psychological and behavioral lessons from the enormous amounts of data” they collected on a daily basis.62 It was that, as in the 1950s, these external agents seemed to be getting inside people in new ways. Was feeding people information as a means of persuasion also, perhaps, a form of domination? Did an app that cued a user to resist her urge to smoke or consume calories or that encouraged a man idle at his desk to stand up harbor less beneficent possibilities of social control going forward? Theorists of “libertarian paternalism” preached the benefits of behavioral coaxing, enthusiastic about the possibility of channeling consumers’ decisions through a properly designed “choice architecture.” As they did, the security and integrity of Americans’ mental states came back into view. Visibility, choice, consent, freedom, autonomy: these were the stakes once again in the early twenty-first century as citizens reckoned with their knowing society.

Privacy’s philosophers of the present, contemporary counterparts of Louis Brandeis and Samuel Warren, have attempted to grasp the implications of this dizzying array of new practices, whether stemming from corporations, official agencies, or citizens’ own desires. Some, noting the extensive, continuous, and distributed nature of watching people, speak of “liquid surveillance.”63 Others, focused on the way that law enforcement increasingly operates outside the bounds of “individualized suspicion,” sweeping up massive crowds through drone surveillance or NSA metadata, write of “panvasive surveillance.”64 Indeed, a whole new scholarly field, “surveillance studies,” came into its own at the turn of the twenty-first century, complete with a professional society and journal. Scholars devoted themselves to apprehending the web of watchers employed at company headquarters, airports, tollbooths, borders, traffic intersections, and schools—as well as in individual minds and homes as it became second nature to monitor oneself.65 The notion of a “surveillant assemblage” that works by “abstracting human bodies from their territorial settings, and separating them into a series of discrete flows,” is so far one of the field’s most influential formulations.66

Many grappled with the interlocking nature of the parties who made it their project to know individual citizens so thoroughly. As one commentator had it, no piece of information was irrelevant or insignificant to the new surveillors, who pursued “every little desire, every preference, every want, and all the complexity of the self, social relations, political beliefs and ambitions, psychological well-being,” such that their probes extended “into every crevice and every dimension of everyday living of every single one of us in our individuality.”67 It was difficult to characterize what was propelling this surveillance complex: was it the state, the private sector, or was it citizens themselves? “Knots of statelike power” is how this last writer describes it, “where economy, society, and private life melt into a giant data market for everyone to trade, mine, analyze, and target.”68 Others have traced the emergence of a “networked self” in an age of ubiquitous connectivity, a successor to the data subject or “dossier personality” of the last century.69

Updating earlier worries about being imprisoned by a photograph or a file, new critics have asked whether a digital-age citizen has a right to be forgotten.70 The progress of that right in the European Union—in the form of requirements that search engines edit or delete offending information—has led to much speculation (and envy) in the United States as to what might be done about what one scholar calls “the threat of digital memory.”71 The fear of intimate, embarrassing, or shameful images and facts making their way to a broader public is of course as old as gossip. The modern framing of a “right to privacy” in the late nineteenth century derived much of its urgency from the way new media and technologies threatened reputation. The rapid spread of information today and its easy accessibility have raised the stakes, in the form of cyberbullying and “revenge porn” but also lost jobs and school disciplinary proceedings.72 “In a connected world,” as one scholar puts it, “a life can be ruined in a matter of minutes.”73 Extensive lawsuits waged to get one’s life back in the form of a damaging photograph, video, or story both recall the past—Abigail Roberson’s suit at the turn of the twentieth century against the flour company that borrowed her image—and remind us that contemporary Americans, no matter what critics say, are only sometimes “voluntarily” giving their privacy away.74

In the face of individuals’ rush to divulge personal information and authorities’ ability to make sense of it, the descriptive power of Jeremy Bentham’s panopticon or George Orwell’s Big Brother—for decades, the go-to metaphors for surveillance—suddenly seemed inadequate. Twenty-first-century conditions could not be likened to “a prisonlike panopticon where trapped people follow the rules because they’re afraid someone is watching,” mused two scholars. To begin with, the watchers were far more multifarious and more anonymous—indeed, unknown—than earlier theorists of surveillance imagined. Moreover, citing the hundreds of millions of users of Facebook and other social media sites, some suspected that Americans’ larger fear was in fact that “no one is watching.”75

As for Orwell, some alleged that he had misdiagnosed the future by not accounting for individual desire: the entertainment and pleasure that came from being enmeshed in a data set far deeper and richer than anyone in the postwar decades could have dreamed up. Moreover, he had underestimated citizens’ embrace of the convenience and comfort of being known. The author of Nineteen Eighty-Four in this analysis did not anticipate how modern supervision would be made palatable to its subjects, allowing “softer, more manipulative, engineered, connected and embedded” forms of suasion to enter into daily life. Focused on authoritarian states in 1949, Orwell had also neglected the threats to personal autonomy from the private sector.76 Helping to make the point, a reality television show titled Big Brother was an international hit of the late 1990s. Contestants on the program were confined to a house in which they were subject to an omnipresent authority figure known to them only as “Big Brother” and willingly had their every action recorded by cameras and microphones.77 The Loud family had subjected their domestic life to television cameras in the early 1970s, but the nod here to totalitarianism as theater—and the embrace of total surveillance by producers and participants alike—was a new twist. In fact, Big Brother had been conceived as “a social and psychological experiment” to study the ways individuals coped with surveillance. But when it appeared that participants coped rather too well, this ambition was sidelined in the interest of pure entertainment.78

The unanticipated meeting of confession and surveillance in the last decade has led to some of the most robust arguments over and defenses of personal privacy since the 1970s. But in all the contemporary handwringing over lost privacy, remedies seem harder to come by. This too is an important feature of the present moment. The Internet age, like the computer age it followed, “would bring something of a privacy storm,” writes one observer; but “no legal hurdles, in principle or practice, were put in place to slow these changes down.”79 In fact, the Federal Trade Commission, the Federal Communications Commission, Congress, and the White House continued to churn out proposals for protecting Americans’ privacy. In the decades since the Privacy Act of 1974, several key pieces of federal legislation responded to specific issues. The Video Privacy Protection Act of 1988 was a direct counter to reporters hunting through Supreme Court nominee Robert Bork’s video rental record. The Driver’s Privacy Protection Act of 1994, which ended the kind of reverse searching that Laud Humphreys had used to identify tearoom users, was a response to abortion clinic tracking.80 The Health Insurance Portability and Accountability Act of 1996 registered growing concerns about the privacy of medical records as they took electronic form. The Children’s Online Privacy Protection Act of 1998, a reckoning with commercial data mining in the case of the most vulnerable citizens, made it illegal to gather personal information on or track children under the age of 13 without parental permission. Too, watchdog agencies like the Electronic Frontier Foundation, the Electronic Privacy Information Center, the Privacy Rights Clearinghouse, and the American Civil Liberties Union continue to advocate for individual civil liberties as insistently as digital technologies threaten to override them.

And yet few seem to turn with great expectation to the state or regulatory agencies to curb the tide, perhaps thinking it naïve, given past history, to hope that a transformative court ruling or legislative act could alter our present course.81 The central state’s own changing aspect—long in the making—from relatively beneficent bureaucracy to menacing invader has something to do with this. Another part of the explanation is that the answers of the past appear outmoded, too flimsy to staunch the algorithmic power of commercial and state knowers. Privacy policies that in theory allow people to “opt out” have come in for particular disdain as a “failed disclosure regime,” revolving around “the fiction that consumers can and will bargain for privacy, or ‘opt out’ of deals or jobs they deem too privacy invasive.” The terms of service that customers sign off on, argues Frank Pasquale, are less “privacy policies” than “contracts surrendering your rights to the owner of the service.”82 What this means functionally is that citizens of the surveillance society are on their own. “If you believe that your privacy is being protected by laws and user agreements,” write John Gilliom and Torin Monahan, “think again.” Rapid technological advances and their implementation by corporations, law enforcement, and the military seem to empty privacy rights of their substance. Given the choice of “scrutinizing user agreements and privacy policies, writing our congressional representative, or learning how to use anonymizing software,” they acknowledge, “we’d both pick the software. Hands down.”83

A final piece of the predicament, once again, is that citizens seem so readily to succumb to the allure of being known. Some contend that with surveillance becoming a comprehensive, and even welcome, mode of social organization, the concept of privacy has outlasted its usefulness—“too limiting and dated” a notion to speak to the present.84 As Andreas Weigend, former chief scientist at Amazon and founder of the Social Data Lab, puts the same sentiment, “Privacy is a concept that was only rather recently enabled by technology, and the concept may not be up to the task of protecting us in the age of social data.” He counsels Americans to cast aside their fears of a knowing society and instead embrace the “value we get from sharing data about ourselves,” its manifold “opportunities for discovery and optimization.” Weigend looks forward to the day that social data “help us make some of the biggest decisions in life, including who we pick as a romantic partner, where and how we work, what medications we take, and how and what we study.” It is time to give up on the illusion of privacy: “We shouldn’t be fighting for privacy simply because it was a pretty good response to people’s problems a hundred years ago.”85

And yet: new tactics that have appeared on the horizon, outside the scope of legislatures and courts, suggest that many have not yet given up on fighting for privacy. A revised self-help literature, for example, emerged in the new century, offering a set of tools for navigating an all-too-knowing society. Not simply exposés of the parties infringing on Americans’ privacy, these were guides for doing an end-run around them. While some counseled using paper shredders and paying only in cash, others turned to freelance sites, new encryption devices, proxy servers, secure hardware and software, and even secure chat programs and messaging systems. The Electronic Frontier Foundation maintains an advice website offering “Surveillance Self-Defense,” and some schools have begun to teach “cyberhygiene”; businesses in droves hire “privacy professionals” to manage the escalating threats to their customers’ data.86 Popular privacy manuals have also proliferated, with titles such as You’re at Risk: A Complete Guide for You and Your Family to Stay Safe Online; The Smart Girl’s Guide to Privacy; Life under Surveillance: A Field Guide; and Hack-Proof Your Life Now! How to Protect (or Destroy) Your Reputation Online. New experts on how to move through American society undercover, in books such as How to Disappear, The Incognito Toolkit, and The Art of Invisibility, preach a gospel of hiding. “Do not, as long as you live, ever again allow your real name to be coupled with your home address,” is step one, according to the author of How to Be Invisible: Protect Your Home, Your Children, Your Assets, and Your Life, currently in its third edition.87

As separate surveillance systems are joined, leading to the “progressive ‘disappearance of disappearance,’ ” the state of being unknown or unrecognized seems to be rising in value.88 The most promising contemporary avenue for achieving something like an “inviolate” state or space, it seems, may come not in the form of restraining those who seek to know but in carefully designed practices for hiding one’s tracks. Guarding one’s passwords and personal data, being savvy about how and where one goes online, and opting out of data-sharing platforms are today’s words to the wise. Technologies like blind signatures, anonymous remailers, and encryption software promise that one might reclaim one’s life by re-anonymizing it.89 Rooted in private and individual solutions rather than state regulation, this kind of counsel is the dominant advice that privacy-conscious citizens encounter today despite the looming threat presented by sophisticated re-identification techniques.

Other solutions to the knowingness of American society come in evading the electronic gaze, via dark routes on Google Maps, for example, or apps that locate CCTV cameras and reroute the walker so as to avoid them.90 A New York theater company in that spirit leads tours of surveillance systems in the city.91 A 2016 book, Obfuscation: A User’s Guide for Privacy and Protest, by well-respected scholars, offers a more confrontational approach.92 The book announces that the time has come to “fight today’s digital surveillance,” and its authors urge “the deliberate use of ambiguous, confusing, or misleading information” to interfere with data collection projects. Evasion, noncompliance, refusal, and even sabotage are the weapons with which “average users” not able to opt out or otherwise exert control over data about themselves might yet defend themselves. Still others have called for “obscurity by design,” enabling individuals to hide data that are technically public through techniques such as reduced search visibility, access controls, pseudonymous profiles, and the blurring of observed information.93

Spying opportunities in an emerging market, a large and increasingly profitable privacy industry seeks to fill the breach, peddling spyware, privacy-enhancing or anonymizing technologies (PETs), certified services for identity protection, online reputation management, and digital “vaults” complete with insurance for valuable data. Consumer demand for techno-precautions is large and growing. It is a development that may challenge information sharing between corporations and the state, and has already led to standoffs. In “a step opposed by intelligence officials,” major corporations like Apple and Google, beginning in 2014 and 2015, offered encryption of consumer data as a new privacy feature for their operating systems.94 The flip side of such products is market-based solutions that might enable savvier practices of evaluating the “return on data” one gets by giving up one’s personal details.95 Those exploring this option have proposed a “privacy-preserving marketplace” that could compensate people according to the level of risk they take in disclosing personal information.96 Others in this vein floated a system of micro-payments for the use of one’s data, a sign that propertied notions of privacy are still alive and well in a virtual age.97 It is as yet unclear how far citizens will privatize their efforts to secure privacy or place their trust either in technology or the market, the very sites where modern privacy alarms were first sounded.

Most significantly, the double-edged nature of digital surveillance has introduced new prospects for holding authorities to account—and potentially far more powerful species of transparency than existed previously. “Ever since the advent of print, political rulers have found it impossible to control completely the new kind of visibility made possible by the media and to shape it entirely to their liking,” writes a cultural theorist. “Now, with the rise of the Internet and other digital technologies, it is more difficult than ever.”98 Some launched projects of reverse surveillance with the aim of making the government—and government data—more accessible to citizens. Websites such as GovTrack.us, for example, aimed to empower civil society through the “continuous public surveillance of government.”99 Combated by investigative reporters, congressional inquiries, and a few lone whistleblowers in the 1970s, the government’s secrets would in the new century be countered by “doxing.” This term referred to the practice of using public or private records against the powerful in society, or of outing an anonymous person’s or group’s identity, a favored tactic of the hacker collective Anonymous.100 Its advocates believed that the open and connecting architecture of the Internet might serve as a “buffer against the control efforts of the more powerful and can also be turned against them.”101

It was in the conduct of national security that the novel challenges of keeping secrets in a digital world became dramatically evident to the knowers as well as the known. WikiLeaks, an organization that burst onto the scene in 2009, led by Australian activist Julian Assange, took as its mission the release of classified documents—video footage, military files, and diplomatic logs—to expose shadowy government policies.102 The ambition of the organization’s “philosophy of transparence” was to “allow citizens to become the surveillors of the state and see directly into every crevasse and closet of the central watchtower, while rendering the public opaque and anonymous; to invert the line of sight.” Assange himself called for a “worldwide movement of mass leaking.”103 As the co-director of Harvard’s Transparency Policy Project, Mary Graham, has observed, old ways of resolving media-state conflicts over publicity could no longer hold “in a world where insiders can self-publish top-secret information on the Internet with no filters and no advance notice, and can operate beyond U.S. borders—and beyond U.S. laws.” She referred to the army’s Bradley/Chelsea Manning, who in 2010 copied more than 250,000 State Department cables, airstrike videos, and soldiers’ reports from Iraq and Afghanistan and turned them over to WikiLeaks.104

Possibilities for a new privacy politics were built into the very infrastructure of a surveillance society. The variety of inexpensive technologies that equipped ordinary citizens with the ability to do their own watching and recording seemed especially promising in this regard. The term “sousveillance” was coined to describe the ways in which individuals could turn observation techniques back on authorities. Some called for the deliberate use of such powers to forge a “coveillant” society, where the ability to watch might be symmetrical. Might citizens have a “right to record,” for example?105 The use of mobile phones and body cams by the Black Lives Matter movement to document police brutality toward African Americans showed that the relationship between observer and observed could be reversed, at least temporarily.106 Doxing and sousveillance alike took advantage of the double edge of a knowing society, and they suggested the new kinds of activism it inspired. As the gay advocates of “outing” in the 1990s understood, to know and to reveal could be potent tools for social change. The hierarchy between police officer and policed, watcher and watched, powerful and powerless might be disrupted through the targeted use of a knowing society’s tools: publicity, identification, documentation, transparency, recording, and outing.

These instruments of exposure carried another kind of power: the potential to bring visibility to the persistently unequal experiences of and entitlements to privacy in the contemporary United States. Scholars of race and racism in particular argued that “there is no such thing as a private sphere for people of Color except that which they manage to create and protect in an otherwise hostile environment,” and indeed that the monitoring of African Americans from slavery forward had provided the template for a modern surveillance society.107 Activists’ work to publicize the policing of specific bodies in the second decade of the twenty-first century—whether immigrant female, non-white, or transgender—helped to cast new light on the uneven slant of American privacy policies and debates. In particular, it highlighted the disproportionate attention given to the privacy violations affecting better-off, normative, and usually white citizens.108 If the trend line in the twentieth century had seemed to point away from physical violations of privacy and toward “informational” ones, it betrayed that pundits and politicians had prioritized some Americans’ privacy at the expense of others.109 Activists protesting the carceral state, “bathroom bills,” and tightening restrictions on reproductive rights made clear that digital or data rights—whether the right to transparency or the right to disappear—would not be enough to create meaningful privacy for all citizens.110

Anonymity and inaccessibility, arguably less rich and humane concepts than “privacy,” have taken center stage in American debates about a knowing society in the early decades of the twenty-first century. But is anonymity—or obscurity or ambiguity or blurriness—the same as privacy? It is worth asking whether autonomy in a digital age is or should be equivalent to whatever is left after the data miners are through. This minimalist understanding of what has often been judged a fundamental human value and social good tells us something of the chastened aspirations of today’s known citizens.

We will do well to remember that this debate has been with us a long time now. Indeed, one of the reasons it is so difficult to imagine a resolution to today’s privacy dilemmas is that the invasions many citizens rail against have become foundational to the workings of U.S. society. The unfettered media that outraged Samuel Warren and Louis Brandeis, along with Henry James; the identification infrastructure wrought by criminal policing and state benefit programs alike; the disciplines, experts, and corporations invested in demystifying human behavior; the continuously aggregating bureaucratic data banks for improved governance and selling; and, not least, the voluminous information voluntarily divulged in the public sphere together form the warp and the woof of contemporary social existence. As American society became more knowing across the last century and a half, the gain often seemed self-evident. Reporting the news, protecting national security, tracking public health and social welfare, understanding human psychology, improving commercial efficiency, fostering political transparency and accountability, and announcing individual truths all appeared compelling, even necessary, rationales for infringing on the “inviolate personality.” Americans may not have set out to create the kind of political culture that they now inhabit—or wished for the kind of privacy now on offer—but neither have these developments been unintentional.

This book has traced Americans’ reluctance but also their desire to be known across a century and more—a history not of a tangible thing but of a tense and ongoing debate. Even a knowing society remains full of uncertainties. One of these is the future of the known citizen. Not one of the privacy issues that Americans wrestled with in the past is settled or solved. Attempts to protect free expression but to rein in overly zealous media; to reap the rewards but not the risks of identification systems; to devise politically feasible laws honoring sexual privacy and reproductive rights; to discover new insights about human behavior and psyches while respecting those same humans’ dignity; to deploy data in ways that enrich rather than damage individual lives; and to balance national security with civil liberties are still with us. What privacy can and will mean going forward will depend on how these debates are waged.

The story of the known citizen is not over. Nor—even in an age of social media, big data, and NSA spying—is privacy.111 The corners of life presumed to be off-limits or beyond scrutiny are, we might guess, simply once again under reconstruction. The controversies that such changes spark alert us to moments of possibility embedded within a shifting social order. Can a known citizen be happy? Is a known citizen free? The fact that we are still asking Auden’s questions, composed just as today’s knowing society was coming into view, tells us that privacy is not yet a concept or a claim that we can do without.