Chapter 7 Beyond the “Creep” Factor
“Queasy,” “Icky,” “Creepy”—it's not unusual for people to use these words when expressing their concerns about companies tracking them online. These labels even reached the halls of the U.S. Senate during a 2010 privacy hearing at which Senator Claire McCaskill (D-Mo.) said she found behavioral targeting troubling. “I understand that advertising supports the Internet, but I am a little spooked out,” McCaskill said. “This is creepy.”1 Joanna O'Connell of the consulting firm Forrester Research spoke similarly in a National Public Radio interview that discussed marketers’ tracking of consumers. Describing marketers’ attempts to gauge the point at which consumers’ negative reactions set in, O'Connell said, “There's sort of the human element, the sort of ick factor. And marketers are aware of that. Depending on the marketer, there are some that are very reticent about using certain types of targeting.”2
When lawmakers and analysts confront an issue by invoking an “ick” or “creep” factor as a reason for their distaste, society has a problem. Executives in the new advertising system counter that public and private officials’ use of such terms merely demonstrates their lack of understanding of audience tracking and labeling, and in response they have adopted the position that the issue is basically psychological. The problem, they say, is rooted in consumers’ negative emotional reactions rather than in any widespread or genuine threats to society or its members. Indeed, supporters of the emerging advertising world argue that the real threats have already been tackled. They contend that concerns about health information and the use and sale of personal financial data are addressed by the Health Insurance Portability & Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act, respectively. Rules limiting the collection of information from children under the age of thirteen are covered by the Children's Online Privacy Protection Act. Identity theft, they note, is clearly illegal. Bad actors using deceptive advertising practices are being warned and even pursued by an increasingly activist Federal Trade Commission as well as by a new Consumer Protection Agency. They say that the rest—the recording of individuals’ everyday actions and attributes for the purpose of selling them products and serving them material they are likely to enjoy—is really quite harmless and may even be useful for the people who are targeted. It's particularly nonthreatening, marketers note, when the individuals are anonymous, as they often are in this process.
At the same time, marketing executives allow that many contemporary audiences dislike the thought of being followed. Because people dislike it and because the marketers who are doing it may be tarred and feathered by public ire, advertisers agree they must be careful. The antidote, they argue, is self-regulation. But a look at what self-regulation means in practice, and what the public gains and doesn't gain by it, shows that we have to move beyond the oft-cited creep factor to discuss what's actually taking place. We have to broaden social discussion, and action, about the meaning of the advertising system's unprecedented tracing and labeling activities.
In a USA Today opinion piece in August 2010, Randall Rothenberg added a twist, conspiracy, to the industry line that the public's negative reactions to being followed and targeted online are misguided. Rothenberg had been an Advertising Age columnist and a strategist at Booz Allen Hamilton consulting; when he wrote the opinion piece he was the president and CEO of the Interactive Advertising Bureau (IAB). Since the mid-1990s the IAB had helped bring order to Web advertising by creating technical standards that made it possible for publishers, media agencies, and technology firms to work together efficiently. Those standards would be irrelevant, Rothenberg implied, if determined attacks on internet-industry marketing activities continued. “A wild debate is on,” he began, “about websites using ‘tracking tools’ to ‘spy’ on American Internet users. Don't fall for it. The controversy is led by activists who want to obstruct essential Internet technologies and return the U.S. to a world of limited consumer choice in news, entertainment, products and services.” Rothenberg stated that the activists “have rebranded as ‘surveillance technology’ various devices—cookies, beacons and IP addresses—that fuel the Internet.” He then asserted that “without them, Web programming and advertising can't make its way to your laptop, phone or PC. At risk are $300 billion in U.S. economic activity and 3.1 million jobs generated by the advertising-supported Internet, according to [an IAB-funded study by] Harvard professors John Deighton and John Quelch.”3
Rothenberg went on to note that “thousands of small retailers and sites” depend on the Web for a living. After giving a few examples of regular folks’ ad-supported sites, he concluded that the tracking activities that sustain them should raise no alarms because anonymity is the rule. “The information they use to deliver content is impersonal. Unlike newspaper and cable-TV subscription data, it doesn't contain your name or address.” Besides, he said, “you already have what you need to control your privacy, by eliminating cookies from your browser. Major websites offer highly visible tools that put consumers in charge of their data.”4
Privacy activists disagreed fiercely with Rothenberg's assertion about consumer power over their data, singling out his claim about cookies as especially disingenuous. Certainly, Web users can eliminate cookies (about a quarter of them say they do so regularly), but marketers keep putting them back. There are ways to block the insertion of browser cookies—Ghostery.com is a site that helps with that—but there is little evidence that a substantial percentage of the internet population does so. Rothenberg was undoubtedly also aware that companies have been addressing threats to the traditional tracking cookie by devising new online ways to maintain the identities of people. One tack is to make a third-party “tracking cookie,” which is typically the kind that browsers erase, look like a “first-party” cookie so that it won't be zapped. Another involves the use of locally shared objects (LSOs), also called Flash cookies, which perform the function of cookies but are harder to erase. Rothenberg must also have been aware that the need for continuous identification would push companies in his industry toward new ways to track people that resist erasure. Two months after his piece appeared, for example, a startup company called BlueCava announced that it had begun supplying original equipment manufacturers with technology that would give a digital device “the ability to identify itself” and that a website could associate with particular information it would store. Online Media Daily reported that the company “has put together a data exchange where businesses can contribute information they know about a device that should make targeting ads more accurate.”5 Also around this time the New York Times reported on a technology company called Ringleader Digital, which had created a product called Media Stamp that, according to critics, could surreptitiously acquire information from a mobile device and assign it a unique ID.6
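The mechanics behind these workarounds are simple to sketch. The following Python simulation (all names and sites are hypothetical) shows why deleting cookies offers only temporary relief: a third-party ad network can link a browser's visits across unrelated sites through its own cookie, and the moment the cookie jar is cleared it simply issues a fresh identifier on the next ad request.

```python
import uuid

# Minimal simulation of third-party cookie tracking (hypothetical names).
# An ad network embedded on many publisher pages sets its own cookie the
# first time a browser loads one of its ads, then reads the cookie back
# on every later ad request -- regardless of which site the ad appears on.

class AdNetwork:
    def __init__(self):
        self.profiles = {}  # uid -> list of (site, page) visits

    def serve_ad(self, cookies, site, page):
        uid = cookies.get("uid")
        if uid is None:                      # first sighting: tag the browser
            uid = uuid.uuid4().hex
            cookies["uid"] = uid             # i.e., Set-Cookie on the ad domain
        self.profiles.setdefault(uid, []).append((site, page))
        return uid

network = AdNetwork()
browser_cookies = {}                         # the browser's cookie jar

# The same browser loads ads on two unrelated sites.
uid1 = network.serve_ad(browser_cookies, "news.example", "/politics")
uid2 = network.serve_ad(browser_cookies, "games.example", "/puzzles")
assert uid1 == uid2                          # visits linked to one profile

# Clearing cookies breaks the link -- until the next ad re-tags the browser.
browser_cookies.clear()
uid3 = network.serve_ad(browser_cookies, "news.example", "/sports")
assert uid3 != uid1                          # a "new" anonymous visitor
```

Flash cookies and device fingerprinting go a step further than this sketch: they let the network restore the *same* identifier even after the cookie jar is emptied.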
Rothenberg was clearly exaggerating about individual controls, but the opinion piece's main purpose was to be a salvo on behalf of what Rothenberg called “the nation's largest media and marketing trade associations” to counter rising ire at the federal and state level about the tracking and targeting of individual consumers. The larger battle took place over a number of years, going into high gear in 2007 when the Federal Trade Commission (FTC) released a report that urged the industry to follow a set of principles for self-regulation regarding online behavioral advertising. Intentionally defining the activity broadly, the FTC said that “behavioral advertising means the tracking of a consumer's activities online—including the searches the consumer has conducted, the web pages visited, and the content viewed—in order to deliver advertising targeted to the individual consumer's interests.”7 “Town hall” meetings and petitions from industry groups as well as activists led the FTC to release a 2009 staff report in which it laid out a suggested regulatory framework that fundamentally supported marketers’ needs. When the dust settled, it was clear the staff had written their document in a way that meshed with the views of industry lobbyists.
The new regulatory framework did respond to concerns that nonbusiness interests raised in town hall meetings. It proposed that firms engaging in tracking and targeting disclose, outside the site's formal privacy policy, the information they gather. The staff report also encouraged firms to give audiences the choice of whether to receive targeted ads. It enjoined firms to inform consumers when privacy policies change, to receive consent to use the old data in new ways, and to make sure the data are secure and not retained indefinitely. It urged that use of so-called sensitive data—data about finance, health, or sexual preferences—be handled with great care to the point that consumers should consent, or affirmatively opt in, to their use. And it accepted privacy advocates’ contentions that, because of sophisticated linking techniques and data accidents, distinguishing between the online collection of personally identifiable information (for example, a person's name, postal address, e-mail address) and information that was supposedly not clearly identifiable—an anonymous person's health condition, for example—made no sense from a privacy standpoint. Firms should treat all data in the same way.
Most prominently, though, the FTC staff accepted that tracking and targeting had become part of the digital landscape, important for present and future business opportunities. What made privacy advocates particularly unhappy was the report's agreement with marketers that they could carry out most data collection on an opt-out basis. That is, an advertiser didn't have to get permission to collect information from individuals except in highly sensitive areas. In fact, in some areas the staff agreed that companies didn't even need to offer an opt-out provision at all. For example, the staff report distinguished between first-party and third-party tracking, concluding that the two types involve different consumer expectations. The former involves a company tracking people only on its site and on other sites with the same brand—for example, Disney.com and Disney.net—while the latter involves a company that follows people across sites and uses the data to send ads to them.
After considering the comments, staff agrees that “first party” behavioral advertising practices are more likely to be consistent with consumer expectations, and less likely to lead to consumer harm, than practices involving the sharing of data with third parties or across multiple websites… . In such case, the tracking of the consumer's online activities in order to deliver a recommendation or advertisement tailored to the consumer's inferred interests involves a single website where the consumer has previously purchased or looked at items. Staff believes that, given the direct relationship between the consumer and the website, the consumer is likely to understand why he has received the targeted recommendation or advertisement and indeed may expect it. The direct relationship also puts the consumer in a better position to raise any concerns he has about the collection and use of his data, exercise any choices offered by the website, or avoid the practice altogether by taking his business elsewhere. By contrast, when behavioral advertising involves the sharing of data with ad networks or other third parties, the consumer may not understand why he has received ads from unknown marketers based on his activities at an assortment of previously visited websites. Moreover, he may not know whom to contact to register his concerns or how to avoid the practice.8
This basic distinction became a key launching pad from which five industry groups—American Association of Advertising Agencies, Association of National Advertisers, Direct Marketing Association, Interactive Advertising Bureau, and Council of Better Business Bureaus—built their self-regulation policy.9 The approach solidified around the use of an “advertising option icon” next to an ad to disclose that behavioral targeting has taken place. The icon would link the site visitor to the kinds of explanations and opt-out activities the FTC report suggested.10 Industry representatives met with FTC staff intensively to make sure the emerging industry approach mapped onto the FTC report and intent.
Guidelines that the five industry groups released in 2009, however, used a narrower definition of the activity than the FTC's initial broad take. Online behavioral advertising, the guidelines said, is “the collection of data online from a particular computer or device regarding Web viewing behaviors over time and across non-affiliate Web sites for the purpose of using such data to predict user preferences or interests to deliver advertising to that computer or device based on the preferences or interests inferred from such Web viewing behaviors.”11 Taking a cue from the FTC staff's sense of consumer expectations, the industry guidelines exclude first parties from the very notion that behavioral advertising is taking place. That means a publisher doesn't have to display the icon if it buys off-line information about its site visitors or if it follows people around on its own site and on “affiliate sites.” Affiliate sites are those the publisher owns or controls, even if the sites’ names are so different that a consumer would be unlikely to discern the connection; ESPN.com and Disney.com are one example of affiliate sites (Disney being the parent company of the cable sports channel ESPN). Therefore, if a publisher exerts management control over the advertising on one hundred sites that have totally different names, it can track people across those domains and never have to show them the icon.
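The practical effect of the affiliate exemption can be sketched in a few lines of Python (the ownership table and function name are hypothetical): under the industry definition, whether the disclosure icon is required turns not on what a consumer would recognize as related sites but on a corporate-ownership lookup.

```python
# Hypothetical sketch of the affiliate loophole in the 2009 industry
# guidelines: the "advertising option icon" is required only when tracking
# crosses sites that are NOT under common ownership or control, no matter
# how unrelated the site names look to a visitor.
OWNERS = {
    "espn.com": "Disney",      # ESPN is owned by Disney ...
    "disney.com": "Disney",    # ... so these two count as "affiliates"
    "news.example": "ExampleCorp",
}

def icon_required(tracked_site, ad_site):
    """True if the disclosure icon must accompany an ad on ad_site
    that was targeted using data collected on tracked_site."""
    owner_a = OWNERS.get(tracked_site)
    owner_b = OWNERS.get(ad_site)
    same_owner = owner_a is not None and owner_a == owner_b
    return not same_owner      # first parties and affiliates are exempt

# Tracking across Disney-owned sites needs no icon, even though a
# consumer has no obvious way to tell the two sites are related.
assert icon_required("espn.com", "disney.com") is False
# Tracking across genuinely unrelated sites does trigger the icon.
assert icon_required("news.example", "disney.com") is True
```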
The icon is supposed to be a company's portal to “clear, meaningful” notice about “data collection and use practices.” In important ways, though, what it leads to is little different from a Web staple that should have helped with such disclosures but didn't: the privacy policy. As Wikipedia notes, a privacy policy “is a legal document that discloses some or all of the ways a party gathers, uses, discloses and manages a customer's data.”12 With the exception of certain information involving health, finances, and children, the United States does not have specific regulations requiring companies to explain themselves when they collect and use data about individuals. Nevertheless, in 1998 the FTC released “Privacy Online: A Report to Congress,” which described what the Commission said were widely accepted “fair information practice principles” that companies ought to follow when collecting personal information from the public. They included: notice about the activities, choice about whether and how the personal information should be used beyond the initial purposes for which it was provided, access to the data to be able to judge its accuracy, and reasonable steps for security—that is, ensuring that the information collected is accurate and protected from unauthorized use. The FTC made clear that although no law mandated the principles and practices, they were norms to guide the drafting of privacy policies online. It also noted that enforcement—the use of a reliable mechanism to provide sanctions for noncompliance—was a critical component of any governmental or self-regulatory program.13
By the turn of the twenty-first century, many critics were already pointing out that privacy policies overwhelmingly failed to follow the FTC's principles fully. Moreover, the legalistic formulations of the policies made them nearly impossible to understand. The FTC, implementing a privacy-policy requirement in the Children's Online Privacy Protection Act of 1998, tried to enforce clarity in those texts.14 It didn't help; a systematic content analysis of ninety children's websites that I led in 2000 found major problems with their “completeness and complexity.”15 While no similar analysis of children's sites exists today, it is apparent nevertheless that complexity and incomplete adherence to the fair-information principles are hallmarks of websites in general. Just as important from the standpoint of advertiser power is a more subtle phenomenon: even if you can get through a site's privacy policy, you will find little that is direct and explicit about what advertisers do on the site. Put another way, part of the power advertisers hold over websites is manifested in the sites’ behind-the-scenes responsiveness to advertisers.
Consider Google's privacy policy, which is obscure and scattered around its website. It's no easy task to find out what information Google makes available to its advertisers about people who visit Google Display Network sites. The policy's main theme appears to be that Google protects “personal information,” and the document avoids any central focus on advertising. For one thing, Google's privacy policy doesn't define the term directly. Instead, a link brings you to a separate page that describes it as “information that you provide to us which personally identifies you, such as your name, e-mail address or billing information, or other data which can be reasonably linked to such information by Google.” The phrasing is ambiguous: does this mean that Google considers a detail to be personal information only if “you provide it to us”? If a marketer provides the same information, is it then not considered “personal”?
The privacy policy also simply states that cookies are used “to improve ad selection” but reveals nothing about their role in helping the firm create data for interest-based advertising. Much later in the policy a statement notes that “Google uses the DoubleClick advertising cookie on AdSense partner sites and certain Google services to help advertisers and publishers serve and manage ads across the web.” But it does not offer any explanation of DoubleClick, AdSense, partner sites, or certain Google services. This section of the policy does go on to explain how site visitors can view, edit, and manage ad preferences through an “Ad Preferences Manager” and select a “DoubleClick opt-out cookie.” Here, too, the policy includes no direct explanation of the preference manager and opt-out cookie. Instead, hot links associated with those terms can start you on a journey across Google pages in hopes of learning about the terms and the activities connected to them. After wading through the privacy policy an uninformed visitor (or even a viewer of Google's privacy videos) may well come away believing that Google doesn't help advertisers learn about them or tailor ads to them. As we saw in Chapter 5, Google executives are telling advertisers just the opposite.
But Google is by no means alone when it comes to obscuring the tracking and data-sharing activities it carries out with marketers. In fact, a mark of digital media outlets’ sensitivity to advertisers’ demands is a willingness to cloak the data marketers share, use, and trade across websites. At Pogo.com, the hugely popular casual gaming site owned by Electronic Arts, the welcome paragraph on the Pogo home page states that “Pogo is a great place to play free online games,” and it encourages players to “earn tokens, enter for chances to win prizes, create your own personalized avatar and chat with other people while you play free online games.” Those who click on the “Get Started!” button are taken to a “Create Your FREE Account” form. To allay any nervousness about filling in your e-mail address, a big sign on the right side of the form states, “We Protect Your Privacy,” with the Electronic Arts logo positioned underneath. That promise may be enough on its own to keep you from feeling any need to read the privacy policy, but if you do choose to click on it, you'll see another comforting message in bold right at the top: “EA respects your privacy and understands the importance of protecting your personal information. We will only collect information we need to fulfill your requests and our legitimate business objectives. We will never send you marketing communications without your consent, and we will never share your personal information with third parties that are not bound by our privacy policy unless you tell us we can.”16
A close reading of these assurances suggests the opposite of privacy protection. The paragraph is filled with deceptive ambiguities, as any reader with the stamina to wade through the 5,136-word privacy policy, and the knowledge to understand its technical terms, would discover. A large part of the reason for the evasiveness is EA's desire to satisfy advertisers without calling public attention to it. The privacy policy makes clear that the company collects both “personal information” and “non-personal” information. EA defines the former as “information that identifies you and that may be used to contact you on-line or off-line,” a characterization it reserves for name, postal address, and e-mail address. It dubs as “non-personal information” most of the other information it collects about its visitors and members via cookies, Flash cookies, and Web beacons. These data range from information on your computer's IP address to “feature usage, game play statistics and scores, user rankings and click paths as well as other data that you may provide in surveys, via your account preferences and online profiles or through purchases, for instance.” The company states that it will “never share your personal information with third parties without your consent.” Nevertheless, EA also states that it may use your personal information to buy “either non-personal or public information” from third parties “in order to supplement personal information provided directly by you so that we can consistently improve our sites and related advertising to better meet our visitors’ needs and preferences. To enrich our understanding of individual customers, we tie this information to the personal information you provide to us.”17
In other words, EA is saying that it uses “personal information” to learn “non-personal” information, which it can then use to attract advertisers for targeting. This disclosure is difficult to discern, however. Nor would the uninitiated have a clue about the extent to which Pogo accommodates advertising networks not only in data it collects for them but in the data it allows them to collect via cookies and Web beacons. The privacy policy notes that “DoubleClick, the company that serves many of the ads that appear on www.pogo.com, also collects information regarding your activities online, including the site you visit.” It adds that “other ad serving companies may also collect similar information.” It doesn't discuss all the ways advertisers, ad networks, ad exchanges, and data providers can converge to use information obtained in the course of tagging you to decide which ads to send. Instead, it suggests going to DoubleClick for “opt out” information. And, to protect itself from the use of data by advertisers and ad networks that post cookie tags and beacons on Pogo but that are under no obligation to conform to its privacy policy, EA yet further down in the notice presents this alarming warning:
If you click on a link to a third party site, including on an advertisement, you will leave the EA site you are visiting and go to the site you selected. Because we cannot control the activities of third parties, we cannot accept responsibility for any use of your personal information by such third parties, and we cannot guarantee that they will adhere to the same privacy and security practices as EA. We encourage you to review the privacy policies of any other service provider from whom you request services. If you visit a third party website that is linked to an EA site, you should consult that site's privacy policy before providing any personal information.18
This paragraph is rather common on ad-supported websites, and it sounds like a public service announcement. Reading the EA privacy policy without a decoder, one would hardly get the impression that Pogo itself is contributing information collected on the site and purchased elsewhere to help create segments that determine which ads will appear next to and in the games you play. Pogo also obscures the possibility that marketers will gather data about you on Pogo.com and that they will link this with other data and use this information elsewhere. The games you like and their relation to demographics, ad-click preferences, or various behaviors or lifestyle characteristics are additional elements that marketers and data providers might use to build up ideas about your preferences for particular products, services, or even ideologies. Here then, in dribs and drabs, are the building blocks for a broadly shared reputation, virtually hidden in the interest of sponsors by the exhortation to follow links.
By 2010, the FTC concluded that the public lacked a clear understanding of privacy policies, and it encouraged the industry to help the public learn about behavioral targeting “outside the privacy policy.” Ironically, though, the model adopted for the advertising option icon by the industry's self-regulatory program, administered by the firm Evidon, follows a well-established pattern of cloaking and ambiguity, as the icon has enabled many advertisers and ad networks to hide their activities behind jargon and rabbit-hole links. Moreover, despite the internet advertising industry's professed enthusiasm for the device, it is not omnipresent. It does appear with many Yahoo! sites’ ads; for example, as of this writing the icon appears on the Yahoo! Sports’ NHL site (http://sports.yahoo.com/nhl), where it is included in the top-right corner of an ad from the retail giant Target. For various reasons it may not be visible to some visitors; one explanation is that Yahoo! is currently not targeting you by tracking your behavior across sites, though it may be targeting you through demographic or other categories. Even if the icon does appear you may not notice it; on Yahoo! it is a tiny gray drawing next to the wording “AdChoices,” also in gray. But suppose you do notice it and click on it to learn more. Doing this in mid-2011 would have brought you to a Yahoo! page that, confusingly, has nothing to do with Target. It is a Yahoo! privacy page titled “AdChoices: Learn More About This Ad.”19 The page is divided into two parts, one “for consumers” and the other “for advertisers and publishers.” The former presents a preamble about how “the Web sites you visit work with online advertising companies to provide you with advertising that is as relevant and useful as possible.” It then lists three bulleted items: Who placed this ad? (answer: Yahoo!); Where can I learn more about how Yahoo! selects ads? 
(answer: a link to a page about Yahoo!'s “privacy and advertising practices”); and What choices do I have—about interest-based advertising from Yahoo!? (answer: a link to see the “interest-based categories” Yahoo! uses to serve you ads as well as to add to the category list or opt out). Click on the link to the choices and you may see that Yahoo! has tagged you in a few or many of the hundreds of interest categories.
Yahoo! is following the rules, and the rules say it doesn't have to give you detailed explanations about data mining or tracking right after you click on the icon. What Yahoo! actually says may sound completely innocuous, so you may not find it worth your bother to take additional action. Let's assume, though, that you decide you want to opt out of the company tracking you. You'll find a lot of language on the page and successive links that try to dissuade you. A prominent “Learn More!” notice on the AdChoices Web page exhorts the visitor to follow a link to “find out how online advertising supports the free content, products, and services you use online.” Another link takes a person to the Network Advertising Initiative (NAI), which tells visitors at the top that allowing cookies is “a way to support the websites and products you care about.” OK, but say you still want to stop Yahoo! from tracking you. Turns out, you can't do it. The only thing you can do is link to a part of the NAI site where you can tell that company and others that you don't want to receive their online behavioral ads. The company can still track you with a cookie so that it can use what it learns about you in statistical analyses of Web users. The rules don't allow you to tell it to stop doing that. In fact, when you go to the opt-out area, the site cautions you that your action to stop the firm's targeted ads will not enable you to stop receiving advertising. It will simply result in ads that are not necessarily relevant to your interests.20 In view of the limitations—that you will continue to be tracked and have irrelevant ads sent to you—why would many people click to opt out?
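The asymmetry of this opt-out regime is easy to model. In the minimal Python sketch below (the class and field names are hypothetical), opting out changes only the ad that comes back; the server's visit log grows exactly as before.

```python
# Hypothetical sketch of an NAI-style opt-out cookie: it suppresses
# *targeted* ads, but the ad server keeps logging every visit.
class AdServer:
    def __init__(self):
        self.log = []                        # visits recorded regardless

    def serve(self, cookies, interest):
        self.log.append(interest)            # tracking continues either way
        if cookies.get("opt_out") == "1":
            return "generic ad"              # merely less relevant advertising
        return f"ad targeted to {interest}"

server = AdServer()
jar = {}
assert server.serve(jar, "hockey") == "ad targeted to hockey"

jar["opt_out"] = "1"                         # the "choice" the icon offers
assert server.serve(jar, "hockey") == "generic ad"
assert len(server.log) == 2                  # both visits were logged anyway
```

Note, too, that the opt-out preference lives in a cookie: clear your cookies, as privacy-conscious users are advised to do, and the opt-out itself disappears.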
That, of course, is exactly what the internet advertising industry hopes will happen. It may be a bit shocking that the self-regulatory apparatus is set up to guide people to accept tracking by behavioral marketers. An even deeper concern is a set of four propositions that the digital advertising system's reports, websites, and leaders weave into their pronouncements, because each of them discourages people from taking seriously what is going on behind their screens. Randall Rothenberg's opinion piece discussed earlier in this chapter underscores the first proposition: marketers and regulators have dealt successfully with the real privacy problems of the Web, so the only reason people worry about the use of their data in the new media environment is that it feels “creepy.” This view sees the Web's real potential for harm as the leaking of information about a person that can damage the person's financial situation, reveal sensitive health information, or cause other forms of embarrassment that might corrode interpersonal or employment relationships. The 2007 FTC report quietly accepts this view. It does not present a summary of why we ought to be concerned about behavioral targeting, but its strongest statements of concern center on situations in which people may be stung when their anonymity is unmasked without permission. This approach also shows up in the distinction between first- and third-party tracking in the 2009 FTC report quoted above; the report accepts that some internet users’ worries may reflect real estimations of misused or abused information, while others relate merely to the creepiness of being followed. The internet industry generally accepts this view, though it argues that the FTC, like privacy advocates, overestimates the real dangers that exist.
The second proposition holds that regulators and the public should appreciate marketers for promoting anonymity and relevance as the two pillars of acceptable tracking by the advertising system. Marketers and websites state that while they reserve the right to learn and use people's names and e-mail addresses, they typically don't sell or reveal that personally identifiable information to other parties that are using their data. That said, the industry maintains, the public and regulators should know that companies need to feed on individuals’ data if they are to bring them relevant, enjoyable material. Giving up information, even personal information, will increasingly be the price of circulating “free” or inexpensive relevant content on the internet. But, according to the industry, the kind of information requested—and the protections of that information—are such that the worries are merely psychological.
This stream of logic leads to the third and fourth propositions. The third one declares that because privacy concerns about internet advertising are really emotional states not rooted in logical or valid concerns, people's reactions to marketers’ data collection efforts are inherently unstable. After all, people may say they want to protect their information, but they will gladly relinquish it for token rewards. This contention has been analyzed in the trade press and at meetings for years. A 2001 Advertising Age article, for example, quoted industry analyst Rob Leathern as saying “flatly” that “consumers are very schizophrenic. They want their privacy, but they're willing to give out information for entry into an online sweepstakes.”21 In contrast, the fourth proposition sees audiences as more rational, arguing that while some Americans are simply unconcerned about their privacy online, most Americans make cost-benefit analyses about whether to release their information. Privacy consultant Alan Westin calls these people “privacy pragmatists.” He noted in a 2003 survey analysis of Americans’ attitudes on this topic that “they examined the benefits to them or society of the data collection and use, wanted to know the privacy risk and how organizations proposed to control those, and then decided whether to trust the organization or seek legal oversight.”22
This description of most Americans as being aware of their online privacy options supports the industry's contention that self-regulation through opt-out mechanisms is a logical way to go and that the small opt-out numbers reflect rational choice. Those who champion the notion that consumers are illogical about privacy would probably agree that such mechanisms can't hurt but would add that in the end consumers’ decisions not to opt out reflect fickleness more than anything else. Either assessment would click with Dave Morgan's view that quid pro quo arrangements with consumers are the best way to handle data collection. Morgan is a target-marketing entrepreneur whose companies (24/7 Real Media and Tacoda) have exerted profound influence on the internet. “If you're giving medicine to a dog, you put it inside some peanut butter,” he advised in 2001. “Tell consumers what you're going to do, but give them something for it.”23
Morgan's recommendation that companies offer people a trade for their information resonates with the needs of marketers even more today. Publishers increasingly want visitors to register so that they can track them across different devices—for example, a desktop computer, a laptop, an iPad, a mobile device, or a home television set. In the heat of escalating competition, publishers will also want information from people—about their health, their travels, the value of their homes—that can attract advertisers to them but that individuals may think twice about providing. For some visitors a gift, a few kind words, and a nod to security will pry loose their data. These tactics likely will also escalate among marketers and third-party data providers, as these groups want to collect data of their own as well as to convince people not to opt out of their participation in behavioral advertising practices. When such schemes result in a willingness among people to offer up personal information, the industry can argue that people are inconsistent about their data or that they carefully consider their choices. Indeed, the industry is doing its best to convince the public and regulators that self-regulation guards against bad actions as well as educates people, thereby minimizing any risk to those offering personal information online. To hear Randall Rothenberg tell it, the risk may even be nonexistent, exaggerated, or trumped up by activists.
A series of national telephone surveys I've conducted with the help of well-known polling firms since 1999 demonstrates that neither schizophrenic nor pragmatic is an apt description of what is taking place when people willingly offer information about themselves. Rather, individuals seem to be facing an enormous deficit when they try to find a balance between their need to carry out activities online and their realization that they are being tracked. Beyond knowing they are being tracked, they have little understanding of how companies are allowed to handle their data. In fact, they underestimate what marketers can legally do, and they overestimate the protection the government provides for them. These findings point to very different conclusions from the ones marketers and even regulators have been promulgating. This reality, linked to an alternative understanding of the importance of the data-collection system, also suggests very different courses of action from the ones we have seen to this point.
Consider first that, in 2005, 79 percent of 1,500 adults participating in a nationally representative survey I conducted agreed with the statement “I am nervous about websites having information about me.”24 We had cast our net broadly to include anyone eighteen years or older who answered yes to the question “Have you used the internet in the past month at home, work, or anywhere else?” The concern expressed was not a fluke; after more than a decade of tracking Americans’ attitudes and ideas about online marketing, concerns about online data collection had become a persistent worry.
The picture accompanying this worry is complex and troubling. Eighty percent of the 2005 survey respondents believed that “companies today have the ability to follow my activity across many sites on the web.” At the same time, large majorities of the internet-using U.S. public did not understand key practices and laws relating to profiling and behavioral targeting. They mistakenly believed that the government would protect them from online activities that put their data in danger or that create unfair differential pricing. We found that about half the adult population did not realize that most online merchants are allowed to share information with “affiliates” without the consumers’ permission or that print magazines can sell information about them without their permission. About half were unaware of the scheme called “phishing,” in which crooks posing as trusted banks or other institutions send e-mails to an unsuspecting public, requesting sensitive information such as credit card numbers and passwords. At the same time, half the respondents did not believe that banks “often send their customers emails that ask them to click on a link wanting them to verify their account.”
If the fact that half the population was ill-informed doesn't seem shocking, many other topics in the survey demonstrated much higher percentages of unaware consumers. For example, 62 percent of our respondents thought that it was illegal “for an online store to charge different people different prices at the same time of day,” and 71 percent thought the same of a brick-and-mortar store. Similarly, 68 percent did not know whether “by law, a site such as Expedia and Orbitz that compares prices on different airlines must include the lowest airline prices” (the answer is false). Sixty-four percent didn't know that it is legal for their supermarket “to sell other companies information about what I buy,” and 72 percent didn't know that charities can legally sell names to other charities.
Nor has consumer awareness increased notably in the past few years. A 2009 survey I conducted with Chris Jay Hoofnagle and Jennifer King at Berkeley Law School asked similar questions about the legality of merchants sharing or selling people's information, and only half of the population or less answered correctly.25 Moreover, the 2009 study confirmed the public's lack of understanding of Web privacy that was evident in three of my earlier surveys. Sixty-two percent falsely believed that “if a website has a privacy policy, it means that the site cannot share information about you with other companies, unless you give the website your permission” (16 percent answered that they didn't know). This ignorance is distressing: despite all the discussions in policy circles about online privacy, 78 percent of American adults still do not realize that the phrase “privacy policy” is merely an invitation to read how companies deal with their information. Some analysts—even some consumer advocates—have suggested to me in frustration that the widespread, continuing ignorance betrays an irritating lack of attention to what people ought to see as a key issue. My perspective is that, although people do see the use of their data by marketers as an important topic, they also have busy, complicated lives, which encourages shorthand assumptions that often turn out to be wrong.
Whatever their level of nervousness, and whatever misconceptions they have, a majority of Americans who go online say they don't want online content tailored to them. Our 2009 survey tackled this topic in a number of ways, with a telephone interviewer asking randomly selected participants:
• Please tell me whether or not you want the websites you visit to show you ads that are tailored to your interests.
• Please tell me whether or not you want the websites you visit to show you discounts that are tailored to your interests.
• Please tell me whether or not you want the websites you visit to show you news that is tailored to your interests.
If a respondent answered “yes” to any of these questions, a corresponding question would then be asked:
• Would it be OK or not OK if these ads [discounts/news] were tailored for you based on following what you do on the website you are visiting?
• Would it be OK or not OK if these ads [discounts/news] were tailored for you based on following what you do on OTHER websites you have visited?
• Would it be OK or not OK if these ads [discounts/news] were tailored for you based on following what you do OFFLINE—for example, in stores?
The interviewer also asked a general question about whether behavioral tracking for the purpose of tailored ads is acceptable if the tracking is anonymous. The lead-up to the question noted that marketers “often use technologies to follow the websites you visit and the content you look at in order to better customize ads.” The interviewer then asked whether the respondent would “definitely allow, probably allow, probably not allow, or definitely not allow advertisers” to “follow you online in an anonymous way in exchange for free content.”
It turns out that fully 66 percent of the respondents do not want advertisements tailored for them. The proportions saying no to tailoring are lower when it comes to tailored discounts and news, but they still represent around half the population—49 percent and 57 percent, respectively. When we add to them the respondents who reject tailoring once they learn they will be followed at the same website, on other websites, or off-line, the number saying no jumps strongly. If the tracking is “on the website you are visiting,” 73 percent don't want it for ads, 62 percent don't want it for discounts, and 71 percent don't want it for news. If the tracking takes place on “other websites you have visited” or “offline—for example, in stores,” more than 80 percent say they don't want tailoring in any of the three areas—advertisements, discounts, and news.
The assurance that the tracking is anonymous doesn't seem to lessen Americans’ concerns about behavioral targeting. They are quite negative when it comes to the general scenario of free content supported by tailored advertising that results from anonymously “following the websites you visit and the content you look at.” A total of 68 percent “definitely” would not allow it, and 19 percent would “probably” not allow it. While 10 percent would “probably” allow it, only 2 percent would “definitely” permit it; 1 percent say they don't know what they would do.
A final aspect of the findings that deserves highlighting relates to young adults (eighteen to twenty-four years old). Popular commentary suggests that America's youngest adults do not care about information privacy, particularly online. As evidence, many point to younger internet users’ adoption and prolific use of blogs, social-networking sites, posting of photos, and general documenting and (over)sharing their life's details online, from the mundane to the intimate, for all the world to consume. “Young adults,” exhorted one newspaper article to that segment of its readers, “you might regret that scandalous Facebook posting as you get older.”26 More broadly, Robert Iger, CEO of Disney, quipped in 2009 that “kids don't care” about privacy issues, contending that complaints generally came from much older consumers. Indeed, he said that when he talked to his adult children about their online privacy concerns, “they can't figure out what I'm talking about.”27 The contention is an important one, because at industry meetings one often hears internet practitioners claim that today's privacy concerns are confined to an older generation. The rising generation, they predict, will not have anywhere near the worries about privacy that their elders had—so tracking and targeting activities will have more freedom.
Iger is not alone in this view. Anecdotes abound detailing how college-age students post photos of themselves, unclothed and/or drunken, for the entire world—including potential employers—to see. It is not a leap to argue that these actions are hard-wired into young people. One psychological study found that adolescents (age thirteen to sixteen) and what they termed “youths” (those age eighteen to twenty-two) are “more inclined toward risky behavior and risky decision making than are ‘adults’ (those older than 24 years) and that peer influence plays an important role in explaining risky behavior during adolescence.”28 Their finding was more pronounced among adolescents than among youths, but differences between youths’ and adults’ willingness to take risks were striking—particularly when group behavior was involved. Although the authors do not mention social media, the findings are clearly relevant to these situations. There the benefits of looking cool to peers may outweigh concerns about negative consequences, especially if those potential consequences are not likely to be immediate. A related explanation for risky privacy behavior on social-networking sites is that they encourage users to disclose more and more information over time.
Young people's use of social media does not in itself mean that they find privacy irrelevant. Indeed, the Pew Internet & American Life Project found in 2007 that teenagers used a variety of techniques to obscure their real location or personal details on social-networking sites.29 That study fits with the findings of other researchers, who have urged reframing the issue to ask which dimensions of privacy concern younger adults. While differences between young and older adults may be important, other, more subtle commonalities may be overlooked. In recent years older age groups have rushed to social networking in large numbers, discussing personal issues and details there. A common anecdotal observation is that young adults and adolescents are more likely than their elders to post racy photos or document episodes of unseemly or disreputable behavior. Even if research shows this distinction to be accurate, the question remains: Do identical, higher, or lower percentages of Americans over twenty-four years old reveal perhaps more subtle but important private information about themselves that might lead to embarrassing or unfortunate incidents, such as identity theft?
In fact, we found that, contrary to what many have suggested, the attitudes toward privacy expressed by American young adults (age eighteen to twenty-four) are not nearly so different from those of older adults.30 Indeed, large percentages of young adults agree with older Americans when it comes to sensitivity about online privacy and policy suggestions. For example, large majorities of both young adults and those over twenty-four years old:
• have refused to give information to a business when they felt it was too personal or not necessary (82 percent for young adults, 88 percent for the entire sample);
• believe anyone who uploads a photo of someone to the internet should get that person's permission first, even if taken in public (84 percent and 86 percent);
• believe there should be a law entitling people to know all the information websites have about them (62 percent and 68 percent); and
• believe there should be a law that requires websites to delete all stored information about an individual (88 percent and 92 percent).31
In view of these findings, why do so many young adults behave in social networks and other online public spaces in such indiscreet and revealing ways? A number of answers present themselves, including suggestions that people twenty-four and younger approach cost-benefit analyses related to risk differently than do individuals older than twenty-four. Wrapped up in their desire to socialize online, they tend to overlook concerns that they may have in more quiet, rational moments. An important part of the picture, though, must surely be our finding that, of all age groups, the eighteen-to-twenty-four-year-olds had the poorest understanding of the meaning of the privacy-policy label and of companies’ right to sell or share their data with other firms. According to our findings, the online savvy many attribute to younger individuals (so-called digital natives) doesn't appear to translate into privacy knowledge. The entire population of adult Americans exhibits a high level of online-privacy illiteracy: 75 percent answered only two or fewer questions correctly, with 30 percent getting none right. But the youngest adults performed the worst on these measures: 88 percent answered only two or fewer correctly, and 42 percent could answer none correctly. One conclusion we can draw from these results is that younger people are even more likely than their elders to believe that the law protects their privacy online and off-line to a greater degree than it actually does. This lack of online and legal literacy, rather than a cavalier lack of concern for privacy, seems to be an important factor behind young people's much-hyped disregard for traditional privacy.32
In the conclusion to the full report, my colleagues and I suggest that “young-adult Americans have an aspiration for increased privacy even while they participate in an online reality that is optimized to increase their revelation of personal data.” Actually, the findings of the surveys we conducted in 2005 and 2009 support this point not just for young adults but for more than 70 percent of adult Americans who go online. As we have seen, Americans worry that marketers are following them even as they participate in marketing activities online. But because the FTC does not dispute the industry's claim that truly anonymous targeting is not harmful, should we accept that our problem with tracking and tailoring is simply a visceral, psychological impression of creepiness?
The answer is no. There are crucial issues relating to privacy in the digital-marketing space that have not received the attention they deserve. This book tells the story of an advertising system that has embarked on a fundamental and systematic process of social discrimination. It's a new world, and we're only at the beginning. Nevertheless, the logic of social discrimination is already firmly entrenched in the advertising system and the media that serve it. The direction is basic: In their quest to separate “targets” from “waste,” marketers buy access to data about users’ backgrounds, activities, and friends that will allow them to locate the customers they deem most valuable. They surround those targets with commercial messages that match their views of them and that offer them incentives and rewards—discounts, personal messages, the possibility of relationships—that are designed to make them feel good about the product. And digital publishers help the marketers: by drawing on information about individuals acquired through registrations and purchase from data firms, they provide the targets with personalized content. This personalized content is designed for two purposes: to keep individuals on the site and thereby increase the chance that they will interact with the sponsor's ad; and to resonate with and reinforce the commercial message.
Industry claims of anonymity surrounding all these data may soften the impact of these sorting and labeling processes. But in making such claims, the industry seriously undermines the traditional meaning of the word. If a company can follow and interact with you in the digital environment—and that potentially includes the mobile phone and your television set—its claim that you are anonymous is meaningless, particularly when firms intermittently add off-line information to the online data and then simply strip the name and address to make it “anonymous.”
The logical end for marketers and publishers of all these practices is the creation of reputation silos: flows of advertising, information, entertainment, and news designed to fit profiles about individuals and people who statistically seem similar. The speed, variety, and texture of the targeting and personalization work that defines the new media-buying system have progressed to such an extent over the past half-decade that the next half-decade and the decades beyond will see exponentially rising abilities to mine individuals’ information, decide their value using probabilistic methods, and target and personalize mass-media content. Add to that the ever-widening sharing of information about customers among companies, and it is not at all difficult to see that media and advertisers are determined to present people who have been assigned specific reputations in the marketplace with preconceived views of the world and with opportunities based on those reputations.
Recall Larry and Rhonda, the fictional lower-middle-class family of fast-food aficionados I mentioned in the introductory chapter. By now it should be easy to see how this daisy chain of labels will be applied to them in the twenty-first-century marketing world. Although Larry had complained in conversation with his boss about the down-market Web he was seeing, not all labels reflect a downscale reputation. In fact, in view of the attributions and predictions publishers and marketers make about their targets (à la Eric Schmidt, as discussed in Chapter 6), the reputations people receive might even have an accidental, weird quality to them.
In that vein, consider Sasha, a fictional woman with many Facebook and Twitter friends who chat with her online about a particular upscale cosmetics line she uses. Marketers of trendy products begin to see her as a hub for trendy women's accessories. No matter that her salary as a receptionist in a law firm does not provide her with nearly enough money to buy all these products; some marketers have discovered that she had an Ivy League education and that she lives—with her well-off parents, it turns out—in upscale Roslyn, Long Island. Because of her educational background and current address, one data analysis tags her as selectively affluent. Another labels her as influential among women in the age eighteen to twenty-eight bracket. As a result, she is soon surrounded by cosmetics ads in the same way that Rhonda is surrounded by loan and weight-loss ads. The barrage leads Sasha to start a blog about cosmetics, and it takes on a viral popularity after she posts a video in which her cat sports lots of different types of makeup. The numbers of visitors to her blog lead firms to send her a wide variety of cosmetics in the hope that she will offer favorable comments (recall the discussion on “earned media” in Chapter 6). Companies that make all sorts of upscale goods begin sending her products. She also finds that articles awaiting her when she goes online tend to be about fashion trends, and they are often sponsored by companies that make expensive dishware, travel cases, and writing implements. Unlike Larry and Rhonda, she has never received a discount coupon for a fast-food outlet, and the e-mails and ads she receives from fitness companies emphasize not weight but their high-quality equipment and elite air.
It remains to be seen how long the lives, or half-lives, of such reputations last as we move deep into the tailored-marketing century. Will reputations of some people endure, with news, entertainment, and commercial messages meshing consistently through many years? Will the reputations of others move among different reputation grades, with accordingly different opportunities and worldviews at different times? What will happen when people who are unhappy with what they perceive to be their profiles try to game the system, changing their digital habits in order to reconstruct and perhaps rehabilitate their reputations?
Whether one approves or disapproves, social discrimination via reputation silos may well mean having sectors of your life labeled by companies you don't know, for reasons you don't understand, and with data that you did not grant permission to use. As we have seen, marketing-driven synergies between advertising, information, human-interest stories (“soft news”), discount offers, and entertainment are becoming commonplace. We have also seen that technologies to tailor and target these activities are developing quickly. An increasing number of digital advertisers already expect that publishers will choose or craft editorial matter to reinforce the tailored ads that surround them. Not too many years from now your TV listings, the amount you pay for programs, and the commercials you see will significantly be determined by the reputation you develop with marketers and publishers that want to attract those advertisers.
Advertisers and publishers will reach out to form relationships with customers they value—targets—pushing away the “waste” whose purchase patterns suggest low margins and even losses. Better still, they will adopt data-analysis techniques that warn them against advertising to those people with those brands in the first place. They will have different brands for them, and a targeting and triaging process to get them there. This isn't a new idea. During the 1990s the Kroger supermarket company, which owns Smitty's and Ralph's in the same areas of California, sent certain Smitty's discount coupons and advertisements to Ralph's customers after deciding that this group fit the more downscale profile of that brand. In the years to come, these sorts of activities will become technologically easy, common, based on broadly shared data about people and households, and reinforced by associations with news, information, and entertainment that stream alongside, and even as part of, the commercial messages.
All this will take place under an umbrella of industry reassurance that individuals and households are receiving the most relevant, and therefore interesting, materials possible. Personalized customer-relationship marketing by marketers and media firms will reach out to people to assure them of the benefits of tailored content. Lobbyists will similarly assure government officials, whom they will surround with the most desirable streams of marketing and media available in order to gain their favor. Individual clicks will still be key to measuring results. They will indicate your activities online, measure your responses, determine what content environment promotes the buying of particular products, and help determine where you belong in the schemes of value that advertisers and publishers concoct. But as voice-recognition technology takes off, the click will be less and less often defined by a finger on a mouse or a remote control, because your voice will register your interest, particularly in the mobile space and on the home television. Your role in the long click that links ads and content with purchases will also be traced by whatever technology replaces the credit-card swipe. Your mobile device will probably carry a near-field-communication chip that will automatically communicate with your bank or credit-advancing company. Eventually your voiceprint or some as-yet-unknown technology will complete the deal in ways that ensure that your money is secure while adding to the profiles held by companies that have cultivated relationships with you.
So what are the social costs of such a world—a world in which discrimination through reputation silos has become the norm? One way to answer is to ask what a society needs from its media. I would suggest that a good society should have a balance between what might be called society-making media and segment-making media. Segment-making media are media that encourage small slices of society to talk to themselves, while society-making media are those that have the potential to get all those segments to talk to each other. A hallmark of the twentieth century was the growth of both types in the United States. A huge number of ad-supported vehicles—mostly magazines and newspapers—served as a way to reinforce, even create, identities for an impressive array of segments that advertisers cared about, from immigrant Czechs to luxury-car owners, to Knights of Columbus, and far more. At the same time, some ad-sponsored newspapers, radio networks, and—especially—television networks were able to reach across these groups. Through entertainment, news, and information, society-making media depicted concerns and connections that people ought to share in a larger national community.
For those who hope for a caring society, each level of media had, and continues to have, its problems. Segment-making media have sometimes offered their audiences narrow, prejudiced views of other social segments. Similarly, society-making media have marginalized certain groups, perpetuated stereotypes of many others, and generally presented an ideal vision of the world that reflects the corporate establishment sponsoring them at the expense of the competing visions that define actual publics. Nevertheless, the existence of both forms of media offers the potential for a healthy balance. In the ideal scenario segment-making media strengthen the identities of interest groups while society-making media allow those groups to move out of their parochial scenes to talk with, argue against, and entertain one another. The result is a rich and diverse sense of overarching connectedness: this is what a vibrant society is all about.
Yet the past three decades have marked a steady movement away from such a society, and this change is directly related to the profound shift in the long-term strategies of major advertisers and their agencies away from society-making media and toward segment-making media—media, that is, that have allowed them to search out and exploit differences between consumers. This is a slow process that will continue to evolve through the twenty-first century, but we can already discern its stages. During the 1980s and early 1990s, with cable TV splintering channels to the home, advertisers’ focus was on identifying segment-making vehicles. They encouraged the growth of electronic and print channels that reached segments of society that marketers found valuable. As this book has shown, though, the past decade has seen the rise of a new mini-industry within advertising that is upending not only traditional marketing practices but traditional media practices as well. The emerging media-planning and media-buying system is predicated on neither society-making nor segment-making advertising channels. Rather, it is organized around a belief in the primacy of the chosen person: a belief that has motivated marketers to sort audiences, find the individuals within them whom they deem “valuable,” track those people, and serve them personalized ads and other content anywhere they show up.
This is not quite the world that technology writers such as Yochai Benkler, Henry Jenkins, and Clay Shirky celebrate when they note the immense value of the internet in people's collaborations. It is no less important, however, to note less-celebratory aspects of this same activity. Away from public view advertisers and their agents try to audit and exploit collaborations, sometimes to the point of attempting to influence them by stimulating buzz to hype a particular brand. People's contributions to Facebook, Twitter, Foursquare, Huffington Post, and even their own blogs become fodder for the advertising system's databases. Similarly, marketers’ analyses of the “social graph” of an individual's friends may lead advertisers to shape the earned, paid, and owned media that those friends see to the extent that it affects the way they collaborate with each other.
This is also not quite the world political theorists such as Cass Sunstein predict when they talk about how Americans will increasingly self-select their news on the basis of their political values. In fact, behind the screen marketers and publishers are increasingly interested in selecting the news material that individuals are to be offered. To some extent, at least, a distinction needs to be drawn between so-called hard news and other information. Sunstein's focus is principally on hard news and associated editorial matters. By contrast, marketers and publishers so far have tended to stay away from personalizing straightforward political news and views. Rather, they are interested in positioning human-interest stories as well as advertisements, discounts, and television-viewing agendas in ways that fit the new marketing logic. This focus doesn't make the advertising and media system's activities any less political, though. The views people receive from advertising, soft news and information, and entertainment very much relate to the ongoing struggle over who will guide society. Media content presents dynamic portraits of which groups and people are “in” and which are “out”; who has claim to public attention and who doesn't; who in society has problems and why we should care; which corporate and government institutions work well and which work badly—and why; and how we fit into all of it.
These pictures are starting points for our interpretations; recall Sasha as well as Rhonda and Larry and their kids. People in consistently different reputation silos—fed different streams of material—will have very different starting points and different opportunities. From these understandings of power in everyday life and our relation to it come interpretations of the functions and success of capitalism, democracy, and the American ethos of equal opportunity. The advertising system clearly isn't creating these distinctions and nodes of power out of whole cloth. Broadly shared stereotypes, prejudices, and resource struggles lie behind much of what is taking place. The advertising system is, however, placing the elements of social discrimination in striking new contexts. They now appear as individualized rather than group realities—a kind of report card of a person's social position. The ones who come out on top may find the grades exhilarating. Even for them, though, the realization that marketers are doing the grading in secret and without permission may not be a sanguine thought. New ways to tweak formulas, a few bad marketing and media choices on an individual's part, social-media “friends” marketers decide are wrong for certain topics, the inevitable lack of marketer interest that comes with aging—these and other uncontrollable changes might throw a person down the reputation slope.
Of course, reputation silos will never be hermetically sealed. People will see other choices, and the serendipity of encountering untargeted, unlikely content will remain. So will the vagaries of individual interpretation. But the different starting points that personalization encourages and that feel comfortably related to people's life circumstances may well discourage some from exploring entertainment, information, and news that seem too far from those comfort zones. They will allow, perhaps even encourage, individuals to live in their own personally constructed worlds, separate from people and issues they don't care about and don't want to be bothered with. Such a preference may accelerate when antagonisms based on employment, age, income, ethnicity, and more rise up as a result of competition over jobs and political muscle. In these circumstances, reputation silos may widen the distance people feel between one another. They may further erode the tolerance and mutual dependence between diverse groups that enable a society to work.
People's awareness of differences in the content they receive may also create or reinforce a sense of distrust about the power of organizations over which they have no control to define and position them in the social world. The sense that advertisers are manipulating labels about people's value behind the scenes, but without the targets knowing how or with what information, is an invitation to social tension. It's hard not to be suspicious when articles appear in the popular press about certain people being charged different prices for the same merchandise by the same company at around the same time. As I discussed earlier in this chapter, Annenberg national surveys show quite clearly that Americans believe this sort of price discrimination to be illegal. Anecdotal evidence suggests that when citizens realize that price discrimination is legal they become angry at the businesses that carry it out and the government that allows it. Going forward, then, it seems likely that the new advertising system, using data people wouldn't want marketers to have in order to choose winners and losers behind their backs, will produce a corrosive social atmosphere characterized by resentment and distrust of both government and marketers.
The way in which the new advertising system approaches individuals and society is difficult to understand. It is hidden behind multiple screens of industry jargon, claims of competitive secrecy, and links that pretend to lead toward explanations but actually enhance confusion. The confusion is accepted by a federal government that focuses on narrow privacy issues instead of recognizing that at its core this story is about the health of consumer-business relations in the twenty-first century, and the extent to which marketers should be able to take consumers’ own information, movements, friendships, and far more and turn these data points into statistically driven profiles of them. These labels are often based on the analysis of far-flung data gathering and probability statistics. They may have little to do with the ways the individuals see themselves. Nor is it clear that the generalizations these labels imply have anything to do with truth or reality. Simply because an ad campaign yields positive results doesn't prove that the audience labels built into the campaign are correct. What remains after all the mathematical smoke clears, though, is that marketers’ images of people, people's images of themselves, and media firms’ approaches to traditional norms and society are irrevocably altered. Moreover, the new technologies that marketers and media firms have developed to track individuals and tailor materials to them are a template for the manipulation of citizens. Governments throughout the world can easily adapt them in their attempts to control populations for explicit political aims.
What can be done? To a large extent the train has already left the station in the United States. The media-planning and media-buying industry's new logics are quickly becoming part of our landscape. Despite bipartisan concerns about privacy, the Federal Trade Commission's mantra of self-regulation is unlikely to change in the foreseeable future. Advertising and media representatives will undoubtedly continue to be adept at defining a narrow problem and a narrow solution. Nevertheless, here are four major steps that need to be carried out if we take seriously the claim that the new rules of advertising are reshaping our world—and not for the better—and are doing so behind our backs:
• Teach our children well—early and often. Today, the overwhelming majority of people who use the internet have had no formal training in it. In the rush and excitement of new technologies, they have adapted by figuring out what they need to do and getting help if something goes wrong. That method works on a day-to-day basis, but it's the wrong approach if we are to have a citizenry that can claim control over the rules of the game behind the screen. You don't have to be a computer engineer to be able to grasp the policy issues involved in the emerging media system. It is crucial, however, to have knowledge of the ABCs of digital technologies in order to understand what marketers, media firms, and technology companies are doing as well as to talk to experts who can evaluate their claims. This is a new language of the twenty-first century, and students from at least middle school onward should learn it. The claim that the curriculum is already too crowded is a poor excuse; this is a fascinating topic that can easily be accommodated across the curriculum—in science, math, history, and even literature and art. The more difficult challenge is to get teachers up to speed so that they can teach it effectively. But it has to be done. A serendipitous by-product might be an increase in the number of students who want to pursue engineering careers.
• Let people know what is really going on with their data. In 2008 I suggested to New York Times technology reporter Saul Hansell that the industry ought to post an icon on every tailored advertisement that when clicked would lead to a “privacy dashboard.”33 The dashboard would show you the various levels of information the advertising company used about you to create the specific ad. The information disclosed would not only show behavioral targeting but would include knowledge collected from any sources, including third-party data providers, the U.S. Census, and online discussions.34 The dashboard would reveal precisely which companies provided that information, how certain data were mixed with other data, and what conclusions were drawn from this mixing. It would also allow you to suggest specific deletions or changes in various companies’ understandings of you. Unfortunately, instead of the icon with the privacy dashboard the industry gave us the “advertising option icon,” which, as noted earlier, does not seriously address the problem of people's complete lack of understanding of specific information that companies know about them. I appreciate the difficulty of carrying out a project such as this; after Hansell's blog post appeared I had lots of good discussions with people in the industry about it. But one great advantage of the dashboard approach is that it would facilitate individuals’ involvement with their data and would encourage them to see in an uncomplicated yet compelling way just what does happen behind the screen. It may be quixotic to believe that major industry actors such as Google, Microsoft, Ford, Procter and Gamble, Publicis, and WPP would come together to implement it. Nevertheless, I'm still convinced that something far more informative than what we have now can be implemented.
• Create a “Do Not Track” regime with rotating “relevance” categories. This activity, too, will require industry will and creativity, but it must be done. As part of their bid to demonstrate to Congress and the FTC that they can self-regulate, some important internet actors, among them the browser divisions of Microsoft, Mozilla (maker of Firefox), and Google, have moved in different ways toward enabling Web users to choose not to be tracked by third parties. Of course, at the same time, marketers, websites, and advertising associations discourage the public from signing up with the reminder—by now a mantra—that by prohibiting advertisers from following them users will lose the ability to consistently be served relevant ads and offers.
To counter this reminder, the public needs to be better informed that receiving relevant ads may come at the cost of control over their personal information as well as their self-definition. At the same time, a positive spin should be placed on the lack of specific ad relevance. Instead of sending people ads that are totally random, firms should be pressured to deliver ads that have been tailored to other categories of people—or to other specific people—and to inform them which types the ads are intended to serve. So, for example, a forty-year-old African American man might get an ad that, he will be told, is normally sent to a sixteen-year-old African American teenager—or to a forty-year-old Hispanic man. This sharing of tailored ads might well intrigue people and bring far more signups to such a list than the NAI list has now. In addition, it will allow people to witness the divisions advertisers are creating between people, encourage critical social conversation about this practice, and possibly help advertisers understand what limits various stakeholders would like to place on such categories of division.
• Pass ground-level government regulations to force a playing field of good actors. Industry actors naturally want to control their own activities, and they often claim that the digital environment is still too new to form rules that might stifle creativity. Not wanting to kill an American goose that is laying golden eggs both domestically and worldwide, regulators have come to accept their argument. In discussing data tracking and use with people throughout the industry, however, I have come to believe that some companies are pursuing paths of data intrusion they wouldn't otherwise pursue if not for the competition. It may well be that a small but critical group of ground-level regulations is necessary to allow the good actors to prosper and at the same time to stop actions by firms that an intersecting community of privacy advocates, industry practitioners, citizens, and regulators believes may be going too far with collecting, analyzing, and implementing certain data. Consensus already exists generally about the delicacy of so-called sensitive data—information on people's sexual preferences, pharmaceutical drug use, and financial status. At this point, these areas tend to be treated as having opt-in status; maybe they should be fully off-limits. Perhaps, too, less-explosive categories should likewise be assigned opt-in or maybe even off-limits status when it comes to targeted advertising and publishing. For example, we might consider prohibiting data firms from using without permission what individuals say in the heat of social-media discussions or in chat rooms for the purposes of marketing to them. Such a step would assure people that some parts of the social world can be enjoyed without worrying that what they say might be used to follow them in ways they could hardly have considered.
These steps are not rigid; others may well have different, better ideas. The point of this process—in fact, the purpose of all four steps—is not only to make good rules; it is to encourage people from many backgrounds to examine, interrogate, understand, and critique the twenty-first-century advertising and media system and its rules. In the final analysis, this process of understanding will ensure that all of us can knowledgeably exert influence over the forces that define us as well as our value to ourselves and to the world at large.