“The Truth will be present in everything. You’ll know everything about yourself and your loved ones if you opt in,” said Jeff Malmad, managing director of mobile devices at Mindshare media and marketing services agency, which is owned by the huge WPP international conglomerate.1 He believes that marketers should use this pitch to convince consumers of the benefits they receive from “wearables”—clothing and accessories incorporating computer and advanced electronic technology.2 The digital watch and the fitness bracelet stand out as the most common examples today, but analysts predict this list will expand as the public adopts chip-laden glasses, headgear, shirts, coats, rings, and more in the years to come.3
Some retailing experts are skeptical that wearables will make a major splash. They point out that Google stopped production of its Google Glass spectacles after they were criticized for interfering with people’s daily activities and for intrusively capturing real-time data about individuals. Fitness monitors, among the first entrants into wearables (around 2014), have been faulted for failing to hold their owners’ attention. The consultancy firm Endeavor Partners estimated in 2015 that about a third of these trackers are abandoned within six months. Health care investment fund Rock Health noted that only half of the nearly twenty million registered users of Fitbit, by far the biggest seller of fitness monitors, were still active as of the first quarter of 2015.4
Supporters counter that smart watches are a better test of wearables’ prospects, noting that these watches can perform the functions of fitness monitors along with other useful tasks. The most prominent smart watch, the Apple Watch, was introduced in April 2015; within three months an estimated 1.9 million had been sold. In June of that year conventional timepiece sales in the United States fell more than they had in seven years, an indication that Apple Inc.’s watch was eroding demand for traditional watches and clocks.5 Elmar Mock, one of the inventors of the Swatch watch, predicted a month before the Apple debut that the smart watch might cause an “ice age” for the four-century-old timepiece industry. To compete, Swatch released its own smart watch later in the year. “The Apple Watch is going to gain a significant amount of penetration,” was the more measured prediction of an executive from the NPD market-tracking firm in August 2015.6
Marketing analysts have taken the long view of the nascent business. “Just as tablets faced skepticism in their early days, with consumers and critics questioning the need for new devices, so too does wearable technology,” wrote the international accounting firm PwC (formerly known as PricewaterhouseCoopers) in a report heralding the growing importance of wearable devices.7 “Wearable technology is still in its infancy,” agreed the Certona consultancy in 2014.8 The company suggested the devices will be data generators that can “improve personalization . . . by implementing predictive analytics tools.”9 For example, mileage data on a fitness tracker can alert a retailer that the customer is in need of new running shoes. But Certona also cautioned, “Google Glass, smartwatches, and fitness trackers are getting all the attention now, but brands should not discount other wearable technologies.”10
PwC went on to note that in 2014, 20 percent of U.S. residents owned some form of wearable device, either a “primary” device (a central connector for all kinds of devices and information) or a “secondary” device (one that captures specific actions or measurements that are then funneled back to a primary device). Smart watches are presently considered secondary devices, but PwC predicted that they, along with smart glasses, “will emerge as key primary devices, acting as a central collection portal for different wearables.”11 The PwC report didn’t count the smartphone as a wearable, though one can argue it is a de facto primary wearable device, since it serves as a central hub for secondary accessories such as fitness trackers. Had the report included it, the percentage of Americans carrying technology close to their bodies in late 2014 would have been at least 71 percent—the proportion of citizens who owned smartphones.12
The company in fact found that many of the consumers interviewed for its report think of their smartphone as a wearable device. Why, then, would anyone advocate for any other wearable? For marketers, the answer has to do with physical intimacy. The closer the instrument is to the body, the less likely people are to remove it, meaning it can provide marketers with more information. A retailer’s app installed on a person’s wearable device lets the merchant pursue a continuous relationship with that person “at home, on the go, and in store.” The device can thus interact with the entire multifaceted retailing ecosystem, from website cookies to GPS location trackers to beacons and much more, just as the smartphone does. But because wearables are attached to the body more directly, their presence will be more consistent than that of smartphones. Consequently, wearable devices will “usher in a new level of hyper-interconnected retail in which retailers ‘join the dots’ between an individual’s pre-store and in-store behaviors to deliver an enhanced customer shopping experience,” reported an article in Information Age.13 In other words, the retailer will not have to rely on the shopper to move smartphones, tablets, or PCs “from couch to the shelf,” noted PwC.14 Instead, the intimate, always-connected nature of the wearable device will virtually guarantee continuous tracking across time and space.
Champions of wearables believe that tracking via these always-on products will be far more advanced than tracking via today’s smartphones. They will inevitably be part of what has come to be known as “the internet of things”—an environment in which all such devices (for example, smartphones, clothing sensors, and smart watches) connect with each other to perform services for their owner as established by the retailer or marketer that created the service software. The information generated will travel instantly to a data cloud to be run in real time through complex predictive analytics, incorporating other already-stored information pertaining to the individual. The retailer or marketer can then detect changes in the individual’s habits and behaviors as well as in the individual’s surroundings and target that person accordingly. The PwC report notes that “this process will be made possible through passive listening elements [on the wearable] as well as active cues—what you listen to, what you ‘like’ and what you browse.”15 In this vision of the future, wearable devices will enable retailers to recognize individual shoppers immediately, so advertisements, offers, and loyalty programs will be individually tailored more precisely than ever. And wearables will have built-in payment systems and verifications that make checkout a breeze.
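To make the envisioned pipeline concrete, the following minimal Python sketch shows how a single wearable reading might be merged with stored profile data and run through a simple targeting rule, echoing the running-shoe and thirst scenarios quoted in this chapter. The event fields, thresholds, and offer names are assumptions invented for illustration; they do not describe any actual vendor's system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical event a wearable might stream to a retailer's cloud service.
# Field names and thresholds are illustrative assumptions, not a real API.
@dataclass
class WearableEvent:
    user_id: str
    heart_rate: int            # beats per minute
    miles_logged_total: float  # lifetime mileage reported by a fitness tracker
    location: str              # e.g. "aisle_4", "home", "gym"

def pick_offer(event: WearableEvent, profile: dict) -> Optional[str]:
    """Toy 'predictive analytics' step: merge the live reading with stored
    profile data and return a targeted offer, or None if nothing applies."""
    miles_on_shoes = event.miles_logged_total - profile.get("miles_at_last_shoe_purchase", 0.0)
    if miles_on_shoes > 300:                         # arbitrary wear-out threshold
        return "coupon:running_shoes"
    if event.heart_rate > 120 and event.location.startswith("aisle"):
        return "coupon:bottled_water"                # the "sensor says you're thirsty" case
    return None

# A tracker owner who has logged 350 miles since the last recorded shoe purchase.
event = WearableEvent("u123", heart_rate=88, miles_logged_total=850.0, location="aisle_4")
print(pick_offer(event, {"miles_at_last_shoe_purchase": 500.0}))  # -> coupon:running_shoes
```

In a production system the rule would be replaced by statistical models scoring many signals at once, but the flow is the same: device reading in, stored profile joined, offer out.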
In addition, many wearables advocates say, the sustained close proximity of the devices to the users will add new elements of data to shoppers’ profiles exceeding the capabilities of smartphones. PwC notes, for example, that nearly 30 percent of cell phone users turn off their phone overnight. And many of those who do keep their phones on while they sleep place them near but not actually in bed. Primary wearable devices that remain on the body both day and night transmit torrents of information on everything from sleep patterns to sports activities. The Certona consultancy notes that “nanotechnology and biometric technology is [sic] paving the way for smart clothing and turning heads in the athletic apparel industry. The data available through ‘smart’ fabrics is unbelievable—perspiration data, heart rate patterns, activity tracking and calorie monitoring. An innovative company can benefit from this data by pushing out inspiring messages to a smartwatch or Google Glass to keep users motivated toward their goals.”16 Indeed, Google’s head of retail says that wearables “give [merchants] niche data that allow [them] to speak to customers better than ever before.”17 PwC examines some of these possibilities: “Through wearable technology, brands could present relevant content to a shopper while they are considering a product—say, in a grocery store, recognizing items a consumer has placed in the grocery cart and serving up relevant recipes through augmented reality. Brands could even tap body cues to tailor messaging. Sensor revealing that you’re thirsty? Here’s a coupon for smart water. Low on vitamins? Flash this for $1 off your favorite vitamin-loaded juice product. Serotonin levels down? Grab yourself a free soda and open happiness.”18
Although the retailing establishment seems reasonably certain that people can be persuaded to allow retailers to identify them through certain wearable devices, it’s not nearly so confident that shoppers will consent to participating in the other emerging retailing tracker, the facial recognition system. This technology involves taking complex measurements of facial images and converting them into a mathematical representation called a “faceprint,” which is then compared against a faceprint database of photographs and video still images. In 2014 the New York Times reported that “if security cameras record someone at, say, a store or a casino, the system can compare the faceprint of that live image to those in the database, taking only a few seconds to run through millions of faceprints and find a match.”19
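The matching step the Times describes can be illustrated with a short sketch. The example below assumes the hard part, turning a face image into a numeric embedding vector, has already been done by some trained model; it simply compares a query vector against a small stored database using cosine similarity and an arbitrary threshold. The database contents, 128-dimension vectors, and threshold are placeholders, not any vendor's actual design.

```python
import numpy as np

# Hypothetical faceprint database mapping identities to embedding vectors.
# Real systems derive these vectors from trained face-encoding models and
# index millions of entries; random vectors stand in for them here.
FACEPRINTS = {
    "person_a": np.random.rand(128),
    "person_b": np.random.rand(128),
}

def match_faceprint(query: np.ndarray, threshold: float = 0.6):
    """Compare a query faceprint against every stored print using cosine
    similarity; return (name, score) for the best match above the threshold,
    or None if nothing clears it."""
    best_name, best_score = None, -1.0
    for name, stored in FACEPRINTS.items():
        score = float(np.dot(query, stored) /
                      (np.linalg.norm(query) * np.linalg.norm(stored)))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

# A frame from a store camera would be encoded into a vector and matched.
print(match_faceprint(np.random.rand(128)))  # result varies with the random data
```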
The various companies working with facial recognition systems use different techniques, and there is a lot of competition among those trying to make money on their face-matching strategies. Law enforcement agencies logically are major customers, as facial recognition can assist in matching video images taken of criminal suspects with mug shots. Some airports use the technology to speed security checks for employees and frequent fliers.20 On the internet, Facebook is among the firms offering face-matching software to suggest the names of people in posted photographs.
Retailers, too, have begun adopting the technology. As early as 2009 an online digital news and information publisher commented that such systems represent “perhaps the most exciting innovation” in analyzing shoppers. It listed companies that were offering “sophisticated hardware and software suites that use inexpensive cameras mounted on screens to recognize human faces.” These systems “keep detailed logs on who looks at what, for how long and when.”21 Fast-forward to 2014, when a reporter interested in “the future of shopping” noted that video surveillance and analytics tools “were everywhere” at that year’s National Retail Federation trade conference.22 Beyond its potential as a marketing tool, the technology also interests retailers for security reasons—for example, identifying convicted shoplifters who enter their stores and notifying security personnel.23
A logical step beyond using surveillance equipment to monitor shoplifting is for retailers to apply the facial recognition technology to marketing-oriented questions about the kinds of people who enter a store. In fact, Face-Six has sold software that enables cameras at dozens of malls in the United States and around the world to count and categorize shoppers based on age, gender, and race. In addition to these mall demographics surveys, Face-Six can display targeted advertising based on facial analyses, CEO Moshe Greenshpan noted.24 Similarly, food manufacturer Mondelez International was reported to be working on a system in late 2014 that combined facial recognition with digital shelf displays; a digital message board accompanying the display would tailor messages based on the demographic information obtained by the system.25 For example, at an Oreo cookie display a person whom the system identifies as a thirty-something female could see a message stating that one of the Oreo Thin cookies in the package she is holding has just thirty-five calories, while a teenage boy might see a discount voucher in the form of a barcode scan or QR code for his mobile device.
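Once the camera has produced an age and gender estimate, a display of this kind reduces to a simple rule lookup. The sketch below mirrors the Oreo scenario just described; the age cutoffs and the message copy are invented for illustration and are not drawn from the reported system.

```python
def shelf_message(est_age: int, est_gender: str) -> str:
    """Pick a display message from a camera's demographic estimate,
    mirroring the Oreo-display scenario described above."""
    if est_gender == "female" and 30 <= est_age < 40:
        return "Each Oreo Thin in this package is just 35 calories."
    if est_gender == "male" and est_age < 20:
        return "Scan this QR code on your phone for a discount."
    return "Try Oreo Thins today."  # default copy for everyone else

print(shelf_message(34, "female"))  # calorie message
print(shelf_message(16, "male"))    # discount message
```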
Because facial recognition is passive, it has an advantage over mobile phone trackers. Using beacons to determine a shopper’s age, gender, and race at the very least requires that a shopper possess a smartphone, with its Bluetooth turned on and the pertinent apps loaded. In the course of this exchange the phone owner’s identity will often be revealed, so beacons do bring a lot more information to the aisle than just basic demographics. But supporters of facial recognition systems say that anonymous demographics are only the beginning of what they can deliver on the retail level. For example, several companies say they can offer retailers the ability to detect the current emotions of the people walking through their aisles. One such company’s software “extracts at least 90,000 data points from each frame, everything from abstract patterns of light to tiny muscular movements, which are sorted by emotional categories, such as anger, disgust, joy, surprise or boredom,” reported the Wall Street Journal.26 The facial recognition company Affectiva told the Journal that it has measured seven billion emotional reactions from nearly two and a half million faces in eighty countries. Many of the algorithms incorporate the seminal yet controversial work of psychologist Paul Ekman, a pioneer in the study of emotion awareness.
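Stripped of the proprietary detail, the analysis these vendors describe amounts to scoring each video frame against a set of emotion categories and summarizing over time. Here is a toy sketch, assuming some upstream classifier (not shown) has already converted each frame's facial data points into per-emotion scores; the category list and numbers are made up for illustration.

```python
from collections import Counter
from typing import Dict, List

def dominant_emotion(frame_scores: List[Dict[str, float]]) -> str:
    """Label each frame with its highest-scoring emotion and return the
    most common label across the sequence of frames."""
    labels = [max(scores, key=scores.get) for scores in frame_scores]
    return Counter(labels).most_common(1)[0][0]

# Three frames' worth of made-up classifier output for one shopper's face.
frames = [
    {"anger": 0.10, "disgust": 0.05, "joy": 0.70, "surprise": 0.10, "boredom": 0.05},
    {"anger": 0.05, "disgust": 0.05, "joy": 0.60, "surprise": 0.20, "boredom": 0.10},
    {"anger": 0.20, "disgust": 0.10, "joy": 0.30, "surprise": 0.10, "boredom": 0.30},
]
print(dominant_emotion(frames))  # -> "joy"
```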
Another such company, Emotient, says that its software algorithms are based on analyses of an ethnically diverse group of hundreds of thousands of people participating in research for its clients via video chat. These analyses look for minuscule movements in the face. On its website in 2015 Emotient stated that “major retailers, brands, and retail providers can use Emotient’s technology to assess customer sentiment analysis at point of sale, point of entry, or in front of the shelf.”27 The company’s co-founder and chief scientist, Marian Bartlett, said that her company aims “to measure emotion and tell the store managers that someone is confused in aisle 12.” Emotient’s other co-founder, Javier Movellan, stressed anonymity. “We do not want to recognize who is watching. All we care about is what they are watching and how they feel about it.” The firm’s technology appealed to Apple, which purchased Emotient in 2016 with the aim, some suggested, of using it in a new version of its phone helper Siri to understand its smartphone owners better.28 Like Apple, retailers are beginning to see the utility of this form of data. The Wall Street Journal reported that one retailer “is starting to test software embedded in security cameras that can scan people’s faces and divine their emotions as they walk in and out of its stores.”29 The Hill noted that digital monitors in stores use small embedded cameras to detect age and gender; the ads that then appear on the monitors will be tailored to the camera’s analysis of the shopper.30 “There is anecdotal evidence that the results can be quite extraordinary,” said Tony Stockil, chief executive of retail strategy consultancy Javelin Group.31
Of course, these systems mean that customers will be substantially more visible than they are in traditional foot-traffic videos, in which faces often are blurred and images are destroyed promptly to protect customer anonymity. And although Emotient and ShopperTrak (discussed in Chapter 4) stress the anonymity of their subjects, increasing competition is fostering a movement toward identifying shopper faces. Two facial recognition companies, NEC and FaceFirst, have developed systems in which cameras scan shoppers entering a store and identify existing or potential customers deemed important to the business. NEC says its system, “VIP Identification,” is “ideally suited to hospitality environments or businesses where there is a need to identify the presence of important visitors, whether expected or unannounced.” The facial matching software “can take less than a second” to identify a shopper and alert store personnel “so that the necessary action can be taken to greet or prepare for the arrival of the VIP.”32 The FaceFirst website says its system enables retailers not only to “spot and stop the bad guys,” but to “recognize the good guys and treat them better.” At a cost as low as $15 per month per store, the retailer loads “existing photos of . . . your best customers into [the FaceFirst database]. Instantly, when a person in [the database] steps into one of your stores, you are sent an email, text, or SMS alert that includes their picture and all biographical information of the known individual so you can take immediate and appropriate action.”33 FaceFirst CEO Joel Rosenkrantz told the BBC that “someone could approach you and give you a cappuccino when you arrive, and then show you the things they think you will be interested in buying.”34 The FaceFirst website exhorts retailers to “build a database of good customers, recognize them when they come through the door, and make them feel more welcome.” To compile a database of customer photos, Rosenkrantz presented one possibility: “If a particular brand has 10,000 likes on Facebook, you could use the profile pictures of all the people who have liked it. . . . You can tell customers that if they agree to enroll [their face] with their camera, then they will be offered a discount coupon when they walk into the store, or get them to tick a box saying they agree that their picture can be used when they log on with Facebook.”35
Rosenkrantz’s suggestion that retailers obtain permission to use shoppers’ facial data by bribing them with discounts or exploiting their Facebook loyalties is consistent with the ways merchants encourage shoppers to hand over personally identifiable data for use in other tracking technologies—for example, Ulta Beauty’s loyalty program, which gathers information that store associates can use as, tablets in hand, they greet shoppers entering the store (see Chapter 5). When it comes to obtaining an individual’s permission—“opting in”—NEC’s website is rather evasive. It mentions “matching images against what is likely to be an opt-in database of individuals they deem as important.” In 2010 the retailing trade association Point of Purchase Advertising International published its “Recommended Code of Conduct for Consumer Tracking Methods,” which suggested that consumers should be allowed to opt in before their personal data are collected. In ranking various consumer tracking methods, the code listed as “high risk” activities “any camera based OTD [observed tracking data] system” as well as “any method used to personally or uniquely identify consumers, when combined with loyalty program data, or third-party marketing data.” It added, “While the federal government has recognized dangers in the realm of mobile marketing and healthcare and has subsequently passed laws to protect consumers, no such laws exist for data collection in retail settings.”36
As of 2016, laws in this area were still lacking despite the increasing use of facial recognition technology. In fact, there were no federal laws that expressly regulated facial recognition software. (Only two states, Illinois and Texas, had laws regulating the collection and use of biometric data. Illinois specifically required firms to get permission before collecting and retaining a “scan of . . . face geometry.”)37 In 2012 Mark Eichorn of the Federal Trade Commission’s Division of Privacy and Identity Protection stated that the FTC “would be very concerned about the use of cameras to identify previously anonymous people.”38 A 2014 attempt to nail down norms for the facial recognition activities of U.S. businesses was unsuccessful. The initiative was part of a larger project, encouraged by the Obama administration, to address consumer privacy and brought together nine privacy groups, leading industry representatives (including from Facebook), and retailer associations to negotiate a voluntary code of conduct. Coordinated by the Commerce Department’s National Telecommunications and Information Administration (NTIA), the effort was doomed because the positions of those involved remained far apart. Chris Calabrese of the Center for Democracy and Technology was among those who argued that those using facial recognition should seek the consent of people before scanning their faces. He told CBS News that “face recognition allows secret tracking so any time you’re in public whether you are attending a protest rally or visiting your doctor or entering a church or a bar it could allow you to be identified and your movements tracked.” He emphasized that “the individual must be able to choose. If they can’t, lots of entities, whether they are companies or governments relying on company databases, can use this technology to spy on people. They will never know, and they will never be able to control it.”39
On the other side, facial recognition supporters contended that the demand for consent made little sense when the software is used to track criminals. The privacy groups offered a “narrow exemption” for security uses, but the industry turned it down, Calabrese said. Carl Szabo of NetChoice, a digital commerce trade association, said he and his industry colleagues would approve self-regulation requiring a merchant to post signs when using facial recognition technology. “If it turns out consumers love it, they will embrace it. If they hate it, they will walk away and that store will stop doing it.” He acknowledged “the feeling that people might get that someone is spying on them and invading their privacy. But . . . we haven’t encountered any misuse or abuse of the data. Nobody is selling the data to third parties.” He added, “I don’t think we are there yet for any calls for regulation. Theoretical fears make for bad laws. We also don’t want to strangle this new technology.”40
After a year and a half of negotiations, the retailers remained staunchly against the idea of seeking consent to use facial recognition.41 Frustrated, privacy advocates abandoned their efforts, even as NTIA representatives urged them to continue. About two months later the Government Accountability Office (GAO) released a troubling report on facial recognition technologies. It noted that it could find no data pertaining to how much these systems were being used by American businesses.42 And it sided with those who worried that the activities had the potential to cause lasting harm to Americans. “The privacy issues stakeholders have raised about facial recognition and other biometric technologies serve as yet another example of the need to adapt federal privacy law to reflect new technologies,” the report stated.43 It was a remarkably straightforward call for regulation at a time when the industry was digging in against any effort to do so. In the hypercompetitive physical store environment, merchants wanted to create tracking systems as effective as—or even more effective than—those on the internet.
Australia-based retailing futurist Chris Riddell took many of these developments to their logical extension in a 2015 blog post titled “The Future of Retail—Emotional Analytics”:
Imagine for a moment that, by using a combination of Bluetooth proximity and FRS [facial recognition systems], you could first of all identify your customers when they entered your store. Then you had the opportunity to identify the emotions they felt as they walked through your store. Of course, some customers keep a “poker face” when they shop, but new software from Californian company, Emotient, can register “micro-expressions”—the tiny flickers of emotion that show on people’s faces before they even know they have registered an emotion or are able to control it. It can even tell you if they are smiling, but not with their eyes.
How useful could this information be when it comes to understanding your customer and creating a unique, personalised experience for them? This kind of granular data is priceless. You can already analyse a customer’s online journey through your company website—discovering where they have come from, what they look at, how long they stay on the site, what grabs their attention and what motivates them to action. What if you could apply those same principles to their real-time journey through your real-world store?44
Some future-oriented observers of retailing believe that facial recognition will greatly augment existing information-gathering technology at checkout in the not-too-distant future.45 A Russian technology firm, Synqera, conducted facial recognition trials at checkout in late 2013 for a Russian retail chain. The retailer hoped the activity could supplement loyalty-card information for those customers who have a card, as well as stand in for a loyalty card for those who don’t. “If the customer has no loyalty card or doesn’t want to identify himself with a loyalty card, then the system recognizes his general mood [by the presence or absence of a smile], gender and age in order to use this data for targeting of the content,” said Ekaterina Savchenko, Synqera’s head of international marketing. “If the customer identified himself with a loyalty card, the system double checks the customer age and gender with data sourced through facial recognition. If the Synqera system sees that the loyalty card data differs from the camera data, then it evaluates the correctness of the camera data (probability defined for the particular user’s gender and age) and, if it is high, gives it priority.”46
Although theft may be the reason when facial recognition software doesn’t match the loyalty card being presented, more likely the card has been lent to a friend or family member. In these situations the facial recognition system helps the retailer to parse its data about the name on the card. The checkout systems and the back-end analytics “will then learn that these few users are linked to one card and base the analysis and relevant content on the facial recognition data,” Savchenko said.47 Retail information technology writer Evan Schuman noted that “the biometrics help make sure that the message or promotion being displayed [on a video display at checkout] is the right one.”48 Savchenko said that her firm’s software can evaluate whether these messages are successful for each person at checkout. “Users’ smiles are used for the evaluation of the content effectiveness,” she said. And users who smile are rewarded, she added: “If the user smiles, he gets a virtual achievement badge or extra loyalty bonuses to his card.”49
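The reconciliation logic Savchenko describes can be sketched in a few lines. The following is an illustrative reading of that description, not Synqera's actual code; the field names, age bands, and confidence threshold are assumptions.

```python
from typing import Optional, Tuple

def resolve_demographics(card: Optional[dict], camera: dict,
                         confidence_floor: float = 0.8) -> Tuple[str, str]:
    """Reconcile loyalty-card demographics with the camera's estimate, along
    the lines Savchenko describes: with no card, use the camera; if card and
    camera agree, use the card; if they disagree and the camera estimate is
    confident enough, prefer the camera (the card may have been lent out)."""
    if card is None:
        return camera["gender"], camera["age_band"]
    if (card["gender"], card["age_band"]) == (camera["gender"], camera["age_band"]):
        return card["gender"], card["age_band"]
    if camera["confidence"] >= confidence_floor:
        return camera["gender"], camera["age_band"]
    return card["gender"], card["age_band"]

# Example: the cardholder's profile says 30-39 female, but the camera is fairly
# sure it is looking at a teenage male, so the camera estimate wins.
print(resolve_demographics(
    {"gender": "female", "age_band": "30-39"},
    {"gender": "male", "age_band": "13-19", "confidence": 0.9},
))  # -> ('male', '13-19')
```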
Schuman said that facial recognition systems at checkout can be used either for identifying individual customers by name or, in maintaining anonymity, “merely capturing the facial data points and noting what purchases the person attached to that face makes. Then, when the cameras catch that same face again (say, perhaps four days later), [the system] will remember the prior purchases.” With either approach, Schuman said, “such activities need not end with the same channel where they began. Once a shopper is identified in-store and is matched with a CRM [customer relationship management] profile—or they are identified anonymously in-store and a purchase profile of this unknown-person-with-this-specific-face is slowly built—that information can theoretically be married to data from that person’s desktop-shopping E-Commerce efforts or their tablet/smartphone’s M-Commerce efforts.”50
Finally, Schuman advanced the possibility that the facial recognition process might be installed outside the physical store as well as inside: “What if [a retail] chain pushes some attractive incentives to get lots of customers and prospects to download its free mobile app? And buried in the terms and conditions is the right for the app to monitor images?” All these recognition approaches would better enable merchants to identify customers entering their store and offer them customized service and deals based on their shopping histories and the retailer’s calculation of their long-term value to the business. And while accuracy remains a problem today, Schuman said, “look for this technology to get an order of magnitude more accurate over the next couple of years.” As the technology continues to develop, he noted, “the privacy—and associated shopper backlash—risks are obvious.” Nevertheless, he contended, “shoppers (especially younger shoppers) seem to have developed an almost infinite capacity for tolerating such efforts. Make the incentive strong enough—and use the data in subtle enough ways so that you’re not forcing the customer to know how far you’ve gone—and privacy will be a trivial concern. Not saying that it should be a trivial concern, but merely our belief that it will be.”51 Indeed, over ten years ago retail consultant Karl Bjornson observed that the success of facial recognition systems could well depend on whether the public could be convinced to accept recognition technology as a way to secure their identity and enable them to receive special offers.52
Such predictions equally apply to wearable devices. By the mid-2010s retailing consultants were crafting rhetoric to justify data flows from body-hugging technologies. PricewaterhouseCoopers concluded, for example, that “done right, wearable advertising can promise more personal and relevant messaging to consumers—and bring in big business for the companies that leverage this strategically.”53 Similarly, an ad agency CEO proposed continual reinforcement in ways that fit the always-attached nature of the devices: “customizable shopping paths for each person, while incentivizing them with real-time offers and deals along that path.”54 In fact, in an online survey PwC found that millennials more than any other age group would welcome cash, loyalty privileges, and gaming as rewards for adopting wearable devices. The survey found that 37 percent of this age group would be “strongly motivated” to use a wearable device if it “has apps/features that reward those who frequently use it with loyalty points”; 52 percent would be “strongly motivated” if the device “has apps/features that reward those who frequently use it with monetary rewards”; and 64 percent would be “motivated” (though not “strongly motivated”) if the device “had some type of gaming component to it.”55
But just as with the technologies discussed in previous chapters, shoppers who encounter facial recognition systems or who sport wearable devices as they offer up their personal data will likewise not be treated equally. The best customers in the best niches will continue to get the best deals, while, as Bjornson noted, “people not in the right segments will be left behind. They will not have as rewarding an experience.”56 As I’ve stated throughout, if a primary lesson of the hidden curriculum is that people should get used to giving out their data, another lesson will be that in this great transformation some shoppers will be winners, some will be losers, some will be a bit of both, and many will worry where they stand and why. It is not hard to see that a marketplace centered on individuals’ private data and their personalized mobile devices will upend the understanding of shopping that has crystallized over the past century and a half.
Without a doubt, many of the new information technologies created to help people shop online and in physical stores have marvelous capabilities. The ability to carry a device around a store to check shelf prices against those of other stores gives shoppers enormous knowledge—and potential leverage in the purchases they ultimately make. Tailoring a discount coupon to shoppers because of where they happen to be standing in the store at that moment can be a smart way for a retailer to ingratiate itself. Increasingly, though, the data that retailers collect on their own and that they purchase from marketers will be far more personal and will lead to increasingly individualized products, postings, and prices aimed at shoppers. While marketing and retailing executives worry that tailored prices might anger shoppers, they nevertheless agree that discriminatory pricing will become common over time—they consider it an efficient approach to selling in an era of enormous competition. Three of the executives I interviewed argued that offering personalized deals on mobile devices likely would not cause public indignation over undemocratic pricing. These individualized discount coupons, along with tailored product presentations and messages, will be sent just to a shopper’s smartphone, tablet, or other privately owned device, so no one else will see them. Shoppers may compare notes about these deals in casual conversation with friends and co-workers, so they may be aware that prices seem to differ for each person, but they will have no clue as to the reasons for the variations, and stores won’t provide direct answers.
As I mentioned earlier, the airlines have clearly already paved the way for such discrimination. To begin with, the baroque rules of airline loyalty programs reward those considered most loyal with the best seats, first crack at coveted carry-on space, and priority boarding, while others are offered less-desirable seats (though they can shell out more money for better ones), may have to scramble to find a place to stow their carry-on items, and may have to wait longer during the boarding process. There is also the matter of ticket cost, as passengers sitting next to one another on a flight may be paying wildly different fares, and for reasons that are often opaque. There is no record of open rebellion by passengers as a result of discriminatory pricing, seating, storage space, or boarding, and this can be chalked up to the personalized nature of the process: no individual passenger knows what his or her fellow passengers paid for their ticket, or why they received a specific seat or boarding priority. Passengers likely won’t discuss these issues among themselves as they sit at the gate waiting to board, partly because of shyness, partly because they don’t want to know whether they won or lost in the particular transaction, and partly because they may feel that, whatever the inequalities, those disparities are a legitimate part of the social system. The airlines repeat this message in virtually all communications with their customers: the more you spend and the greater your loyalty, the more you’ll get back. In their discriminatory practices they make it clear that fliers who spend less for flights or who fly infrequently aren’t as loyal as those who spend and fly more and therefore deserve to be treated less well. You may protest that you simply don’t fly frequently for business (a characteristic of the most “loyal” passengers), but to no avail. The airlines have carved out this world of air travel, and we have to live in it.
This type of discrimination-centered world is now evolving in retailing, and it has broader ramifications. In this hypercompetitive environment stores are doing their utmost to steer shoppers to accept that the data-based customer relationship is a natural activity, a process that is taken for granted and a useful part of contemporary life. By doing the right things and having the right purchasing patterns, you will be rewarded with the best product suggestions and the best prices online and in physical stores. Part of “doing the right things” means being a “loyal” customer of the retailers you visit—though being loyal may be more complicated than it sounds. Doing the right thing also means allowing the retailer to gather information about you—for example, by logging in when you visit the merchant’s website, downloading (or in some cases just streaming) the retailer’s app and allowing location tracking, and enabling Bluetooth so the store can send you coupons as you move through the aisles. It may often mean having the assets and lifestyle that will lead a retailer’s computers to conclude that you are a shopper with a high lifetime value to the company.57
We are only at the beginning of this retailing transformation. Many of the data collection and tracking technologies that will become standard likely have yet to be invented, and it will take some time before every person experiences these activities on a daily basis. But the trajectory is evident: the technologies of personalization and the social discrimination issues linked to them will become ever more central to the ways people shop for goods and services. And the shifting class structure in American society will play a major part, as the changing retail framework takes shape against a backdrop of widening income inequality. Beginning in the 1970s income growth for households in the lower and middle classes slowed sharply, while incomes at the top continued to grow strongly.58 An article sponsored by the University of Southern California’s Center on Economic Policy Research concluded that “Americans today live in a starkly unequal society. Inequality is greater now than it has been at any time in the last century, and the gaps in wages, income, and wealth are wider here than they are in any other democratic and developed economy.” The article singled out the growing share of recent income gains going to the very high earners (the 1 percent or .01 percent of the population), the emergence of lavishly compensated “supermanagers,” and a concentration of wealth that fell a little during the first half of the twentieth century but that has grown steadily since.59
One might wonder how many of today’s major retailers can survive if such large portions of the population have less real income than in the past. One major way is credit. U.S. middle and lower-middle classes have had to pursue the American dream by borrowing to the hilt. Though the Great Recession of 2008 wrecked many of those ambitions and plunged many people into disastrous debt, there is little evidence borrowing for buying is going away; it simply has become a necessary part of the U.S. economic system. Another way is to target the very wealthy. During the past couple of decades, in fact, retailers and brands have succeeded in the United States when they have pursued both the luxury and the low-price market; the middle is increasingly being hollowed out. There are, in fact, large numbers of people who can afford relatively expensive goods, even if they make up a small percentage of the population overall. This group also includes comparatively wealthy international shoppers, who have increasing access to U.S. merchants through the internet and via stores the retailers have opened abroad. In the process of adapting to these new economic realities merchants have become all the more resolute in their desire to use data and technologies to identify customers of value and pursue them across as many channels as possible. It is increasingly a matter of retailing life and death to fix on and cultivate the right customers while shucking off shoppers who offer little benefit.
The reshaping of retailing will likely bring data-driven anxieties to both buyers and sellers. Stores will become stress centers, as technologies aiding shoppers duel with those aiding retailers, resulting in often conflicting conclusions about price and preferential treatment. Sellers will have to change prices constantly, introduce new products rapidly, and continually adopt new ways to define, identify, track, and reevaluate customers, as well as satisfy those they define as winners. Beyond the traditional tension of assessing product quality and cost, shoppers will have to deal with the uncertainty surrounding the personal information that merchants have gathered about them, the scores they are assigned as a result, and the effect that these activities will have on their shopping experience.
This new direction in retail may be healthy for some stores’ bottom lines, but it is toxic for people’s sense of democratic possibilities in society. The data-driven stratification of customers encourages abandonment of the historical ideal of egalitarian treatment in the American marketplace. As these forms of discrimination accelerate, the retail system increasingly encourages shoppers to accept a coarsening of relations in their dealings with merchants. The old saw that every person’s dollar is worth the same is steadily losing its hold. Instead, the value of one’s dollar depends on the person’s prior spending, demographics, lifestyle, and willingness to be an open book for the retailer from whom good treatment and good deals are desired. Certainly the traditional discriminatory categories of race, income, and age will enter these calculations, but proving prejudicial behaviors toward such groups as black, low-income, or older Americans will become far more difficult because they will be masked by complex algorithms. To protect themselves from accusations of bigotry, retailers will probably not even put race into the equation. But it will certainly be represented in a host of hidden factors such as neighborhood, income, education, and even health. And rest assured that shoppers will respond by doing all they can to foster and perpetuate a favorable profile, adjusting their behaviors to those they think will lead to the best treatment from their favorite retailers. Look, too, for whispered social discussions and possibly blogs that try to decipher specific stores’ criteria when they pick winners and losers at any given time for any given product.
Jeff Malmad’s aggressively utopian declaration that opened this chapter reflects the urgent need that businesses, including retailers, have to gather more and more information on individual consumers. The days when merchants and their advertising agencies targeted broad population segments such as men or college students—or even entire cities—as potential customers are beginning to fade. We are moving rather quickly into an era in which retailers are focused on exploiting what they know about specific individuals to encourage their loyalty and to sell them products. Retailers believe that the more information they secure, the more likely they can identify the hot buttons that computer formulas say will ignite an individual’s interest in a particular product. This involves compiling and sifting through huge amounts of data about individuals’ shopping habits along with other material (from third-party data firms or from “listening” services) that at first glance doesn’t appear directly related to shopping, such as Twitter statements, Facebook comments, blog posts about various products, the names of social media friends, and hobbies. Many merchants agree with IBM that the key to long-term customer profitability is “the process of identifying all of the information for a customer (member data) throughout the enterprise, linking it together for a 360-degree view of a member and maintaining that view going forward.”60
Of particular concern to many is that much of the data-gathering activity described throughout this book occurs under the hood. And Malmad’s statement that “you’ll know everything about yourself and your loved ones if you opt in” inverts what is actually happening: retailers and data providers are trying to learn everything about us and our loved ones whether we know it or not, and if we opt in, an act that is often encouraged by loyalty programs or other incentives, they will learn far more about us than if we don’t. Yet Malmad’s comments do reflect an intriguing tension that lies at the core of how retailers generally are approaching their customers in the twenty-first century. Even as merchants herald this era as the age of customer power, a result of the hypercompetition brought on by internet trading, they push back against that power via an increasing number of technologies aimed at constraining and channeling the decisions of their customers in ways that benefit the marketers.
This book has charted approaches that the largest department store, supermarket, and big-box chains have adopted to corral desirable customers while minimizing the negative influence of customers who bring them little, if any, profits. Whether these techniques work for the retailers’ bottom lines is beside the point; what matters are the larger implications of the emerging retailing institution in which shoppers accept and possibly normalize surveillance and social discrimination. Throughout the book I’ve noted how emerging technologies and retail marketers’ discussions about them turn into taken-for-granted elements of people’s worlds. Academic and futurist visions about the ways retailers need to use the new digital technologies to survive on- and off-line are translated into practical advice via consultant-generated systems such as customer relationship management, digital intelligence, loyalty systems, and listening platforms. These frameworks, in turn, encourage a continual stream of technologies that put into everyday practice the surveillance, predictive-analytic, and personalization techniques that translate the elite theory into everyday routines. While foes of surveillance and data aggregation in areas such as health, employment, and loan discrimination continue their fight before government agencies, the retailing establishment has built quite similar activities into its very fabric with few complaints. The reason: buying and selling everyday products doesn’t seem problematic even when the new tools are used.
And yet, as I’ve shown, the practice of social discrimination is very much at the core of the transformation of everyday retailing today. The norms of this new retailing era are taking hold: it is rather commonplace that, online and on mobile devices, cookies and their variants form the building blocks of a data-rich world that personalizes offers on the fly. And now online personalization is migrating to the seemingly old-fashioned world of physical retailing—the arena where shoppers still do most of their buying by far. Internet-like tracking and analytical abilities are coming to the aisles and to the checkouts. Tying into the always-on smartphone carried by about 70 percent of Americans, merchants, brand manufacturers, and their agents are exploiting cellular signals, Bluetooth, Wi-Fi, sound waves, light waves, and more to track customers and send them product messages before, during, and after their store visits. The opportunity to receive personally relevant offers is one lure that companies use to entice customers to join the data-collection bandwagon. The rewards of protection, privilege, and loyalty-related games are others.
Armed with the information they gather about their shoppers in this way, retailers use complex algorithms to treat people increasingly differently. As they go about their day, including moving through particular stores, people are receiving different messages, even different prices, based on profiles retailers have created about them—profiles that the individuals probably don’t know exist and, if they could read them, might not even agree with. While these sorts of discrimination currently take place to a greater degree online than in the physical retailing space, it’s clear that the brick-and-mortar establishments are moving inexorably in that direction. Concerns about hypercompetition and notions of efficiency are behind the push, and these changes are overturning more than a century of democratic treatment and posted prices in department stores, groceries, and discount chains. Yet neither the public nor the government has sufficiently grasped the immensity of the changes to be able to ask what they will do to our society.
With these disturbing trends in mind, I hope to encourage a discussion about what can be done to soften, if not eliminate, downsides of this new environment. The data-gathering and data-exploitation activities I’ve described in this book raise weighty questions. Does the American public really want a society in which marketers are free to track, profile, and target them virtually anywhere they go, and to share information about them with other marketers in ways they do not understand? Are people comfortable with a society that reverts to the discriminatory elements of the peddler era in which selling was based on profiling each customer? If not, how should this new America create room for information respect—the idea that people deserve to know about and manage the ways society’s institutions use data about what they say and do?
Marketing, retailing, database, and technology executives typically ignore these questions. They acknowledge the discomfort that some Americans feel regarding the data that firms gather about them.61 But they claim Americans understand full well they are involved in a trade-off of their data for relevant messages and offers. Marketers depict an informed public that understands the opportunities and costs of giving up its data and makes the positive decision to do so. A 2014 Yahoo! report, for example, concluded that when Americans are online they “demonstrate a willingness to share information, as more consumers begin to recognize the value and self-benefit of allowing advertisers to use their data in the right way.”62 In its argument to policy makers and the media the industry uses this image of a powerful consumer as proof that Americans accept widespread tracking of their backgrounds, behaviors, and lifestyles across devices, even though surveys repeatedly show they object to these activities.
Recall the privacy paradox described earlier in the book, which noted that people have inconsistent and contradictory impulses and opinions when it comes to safeguarding their own private information.63 Many marketers have embraced this seeming contradiction as a way to argue in favor of wide-ranging data collection. The McCann Worldwide advertising network’s Truth Central project derived this conclusion from “a global research study surveying over 10,000 people in eleven countries,” including the United States. Neither breaking down the results by country nor detailing the survey method, McCann stated that while “71% worry about the amount online stores know about them, 65% are willing to share their data as long as they understand the benefits for them.”64 The editor for mCommerceDaily interpreted the findings to mean that “the tracking of consumers all comes down to the trade-off in value.”65 Along the same lines, the president and chief strategy officer of Mobiquity, a mobile-device-strategy consultancy, wrote in 2012 that “the average person is more than willing to share their information with companies if these organizations see the overall gain for end-users as a goal, not just for themselves.”66 A May 2014 report by Yahoo! Advertising followed this logic in interpreting its survey of “6,000 respondents ages 13–64, a representative sample of the U.S. online population.” It highlighted the finding that “roughly two-thirds of consumers find it acceptable or are neutral to marketers using online behavior or information to craft better ads,” reiterating the conclusion quoted above that digitally connected Americans “demonstrate a willingness to share information” as they come to “recognize the value and self-benefit of allowing advertisers to use their data in the right way.”67
Are marketers correct in their assertion that Americans believe in trade-offs and are willing to give up personal data about themselves in exchange for discounts and other blandishments? This is a difficult question to answer conclusively. The specific questions asked, and the methods used, in these studies often are not included with the survey results, so it’s hard to evaluate the findings carefully. Sometimes the respondents are volunteers and their views have no statistical relationship to the population as a whole. A major problem with the accuracy of this research is that the participants are typically recruited online, and it is quite possible that, unlike the population at large, volunteers willing to fill out online surveys are comfortable with giving up personal data. And finally, the survey results are inconsistent, and even marketing executives are sometimes loath to fully champion the trade-off view. An Accenture executive interpreted his company’s survey to mean that “if retailers approach and market personalization as a value exchange, and are transparent in how the data will be used, consumers will likely be more willing to engage and trade their personal data.”68 The Bain consultancy was even more cautious about its results, saying that “customers’ trust cannot be bought by companies offering compensation in exchange for selling or sharing personal data.” So what do Americans believe? And if we can determine their views conclusively, what, if anything, should we do about it?
The Annenberg National Internet Surveys, conducted seven times between 1999 and 2015, suggest useful and somewhat provocative answers to these questions. I headed the team at the University of Pennsylvania’s Annenberg School for Communication that created each survey. Major polling firms—Roper, ICR, and Princeton Survey Research Associates International—asked the survey questions during twenty-minute interviews with random samples of (typically) fifteen hundred Americans, ages eighteen and older. The survey firms contacted prospective respondents by phone—landlines only in the early studies and a combination of cell phones and landlines in recent years to account for the rise of mobile-phone-only households. In our surveys we report the specific questions along with the results (all the questions can be viewed online).69
The basic findings are clear and, in some cases, alarming.
1. Most people know they are being tracked but don’t understand what happens behind the screen. They don’t understand data mining, that is, the way companies plumb data about them from various sources and merge the information to arrive at broad conclusions about them. They also don’t understand the purpose of a privacy policy, which is supposed to outline these activities. In four of the surveys (conducted between 2005 and 2015), we asked the participants a slight variation of this question: “True or False: If a website has a privacy policy, it means the site won’t share information about you with other sites without your permission.” A clear majority of the respondents in all of the surveys believed that this was a true statement, and a substantial percentage beyond that simply said they didn’t know. In actuality, the answer is false.
2. Most people don’t know the rules of the new digital marketplace, and they think the government protects them more than it does. The surveys found that more than half of Americans:
• do not know that a pharmacy does not legally need an individual’s permission to sell information to other parties regarding the over-the-counter drugs that the person buys;
• do not know it is legal for an online store to charge different prices to different people buying the identical product at the same time of day;
• do not know it is legal for an offline or physical store to charge different prices to different people buying the identical product at the same time of day;
• do not know that price-comparison sites such as the travel websites Expedia or Orbitz are not legally required to include the lowest travel prices.
3. Most people don’t think personalization by marketers or retailers is a good thing, especially once they know the techniques used for obtaining their personal data. The 2009 survey conclusively found that, contrary to the claims of many marketers, most adult Americans (approximately 66 percent) do not want to receive tailored advertisements. In contrast, slightly less than half said they would welcome tailored discounts. However, once this group understood several common ways that data are mined to produce tailored discounts—tracking the website the person has just visited, tracking the person on other websites, and tracking the person in stores—much higher percentages said they do not want such messages.
4. Americans directly admit feeling vulnerable in a retail environment in which companies collect personal information. The 2005 survey found that more than 70 percent disagreed with the statement “What companies know about me won’t hurt me,” disagreed with the statement “Privacy policies are easy to understand,” and agreed with the statement “I am nervous about websites having information about me.” This vulnerable feeling hadn’t changed by 2015. We found that 72 percent of Americans rejected the idea that “what companies know about me from my behavior online cannot hurt me.”
5. Most people philosophically do not agree with the idea of trade-offs. In 2015 we described to a random cross-section of American adults some everyday circumstances in which marketers collect people’s data, and we framed those data-collection activities, together with the personalized discounts offered in return, as trade-offs. We found that far more than half felt those trade-offs were unfair, disagreeing with the statements
• “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”
• “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
• “It’s okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
But if people disagree with trade-offs on principle, why do they accept them? This apparent contradiction surfaced in our own survey. We presented the people interviewed with a real-life trade-off situation, asking whether they would take discounts in exchange for allowing their supermarket to collect information about their grocery purchases. We found a much higher percentage said yes to the trade-off than agreed with the three statements above. In examining our findings carefully, we concluded that the reason for this contradiction is resignation, as explained in finding 6:
6. Contrary to the claim that a majority of Americans consent to discounts because the commercial benefits are worth the costs, we found that Americans do so because they are resigned to the inevitability of surveillance and the power of marketers to harvest their data. The meaning of resignation we intend here is, to quote a Google dictionary entry, “the acceptance of something as undesirable but inevitable.”70 And, in fact, our study revealed that 58 percent of Americans agree with the statement “I want to have control over what marketers can learn about me online” while at the same time agreeing with the statement “I’ve come to accept that I have little control over what marketers can learn about me online.” Rather than feeling able to make choices, Americans believe it is futile to try to manage what companies can learn about them.
Watching what shoppers do doesn’t reveal their attitude. We found that people who believe in trade-offs are quite likely to accept supermarket discounts, but we couldn’t predict whether people who are resigned to marketers’ data-gathering activities would accept or reject the discounts. Marketers want us to see all those who accept discounts as rational believers in trade-offs. But when we looked at those surveyed who agreed to give up their data for supermarket discounts, we found that well over half of those who took the deal did so out of resignation rather than because they believe in trade-offs.
Ironically, and contrary to many claims about why people give up their personal information, those who are most aware of these marketing practices are more likely to be resigned. Moreover, the more knowledge resigned individuals have of marketing practices, the more likely they are to accept supermarket discounts even when the supermarket collects increasingly personal information. When it comes to protecting personal data, our survey found that those with the knowledge to accurately calculate the costs and benefits of maintaining privacy are likely to consider their efforts to do so futile. Though neither age nor gender reflects any difference in the proportion of people who are resigned, statistically significant differences do show up regarding education and race: we found a higher percentage of resignation among the white population compared with nonwhites, and among more educated people compared with respondents who have a high school education or less. At the same time, one-half or more of the individuals in each of those categories of respondents are resigned to personal data-gathering by marketers and retailers.
7. Young adults generally aren’t much different from older Americans when it comes to privacy issues. Media reports teem with stories of young people posting salacious photos online, writing on social networking sites about alcohol-fueled misdeeds, and publicizing other ill-considered escapades that may haunt them in the future. Commentators interpret these anecdotes as representative of a generation-wide shift in attitude toward information privacy. They claim that young people are less concerned than older people when it comes to maintaining privacy. But our research with the University of California, Berkeley, School of Law found that expressed attitudes toward privacy among American young adults (age eighteen to twenty-four) are not very different from those of older adults. In fact, large percentages of young adults are in harmony with older Americans with respect to both personal privacy and privacy regulations. For example, a large majority of young adults:
• have refused to give information to a business in cases where they felt it was too personal or not necessary;
• believe anyone who uploads a photo of them to the internet should get their permission first, even if the photo was taken in public;
• believe a law should be passed to give people the right to know all the information websites have compiled on them; and
• believe a law should be passed requiring websites to delete all stored information about an individual.
In view of these findings, why would so many young adults conduct themselves on social networks and elsewhere online in ways that would seem to forfeit very private information to all comers? One answer is that young adults tend to approach risk-related cost-benefit calculations differently than individuals older than twenty-four do. An important part of the picture, though, must surely be our finding that higher proportions of eighteen- to twenty-four-year-olds believe incorrectly that the law provides more privacy protection online and offline than it actually does. This lack of knowledge in a tempting environment, rather than a cavalier lack of concern regarding privacy, may be an important reason so many of them engage with the digital world seemingly unconcerned.
The Annenberg surveys consistently reflect an American population that is struggling to cope with a media and marketing world they don’t understand, one they worry can harm them. Shopping is and will continue to be central to this new world. As U.S. society moves further into the twenty-first century, personalized deals, prices, and other tailored offers will undoubtedly be increasingly troubling to Americans who believe they are on the losing end of often-hidden consumer profiles and targeting formulas. Americans will also surely sense that the changes are causing the democratized marketplace to disappear and will feel powerless to do anything about them.
The issues raised in this book are relevant far beyond the United States. Earlier I discussed a popular Australian marketing consultant who carries his data-personalization boosterism throughout Asia. I also noted Synqera’s work on facial recognition in Russian stores. And in 2013 the chief information officer of Tesco, one of the world’s largest retailers, linked the key strategic question of how to bring the internet into the stores to three broad forces that would guide his firm’s information technology investments: cloud computing, personalization, and the “seamless, blended world of physical and digital.” These activities represent the tip of the iceberg of retailing developments involving data collection, tracking, and personalization in-store and via various digital channels that are proceeding at different places around the world.71 A 2015 Nielsen global report on “The Future of Grocery” begins by describing “a grocery store where you can receive personal recommendations and offers the moment you step in the store, where checkout takes seconds and you can pay for groceries without ever taking out your wallet.” Might it sound “far-fetched”? the report asks. Not at all, it answers: “It’s closer than you think.” The report acknowledges that “today, only a small percentage of consumers around the world is already using such features.” But, it adds, this new kind of store is spreading. Moreover, based on a global online survey, the company concludes that “willingness to use them in the future is high.” It contends that millennials especially are champing at the bit.72
The Nielsen report says nothing about surveillance or privacy. Marketing and retailing executives around the world typically have attempted to minimize concerns about their use of shopper data. Our U.S.-focused survey challenges marketers’ usual trade-off defense by showing quite clearly that most Americans do not accept the fairness of getting discounts in exchange for their personal data. Yet many in the U.S. marketing industry are making the case that privacy in most areas of shopping doesn’t merit concern. They point to controversies about data mining and predictive analytics swirling around the National Security Agency’s work as well as around businesses that ferret out targets for shady payday loans and those that dredge up online relationships suggesting a job applicant shouldn’t be hired. These intrusions, they insist, are quite different from tracking individuals for the purpose of ordinary advertising and shopping. Consider the comments of Scott Howe, president and CEO of Acxiom:
Some believe that all data is of equal importance and therefore must be controlled in exactly the same way. This is just not the case. Data regarding personal information that pertains to employment or insurability decisions, or that relates to sensitive health-related issues or confidential matters, deserves much different treatment than data that would indicate that I am a sports fan. Too often, legislation and regulation seek to paint issues in broad strokes, often with unforeseen consequences to liberties and innovation, despite being born of the best intentions.73
Apart from ignoring the issue of respecting private information—Americans want to be able to manage their personal data in commerce—this view pushes aside the discriminatory influence that seemingly benign pieces of data can potentially have on an individual’s opportunities in the public sphere. Howe surely must have considered that, in the not-too-distant future, companies may well take such actions as merging the category “sports fan” with dozens of other characteristics about individuals—their eating habits (at the ballpark, for example), their income, the number and age of their children, the value of their house, their vacation habits, their geographic whereabouts, their clothes-shopping habits, and their media-use patterns—to create profiles that dub them winners or losers regarding certain areas of shopping and that determine the advertisements and discounts that are directed to them. For reasons they don’t understand, people may see patterns of discounts that suggest they are being placed into certain lifestyle segments and are consequently receiving treatment they consider inferior to that of their neighbors and co-workers. The individuals may vaguely understand that these profiles are the cause, and they may try to change their behavior to get better deals, often without success, all the while wondering why “the system”—the opaque predictive analytics regimes that they know are tracking their lives but to which they have no access—is treating them that way.
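To make the mechanics concrete, the following minimal sketch shows the kind of category-merging and scoring described here; the attribute names, weights, and segment labels are invented for illustration and do not represent Acxiom’s or any real company’s data model.

```python
# Hypothetical illustration of attribute merging and scoring; the field names,
# weights, and segment labels are invented for this sketch and do not describe
# any real company's data model.

from dataclasses import dataclass

@dataclass
class ShopperProfile:
    is_sports_fan: bool
    ballpark_spend_per_year: float   # inferred eating/spending habits
    household_income: float
    home_value: float
    children_under_18: int

def desirability_score(p: ShopperProfile) -> float:
    """Combine seemingly benign attributes into a single opaque score."""
    score = 0.0
    score += 10 if p.is_sports_fan else 0
    score += min(p.ballpark_spend_per_year / 100, 20)
    score += min(p.household_income / 10_000, 40)
    score += min(p.home_value / 50_000, 20)
    score -= 5 * p.children_under_18          # arbitrary weighting choices
    return score

def assign_segment(score: float) -> str:
    """Dub the shopper a 'winner' or 'loser' for offer targeting."""
    return "premium-offers" if score >= 50 else "discount-withheld"

profile = ShopperProfile(True, 600.0, 48_000, 180_000, 3)
print(assign_segment(desirability_score(profile)))   # -> "discount-withheld"
```

No single field in the sketch looks sensitive on its own, yet the combined score quietly determines which offers a person ever sees.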
Arguments that play down the profound discriminatory potential of this sort of data-driven consumer commerce have the same rhetorical aims as those that insist Americans want to exchange data for benefits: they give policy makers false justifications for enabling the collection and use of all kinds of consumer data in ways that the public often finds objectionable. Yet when three of every five Americans are resigned to feelings of powerlessness regarding their data relationships with marketers, when two of every five are both resigned to relinquishing the control of their data to marketers and worried that the loss of control can hurt them, and when people who are well aware of these data-gathering activities are actually more rather than less likely to be resigned, we have a problem. In a central area of the public sphere there exist substantial tensions that cannot be swept away by executives’ assertions of consumer autonomy and rational choice.
In 2010 the international retail marketing trade association Point of Purchase Advertising International (POPAI) directly addressed this issue in its “Recommended Code of Conduct for Consumer Tracking Methods.” The code concluded that “the ability to record and track a customer’s every move through the store, identify customers facially and demographically, and pinpoint where and what customers are looking at, picking up, and putting into their shopping carts through Observed Tracking Data (OTD) raises privacy issues and sends shivers down the spine of even the boldest marketer.”74 By 2015, however, the code had disappeared from POPAI’s website, the marketing and retailing industries having closed ranks around the idea that these activities are perfectly fine as long as shoppers give their consent by downloading apps and/or turning on Wi-Fi, Bluetooth, location tracking, or other features of their mobile devices that signal a shopper’s presence. As we have seen in earlier chapters, loyalty strategies encourage people to opt into tracking via apps. We have also seen that retailers create privacy policies that intentionally avoid shedding any light on tracking activities. The policies not only are turgid documents designed to be unreadable by nonexperts; they are tough-luck contracts draped in take-it-or-leave-it terms. Moreover, their opaque nature decreases the likelihood that shoppers will become upset over specific details of the merchants’ use of their data. Given the policies’ true intent, our finding that most Americans assume a “privacy policy” means their data won’t be shared without their permission, and the fact that the FTC has made no attempt to implement any changes that would clear up this misconception, we have a brick-and-mortar retailing environment that is a frontier for customer data exploitation.
The marketing and retailing establishments generally have been secure pursuing their activities without government intrusion. A notable exception is the Children’s Online Privacy Protection Act, a landmark law that makes it illegal for companies to gather data on, or track, children under the age of thirteen without explicit parental permission. This one law notwithstanding, Tony Hadley, senior vice president of government affairs and public policy at Experian, which sells both online and offline information on hundreds of millions of Americans, stated confidently in 2014 that Congress would not pass a law regulating the collection of marketing data in the foreseeable future, or maybe ever. “Should marketing data be regulated like credit, like employment, like lending data?” he pondered, referring to specific laws that address financial and health data in the name of privacy. “I think the clear indication that we’re getting from Congress is no.”75
Beyond Congress, the Federal Trade Commission (FTC) has taken the lead at the national level in wading into the controversies surrounding the activities that Experian and so many of the retailers and intermediaries are pursuing. A number of the agency’s commissioners and staff members are sympathetic to complaints expressed by individuals and advocacy organizations regarding potential misuse of digital-marketing data. In an attempt to prevent deception, the FTC in 1998 adopted a set of “fair information practice principles” that require firms to provide “notice” of their information practices; “choice” (giving individuals options about how their personal identifying information is used beyond the initial purpose); “access” (giving individuals reasonable opportunity to review the information collected and to make corrections); and “security” (reasonable steps to protect the information). Over time, the FTC has added to these principles with categories such as “collection limitation,” “purpose specification,” and “privacy by design.” Yet privacy expert Robert Gellman has concluded that, “from 1998 through 2010, the Commission’s description of [fair information practice principles] has been consistently inconsistent.”76 More troubling, companies have learned how to conform to the letter of the FTC categories while running roughshod over their spirit—the incoherent privacy policies, a response to the FTC requirement that websites provide “notice” of their information practices, are a primary example. Marketers and retailers also play games with the “choice” and “access” principles; for example, the typical tough-luck privacy policy is an illusion of choice.
Marketers have generally had free rein to track people and collect information about them, as the FTC has been hobbled by the narrow definition of “harm” that it (and advocates petitioning it) must follow before it can act: injury to a person resulting from the use of the person’s gender, race, or age in health care, financial, or employment decisions and ending in monetary loss. For example, the FTC has shown an interest in whether certain loan companies collect personal data that, while seeming to have nothing to do with race, enables those companies to circumvent race discrimination laws. The commission has also been willing to penalize companies that have gathered or have used data in a deceptive manner. Although these are important initiatives, the FTC’s definition of harm prevents it from addressing fundamental concerns about the public’s right to know about the specific ways companies compile personal data and how they use the data. Nor can it mandate limits on in-the-aisle data gathering in the new retailing era. As the FTC itself noted in a 2010 report, approaches to protecting consumer privacy that employ a harm-based model tend to focus on enhancing physical security, limiting economic injury, and minimizing unwanted intrusions into the daily lives of consumers.77 The report goes on to note that such an approach tends to leave out reputational harm and the pervasive fear many Americans have of being watched—issues that certainly intersect with concerns we have found in the Annenberg surveys. Yet the issues our surveys have identified are at once more specific and broader: Americans want to have control over the data that marketers obtain about them. They believe that marketers currently have total control, and they worry (perhaps because of that lack of autonomy) that the commercial relationships they have as a result of marketers possessing their personal information are fraught. And they feel resigned to being unable to do anything about it.
The retail transformation taking place challenges the usefulness of the fair information practices and other self-regulatory regimes set up to protect Americans’ right to privacy in the commercial realm. Philosopher and New York University professor Helen Nissenbaum notes that privacy involves the contextually appropriate flow of personal information,78 meaning that this flow should be consistent with the expectations of those whose data is being used. A website’s “notice” of its information practices helps individuals to determine whether the flow meets their expectations. Yet the evolving cross-platform surveillance activities of the new retailing world make it difficult, if not impossible, for an individual to understand and act on even this one aspect of the fair information practice principles; the person would continually have to review and attempt to comprehend several sets of frequently changing privacy practices of merchants online, on apps, and in the physical stores. That person would also have to be familiar with the evolving privacy practices of third-party app companies such as shopkick and inMarket. Even in a perfect world, carrying out such due diligence successfully would be a huge task.
Companies such as Google and Acxiom say they offer individuals the opportunity to view and correct or update their personal information stored in the businesses’ database. However, these firms join with other companies to meld their respective sets of data in targeting potential customers, resulting in individualized messages that often incorporate far more personal information than the characteristics individuals see. Further, in 2013 the New York Times reported that the website Acxiom had created to make this information available, aboutthedata.com, wasn’t releasing all the personal information it had collected.79 Jeff Chester of the consumer group Center for Digital Democracy told the newspaper that the website’s language “is so innocuous that the average consumer would think there’s no privacy concern.”80 A comparison between the information Acxiom was making available to individuals on aboutthedata.com and the descriptions of what it was selling in its 2013 Consumer Data Products Catalog indicates that as of late 2015 the company was still making available only a small percentage of the personal data. For example, the website did not allude to the company’s tracking of people on social media, yet the catalog offers a panoply of products that provide such information as the number of an individual’s social network friends and the names of the people an individual is following—and is followed by—on Twitter.81 Health data is another example, as the website did not mention that Acxiom collects this type of personal data yet the product catalog sells information focusing on such areas as an individual’s cholesterol level, diabetes, or senior needs. Nor did the website make any mention of the sixty-seven profiling tags the catalog assigned to people based on household socioeconomic status, such as “Summit Estates,” “Humble Homes,” and “Resilient Renters.”
Acxiom’s actions, like those of Google and other firms, suggest a misleading approach that would keep even the most assiduous people from understanding the activities surrounding their personal information. Although the Federal Trade Commission is mandated to monitor and punish deception in the marketplace, one has to wonder whether the fair information practice principles themselves actually aid deception or misleading inferences by pretending that notice, choice, and access can be carried out in the emerging complex, multilayered world.
* * *
What is taking place is clearly not business as usual. It’s not even the business that existed when the fair information practice principles were written. We cannot rely on self-regulation by marketers and the monitoring abilities of individual shoppers to resolve the complex ethical, social, and legal problems created by the new regimes of commercial surveillance. Our Annenberg surveys consistently indicate that even when people consent, for whatever reason, to being tracked or to allowing retailers to use their personal information, they don’t truly understand what’s taking place behind the screen. At the heart of the situation is an asymmetry of power: on the one hand, retailers have vast amounts of personal information on people and can identify them using a range of means; on the other, the public has little means of educating itself about those activities or acting on them.
Regulation is a crucial means for addressing this situation, as it can immediately slow the pace of marketing surveillance. Some public-interest advocates frustrated with the FTC hold out hope that the Federal Communications Commission (FCC) will step up with stricter regulations that encourage transparency of data use by marketers and discourage tracking in the commercial sphere without permission. Unlike the FTC’s mandate to counter business harms, the FCC has the broader government mandate to regulate “in the public interest, convenience and necessity,” to quote the Communications Act of 1934. In recent years, the FCC has reclassified internet service providers (ISPs) as common carriers, similar to phone services. In 2016 FCC chairman Tom Wheeler proposed FCC rules under which those providers (typically cable firms and large telecommunications firms such as AT&T and Verizon) would be prohibited from sharing customer data without active (“opt-in”) permission from those individuals. Wheeler reasoned that because they monitor all the broadband traffic to every location and device, ISPs have the capability of learning enormous amounts about where their customers go and what they look at online and on apps. Public interest activists cheered the idea, but the ISPs reacted angrily. They argued the FCC shouldn’t be regulating them with draconian rules while, they contended, the FTC was allowing Google, Facebook, and other commercial giants to scarf up at least as much data about individuals. The ISPs gathered the support of important lawmakers, who charged that the FCC was overreaching its regulatory authority. As of late 2016 Wheeler remained intent on pushing his plan through the FCC, and his opponents remained equally intent on making sure such opt-in rules would never be implemented.
Rather than accept the ISPs’ argument against opt-in rules, the correct approach is to argue for the opposite: require a comprehensive opt-in policy for every company that wants to use an individual’s data, whether it be Google, inMarket, or Macy’s. Retailers will object, stating that people already opt in to being tracked when they install apps on their mobile devices, but this is a disingenuous argument requiring one to seriously believe that individuals stop to read the legalese of privacy policies on their smartphones’ small screens at the moment of download. One way such an opt-in could be implemented for apps is to prohibit the company offering the app from tracking individuals immediately after download. Instead, the firm should send downloaders a straightforward accounting of data use through email (under the condition that the firm discard the email addresses afterward). Only after the downloader responds affirmatively to the email, or uses the app at least a day after receiving the email, should the app owner be allowed to harvest data.
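A minimal sketch of how an app maker might enforce this rule appears below; the class, the method names, and the one-day waiting period are hypothetical illustrations of the proposal, not any existing platform requirement or regulation.

```python
# Hypothetical sketch of the proposed opt-in flow: no tracking at install time,
# a plain-language data-use notice sent by email, and collection permitted only
# after an affirmative reply or continued use at least one day later.
# Names and logic here are invented for illustration.

from datetime import datetime, timedelta

class OptInGate:
    def __init__(self, notice_sent_at: datetime):
        self.notice_sent_at = notice_sent_at
        self.affirmative_reply = False

    def record_reply(self) -> None:
        """The downloader replied 'yes' to the plain-language data-use email."""
        self.affirmative_reply = True

    def tracking_allowed(self, app_used_at: datetime) -> bool:
        """Data collection may begin only after consent or a full day of continued use."""
        waited_a_day = app_used_at >= self.notice_sent_at + timedelta(days=1)
        return self.affirmative_reply or waited_a_day

gate = OptInGate(notice_sent_at=datetime(2016, 6, 1, 9, 0))
print(gate.tracking_allowed(datetime(2016, 6, 1, 12, 0)))  # False: same day, no reply
gate.record_reply()
print(gate.tracking_allowed(datetime(2016, 6, 1, 12, 0)))  # True: affirmative reply
```

Under the proposal, the firm would also discard the email address once the notice has been sent.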
These opt-in requirements clearly will not address the many issues regarding commercial surveillance in the imminent omnichannel environment. But they can slow the growth of surveillance in the aisles so that society can then have the necessary time to chart a retailing future that satisfies marketers but protects people from being overrun by surveillance and minimizes discrimination. Despite the insistence of retailers’ lobbyists, we can’t put the burden on shoppers, who have a life and don’t have time to learn the ins and outs of new technologies that are often sugarcoated by companies that have a vested interest in deceiving them. The term transparency is used a lot in the marketing trade press to signify advertisers’ insistence that they must be able to look into the specifics of their programmatic-advertising activities so that they can calculate their return on investment and not get harmed monetarily. Yet when it comes to transparency in advertisers’ relationships with the public, the term is far less popular—and the activities related to implementing openness far less diligent. We need initiatives that give the public the right and ability to learn what companies know about them and how they profile them, and what forms of data lead to what kinds of personalized offers. We also need to get people excited about using that right and ability. Here are a few suggestions:
• Encourage more corporate openness about the commercial use of people’s data by naming, praising, and shaming. Public interest organizations as well as government agencies should develop clear definitions of transparency that reflect concerns identified in the Annenberg surveys and elsewhere. They should then systematically evaluate companies against those definitions. When activists, journalists, and government officials name and shame firms that don’t abide by the transparency norms, they can alert the public to stay away from bad actors, possibly force those actors to change their behaviors, and encourage some firms to see privacy as a selling point.
• Create an initiative that dissects and reports on the implications of privacy policies. Activists, journalists, and government officials—perhaps aided by crowd-sourcing initiatives—should take on the role of interpreting these legally binding documents for the public. Rather than focusing on whether websites abide by their privacy policies, privacy policy interpreters can be most helpful by uncovering how companies say they collect and use their data, and what the implications might be for the individual and for society. They should also advocate for a consumer right to consent to selected parts of privacy policies, highlighting the policies that already allow this. When this information is available in a digestible form, it may spur informed naming, praising, and shaming. It may well also lead firms to alter objectionable behaviors.
• Give individuals the right to know the specific data and profile on which a retailer bases any of its targeted messages, coupons, or other interactions. As long as the algorithms companies implement to analyze and predict the future behaviors of shoppers are hidden from public view, the potential for unwanted marketer exploitation of individuals’ data remains high. We therefore ought to consider it an individual’s right to access the profiles and scores companies use to create every personalized message and discount that the person receives. Although companies will argue that giving out this information exposes trade secrets, we contend that this can be done without exposing companies to such damage.
• Educate the public about digital media and marketing, beginning in middle school. To have a truly informed discussion, people need to be able to communicate in the language of the future. They have to learn the vocabulary of digital media and marketing and be familiar with the primary individuals behind it. This area should be considered a part of the liberal arts because a solid understanding of it is necessary for a thriving citizenry.
Activists, journalists, and schoolteachers all need to be educators when it comes to the changing retailing institution and its hidden curriculum. They can push against the asymmetry of power in retailing by casting light on the often non-privacy-oriented nature of merchants’ privacy policies, focusing on how companies say they collect and use their data, and on what the implications might be for the individual and society. When these practices are available in digestible form—and people realize how different the retailers’ norms are from their own—the knowledge may incite the praising or shaming of retailers and result in chastened merchants that change the behaviors their shoppers find disagreeable. These public activities, in turn, may spur the FTC, the FCC, Congress, state regulators, and the courts to work toward as even and as open a shopping playing field as possible. Also helping to push in this direction, and to puzzle out some of the legal conundrums the new environment is creating, are the platoons of privacy and surveillance scholars the nation is fortunate to have.82 They offer important ideas about regulation, corporate behavior, and cultural values that might well help regulators confront the evolving retail challenges highlighted in this book.
The stakes are high. Our society should not rush headlong into a new retailing world, but instead should question whether it is the one we want. Should payment for products include a part of yourself? Do you want the next generation strolling down store aisles and thinking it normal that the merchants have profiled and scored them, often in prejudicial ways, and that they don’t know how they were labeled, what consequences the labeling will have on their shopping experience, and whether they have any say in the matter? Merchants, left to their own interests and in response to hypercompetition, are creating this world. And they continue to work with the digital industry, which has grown around them, to ensure that future generations accept an environment of surveillance and tracking. If the retail industry prevails, a future societal mantra might well be, “Shoppers want to be tracked.” Our descendants might well remember when we failed to make a choice—if they recall there was a choice at all.