Privacy is one of the most pressing issues for smartphone users today. Companies are leveraging apps to snoop on people’s travels, habits, and purchases. The government has spent years collecting calling records and other smartphone data about large numbers of people. Law enforcement agencies are increasingly accessing smartphone information not just on criminals but on law-abiding citizens. To curb these intrusions, users need to understand the many different ways they are being monitored.
Apps are a major reason why smartphones are privacy invaders. In 2012, Carnegie Mellon University (CMU) researchers analyzed the 100 most popular Android apps and discovered that more than half gathered user data, including contact lists, user locations, and unique device IDs assigned to each phone.1 Device IDs are long letter-and-number codes that Apple and Google give to each phone so it can be distinguished from other iPhones and Android phones. On iOS, these IDs are called “Unique Device Identifiers” (UDIDs) and, on Android, “Android IDs.”
Android apps can also read a user’s phone number, Gmail address, full name, calendar data, call log, Web-browsing history, and bookmarks. iOS apps can look at users’ calendars, reminders, and photo libraries, and access their Facebook and Twitter accounts. Some apps must collect some of this data to operate effectively. For example, apps that help people find local deals, navigate public transportation, and detect where they parked their cars all need to know their users’ locations. iPhone developers use UDIDs to give beta users early access to their apps before they go live in the App Store, and Apple uses them to link people’s iTunes login credentials with their iOS devices, which is useful when you need to redownload an app you’ve already paid for.
Developers also routinely gather user information their apps don’t need, so they can send it to advertisers. The CMU researchers found that 61 of the 100 apps in their study did this. Angry Birds gave sensitive user data to seven mobile ad companies. So did Brightest Flashlight.
Users typically don’t know what an app does with the data it collects. CMU researchers uncovered these connections after they read the apps’ code and found small chunks of code from third-party advertising “libraries” embedded in it, says Hong. Mobile ad networks create these libraries and provide them to developers to stick in their apps. Developers incorporate the code so they can show ads in their apps, but integrating the libraries also enables ad networks to access user data.
Mobile ad companies often leverage this information to group app users into audience categories or segments that can be targeted. Flurry, a San Francisco–based app analytics and advertising firm, is one of these companies. Flurry partners with developers who enable it to gather data about smartphone users across hundreds of thousands of apps, and it sells advertisers access to those consumers by grouping them into more than 40 different “personas,” including “Singles,” “High-Net Worth Individuals,” and “LGBT.”2 Advertisers will typically pay a premium to reach their desired audiences, which means more money for companies like Flurry and the developers who partner with them. For developers, the more they know about their users, the more valuable their app is to advertisers.
When Google announced the new IDs, the company said the technology would “give users better controls” while providing developers with a “simple, standard system to continue to monetize their apps.”3 Hong is skeptical. “On the surface, advertising-specific IDs appear to be a really good idea,” he says. “But I’m concerned that people will try to find a workaround.” Hong draws an analogy to the desktop Internet where Do Not Track initiatives spurred advertisers to devise tracking technologies that are harder for users to detect and delete. He predicts that “we will see the same challenges with smartphones as with [the desktop Internet].”
Though apps gather far more data than most people realize, they don’t do it without warning. Android apps are permission-based, meaning they tell users what data they plan to collect and request their permission. This information appears as soon as a smartphone user tries to install an app. iOS apps don’t request permissions during installation, but users are notified the first time an app tries to access sensitive data, and they can also disable or enable specific permissions for individual apps in their phone settings.
Hong thinks Apple and Google “tried their best to figure out” how to notify people of app permissions, but he says both the iOS and Android briefing systems have weaknesses. The iOS approach, he says, is “good, in that an app tells you when it’s about to do something [that requires permission], but it’s not always clear why it’s going to be doing it.” With Android’s approach, “you have to hit the install button before you see the permissions,” which may not be an effective way to present the information, because “at that point, you’ve already chosen to install the app.” Android also takes an all-or-nothing attitude toward permissions, so a person who wants to use a certain app must either agree to all the requested permissions or decide not to download and use the app at all. Hong says both Android and iOS wrestle with “how to explain all the things going on inside an app without having too little or too much detail.”
Recent research shows that Android users often don’t read app permissions and frequently misunderstand them. A 2012 University of California–Berkeley study of Android app permissions found that only 17 percent of study participants (all of them Android users) looked at the permissions when they installed apps from Google Play. Even when they did, comprehension was low. The researchers gave participants a multiple-choice questionnaire that asked them to describe the meaning of various permissions. Based on the responses, fewer than 25 percent of the participants exhibited “competent” comprehension of app permissions.4
In a written summary of their findings, the researchers concluded, “The current Android permission system does not help most users make good security decisions.”5 They cited “warning fatigue” and confusing descriptions as two of the system’s major flaws.6 In other words, smartphone users see so many permissions warnings that they stop paying attention. App permissions are also written in technical jargon and fail to specify the purpose of their requests. For example, many apps ask permission to read a user’s “phone status and identity.” People don’t realize it, but granting that permission enables the app to detect whether the phone is actively making or receiving a call and lets it copy the phone’s IMEI [International Mobile Equipment Identity] number—another type of unique identifier phone makers assign to the phones they manufacture, so they can be identified in case of theft. Apps don’t need users’ IMEIs to function, so developers that solicit them are likely sending that data to ad networks for ad targeting.
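The IMEI mentioned above has a fixed structure: fifteen decimal digits, with the final digit serving as a Luhn checksum over the first fourteen, so that mistyped identifiers can be caught. A minimal Python sketch of that check (illustrative only, not any carrier’s actual validation code):

```python
def imei_checksum_ok(imei: str) -> bool:
    """Validate an IMEI's final Luhn check digit.

    An IMEI is 15 decimal digits; the 15th is a checksum over
    the first 14, which lets phones and carriers catch typos.
    """
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei[:14]):
        d = int(ch)
        if i % 2 == 1:        # double every second digit...
            d *= 2
            if d > 9:         # ...and sum the resulting digits
                d -= 9
        total += d
    return (10 - total % 10) % 10 == int(imei[14])

imei_checksum_ok("490154203237518")  # a commonly cited valid test IMEI
```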
Apps are proliferating so rapidly with so little regulation that former EFF technologist Dan Auerbach refers to the app market as “the Wild West.” “The barrier to entry to becoming a developer is so low, most developers don’t have much experience handling user data,” he explains. “Users are at the whim of developers, which is scary.”
Developers, in turn, are at the whim of advertisers, and that is the root of many app privacy problems. The freemium app economy partly relies on the sale of consumer data and ads to make money. In 2013, Hong and his researchers interviewed developers in the United States and surveyed developers worldwide about their attitudes toward privacy. He found that developers who invade user privacy often do so unintentionally: “Sometimes developers don’t understand what these advertising libraries are doing. They know it’s an advertising library, and they throw it in their apps, but they don’t know how often data is collected, where it’s being sent, or the range of data. They just think, ‘My app won’t work [or make money] if I don’t put in this library.’ . . . I’m pretty sure advertisers don’t see themselves as bad guys [either]. They’re trying to figure out how to offer free apps and have innovation. The challenge is, once your business model is predicated on ads, you have a strong incentive to collect as much data as possible about [users] because of behavioral ads. If developers can double their [ad] click-through rates based on having more user information, and that would essentially double their revenues—well, you can see how that whole cycle goes.”
Disrupting that cycle will require better developer education, guidelines, and tools, says Hong. “In our interviews we did see developers who said they wanted to do something about privacy, but it was not clear to them what they should be doing.”
In the absence of a specific app privacy law, the U.S. Federal Trade Commission (FTC) has gone after a few app developers for threatening consumer privacy and violating the FTC Act, which prohibits unfair or deceptive business practices. In February 2013, Path Inc., the start-up behind the eponymous social networking app, settled FTC charges that it had violated the FTC Act by collecting users’ smartphone address books “without their knowledge and consent.”7 The FTC said Path also allowed approximately 3,000 preteen children to sign up for its service without parental consent, which is a violation of the Children’s Online Privacy Protection Rule (COPPA). Updated in 2013, the COPPA rule requires apps (and websites and other online services) to obtain “verifiable parental consent” before collecting personal information—such as names, addresses, contact information, photos, and videos—from children younger than 13.8
The FTC also went after the creator of the Brightest Flashlight app that had rattled people in the CMU privacy study. In December 2013, Idaho-based Goldenshores Technologies settled charges that it had violated the FTC Act by “deceiv[ing] consumers about how their geolocation information [and unique device IDs] would be shared with advertising networks and other third parties.”9 Among other stipulations, the FTC settlements required both companies to delete the personal information they had gathered and improve their disclosure and permissions procedures. Path also had to pay an $800,000 penalty for its COPPA violations.
In February 2013, on the same day it announced the Path settlement, the FTC issued a mobile privacy report that recommended that platform providers educate developers on privacy issues, require developers to make privacy disclosures, and “reasonably enforce” various best practices.10 The guidelines are nonbinding, but the agency said it would “view adherence to [strong privacy codes] favorably in connection with its law enforcement work.”11 The GSMA, the Commerce Department’s NTIA agency, and the California attorney general have also formulated “best practice guidelines” for developers. Hong has discussed app privacy with the FTC, and he doesn’t think a specific app privacy law is necessary, especially since it’s not clear what the law should be, but he contends that greater transparency is needed to keep the app business healthy. He says: “We’ve got this interesting ecosystem, with a gigantic number of apps, and we don’t want to destroy it, because we’re seeing lots of innovation. People are generally OK with ads, and behavioral ads, if things aren’t hidden and they can make an informed decision. We just need a better conversation in the public sphere as to what’s going on.” Given the power that smartphone platform providers wield over their app stores, strengthening app privacy may depend on companies like Apple and Google instituting stricter rules.
Apps aren’t the only way companies peep on smartphone users. Retailers use smartphone Wi-Fi and Bluetooth connections to surreptitiously learn about consumers who visit their stores. They do this by purchasing technology that grabs IDs known as Media Access Control (MAC) addresses, which are 12-digit alphanumeric codes that help smartphones communicate with Wi-Fi networks. All Wi-Fi–capable smartphones have unique MAC addresses assigned by their manufacturers. They also have separate Bluetooth MAC addresses, which are used to pair smartphones with Bluetooth accessories, such as wireless headsets.
Retailers can read people’s MAC addresses through their store Wi-Fi networks or special sensors. The technology providers are supposed to de-identify this information, and retailers primarily use it for general insights, such as which store areas attract the most and least traffic and how long shoppers browse and wait in line before making purchases; but over time, retailers could gather enough data about shoppers to create more detailed, though still anonymous, profiles. To protect their privacy, people can turn off their smartphones’ Wi-Fi and Bluetooth connections before entering stores, or alter their MAC addresses, but the latter requires hacking their phones to obtain access to all of their files and programs—a process called “rooting” when done on Android phones and “jailbreaking” when done on iPhones.
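The de-identification these providers promise is typically done by replacing the MAC address with a one-way hash before it is stored. The sketch below assumes a salted SHA-256 hash, which is one plausible approach rather than any specific vendor’s method. Note that the same phone still produces the same token on every visit, which is precisely what makes repeat-visit analytics possible:

```python
import hashlib

def deidentify_mac(mac: str, salt: str) -> str:
    """One-way hash of a MAC address (salted SHA-256, truncated)."""
    # Normalize "A4:5E:60:C2:19:7F" or "a4-5e-60-c2-19-7f" to "a45e60c2197f"
    normalized = mac.lower().replace(":", "").replace("-", "")
    return hashlib.sha256((salt + normalized).encode()).hexdigest()[:16]

# The same phone maps to the same token regardless of formatting...
a = deidentify_mac("A4:5E:60:C2:19:7F", salt="store-2014")
b = deidentify_mac("a4-5e-60-c2-19-7f", salt="store-2014")
# ...while a different salt (say, a different store chain) breaks linkage.
c = deidentify_mac("A4:5E:60:C2:19:7F", salt="other-chain")
```

The raw MAC never has to be stored, yet the shopper remains trackable across visits, which is why privacy advocates argue that hashing alone is weak anonymization.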
Senator Charles Schumer (D-NY) and the Future of Privacy Forum (FPF), a Washington, D.C.–based think tank, have publicized the need for the responsible use of these so-called smart store technologies. In October 2013, the FPF released a code of conduct for the firms that provide this technology. The companies agreed to ask their retailer partners to put signs in stores telling shoppers they are being tracked. The signs would also give shoppers the address of SmartStorePrivacy.org, an FPF website where they can opt out of being monitored. Civil liberties advocates, such as the EFF, applauded this attempt toward self-regulation but also highlighted the policy’s limitations, including the fact that retailers can decline (and so far have declined) to post the proposed signs.
Carriers are tracing smartphone users’ shopping patterns, too—and they’re selling the information. Verizon, Sprint, and AT&T recently established programs that provide data about their subscribers to marketers, advertisers, and retailers. The three carriers mention these programs in their subscriber privacy policies, and they say the data is made anonymous. Verizon, Sprint, and AT&T include their cellphone subscribers in these programs by default. Subscribers must take the initiative to opt out.
Verizon’s program is called Precision Market Insights. Launched in October 2012, it collects “detailed demographics and geographics,” including “shopping habits, interests, travel patterns, and mobile browsing trends,”12 for all of Verizon’s consumer subscribers—i.e., not its corporate or government subscribers—unless they opt out. Verizon gathers this data from its smartphone users by recording their locations, the websites they visit on their phones, including the terms they use for mobile searches, and how they use their apps. In the next step, as a 2013 CNN story explained, “Verizon sends that data to an internal database, matching it up with a deep trove of demographic information about you from companies including data giant Experian. The data is stripped of any personally identifying information, aggregated into categories, and are placed into reports for Precision customers to use.”13 For example, a sports team could request data on the demographics (i.e., income level, age range) of the people who attend its games and use this data to tweak its local advertising.
Sprint introduced its Reporting & Analytics program at the same time Verizon launched its program. Like Verizon, Sprint gathers two general types of subscriber data: mobile usage information and consumer information. Mobile usage information is data Sprint collects through its network, such as location information, the websites subscribers visit, and the types of apps subscribers use on their phones. Consumer information is more specific to Sprint products, services, and subscribers, such as the data and calling features its subscribers use, how much and how often they use them, and their gender and age range. Sprint uses the data to produce business and marketing reports, or sells it to other companies that produce their own. As examples, Sprint says:
We may aggregate customer information across a particular region and create a report showing that 10,000 subscribers from a given city visited a sports stadium. . . . [W]e may share deidentified location data . . . to create a report showing that 10,000 mobile subscribers passed a retail location on a given day.14
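A report of the kind Sprint describes boils down to counting distinct de-identified subscribers per place and day. A toy sketch of that aggregation (the tokens, places, and dates below are invented):

```python
from collections import defaultdict

# Hypothetical de-identified records: (subscriber_token, place, date).
pings = [
    ("tok1", "stadium", "2013-09-01"),
    ("tok1", "stadium", "2013-09-01"),  # a second ping, same visit
    ("tok2", "stadium", "2013-09-01"),
    ("tok3", "mall",    "2013-09-01"),
]

def visitors_per_place(records):
    """Count distinct subscribers seen at each (place, date) pair."""
    seen = defaultdict(set)
    for token, place, date in records:
        seen[(place, date)].add(token)  # sets drop duplicate pings
    return {key: len(tokens) for key, tokens in seen.items()}

visitors_per_place(pings)
# {("stadium", "2013-09-01"): 2, ("mall", "2013-09-01"): 1}
```

The output contains only counts, never individual subscribers, which is the sense in which carriers call these reports “aggregated.”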
AT&T has a similar External Marketing & Analytics program that sells customer location information, mobile browsing habits, and app usage to retailers, TV networks, and device makers. The example AT&T employs is similar to Sprint’s. The carrier says it could create a report that shows a retailer the number of AT&T wireless devices in or near its store locations by time of day and day of the week, along with general shopper demographics, such as age range and gender. Though that type of report doesn’t sound intrusive, civil liberties advocates regard AT&T’s program as a greater potential privacy threat than Sprint’s or Verizon’s, because AT&T is able to aggregate detailed usage data across its wireless, Wi-Fi, and Internet Protocol TV (IPTV) networks.
All of these programs are designed to manage privacy issues by synthesizing information without ever identifying specific individuals. But as a 2013 MIT Technology Review article noted:
The concerns about making such data available . . . are not that individual data points will leak out or contain compromising information but that they might be cross-referenced with other data sources to reveal unintended details about individuals or specific groups.15
Recent events and research have shown that data tracking can have worrying implications even if people aren’t identified personally.
THE NATIONAL SECURITY AGENCY CONTROVERSY
Murky consumer-tracking and data-mining initiatives underscore the need for smartphone privacy protections, but for many smartphone users it was the NSA’s phone-records scandal that drove the point home and made smartphone surveillance feel like a real, immediate concern.
The NSA controversy began in June 2013, when whistleblower and former NSA contractor Edward Snowden leaked a stash of classified NSA documents to media organizations. The resulting articles, published in the British daily the Guardian and elsewhere, revealed that the NSA was keeping records on millions of American landline and cellphone calls every day.16
President George W. Bush authorized the program in 2001 following the September 11 terrorist attacks. It operated without court supervision until 2006, at which point the NSA began using secret court orders to compel AT&T, Sprint, and Verizon to tell it what numbers their subscribers were calling or receiving calls from, at what time, and for how long “on an ongoing daily basis.”17 The call data did not list people’s names or addresses, but it did contain information that could be used to determine a person’s identity, such as the caller and call recipient’s phone numbers and IMEI and IMSI (International Mobile Subscriber Identity) numbers, the latter of which is stored in a phone’s SIM card and used by carriers to identify subscribers on their networks. The NSA saved this carrier-supplied information in databases for up to five years and searched it several hundred times a year.
Officially, the NSA was only supposed to query its phone records when it had “reasonable, articulable suspicion” that the phone numbers it was researching were associated with “specific foreign terrorist organizations,”18 but media reports found instances when NSA employees strayed beyond this rule, mostly unintentionally. NSA phone data queries could also be very wide in scope, with any one potentially involving thousands of Americans’ phone records.
People believed the NSA tracked all American phone activity until February 2014, when the Wall Street Journal19 and Washington Post20 reported that technical and compliance barriers had limited the agency’s data collection to 30 percent or less of all U.S. calls (landline and wireless combined). Verizon, for example, hands over data regarding only its landline calls, not its cellphone subscribers’ calls, and T-Mobile does not automatically share any data with the NSA. But privacy advocates have noted that even if the collection is not comprehensive, the agency still gathers information for the phone-records program in bulk.
The revelations about the NSA’s bulk-phone-records and other data-collection programs, including ones involving people’s online communications, such as e-mail, triggered more than a dozen legislative proposals, several lawsuits from civil liberties groups, and a number of mass rallies. Each of the initiatives sought to curtail governmental monitoring of private communications.
To quell the uproar, President Barack Obama requested the creation of a review group to assess U.S. intelligence policy, including the NSA’s data-collection practices. The five-person advisory panel made more than 40 recommendations in a December 2013 report, including the substantive modification of the NSA’s bulk-phone-records program, because the government lacked “sufficient justification” to collect and store the sensitive data.21 The report also said the bulk-phone-records program had not proved to be “essential to preventing [terrorist] attacks.”22 A few weeks later an independent federal watchdog agency, the Privacy and Civil Liberties Oversight Board, met with the president and told him the majority of its members had concluded that the bulk-phone-records program was ineffective and “lack[ed] a viable legal foundation,”23 and therefore should be shut down. Obama declined to do that, but in January 2014 he announced program reforms, including stricter legal requirements for NSA bulk-phone-records queries, limits on the scope of accessible data, and the development of a new phone-records program to replace the government’s controversial practice of storing massive amounts of American phone data on its own servers. In his speech, Obama called the bulk-phone-records program a “powerful tool” that can help intelligence agencies “identify patterns or pursue leads that may thwart impending threats.”24 He also acknowledged that “the government collection and storage of such bulk data . . . creates a potential for abuse.”25
In March 2014, Obama recommended that the government leave these data with the carriers and submit search requests to them when needed (subject to judicial approval). Civil liberties advocates applauded the idea of ending the government’s systematic collection of Americans’ phone records, but also noted that the proposal would leave “the underlying legal theory that spawned [the program] intact.”26
The NSA scandal showed how vulnerable phone-related information is to government intrusion and the complicated role that carriers and other service providers play as keepers of that information. The Fourth Amendment safeguards people’s “papers and effects”27 against unreasonable search and seizure by requiring a warrant, but the government doesn’t consider phone data subject to the Fourth Amendment. Under the third-party doctrine, a legal concept established decades ago, information knowingly revealed to a third party loses its Fourth Amendment protections. From the government’s perspective, individuals who sign up for cellphone service automatically surrender their right to privacy, because they know their carriers will track their phone activity and keep records of it. In fact, the government often refers to this data as the carriers’ “business records.”28
Judges have upheld the third-party doctrine, although one federal judge, U.S. District Court Judge Richard Leon, disagreed with the government’s interpretation when he ruled on a lawsuit that challenged the NSA’s bulk-phone-records program. In his December 2013 written opinion, Leon said that the program was “almost-Orwellian” and likely constituted “an unreasonable search under the Fourth Amendment,”29 which would make it unconstitutional. His decision granted the case’s two plaintiffs—a conservative legal activist and one of his clients—a preliminary injunction that would have blocked their phone data from government monitoring. But Leon put his order on hold in anticipation of a government appeal, which the Justice Department soon requested.
The NSA controversy also illustrated how the law distinguishes between content and metadata and how that distinction jeopardizes smartphone privacy. Metadata is the transactional data surrounding a technological action, such as the date and time a phone call was placed. Content is the substance of the action, such as the audio portion of a call. The data the NSA compiled in its bulk-phone-records database was metadata, not content. The government typically does not view metadata as sensitive information, which is problematic for smartphone users, because they create metadata each time they make and receive calls, send and receive e-mails, use apps, search and surf the mobile Web, and take photos. As Hanni Fakhoury, an EFF staff attorney who specializes in privacy issues, says, “Smartphones can generate big data trails.”
Civil liberties advocates say laws shouldn’t treat metadata and content differently. While metadata isn’t content in the traditional sense, it reveals who talked to whom at a given point in time. That information can convey details, such as people’s political leanings, religious affiliations, and medical conditions, that may be more sensitive than the content of a one-minute phone conversation. Law enforcement can use metadata to reconstruct people’s past activities and to predict their future actions. Senator Ron Wyden (D-OR), an outspoken critic of the NSA’s bulk-phone-records program, has called metadata “a treasure trove of human relationship data.”30 Groups including the EFF have argued that government collection and analysis of phone metadata threatens Americans’ First Amendment–protected freedom of association, because cellphone users may avoid calling certain people (such as an undocumented workers’ outreach organization) or places (such as mosques) out of fear that their behavior will be tracked and misinterpreted.
Metadata is also not as anonymous as it appears to be, since some of it can be matched to people’s names using readily available online tools. In the wake of the NSA bulk-phone-records revelations, Stanford University computer science researchers created an Android app, MetaPhone, that pulled phone call metadata from phones. They asked volunteers to download the app and grant them access to their data. Once they had gathered a data set, the researchers randomly selected 5,000 phone numbers and found they were able to match 27 percent of them to people or businesses just by searching for the numbers in Facebook, Google’s business listings service Google Places, and the online review site Yelp. The researchers also took a separate group of 100 numbers and identified 91 of them by manually running them through Facebook, Google, Yelp, and public records databases.
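The matching step is conceptually simple: look each number up in a public directory and see what comes back. The Stanford team queried Facebook, Google Places, and Yelp; the sketch below simulates those lookups with an invented local directory to show the logic, not the real services or data:

```python
# Invented stand-in for public listings (the real study queried
# Facebook, Google Places, and Yelp; these entries are fictional).
public_listings = {
    "+14155550123": "Mission District Pharmacy",
    "+14155550456": "Jane Doe",
}

def match_numbers(numbers, directory):
    """Return (matches, hit rate) for a sample of phone numbers."""
    matched = {n: directory[n] for n in numbers if n in directory}
    rate = len(matched) / len(numbers) if numbers else 0.0
    return matched, rate

sample = ["+14155550123", "+14155550456", "+14155559999"]
matched, rate = match_numbers(sample, public_listings)
# Two of the three sampled numbers resolve to a name or business.
```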
In a follow-up study that examined three months of phone metadata from 546 volunteers, the researchers found that 30 percent had contacted pharmacies during that period, 8 percent religious institutions, and 7 percent “firearm sales and repair” businesses. Calls were also placed to Alcoholics Anonymous, a reproductive pro-choice organization, labor unions, divorce lawyers, and sexually transmitted disease clinics. In a blog post, the researchers wrote, “phone metadata is unambiguously sensitive, even in a small population and over a short time window.”31
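Statistics like “30 percent contacted pharmacies” fall out of a simple tally over the call logs. A sketch of that analysis, using invented data (the category labels echo the study’s findings, but the logs here are purely illustrative):

```python
# Invented call logs keyed by participant: each value is the set of
# business categories that participant's called numbers resolved to.
calls = {
    "p1": {"pharmacy", "pizza place"},
    "p2": {"church"},
    "p3": {"pharmacy", "firearm dealer"},
    "p4": {"pizza place"},
}

def share_contacting(logs, category):
    """Fraction of participants whose calls include the category."""
    hits = sum(1 for contacted in logs.values() if category in contacted)
    return hits / len(logs)

share_contacting(calls, "pharmacy")  # 2 of 4 participants, i.e. 0.5
```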
PHONE LOCATION DATA AND WARRANTLESS SEARCHES
Both the metadata distinction and the third-party doctrine are factors in the ongoing privacy debate about smartphone location data. Civil liberties advocates consider location records protected under the Fourth Amendment, but since location data is not communications content, and subscribers allow carriers to collect it, many courts consider the information to be carrier property. These courts say police and the government need only a court order—not a search warrant—to request location data from carriers. Court orders are easier to obtain than search warrants, because they do not require police to show probable cause that a crime has been or is being committed. The police only have to claim the information is relevant and material to an ongoing investigation.
Statistics indicate that federal, state, and local law enforcement agencies access the location data of tens of thousands of cellphones and smartphones a year through carriers. After Senator Ed Markey (D-MA) pushed carriers to disclose the number of law enforcement requests for phone-related data they receive annually, eight of the then-largest U.S. carriers revealed that they had collectively responded to 1.1 million requests in 2012. Markey says the true figure is likely higher, because Sprint “did not provide complete information” in its response.32 An unspecified portion of those requests sought location information.
AT&T and Verizon supplied more detailed and up-to-date information in early 2014 when they separately published “transparency reports” in response to pressure from privacy activists and others seeking greater clarity on how and when they divulge customer data to law enforcement. Verizon said it received approximately 38,000 law enforcement demands for location data in 2013, through either court orders or search warrants, and that the number of warrants and orders for location information is increasing each year.33 AT&T said it received nearly as many demands for location data in 2013.34
The NSA also collects location data from cellphones worldwide, according to a 2013 Washington Post investigation35 based on confidential NSA documents supplied by Edward Snowden. The NSA’s location-tracking efforts are focused on foreign intelligence targets outside the United States. But since the NSA obtains phone location information by tapping into telecommunications providers’ “key network routing points,”36 the agency ends up “incidentally”37 collecting the locations of an undetermined number of American cellphones, including those of Americans traveling abroad. The NSA saves this location data in a database and uses it to determine social relationships, such as whom its targets might be meeting or accompanying.
The NSA says it does not currently collect Americans’ domestic phone-location data in bulk, but it has in the past, and it may resume doing so in the future. After months of obfuscation on the subject, intelligence officials revealed this information during an October 2013 Senate Judiciary Committee hearing. After the New York Times reported that the NSA had gathered cell tower location information from U.S. carriers, the agency’s then-director, Keith B. Alexander, and the director of national intelligence, James R. Clapper, confirmed it had done so in 2010 and 2011 as part of a secret pilot project designed “to test the ability of its systems to handle”38 such information. (This project was separate from the NSA’s phone metadata program.) At the same hearing, the officials said the NSA had never used the location data for intelligence purposes, but Alexander admitted that could change, and that collecting the locations of cellphone calls for NSA analysis “may be something that is a future requirement for the country.”39 The NSA would need to apply for permission to resume location tracking on a bulk level, but the special court that assesses the agency’s demands has approved almost all of its requests thus far. The NSA has also said it will inform Congress if it starts gathering phone-location data.
Civil liberties advocates object, since even anonymous location data can be traced back to specific users because phone usage patterns are largely predictable. A 2013 study by researchers from Harvard, MIT, and Belgium’s Louvain University found it was possible to “uniquely characterize” nearly everyone in a data set of 1.5 million people using a few cellphone-location data points.40 After examining 15 months of anonymous user data from a European carrier, the researchers were able to differentiate 95 percent of the people with just four randomly chosen data points that disclosed the user’s location and time of day. This information could then be cross-referenced with publicly available information, such as a person’s home or work address, to identify and track a particular individual, much the way the MetaPhone researchers did with phone numbers in their experiment. The American Civil Liberties Union (ACLU) likes to say, “If the government knows where you are, the government knows who you are.”41
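The mechanics behind that finding can be illustrated with a toy simulation. All of the parameters below (the number of users, towers, and points per trace) are invented for illustration; the actual study analyzed real carrier antenna records, not synthetic ones:

```python
import random

random.seed(1)

# Toy model: each user's long-term trace is a set of (cell_tower, hour_of_day)
# observations. Tower and hour counts are invented, purely illustrative.
NUM_USERS, TOWERS, HOURS = 10_000, 500, 24

traces = {
    u: {(random.randrange(TOWERS), random.randrange(HOURS)) for _ in range(40)}
    for u in range(NUM_USERS)
}

def is_unique(target, k):
    """True if k random points from the target's trace match no other user."""
    points = set(random.sample(sorted(traces[target]), k))
    matches = [u for u, trace in traces.items() if points <= trace]
    return matches == [target]

for k in (1, 2, 4):
    hits = sum(is_unique(u, k) for u in random.sample(range(NUM_USERS), 200))
    print(f"{k} point(s): {hits / 200:.0%} of sampled users uniquely identified")
```

Even in this crude model, a single point matches dozens of users, but two or more points already single out most of them, which mirrors the study’s conclusion that four spatio-temporal points sufficed to distinguish 95 percent of real subjects.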
Senator Wyden has been trying to pass a federal law prohibiting warrantless phone tracking since 2011. His Geolocational Privacy and Surveillance Act (GPS Act), co-written with and sponsored by Representative Jason Chaffetz (R-UT), is stuck in committee review after being reintroduced in Congress in March 2013. If it passes, it would require government entities, including the police, to prove probable cause and obtain warrants before tracing a suspect’s cellphone location in real time or acquiring historical cellphone location data. The bill makes exceptions for emergency situations, such as national security threats or tracking people who are in immediate danger.
In the absence of a federal law, states have been taking action. In June 2013, Montana became the first state to enact an anti-cellphone-tracking law. Maine passed a similar law a few weeks later, and Indiana, Maryland, Utah, and Virginia passed broader laws in March 2014. The Maine and Montana laws mirror the GPS Act, though they are less extensive, while the Utah law requires a probable cause warrant not just for location information, but also for “stored” and/or “transmitted data” from “electronic devices”—language that the ACLU notes could be interpreted to protect a range of electronic communications content.42 The Indiana, Maryland, and Virginia laws only pertain to real-time (not historical) location information. The Massachusetts and New Jersey Supreme Courts have also ruled that police need warrants to track phones, though the Massachusetts decision specifically applies to requests concerning longer time periods, such as two weeks or more of location information.
A related area of concern is whether the police can search people’s cellphones after they have been arrested. Civil liberties advocates say these examinations are overly invasive when they include smartphones. As ACLU technologist and policy analyst Chris Soghoian has written: “The type of data stored on a smartphone can paint a near-complete picture of even the most private details of someone’s personal life.”43 To illustrate his point, Soghoian published a document that a Department of Homeland Security agency had submitted to court in connection with a 2012 Michigan drug investigation. The document outlined what law enforcement was able to learn after commissioning a forensic exam of a suspect’s iPhone. The trove of sensitive information included call activity, contact lists, stored voicemails, text messages, photos and videos, apps, eight different passwords, and 659 geolocation points, including 227 cell towers and 403 Wi-Fi networks to which the phone had previously connected.
Courts are divided on the constitutionality of warrantless cellphone searches. A legal doctrine known as “search incident to arrest” permits police to search a suspect during or immediately after a lawful arrest. But when the Supreme Court developed that rule in the 1960s, it was intended to permit the search of a suspect’s clothing and the immediate vicinity for weapons and evidence—not a smartphone that carries far more data than a person’s pockets. The Stored Communications Act, the federal law that regulates the government’s access to digital information stored by a service provider, shields an individual’s text messages, e-mails, and Facebook and Twitter messages from government agencies for 180 days, unless they produce a warrant. But the act was passed in 1986, and it’s not clear whether it trumps the search incident to arrest rule in the context of digital information saved on smartphones.
Technology evolves so quickly that civil liberties advocates worry that smartphone surveillance methods that aren’t common today will soon become widespread. The ACLU has warned, “More and more when it comes to monitoring the public, [technological] capability is driving policy. The limits of law enforcement surveillance are being determined by what is technologically possible, not what is wise or even lawful.”44 These technologies include ways to remotely collect smartphone data.
Stingray surveillance systems, which capture phone IDs within a particular vicinity, are an emerging smartphone privacy threat. Stingrays are portable, antenna-equipped devices that mimic cell towers and trick nearby phones into connecting to them so that they can grab the phones’ ID numbers. Every cellphone has multiple ID numbers, but the ones Stingrays snare are mostly IMEI and IMSI numbers, which is why some people call the devices IMSI catchers.
Law enforcement agencies use Stingrays to track a suspect’s phone location—and thus, the suspect—in real time. Since Stingrays are as small as a shoe box, they can be placed in a car and transported around a suspect’s neighborhood. Once the Stingray picks up the suspect’s phone signal, police measure its strength from varied angles and map the data on a computer until they locate the suspect. The systems are controversial for two reasons: they arguably invade the suspect’s privacy, and they scoop up the phone data of anyone within a radius of typically about one mile. Police or the government could then use this data to trace people’s actions via their phones and uncover their names. This method could be used to remotely identify attendees at a protest, for example. Police would simply have to activate a Stingray near the protest and collect people’s phone IDs.
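The signal-strength mapping described above amounts to trilateration. The following is a minimal sketch using an idealized free-space path-loss model and entirely invented readings; real Stingray internals are proprietary and far more sophisticated:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-30.0, path_loss_exp=2.0):
    """Invert a log-distance path-loss model: received power -> meters.
    The reference power and exponent here are illustrative guesses."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Hypothetical (measurement position in meters, observed signal strength in dBm)
# pairs logged while driving the device around a neighborhood.
readings = [((0, 0), -66.5), ((100, 0), -64.0), ((0, 100), -69.3)]

def locate(readings, span=200, step=1):
    """Grid-search the point whose geometry best matches every distance estimate."""
    best, best_err = None, float("inf")
    for gx in range(0, span, step):
        for gy in range(0, span, step):
            err = sum(
                (math.hypot(gx - mx, gy - my) - rssi_to_distance(rssi)) ** 2
                for (mx, my), rssi in readings
            )
            if err < best_err:
                best, best_err = (gx, gy), err
    return best

print(locate(readings))  # converges near the phone's position, about (60, 30)
```

In practice, multipath, transmit-power uncertainty, and obstructions make any single distance estimate noisy, which is why measurements from varied angles are needed before the error surface narrows to one location.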
Privacy groups say these broad searches violate the Fourth Amendment, especially since some Stingrays could enable police to eavesdrop on phone calls through interception modules installed by their manufacturers. These would function like a wiretap but with wider reach and less judicial oversight, since law enforcement generally needs only a court order to deploy a Stingray, whereas a wiretap always requires a warrant.
The Department of Justice has admitted that Stingray use is “very common”45 among federal agents, and Stingray investigations are surprisingly prevalent among local police, as well. The FBI is known to lend its Stingrays to state and local law enforcement agencies, and some state and city police have purchased their own, though they are pricey—as much as $400,000 per system, according to one estimate. The Los Angeles Police Department bought a Stingray in 2006 and routinely uses it for burglary, drug, and murder investigations.46 The Florida Department of Law Enforcement has spent more than $3 million on Stingrays since 2008, and it loans them to regional and local law enforcement agencies, according to the ACLU.47 Court cases and investigations by journalists and civil liberties advocates have also uncovered local police use of Stingrays in Arizona, Indiana, northern California, and Fort Worth. In December 2013, USA Today reported that at least 25 U.S. state and local police departments owned Stingrays, and that some had defrayed the cost through federal antiterrorism grants, even though they used the devices for “far broader” purposes.48
Civil liberties groups often discuss Stingrays alongside another cellphone information-gathering tactic called “tower dumps.” These are time-targeted information reports related to specific cell towers. Police investigating a crime in a particular location can ask carriers to identify all the cellphones that were connected to the nearest cell towers (for calls, for data, or simply because they were in proximity) during a given period of time—usually a few hours. Like Stingrays, tower dumps can hoover up hundreds or even thousands of law-abiding citizens’ location information in pursuit of clues about a suspect or crime. But unlike Stingrays, tower dumps require carrier cooperation, and they can’t track people in real time. In their transparency reports, AT&T and Verizon said they received about 1,000 and 3,200 tower dump requests, respectively, in 2013. One in four U.S. law-enforcement agencies has used tower dumps to obtain information, according to USA Today.49
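As a sketch of how investigators might cross-reference such dumps, consider the toy records below. The phone numbers, towers, and timestamps are all invented; a real dump comes from a carrier and can contain thousands of entries per tower:

```python
from datetime import datetime, timedelta

# Hypothetical tower-dump records: (phone_id, tower_id, connection timestamp).
records = [
    ("555-0101", "tower-A", datetime(2014, 3, 1, 21, 15)),
    ("555-0102", "tower-A", datetime(2014, 3, 1, 21, 40)),
    ("555-0101", "tower-B", datetime(2014, 3, 5, 2, 5)),
    ("555-0103", "tower-B", datetime(2014, 3, 5, 2, 30)),
]

def tower_dump(records, tower, start, window=timedelta(hours=2)):
    """Every phone connected to `tower` within `window` after `start`."""
    return {
        pid for pid, twr, ts in records
        if twr == tower and start <= ts <= start + window
    }

# Dumps requested for two separate crime scenes and time windows.
scene1 = tower_dump(records, "tower-A", datetime(2014, 3, 1, 21, 0))
scene2 = tower_dump(records, "tower-B", datetime(2014, 3, 5, 2, 0))
print(scene1 & scene2)  # phones present near both scenes
```

Intersecting the dumps from two scenes is what lets investigators narrow thousands of bystanders down to a handful of candidate phones—while still having swept up everyone else’s presence data along the way.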
Government and police surveillance aren’t the only Stingray-related concerns. Civil liberties advocates say stalkers and criminals are using similar technologies to track cellular communications. While Electronic Frontier Foundation (EFF) attorney Fakhoury says, “It’s still too early to tell how [widely deployed] Stingrays will be,” he also notes that “it’s troublesome to have devices like this that can look around everywhere and get the information of every phone in an area.” The EFF calls Stingrays the “biggest technological threat to cellphone privacy you don’t know about.”50
Stingrays will pose an even greater menace to privacy if they are paired with the unmanned aerial vehicles (UAV) or unmanned aircraft systems (UAS) known as drones. Domestically, drones are used for law enforcement, firefighting, search and rescue efforts, disaster relief, and weather monitoring, while the military uses them abroad for reconnaissance and surveillance missions and attacks against alleged terrorists. Drones aren’t focused on capturing smartphone data, but privacy groups say the vehicles can be outfitted with devices that intercept phone calls and text messages. Independent security researchers already have developed a small drone that can spy on data that smartphones send over Wi-Fi, such as e-mail usernames and passwords and credit card information.51 “At some point drones and Stingrays will converge,” predicts Fakhoury.
Though drone privacy regulations have yet to be implemented on a federal level, law enforcement agencies have been flying the vehicles throughout the country for several years. U.S. Customs and Border Protection (CBP) owns approximately ten drones,52 which it deploys along the U.S.-Mexico and U.S.-Canada borders to “identify and intercept potential terrorists and illegal cross-border activity.”53 CBP also flies its drones on behalf of other government agencies, including the FBI, U.S. Immigration and Customs Enforcement (ICE), the Drug Enforcement Administration (DEA), the U.S. Marshals Service, and county sheriff’s departments, and it did so nearly 700 times between 2010 and 2012.54
The drone privacy issue is expected to take on more urgency. In March 2014, a federal judge ruled that the de facto ban the Federal Aviation Administration (FAA) had placed on commercial drones in U.S. airspace was not legally binding. The decision would have allowed photographers, filmmakers, surveyors, and news organizations to deploy unmanned vehicles over the continental United States, but it was stayed when the FAA quickly appealed. A ruling on the appeal is still pending, but privacy groups worry that when commercial drones are integrated into the skies, some of them will be used for surveillance, due to their low operational costs and advanced technological capabilities.
PUBLIC RESPONSE
Not everyone finds these emerging technologies intimidating. People’s notions of privacy are highly individual and often vary according to age. In an April 2013 survey of 3,900 consumers in 13 countries, nearly two out of three (65 percent) 18- to 34-year-olds described themselves as uninterested in privacy matters related to online and mobile communications.55 The next age group, of 35- to 44-year-olds, expressed only slightly more interest in guarding their digital privacy.
Yet polls that asked Americans specifically about warrantless phone searches, location tracking for marketing purposes, and NSA data collection found the majority of respondents object to these actions. More than three out of four respondents to a 2012 University of California–Berkeley study said police should get court permission before searching a phone during an arrest.56 In the same study, 70 percent of respondents said they would “definitely not allow” carriers to use their locations to tailor ads to them.57
Sentiment about NSA data tracking was also negative. In a November 2013 Washington Post poll, 69 percent of respondents said they were concerned about carriers’ collection of their personal information, and 66 percent said they were concerned about the NSA’s collection and use of their personal information. A January 2014 poll from the Pew Research Center and USA Today found the majority (53 percent) of respondents opposed the NSA’s bulk-phone-records program. The results represented a notable increase in negativity from July 2013, when half (50 percent) of respondents said they approved of the program and only 44 percent disapproved.
Young people are not always indifferent to privacy concerns, either. In an August 2013 Pew study, 59 percent of teenage girls said they had disabled location tracking on their smartphone apps due to privacy worries. (Teenage boys expressed less concern, with 37 percent of them taking this precaution.) A large majority of young Americans are also clearly against government collection of their phone and online communications for NSA-like purposes. An October-November 2013 Harvard University Institute of Politics survey of 18- to 29-year-olds found that only 14 percent of respondents were in favor of the government tracking their phone calls and GPS location, even if doing so would “aid national security efforts.”58
It’s clear the law needs to catch up with technology. The mass aggregation and analysis of information enabled by smartphones, affordable data storage, cloud computing, and supercomputers would have been unfathomable in the 1970s and 1980s, but that is when many of our current electronic privacy laws were written or legal precedents were established.
Much of the government’s legal rationale for arguing that people can’t expect privacy regarding their phone records stems from a 1979 legal case called Smith v. Maryland, which involved a phone company installing a device called a “pen register” on a robbery suspect’s home phone line in response to a police request (without a warrant). The device recorded the numbers the suspect—Smith—dialed and soon showed an outbound call to the house of a woman whose purse had been stolen, confirming him as a suspect in the robbery. Smith was convicted and argued that the warrantless use of a pen register constituted a search that invaded his privacy, but the Supreme Court rejected his claim, writing in its opinion: “All [telephone] subscribers realize . . . that [their] phone company has facilities for making permanent records of the numbers they dial.”59
Nearly 35 years after Smith v. Maryland, government lawyers and officials cited the case to defend the NSA’s phone metadata program, even though the NSA’s program involved millions of Americans rather than one robbery suspect, and the information that police learned about Smith from looking at his phone records for a few days was far more limited than what law enforcement can deduce from analyzing years of people’s cellphone and smartphone records today.
Stingrays and drones also gather much more information than would have been anticipated in the 1970s and 1980s, but experts say courts will probably draw on decades-old rulings to make decisions about those technologies, too. Says Fakhoury: “There was a Supreme Court case in the 1980s where the police department took a plane and flew it to look in a guy’s backyard to see if he was growing marijuana. The Court said that was OK and didn’t violate the Fourth Amendment. Under that precedent, a fight against drone [spying] will be tough.”
Lawmakers have introduced bills that aim to amend such outdated digital privacy laws as the 1986 Electronic Communications Privacy Act (ECPA), which includes the Stored Communications Act. When these laws are finally revamped, privacy groups want them to mandate search warrants for all forms of electronic communications, including content, metadata, and location data. “Smartphones are becoming more accurate and precise,” says Fakhoury. “If we want to future-proof these laws so they can withstand the next thirty to forty years, we need to stop thinking about everything as content or noncontent and start considering what different types of data reveal about a person.”