THIRD-PARTY INFORMATION AND THE CLOUD
There’s long been a serious chink in the Fourth Amendment’s armor: a Supreme Court rule that says if the government demands information about you from a third party, it is not a “search,” and therefore doesn’t implicate your rights at all. Time and technology have worn that chink into a spyhole on what we hold both intimate and dear.1 As our lives have moved from actual homes and offices to the virtual world of cyberspace, most everything about us is now sitting in some third party’s hands. If the government wants it, the Supreme Court says, it need only subpoena it from whoever happens to be holding it. No warrant is needed and no probable cause required, thank you very much.2
There’s a growing consensus that giving the government this much leeway can’t be right. It’s one reason so many companies like the texting giant WhatsApp and iPhone maker Apple have moved to end-to-end encryption—to protect user data by making themselves immune to these government requests. Still, figuring out how the third-party rule should be changed is a daunting task, one that has tied Congress in knots. That’s in part because it also implicates the other side of the equation: law enforcement’s ability to obtain information it needs in criminal and terrorism investigations.
CYBERSPACE: THE GOVERNMENT’S OWN FILE CABINET
What’s at stake under the third-party rule was made clear in a clash between the government and the social media giant Twitter, during the government’s investigation of WikiLeaks, the international organization dedicated to exposing government secrets. In May 2010, U.S. Army Private Bradley (now Chelsea) Manning was arrested for allegedly passing hundreds of thousands of classified documents to WikiLeaks. In December of that same year, as part of its investigation of the massive leak, federal prosecutors issued to Twitter what is known as a 2703(d) order—or simply a D-order—to get records pertaining to three Twitter customers, one of whom—the computer security expert and activist Jacob Appelbaum—was a United States citizen. (The other two were Rop Gonggrijp, who founded the Netherlands’ first private Internet service provider, and Birgitta Jonsdottir, a member of Iceland’s Parliament.)3
At the time, most tech companies simply would have complied. But Twitter, which has made its reputation on protecting user privacy, was different. It has long had a policy of notifying targets so they can defend themselves in court. Ben Lee, Twitter’s Vice President, Legal, explains that “modern platforms for communication and expression … create a certain level of responsibility … for the rest of society.” For that reason, he says, it is “hammered into” the minds of people at Twitter: “Protect the user where we can.” The goal of notifying the user, Lee emphasizes, is to “hold the government accountable to existing legal requirements before they can obtain user data.” 4
When the government demanded the information from Twitter, it did so in a court filing kept secret from the users and the public. It is common for the government to try to keep such requests under wraps, which, as the WikiLeaks Three later pointed out, makes it difficult “to oppose an order because the individual does not know about it.” When Twitter refused to comply until the order was made public and the targets were notified, it was instant front-page news. Barton Gellman, writing in Time, wagged his finger at Twitter’s competitors: “It is beyond reasonable doubt that authorities asked other companies to supply the same kinds of information sought from Twitter, but none of them admit it.”5
To be clear, the information the government sought in this case was not public. The government wanted everything Twitter had about the WikiLeaks Three other than their tweets: account names and user IDs, all personal addresses, payment information, session times, and IP addresses for devices from which tweets were sent. The targets complained the information could be “‘intensely revealing’ as to location” and would let the government create a “map” of their private associations.6
Twitter’s fight to notify its clients ended in vain, because the court decided, remarkably, that the targets of government inquiry didn’t even have legal “standing”—a legal right to come to court to challenge the attempt to gather their data. Instead, the third-party information holder—in this case Twitter—was the only one who could fight their battles for them. Twitter’s Lee was “flummoxed” when prosecutors first made this argument. A former Legal Aid defender well used to sitting across the table from lawyers in the District Attorney’s office, he was surprised at the “aggressiveness with which they were approaching this.” Twitter pointed out in court that even if it had the resources to fight all these battles on behalf of its customers, “Twitter will often know little or nothing about the underlying facts necessary to support their user’s argument that the subpoenas may be improper.” As the ACLU aptly noted, third-party companies “just don’t have the capacity—or the incentives—to go to bat against the government each time there is a challenge to one of their user’s rights.”7
What matters at the moment isn’t that the WikiLeaks Three lost, but why. The court ruled that the users could not complain about the government getting hold of their data, because they had “voluntarily” turned the information over to Twitter. “Voluntarily” is the trick word here. Even assuming using Twitter is voluntary, in today’s world we have little choice but to give our most intimate information to third parties all the time.8 Half the time, we don’t even know the information is being collected. Indeed, in ruling against the WikiLeaks Three, the court justified its position in part by saying the need to give over information like your IP address is “built directly into the architecture of the Internet.” That’s right, but it seems to undercut the court’s own argument that giving over the information is “voluntary.” It quickly becomes clear that “voluntary” in court-speak bears little relation to what ordinary human beings mean when they talk about giving something knowingly and freely to someone else.9
Under current Supreme Court decisions, virtually any information you provide to anyone is “voluntarily” given and thus fair game for the government to grab. Unless you plan to keep your cash in a mattress, you need a bank and credit. These institutions have all your financial information. Search engine companies know if you looked into breast cancer symptoms, sought marriage counseling, worried whether your kid was autistic, or wondered how to treat your hemorrhoids. “Smart meters” tell utility companies not only how much electricity you are using, but which appliances are using it and when. Radio-frequency identification tags—“RFID”—implanted in your credit cards, your passport, your customer loyalty cards, even ticket stubs, reveal what you buy and where you go. Your cell phone provider not only knows where you’ve been, but where you are right at this moment. The “cloud” holds all your music, your photos, your instant messages, your love notes, and your spreadsheets—even if you’ve done nothing but upload them for your private use. The ACLU’s chief technologist Chris Soghoian has written, aptly, “In the cloud, the government is just one subpoena away.” In theory it may be possible to go off the grid and avoid opening yourself up to any scrutiny—the Unabomber pulled it off for a while—but for most of us it is impossible to live that way.10
Few today doubt this area of the law is ripe for change, but what makes it tough is that the government claims a good argument of its own. As the judge in the WikiLeaks case pointed out, “The purpose of a criminal investigation is to find out whether crimes have occurred.” The whole reason “the legal threshold for issuing a subpoena is low,” explained a judge in another Twitter user case, is that some investigations would never get off the ground if probable cause were required just to get started in the first place. This sort of worry is in part what has FBI Director James Comey stumping the country fretting about encryption and issuing dire warnings about the government “going dark.”11
That’s the tension: protect the information, and law enforcement says it can’t go after some bad guys; weaken protections, and we all can say adios to any shard of security from government prying. Caught in the middle are the tech companies, which—frankly—have long tried to have it both ways. A company like Twitter, The New York Times pointed out, has to “play nice with the governments of countries in which it operates.” At the same time, the companies feel the need to reassure users of their privacy. Worse yet, says the president of the Electronic Privacy Information Center (EPIC), Marc Rotenberg, while “commercial providers like to act as though they are adjudicating a dispute between the government and users,” the truth is that many “want access to the data themselves” for commercial purposes such as promoting advertising. They want to hold it, but then it is right there in their hands when the government wants it. And so, Rotenberg concludes, “we are in this very weird triangular space.”12
This tension between the needs of law enforcement and user privacy—made worse by tech companies’ own complex interests—triggered one of the great legislative standoffs in memory. For over half a decade it has been apparent to everyone involved—legislators, law enforcement, courts, the public—that the laws on the books regulating law enforcement access to digital information held by third parties are hopelessly out of date. Everyone is vulnerable at present. But Congress is paralyzed; the most it has even tried to tackle is access to email, the easiest case for privacy protections. And so the battle is fought out in the courts—the very courts that gave away all our data in the first place.
WHAT YOU’VE “VOLUNTARILY” GIVEN AWAY
In a trio of cases from the 1960s and 1970s, the Supreme Court concluded that if you’d given your data to a third party, it was not a Fourth Amendment “search” for the government to acquire it.
The first case, from 1966, involved the notorious Teamsters Union leader Jimmy Hoffa. The government had gone after Hoffa for violating federal labor law. Hoffa was acquitted. Then the government caught him conspiring to bribe jurors. Critical to the government’s jury tampering case was the testimony of an informant named Partin. Partin agreed to rat out Hoffa to the feds in order to avoid prison for his own misdeeds. Hoffa complained that the government had placed Partin in his inner circle deliberately in order to gather information; the government insisted Partin was acting on his own initiative. Distinction without a difference, responded the justices: the Fourth Amendment simply doesn’t protect a wrongdoer’s “misplaced belief that a person to whom he voluntarily confides his wrongdoing will not reveal it.”13
Many, justifiably, associate the government’s use of secret informants with totalitarianism. Still, the Hoffa decision has a certain logic to it. Hoffa had blabbed, and in doing so, to paraphrase the Supreme Court’s terminology, he’d “assumed the risk” that the third party he told would turn that information over to the government. Surely, if Partin had decided to go to the government on his own initiative with the incriminating information, no one would have had any problem with that.
Pretty soon, though, the justices began to rob the idea of “voluntarily” giving information to third parties of all ordinary meaning. In the early 1970s, the government was investigating a fellow named Mitch Miller (no, not the television bandleader, for those old enough to remember) concerning a king-sized bootlegging operation. As part of its investigation, agents from the Bureau of Alcohol, Tobacco and Firearms used a subpoena to get Miller’s financial statements and deposit slips from his banks. Miller cried foul, but—relying on Hoffa—the justices concluded Miller had no Fourth Amendment rights. What the government had collected was “only information voluntarily conveyed to the banks and exposed to their employees in the ordinary course of business.” Just like Hoffa, the justices claimed, Miller took the “risk, in revealing his affairs to another,” that the information would be given to the government.14
There are two nontrivial problems with applying Hoffa like this to Miller. First, it is not clear any of us “voluntarily” use financial institutions. What is the alternative, exactly? But second, what risk had Miller actually assumed? Sure, he put his money in a bank rather than stashing it in a cupboard. But the risk he took in doing so—that bank officials would reveal his financial records to the government—was close to zero. Bank officials never would have put two and two together, in part because they didn’t have half the information—i.e., any suspicion that Miller was bootlegging. What really happened in Miller was that a federal law required the bank to retain the records, and then the government used a subpoena to force the bank to turn them over. That all may be fine as an accommodation to the needs of law enforcement—we’ll get to that question in just a moment—but to claim the government came across the information “voluntarily” is to torture the English language.15
The real crusher happened in the decision in Smith v. Maryland. This seemingly limited case from 1979 has become crucial to defining our rights in the information age. A woman was robbed. Then she started getting threatening and obscene phone calls. Once, the caller, who said he was the robber, asked her to step outside while he drove by her place. The police soon spotted the vehicle in her neighborhood, and obtained the car owner’s name and address by tracing the license plate number. Then they arranged for the phone company to install a “pen register”—a device that records the phone numbers a caller dials—which showed that Smith was phoning the woman from his home. This information was used in turn to get a warrant to search Smith’s house, where yet more evidence was found to convict him. Smith asked to have all the evidence thrown out, on the ground that installing the pen register was a warrantless “search.”16
The Court held Smith had no expectation of privacy in the phone numbers he dialed, because—as you surely can guess by now—“[w]hen he used his phone, [Smith] voluntarily conveyed numerical information to the telephone company and ‘exposed’ that information to its equipment in the ordinary course of business.” This makes less sense than Miller. Unlike the bank, the phone company wasn’t even holding the information the government wanted. The government had to have the company attach a device to collect the information. Still, the Court said that it didn’t matter whether the phone company chose to collect this sort of information on its own or not: “Regardless of the phone company’s election” the company “had facilities for recording it and was free to record.” Translated: If a third party is capable of gathering the information the government wants, it can make them collect it and turn it over.17
If the logic of Smith is sound, it becomes very difficult to see why the government can’t just ask the phone company to record your conversations whenever it wants. After all, the phone company is every bit as capable of recording conversations as it is phone numbers dialed—the fact that they don’t do it doesn’t mean they couldn’t. The justices in Smith distinguished Katz, in which—as we saw in the last chapter—they held that wiretapping was out, by stressing that the pen register didn’t capture the content of conversations, just the phone numbers dialed. But, as Justice Stewart, the author of the Katz decision said—dissenting in Smith—most people would not “be happy to have broadcast to the world a list of the … numbers they have called.” Not because it would incriminate them, but “because it easily could reveal … the most intimate details of a person’s life.”18
Given what a creep Smith was, it is easy to see why the Supreme Court ruled as it did, but nothing can reel back the unfathomable license the decision has been taken to grant the government. Smith has been used to justify everything from location tracking to bulk data collection by the National Security Agency. Police officials take full advantage of this third-party rule, in numerous cases, to obtain big helpings of personal information. In 2012, cell phone companies reported they’d received 1.3 million demands from law enforcement for everything from texts to location information. And, consistent with the “triangular” positioning of the tech companies, it has even turned into a major revenue stream for the businesses. AT&T alone took in more than $8 million that very year by turning over its customers’ information in response to law enforcement demands. At a private conference held in Washington, D.C., for law enforcement and their vendors, Sprint Nextel’s “manager of electronic surveillance”—that’s quite the job title, no?—described how it had set up a dedicated website so police could access customer information directly from their desks. “The tool has just really caught on fire with law enforcement,” he bragged.19
SUBPOENAS: A LICENSE TO PRY
What makes matters worse still is how the government gets its hands on most of this third-party information—by using a subpoena. A subpoena is an order to produce documents or other information at a given place or time, backed up by the threat of being held in contempt of court. Unlike with a warrant, to get a subpoena law enforcement officials need not show probable cause, and they don’t even have to get permission from a judge. In the Miller case, the Supreme Court blithely described how Treasury agents “presented” the bank presidents “with grand jury subpoenas issued in blank by the clerk of the District Court, and filled in by the United States Attorney’s office.”20
The origin of prosecutors’ “blank check” authority to issue subpoenas rests in the traditional function of the grand jury, an evidence-gathering body that dates back as far as twelfth-century England. Grand juries, usually composed of twelve to twenty-three people, are empaneled to investigate crime in the community. If the grand jury concludes there is cause to believe a crime has been committed, it hands down an indictment, signaling the start of criminal proceedings. The Supreme Court has said that a grand jury “can investigate merely on suspicion that the law is being violated, or even just because it wants assurance that it is not.” Its job “is not fully carried out until every available clue has been run down and all witnesses examined.” Given this “broad brush” role, the logic runs, it would make no sense to require probable cause even to begin investigating. As the Twitter court pointed out, requiring probable cause would stop an investigation in its tracks before it got going.21
If you are wondering, reasonably, how to square grand jury fishing expeditions with the probable cause requirement of the Fourth Amendment, the answer rests in the fact that historically the grand jury was separate from the government. Just like the Fourth Amendment itself, the grand jury was understood as a check on government. The grand jury’s “most valuable function,” the Supreme Court has said, is “not only to examine into the commission of crimes, but to stand between the prosecutor and the accused.”22
Part of the reason our federal Constitution contains a right to an indictment by the grand jury stems from the famous trials of John Peter Zenger. Zenger was a printer in the 1700s harshly critical of New York’s colonial governor. Thrice the government tried to prosecute Zenger; each time the grand jury refused to issue an indictment. Similarly, grand juries refused to let Stamp Act prosecutions go forward in the run-up to the Revolutionary War. Historically, grand juries—at their own initiative—pursued official wrongdoing and unveiled official corruption. During the Progressive Era, grand juries were the downfall of big city machines like that of Boss Tweed.23
Today, though, grand juries are nothing but the tool of prosecutors, who wield the subpoena power in the grand jury’s name, but with no real supervision by the jurors themselves. That’s why it is said that if a prosecutor asked, the grand jury would “indict a ham sandwich.” The country got a vivid taste of this when Special Prosecutor Ken Starr went after President Bill Clinton for perjury and obstruction of justice involving his affair with Monica Lewinsky. Not only did Starr subpoena Lewinsky’s semen-stained dress, he also used a subpoena to get his hands on her hard drive containing love letters to Clinton. He even hauled off the computer of one of Lewinsky’s friends, ultimately revealing to the world the friend’s private, intimate letters about her honeymoon in Tokyo.24
Congress eventually dispensed altogether with the pretense that the grand jury is supervising the prosecutor, authorizing government officials to issue subpoenas without a grand jury anywhere in the picture. Whether it was antitrust violations or failure to follow wage and hour laws, the notion was that administrative officials could not find offenders if they had to have probable cause before even beginning to investigate. More recently, the line between administrative agencies and prosecutors was obliterated. Statutes now empower prosecutors to go after health care fraud and child sex offenders by issuing their own subpoenas. Investigating health care fraud, prosecutors—with no showing of probable cause and no grand jury in existence—have forced doctors to turn over not only their financial records, but patient records, lists of magazines and journals they read, information about courses they take, and the financial records of their children. As one doctor fighting off a health care fraud subpoena pointed out, if a government agent came to take his papers without a warrant, that would violate the Fourth Amendment; and if the government agent got a warrant without probable cause, that also would violate the Fourth Amendment; so how come a prosecutor can just write out his own subpoena and demand the same papers?25
The failure of all checks on government prying came when Congress decided to give the FBI—not even prosecutors, but a policing agency—its own form of subpoena authority, National Security Letters (NSLs). Initially, NSLs were available only if the FBI had “specific and articulable facts” indicating that the person being investigated was “a foreign power or the agent of a foreign power.” When Congress passed the USA Patriot Act in the wake of 9/11, though, it substantially broadened the Bureau’s NSL powers. Now NSLs can be used to get information from anyone—foreign agent or not—so long as the FBI says the information is “relevant” to an authorized terrorism or intelligence investigation. Using this incredibly loose standard, the FBI is collecting everything from credit information to telephone toll and email subscriber records on American citizens. Following adoption of the Patriot Act, the number of NSLs skyrocketed to tens of thousands annually. The DOJ Inspector General’s office found widespread abuse of the practice, from substantially underreporting the number of requests in reports to Congress, to issuing something called “exigent letters”—for which there was zero authority in law—to gather information quickly without even meeting the minimal requirements for NSLs.26
The president’s Privacy Review Group on Intelligence and Communications Technologies, appointed by President Obama to investigate government spying in the wake of the Snowden revelations, urged the elimination of the NSL practice. “[I]t is important to emphasize,” Group members wrote, “that NSLs are issued directly by the FBI itself, rather than a judge or by a prosecutor acting under the auspices of a grand jury.” The Review Group found itself “unable to identify a principled argument why NSLs should be issued by FBI officials.” This remains the law, nonetheless.27
The rationale for letting law enforcement officials issue their own subpoenas is that—supposedly—the government acts under the ultimate supervision of the courts. Anyone who doesn’t think a subpoena is legit can come to court and challenge its validity. If the court agrees, it tosses the subpoena out—“quashes” it, in the lexicon.28
The first problem with this supposed justification is that in defending its subpoena in court, the government still does not have to show probable cause. It need only demonstrate that the subpoenaed information is “relevant” to a government investigation. Relevance is a whole lot less than probable cause, which is precisely why the subpoena is likened to a blank check. Probable cause means the government has cause to believe you did something wrong; “relevance” just means they think they need it whether you are under suspicion or not.29
But the real kicker is this: If the subpoena is served on a third party, like in the Miller case or the WikiLeaks case, the target won’t know about it, and so wouldn’t know to complain to a court in the first place. And even if companies like Twitter want to tell customers about the data hunt, they typically are forbidden by law from doing so. Gag orders are the order of the day. Many of the laws that authorize subpoenas and D-orders have provisions forbidding the recipient from telling the target. One judge, who balked at the practice, said that, in seeking to get a subscriber’s emails off its servers, the government wanted “Microsoft gagged for … well, forever.”30
CONGRESS STEPS IN
At the dawn of the digital age, it was clear to all concerned that congressional legislation was needed to regulate law enforcement’s access to electronic communications. Under the existing Wiretap Law, adopted in 1968, the government had to get a sort of “superwarrant” before it could listen in on telephone conversations. But no protection at all existed for email or other electronic information. It took no genius to see that given the Supreme Court’s third-party rule, and the government’s broad subpoena power, all the information in the hands of the new information service providers was going to be easy prey for government poaching. Not only was that bad for individual privacy, it also was bad for business: the burgeoning Internet companies needed to be able to assure customers that their data would remain secure. Even law enforcement needed help: In the face of the Supreme Court’s liberal third-party doctrine, states were adopting their own privacy laws to protect third-party disclosures, making a uniform solution essential. So civil libertarians and industry joined together, with support from law enforcement, to get Congress to do something.31
In response, in 1986, Congress enacted the Electronic Communications Privacy Act. At the heart of the ECPA rested the distinction, drawn initially by the Supreme Court in the Smith case, between what today we call “metadata”—such as the addressing information on an email, or the number that was dialed from a particular phone—and the “content” of those communications. Under the ECPA, the most protection is accorded to the content of communications. Before the government can get this information it generally needs a traditional warrant issued by a judge and based on probable cause. On the other hand, if the government wants noncontent “records” stored with third-party providers, the sort of D-order that was used in the WikiLeaks case will suffice. To obtain such an order the government need only provide a court with “specific and articulable facts” showing the information is potentially “relevant and material” to a criminal investigation. That’s not a whole lot; among other things, the government need not show the target has done anything wrong. Finally, armed with nothing but a subpoena issued by the government itself, under the even looser “relevance” standard, agents can get ahold of basic subscriber data such as name, address, login, and account information.32
Although the ECPA may have made sense in the early days of digital technology, its shortcomings have become glaring in a world no one could even imagine in 1986. The theory was that the more private the information, the higher level of suspicion and judicial supervision required. But it has not worked out that way.
First, the ECPA contains a strange loophole that allows the government to gather a lot of email with nothing but a subpoena. Email—like phone conversations—undeniably contains “content” and thus seems to require the highest level of protection—a probable cause warrant. And under the ECPA, if an email is sitting on a server for less than six months, a warrant is indeed needed before the government can read it. But if the email sits there for more than six months, the government can simply issue a subpoena and get it. Why this bizarre six-month distinction? Because when the ECPA was adopted in 1986, third-party storage was extremely expensive, and the assumption was that people would download their emails to their own computers to avoid incurring these costs. If they had not downloaded the email, the thought was that the email had been abandoned, and the government should be able to access it. But who, today, doesn’t store emails with commercial providers for more than six months?33
Then, there’s the widespread storage in the cloud of data other than email. In 1986, no one could have anticipated how much of our private lives would be kept on third-party servers—our personal documents, our diaries, our photos. All of this is undeniably “content.” Yet, under the ECPA, it appears a warrant is not necessary to get any of this material either.34
Finally, there’s the underprotection of metadata. The ECPA requires no probable cause to get this information. But metadata is often all the government needs to pry our lives apart. “In the analog world,” explains the Electronic Privacy Information Center’s Marc Rotenberg, “the transcript of the phone conversation was obviously more valuable than looking at a pattern of phone numbers.” That was the “old style” approach to law enforcement investigation. But the “new style … is all about data, all about network analysis. In that world the data is more important than the calls. It is more objective, it can’t be modified; people can’t use a code to hide its meaning.”35
The best example of the privacy implications of collecting metadata is location tracking. The New York Times explained in 2012 that “[i]n most cases, law enforcement officers do not need to hear the actual conversation; what they want to know can be discerned from a suspect’s location or travel patterns.” Government requests for cell phone location data have skyrocketed even as old-fashioned wiretap requests have become a disappearing breed. That’s because, as the Times elaborated: “location data can be as revealing of a cellphone owner’s associations, activities, and personal tastes as listening in on a conversation, for which a warrant is mandatory.”36
The ECPA as originally adopted makes little sense today, but in fairness it was simply asking too much of Congress—or anyone else in 1986—to have the faintest clue what the future would hold. In 1984, in the run-up to enactment of the ECPA, only 5 percent of homes had a personal computer. The World Wide Web as we know it did not exist. Congress had not authorized the development of the Internet for commercial use, and the first Web browser was seven years away.37 Email was a novelty to most; those who had it paid for it, and the idea of services such as Gmail was beyond the ken. Similarly with cell phones. The year before the ECPA was passed there were fewer than 1,000 cell sites. By 2010 one estimate put the number at more than 250,000.38
By very early in the twenty-first century, though, the overwhelming consensus was that the ECPA was seriously out of date and needed to be fixed. At a 2004 conference on Internet surveillance at George Washington Law School, every commentator who discussed the ECPA, no matter their ideological stripes, called for “changing it in fairly significant ways.” In 2010, Digital Due Process, a wide-ranging coalition of tech companies, individuals, and organizations from across the political spectrum, formed to lobby for ECPA reform. By 2015, groups as diverse as the conservative Heritage Action for America and the liberal ACLU even agreed on what needed to be done, which was to step up the standards by which government obtained information, including—in many cases—requiring a warrant and probable cause.39
But Congress was frozen because law enforcement—which also recognized the law needed to be changed—could not get comfortable with the proposed reforms.
LAW ENFORCEMENT’S TECHNOLOGY PROBLEM
By the middle of the second decade of this millennium, law enforcement faced a tough problem of its own. In a high-profile speech given on October 16, 2014, FBI Director Jim Comey explained that developments in communications technology were making it difficult for law enforcement to keep up. The question he asked was “Are Technology, Privacy, and Public Safety on a Collision Course?” 40
Comey’s central point was this: even when law enforcement could and did get orders from judges to engage in surveillance, technological change was rendering those orders “nothing more than a piece of paper.” When the Wiretap Act was passed in 1968, if law enforcement needed to know what was said on a telephone call, all it needed were “two alligator clips and a tape recorder.” Not only was the technology relatively straightforward, but there was only one provider, the monopoly known as Ma Bell. Now, though, things are more complex: “If a suspected criminal is in his car, and he switches from cellular coverage to Wi-Fi, we may be out of luck. If he switches from one app to another, or from cellular voice service to a voice or messaging app, we may lose him.” “The bad guys know this,” said Comey. And “they’re taking advantage of it every day.” 41
Early in the digital revolution, in 1994, Congress had given law enforcement a hand by enacting the Communications Assistance for Law Enforcement Act, or CALEA, which required communications firms to design their equipment specifically to ensure that law enforcement could conduct surveillance. But, as Comey told his audience, CALEA had been adopted “[t]wenty years ago—a lifetime in the Internet Age.” The same problem of unanticipated change that had made the ECPA obsolete in protecting our privacy was doing the same with regard to law enforcement’s ability under CALEA to get what it needed. For example, CALEA applied to “communications” companies, but it exempted “information” firms. Obviously, no one in Congress had foreseen the volume of Internet vehicles for communicating that we have today: Google Hangouts, apps that allow people to talk with one another, digital games that permit players to scream and yell but also to send messages. There are thousands of new “information” firms that facilitate communications, many of them start-ups whose hardware and software leave no room for law enforcement to gain access.42
And so, Comey declared, law enforcement was at risk of “going dark.” “Those charged with protecting our people aren’t always able to access the evidence we need to prosecute crime and prevent terrorism even with lawful authority.” 43
Indicative of the problem, in Comey’s mind, was Apple’s move to encrypt iPhones so that only the owner, armed with the passcode, could unlock the phone and access the data on it. With its iOS 8 update, Apple told consumers, “it’s not technically feasible for us to respond to government warrants for the extraction of this data.” Rather, “[w]e’ve built privacy into the things you use every day.” Comey’s address came just a month later, and he singled out Apple—as well as Google, whose Android system was following suit. “Both companies are run by good people, responding to what they perceive as market demand. But the place they are leading us is one that we shouldn’t go without careful thought and debate as a country.” 44
Soon enough, the country was treated to an example of what Comey said was wrong. Two homegrown terrorists, inspired by Islamist terrorists abroad, attacked a gathering of public health workers in San Bernardino, California, killing and wounding more than thirty people. As part of its investigation, the FBI obtained a court order requiring Apple to develop software so that FBI investigators could get into the telephone used by one of the perpetrators. As an epic court battle attracted the nation’s attention, an anonymous third party surfaced to show the FBI how it could gain access to the information it wanted, without Apple’s assistance. That resolved the immediate case. But there was no gainsaying the issue would be back before long.45
LAW ENFORCEMENT’S POLITICAL PROBLEM
The problem law enforcement faced in 2014 was technical, but—as Comey himself recognized—it was equally political. “In the wake of the Snowden disclosures,” he conceded, “the prevailing view is that the government is sweeping up all of our communications.” Comey sought to assure people “that is not true,” and issued dire warnings about the risks we face from those who would do us harm if law enforcement cannot get the information it needs, even with a warrant.46
The wall Comey was running into was that repeated news reports of government snooping on Americans, combined with instances of official dissembling—such as the president of the United States claiming none of it was happening before the Snowden revelations proved him wrong—had eroded the trust law enforcement requires to do its job. Law enforcement had gotten so aggressive in grabbing data that by the time Comey spoke of the challenge of “going dark,” many people were no longer in any mood to make its job any easier.
Law enforcement simply has had a hard time hearing society’s concern about government collecting private information from third-party providers. The harm in the Snowden disclosures, Comey said, “has extended—unfairly—to the investigations of law enforcement agencies that obtain individual warrants, approved by judges.” But the fight is not just about warrants, and if it were, then law enforcement might get a lot more of what it wants. The government consistently has taken legal positions—often based on the poor drafting of the ECPA—that would give it access to email communications held by third parties, without a warrant or probable cause.47 And, if you carefully parse the speeches and testimony of law enforcement officials, they talk about getting “court order[s] or warrant[s].” In other words, government still believes it should be able to get information from third parties using only subpoenas or D-orders. This was clear in joint testimony Comey and Deputy Attorney General Sally Quillian Yates gave to the Senate Judiciary Committee in July 2015. They bluntly expressed concern about a world in which “users have sole control over access to their devices and communications,” and discussed the need to use court orders “to recover the content of electronic communications from the technology provider.” Users, on the other hand, seem to think they should have “sole control” over their own devices and communications.48
In fairness, a tightening up of the third-party rule could create a problem for law enforcement, what we might call the “investigative gap.” As Principal Deputy Assistant Attorney General Elana Tyrangiel told the House Judiciary Committee in September 2015, “non-content information gathered early in investigations is often used to generate the probable cause necessary for a subsequent search warrant. Without the mechanism to obtain non-content information, it may be impossible for an investigation to develop and reach a stage where agents have the evidence necessary to obtain a warrant.” In other words, law enforcement says it needs to be able to collect information from third parties using its broad subpoena power, just to uncover probable cause of criminal activity in the first place.49
But what law enforcement has proven either unable or unwilling to do is prove how real this problem is—and many people are no longer willing simply to take it at its word. Tyrangiel, for example, gave the absolutely horrifying case of the government getting hold of photographs of a man “sexually abusing his prepubescent son.” As she told it, in displaying the photographs he had carefully masked his identity behind the anonymity of the Internet, and so court orders were required to discover the IP address of the device he was using, and ultimately his identity. But surely these photographs constituted probable cause of a crime being committed, and so law enforcement ought to be able to get a warrant. Rather than making their case rigorously and empirically, government officials regularly argue by pulling out anecdotes and horror stories. In today’s environment, though, the minute law enforcement offers up an anecdote, hecklers and naysayers instantly appear all over the Internet to dispute the claimed need.50
More than anything else, law enforcement has lost the help of dependable allies in the communications and tech industries. Recent disclosures have made clear the extent to which, even before Snowden, some of the telecom and tech firms were playing both sides of the fence—no company more than AT&T. In 2013, in response to a Freedom of Information Act request, law enforcement turned over a set of slides (apparently by mistake) describing the “Hemisphere Project.” Under Hemisphere, AT&T employees teamed up with the federal Drug Enforcement Administration to facilitate the rapid turnover of phone information. The government would pay AT&T to embed its employees in-house with the DEA. Then, when the government wanted information, it simply handed an administrative subpoena to the embedded AT&T employees. In this way, the government could get any phone information it wanted moving through an AT&T switch. By some accounts, the Hemisphere Project was collecting some four billion records a day before it was (supposedly) shut down. Through unacceptable subterfuge, this program was kept secret even from courts.51
Consumer anger—directed not only at the government but also at the tech and communications companies themselves—has caused those companies to hop squarely on the consumer-driven bandwagon to tighten privacy laws. A series of annual releases by the Electronic Frontier Foundation, titled “Who Has Your Back?,” tells the story in a simple chart, on which gold stars are given to companies that “help protect your data from the government.” Stars are awarded for things like “Requires a warrant for content,” “Tells users about government data requests,” and “Fights for users’ privacy rights” in courts or in Congress. As recently as 2011, there weren’t a lot of stars on that chart, other than those awarded to Twitter and Google. By 2015, however, gold was everywhere (AT&T still being a notable exception).52
The ECPA was a compromise, passed in 1986 with the concurrence of private industry, individual rights advocates, and law enforcement. This sort of “competitive cooperation” seems hard to imagine today. Caught in the crossfire between tech companies and consumers on the one hand, and law enforcement on the other, for the better part of a decade Congress has proven unable to deal with pressing issues like access to emails, location tracking, and encryption.
And so the battles have raged in the courts.
SOLVING FOR TECHNOLOGY: WHAT TO DO ABOUT THE THIRD-PARTY DOCTRINE?
Whether by courts or by Congress, lines need to be drawn, and wherever and however they are drawn is likely to be unsatisfying. Anyone who thinks this is easy is engaging in self-delusion. The digital transformation has put law enforcement, and thus all of us, in a kind of bind. The Internet may allow me to exchange sweet nothings I would not want others to hear, or post photos I would share only with my closest friends. But it also allows child abusers to strike unmentionable deals and terrorists to hatch unimaginably destructive plots. Even Twitter recognizes, says Ben Lee, that “[p]seudonymity and anonymity are very important and tricky things on the Internet” and that there is not “a right never to be unmasked.” The issue is what “process we need first” to ensure government accountability. The question, in short, is where on a very slippery slope we can draw lines about when the government should be able to get information about us from third parties.53
Law tends to move by analogies, but given the rapid technological transformation, those analogies have become perilously strained. In one case, a court reasoned that obtaining cell tower data to track a suspect’s location could not possibly be a search: “If a tool used to transport contraband gives off a signal that can be tracked for location, certainly the police can track the signal.” Otherwise, the court concluded “dogs could not be used to track a fugitive if the fugitive did not know that the dog hounds had his scent,” and “[a] getaway car could not be identified and followed based on the license plate number if the driver reasonably thought he had gotten away unseen.” That’s quite the stretch—in the hound and license plate examples the government is on the trail without requiring a data dump from a third-party provider of completely private information shared only because one needs to use a phone.54
Analogies run out because digital technology not only has altered where we store our data; it has profoundly affected the very way we live our lives. In a 2011 article titled “Home, Home on the Web,” a law professor, Kathy Strandburg, described how an earlier technology—the telephone—did the same. All of a sudden, people easily had long-distance relationships, and made “different decisions about where to work or live.” And thus a “telephone system open to unregulated wiretapping by government (and others)” would have changed the way the world evolved. So too, today, she says, the way our lives are lived is formed as much by the properties of cyberspace as by the physical space that surrounds us. She describes a hypothetical person, Moira, who in the morning collaborates on a report stored in the cloud with colleagues in another state, at lunch shares thoughts on Facebook with family and friends, then in the afternoon texts private messages to a long-distance boyfriend suggesting they stream a movie together on Netflix that night while talking together on Skype. This isn’t very hypothetical, is it?55
The question we must confront is this: Is all our information really fair game for the government, without warrant or probable cause, simply because a third party holds it? And if not, what’s in and what’s out?
Content
There is no serious argument that our “content” (as opposed to metadata), even if stored in cyberspace, should be available to law enforcement without a warrant. In their joint testimony to the Senate Judiciary Committee, Director Comey and Deputy Attorney General Yates said, “The more we as a society rely on electronic devices to communicate and store information, the more likely it is that information that was once found in filing cabinets, letters, and photo albums will now be stored only in electronic form.” That’s true, but it is not clear how it helps their case. To get that information in the past, what law enforcement needed was a warrant, based on probable cause. Why should it matter that the information is in virtual storage? When you rent an apartment or put things in a storage unit, the government can’t just come in and get what it wants on its own say-so. Although the government does not seem to like it, even the federal courts finally recognized in 2010 that despite the ECPA, the Constitution forbids collecting our emails from cyberspace—no matter how long they were on the server—without probable cause and a warrant.56
Metadata
Nor is it clear the rules should be any different for “metadata” as opposed to content, though here law enforcement has put up a much bigger fight. Relying on the 1979 Smith telephone pen register decision, the Department of Justice insists a subpoena should be enough to get “addressing information” for email and other electronic communications. But even assuming Smith was right when decided—and there is, as we’ve seen, reason to doubt that—metadata today is much richer than the “to/from” information of olden days. Today, phone metadata reveals not only what number you called, but whether the call was completed, how long you were on the line, and what equipment you used—and often, the location from which you made the call. Similarly, email metadata can reveal the identities of your correspondents, the computer you wrote from, any attachments, and so forth.57
What people, including some judges, are rapidly coming to see is that all these bits and pieces of metadata about people are just as revealing of our lives as content information—and thus deserve similar protection. In a case involving a warrantless search of a cell phone, the Supreme Court pointed out that Web browser history (addressing information, after all) “could reveal an individual’s private interests or concerns—perhaps a search for certain symptoms of disease, coupled with frequent visits to WebMD.” It said the same about the “apps” you own: “There are apps for Democratic Party news and Republican Party news; apps for alcohol, drug, and gambling addictions … for tracking pregnancy symptoms.” Some of this may be “content” and some “non-content,” but that’s the point: the line itself is collapsing, and so giving the government ready access to metadata on a grand scale no longer makes sense.58
Information Used by Third Parties
Some courts have suggested the government should have freer access to information we give third parties to use, rather than merely to store. The argument seems to be that if we’re allowing the third party to make use of our data, it is no longer private. But it’s not obvious this is right either.59
Judge Richard Posner points out that it is easy to confuse privacy with secrecy, but—he explains—keeping things private “does not mean refusing to share information with everyone.” “The fact that I disclose symptoms of illness to my doctor,” says Posner by way of example, “does not make my health a public fact, especially if he promises (where the rules of the medical profession require him) not to disclose my medical history to anyone without my permission.” Adds the philosopher Helen Nissenbaum: We talk to teachers about problems our kids are having that we would not share with anyone else; we give financial information to professionals that we expect they will keep to themselves. The sharing of information is contextual, and the law should respect this.60
If those entities decide to go to the government on their own that is one thing. It is quite another to claim no Fourth Amendment search occurs when the government shows up demanding the information. Just because Google’s Gmail machine-reads your email in order to display targeted ads, that hardly means the government should be able to order Google to turn the information over. As courts have recognized, your landlord may have a right to come into your apartment periodically to fix things or inspect for safety, but that doesn’t give her the right to go through your desk, and it surely doesn’t allow her to show the government around without a warrant. The hotel housekeeper cleans your room; that has never been seen as license for the government to slip in for a look around either.61
The Third Parties’ Own Records
A somewhat better argument is that the government should be able to demand access to records that contain your information but are created by the third party itself in its usual course of business. In the Miller case, for example, when approving the subpoena for the bootlegger’s financial information, the Court stressed, “these are the business records of the banks.” The DOJ has relied on this logic to argue in favor of obtaining location tracking information from cell phone providers.62
Still, the argument is a tricky one, underscoring the fact that tech companies are being put between a rock and a hard place. On the one hand, they are making a lot of money off user data and want to keep it. On the other hand, consumers are mad that the data is being stored, and thus is so easily accessible to the government. Some consumers already are choosing companies that will guarantee them privacy by making sure the data does not exist.63
The hard question we must face—which is as much a matter of policy as constitutional law—is whether we want to sacrifice the competitiveness of these companies, efficiency gains, and even our own security advantages, in order to give law enforcement the tools it claims to need. This was a point driven home by the former head of the NSA and CIA, Michael Hayden, in the context of the encryption debate. In an op-ed piece, Hayden described how in the late twentieth century the United States tried to guard against the export of high-powered computers, because our monopoly on that computing power made us leaders in breaking codes. Eventually, though, we realized that we were undercutting our global competitiveness in the computer industry, and that maintaining our long-term security advantage meant keeping that industry on top. So too, today, said Hayden, pointing a finger at law enforcement’s efforts to control the use of encryption: “One wonders what the Russias and Chinas of the world will demand if U.S.-based firms are forbidden to create encryption schemes inaccessible to themselves or the government.” His point is that companies are adopting encryption to be competitive in the face of consumer worries, and if the government insists on tunneling in nonetheless, then not only do we undercut competitiveness and efficiency, we may undermine security interests as well.64
It may be that there are no constitutional constraints on law enforcement obtaining a third party’s own records about us with but a subpoena. Some courts have so held. Still, we should be mindful of the costs in terms of privacy, efficiency, and competitiveness.
Ground Rules
Although at this juncture one might be inclined to think the whole third-party rule needs to be junked, that would fail to take account of law enforcement’s needs. Law enforcement forcefully maintains that it requires some of this third-party information, because insisting on warrants might derail important investigations before they get started. Suppose, for example, the government comes across some information that suggests a terrorist plot is in the offing. The information contains a phone number, and the government wants to find out who owns the phone, but doesn’t have probable cause. Or consumers tell a state Attorney General they think a company is engaging in fraud, but there’s not enough evidence there to get a warrant for anything. In such cases should the government do nothing and just hope probable cause develops? Or should there be some authority to get certain kinds of information from third parties via a subpoena?
While these are inescapably difficult questions, there are a few things that can be said with certainty, and that help frame a resolution.
For one thing, under no circumstances should law enforcement be writing its own blank checks. Without thinking clearly through the problem, we’ve drifted from grand jury supervision of prosecutors, to administrative agencies authorizing their own subpoenas, to prosecutors doing the same—all the way to the FBI getting its own NSL authority. There are various ways to get the situation back under control. Subpoenas—which, after all, are only a tool to get information—should not be used without prior judicial authorization, as is the case with warrants. This could be true even if what the government must show to get the order is less than probable cause, as with D-orders. It violates every fundamental constitutional precept to allow law enforcement to decide on its own what information to demand.
Similarly, we should require law enforcement to demonstrate how, and to what extent, its ability to detect crime will be substantially hindered before we grant it easy access to private information in third-party hands. Enough with the anecdotes and horror stories; we need real facts and data. We do need a sane policy to allow law enforcement to protect us, but sane policies are based whenever possible on hard facts. Law enforcement officials must develop the case for why they need this information, without pulling out horror stories about child pornographers that are an easy emotional sell but often prove quite uninformative.
Finally, as we have seen time and again, the courts are simply not the best place to resolve these nuanced issues. Their primary tool is the Constitution, which is a blunderbuss, not a scalpel. Deciding questions in constitutional terms casts them in concrete. Rapidly advancing technology has gotten us into this pickle. Hard-to-change rules adopted by judges lacking in expertise are not the way to get us out.
All this would be problematic enough if the question were only the government gaining access to our information in third-party hands. But it turns out the government not only acquires the information when it needs it; it is saving much of what it obtains to build databases about all of us. This chapter has been about the acquisition of the data; the next chapter tackles the problem of saving the information, and using it for data mining.