CHAPTER 14


“THE FIVE GUYS REPORT”

ON August 9, 2013, a hot, humid Friday, shortly after three in the afternoon, the laziest hour in the dreariest month for news in the nation’s capital, President Obama held a press conference in the East Room of the White House to announce that he was forming “a high-level group of outside experts” to review the charges of abuse in NSA surveillance.

“If you are outside of the intelligence community, if you are the ordinary person and you start seeing a bunch of headlines saying U.S. Big Brother is looking down on you, collecting telephone records, etc., well, understandably,” he said, “people would be concerned. I would be concerned too, if I wasn’t inside the government.” But Obama was inside the government, at its apex, and he’d developed a trust in the agencies’ propriety. Of course, he acknowledged, “it’s not enough for me, as President, to have confidence in these programs. The American people need to have confidence in them as well.”

And that, he seemed to be saying, would be the mission of this high-level group of outside experts: not so much to recommend major reforms or even to conduct a particularly hard-hitting probe, but rather, as he put it, to “consider how we can maintain the trust of the people.” He would also work with Congress to “improve the public’s confidence in the oversight conducted by the Foreign Intelligence Surveillance Court.” Both efforts would be “designed to ensure that the American people can trust” that the intelligence agencies’ actions were “in line with our interests and our values.” The high-level group, or the select intelligence committees of Congress, might come up with ways “to jigger slightly” the balance between national security and privacy, and that was fine. “If there are some additional things that we can do to build that trust back up,” Obama said, “then we should do them.” But he seemed to assume that big changes wouldn’t be necessary. “I am comfortable that the program currently is not being abused,” he said. “The question is, how do I make the American people more comfortable?”

That same day, as if to seal the case, the Obama administration published a twenty-three-page “white paper,” outlining the legal rationale for the bulk collection of metadata from Americans’ telephone calls, and the NSA issued its own seven-page memorandum, explaining the program’s purpose and constraints.

Already, by this time, Obama, White House chief of staff Denis McDonough, and Susan Rice, his first-term U.N. ambassador who’d recently replaced Tom Donilon as national security adviser, had mulled over possible candidates for the outside group of experts. A few days before the press conference, they chose five, asked them to serve, and, upon getting their consent, ordered the FBI to expedite security-clearance reviews for each.

It wasn’t entirely an outside, or independent, group. All five were old friends or former aides of President Obama. Still, it was a more disparate and intriguing bunch than his press conference led many skeptics to expect.

Michael Morell was the establishment pick, a thirty-three-year veteran of the CIA, who had just retired two months earlier as the agency’s deputy director and who’d been the main point of contact between Langley and the White House during the secret raid on Osama bin Laden’s lair in Pakistan. Morell’s presence on the panel would go some distance toward placating the intelligence community.

Two of the choices were colleagues of Obama from his days, in the 1990s, teaching at the University of Chicago Law School. One of them, Cass Sunstein, had also worked on his presidential campaign, served for three years as the chief administrator of his regulatory office, and was married to Samantha Power, his long-standing foreign policy aide, who had recently replaced Susan Rice as U.N. ambassador. An unconventional thinker on issues ranging from the First Amendment to animal rights, Sunstein had written an academic paper in 2008, proposing that government agencies infiltrate the social networks of extremist groups and post messages to undermine their conspiracy theories; some critics of Obama’s panel took this paper as a sign that Sunstein was well disposed to NSA domestic surveillance.

The other Chicagoan, Geoffrey Stone, had been dean of the law school when Obama taught there. A prominent member of the ACLU’s national advisory council and the author of highly lauded books on the First Amendment in wartime and on excessive secrecy in the national security establishment, Stone seemed a likely critic of NSA abuses.

Peter Swire, a professor of law at the Georgia Institute of Technology, was a longtime proponent of privacy on the Internet and author of a landmark essay on surveillance law. As the White House counsel on privacy during Bill Clinton’s presidency, Swire played a key role in the debate over the Clipper Chip, arguing against the NSA’s attempt—which he, correctly, saw as futile—to put a clamp on commercial encryption. A couple of years later, also on privacy grounds, he argued against Richard Clarke’s ill-fated plan to put critical-infrastructure industries on a separate Internet and to wire them so that, in the event of a security breach, the FBI would be directly alerted.

For that reason, Swire was nervous to learn that the fifth member of the Review Group would be Richard Clarke himself. A former White House official who’d immersed himself in NSA practices, written presidential directives on cyber security, and built a reputation for relentlessly promoting his own views and quashing those of others, Clarke was widely seen as a wild card.

Still the consummate operator, Clarke had made a huge splash since quitting the Bush White House on the eve of the Iraq War. One year after the invasion, he gained unlikely fame as an American folk hero at the 9/11 Commission’s nationally televised hearings, prefacing his testimony with an apology. “To the loved ones of the victims of 9/11, to them who are here in this room, to those who are watching on television,” he began, “your government failed you. Those entrusted with protecting you failed you. And I failed you. We tried hard, but that doesn’t matter because we failed. And for that failure, I would ask, once all the facts are out, for your understanding and for your forgiveness.”

It seemed to be a genuine plea of contrition—enhanced by the fact that no other Bush official, past or present, had apologized for anything—and the hearing room erupted with applause. After his testimony, family members of victims lined up to thank him, shake his hand, and hug him.

Clarke’s critics, whose numbers were legion, scoffed that he was just drumming up publicity. His new book, Against All Enemies: Inside America’s War on Terror, had hit the shelves the previous Friday, trumpeted by a segment on CBS TV’s 60 Minutes the Sunday night between the release date and the hearing. When it soared to the top of the best-seller charts, critics challenged his claims that, in the months leading up to 9/11, Bush’s top officials ignored warnings (including Clarke’s) of an impending al Qaeda attack and that, the day after the Twin Towers fell, Bush himself pressed Clarke to find evidence pinning the blame on Saddam Hussein to justify the coming war on Iraq. But Clarke, always a scrappy bureaucratic fighter, would never have opened himself to such easy pummeling; he knew the documents would back him up, and, as they trickled into the light of day, they did.

All along, though, Clarke retained his passion for cyber issues, and six years later, he wrote a book called Cyber War: The Next Threat to National Security and What to Do About It. Published in April 2010, it was derided by many as overwrought—legitimately in some particulars (he cited cyber attacks as the possible cause of a few major power outages that had been convincingly diagnosed as freak accidents or maintenance mishaps), but unfairly in the broad scheme of things. Some critics, especially those who knew the author, viewed the book as simply self-aggrandizing: Clarke was now chairman of a cyber-risk-management firm called Good Harbor; thus, they saw Cyber War as a propaganda pamphlet to drum up business.

But the main reason for the dismissive response was that the book’s scenarios and warnings seemed so unlikely, so sci-fi. The opening of a (generally favorable) review in The Washington Post caricatured the skepticism: “Cyber-war, cyber-this, cyber-that: What is it about the word that makes the eyes roll? . . . How authentic can a war be when things don’t blow up?”

It had been more than forty years since Willis Ware’s paper on the vulnerability of computer networks, nearly thirty years since Ronald Reagan’s NSDD-145, and more than a decade since Eligible Receiver, the Marsh Report, Solar Sunrise, and Moonlight Maze—touchstone events in the lives of those immersed in cyberspace, but forgotten, if ever known, to almost everyone else. Even the Aurora Generator Test, just six years earlier, and the offensive cyber operations in Syria, Estonia, South Ossetia, and Iraq—which had taken place more recently still—made little dent in the public consciousness.

Not until a few years after Clarke’s book—with the revelations about Stuxnet, the Mandiant report on China’s Unit 61398, and finally Edward Snowden’s massive leak of NSA documents—did cyber espionage and cyber war become the stuff of headline news and everyday conversation. Cyber was suddenly riding high, and when Obama responded to the ruckus by forming a presidential commission, it was only natural that Clarke, the avatar of cyber fright, would be among its appointees.


On August 27, the five panelists—christened, that same day, as the President’s Review Group on Intelligence and Communications Technologies—met in the White House Situation Room with the president, Susan Rice, and the heads of the intelligence agencies. The session was brief. Obama gave the group’s members the deadline for their report—December 15—and assured them that they’d have access to everything they wanted. Three of the panelists were lawyers, so he made it clear that he did not want a legal analysis. Assume that we can do this sort of surveillance on legal grounds, he said; your job is to tell me if we should be doing it as policy and, if we shouldn’t, to come up with something better.

Obama added that he was inclined to follow whatever suggestions they offered, with one caveat: he would not accept any proposal that might impede his ability to stop a terrorist attack.

Through the next four months, the group would meet at least two days a week, sometimes as many as four, often for twelve hours a day or longer, interviewing officials, attending briefings, examining documents, and discussing the implications.

On the first day, shortly before their session with the president, the five met one another, some of them for the first time, in a suite of offices that had been leased for their use. The initial plan had been for them to work inside the national intelligence director’s headquarters in Tysons Corner, Virginia, just off the Beltway, ten miles from downtown Washington. But Clarke suggested that they instead use a SCIF closer to downtown—a “sensitive compartmented information facility,” professionally guarded and structurally shielded to block intruders, electronic and otherwise, from stealing documents or eavesdropping on conversations. Clarke pointed, in particular, to a SCIF on K Street: it would keep the panelists just a few blocks from the White House, and it would preserve their independence, physically and otherwise, from the intelligence community. But Clarke’s real motive, which his colleagues realized later, was that this SCIF was located across the street from his consulting firm’s office; he preferred not to drive out to the suburbs every day amid the thick rush-hour traffic.

Inside the SCIF that first day, they also met the nine intelligence officers, on loan from various agencies, who would serve as the group’s staff. The staffers, one of them explained, would do the administrative work, set the group’s appointments, organize its notes, and, at the end, under the group’s direction of course, write the report.

The Review Group members looked at one another and smiled; a few laughed. Four of them—Clarke, Stone, Sunstein, and Swire—had written, among them, nearly sixty books, and they had every intention of writing this one, too. This was not going to be the usual presidential commission.

The next morning, they were driven to Fort Meade. Only Clarke and Morell had ever before been inside the place. Clarke’s view of the agency was more skeptical than some assumed. In Cyber War, he’d criticized the fusion of NSA and Cyber Command under a single four-star general, fearing that the move placed too much power in one person’s hands and too much emphasis on cyber offensive operations, at the expense of cyber security for critical infrastructure.

Swire, the Internet privacy scholar, had dealt with NSA officers during the Clipper Chip debate, and he remembered them as smart and professional, but that was fifteen years ago; he didn’t know what to expect now. From his study of the FISA Court, he knew about the rulings that let the NSA invoke its foreign intelligence authorities to monitor domestic phone calls; but Edward Snowden’s documents, suggesting that the agency was using its powers as an excuse to collect all calls, startled him. If this was true, it was way out of line. He was curious to hear the NSA’s response.

Stone, the constitutional lawyer and the one member of the group who’d never had contact with the intelligence world, expected to find an agency gone rogue. Stone was no admirer of Snowden: he valued certain whistleblowers who selectively leaked secret information in the interest of the public good; but Snowden’s wholesale pilfering of so many documents, of such a highly classified nature, struck him as untenable. Maybe Snowden was right and the government was wrong—he didn’t know—but he thought no national security apparatus could function if some junior employee decided which secrets to preserve and which to let fly. Still, the secrets that had come out so far, revealing the vast extent of domestic surveillance, appalled him. Stone had written a prize-winning book about the U.S. government’s tendency, throughout history, to overreact in the face of national security threats—from the Sedition Act to the McCarthy era to the surveillance of activists against the Vietnam War—and some of Snowden’s documents suggested that the reaction to 9/11 might be another case in point. Stone was already mulling ways to tighten checks and balances.

Upon arrival at Fort Meade, they were taken to a conference room and greeted by a half dozen top NSA officials, including General Alexander and his deputy, John C. “Chris” Inglis. A former Air Force pilot with graduate degrees in computer science, Inglis had spent his entire adult life in the agency, both in its defensive annex and in SIGINT operations; and he’d been among the few dozen bright young men that Ken Minihan and Mike Hayden promoted ahead of schedule as part of the agency’s post–Cold War reforms.

After some opening remarks, Alexander made his exit, returning periodically through the day, leaving Inglis in charge. Over the next five hours, Inglis and the other officials gave rotating briefings on the controversial surveillance programs, delving deeply into the details.

The most controversial program was the bulk collection of telephone metadata, as authorized by Section 215 of the Patriot Act. According to the Snowden documents, this allowed the NSA to collect and store the records of all phone calls inside the United States—not the contents of those calls, but the phone numbers of those talking, as well as the dates, times, and durations of the conversations, which could reveal quite a lot of information on their own.

Inglis told the group that, in fact, this was not how the program really operated. Under the FISA Court’s ruling on Section 215, the NSA could delve into this metadata, looking for connections among various phone numbers, only for the purpose of finding associates of three specific foreign terrorist organizations, including al Qaeda.

Clarke interrupted him. You’ve gone to all the trouble of setting up this program, he said, and you’re looking for connections to just three organizations?

That’s all we have the authority to do, Inglis replied. Moreover, if the metadata revealed that someone inside the United States had called, or been called by, a suspected terrorist, just twenty-two people in the entire NSA—twenty line personnel and two supervisors—were able to request and examine more data about that phone number. And before that data could be probed, two of those twenty personnel and at least one of the supervisors had to agree, independently, that an expanded search was worthwhile. Finally, the authority to search that person’s phone records would expire after 180 days.

If something suspicious showed up in one of those numbers, the NSA analysts could take a second hop; in other words, they could extract a list of all the calls that those numbers had made and received. But if the analysts wanted to expand the search to a third hop, looking at the numbers called to or from those phones, they would have to go through the same procedure all over again, obtaining permission from a supervisor and from the NSA general counsel. (The analysts usually did take a second hop, but almost never a third.)
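
To make the mechanics concrete, here is a minimal sketch, in Python, of how a hop-limited query over call records might work in principle. The data, the phone numbers, the function names, and the approval logic are all invented for illustration; nothing here describes the NSA’s actual software, only the idea of a seed number, a bounded number of hops, and a sign-off gate before any expanded search.

    # A hypothetical sketch of a hop-limited metadata query. The call records,
    # phone numbers, and approval rule below are invented for illustration;
    # nothing here describes the NSA's actual software.

    from collections import defaultdict

    # Call-detail records: (caller, callee, date, duration in seconds).
    # Metadata only; no content of any conversation.
    CALL_RECORDS = [
        ("+1-202-555-0101", "+92-300-555-0199", "2012-06-01", 120),
        ("+1-202-555-0101", "+1-410-555-0123", "2012-06-02", 45),
        ("+1-410-555-0123", "+1-703-555-0177", "2012-06-03", 300),
    ]

    # Index every number against the numbers it has been in contact with.
    contacts = defaultdict(set)
    for caller, callee, _date, _duration in CALL_RECORDS:
        contacts[caller].add(callee)
        contacts[callee].add(caller)

    def approved(analysts_concurring, supervisors_concurring):
        """Stand-in for the two-analyst, one-supervisor sign-off described above."""
        return analysts_concurring >= 2 and supervisors_concurring >= 1

    def hop_query(seed_number, max_hops, analysts_concurring, supervisors_concurring):
        """Return the numbers reachable from a seed within max_hops, if authorized."""
        if not approved(analysts_concurring, supervisors_concurring):
            raise PermissionError("expanded search not authorized")
        frontier, seen = {seed_number}, {seed_number}
        for _hop in range(max_hops):
            frontier = {n for num in frontier for n in contacts[num]} - seen
            seen |= frontier
        return seen - {seed_number}

    # A two-hop query from a suspect foreign number (the typical case described above).
    print(hop_query("+92-300-555-0199", max_hops=2,
                    analysts_concurring=2, supervisors_concurring=1))

In this sketch, a third hop would simply mean calling hop_query again with max_hops set to 3 after a fresh round of sign-offs, mirroring the procedure the briefers described.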

From the looks that they exchanged across the table, all five members of the Review Group seemed satisfied that the Section 215 program was on the up-and-up (assuming this portion of the briefing was confirmed in a probe of agency files): it was authorized by Congress, approved by the FISA Court, limited in scope, and monitored more fastidiously than any of them had imagined. But President Obama had told them that he didn’t want a legal opinion of the programs; he wanted a broad judgment of whether they were worthwhile.

So the members asked about the results of this surveillance: How many times had the NSA queried the database, and how many terrorist plots were stopped as a result?

One of the other senior officials had the precise numbers at hand. For all of 2012, the NSA queried the database for 288 U.S. phone numbers. As a result of those queries, the agency passed on twelve “tips” to the FBI. If the FBI found the tips intriguing, it could request a court order to intercept the calls to and from that phone number—to listen in on the calls—using NSA technology, if necessary.

So, one of the commissioners asked, how many of those twelve tips led to the halting of a plot or the capture of a terrorist?

The answer was zero. None of the tips had led to anything worth pursuing further; none of the suspicions had panned out.

Geof Stone was floored. “Uh, hello?” he thought. “What are we doing here?” The much-vaunted metadata program (a) seemed to be tightly controlled, (b) did not track every phone call in America, and, now it turned out, (c) had not unearthed a single terrorist.

Clarke asked the unspoken question: Why do you still have this program if it hasn’t produced any results?

Inglis replied that the program had hastened the speed with which the FBI captured at least one terrorist. And, he added, it might point toward a plot sometime in the future. The metadata, after all, already existed; the phone companies collected it routinely, as “business records,” and would continue to do so, with or without the NSA or Section 215. Since it was there, why not use it? If someone in the United States phoned a known terrorist, wasn’t it possible that a plot was in the works? As long as proper safeguards were taken to protect Americans’ privacy, why not look into it?

The skeptics remained unconvinced, at least for the moment. This was something to examine more deeply.

Inglis moved on to what he and his colleagues considered a far more important and damaging Snowden leak. It concerned the program known as PRISM, in which the NSA and FBI tapped into the central servers of nine leading American Internet companies—mainly Microsoft, Yahoo, and Google, but also Facebook, AOL, Skype, YouTube, Apple, and Paltalk—extracting email, documents, photos, audio and video files, and connection logs. The news stories about PRISM acknowledged that the purpose of the intercepts was to track down exclusively foreign targets, but the stories also noted that ordinary Americans’ emails and cellular phone calls got scooped up in the process as well.

The NSA had released a statement, right after the first news stories, calling PRISM “the most significant tool in the NSA’s arsenal for the detection, identification, and disruption of terrorist threats to the US and around the world.” General Alexander had publicly claimed that the data gathered from PRISM had helped discover and disrupt the planning of fifty-four terrorist attacks—a claim that Inglis now repeated, offering to share all the case files with the Review Group.

Whatever the ambiguities about the telephone metadata program, he stated, PRISM had demonstrably saved lives.

Did Americans’ calls and email get caught up in the sweep? Yes, but that was an unavoidable by-product of the technology. The NSA briefers explained to the Review Group what Mike McConnell had explained, back in 2007, to anyone who’d listen: that digital communications traveled in packets, flowing along the most efficient path; and, because most of the world’s bandwidth was concentrated in the United States, pieces of almost every email and cell phone conversation in the world flowed, at some point, through a line of American-based fiber optics.

In the age of landlines and microwave transmissions, if a terrorist in Pakistan called a terrorist in Yemen, the NSA could intercept their conversation without restraint; now, though, if the same two people, in the same overseas locations, were talking on a cell phone, and if NSA analysts wanted to latch on to a packet containing a piece of that conversation while it flowed inside the United States, they would have to get a warrant from the Foreign Intelligence Surveillance Court. It made no sense.

That’s why McConnell pushed for a revision in the law, and that’s what led to the Protect America Act of 2007 and to the FISA Amendments Act of 2008, especially Section 702, which allowed the government to conduct electronic surveillance inside the United States—“with the assistance of a communications service provider,” in the words of that law—as long as the people communicating were “reasonably believed” to be outside the United States.

The nine Internet companies, which were named in the news stories, had either complied with NSA requests to tap into their servers or been ordered by the FISA Court to let the NSA in. Either way, the companies had long known what was going on.

Much of this was clear to the Review Group, but some of the procedures that Inglis and the others described were baffling. What did it mean that callers were “reasonably believed” to be on foreign soil? How did the NSA analysts make that assessment?

The briefers went through a list of “selectors”—key-word searches and other signposts—that indicated possible “foreignness.” As more selectors were checked off, the likelihood increased. The intercept could legally get under way, once there was a 52 percent probability that both parties to the call or the email were foreign-based.

Some on the Review Group commented that this seemed an iffy calculation and that, in any case, 52 percent marked a very low bar. The briefers conceded the point. Therefore, they went on, if it turned out, once the intercept began, that the parties were inside the United States, the operation had to be shut down immediately and all the data retrieved thus far had to be destroyed.
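
The briefers did not spell out the arithmetic behind that judgment, but the general idea can be illustrated with a minimal sketch in Python. The particular selectors and the weights assigned to them below are invented assumptions; only the notion of scoring indicators of foreignness against the 52 percent threshold, and of shutting down and purging an intercept whose parties turn out to be domestic, comes from the account above.

    # A hypothetical sketch of a "foreignness" determination. The selectors and
    # weights are invented; only the idea of an estimated probability checked
    # against a threshold, and of purging an intercept that turns out to be
    # domestic, comes from the briefing described above.

    FOREIGNNESS_THRESHOLD = 0.52  # intercept may begin at or above this estimate

    # Each selector that checks out nudges the estimated probability upward.
    SELECTOR_WEIGHTS = {
        "foreign_country_code":   0.30,
        "foreign_language_usage": 0.15,
        "foreign_ip_geolocation": 0.25,
        "no_us_billing_address":  0.20,
    }

    def estimated_foreignness(selectors_hit):
        """Crude additive estimate, capped at 1.0, that both parties are foreign-based."""
        return min(1.0, sum(SELECTOR_WEIGHTS.get(s, 0.0) for s in selectors_hit))

    def may_intercept(selectors_hit):
        return estimated_foreignness(selectors_hit) >= FOREIGNNESS_THRESHOLD

    def on_new_information(parties_turn_out_domestic, collected_data):
        """If the parties prove to be inside the United States, stop and purge."""
        if parties_turn_out_domestic:
            collected_data.clear()      # destroy everything retrieved so far
            return "intercept shut down"
        return "intercept continues"

    hits = ["foreign_country_code", "foreign_ip_geolocation"]
    print(round(estimated_foreignness(hits), 2), may_intercept(hits))  # 0.55 True
    print(on_new_information(True, ["packet-1", "packet-2"]))          # intercept shut down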

The briefers also noted that, even though a court order wasn’t required for these Section 702 intercepts, the NSA couldn’t go hunting for just anything. Each year, the agency’s director and the U.S. attorney general had to certify, in a list approved by the FISA Court, the categories of intelligence targets that could be intercepted under Section 702. Then, every fifteen days after the start of a new intercept, a special panel inside the Justice Department reviewed the operation, making sure it conformed to that list. Finally, every six months, the attorney general reviewed all the start-ups and submitted them to the congressional intelligence committees.

But there was a problem in all this. To get at the surveillance target, the NSA operators had to scoop up the entire packet that carried the pertinent communication. This packet was interwoven with other packets, which carried pieces of other communications, many of them no doubt involving Americans. What happened to all of those pieces? How did the agency make sure that some analyst didn’t read those emails or listen to those cell phone conversations?

The briefers raised these questions on their own, because, just one week earlier, President Obama had declassified a ruling, issued back in October 2011 by a FISA Court judge named John Bates, that excoriated the NSA for the Section 702 intercepts generally. The fact that domestic communications were caught up in these “upstream collections,” as they were called, was no accident, Bates wrote in his ruling; it was an inherent part of the program, an inherent part of packet-switching technology. Unavoidably, then, the NSA was collecting “tens of thousands of wholly domestic communications” each year, and, as such, this constituted a blatant violation of the Fourth Amendment.

“The government,” Bates concluded, “has failed to demonstrate that it has struck a reasonable balance between its foreign intelligence needs and the requirement that information concerning U.S. persons be protected.” As a result, he ordered a shutdown of the entire Section 702 program, until the NSA devised a remedy that did strike such a balance, and he ordered the agency to delete all upstream files that had been collected to date.

This was a serious legal problem, the briefers acknowledged, but, they emphasized, it had been brought to the court’s attention by the NSA; there was no cover-up of wrongdoing. After Bates’s ruling, the NSA changed the architecture of the collection system in a way that would minimize future violations. The new system was put in place a month before the Review Group was formed; Judge Bates declared himself satisfied that it solved the problem.

All in all, the first day of the Review Group’s work was productive. The NSA officials around the table had answered every question, taken up every challenge with what seemed to be genuine candor, even an interest in discussing the issues. They’d rarely discussed these matters with outsiders—until then, no outsider had been cleared to discuss them—and they seemed to relish the opportunity. Geoffrey Stone in particular was impressed; the tenor seemed more like a university seminar than a briefing inside the most cloistered American intelligence agency.

It also seemed clear—if the officials were telling the truth (an assumption the Review Group would soon examine)—that, in one sense, the Snowden documents had been overblown. Stone’s premise going into the day—that the NSA had morphed into a rogue agency—seemed invalid: the programs that Snowden uncovered (again, assuming the briefings were accurate) had been authorized, approved, and pretty closely monitored. Most of the checks and balances that Stone had thought about proposing, it turned out, were already in place.

But to some of the panelists, certainly to Stone, Swire, and Clarke, the briefings had not dispelled a larger set of concerns that the Snowden leaks had raised. These NSA officials, who’d been briefing them all day long, seemed like decent people; the safeguards put in place, the standards of restraint, were impressive; clearly, this was like neither the NSA of the 1960s nor an intelligence agency of any other country. But what if the United States experienced a few more terrorist attacks? Or what if a different sort of president, or a truly roguish NSA director, came to power? Those restraints had been put up from the inside, and they could be taken down from the inside as well. Clearly, the agency’s technical prowess was staggering: its analysts could penetrate every network, server, phone call, or email they wished. The law might bar them from looking at, or listening to, the contents of those exchanges, but if the law were changed or ignored, there would be no physical obstacles; if the software were reprogrammed to track down political dissidents instead of terrorists, there would be no problem compiling vast databases on those kinds of targets.

In short, there was enormous potential for abuse. Stone, who’d written a book about the suppression of dissent in American history, shivered at the thought of what President Richard Nixon or FBI director J. Edgar Hoover might have done if they’d had this technology at their fingertips. And who could say, especially in the age of terror, that Americans would never again see the likes of Nixon or Hoover in the upper echelons of power?


Stone won an unexpected convert to this view in Mike Morell, the recently retired spy on the Review Group. The two shared an office in the SCIF on K Street, and Stone, a charismatic lecturer, laid out the many paths to potential abuse as well as the incidents of actual abuse in recent times, a history of which Morell claimed he knew little, despite his three decades in the CIA. (During the Church hearings, Morell was in high school, oblivious to global affairs; his posture at Langley, where he went to work straight out of college, was that of a nose-to-the-grindstone Company Man.)

Over the next four months, the group returned to Fort Meade a few times, and delegations from Fort Meade visited the group at its office a few times, as well. The more files that the group and its staff examined, the more they felt confirmed in their impressions from the first briefing.

Morell was the one who pored through the NSA case files, including the raw data from all fifty-four terrorist plots that Alexander and Inglis claimed were derailed because of the PRISM program under Section 702 of the FISA Amendments Act, as well as a few plots that they were now claiming, belatedly, had been unearthed because of the bulk collection of telephone metadata, authorized by Section 215 of the Patriot Act. Morell and the staff, who also reviewed the files, concluded that the PRISM intercepts did play a role in halting fifty-three of those fifty-four plots—a remarkable validation of the NSA’s central counterterrorist program. However, in none of those fifty-three files did they find evidence that metadata played a substantial role. Nor were they persuaded by the few new cases that Alexander had sent to the group: yes, in those cases, a terrorist’s phone number showed up in the metadata, but it showed up in several other intercepts, too. Had there never been a Section 215, had metadata never been collected in bulk, the NSA or the FBI would still have uncovered those plots.

This conclusion came as a surprise. Morell was inclined to assume that highly touted intelligence programs produced results. His findings to the contrary led Clarke, Stone, and Swire to recommend killing the metadata program outright. Morell wasn’t prepared to go that far; neither was Sunstein. Both bought the argument that, even if it hadn’t stopped any plots so far, it might do so in the future. Morell went further still, arguing that the absence of results suggested that the program should be intensified. For a while, the group’s members thought they’d have to issue a split verdict on this issue.

Then, during one of the meetings at Fort Meade, General Alexander told the group that he could live with an arrangement in which the telecom companies held on to the metadata and the NSA could gain access to specified bits of it only through a FISA Court order. It might take a little longer to obtain the data, but not by much, a few hours maybe; and the new rule, Alexander suggested, could provide for exceptions, allowing access, with a post-facto court order, in case of an emergency.

Alexander also revealed that the NSA once had an Internet metadata program, but it proved very expensive and yielded no results, so, in 2011, he’d terminated it.

To the skeptics on the Review Group, this bit of news deepened their suspicions about Section 215 and the worth of metadata generally. The NSA had a multibillion-dollar budget; if Internet metadata had produced anything of promise, Alexander could have spent more money to expand its reach. The fact that he didn’t, the fact that he killed the program rather than double down on the investment, cast doubt on the concept’s value.

Even Morell and Sunstein seemed to soften their positions: if Alexander was fine with storing metadata outside the NSA, that might be a compromise the entire group could get behind. Morell embraced the notion with particular enthusiasm: metadata would still exist; but removing it from NSA headquarters would prevent some future rogue director from examining the data at will—would minimize the potential for abuse that Stone had convinced him was a serious issue.

The brief dispute over metadata—whether to kill the program or expand it—sparked one of the few fits of rancor within the group. That such moments were so rare was itself a surprise: given their disparate backgrounds and beliefs, the members had expected to be at each other’s throats routinely. From early on, though, the atmosphere was harmonious.

This camaraderie took hold on the second day of their work when the five went to the J. Edgar Hoover Building—FBI headquarters—in downtown Washington. The group’s staff had requested detailed briefings on the bureau’s relationship with the NSA and on its own version of metadata collection, known as National Security Letters, which, under Section 505 of the Patriot Act, allowed access to Americans’ phone records and other transactions that were deemed “relevant” to investigations into terrorism or clandestine intelligence activities. Unlike the NSA’s metadata program, the FBI’s had no restrictions at all: the letters required no court order; any field officer could issue one, with the director’s authorization; and the recipients of a letter were prohibited from ever revealing that they’d received it. (Until a 2006 revision, they couldn’t even inform their lawyer.) Not merely the potential for abuse but actual instances of abuse seemed very likely.

When the five arrived at the bureau’s headquarters, they were met not by the director, nor by his deputy, but by the third-ranking official, who took leave after escorting them to a conference room, where twenty FBI officials sat around a table, prepared to drone through canned presentations, describing their jobs, one by one, for the hour that the group had been allotted.

Ten minutes into this dog-and-pony show, Clarke asked about the briefings that they’d requested. Specifically, he wanted to know how many National Security Letters the FBI issued each year and how the bureau measured their effectiveness. One of the officials responded that only the regional bureaus had those numbers, that no one had collated them nationally, and that no one had devised any measure of effectiveness.

The canned briefings resumed, but after a few more minutes, Clarke stood up and exclaimed, “This is bullshit. We’re out of here.” He walked out of the room; the other four sheepishly followed, while the FBI officials sat in shock. At first, Clarke’s colleagues were a bit mortified, too; they’d heard about his antics and wondered if this was going to be standard procedure.

But by the next day, it was clear that Clarke had known exactly what he was doing: word quickly spread about “the bullshit briefing,” and from that point on, no federal agency dared to insult the group with condescending show-and-tell; only a few agencies proved to be very useful, but they all at least tried to be substantive, and the FBI even called back for a second chance.

Clarke’s act emboldened his colleagues to press more firmly for answers to their questions. The nature of their work reinforced this solidarity. They were the first group of outsiders to investigate this subject with the president’s backing, and they derived an esprit de corps from the distinction. More than that, they found themselves agreeing on almost everything, because most of the facts seemed so clear. Peter Swire, who’d been nervous about reigniting fifteen-year-old tensions with Clarke, found himself getting along grandly with his former rival and gaining confidence in his own judgments the more they aligned with Clarke’s.

As the air lightened from cordiality to jollity, they started calling themselves the “five guys,” after the name of a local hamburger joint, and referring to the big book they’d soon be writing as “The Five Guys Report.”

Their esprit intensified with the realization that they were pretty much the only serious monitors in town. They met on Capitol Hill with the select intelligence committees, and came away concluding that their members had neither the time nor the resources for deep oversight. They spoke with a few former FISA Court judges and found them too conciliatory by nature.

Good thing, they concluded, that the NSA had an inside army of lawyers to assure compliance with the rules, because if it didn’t, no one on the outside would have any way of knowing whether the agency was or wasn’t a den of lawlessness. The five guys agreed that their task, above all else, was to come up with ways to strengthen external controls.

They divvied up the writing chores, each drafting a section or two and inserting ideas on how to fix the problems they’d diagnosed. Cut, pasted, and edited, the sections added up to a 303-page report, with forty-six recommendations for reform.

One of the key recommendations grew out of the group’s conversation with General Alexander: all metadata should be removed from Fort Meade and held by the private telecom companies or some other third party, with the NSA allowed access only through a FISA Court order. The group was particularly firm on this point. Even Mike Morell had come to view this recommendation as the report’s centerpiece: if the president rejected it, he felt, the whole exercise would have been pointless.

Another proposal was to bar the FBI from issuing National Security Letters without a FISA Court order and, in any case, to let recipients disclose that they’d received such a letter after 180 days, unless a judge extended the term of secrecy for specific national security reasons. The point of both recommendations was, as the report put it, to “reduce the risk, both actual and perceived, of government abuse.”

The group also wrote that the FISA Court should include a public interest advocate, that NSA directors should be confirmed by the Senate, that they should not take on the additional post of U.S. cyber commander (on the grounds that dual-heading CyberCom and the NSA gave too much power to one person), and that the Information Assurance Directorate—the cyber security side of Fort Meade—should be split off from the NSA and turned into a separate agency of the Defense Department.

Another recommendation was to bar the government from doing anything to “subvert, undermine, weaken, or make vulnerable generally available commercial software.” Specifically, if NSA analysts discovered a zero-day vulnerability—a flaw that the software’s maker had not yet discovered or patched—they should be required to disclose it, so the hole could be patched at once, except in “rare instances,” when the government could “briefly authorize” using zero-days “for high-priority intelligence collection,” though, even then, they could do so only after approval by a “senior interagency review involving all appropriate departments.”

This was one of the group’s more esoteric, but also radical, recommendations. Zero-day vulnerabilities were the gemstones of modern SIGINT, prized commodities that the agency trained its top sleuths—and sometimes paid private hackers—to unearth and exploit. The proposal was meant, in part, to placate American software executives, who worried that the foreign market would dry up if prospective customers assumed the NSA had carved back doors into their products. But it was also intended to make computer networks less vulnerable; it was a declaration that the needs of cyber security should supersede those of cyber offensive warfare.

Finally, lest anyone interpret the report as an apologia for Edward Snowden (whose name appeared nowhere in the text), ten of the forty-six recommendations dealt with ways to tighten the security of highly classified information inside the intelligence community, including procedures to prevent system administrators—which had been Snowden’s job at the NSA facility in Oahu—from gaining access to documents unrelated to their work.

It was a wide-ranging set of proposals. Now what to do with them? When the five had first met, back in late August, they’d started to discuss how to handle the many disagreements that they anticipated. Should the report contain dissenting footnotes, majority and minority chapters, or what? They’d never finished that conversation, and now, here they were, with the mid-December deadline looming.

One of the staffers suggested listing the forty-six recommendations on an Excel spreadsheet, with the letters Y and N (for Yes and No) beside each one, and giving a copy of the sheet to each of the five members, who would mark it up, indicating whether they agreed or disagreed with each recommendation. The staffer would then tabulate the results.

After counting the votes, the staffer looked up and said, “You won’t believe this.” The five guys had agreed, unanimously, on all forty-six.


On December 13, two days before deadline, the members of the Review Group turned in their report, titled Liberty and Security in a Changing World. To their minds, they had fulfilled their main task—as their report put it, “to promote public trust, while also allowing the Intelligence Community to do what must be done to respond to genuine threats”—but they had also exceeded that narrow mandate, outlining truly substantial reforms to the intelligence collection system.

Their language was forthright in ways bound to irritate all sides of the debate, which had only intensified in the six months since the onslaught of Snowden-leaked documents. “Although recent disclosures and commentary have created the impression, in some quarters, that NSA surveillance is indiscriminate and pervasive across the globe,” the report stated, “that is not the case.” However, it went on, the Review Group did find “serious and persistent instances of noncompliance in the Intelligence Community’s implementation of its authorities,” which, “even if unintentional,” raised “serious concerns” about its “capacity to manage” its powers “in an effective, lawful manner.”

To put it another way (and the point was made several times, throughout the report), while the group found “no evidence of illegality or other abuse of authority for the purpose of targeting domestic political activities,” there was, always present, “the lurking danger of abuse.” In a passage that might have come straight out of Geoffrey Stone’s book, the report stated, “We cannot discount the risk, in light of the lessons of our own history, that at some point in the future, high-level government officials will decide that this massive database of extraordinarily sensitive private information is there for the plucking.”

On December 18, at eleven a.m., President Obama met with the group again in the Situation Room. He’d read the outlines of the report and planned to peruse it during Christmas break at his vacation home in Hawaii.

A month later, on January 17, 2014, in a speech at the Justice Department, Obama announced a set of new policies prompted by the report. The first half of his speech dwelled on the importance of intelligence throughout American history: Paul Revere’s ride, warning that the British were coming; reconnaissance balloons floated by the Union army to gauge the size of Confederate regiments; the vital role of code-breakers in defeating Nazi Germany and Imperial Japan. Similarly, today, he said, “We cannot prevent terrorist attacks or cyber threats without some capability to penetrate digital communications.”

The message had percolated throughout the national security bureaucracy, and Obama had absorbed it as well: that, in the cyber world, offense and defense stemmed from the same tools and techniques. (In an interview several months later with an IT webzine, Obama, the renowned hoops enthusiast, likened cyber conflict to basketball, “in the sense that there’s no clear line between offense and defense, things are going back and forth all the time.”) And so, the president ignored the Review Group’s proposals to split the NSA from Cyber Command or to place the defensive side of NSA in a separate agency.

However, Obama agreed with the group’s general point on “the risk of government overreach” and the “potential for abuse.” And so, he accepted many of its other recommendations. He rejected the proposal to require FISA Court orders for the FBI’s National Security Letters, but he did limit how long the letters could remain secret. (He eventually settled on a limit of 180 days, with a court order required for an extension.) There would be no more surveillance of “our close friends and allies” without some compelling reason (a reference to the monitoring of Angela Merkel’s cell phone, though Obama’s wording allowed a wide berth for exceptions). And his national security team would conduct annual reviews of the surveillance programs, weighing security needs against policies toward alliances, privacy rights, civil liberties, and the commercial interests of U.S. companies.

This last idea led, three months later, to a new White House policy barring the use of a zero-day exploit, unless the NSA made a compelling case that the pros outweighed the cons. And the final verdict on its case would be decided not by the NSA director but by the cabinet secretaries in the NSC and, ultimately, by the president. This was potentially a very big deal. Whether it would really limit the practice—whether it amounted to a political check or a rubber stamp—was another matter.I

Finally, Obama spoke of the most controversial program, the bulk collection of telephone metadata under Section 215 of the Patriot Act. First, as an immediate step, he ordered the NSA to restrict its data searches to two hops, down from its previously allowed limit of three. (Though it sounded significant, this had little practical impact, as the NSA almost never took a third hop.) Second, and more significant, he endorsed the proposal to store the metadata with a private entity and to allow NSA access only after a FISA Court order.

These endorsements seemed doomed, though, because any changes in the storage of metadata or in the composition of the FISA Court would have to be voted on by Congress. Under ordinary conditions, Congress—especially this Republican-controlled Congress—wouldn’t schedule such a vote: its leaders had no desire to change the operations of the intelligence agencies or to do much of anything that President Obama wanted them to do.

But these weren’t ordinary conditions. The USA Patriot Act had been passed by Congress, under great pressure, in the immediate aftermath of the September 11 attacks: the bill came to the floor hot off the printing presses; almost no one had time to read it. In exchange for their haste in passing it, key Democratic legislators insisted, over intense opposition by the Bush White House, that a sunset clause—an expiration date—be written into certain parts of the law (including Section 215, which allowed the NSA to collect and store metadata), so that Congress could extend its provisions, or let them lapse, at a time allowing more deliberation.

In 2011, when those provisions had last been set to expire, Congress voted to extend them until June 2015. In the interim four years, three things happened. First, and pivotally, came Edward Snowden’s disclosures about the extent of NSA domestic surveillance. Second, the five guys’ report concluded that the metadata program hadn’t nabbed a single terrorist and recommended several reforms to reduce the potential for abuse.

Third, on May 7, just weeks before the next expiration date, the U.S. 2nd Circuit Court of Appeals ruled that Section 215 of the Patriot Act did not in fact authorize anything so broad as the NSA’s bulk metadata collection program—that the program was, in fact, illegal. Section 215 permitted the government to intercept and store data that had “relevance” to an “investigation” of a terrorist plot or group. The NSA reasoned that, in tracing the links of a terrorist conspiracy, it was impossible to know what was relevant—who the actors were—ahead of time, so it was best to create an archive of calls that could be plowed through in retrospect; it was necessary, by this logic, to collect everything because anything might prove relevant; to find a needle in a haystack, you needed access to “the whole haystack.” The FISA Court had long ago accepted the NSA’s logic, but now the 2nd Circuit Court rejected it as “unprecedented and unwarranted.” In the court case that culminated in the ruling, the Justice Department (which was defending the NSA position) likened the metadata collection program to the broad subpoena powers of a grand jury. But the court jeered at the analogy: grand juries, it noted, are “bounded by the facts” of a particular investigation and “by a finite time limitation,” whereas the NSA metadata program required “that the phone companies turn over records on an ‘ongoing daily basis’—with no foreseeable end point, no requirement of relevance to any particular set of facts, and no limitations as to subject matter or individuals covered.”

The judges declined to rule on the program’s constitutionality; they even allowed that Congress could authorize the metadata program, if it chose to do so explicitly. And so it was up to Congress—and its members couldn’t evade the moment of truth. Owing to the sunset clause, the House and Senate had to take a vote on Section 215, one way or the other; if they didn’t, the metadata program would expire by default.

In this altered climate, the Republican leaders couldn’t muster majority support to sustain the status quo. Moderates in Congress drafted a bill called the USA Freedom Act, which would keep metadata stored with the telecom companies and allow the NSA access only to narrowly specified pieces of it, and only after obtaining a FISA Court order to do so. The new law would also require the FISA Court to appoint a civil-liberties advocate to argue, on occasion, against NSA requests; and it would require periodic reviews to declassify at least portions of FISA Court rulings. The House passed the reform bill by a wide majority; the Senate, after much resistance by the Republican leadership, had no choice but to pass it as well.

Against all odds, owing to the one bit of farsighted caution in a law passed in 2001 amid the panic of a national emergency, Congress approved the main reforms of NSA practices, as recommended by President Obama’s commission—and by President Obama himself.

The measures wouldn’t change much about cyber espionage, cyber war, or the long reach of the NSA, to say nothing of its foreign counterparts. For all the political storms that it stirred, the bulk collection of domestic metadata made up a tiny portion of the agency’s activities. But the reforms would block a tempting path to potential abuse, and they added an extra layer of control, albeit a thin one, on the agency’s power—and its technologies’ inclination—to intrude into everyday life.


On March 31, two and a half months after Obama’s speech at the Justice Department, in which he called for those reforms, Geoffrey Stone delivered a speech at Fort Meade. The NSA staff had asked him to recount his work on the Review Group and to reflect on the ideas and lessons he’d taken away.

Stone started off by noting that, as a civil libertarian, he’d approached the NSA with great skepticism, but was quickly impressed by its “high degree of integrity” and “deep commitment to the rule of law.” The agency made mistakes, of course, but they were just that—mistakes, not intentional acts of illegality. It wasn’t a rogue agency; it was doing what its political masters wanted and what the courts allowed, and, while reforms were necessary, its activities were generally lawful.

His speech lavished praise a little while longer on the agency and its employees, but then it took a sharp turn. “To be clear,” he emphasized, “I am not saying that citizens should trust the NSA.” The agency needed to be held up to “constant and rigorous review.” Its work was “important to the safety of the nation,” but, by nature, it posed “grave dangers” to American values.

“I found, to my surprise, that the NSA deserves the respect and appreciation of the American people,” he summed up. “But it should never, ever, be trusted.”


I. The questions to be asked, in considering whether to exploit a zero-day vulnerability, were these: To what extent is the vulnerable system used in the critical infrastructure; in other words, does the vulnerability, if left unpatched, pose significant risk to our own society? If an adversary or criminal group knew about the vulnerability, how much harm could it inflict? How likely is it that we would know if someone else exploited it? How badly do we need the intelligence we think we can get from exploiting it? Are there other ways to get the intelligence? Could we exploit the vulnerability for a short period of time before disclosing and patching it?