CHAPTER 8


TAILORED ACCESS

ART MONEY was frustrated. He was the ASD(C3I), the assistant secretary of defense for command, control, communications, and intelligence—and thus the Pentagon’s point man on information warfare, its civilian liaison with the NSA. The past few years should have vindicated his enthusiasms. Eligible Receiver, Solar Sunrise, and Moonlight Maze had spurred an awareness that the military’s computer networks were vulnerable to attack. J-39’s operations in the Balkans proved that the vulnerabilities of other countries’ networks could be exploited for military gain—that knowing how to exploit them could give the United States an advantage in wartime. And yet, few of America’s senior officers evinced the slightest interest in the technology’s possibilities.

Money’s interest in military technology dated back to a night in 1957, when, as a guard at an Army base in California, he looked up at the sky and saw Sputnik II, the Soviet Union’s second space satellite, orbiting the earth before the Americans had launched even a first—a beacon of the future, at once fearsome and enthralling. Four years later, he enrolled at San Jose State for an engineering degree. Lockheed’s plant in nearby Sunnyvale was hiring any engineer who could breathe. Money took a job on the night shift, helping to build the system that would launch the new Polaris missile from a tube in a submarine. Soon he was working on top secret spy satellites and, after earning his diploma, the highly classified devices that intercepted radio signals from Soviet missile tests.

From there, he went to work for ESL, the firm that Bill Perry had founded to develop SIGINT equipment for the NSA and CIA; by 1990, he had risen to company president. Six years later, at the urging of Perry, his longtime mentor, who was now secretary of defense, he came to work at the Pentagon, as assistant secretary of the Air Force for research, development, and acquisition.

That job put him in frequent touch with John Hamre, the Pentagon’s comptroller. In February 1998, Solar Sunrise erupted; Hamre, now deputy secretary of defense, realized, to his alarm, that no one around him knew what to do; so he convinced his boss, Secretary of Defense William Cohen, to make Art Money the new ASD(C3I).

Money was a natural for the job. Hamre was set on turning cyber security into a top priority; Money, one of the Pentagon’s best-informed and most thoroughly connected officials on cyber matters, became his chief adviser on the subject. It was Money who suggested installing intrusion-detection systems on Defense Department computers. It was Money who brought Dusty Rhoads into J-39 after hearing about his work in the Blue Flag war games at the 609th Information Warfare Squadron. It was Money who brought together J-39, the NSA, and the CIA during the campaign in the Balkans.

The concept of information warfare—or cyber warfare, as it was now called—should have taken off at this point, but it hadn’t because most of the top generals were still uninterested or, in some cases, resistant.

In the summer of 1998, in the wake of Solar Sunrise, Money was instrumental in setting up JTF-CND—Joint Task Force-Computer Network Defense—as the office to coordinate protective measures for all Defense Department computer systems, including the manning of a 24/7 alert center and the drafting of protocols spelling out what to do in the event of an attack. In short, Money was piecing together the answer to the question Hamre posed at the start of Solar Sunrise: “Who’s in charge?”

The initial plan was to give Joint Task Force-Computer Network Defense an offensive role as well, a mandate to develop options for attacking an adversary’s networks. Dusty Rhoads set up a small, hush-hush outpost to do just that. But he, Money, and Soup Campbell, the one-star general in charge of the task force, knew that the services wouldn’t grant such powers to a small bureau with no command authority.

However, Campbell made a case that, to the extent the military services had plans or programs for cyber offensive operations (and he knew they did), the task force ought, at the very least, to be briefed on them. His argument was unassailable: the task force analysts needed to develop defenses against cyber attacks; knowing what kinds of attacks the U.S. military had devised would help them expand the range of defenses—since, whatever America was plotting against its adversaries, its adversaries would likely soon be plotting against America.

Cohen bought the argument and wrote a memo to the service chiefs, ordering them to share their computer network attack plans with the joint task force. Yet at a meeting chaired by John Hamre, the vice chiefs of the Army, Navy, and Air Force—speaking on behalf of their bosses—blew the order off. They didn’t explicitly disobey the order; that would have been insubordination, a firing offense. Instead, they redefined their attack plans as something else, so they could say they had no such plans to brief. But their evasion was obvious: they just didn’t want to share these secrets with others, not even if the secretary of defense told them to do so.

Clearly, the task force needed a broader charter and a home with more power. So, on April 1, 2000, JTF-CND became JTF-CNO, the O standing for “Operations,” and those operations included not just Computer Network Defense but also, explicitly, Computer Network Attack. The new task force was placed under the purview of U.S. Space Command, in Colorado Springs. It was an odd place to be, but SpaceCom was the only unit that wanted the mission. In any case, it was a command, invested with war-planning and war-fighting powers.

Still, Money, Campbell, Hamre, and the new task force commander, Major General James D. Bryan, saw this, too, as a temporary arrangement. Colorado Springs was a long way from the Pentagon or any other power center; and the computer geeks from the task force were complaining that their counterparts at Space Command, who had to be meshed into the mission, didn’t know anything about cyber offense.

Money felt that the cyber missions—especially those dealing with cyber offense—should ultimately be brought to the Fort Meade headquarters of the NSA. And so did the new NSA director, Lieutenant General Michael Hayden.


Mike Hayden came to the NSA in March 1999, succeeding Ken Minihan. It wasn’t the first time Hayden had followed in Minihan’s footsteps. For close to two years, beginning in January 1996, Hayden commanded the Air Intelligence Agency, headquartered at Kelly Air Force Base in San Antonio. Kelly was where Minihan had run the Air Force Information Warfare Center, which pioneered much of what came to be called cyber warfare—offense and defense—and by the time Hayden arrived, the center had grown in sophistication and stature.

Hayden knew little about the subject before his tenure at Kelly, but he quickly realized its possibilities. A systematic thinker who liked to place ideas in categories, he came up with a mission concept that he called GEDA—an acronym for Gain (collect information), Exploit (use the information to penetrate the enemy’s networks), Defend (prevent the enemy from penetrating our networks), Attack (don’t just penetrate the enemy network—disable, disorient, or destroy it).

At first glance, the concept seemed obvious. But Hayden’s deeper point was that all these missions were intertwined—they all involved the same technology, the same networks, the same actions: intelligence and operations in cyberspace—cyber security, cyber espionage, and cyber war—were, in a fundamental sense, synonymous.

Hayden was stationed overseas, as the intelligence chief for U.S. forces in South Korea, when Solar Sunrise and Moonlight Maze stirred panic in senior officialdom and made at least some generals realize that the trendy talk about “information warfare” might be worthy of attention. Suddenly, if just to stake a claim in upcoming budget battles, each of the services hung out a cyber shingle: the Army’s Land Information Warfare Activity, the Navy’s Naval Information Warfare Activity, and even a Marine Corps Computer Network Defense unit joined the long-standing Air Force Information Warfare Center in the enterprise.

Many of these entities had sprung up during Ken Minihan’s term as NSA director, and the trend worried him for three reasons. First, there were financial concerns: the defense budget was getting slashed in the wake of the Cold War; the NSA’s share was taking still deeper cuts; and he didn’t need other, more narrowly focused entities—novices in a realm that the NSA had invented and mastered—to drain his resources further. Second, some of these aspiring cyber warriors had poor operational security; they were vulnerable to hacking by adversaries, and if an adversary broke into their networks, he might gain access to files that the NSA had shared.

Finally, there was an existential concern. When Minihan became NSA director, Bill Perry told him, “Ken, you need to preserve the mystique of Fort Meade.” The mystique—that was the key to the place, Minihan realized early on: it was what swayed presidents, cabinet secretaries, committee chairmen, and teams of government lawyers to let the NSA operate in near-total secrecy, and with greater autonomy than the other intelligence agencies. Fort Meade was where brilliant, faceless code-makers and code-breakers did things that few outsiders could pretend to understand, much less duplicate; and, for nearly the entire post–World War II era, they’d played a huge, if largely unreported, role in keeping the peace.

Now, the mystique was unraveling. With the Cold War’s demise, Minihan gutted the agency’s legendary A Group, the Soviet specialists, in order to devote more resources to emerging threats, including rogue regimes and terrorists. The agency could still boast of its core technical base: the cryptologists, the in-house labs, and their unique partnership with obscure outside contractors—that was where the mystique still glowed. Minihan needed to build up that base, expand its scope, shift its agenda, and preserve its mastery—not let it be diluted by lesser wannabes splashing in the same stream.

Amid the profusion of entities claiming a piece of Fort Meade’s once-exclusive turf, and the parallel profusion of terms for what was essentially the same activity (“information warfare,” “information operations,” “cyber warfare,” and so forth), Minihan tried to draw the line. “I don’t care what you call it,” he often said to his political masters. “I just want you to call me.”

To keep NSA at the center of this universe, Minihan created a new office, at Fort Meade, called the IOTC—the Information Operations Technology Center. The idea was to consolidate all of the military’s sundry cyber shops: not to destroy them—he didn’t want to set off bureaucratic wars—but to corral them into his domain.

He had neither the legal authority nor the political clout to do this by fiat, so he asked Art Money, whom he’d known for years and who’d just become ASD(C3I), to scour the individual services’ cyber budgets for duplicative programs; no surprise, Money found many. He took his findings to John Hamre, highlighted the redundancies, and made the pitch. No agency, Money said, could perform these tasks better than the NSA—which, he added, happened to have an office called the IOTC, which would be ideal for streamlining and coordinating these far-flung efforts. Hamre, who had recently come to appreciate the NSA’s value, approved the idea and put the new center under Money’s supervision.

When Hayden took over NSA, Money pressed him to take the center in a different direction. Minihan’s aim, in setting up the IOTC, was to emphasize the T—Technology: that was the NSA’s chief selling point, its rationale for remaining at the top of the pyramid. Money wanted to stress the O—Operations: he wanted to use the IOTC as a back door for the NSA to get into cyber offensive operations.

The idea aroused controversy on a number of fronts. First, within the NSA, many of the old-timers didn’t like it. The point of SIGINT, the prime NSA mission, was to gather intelligence by penetrating enemy communications; if the NSA attacked the source of those communications, then the intelligence would be blown; the enemy would know that we knew how to penetrate his network, and he’d change his codes, revamp his security.

Second, Money’s idea wasn’t quite legal. Broadly, the military operated under Title 10 of the federal statutes, while the intelligence agencies, including the NSA, fell under Title 50. Title 10 authorized the use of force; Title 50 did not. The military could use intelligence gathered by Title 50 agencies as the basis for an attack; the NSA could not launch attacks on its own.

Money and Hayden thought a case could be made that the IOTC maneuvered around these strictures because, formally, it reported to the secretary of defense. But the legal thicket was too dense for such a simple workaround. Each of the military services would have a stake in any action that the IOTC might take, as would the CIA and possibly other agencies. It was a ramshackle structure from the start. It made sense from a purely technical point of view: as Minihan and Hayden both realized from their commands in San Antonio, computer network offense and defense were operationally the same—but the legal authorities were separate.

Hayden considered the IOTC a good enough arrangement for the moment. At least, as Minihan had intended, it protected the NSA’s dominance in the cyber arena. Expanding its realm over the long haul would have to wait. Meanwhile, Hayden faced a slew of other, daunting problems from almost the moment he took office.

Soon after his arrival at Fort Meade, he got wind of a top secret report, written a few months earlier for the Senate Select Committee on Intelligence, titled “Are We Going Deaf?” It concluded that the NSA, once on the cutting edge of SIGINT technology, had failed to keep pace with the changes in global telecommunications; that, while the world was shifting to digital cell phones, encrypted email, and fiber optics, Fort Meade remained all too wedded to tapping land lines and analog circuits and to intercepting radio-frequency transmissions.

The report was written by the Technical Advisory Group, a small panel of experts that the Senate committee had put together in 1997 to analyze the implications of the looming digital age. Most of the group’s members were retired NSA officials, who had urged their contacts on the committee to create the advisory group precisely because they were disturbed by Fort Meade’s recalcitrant ways and thought that outside prodding—especially from the senators who held its purse strings—might push things forward.

One of the group’s members, and the chief (though unnamed) author of this report, was former NSA director Bill Studeman. A full decade had passed since Studeman, upon arriving at Fort Meade, commissioned two major studies: one, projecting how quickly the world would shift from analog to digital; the other, concluding that the skill sets of NSA personnel were out of whack with the requirements of the impending new world.

In the years since, Studeman had served as deputy director of the CIA, joined various intelligence advisory boards, and headed up projects on surveillance and information warfare as vice president of Northrop Grumman Corporation. In short, he was still plugged in, and he was appalled by the extent to which the NSA was approaching obsolescence.

The Senate committee took his report very seriously, citing it in its annual report and threatening to slash the NSA budget if the agency didn’t bring its practices up to date.

Studeman’s report was circulated while Minihan was still NSA director, and it irritated him. He’d instigated a lot of reforms already; the agency had come a long way since the Senate committee discovered that it was spending only $2 million a year on projects to penetrate the Internet. But he didn’t speak out against the report: if the senators believed it, maybe they’d boost the NSA’s budget. That was his biggest problem: he knew what needed to be done; he just needed more money to do it.

But Hayden, when he took over Fort Meade, took Studeman’s report as gospel and named a five-man group of outsiders—senior executives at aerospace contractors who’d managed several intelligence-related projects—to conduct a review of the NSA’s organization, culture, management, and priorities. With Hayden’s encouragement, they pored over the books and interviewed more than a hundred officials, some inside the NSA, some at other agencies that had dealings—in some cases, contentious dealings—with Fort Meade.

On October 12, after two months of probing, the executives briefed Hayden on their findings, which they, soon after, summarized in a twenty-seven-page report. The NSA, they wrote, suffered from a “poorly communicated mission,” a “lack of vision,” a “broken personnel system,” “poor” relations with other agencies that depended on its intelligence, and an “inward-looking culture,” stemming in part from its intense secrecy. As a result of all these shortcomings, NSA managers tended to protect its “legacy infrastructure” rather than develop “new methods to deal with the global network.” If it persisted in its outmoded ways, the agency “will fail,” and its “stakeholders”—the president, secretary of defense, and other senior officials—“will go elsewhere” for their intelligence.

Few of the group’s observations or critiques were new. NSA directors, going back twenty years, had spoken of the looming gap between the agency’s tools and the digital world to come. Studeman and his mentor, Bobby Ray Inman, warned of the need to adapt, though too far ahead of time for their words to gain traction. Mike McConnell prodded the machine into motion, but then got caught up in the ill-fated Clipper Chip. Ken Minihan saw the future more clearly than most, but he wasn’t a natural manager. He was a good old boy from Texas who dispensed with the formalities of most general officers and played up his down-home style (some called it “his Andy Griffith bit”). Everyone liked him, but few understood what he was talking about. He would drop Air Force aphorisms, like “We’re gonna make a hard right turn with our blinkers off,” which flew over everyone’s head. He’d issue stern pronouncements, like “One team, one mission,” but this, too, inspired only uncertainty: he seemed to be saying that someone should work more closely with someone else, but who and with whom—the SIGINT and Information Assurance Directorates? the NSA and the CIA? the intelligence community and the military? No one quite knew.

Hayden, by contrast, was a modern military general, less brash, certainly less folksy, than Minihan: more of a tight-cornered, chart-sketching manager. As a follow-up to the five-man group’s briefing, he circulated throughout the NSA his own eighteen-page, bluntly worded memo titled “The Director’s Work Plan for Change,” summarizing much of the executives’ report and outlining his solutions.

His language was as stark as his message. The NSA, he wrote, “is a misaligned organization,” its storied legacy “in great peril.” It needed bold new leadership, an integrated workforce in which signals intelligence and information security would act in concert, not at odds with each other (this is what Minihan had meant by “one team, one mission”), and—above all—a refocused SIGINT Directorate that would “address the challenge of technological change.”

He concluded, “We’ve got it backwards. We start with our internal tradecraft, believing that customers will ultimately benefit”—when, in fact, the agency needed to focus first on the needs of the customers (the White House, the Defense Department, and the rest of the intelligence community), then align its tradecraft to those tasks.

Minihan had gone some distance down that road. He decimated the A Group, the Soviet specialists who’d helped win the Cold War, but he didn’t erect a structure, or clearly define a new mission worthy of the title A Group, in its place. It wasn’t entirely his fault: as he frequently complained, he lacked the money, the time, and any direction from his political masters. Ideally, as Hayden would note, the NSA goes out and gets what the nation’s leaders want it to get; but no one high up gave Minihan any marching orders. Then again, the lack of communication went both ways: no one high up knew what the NSA could offer, apart from the usual goods, which were fine, as far as they went, but they fell short in a world where its tools and techniques were “going deaf.”

One of the agency’s main problems, according to the aerospace executives’ report, was a “broken personnel system.” Employees tended to serve for life, and they were promoted through the ranks at the same pace, almost automatically, with little regard for individual talent. This tenured system had obstructed previous stabs at reform: the upper rungs were occupied by people who’d come up in the 1970s and 1980s, when money flowed freely, the enemy was clear-cut, and communications—mainly telephone calls and radio-frequency transmissions—could be tapped by a simple circuit or scooped out of the air.

Hayden changed the personnel system, first of all. On November 15, he inaugurated “One Hundred Days of Change.” Before, senior employees wore special badges and rode in special elevators; now, everyone would wear the same badges, and all elevators would be open to all. Hayden also scoured personnel evaluations, consulted a few trusted advisers, and—after the first two weeks—fired sixty people who had been soaking up space for decades and promoted sixty more-competent officials, most of them far more junior in age and seniority, to fill the vacancies.

Much grumbling ensued, but then, on January 24, 2000, ten weeks into Hayden’s campaign, an alarm bell went off: the NSA’s main computer system crashed—and stayed crashed for seventy-two hours. The computer was still storing intelligence that the field stations were gathering from all over the world, but no one at Fort Meade could gain access to it. Raw intelligence—unsifted, unprocessed, unanalyzed—was all but useless; for three days, the NSA was, in effect, shut down.

At first, some suspected sabotage or a delayed effect of Y2K. But the in-house tech crews quickly concluded that the computer had simply been overloaded; and the damage was so severe that they’d have to reconstruct the data and programs after it came back online.

The grumbling about Hayden ceased. If anyone had doubted that big changes were necessary, there was no doubt now.

Another criticism in the executives’ report was that the SIGINT Directorate stovepiped its data by geography—one group looked at signals from the former Soviet Union, another from the Middle East, another from Asia—whereas, out in the real world, all communications passed through the same network. The World Wide Web was precisely that—worldwide.

In their report, the executives suggested a new organizational chart for the SIGINT Directorate, broken down not along regional lines (which no longer made sense) but rather into “Global Response,” “Global Network,” and “Tailored Access.”

“Global Response” would confront day-to-day crises without diverting resources from the agency’s steady tasks. This had been a big source of Minihan’s frustrations: the president or secretary of defense kept requesting so much intelligence on one crisis after another—Saddam Hussein’s arms buildup, North Korea’s nuclear program, prospects for Middle East peace talks—that he couldn’t focus on structural reforms.

“Global Network” was the new challenge. In the old days, NSA linguists would sit and listen to live feeds, or stored tapes, of phone conversations and radio transmissions that its taps and antenna dishes were scooping up worldwide. In the new age of cell phones, faxes, and the Internet, there often wasn’t anything to listen to; and to the extent there was, the signal didn’t travel from one point to another, on one line or channel. Instead, digital communications zipped through the network in data packets, which were closely interspersed with packets of other communications (a feature that would spark great controversy years later, when citizens learned that the NSA was intercepting their conversations as well as those of bad guys). These networks and packets were far too vast for human beings to monitor in real time; the intelligence would have to be crunched, sifted, and processed by very high-speed computers, scanning the data for key words or suspicious traffic patterns.
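To make the scale of the problem concrete, here is a minimal sketch of the kind of automated sifting the new traffic demanded. Everything in it is hypothetical and invented for illustration (the flow IDs, the payloads, the watchlist words); it is not any agency’s code, only a demonstration of the principle: reassemble interleaved packets by flow, then scan the reassembled text for key words.

```python
# Illustrative sketch only: buffer hypothetical packet fragments by flow,
# reassemble each conversation, and flag any that contains a watchlisted
# key word. The point is that interleaved packet traffic must be sorted
# and scanned by machine before any analyst could hope to "listen in."
from collections import defaultdict

WATCHLIST = {"safehouse", "detonator"}  # hypothetical key words

# Each packet: (flow_id, sequence_number, payload). Fragments of different
# conversations arrive interleaved, as on a real packet-switched network.
packets = [
    ("flow-A", 0, "meet at the "),
    ("flow-B", 0, "happy birthday, "),
    ("flow-A", 1, "safehouse at dawn"),
    ("flow-B", 1, "grandma"),
]

flows = defaultdict(dict)
for flow_id, seq, payload in packets:
    flows[flow_id][seq] = payload  # buffer out-of-order fragments by flow

for flow_id, fragments in flows.items():
    text = "".join(fragments[i] for i in sorted(fragments))  # reassemble
    hits = [word for word in WATCHLIST if word in text]
    if hits:
        print(f"{flow_id} flagged: {hits}")  # -> flow-A flagged: ['safehouse']
```

A real system would run this sort of filter in specialized hardware, at line rate, over volumes many orders of magnitude larger; the principle, though, is the same.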

To Hayden, the three-day computer crash in January suggested that the NSA’s own hardware might not be up to the task. The aerospace executives had recommended, with no small self-interest, that the agency should examine what outside contractors might offer. Hayden took them up on their suggestion. New computers and software would be needed to scan and make sense of this new global network; maybe commercial contractors would do a better job of creating them.

He called the new program Trailblazer, and in August, he held Trailblazer Industry Day, inviting 130 corporate representatives to come to Fort Meade and hear his pitch. In October, he opened competition on a contract to build a “technical demonstration platform” of the new system. The following March, the NSA awarded $280 million—the opening allotment of what would amount to more than $1 billion over the next decade—to Science Applications International Corporation, with pieces of the program shared by Northrop Grumman, Boeing, Computer Sciences Corp., and Booz Allen Hamilton, all of which had longtime relations with the intelligence community.

SAIC was particularly intertwined with NSA. Bobby Ray Inman sat on its board of directors. Bill Black, one of the agency’s top cryptologists, had retired in 1997 to become the corporation’s assistant vice president; then, three years later, in a case of revolving doors that shocked the most jaded insiders, Hayden brought him back in to be the NSA deputy director—and to manage Trailblazer, which he’d been running from the other side of the transom at SAIC.

But the NSA needed a bigger breakthrough still: it needed tools and techniques to intercept signals, not only as they flowed through the digital network but also at their source. The biggest information warfare campaign to date, in the Balkans, had involved hacking into Belgrade’s telephone system. Earlier that decade, in the Gulf War, when Saddam Hussein’s generals sent orders through fiber-optic cable, the Pentagon’s Joint Intelligence Committee—which relied heavily on NSA personnel and technology—figured out how to blow up the cable links, forcing Saddam to switch to microwave. The NSA knew how to intercept microwaves, but it didn’t yet know how to intercept the data rushing through fiber optics. That’s what the agency now needed to do.

In their report to Hayden, the aerospace executives recommended that the SIGINT and Information Assurance Directorates “work very closely,” since their two missions were “rapidly becoming two sides of the same coin.”

For years, Information Assurance, located in an annex near Baltimore-Washington International Airport, a half hour’s drive from Fort Meade, had been testing and fixing software used by the U.S. military—probing for vulnerabilities that the enemy could exploit. Now one of the main roles of the SIGINT crews, in the heart of the agency’s headquarters, was to find and exploit vulnerabilities in the adversaries’ software. Since people (and military establishments) around the world were using the same Western software, the Information Assurance specialists possessed knowledge that would be valuable to the SIGINT crews. At the same time, the SIGINT crews had knowledge about adversaries’ networks—what they were doing, what kinds of attacks they were planning and testing—that would be valuable to the Information Assurance specialists. Sharing this knowledge, on the offense and the defense, required mixing the agency’s two distinct cultures.

Inman and McConnell had taken steps toward this integration. Minihan had started to tear down the wall, moving a few people from the annex to headquarters and vice versa. Hayden now widened Minihan’s wedge, moving more people back and forth, to gain insights about the security of their own operations.

Another issue that needed to be untangled was the division of labor within the intelligence community, especially between the NSA and the CIA. In the old days, this division was clear: if information moved, the NSA would intercept it; if it stood still, the CIA would send a spy to nab it. NSA intercepted electrons whooshing through the air or over phone lines; CIA stole documents sitting on a desk or in a vault. The line had been sharply drawn for decades. But in the digital age, the line grew fuzzy. Where did computers stand in relation to this line? They stored data on floppy disks and hard drives, which were stationary; but they also sent bits and bytes through cyberspace. Either way, the information was the same, so who should get it: Langley or Fort Meade?

The logical answer was both. But pulling off that feat would require a fusion with little legal or bureaucratic precedent. The two spy agencies had collaborated on the occasional project over the years, but this would involve an institutional melding of missions and functions. To do its part, each agency would have to create a new entity—or to beef up, and reorient, an existing one.

As it happened, a framework for this fusion already existed. The CIA had created the Information Operations Center during the Belgrade operation, to plant devices on Serbian communications systems, which the NSA could then intercept; this center would be Langley’s contribution to the new joint effort. Fort Meade’s would be the third box on the new SIGINT organizational chart—“tailored access.”

Minihan had coined the phrase. During his tenure as director, he pooled a couple dozen of the most creative SIGINT operators into their own corner on the main floor and gave them that mission. What CIA black-bag operatives had long been doing in the physical world, the tailored access crew would now do in cyberspace, sometimes in tandem with the black-baggers, if the latter were needed—as they had been in Belgrade—to install some device on a crucial piece of hardware.

The setup transformed the concept of signals intelligence, the NSA’s stock in trade. SIGINT had long been defined as passively collecting stray electrons in the ether; now, it would also involve actively breaking and entering into digital machines and networks.

Minihan had wanted to expand the tailored access shop into an A Group of the digital era, but he ran out of time. When Hayden launched his reorganization, he took the baton and turned it into a distinct, elite organization—the Office of Tailored Access Operations, or TAO.

It began, even under his expansion, as a small outfit: a few dozen computer programmers who had to pass an absurdly difficult exam to get in. The organization soon grew into an elite corps as secretive and walled off from the rest of the NSA as the NSA was from the rest of the defense establishment. Located in a separate wing of Fort Meade, it was the subject of whispered rumors, but little solid knowledge, even among those with otherwise high security clearances. Anyone seeking entrance into its lair had to get by an armed guard, a cipher-locked door, and a retinal scanner.

In the coming years, TAO’s ranks would swell to six hundred “intercept operators” at Fort Meade, plus another four hundred or more at NSA outlets—Remote Operations Centers, they were called—in Wahiawa, Hawaii; Fort Gordon, Georgia; Buckley Air Force Base, near Denver; and the Texas Cryptologic Center, in San Antonio.

TAO’s mission, and informal motto, was “getting the ungettable,” specifically getting the ungettable stuff that the agency’s political masters wanted. If the president wanted to know what a terrorist leader was thinking and doing, TAO would track his computer, hack into its hard drive, retrieve its files, and intercept its email—sometimes purely through cyberspace (especially in the early days, it was easy to break a target’s password, if he’d set a password at all), sometimes with the help of CIA spies or special-ops shadow soldiers, who’d lay their hands on the computer and insert a thumb drive loaded with malware or attach a device that a TAO specialist would home in on.

These devices—their workings and their existence—were so secret that most of them were designed and built inside the NSA: the software by its Data Network Technologies Branch, the techniques by its Telecommunications Network Technologies Branch, and the customized computer terminals and monitors by its Mission Infrastructure Technologies Branch.

Early on, TAO hacked into computers in fairly simple ways: phishing for passwords (one such program tried out every word in the dictionary, along with variations and numbers, in a fraction of a second) or sending emails with alluring attachments, which would download malware when opened. Once, some analysts from the Pentagon’s Joint Task Force-Computer Network Operations were invited to Fort Meade for a look at TAO’s bag of tricks. The analysts laughed: this wasn’t much different from the software they’d seen at the latest DEF CON Hacking Conference; some of it seemed to be repackaged versions of the same software.
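A dictionary-cracking program of the sort the analysts recognized can be sketched in a few lines. The sketch below is a toy, not TAO’s actual tool: the captured hash, the word list, and the mutation rules are all invented for illustration, but it shows how every word in a dictionary, “along with variations and numbers,” can be tested mechanically.

```python
# Toy dictionary attack, for illustration only: hash every word in a small
# dictionary, plus simple case and appended-number variations, and compare
# each candidate against a captured password hash.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Hypothetical captured hash; here, the hash of the weak password "dragon99".
target_hash = md5_hex("dragon99")

dictionary = ["password", "letmein", "monkey", "dragon"]  # tiny sample list

def variations(word):
    """Yield the word and simple mutations: capitalization, appended digits."""
    for base in (word, word.capitalize(), word.upper()):
        yield base
        for n in range(100):
            yield f"{base}{n}"

def crack(target, words):
    for word in words:
        for candidate in variations(word):
            if md5_hex(candidate) == target:
                return candidate
    return None

print(crack(target_hash, dictionary))  # -> dragon99
```

Even this naive loop tests thousands of candidates per second on modest hardware, which is why a weak or default password offered a target almost no protection at all.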

Gradually, though, the TAO teams sharpened their skills and their arsenal. Obscure points of entry were discovered in servers, routers, workstations, handsets, phone switches, even firewalls (which, ironically, were supposed to keep hackers out), as well as in the software that programmed, and the networks that connected, this equipment. And as their game evolved, their devices and programs came to resemble something out of the most exotic James Bond movie. One device, called LoudAuto, activated a laptop’s microphone and monitored the conversations of anyone in its vicinity. HowlerMonkey extracted and transmitted files via radio signals, even if the computer wasn’t hooked up to the Internet. MonkeyCalendar tracked a cell phone’s location and conveyed the information through a text message. NightStand was a portable wireless system that loaded a computer with malware from several miles away. RageMaster tapped into a computer’s video signal, so a TAO technician could see what was on its screen and thus watch what the person being targeted was watching.

But as TAO matured, so did its targets, who figured out ways to detect and block intruders—just as the Pentagon and the Air Force had figured out ways, in the previous decade, to detect and block intrusions from adversaries, cyber criminals, and mischief-makers. As hackers and spies discovered vulnerabilities in computer software and hardware, the manufacturers worked hard to patch the holes—which prodded hackers and spies to search for new vulnerabilities, and on the race spiraled.

As this race between hacking and patching intensified, practitioners of both arts, worldwide, came to place an enormous value on “zero-day vulnerabilities”—holes that no one had yet discovered, much less patched. In the ensuing decade, private companies would spring up that, in some cases, made small fortunes by finding zero-day vulnerabilities and selling their discoveries to governments, spies, and criminals of disparate motives and nationalities. This hunt for zero-days preoccupied some of the craftiest mathematical minds in the NSA and other cyber outfits, in the United States and abroad.

Once, in the late 1990s, Richard Bejtlich, a computer network defense analyst at Kelly Air Force Base, discovered a zero-day vulnerability—a rare find—in a router made by Cisco. He phoned a Cisco technical rep and informed him of the problem, which the rep then quickly fixed.

A couple days later, proud of his prowess and good deed, Bejtlich told the story to an analyst on the offensive side of Kelly. The analyst wasn’t pleased. Staring daggers at Bejtlich, he muttered, “Why didn’t you tell us?”

The implication was clear: if Bejtlich had told the offensive analysts about the flaw, they could have exploited it to hack foreign networks that used the Cisco router. Now it was too late; thanks to Bejtlich’s phone call, the hole was patched, the portal was closed.

As the NSA put more emphasis on finding and exploiting vulnerabilities, a new category of cyber operations came into prominence. Before, there was CND (Computer Network Defense) and CNA (Computer Network Attack); now there was also CNE (Computer Network Exploitation).

CNE was an ambiguous enterprise, legally and operationally, and Hayden—who was sensitive to legal niceties and the precise wiggle room they allowed—knew it. The term’s technical meaning was straightforward: the use of computers to exploit the vulnerabilities of an adversary’s networks—to get inside those networks, in order to gain more intelligence about them. But there were two ways of looking at CNE. It could be the front line of Computer Network Defense, on the logic that the best way to defend a network was to learn an adversary’s plans for attack—which required getting inside his network. Or, CNE could be the gateway for Computer Network Attack—getting inside the enemy’s network in order to map its passageways and mark its weak points, to “prepare the battlefield” (as commanders of older eras would put it) for an American offensive, in the event of war.I

The concept of CNE fed perfectly into Hayden’s desire to fuse cyber offense and cyber defense, to make them indistinguishable. And while Hayden may have articulated the concept in a manner that suited his agenda, he didn’t invent it; rather, it reflected an intrinsic aspect of modern computer networks themselves.

In one sense, CNE wasn’t so different from intelligence gathering of earlier eras. During the Cold War, American spy planes penetrated the Russian border in order to force Soviet officers to turn on their radar and thus reveal information about their air-defense systems. Submarine crews would tap into underwater cables near Russian ports to intercept communications, and discover patterns, of Soviet naval operations. This, too, had a dual purpose: to bolster defenses against possible Soviet aggression; and to prepare the battlefield (or airspace and oceans) for an American offensive.

But in another sense, CNE was a completely different enterprise: it exposed all of society to the risks and perils of military ventures in a way that could not have been imagined a few decades earlier. When officials in the Air Force or the NSA neglected to let Microsoft (or Cisco, Google, Intel, or any number of other firms) know about vulnerabilities in its software, when they left a hole unplugged so they could exploit the vulnerability in a Russian, Chinese, Iranian, or some other adversary’s computer system, they also left American citizens open to the same exploitations—whether by wayward intelligence agencies or by cyber criminals, foreign spies, or terrorists who happened to learn about the unplugged hole, too.

This was a new tension in American life: not only between individual liberty and national security (that one had always been around, to varying degrees) but also between different layers and concepts of security. In the process of keeping military networks more secure from attack, the cyber warriors were making civilian and commercial networks less secure from the same kinds of attack.

These tensions, and the issues they raised, went beyond the mandate of national security bureaucracies; only political leaders could untangle them. As the twenty-first century approached, the Clinton administration—mainly at the feverish prodding of Dick Clarke—had started to grasp the subject’s complexities. There was the Marsh Report, followed by PDD-63, the National Plan for Information Systems Protection, and the creation of Information Sharing and Analysis Centers, forums in which the government and private companies could jointly devise ways to secure their assets from cyber attacks.

Then came the election of November 2000, and, as often happens when the White House changes party, all this momentum ground to a halt. When George W. Bush and his aides came to power on January 20, 2001, the contempt they harbored for their predecessors seethed with more venom than usual, owing to the sex scandal and impeachment that tarnished Clinton’s second term, compounded by the bitter aftermath of the election against his vice president, Al Gore, which ended in Bush’s victory only after the Supreme Court halted a recount in Florida.

Bush threw out lots of Clinton’s initiatives, among them those having to do with cyber security. Clarke, the architect of those policies, stayed on in the White House and retained his title of National Coordinator for Security, Infrastructure Protection, and Counterterrorism. But, it was clear, Bush didn’t care about any of those issues, nor did Vice President Dick Cheney or the national security adviser, Condoleezza Rice. Under Clinton, Clarke had the standing, even if not the formal rank, of a cabinet secretary, taking part in the NSC Principals meetings—attended by the secretaries of defense, state, treasury, and other departments—when they discussed the issues in his portfolio. Rice took away this privilege. Clarke interpreted the move as not only a personal slight but also a diminution of his issues.

During the first few months of Bush’s term, Clarke and CIA director George Tenet, another Clinton holdover, warned the president repeatedly about the looming danger of an attack on America by Osama bin Laden. But the warnings were brushed aside. Bush and his closest advisers were more worried about missile threats from Russia, Iran, and North Korea; their top priority was to abrogate the thirty-year-old Anti-Ballistic Missile Treaty, the landmark Soviet-American arms-control accord, so they could build a missile-defense system. (On the day of the 9/11 attacks, Rice was scheduled to deliver a speech on the major threats facing the land; the draft didn’t so much as mention bin Laden or al Qaeda.)

In June 2001, Clarke submitted his resignation. He was the chief White House adviser on counterterrorism, yet nobody was paying attention to terrorism—or to him. Rice, taken aback, urged him not to leave. Clarke relented, agreeing to stay but only if they limited his responsibilities to cyber security, gave him his own staff (which eventually numbered eighteen), and let him set up and run an interagency Cyber Council. Rice agreed, in part because she didn’t care much about cyber; she saw the concession as a way to keep Clarke onboard while keeping him out of issues that did interest her. However, she needed time to find a replacement for the counterterrorism slot, so Clarke agreed to stay in that position as well until October 1.

He still had a few weeks to go as counterterrorism chief when the hijacked planes smashed into the World Trade Center and the Pentagon. Bush was in Florida, Cheney was rushed to an underground bunker, and, by default, Clarke sat in the Situation Room as the crisis manager, running the interagency conference calls and coordinating, in some cases directing, the government’s response.

The experience boosted his standing somewhat, not enough to let him rejoin the Principals meetings, but enough for Rice to start paying a bit of attention to cyber security. However, she balked when Clarke suggested renewing the National Plan for Information Systems Protection, which he’d written for Clinton in his last year as president. She vaguely remembered that the plan set mandatory standards for private industry, and that would be anathema to President Bush.

In fact, much as Clarke wished that it had, the plan—the revised version, after he had to drop his proposal for a federal intrusion-detection network—called only for public-private cooperation, with corporations in the lead. But Clarke played along, agreeing with Rice that the Clinton plan was deeply flawed and that he wanted to do a drastic rewrite. Rice let him draft an executive order, which Bush signed on September 30, calling for a new plan. For the next several months, Clarke and some of his staff went on the road, doing White House “cyber town halls” in ten cities—including Boston, New York, Philadelphia, Atlanta, San Francisco, Los Angeles, Portland, and Austin—inviting local experts, corporate executives, IT managers, and law-enforcement officers to attend.

Clarke would start the sessions on a modest note. Some of you, he would say, criticized the Clinton plan because you had no involvement in it. Now, he went on, the Bush administration was writing a new plan, and the president wants you, the people affected by its contents, to write the annexes that deal with your sector of critical infrastructure. Some of the experts and executives in some of the cities actually submitted ideas; those in telecommunications were particularly enthused.

In fact, though, Clarke wasn’t interested in their ideas. He did, however, need to melt their opposition; the whole point, the only point, of the town hall theatrics was to get their buy-in—to co-opt them into believing that they had something to do with the report. As it turned out, the final draft—a sixty-page document called The National Strategy to Secure Cyberspace, signed by President Bush on February 14, 2003—contained more passages kowtowing to industry than its predecessor had, and it assigned some responsibility for securing nonmilitary cyberspace to the new Department of Homeland Security. But otherwise, the language on the vulnerability of computers came straight out of the Marsh Report, and the ideas on what to do about it were nearly identical to the plan that Clarke had written for Clinton.

The document set the framework for how cyber security would be handled over the next several years—as well as the limits in the government’s ability to handle it at all, given industry’s resistance to mandatory standards and (a problem that would soon become apparent) the Homeland Security Department’s bureaucratic and technical inadequacies.

Clarke didn’t stick around to fight the political battles of enforcing and refining the new plan. On March 19, Bush ordered the invasion of Iraq. In the buildup to the war, Clarke had argued that it would divert attention and resources from the fight against bin Laden and al Qaeda. Once the war’s wheels were firmly in motion, Clarke resigned in protest.

But a few years after the invasion, as the war devolved from liberation to occupation and the enemy switched from Saddam Hussein to a disparate array of insurgents, the cyber warriors at Fort Meade and the Pentagon stepped onto the battlefield for the first time as a significant, even decisive force.


I. Out of CNE sprang a still more baroque subdivision of signals intelligence: C-CNE, for Counter-Computer Network Exploitation—penetrating an adversary’s networks in order to watch him penetrating our networks.