On June 9, 1997, twenty-five members of an NSA “Red Team” launched an exercise called Eligible Receiver, in which they hacked into the computer networks of the Department of Defense, using only commercially available equipment and software. It was the first high-level exercise testing whether the U.S. military’s leaders, facilities, and global combatant commands were prepared for a cyber attack. And the outcome was alarming.
Eligible Receiver was the brainchild of Kenneth Minihan, an Air Force three-star general who, not quite a year and a half earlier, had succeeded Mike McConnell as director of the NSA. Six months before then, in August 1995, he’d been made director of the Defense Intelligence Agency, the culmination of a career in military intel. He didn’t want to move to Fort Meade, especially after such a short spell at DIA. But the secretary of defense insisted: the NSA directorship was more important, he said, and the nation needed Minihan at the agency’s helm.
The secretary of defense was Bill Perry, the weapons scientist who, back in the Carter administration, had coined and defined “counter command-control warfare”—the predecessor to “information warfare”—and who, before then, as the founding president of ESL, Inc., had built many of the devices that the NSA used in laying the groundwork for that kind of warfare.
Since joining the Clinton administration, first as deputy secretary of defense, then as secretary, Perry had kept an eye on the NSA, and he didn’t like what he saw. The world was rapidly shifting to digital communications and the Internet, yet the NSA was still focused too heavily on telephone circuits and microwave signals. McConnell had tried to make changes, but he’d lost focus during his Clipper Chip obsession.
“They’re broken over there,” Perry told Minihan. “You need to go fix things.”
Minihan had a reputation as an “out-of-the-box” thinker, an eccentric. In most military circles, this wasn’t seen as a good thing, but Perry thought he had the right style to shake up Fort Meade.
For a crucial sixteen-month period, from June 1993 until October 1994, Minihan had been commander at Kelly Air Force Base, sprawled out across an enclave called Security Hill on the outskirts of San Antonio, Texas, home to the Air Force Information Warfare Center. Since 1948, four years before the creation of the NSA, Kelly had been the place where, under various rubrics, the Air Force did its own code-making and code-breaking.
In the summer of 1994, President Clinton ordered his generals to start planning an invasion of Haiti. The aim, as authorized in a U.N. Security Council resolution, was to oust the dictators who had come to power through a coup d’état and to restore the democratic rule of the island nation’s elected president, Jean-Bertrand Aristide. It would be a multipronged invasion, with special operations forces pre-positioned inside the country, infantry troops swarming onto the island from several approaches, and aircraft carriers offering support offshore in the Caribbean. Minihan’s task was to come up with a way for U.S. aircraft—those carrying troops and those strafing the enemy, if necessary—to fly over Haiti without being detected.
One of Minihan’s junior officers in the Information Warfare Center had been a “demon-dialer” in his youth, a technical whiz kid—not unlike the Matthew Broderick character in WarGames—who messed with the phone company, simulating certain dial tones, so he could make long-distance calls for free. Faced with the Haiti challenge, he came to Minihan with an idea. He’d done some research: it turned out that Haiti’s air-defense system was hooked up to the local telephone lines—and he knew how to make all the phones in Haiti busy at the same time. There would be no need to attack anti-aircraft batteries with bombs or missiles, which might go astray and kill civilians. All that Minihan and his crew had to do was to tie up the phone lines.
In the end, the invasion was called off. Clinton sent a delegation of eminences—Jimmy Carter, Colin Powell, and Sam Nunn—to warn the Haitian dictators of the impending invasion; the dictators fled; Aristide returned to power without a shot fired. But Minihan had woven the demon-dialer’s idea into the official war plan; if the invasion had gone ahead, that was how American planes would have eluded fire.
Bill Perry was monitoring the war plan from the outset. When he learned about Minihan’s idea, his eyes lit up. It resonated with his own way of thinking as a pioneer in electronic countermeasures. The Haiti phone-flooding plan was what put Minihan on Perry’s radar screen as an officer to watch—and, when the right slot opened up, Perry pushed him into it.
Something else about Kelly Air Force Base caught Perry’s attention. The center didn’t just devise clever schemes for offensive attacks on adversaries; it had also, in a separate unit, developed a way to detect, monitor, and neutralize information warfare attacks that adversaries might launch on America. None of the other military services, not even the Navy, had designed anything nearly so effective.
The technique was called Network Security Monitoring, and it was the invention of a computer scientist at the University of California at Davis named Todd Heberlein.
In the late 1980s, hacking emerged as a serious nuisance and an occasional menace. The first nightmare case occurred on November 2, 1988, when, over a period of fifteen hours, as many as six thousand UNIX computers—about one tenth of all the computers on the Net, including those at Wright-Patterson Air Force Base, the Army Ballistic Research Lab, and several NASA facilities—went dead and stayed dead, incurably infected from some outside source. It came to be called the “Morris Worm,” after its perpetrator, a Cornell University grad student named Robert T. Morris Jr. (To the embarrassment of Fort Meade, he turned out to be the son of Robert Morris Sr., chief scientist of the NSA’s National Computer Security Center—the very unit that traced the worm to its culprit.)
Morris had meant no harm. He’d started hacking into the Net, using several university sites as a portal to hide his identity, in order to measure just how extensive the network was. (At the time, no one knew.) But he committed a serious mistake: the worm interrogated several machines repeatedly (he hadn’t programmed it to stop once it received an answer), overloading and crashing the systems. In the worm’s wake, many computer scientists and a few officials drew a frightening lesson: Morris had shown just how easy it was to bring the system down; had that been his intent, he could have wreaked much greater damage still.
As a result of the Morris Worm, a few mathematicians developed programs to detect intruders, but these programs were designed to protect individual computers. Todd Heberlein’s innovation was designing intrusion-detection software to be installed on an open network, to which any number of computers might be connected. And his software worked on several levels. First, it checked for anomalous activity on the network—for instance, signs that someone was making repeated attempts to log on to an account or trying out one random password after another. Such attempts drew particular notice if they entered the network from an MIT.edu address, since MIT, the Massachusetts Institute of Technology, was famous for letting anyone and everyone dial in to its terminals from anyplace on the Net and was thus a favorite point of entry for hackers. Anomalous activities would trigger an alert. At that point, the software could track data from the hacker’s session, noting his IP address, how long he stayed inside the network, and how much data he was extracting or transferring to another site. (This “session data” would later be called “metadata.”) Finally, if the hacker’s sessions raised enough suspicion to prompt further investigation, Heberlein’s software could trace their full contents—what the hacker was doing, reading, and sending—in real time, across the whole network that the software was monitoring.
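The layered escalation Heberlein’s software performed—alert on anomalies, record session data, then capture full content—can be sketched in a few lines. The following is a hypothetical reconstruction, not his actual design; the thresholds, field names, and scoring rule are all invented:

```python
# A minimal sketch of the layered monitoring logic described above --
# not Heberlein's actual code. Thresholds and scoring are invented.
from dataclasses import dataclass, field

FAILED_LOGIN_THRESHOLD = 5      # hypothetical tuning value
WATCHED_ORIGINS = {"mit.edu"}   # open dial-in sites favored by hackers

@dataclass
class Session:
    src_ip: str
    origin_domain: str
    duration_s: float
    bytes_out: int
    failed_logins: int
    events: list = field(default_factory=list)

def suspicion_level(s: Session) -> int:
    """0 = ignore, 1 = record session metadata, 2 = capture full contents."""
    score = 0
    if s.failed_logins >= FAILED_LOGIN_THRESHOLD:  # password guessing
        score += 1
    if s.origin_domain in WATCHED_ORIGINS:         # known open portal
        score += 1
    if s.bytes_out > 10_000_000:                   # bulk transfer off-site
        score += 1
    return min(score, 2)

def monitor(sessions):
    for s in sessions:
        level = suspicion_level(s)
        if level >= 1:
            # level 1: the "session data" -- what was later called metadata
            print(f"ALERT {s.src_ip}: {s.duration_s:.0f}s on net, "
                  f"{s.bytes_out} bytes out")
        if level >= 2:
            # level 2: reconstruct everything the intruder does, live
            for event in s.events:
                print(f"  CAPTURE {s.src_ip}: {event}")

if __name__ == "__main__":
    monitor([Session("18.72.0.3", "mit.edu", 900.0, 12_000_000, 7,
                     ["grep password /etc/passwd", "ftp 26.1.0.8"])])
```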
Like many hackers and counter-hackers of the day, Heberlein had been inspired by Cliff Stoll’s 1989 book, The Cuckoo’s Egg. (A junior officer who helped adapt Heberlein’s software at the Air Force Information Warfare Center wrote a paper called “50 Lessons from the First 50 Pages of The Cuckoo’s Egg.”) Stoll was a genial hippie and brilliant astronomer who worked as the computer systems administrator at Lawrence Berkeley National Laboratory. One day, he discovered a seventy-five-cent error in the lab’s phone bill, traced its origins out of sheer curiosity, and wound up uncovering a ring of West German hackers who were stealing U.S. military secrets for the KGB, using the Berkeley Lab’s open site as a portal. Over the next several months, relying entirely on his wits, Stoll essentially invented the techniques of intrusion detection that came to be widely adopted over the next three decades. He attached a printer to the input lines of the lab’s computer system, so that it typed out a transcript of the attacker’s activities. Along with a Berkeley colleague, Lloyd Bellknap, he built a “logic analyzer” and programmed it to track a specific user: when that user logged in, the device would automatically page Stoll, who would dash to the lab. The logic analyzer could also cross-correlate logs from the other sites the hacker had broken into, so Stoll could draw a full picture of what the hacker was up to.
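Stoll’s cross-correlation trick translates naturally into modern terms: merge time-stamped entries from every site’s log into one timeline, then filter for the suspect account. The sketch below is a toy illustration—the log format, file names, and account name are all invented:

```python
# Toy sketch of log cross-correlation: merge per-site logs (each already
# in time order) into one timeline, then pull out one suspect's activity.
import csv
from heapq import merge

def read_log(path):
    """Yield (timestamp, site, account, action) from a CSV log file."""
    with open(path) as f:
        for ts, account, action in csv.reader(f):
            yield (ts, path, account, action)

def correlate(paths, suspect):
    """Time-ordered view of one account's actions across every site."""
    # heapq.merge keeps the combined stream sorted, assuming each input
    # is; ISO-8601 timestamps sort correctly as plain strings.
    timeline = merge(*(read_log(p) for p in paths))
    return [entry for entry in timeline if entry[2] == suspect]

for ts, site, account, action in correlate(
        ["berkeley.log", "mitre.log", "anniston.log"], "hunter"):
    print(ts, site, action)
```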
Heberlein updated Stoll’s techniques so that he could track and repel someone hacking into not just a single computer but an entire network.
Stoll was the inspiration for Heberlein’s work in yet another sense. After Stoll gained fame for catching the West German hacker, and his book scaled the best-seller list, Lawrence Livermore National Laboratory—the more military-slanted lab, forty miles from Berkeley—exploited the headlines and requested money from the Department of Energy to create a “network security monitoring” system. Livermore won the contract, but no one there knew how to build such a system. The lab’s managers reached out to Karl Levitt, a computer science professor at UC Davis. Levitt brought in his star student, Todd Heberlein.
By 1990, the Air Force Cryptologic Support Center (which, a few years later, became part of the Air Force Information Warfare Center) was upgrading its intrusion-detection system. After the Morris Worm, the tech specialists started installing “host-based attack-detection” systems, the favored method of the day, which could protect only a single computer, but these were quickly deemed inadequate. Some of the specialists had read about Heberlein’s Network Security Monitoring software, and they commissioned him to adapt it to the center’s needs.
Within two years, the cryptologists installed his software—which they renamed the Automated Security Incident Measurement, or ASIM, system—on Air Force networks. A new subdivision of the Air Force center, called the Computer Emergency Response Team, was set up to run the software, track hackers, and let higher-ups know if a serious break-in was under way. From their cubicles in San Antonio, the team could look out on Air Force networks across the nation—or that was the idea, anyway.
The program faced bureaucratic obstacles from the start. On October 7, 1992, Robert Mueller, the assistant attorney general in charge of the Justice Department’s criminal division, wrote a letter, warning that network monitoring might violate federal wiretapping laws. A device that monitored a network couldn’t help but pick up the Internet traffic of some innocent civilians, too. Mueller noted that the practice might not be illegal: the wiretapping statutes were written before the age of computer hackers and viruses; no court had yet ruled on their present-day application. But pending such a ruling, Mueller wrote, all federal agencies using these techniques should post a “banner warning,” giving notice to “unauthorized intruders” that they were being monitored.
The Air Force officers in San Antonio ignored Mueller’s letter: it wasn’t a cease-and-desist order; and besides, warning hackers that they were being watched would destroy the whole point of a monitor.
One year later, Heberlein got a phone call from an official at the Justice Department. At first, he held his breath, wondering if at last the feds were coming to get him. To the contrary, it turned out, the department had recently installed his software, and the official had a technical question about one of its features. Justice had changed its tune, and adapted to the new world, very quickly. In a deep irony, Robert Mueller later became director of the FBI and relentlessly employed network-monitoring software to track down criminals and terrorists.
Still, back at the dawn of the new era, Mueller raised a legitimate question: Was it legal for the government to monitor a network that carried the communications not just of foreign bad guys but of ordinary Americans, too? The issue would raise its head again twenty years later, with greater passion and wider controversy, when an NSA contractor named Edward Snowden leaked a trove of ultrasecret documents detailing the agency’s vast metadata program.
More daunting resistance to the network-monitoring software, in its beginnings, came from the Air Force itself. In October 1994, Minihan was transferred from Kelly to the Pentagon, where he assumed the post of Air Force intelligence chief. There, he pushed hard for wider adoption of the software, but progress was slow. Air Force computer servers had slightly more than a hundred points of entry to the Net; by the time he left the Pentagon, two years later, the computer teams back in San Antonio had received permission to monitor only twenty-six of them.
It wasn’t just the monitors that Minihan had a hard time getting the top brass to accept; it was the very topic of computer security. He told three- and four-star generals about the plan to tie up the phone lines in Haiti, adding that his former teams in San Antonio were now devising similar operations against enemy computers. Nobody was interested. Most of the generals had risen through the ranks as pilots of fighter planes or bombers; to their way of thinking, the best way to disable a target was to drop a bomb on it. This business of hacking into computer links wasn’t reliable and couldn’t be measured; it reeked of “soft power.” General Colin Powell may have issued a memorandum on information warfare, but they weren’t buying it.
Minihan’s beloved Air Force was moving too slowly, yet it was way ahead of the Army and Navy in this game. His frustration had two layers: he wanted the military—all three of the main services, as well as the Pentagon’s civilian leadership—to know how good his guys were at hacking the adversaries’ networks; and he wanted them to know how wide open their own networks were to hacking by those same adversaries.
As the new director of the NSA, he was determined to use the job to demonstrate just how good and how bad these things were.
Each year, the Pentagon’s Joint Staff held an exercise called Eligible Receiver—a simulation or war game designed to highlight some threat or opportunity on the horizon. One recent exercise had focused on the danger of biological weapons. Minihan wanted the next one to test the vulnerability of the U.S. military’s networks to a cyber attack. The most dramatic way to do this, he proposed, was to launch a real attack on those networks by a team of SIGINT specialists at the NSA.
Minihan got the idea from a military exercise, already in progress, involving the five English-speaking allies—the United States, Great Britain, Canada, Australia, and New Zealand—known in NSA circles as the “five eyes,” for their formal agreement to share ultrasecret intelligence. The point of the exercise was to test new command-control equipment, some of it still in research and development. As part of this test, an eight-man crew, called the Coalition Vulnerability Assessment Team, working out of the Defense Information Systems Agency in Arlington, Virginia, would try to hack into the equipment. Minihan was told that the hackers always succeeded.
The assessment team’s director was an American civilian named Matt Devost, twenty-three years old, a recent graduate of St. Michael’s College in Burlington, Vermont, where he’d studied international relations and computer science. In his early teens, Devost had been a recreational hacker, competing with his tech friends—all of whom had watched WarGames several times—to see who could hack into the servers of NASA and other quasi-military agencies. Now Devost was sitting in an office with several like-minded foreigners, hacking some of the most classified systems in the world, then briefing two- and three-star generals about their exploits—all in the name of bolstering American and allied defenses.
In the most recent coalition war game, Devost’s team had shut down the command-control systems of three players—Canada, Australia, and New Zealand—and taken over the American commander’s personal computer, sending him fake emails and false information, thus distorting his view of the battlefield and leading him to make bad decisions, which, in a real war, could have meant defeat.
The NSA had a similar group called the Red Team. It was part of the Information Assurance Directorate (formerly called the Information Security Directorate), the defensive side of the NSA, stationed in FANX, the building out near Friendship Airport. During its most sensitive drills, the Red Team worked out of a chamber called The Pit, which was so secret that few people at NSA knew it existed, and even they couldn’t enter without first passing through two combination-locked doors. In its workaday duties, the Red Team probed for vulnerabilities in new hardware or software that had been designed for the Defense Department, sometimes for the NSA itself. These systems had to clear a high bar to be deemed secure enough for government purchase and installation. The Red Team’s job was to test that bar.
Minihan’s idea was to use the NSA Red Team in the same way that the five-eyes countries were using Devost’s Coalition Vulnerability Assessment Team. But instead of putting it to work in a narrowly focused war game, Minihan wanted to expose the security gaps of the entire Department of Defense. He’d been trying for years to make the point to his fellow senior officers; now he wanted to hammer it home to the top officials in the Pentagon.
Bill Perry liked the idea. Still, it took Minihan a year to jump through the Pentagon bureaucracy’s hoops. In particular, the general counsel needed convincing that it was legal to hack into military computers, even as part of an exercise to test their security. NSA lawyers pointed to a document called National Security Directive 42, signed by President George H. W. Bush in 1990 (as an update to Reagan’s NSDD-145), which expressly allowed such tests, as long as the secretary of defense gave written consent. Secretary Perry signed the agreement form.
The lawyers placed just one restriction on the exercise: the NSA hackers couldn’t attack American networks with any of their top secret SIGINT gear; they could use only commercially available equipment and software.
On February 16, 1997, General John Shalikashvili, the chairman of the Joint Chiefs of Staff, issued Instruction 3510.01, “No-Notice Interoperability Exercise (NIEX) Program,” authorizing and describing the scenario for Eligible Receiver.
The game laid out a three-phase scenario. In the first, North Korean and Iranian hackers (played by the NSA Red Team) would launch a coordinated attack on the critical infrastructures, especially the power grids and 911 emergency communication lines, of eight American cities—Los Angeles, Chicago, Detroit, Norfolk, St. Louis, Colorado Springs, Tampa, Fayetteville—and the island of Oahu, in Hawaii. (This phase was played as a tabletop game, premised on analyses of how easy it might be to disrupt the grid and overload the 911 lines.) The purpose of the attack, in the game’s scenario, was to pressure American political leaders into lifting sanctions that they’d recently imposed on the two countries.
In the second part of the game, the hackers would launch a massive attack on the military’s telephone, fax, and computer networks—first in U.S. Pacific Command, then in the Pentagon and other Defense Department facilities. The stated purpose was to disrupt America’s command-control systems, to make it much harder for the generals to see what was going on and for the president to respond to threats with force. This phase would not be a simulation; the NSA Red Team would actually penetrate the networks.
For the three and a half months between the JCS chairman’s authorization and the actual start of the game, the NSA Red Team prepared the attack, scoping the military’s networks and protocols, figuring out which computers to hack, and how, for maximum effect.
The game, its preparation and playing, was carried out in total secrecy. General Shalikashvili had ordered a “no-notice exercise,” meaning that no one but those executing and monitoring the assault could know that an exercise was happening. Even inside the NSA, only the most senior officials, the Red Team itself, and the agency’s lawyer—who had to approve every step the team was taking, then brief the Pentagon’s general counsel and the attorney general—were let in on the secret.
At one point during the exercise, Richard Marshall, the NSA counsel, was approached by Thomas McDermott, deputy director of the agency’s Information Assurance Directorate, which supervised the Red Team. McDermott informed Marshall that the security staff had opened an espionage investigation on him; someone had noticed Marshall coming in at odd hours and using his encrypted cell phone more than usual.
“You know why I’m here, right?” Marshall asked, a bit alarmed.
“Yes, of course,” McDermott said, assuring Marshall that he’d briefed one security officer on what was happening. Even that officer was instructed not to tell his colleagues, but instead to continue going through the motions of an investigation until the game was over.
Eligible Receiver 97 formally got under way on Monday, June 9. Two weeks had been set aside for the exercise to unfold, with provisions for a two-week extension if necessary. But the game was over—the entire defense establishment’s network was penetrated—in four days. The National Military Command Center—the facility that would transmit orders from the president of the United States in wartime—was hacked on the game’s first day. And most of the officers manning those servers didn’t even know they’d been hacked.
The NSA Red Team steered clear of only one set of targets that it otherwise might have hacked: the two dozen Air Force servers that were monitored by the computer response team analysts in San Antonio. Figuring they’d be spotted if they broke through those networks, the hackers aimed their attacks elsewhere—and intruding elsewhere turned out to be absurdly easy.
Many defense computers, it turned out, weren’t protected by a password. Others were protected by the lamest passwords, like “password” or “ABCDE” or “12345.” In some cases, the Red Team snipped all of an office’s links except for a fax line, then flooded that line with call after call after call, shutting it down. In a few instances, NSA attachés—one inside the Pentagon, the other at a Pacific Command facility in Hawaii—went dumpster diving, rifling through trash cans and dumpsters, looking for passwords. This trick, too, bore fruit.
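The password failures, at least, were the kind of thing a routine audit could have caught. Here is a minimal sketch of such a check—the account names and word list are invented for illustration, not drawn from the exercise:

```python
# Hypothetical sketch of a weak-password audit: any account using an
# empty, common, or username-derived password gets flagged.
COMMON_PASSWORDS = {"", "password", "ABCDE", "12345", "letmein", "admin"}

def audit(accounts):
    """Return the accounts whose passwords a casual attacker would guess."""
    flagged = []
    for user, pw in accounts.items():
        if pw in COMMON_PASSWORDS or pw.lower() == user.lower():
            flagged.append(user)
    return flagged

print(audit({"j2_ops": "12345", "pacom_duty": "password",
             "metocean": "Zq8!x2#v"}))
# -> ['j2_ops', 'pacom_duty']
```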
The team had the hardest time hacking into the server of the J-2, the Joint Staff’s intelligence directorate. Finally, one of the team members simply called the J-2’s office and said that he was with the Pentagon’s IT department, that there were some technical problems, and that he needed to reset all the passwords. The person answering the phone gave him the existing password without hesitating. The Red Team broke in.
In most of the systems they penetrated, the Red Team players simply left a marker—the digital equivalent of “Kilroy was here.” In some cases, though, they did much more: they intercepted and altered communications, sent false emails, deleted files, and reformatted hard drives. High-ranking officers who didn’t know about the exercise found phone lines dead, messages sent but never received (or sent, but saying something completely different upon arrival), whole systems shut down or spitting out nonsense data. One officer who was subjected to this barrage sent his commander an email (which the Red Team intercepted), saying, “I don’t trust my command-control.”
This was the ultimate goal of information warfare, and Eligible Receiver revealed that it was more feasible than anyone in the world of conventional warfare had imagined.
A few weeks after it was over, an Air Force brigadier general named John “Soup” Campbell put together a postmortem briefing on the exercise. Campbell, a former F-15 fighter pilot, had been transferred to the Pentagon just as Eligible Receiver was getting under way. His new job was head of J-39, a bureau inside the operations directorate of the Joint Staff that served as a liaison between the managers of ultrasecret weapons programs and the military’s combatant commanders. The Joint Staff needed someone to serve as its point man on Eligible Receiver; Campbell got the assignment.
He delivered the briefing to a small group that included senior civilian officials and the vice chiefs of the Air Force, Navy, and Marines. (The Army had decided not to participate in the exercise: a few of its officers knew they were vulnerable but didn’t want to expose themselves to embarrassment; most of them dismissed the topic as a waste of time.)
Campbell’s message was stark: Eligible Receiver revealed that the Defense Department was completely unprepared for, and defenseless against, a cyber attack. The NSA Red Team had penetrated its entire network. Only a few officers had grasped that an attack was under way, and they didn’t know what to do about it; no guidelines had ever been issued, no chain of command drawn up. Only one person in the entire Department of Defense, a technical officer in a Marine unit in the Pacific, responded to the attack in an effective manner: seeing that something odd was happening with his computer server, he pulled it offline on his own initiative.
After Campbell’s briefing, the chief of the NSA Red Team, a Navy captain named Michael Sare, made a presentation, and, in case anyone doubted Campbell’s claims, he brought along records of the intrusion—photos of password lists retrieved from dumpsters, tape recordings of phone calls in which officers blithely recited their passwords to strangers, and much more. (In the original draft of his brief, Sare noted that the team had also cracked the JCS chairman’s password. Minihan, who read the draft in advance, told Sare to scratch that line. “No need to piss off a four-star,” he explained.)
Everyone in the room was stunned, not least John Hamre, who had been sworn in as deputy secretary of defense at the end of July. Before then, Hamre had been the Pentagon’s comptroller, where he’d gone on the warpath to slash the military budget, especially the part secretly earmarked for the NSA. Through the 1980s, as a staffer for the Congressional Budget Office and the Senate Armed Services Committee, Hamre had grown to distrust the NSA: it was a dodgy outfit, way too covert, floating in the gray area between “military” and “intelligence” and evading the strictures on both. Hamre didn’t know anything about information warfare, and he didn’t care.
A few weeks before Eligible Receiver, as Hamre prepared for his promotion, Minihan had briefed him on the threats and opportunities of information warfare and on the need for a larger budget to exploit them. Hamre, numbed by the technical detail, had sighed and said, “Ken, you’re giving me a headache.”
But now, listening to Campbell and Sare run down the results of Eligible Receiver, Hamre underwent a conversion, seized with a sense of urgency. Looking around the room of generals and colonels, he asked who was in charge of fixing this problem.
They all looked back at him. No one knew the answer. No one was in charge.
Around the same time, Ken Minihan delivered his own briefing on Eligible Receiver to the Marsh Commission. The panel, by now, had delved deeply into the fragile state of America’s critical infrastructure. But the scenarios they’d studied were hypothetical and dealt with the vulnerability of civilian sectors; no one had ever launched an actual cyber attack, and most of the commissioners had assumed that the military’s networks were secure. Minihan’s briefing crushed their illusions on both counts: an NSA Red Team had launched an actual attack, and its effects were devastating.
Minihan did not reveal one episode of Eligible Receiver, an incident that only a few officials knew about: when the Red Team members were hacking into the networks as part of the exercise, they came across some strangers—traceable to French Internet addresses—hacking into the network for real. In other words, foreign spies were already penetrating vital and vulnerable networks; the threat wasn’t hypothetical.
Even without this tidbit, the commissioners were stunned. Marsh asked what could be done to fix the problem. Minihan replied, “Change the law, give me the power, I’ll protect the nation.”
No one quite knew what he meant. Or, if he meant what they thought he meant, nobody took it seriously: nobody was going to revive Reagan’s NSDD-145 or anything like it.
On October 13, the Marsh Commission published its report. Titled Critical Foundations, it only briefly alluded to Eligible Receiver. Its recommendations focused mainly on the need for the government and private industry to share information and solve problems jointly. It said nothing about giving the NSA more money or power.
Four months later, another attack on defense networks occurred—something that looked like Eligible Receiver, but coming from real, unknown hackers in the real, outside world.