WHEN General John Abizaid took the helm of U.S. Central Command on July 7, 2003, overseeing American military operations in the Middle East, Central Asia, and North Africa, his political bosses in Washington thought that the war in Iraq was over. After all, the Iraqi army had been routed, Saddam Hussein had fled, the Baathist regime had crumbled. But Abizaid knew that the war was just beginning, and he was frustrated that President Bush and his top officials neither grasped its nature nor gave him the tools to fight it. One of those tools was cyber.
Abizaid had risen through the Army’s ranks in airborne infantry, U.N. peacekeeping missions, and the upper echelon of Pentagon staff jobs. But early on in his career, he tasted a slice of the unconventional. In the mid-1980s, after serving as a company commander in the brief battle for Grenada, he was assigned to the Army Studies Group, which explored the future of combat. The Army vice chief of staff, General Max Thurman, was intrigued by reports of the Soviet army’s research into remote sensing and psychic experiments. Nothing came of them, but they exposed Abizaid to the notion that war might be about more than just bullets and bombs.
In his next posting, as executive assistant to General John Shalikashvili, chairman of the Joint Chiefs of Staff, Abizaid once accompanied his boss on a trip to Moscow. Figuring their quarters were bugged, the staff set up little tents so they could discuss official business away from Russian eavesdropping. Later, in Bosnia, as assistant commander of the 1st Armored Division, Abizaid learned that the CIA was flying unmanned reconnaissance planes over Sarajevo—and he was aware of the worry, among U.S. intelligence officials on the ground, that the Russians might seize control of a plane by hacking its communications link.
By 2001, when Abizaid was promoted to director of the Joint Staff in the Pentagon, the plans and programs for cyber security and cyber warfare were in full bloom. His job placed him in the thick of squabbles and machinations among and within the services, so he knew well the tensions between operators and spies throughout the cyber realm. In the event of war, the operators, mainly in the military services, wanted to use the intelligence gleaned from cyber; the spies, mainly in the NSA and CIA, saw the intelligence as vital for its own sake and feared that using it would mean losing it—the enemy would know that we’d been hacking into their networks, so they’d change their codes or erect new barriers. Abizaid understood this tension—it was a natural element in military politics—but he was, at heart, an operator. He took the guided tour of Fort Meade, was impressed with the wonders that the NSA could accomplish, and thought it would be crazy to deny their fruits to American soldiers in battle.
In the lead-up to the invasion of Iraq, Abizaid, who was by now the deputy head of Central Command, flew to Space Command headquarters in Colorado Springs, home of Joint Task Force-Computer Network Operations, which would theoretically lead cyber offense and defense in wartime. He was appalled by how bureaucratically difficult it would be to muster any kind of cyber offensive campaign: for one thing, the tools of cyber attack and cyber espionage were so shrouded in secrecy that few military commanders even knew they existed.
Abizaid asked Major General James D. Bryan, the head of the joint task force, how he would go about getting intelligence from al Qaeda’s computers into the hands of American soldiers in Afghanistan. Bryan traced the circuitous chain of command, from Space Command to a bevy of generals in the Pentagon, up to the deputy secretary of defense, then the secretary of defense, over to the National Security Council in the White House, and finally to the president. By the time the request cleared all these hurdles, the soldiers’ need for the intel would probably have passed; the war itself might be over.
Bush ordered the invasion of Iraq on March 19. Three weeks later, after a remarkably swift armored assault up through the desert from Kuwait, Baghdad fell. On May Day, three weeks after the toppling, President Bush stood on the deck of the USS Abraham Lincoln, beneath a banner reading “Mission Accomplished,” and declared that major combat operations were over. But later that month, the American proconsul, L. Paul Bremer, issued two directives, disbanding the Iraqi army and barring Baathist party members from power. The orders alienated the Sunni population so fiercely that, by the time Abizaid took over as CentCom commander, an insurgency was taking form, raging against both the new Shiite-led Iraqi government and its American protectors.
Abizaid heard about the vast reams of intelligence coming out of Iraq—communications intercepts, GPS data from insurgents’ cell phones, photo imagery of Sunni jihadists flowing in from the Syrian border—but nobody was piecing the elements together, much less incorporating them into a military plan. Abizaid wanted to get inside those intercepts and send the insurgents false messages, directing them to a certain location, where U.S. special-ops forces would be lying in wait to kill them. But he needed cooperation from NSA and CIA to weave this intel together, and he needed authorization from political higher-ups to use it as an offensive tool. At the moment, he had neither.
The permanent bureaucracies at Langley and Fort Meade didn’t want to cooperate: they knew that the world was watching—including the Russians and the Chinese—and they didn’t want to waste their best intelligence-gathering techniques on a war that many of them regarded as less than vital. Meanwhile, Secretary of Defense Donald Rumsfeld wouldn’t acknowledge that there was an insurgency. (Rumsfeld was old enough to know, from Vietnam days, that defeating an insurgency required a counterinsurgency strategy, which in turn would leave tens of thousands of U.S. troops in Iraq for years, maybe decades—whereas he just wanted to get in, get out, and move on to oust the next tyrant standing in the way of America’s post–Cold War dominance.)
Out of frustration, Abizaid turned to a one-star general named Keith Alexander. The two had graduated from West Point a year apart—Abizaid in the class of 1973, Alexander in ’74—and they’d met again briefly, almost twenty years later, during battalion-command training in Italy. Now Alexander was in charge of the Army Intelligence and Security Command, at Fort Belvoir, Virginia, the land forces’ own SIGINT center, with eleven thousand surveillance officers deployed worldwide—a mini-NSA all its own, but geared explicitly to Army missions. Maybe Alexander could help Abizaid put an operational slant on intelligence data.
He’d come to the right man. Alexander was something of a technical wizard. Back at West Point, he worked on computers in the electrical engineering and physics departments. In the early 1980s, at the Naval Postgraduate School, in Monterey, California, he built his own computer and developed a program that taught Army personnel how to make the transition from handwritten index cards to automated databases. Soon after graduating, he was assigned to the Army Intelligence Center, at Fort Huachuca, Arizona, where he spent his first weekend memorizing the technical specifications for all the Army’s computers, then prepared a master plan for all intelligence and electronic-warfare data systems. In the run-up to Operation Desert Storm, the first Gulf War of 1991, Alexander led a team in the 1st Armored Division, at Fort Hood, Texas, wiring together a series of computers so that they could process data more efficiently. Rather than relying on printouts and manual indexing, the analysts and war planners back in the Pentagon could access data that was stored and sorted to their needs.
Before assuming his present command at Fort Belvoir, Alexander had been Central Command’s chief intelligence officer. He told Abizaid about the spate of technical advances on the boards, most remarkably tools that could intercept signals from the chips in cell phones, either directly or through the switching nodes in the cellular network, allowing SIGINT teams to track the location and movements of Taliban fighters in Pakistan’s northwest frontier or the insurgents in Iraq—even if their phones were turned off. This was a new weapon in the cyber arsenal; no one had yet exploited its possibilities, much less devised the procedures for one agency to share the intelligence feed with other agencies or with commanders in the field. Abizaid was keen to get this sharing process going.
Although CentCom oversaw American military operations in Iraq, Afghanistan, and their neighboring countries, its headquarters were in Tampa, Florida, so Abizaid made frequent trips to Washington. By August, one month into his tenure as its commander, intelligence on insurgents was flowing into Langley and Fort Meade. He could see the “ratlines” of foreign jihadists crossing into Iraq from Syria; he read transcripts of their phone conversations, which were correlated with maps of their precise locations. He wanted to give American soldiers access to this intel, so they could use it on the battlefield.
By this time, Keith Alexander had been promoted to the Army’s deputy chief of staff for intelligence, inside the Pentagon, so he and Abizaid collaborated on the substantive issues and the bureaucratic politics. They found an ideal enabler in General Stanley McChrystal, head of the Joint Special Operations Command. If this new cache of intelligence made its way to the troops in the field, the shadow soldiers of JSOC would be the first troops to get and use it; and McChrystal, a soldier of spooky intensity, was keen to make that happen. All three worked their angles in the Pentagon and the intelligence community, but the main obstacle was Rumsfeld, who still refused to regard the Iraqi rebels as insurgents.
Finally, in January 2004, Abizaid arranged a meeting with President Bush and made the case for launching cyber offensive operations against the insurgents. Bush told his national security adviser, Condoleezza Rice, to put the subject on the agenda for the next NSC meeting. When it came up several days later, the deputies from the intelligence agencies knocked it down with the age-old argument: the intercepts were providing excellent information on the insurgents; attacking the source of the information would alert them (and other potential foes who might be watching) that they were being hacked, prompting them to change their codes or toss their cell phones, resulting in a major intelligence loss.
Meanwhile, the Iraqi insurgents were growing stronger, America was losing the war, and Bush was losing patience. Numbed by the resistance to new approaches and doubting that an outside army could make things right in Iraq anyway, Abizaid moved toward the view that, rather than redoubling its efforts, the United States should start getting out.
But then things started to change. Rumsfeld, disenchanted with all the top Army generals, passed over the standing candidates for the vacated post of Army chief of staff and, instead, summoned General Peter Schoomaker out of retirement.
Schoomaker had spent most of his career in Special Forces, another smack in the face of the regular Army. (General Norman Schwarzkopf, the hero of Desert Storm, had spoken for many of his peers when he scoffed at Special Forces as out-of-control “snake eaters.”) McChrystal, who had long known and admired Schoomaker, told him about the ideas that he, Abizaid, and Alexander had been trying to push through. The new chief found them appealing but understood that they needed an advocate high up in the intelligence community. At the start of 2005, Mike Hayden was nearing the end of an unusually long six-year tenure as director of the NSA. Schoomaker urged Rumsfeld to replace him with Alexander.
Seventeen years had passed since an Army officer had run the NSA; in its fifty-three-year history, just three of its directors had been Army generals, compared with seven Air Force generals and five Navy admirals. The pattern had reflected, and stiffened, the agency’s resistance to sharing intelligence with field commanders of “small wars,” who tended to be Army officers. Now the United States was fighting a small war, which the sitting president considered a big deal; the Army, as usual, was taking the brunt of the casualties, and Alexander planned to use his new post to help turn the fighting around.
McChrystal had already made breakthroughs in weaving together the disparate strands of intelligence. He’d assumed command of JSOC in September 2003. That same month, Rumsfeld signed an executive order authorizing JSOC to take military action against al Qaeda anywhere in the world without prior approval of the president or notification of Congress. But McChrystal found himself unable to do much with this infusion of great power: the Pentagon chiefs were cut off from the combatant commands; the combatant commands were cut off from the intelligence agencies. McChrystal saw al Qaeda as a network, each cell’s powers enhanced by its ties with other cells; it would take a network to fight a network, and McChrystal set out to build his own. He reached out to the CIA, the services’ separate intelligence bureaus, the National Geospatial-Intelligence Agency, the intel officers at CentCom. He prodded them into agreements to share data and imagery from satellites, drones, cell phone intercepts, and landline wiretaps. (When the Bush administration rebuilt the Iraqi phone system after Saddam’s ouster, the CIA and NSA were let in to attach some devices.) But to make this happen—to fuse all this information into a coherent database and to transform it into an offensive weapon—he also needed the analytical tools and surveillance technology of the NSA.
That’s where Alexander came in.
As Keith Alexander took over Fort Meade, on August 1, 2005, his predecessor, Mike Hayden, stepped down, seething with suspicion.
A few years earlier, when Alexander was running the Army Intelligence and Security Command at Fort Belvoir, the two men had clashed in a dragged-out struggle for turf and power, leaving Hayden with a bitter taste, a shudder of distrust, about every aspect and activity of the new man in charge.
From the moment Alexander assumed command at Fort Belvoir, he was determined to transform the place from an administrative center—narrowly charged with providing signals intelligence to Army units, subordinate to both the Army chief of staff and the NSA director—into a peer command, engaged in operations, specifically in the war on terror.
In his earlier post as CentCom’s intelligence chief, Alexander had helped develop new analytic tools that processed massive quantities of data and parsed them for patterns and connections. He thought the technique—tracing telephone and email links (A was talking to B, who was talking to C, and on and on)—could help track down terrorists and unravel their networks. And it could serve as Alexander’s entrée to the intelligence world’s upper echelon.
But he needed to feed his software with data—and the place that had the data was the NSA. He asked Hayden to share it; Hayden turned him down. The databases were the agency’s crown jewels, the product of decades of investments in collection technology, computers, and human capital. But Hayden’s resistance wasn’t just a matter of turf protection. For years, other rival intelligence agencies had sought access to Fort Meade’s databases, in order to run some experiment or pursue an agenda of their own. But SIGINT analysis was an esoteric specialty; raw data could sire erroneous, even dangerous, conclusions if placed in untrained hands. And what Alexander wanted to do with the data—“traffic analysis,” as NSA hands called it—was particularly prone to this tendency. Coincidences weren’t proof of causation; a shared point of contact—say, a phone number that a few suspicious people happened to call—wasn’t proof of a network, much less a conspiracy.
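The “traffic analysis” the NSA hands worried about can be illustrated with a toy sketch (all names and call records below are invented): a contact graph sweeps in everyone within a couple of hops of a suspect, whether the shared point of contact is a conspirator or a pizzeria.

```python
from collections import defaultdict

# Toy call records: (caller, callee) pairs. All names are invented.
calls = [
    ("A", "X"), ("B", "X"), ("C", "X"),  # three people call one number
    ("A", "B"),                          # A also talks to B directly
]

# Build an undirected contact graph from the call records.
contacts = defaultdict(set)
for caller, callee in calls:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

def two_hop_network(seed):
    """Link analysis: everyone within two hops of a suspect."""
    first = contacts[seed]
    second = set()
    for person in first:
        second |= contacts[person]
    return (first | second) - {seed}

# X's two-hop "network" sweeps in A, B, and C -- but a shared point of
# contact (a doctor's office, a pizzeria) would produce exactly the
# same graph, which is the danger the NSA analysts warned about.
print(sorted(two_hop_network("X")))  # → ['A', 'B', 'C']
```

The graph itself says nothing about why the calls were made; that judgment was the esoteric specialty Hayden feared would be lost on untrained hands.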
Fort Belvoir had a particularly flaky record of pushing precisely these sorts of flimsy connections. In 1999, two years before Alexander arrived, his predecessor, Major General Robert Noonan, had set up a special office called the Land Information Warfare Activity, soon changed to the Information Dominance Center. One of its experiments was to see whether a computer program could automatically detect patterns in data on the Internet—specifically, patterns indicating foreign penetration into American research and development programs.
Art Money, the assistant secretary of defense for command, control, communications, and intelligence, had funded the experiment, and, when it was finished, he and John Hamre, the deputy secretary of defense, went to Belvoir for a briefing. Noonan displayed a vast scroll of images and charts, showing President Clinton, former secretary of defense William Perry, and Microsoft CEO Bill Gates posing with Chinese officials: the inference seemed to be that China had infiltrated the highest ranks of American government and industry.
Hamre was outraged, especially since the briefing had already been shown to a few Republicans in Congress. Noonan tried to defend the program, saying that it wasn’t meant as an intelligence analysis but rather as a sort of science-fair project, showing the technology’s possibilities. Hamre wasn’t amused; he shut the project down.
The architect of the project was Belvoir’s chief technology adviser, a civilian engineer named James Heath. Intense, self-confident, and extremely introverted (when he talked with colleagues, he didn’t look down at their shoes, he looked down at his own shoes), Heath was fanatical about the potential of tracking connections in big data—specifically what would later be called “metadata.”
Hamre’s slam might have meant the end of some careers, but Heath stayed on and, when Alexander took command of Fort Belvoir in early 2001, his fortunes revived. The two had known each other since the mid-1990s, when Alexander commanded the 525th Military Intelligence Brigade at Fort Bragg, North Carolina, and Heath was his science adviser. They were working on “data visualization” software even then, and Alexander was impressed with Heath’s acumen and single-mindedness. Heath’s workmates, even the friendly ones, referred to him as Alexander’s “mad scientist.”
One of Mike Hayden’s concerns about Alexander’s request for raw NSA data was that Heath would be the one running the data. This was another reason why Hayden denied the request.
But Alexander fought back. Soft-spoken, charming, even humorous in an awkward way that cloaked his aggressive ambition, he mounted a major lobbying campaign to get the data. He told anyone and everyone with any power or influence, especially on Capitol Hill and in the Pentagon, that he and his team at Fort Belvoir had developed powerful software for tracking down terrorists in a transformative way but that Michael Hayden was blocking progress and withholding data for parochial reasons.
Of course, Hayden had his own contacts, and he started to hear reports of this Army two-star’s machinations. One of his sources even told him that Alexander was knocking on doors at the Justice Department, asking about the ways of the Foreign Intelligence Surveillance Court, which authorized warrants for intercepts of suspected agents and spies inside U.S. borders. This was NSA territory, and no one else had any business—legally, politically, or otherwise—sniffing around it.
Hayden started referring to Alexander as “the Nike swoosh,” after the sneaker brand’s logo (a fleet, curved line), which carried the slogan “Just do it”—a fitting summary, he thought, of Alexander’s MO.
But Alexander won over Rumsfeld, who didn’t much like Hayden and was well disposed to the argument that the NSA was too slow. Hayden read the handwriting on the wall and, in June 2001, worked out an arrangement to share certain databases with Fort Belvoir. The mutual distrust persisted: Alexander suspected that Hayden wasn’t giving him all the good data; Hayden suspected that Alexander wasn’t stripping the data of personal information about Americans who would unavoidably get caught up in the surveillance, as the law required. In the end, the analytical tools that Alexander and Heath had so touted neither turned up new angles nor unveiled any terrorists. Hayden and Alexander both failed to detect signs of the September 11 attack.
Now, four years after 9/11, following a brief term as the Army’s top intelligence officer in the Pentagon, Alexander was taking over the palace at Fort Meade, taking possession of the databases—and bringing along Heath as his scientific adviser.
In his opening months on the job, Alexander had no time to push ahead with his metadata agenda. The top priority was the war in Iraq, which, for him, meant loosening the traditional strictures on NSA assets, putting SIGINT teams in regular contact with commanders on the ground, and tasking TAO—the elite hackers in the Office of Tailored Access Operations—to address the specific, the tailored, needs of General McChrystal’s Special Forces in their fight against the insurgents.
He also had to repair some damage within NSA.
One week before Alexander arrived at Fort Meade, William Black, Hayden’s deputy for the previous five years, pulled the plug on Trailblazer, the agency’s gargantuan outsourced project to monitor, intercept, and sift communications from the digital global network.
Trailblazer had consumed $1.2 billion of the agency’s budget since the start of the decade, and it had proved to be a disaster: a fount of corporate mismanagement, cost overruns, and—more to the point, as Alexander saw it—conceptual wrongheadedness. It was a monolithic system, built around massive computers to capture and process the deluge of digital data. The problem was that the design was too simple. Mathematical brute force worked in the era of analog signals intelligence, when an entire conversation or fax transmission spilled through the same wire or radio burst; but digital data streamed through cyberspace in packets, breaking up into tiny pieces, each of which traveled the fastest possible route before reassembling at the intended destination. It was no longer enough to collect signals from sensors out in the field, then process the data at headquarters: there were too many signals, racing too quickly through too many servers and networks. Trailblazer could be “scaled up” only so far, before the oceans of data overwhelmed it. Sensors had to process the information, and integrate it with the feed from other sensors, in real time.
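The packet problem that doomed Trailblazer’s brute-force design can be seen in a simplified sketch (the message and packet size below are invented): a digital message splits into sequence-numbered packets that may travel different routes and arrive out of order, so a collector watching any single route sees only fragments.

```python
import random

def packetize(message, size=4):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Reorder packets by sequence number and rejoin the payload."""
    return "".join(chunk for _, chunk in sorted(packets))

msg = "meet at the safe house"
packets = packetize(msg)
random.shuffle(packets)  # packets arrive in arbitrary order

# Only the endpoint, which holds every packet, recovers the message.
assert reassemble(packets) == msg

# A sensor sitting on one route sees only some packets -- fragments,
# not conversations -- hence the push to process and integrate data
# in the network, at the sensors, rather than back at headquarters.
```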
Alexander’s first task, then, was to replace Trailblazer—in other words, to devise a whole new approach to SIGINT for the digital age. His predecessors of the last decade had faced the same challenge, though less urgently. Ken Minihan possessed the vision, but lacked the managerial skills; Mike Hayden had the managerial acumen, but succumbed to the presumed expertise of outside contractors, who led him down a costly path to nowhere. Alexander was the first NSA director who understood the technology at the center of the enterprise, who could talk with the SIGINT operators, TAO hackers, and Information Assurance analysts on their own level. He was, at heart, one of them: more a computer geek than a policy maven. He would spend hours down on the floor with his fellow geeks, discussing the problems, the possible approaches, the solutions—so much so that his top aides installed more computers in his office on the building’s eighth deck, so he could work on his beloved technical puzzles without spending too much time away from the broader issues and agendas that he needed to address as director.
As a result of his technical prowess and his ability to speak a common language with the technical personnel, he and his staff devised the conceptual outlines of a new system in a matter of months and launched the first stages of a new program within a year. They called it Turbulence.
Instead of a single, monolithic system that tried to do everything, Turbulence consisted of nine smaller systems. In part, the various systems served as backups or alternative approaches, in case the others failed or the global technology shifted. More to the point, each of the systems sliced into the network from a different angle. Some pieces intercepted signals from satellites, microwave, and cable communications; others went after cell phones; still others tapped into the Internet—and they went after Internet traffic on the level of data packets, the basic unit of the Internet itself, either tracking the packets from their origins or sitting on the backbone of Internet traffic (often with the cooperation of the major Internet service providers), detecting a target’s packet, then alerting the hackers at TAO to take over.
It wasn’t just Alexander’s technical acumen that made Turbulence possible; it was also the huge advances—in data processing, storage, and indexing—that had taken place in just the previous few years. Alexander took over Fort Meade at just the moment when, in the world of computers, his desires converged with reality.
Over the ensuing decade, as Turbulence matured and splintered into specialized programs (with names like Turbine, Turmoil, QuantumTheory, QuantumInsert, and XKeyscore), it evolved into a thoroughly interconnected, truly global system that would make earlier generations of signals intelligence seem clunky by comparison.
Turbulence drew on the same massive databases as Trailblazer; what differed was the processing and sifting of the data, which were far more precise, more tailored to the search for specific information, and more closely shaped to the actual pathways—the packets and streams—of modern digital communications. And because the intercepts took place within the network, the target could be tracked on the spot, in real time.
In the early stages of Turbulence, a parallel program took off, derived from the same technical concepts, involving some of the same technical staff, but focused on a specific geographical region. It was called the RTRG—for Real Time Regional Gateway—and its first mission was to hunt down insurgents in Iraq.
RTRG got under way early in 2007, around the same time that General David Petraeus assumed command of U.S. forces in Iraq and President Bush ordered a “surge” in the number of those forces. Petraeus and Alexander had been friendly for more than thirty years: they’d been classmates at West Point, a source of bonding among Army officers, and they’d renewed their ties years later as brigade commanders at Fort Bragg. When they met again, as Petraeus led the fight in Baghdad, they made a natural team: Petraeus wanted to win the war through a revival of counterinsurgency techniques, and Alexander was keen to plow NSA resources into helping him.
Roadside bombs were the biggest threat to American soldiers in Iraq. Intelligence on the bombers and their locations flooded into NSA computers, from cell phone intercepts, drone and satellite imagery, and myriad other sources. But it took sixteen hours for the data to flow to the Pentagon, then to Fort Meade, then to the tech teams for analysis, then back to the intel centers in Baghdad, then to the soldiers in the field—and that was too long: the insurgents had already moved elsewhere.
Alexander proposed cutting out the middlemen and putting NSA equipment and analysts inside Iraq. Petraeus agreed. They first set up shop, a mini-NSA, in a heavily guarded concrete hangar at Balad Air Base, north of Baghdad. After a while, some of the analysts went out on patrol with the troops, collecting and processing data as they moved. Over the next few years, six thousand NSA officials were deployed to Iraq and, later, Afghanistan; twenty-two of them were killed, many of them by roadside bombs while they were out with the troops.
But their efforts had an impact: in the first few months, the lag time between collecting and acting on intelligence was slashed from sixteen hours to one minute.
By April, Special Forces were using this cache of intelligence to capture not only insurgents but also their computers; and stored inside those computers were emails, phone numbers, usernames, and passwords of other insurgents, including al Qaeda leaders—the stuff of a modern spymaster’s dreams.
Finally, Alexander and McChrystal had the ingredients for the cyber offensive campaign that they’d discussed with John Abizaid four years earlier. The NSA teams at Balad Air Base hoisted their full retinue of tricks and tradecraft. They intercepted insurgents’ emails: in some cases, they merely monitored the exchanges to gain new intelligence; in other cases, they injected malware to shut down insurgents’ servers; and in other—many other—cases, they sent phony emails to insurgents, ordering them to meet at a certain time, at a certain location, where U.S. Special Forces would be hiding and waiting to kill them.
In 2007 alone, these sorts of operations, enabled and assisted by the NSA, killed nearly four thousand Iraqi insurgents.
The effect was not decisive, nor was it meant to be: the idea was to provide some breathing space, a zone of security, for Iraq’s political factions to settle their quarrels and form a unified state without having to worry about bombs blowing up every day. The problem was that the ruling faction, the Shiite government of Prime Minister Nouri al-Maliki, didn’t want to settle its quarrels with rival factions among the Sunnis or Kurds; and so, after the American troops left, the sectarian fighting resumed.
But that pivotal year of 2007 saw a dramatic quelling of violence and the taming, co-optation, or surrender of nearly all the active militias. Petraeus’s counterinsurgency strategy had something to do with this, as did Bush’s troop surge. But the tactical gains could not have been won without the Real Time Regional Gateway of the NSA.
RTRG wasn’t the only innovation that the year saw in cyber offensive warfare.
On September 6, just past midnight, four Israeli F-15 fighter jets flew over an unfinished nuclear reactor in eastern Syria, which was being built with the help of North Korean scientists, and demolished it with a barrage of laser-guided bombs and missiles. Syrian president Bashar al-Assad was so stunned that he issued no public protest: better to pretend nothing happened than to acknowledge such a successful incursion. The Israelis said nothing either.
Assad was baffled. The previous February, his generals had installed new Russian air-defense batteries; the crews had been training ever since, and, owing to tensions on the Golan Heights, they’d been on duty the night of the attack; yet they reported seeing no planes on their radar screens.
The Israelis managed to pull off the attack—code-named Operation Orchard—because, ahead of time, Unit 8200, their secret cyber warfare bureau, had hacked the Syrian air-defense radar system. They did so with a computer program called Suter, developed by a clandestine U.S. Air Force bureau called Big Safari. Suter didn’t disable the radar; instead, it disrupted the data link connecting the radar with the screens of the radar operators. At the same time, Suter hacked into the screens’ video signal, so that the Unit 8200 crew could see what the radar operators were seeing. If all was going well, they would see blank screens—and all went well.
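The data-link trick attributed to Suter can be sketched, in heavily simplified form, as a man-in-the-middle between sensor and screen (every name and value below is invented for illustration):

```python
# A toy man-in-the-middle on a radar data link, in the spirit of the
# technique described above: the attacker sits between the radar and
# the operators' screens, keeping a copy of the real picture while
# forwarding a falsified (empty) one.

def radar_sweep():
    """The real sensor picture: a list of detected contacts."""
    return [{"track": "F-15", "bearing": 270, "range_km": 40}
            for _ in range(4)]

def mitm_data_link(real_feed):
    """Intercept the link: log the truth, forward a clean screen."""
    attacker_view = list(real_feed)  # the attackers see what the radar sees
    operator_view = []               # the operators see an empty sky
    return attacker_view, operator_view

contacts = radar_sweep()
seen_by_attacker, seen_by_operator = mitm_data_link(contacts)

assert len(seen_by_attacker) == 4  # four inbound jets on the real feed
assert seen_by_operator == []      # blank screens at the radar console
```

The radar itself keeps working, which is the point: nothing looks broken to the defenders, because the deception happens on the link, not the sensor.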
It harked back to the campaign waged in the Balkans, ten years earlier, when the Pentagon’s J-39 unit, the NSA, and the CIA’s Information Operations Center spoofed the Serbian air-defense command by tapping into its communications lines and sending false data to its radar screens. And the Serbian campaign had its roots in the plan dreamed up, five years earlier, by Ken Minihan’s demon-dialers at the Air Force Information Warfare Center in San Antonio, to achieve air surprise in the (ultimately aborted) invasion of Haiti by jamming all the island’s telephones.
The Serbian and Haitian campaigns were classic cases of information warfare in the pre-digital age, when the armed forces of many nations ran communications through commercial phone lines. Operation Orchard, like the NSA-JSOC operation in Iraq, exploited the growing reliance on computer networks. Haiti and the Balkans were experiments in proto-cyber warfare; Operation Orchard and the roundup of jihadists in Iraq marked the start of the real thing.
Four and a half months earlier, on April 27, 2007, riots broke out in Tallinn, the capital of Estonia, the smallest and most Western-leaning of the three former Soviet republics on the Baltic Sea, just south of Finland. Estonians had chafed under Moscow’s rule since the Soviet occupation began at the start of World War II. When Mikhail Gorbachev took over the Kremlin and loosened his grip almost a half century later, Estonians led the region-wide rebellion for independence that helped usher in the collapse of the Soviet Union. When Vladimir Putin ascended to power at the turn of the twenty-first century on a wave of resentment and nostalgia for the days of great power, tensions once again sharpened.
The riots began when Estonia’s president, under pressure from Putin, overruled a law that would have removed all the monuments that had gone up during the years of Soviet occupation, including a giant bronze statue of a Red Army soldier. Thousands of Estonians took to the streets in protest, rushing the bronze statue, trying to topple it themselves, only to be met by the town’s ethnic Russians, who fought back, seeing the protest as an insult to the motherland’s wartime sacrifices. Police intervened and moved the statue elsewhere, but street fights continued, at which point Putin intervened—not with troops, as his predecessors might have done, but with an onslaught of ones and zeros.
The 1.3 million citizens of Estonia were among the most digitally advanced on earth: a larger percentage of them were hooked up to the Internet, and more reliant on broadband services, than the citizens of any other country. The day after the Bronze Night riot, as it was called, they were hit with a massive cyber attack, their networks and servers flooded with so much data that they shut down. And unlike most denial-of-service attacks, which tended to be one-off bits of mischief, this attack persisted and was followed up—in three separate waves—with infections of malware that spread from one computer to another, across the tiny nation, in all spheres of life. For three weeks, and sporadically for a whole month, many Estonians were unable to use not just their computers but their telephones, bank accounts, credit cards: everything was hooked up to one network or another—the parliament, the government ministries, mass media, shops, public records, military communications—and it all broke down.
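The arithmetic behind such a flood is simple: a server can process only so many requests per second, and once hostile traffic exceeds that capacity, legitimate requests are crowded out of the queue and dropped. A minimal sketch of the dynamic (all rates and limits here are illustrative, not figures from the Estonian attack):

```python
from collections import deque
import random

def simulate(capacity, legit_rate, attack_rate, ticks, queue_limit, seed=1):
    """Return the fraction of legitimate requests a server actually serves
    when attack traffic competes for a bounded request queue."""
    random.seed(seed)
    queue = deque()
    served = arrived = 0
    for _ in range(ticks):
        arrivals = ['legit'] * legit_rate + ['attack'] * attack_rate
        random.shuffle(arrivals)          # traffic arrives interleaved
        for req in arrivals:
            if req == 'legit':
                arrived += 1
            if len(queue) < queue_limit:  # otherwise the request is dropped
                queue.append(req)
        for _ in range(min(capacity, len(queue))):
            if queue.popleft() == 'legit':
                served += 1
    return served / arrived

# Normal day: capacity comfortably exceeds legitimate demand.
normal = simulate(capacity=100, legit_rate=50, attack_rate=0,
                  ticks=100, queue_limit=200)
# Under flood: attack traffic outnumbers legitimate traffic 100 to 1.
flooded = simulate(capacity=100, legit_rate=50, attack_rate=5000,
                   ticks=100, queue_limit=200)
```

In the normal run every legitimate request is served; under the flood, only a few percent get through—the server is still running, but to its users it has effectively shut down.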
As a member of NATO, Estonia requested aid under Article 5 of the North Atlantic Treaty, which pledged each member-state to treat an attack on one as an attack on all. But the allies were skeptical. Was this an attack, in that sense? Was it an act of war? The question was left open. No troops were sent.
Nonetheless, Western computer specialists rushed to Estonia’s defense on their own initiative, joining and aiding the considerable, skilled white-hat hacker movement inside Estonia. Using a variety of time-honored techniques, they tracked and expelled many of the intruders, softening the blow that Estonia would have suffered had the Tallinn government been its only line of defense.
Kremlin officials denied involvement in the attack, and the Westerners could find no conclusive evidence pointing to a single culprit—one reason, among several, for their reluctance to regard the cyber attacks as cause to invoke Article 5. Attributing the source of a cyber attack was an inherently difficult matter, and whoever launched this one had covered his tracks expertly. Still, forensic analysts did trace the malware code to a Cyrillic keyboard; in response, Kremlin authorities arrested a single member of the nationalist youth organization Nashi (the Russian word for “ours”), fined him the equivalent of a thousand dollars, and pronounced the crime solved. But no one believed that a single lowly citizen, or a small private group, could have found, much less hacked, some of the sensitive Estonian sites that had been taken down all at once and for such a long time.
The cyber strikes in Estonia proved to be the dress rehearsal for a coordinated military campaign, a little over a year later, in which Russia launched simultaneous air, ground, naval, and cyber operations against the former Soviet republic of Georgia.
Since the end of the Cold War, tensions had been rife between Moscow and the newly independent Georgian government over the tiny oblasts of South Ossetia and Abkhazia, formally a part of Georgia but dense with ethnic Russians. On August 1, 2008, Ossetian separatists shelled the Georgian village of Tskhinvali. On the night of August 7–8, Georgian soldiers mobilized, suppressing the separatists and recapturing the town in a few hours. The next day, under the pretense of “peace enforcement,” Russian troops and tanks rolled into the village, supported by air strikes and a naval blockade along the coast.
At the precise moment when the tanks and planes crossed the South Ossetian line, fifty-four Georgian websites—related to mass media, finance, government ministries, police, and armed forces—were hacked and, along with the nation’s entire Internet service, rerouted to Russian servers, which shut them down. Georgian citizens couldn’t gain access to information about what was happening; Georgian officers had trouble sending orders to their troops; Georgian politicians met long delays when trying to communicate with the rest of the world. As a result, Russian propaganda channels were first to beam Moscow’s version of events to the world. It was a classic case of what was once called information warfare or counter command-control warfare—a campaign to confuse, bewilder, or disorient the enemy and thus weaken, delay, or destroy his ability to respond to a military attack.
The hackers also stole material from some sites that gave them valuable intelligence on the Georgian military—its operations, movements, and communiqués—so the Russian troops could roll over them all the more swiftly.
Just as with Estonia, Kremlin spokesmen denied launching the cyber attacks, though the timing—coordinated so precisely with the other forms of attack—cast extreme doubt on their claims of innocence.
After four days of fighting, the Georgian army retreated. Soon after, Russia’s parliament formally recognized South Ossetia and Abkhazia as independent states. Georgia and much of the rest of the world disputed the status, seeing the enclaves as occupied Georgian territory, but there wasn’t much they could do about it.
In the sixteen months from April 2007 to August 2008, when America hacked Iraqi insurgents’ email, Israel spoofed Syrian air defenses, and Russia flooded the servers of Estonia and Georgia, the world witnessed the dawn of a new era in cyber warfare—the fulfillment of a decade’s worth of studies, simulations, and, at the start of the decade in Serbia, a tentative tryout.
The Estonian operation was a stab at political coercion, though in that sense it failed: in the end, the statue of the Red Army soldier was moved from the center of Tallinn to a military graveyard on the town’s outskirts.
The other three operations were successes, but cyber’s role in each was tactical: an adjunct to conventional military operations, in much the same way as radar, stealth technology, and electronic countermeasures had been in previous conflicts. Its effects were probably short-lived as well; had the conflicts gone on longer, the target nations would likely have found ways to deflect, diffuse, or disable the cyber attacks, just as the Estonians did with the help of Western allies. Even in its four-day war in South Ossetia, Georgia managed to reroute some of its servers to Western countries and filter some of the Russian intrusions; the cyber attack evolved into a two-way cyber war, with improvised tactics and maneuvers.
In all its incarnations through the centuries, information warfare had been a gamble, its payoff lasting a brief spell, at best—just long enough for spies, troops, ships, or planes to cross a border undetected, or for a crucial message to be blocked or sent and received.
One question that remained about this latest chapter, in the Internet era, was whether ones and zeroes, zipping through cyberspace from half a world away, could inflict physical damage on a country’s assets. The most alarming passages of the Marsh Report and a dozen other studies had pointed to the vulnerability of electrical power grids, oil and gas pipelines, dams, railroads, waterworks, and other pieces of a nation’s critical infrastructure—all of them increasingly controlled by computers run on commercial systems. The studies warned that foreign intelligence agents, organized crime gangs, or malicious anarchists could take down these systems with cyber attacks from anywhere on earth. Some classified exercises, including the simulated phase of Eligible Receiver, posited such attacks. But were the scenarios plausible? Could a clever hacker really destroy a physical object?
On March 4, 2007, the Department of Energy conducted an experiment—called the Aurora Generator Test—to answer that question.
The test was run by a retired naval intelligence officer named Michael Assante. Shortly after the 9/11 attacks, Assante was detailed to the FBI’s National Infrastructure Protection Center, which had been set up in the wake of Solar Sunrise and Moonlight Maze, the first major cyber intrusions into American military networks. While most of the center’s analysts focused on Internet viruses, Assante examined the vulnerability of the automated control systems that ran power grids, pipelines, and other pieces of critical infrastructure that the Marsh Report had catalogued.
A few years later, Assante retired from the Navy and went to work as vice president and chief security officer of American Electric Power, which delivered electricity to millions of customers throughout the South, Midwest, and Mid-Atlantic. Several times he raised these problems with his fellow executives. In response, they’d acknowledge that someone could hack into a control system and cause power outages, but, they would add, the damage would be short-term: a technician would replace the circuit breaker, and the lights would go back on. But Assante would shake his head. Back at the FBI, he’d talked with protection and control engineers, the specialists’ specialists, who reminded him that circuit breakers were like fuses: their function was to protect very costly components, such as power generators, which were much harder, and would take much longer, to replace. A malicious hacker wouldn’t likely stop at blowing the circuit breaker; he’d go on to damage or destroy the generator.
Finally persuaded that this might be a problem, Assante’s bosses sent him to the Idaho National Laboratory, an 890-square-mile federal research facility in the prairie desert outside Idaho Falls, to examine the issues more deeply. First, he did mathematical analyses, then bench tests of miniaturized models, and finally set up a real-life experiment. The Department of Homeland Security had recently undertaken a project on the most worrisome dangers in cyberspace, so its managers agreed to help fund it.
The object of the Aurora test was a 2.25-megawatt power generator, weighing twenty-seven tons, installed inside one of the lab’s test chambers. On a signal from Washington, where officials were watching the test on a video monitor, a technician typed a mere twenty-one lines of malicious code into a digital relay, which was wired to the generator. The code opened a circuit breaker in the generator’s protection system, then closed it just before the system responded, throwing its operations out of sync. Almost instantly, the generator shook, and some parts flew off. A few seconds later, it shook again, then belched out a puff of white smoke and a huge cloud of black smoke. The machine was dead.
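A toy model conveys why out-of-sync reclosing is so violent: while the breaker is open, the islanded generator drifts out of phase with the grid, and the jolt on reclosing grows with that phase offset, peaking when the machine is 180 degrees out of step. All the numbers below are illustrative stand-ins, not the test’s actual parameters:

```python
import math

GRID_HZ = 60.0
DRIFT_HZ = 0.5       # islanded generator runs slightly faster than the grid
TORQUE_LIMIT = 1.5   # per-unit torque the shaft can absorb (illustrative)

def phase_offset(seconds_open):
    """Degrees the generator drifts from the grid while the breaker is open."""
    return (360.0 * DRIFT_HZ * seconds_open) % 360.0

def reclose_torque(offset_deg):
    """Per-unit jolt at reclose, proportional to the voltage difference
    across the breaker; it peaks when the machine is 180 degrees out."""
    return 2.0 * abs(math.sin(math.radians(offset_deg / 2.0)))

# Proper switching: breaker open a fraction of a cycle, offset stays tiny.
safe = reclose_torque(phase_offset(0.01))
# Malicious reclose: breaker held open a full second, machine out of step.
attack = reclose_torque(phase_offset(1.0))
```

In this sketch the safe reclose stays far under the shaft’s limit while the out-of-sync reclose slams it with more stress than it was built to take—repeat the cycle a few times, as the Aurora code did, and the machine tears itself apart.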
Before the test, Assante and his team figured that there would be damage; that’s what their analyses and simulations had predicted. But they didn’t expect the magnitude of damage or how quickly it would come. Start-up to breakdown, the test lasted just three minutes, and it would have been over a minute or two sooner had the crews not paused to assess each phase of damage before moving on.
If the military clashes of 2007—in Iraq, Syria, and the former Soviet republics—confirmed that cyber weapons could play a tactical role in new-age warfare, the Aurora Generator Test revealed that they might play a strategic role, too, as instruments of leverage or weapons of mass destruction, not unlike nuclear weapons. They would, of course, wreak much less destruction than atomic or hydrogen bombs, but they were much more accessible—no Manhattan Project was necessary, only the purchase of computers and the training of hackers—and their effects were lightning fast.
There had been similar, if less dramatic, demonstrations of these effects in the past. In 2000, a disgruntled former worker at an Australian water-treatment center hacked into its central computers and sent commands that disabled the pumps, allowing raw sewage to flow into the water. The following year, hackers broke into the servers of a California company that transmitted electrical power throughout the state, then probed its network for two weeks before getting caught.
The problem, in other words, was long known to be real, not just theoretical, but few companies had taken any steps to solve it. Nor had government agencies stepped in: those with the ability lacked the legal authority, while those with the legal authority lacked the ability; since action was difficult, evasion was easy. But for anyone who watched the video of the Aurora Generator Test, evasion was no longer an option.
One of the video’s most interested viewers, who showed it to officials all around the capital, from the president on down, was the former NSA director who coined the phrase “information warfare,” Vice Admiral Mike McConnell.
I. Ironically, while complaining that Alexander might not handle NSA data in a strictly legal manner, Hayden was carrying out a legally dubious domestic-surveillance program that mined the same NSA database, including phone conversations and Internet activity of American citizens. Hayden rationalized this program, code-named Stellar Wind, as proper because it had been ordered by President Bush and deemed lawful by Justice Department lawyers.