CHAPTER 3


A CYBER PEARL HARBOR

ON April 19, 1995, a small band of anti-government militants, led by Timothy McVeigh, blew up a federal office building in Oklahoma City, killing 168 people, injuring 600 more, and destroying or damaging 325 buildings across a sixteen-block radius, causing more than $600 million in damage. The shocking thing that emerged from the subsequent investigation was just how easily McVeigh and his associates had pulled off the bombing. It took little more than a truck and a few dozen bags of ammonium nitrate, a common chemical in fertilizers, obtainable in many supply stores. Security around the building was practically nonexistent.

The obvious question, in and out of the government, was what sorts of targets would get blown up next: a dam, a major port, the Federal Reserve, a nuclear power plant? The damage from any of those hits would be more than simply tragic; it could reverberate through the entire economy. So how vulnerable were they, and what could be done to protect them?

On June 21, Bill Clinton signed a Presidential Decision Directive, PDD-39, titled “U.S. Policy on Counterterrorism,” which, among other things, put Attorney General Janet Reno in charge of a “cabinet committee” to review—and suggest ways to reduce—the vulnerability of “government facilities” and “critical national infrastructure.”

Reno turned the task over to her deputy, Jamie Gorelick, who set up a Critical Infrastructure Working Group, consisting of other deputies from the Pentagon, CIA, FBI, and the White House. After a few weeks of meetings, the group recommended that the president appoint a commission, which in turn held hearings and wrote a report, which culminated in the drafting of another presidential directive.

Several White House aides, who figured the commission would come up with new ways to secure important physical structures, were startled when more than half of its report and recommendations dealt with the vulnerability of computer networks and the urgent need for what it called “cyber security.”

The surprise twist came about because key members of the Critical Infrastructure Working Group and the subsequent presidential commission had come from the NSA or the Navy’s super-secret black programs and were thus well aware of this new aspect of the world.

Rich Wilhelm, the NSA director of information warfare, was among the most influential members of the working group. A few months before the Oklahoma City bombing, President Clinton had put Vice President Al Gore in charge of overseeing the Clipper Chip; Mike McConnell sent Wilhelm to the White House as the NSA liaison on the project. The chip soon died, but Gore held on to Wilhelm and made him his intelligence adviser, with a spot on the National Security Council staff. Early on at his new job, Wilhelm told some of his fellow staffers about the discoveries he’d made at Fort Meade, especially those highlighting the vulnerability of America’s increasingly networked society. He wrote a memo on the subject for Clinton’s national security adviser, Anthony Lake, who signed it with his own name and passed it on to the president.

When Jamie Gorelick put together her working group, it was natural that Wilhelm would be on it. One of its first tasks was to define its title, to figure out which infrastructures were critical—which sectors were vital to the functioning of a modern society. The group came up with a list of eight: telecommunications, electrical power, gas and oil, banking and finance, transportation, water supply, emergency services, and “continuity of government” in the event of war or catastrophe.

Wilhelm pointed out that all of these sectors relied, in some cases heavily, on computer networks. Terrorists wouldn’t need to blow up a bank or a rail line or a power grid; they could merely disrupt the computer network that controlled its workings, and the result would be the same. As a result, Wilhelm argued, “critical infrastructure” should include not just physical buildings but the stuff of what would soon be called cyberspace.

Gorelick needed no persuading on this point. As deputy attorney general, she served on several interagency panels, one of which dealt with national security matters. She co-chaired that panel with the deputy director of the CIA, who happened to be Bill Studeman, the former NSA director (and Bobby Ray Inman protégé). In his days at Fort Meade, Studeman had been a sharp advocate of counter-C2 warfare; at Langley he was promoting the same idea, now known as information warfare, both its offensive and its defensive sides—America’s ability to penetrate the enemy’s networks and the enemy’s ability to penetrate America’s.

Studeman and Gorelick met to discuss these issues every two weeks, and his arguments had resonance. Before her appointment as deputy attorney general, Gorelick had been general counsel at the Pentagon, where she heard frequent briefings on hackings of defense contractors and even of the Defense Department. Now, at the Justice Department, she was helping to prosecute criminal cases of hackers who’d penetrated the computers of banks and manufacturers. One year before Oklahoma City, Gorelick had helped draft the Computer Crime Initiative Action Plan, to boost the Justice Department’s expertise in “high-tech matters,” and had helped create the Information Infrastructure Task Force Coordinating Committee.

These ventures weren’t mere hobbyhorses; they were mandated by the Justice Department’s caseload. In recent times, a Russian crime syndicate had hacked into Citibank’s computers and stolen $10 million, funneling it to separate accounts in California, Germany, Finland, and Israel. A disgruntled ex-employee of a firm that ran an emergency alert network covering twenty-two states had crashed the system for ten hours. A man in California gained control of the computer running local phone switches, downloaded information about government wiretaps on suspected terrorists, and posted the information online. Two teenage boys, malicious counterparts to the hero of WarGames, hacked into the computer network at an Air Force base in Rome, New York; one of the boys later sneered that military sites were the easiest to hack on the entire Internet.

From all this—her experiences as a government lawyer, the interagency meetings with Studeman, and now the discussions with Rich Wilhelm on the working group—Gorelick was coming to two disturbing conclusions. First, at least in this realm, the threats from criminals, terrorists, and foreign adversaries were all the same: they used the same means of attack; often, they couldn’t be distinguished. This wasn’t a problem for the Department of Justice or Defense alone; the whole government had to deal with it, and, since most computer traffic moved along networks owned by corporations, the private sector had to help find, and enforce, solutions, too.

Second, the threat was wider and deeper than she’d imagined. Looking over the group’s list of “critical” infrastructures, and learning that they were all increasingly controlled by computers, Gorelick realized, in a jaw-dropping moment, that a coordinated attack by a handful of technical savants, from just down the street or the other side of the globe, could devastate the nation.

What cemented this new understanding was a briefing by the Pentagon’s delegate to the working group, a retired Navy officer named Brenton Greene, who had recently been named to a new post, the director for infrastructure policy, in the office of the undersecretary of defense.

Greene had been involved in some of the military’s most highly classified programs. In the late 1980s and early 1990s, he was a submarine skipper on beyond-top-secret spy missions. After that, he managed Pentagon black programs in a unit called the J Department, which developed emerging technologies that might give America an advantage in a coming war. One branch of J Department worked on “critical-node targeting.” The idea was to analyze the infrastructures of every adversary’s country and to identify the key targets—the smallest number of targets that the American military would have to destroy in order to make a huge impact on the course of a war. Greene helped to develop another branch of the department, the Strategic Leveraging Project, which focused on new ways of penetrating and subverting foreign adversaries’ command-control networks—the essence of information warfare.

Working on these projects and seeing how easy it was, at least in theory, to devastate a foreign country with a few well-laid bombs or electronic intrusions, Greene realized—as had several others who’d journeyed down this path before him—the flip side of the equation: what we could do to them, they could do to us. And Greene was also learning that America was far more vulnerable to these sorts of attacks—especially information attacks—than any other country on the planet.

In the course of his research, Greene came across a 1990 study by the U.S. Office of Technology Assessment, a congressional advisory group, called Physical Vulnerability of Electric Systems to Natural Disasters and Sabotage. In its opening pages, the authors revealed which power stations and switches, if disabled, would take down huge chunks of the national grid. This was a public document, available to anyone who knew about it.

One of Greene’s colleagues in the J Department told him that, soon after the study was published, Senator John Glenn showed it to General Brent Scowcroft, President George Bush’s national security adviser. Scowcroft was concerned and asked a Secret Service officer named Charles Lane to put together a small team—no more than a half dozen technical analysts—to do a separate study. The team’s findings were so disturbing that Scowcroft shredded all of their work material. Only two copies of Lane’s report were printed. Greene obtained one of them.

At this point, Greene concluded that he’d been working the wrong side of the problem: protecting America’s infrastructure was more vital—as he saw it, more urgent—than seeking ways to poke holes in foreign infrastructures.

Greene knew Linton Wells, a fellow Navy officer with a deep background in black programs, who was now military assistant to Walter Slocombe, the undersecretary of defense for policy. Greene told Wells that Slocombe should hire a director for infrastructure policy; Slocombe approved the idea, and Greene got the job.

In his first few months on the new job, Greene worked up a briefing on the “interdependence” of the nation’s infrastructure, its concentration, and the commingling of one segment with the others—how disabling a few “critical nodes” (a phrase from J Department) could severely damage the country.

For instance, Greene knew that the Bell Corporation distributed a CD-ROM listing all of its communications switches worldwide, so that, say, a phone company in Argentina would know how to connect circuits for routing a call to Ohio. Greene looked at this guide with a different question in mind: where were all the switches in the major American cities? In each case he examined, the switches were—for reasons of economic efficiency—concentrated at just a couple of sites. For New York City, most of them were located at two addresses in Lower Manhattan: 140 West Street and 104 Broad Street. Take out those two addresses—whether with a bomb or an information warfare attack—and New York City would lose almost all of its phone service, at least for a while. The loss of phone service would disrupt other infrastructures in turn, and on and on the cascade would go.
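
Greene’s switch example is, in graph terms, a story about articulation points: nodes whose removal disconnects a network. The sketch below makes that concrete with a purely hypothetical hub-and-spoke topology (the node names are invented for illustration and bear no relation to Greene’s actual data): remove the two hub switches, and every other exchange is stranded.

```python
# Toy illustration of the "critical node" idea: a hub-and-spoke phone
# network in which nearly every circuit routes through two switch sites.
# All names and the topology itself are invented for illustration.
from collections import deque

network = {
    "hub_a":         {"hub_b", "uptown", "midtown", "long_distance"},
    "hub_b":         {"hub_a", "downtown", "harbor", "long_distance"},
    "uptown":        {"hub_a"},
    "midtown":       {"hub_a"},
    "downtown":      {"hub_b"},
    "harbor":        {"hub_b"},
    "long_distance": {"hub_a", "hub_b"},
}

def reachable(graph, start, removed=frozenset()):
    """Breadth-first search: all nodes reachable from `start`,
    treating `removed` nodes as destroyed."""
    if start in removed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in graph[queue.popleft()] - removed - seen:
            seen.add(nbr)
            queue.append(nbr)
    return seen

print(len(reachable(network, "uptown")))               # 7 nodes: full service
print(len(reachable(network, "uptown",
                    frozenset({"hub_a", "hub_b"}))))   # 1 node: cut off
```

With the two hubs gone, the five surviving exchanges are all isolated from one another; that, in miniature, is the cascading failure Greene was describing.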

Capping Greene’s briefing was a classified report on the vulnerability of SCADA systems, circulated by the CIA—where Bill Studeman was briefly acting director. The acronym stood for Supervisory Control and Data Acquisition. Throughout the country, again for economic reasons, utility companies, waterworks, railway lines—vast sectors of critical infrastructure—were linking one local stretch of the sector to another through computer networks and controlling all of them remotely, sometimes with human monitors, often with automated sensors. Before the CIA report, few on the working group had ever heard of SCADA. Now everyone realized that they were probably just scratching the surface of a new danger that came with the new technology.
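
What a SCADA system actually does can be sketched in a few lines. The loop below is a deliberately simplified toy (the station names, thresholds, and functions are all invented; real systems speak industrial protocols such as Modbus or DNP3): a central master polls remote sensors and writes commands back over the same network. The working group’s worry follows directly, since protocols of that vintage typically did little to authenticate who was issuing the commands; an intruder who reached the network could operate the valves as readily as the master could.

```python
# Toy model of a SCADA supervisory loop: a central master polls remote
# stations and issues control commands over the same network. Every
# name and threshold here is invented for illustration only.
import random

STATIONS = ["pump_station_1", "substation_7", "rail_switch_12"]
PRESSURE_LIMIT = 80.0  # hypothetical safety threshold

def poll(station):
    """Stand-in for a network read from a remote sensor."""
    return random.uniform(60.0, 100.0)

def command(station, action):
    """Stand-in for a network write to a remote actuator."""
    print(f"{station}: {action}")

# The supervisory loop runs unattended; no human intervenes
# unless a limit is tripped.
for station in STATIONS:
    reading = poll(station)
    if reading > PRESSURE_LIMIT:
        command(station, "OPEN relief valve")        # automated response
    else:
        command(station, f"nominal ({reading:.1f})")
```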

Gorelick wrote a memo, alerting her superiors that the group was expanding the scope of its inquiry, “in light of the breadth of critical infrastructures and the multiplicity of sources and forms of attack.” It was no longer enough to consider the likelihood and impact of terrorists blowing up critical buildings. The group—and, ultimately, the president—also had to consider “threats from other sources.”

What to call these “other” threats? One word was floating around in stories about hackings of one sort or another: “cyber.” The word had its roots in “cybernetics,” a term dating back to the mid-twentieth century, describing the closed loops of information systems. But in its present-day context of computer networks, the term stemmed from William Gibson’s 1984 science-fiction novel, Neuromancer, a wild and eerily prescient tale of murder and mayhem in the virtual world of “cyberspace.”

Michael Vatis, a Justice Department lawyer on the working group who had just read Gibson’s novel, advocated the term’s adoption. Others were opposed: it sounded too sci-fi, too frivolous. But once uttered, the word snugly fit. From that point on, the group—and others who studied the issue—would speak of “cyber crime,” “cyber security,” “cyber war.”

What to do about these cyber threats? That was the real question, the group’s raison d’être, and here they were stuck. There were too many issues, touching too many interests—bureaucratic, political, fiscal, and corporate—for an interagency working group to settle.

On February 6, 1996, Gorelick sent the group’s report to Rand Beers, Clinton’s intelligence adviser and the point of contact for all issues related to PDD-39, the presidential directive on counterterrorism policy, which had set this study in motion. The report’s main point—noting the existence of two kinds of threats to critical infrastructure, physical and cyber—was novel, even historic. As for a plan of action, the group fell back on the usual punt by panels of this sort when they don’t know what else to do: it recommended the creation of a presidential commission.


For a while, nothing happened. Rand Beers told Gorelick that her group’s report was under consideration, but there was no follow-up. A spur was needed. She found it in the person of Sam Nunn, the senior Democrat on the Senate Armed Services Committee.

Gorelick knew Nunn from her days as the Pentagon’s general counsel. Both were Democratic hawks, not quite a rare breed but not so common either, and they enjoyed discussing the issues with each other. Gorelick told him about her group’s findings. In response, Nunn inserted a clause in that year’s defense authorization bill, requiring the executive branch to report to Congress on the policies and plans to ward off computer-based attacks against the national infrastructure.

Nunn also asked the General Accounting Office, the legislature’s watchdog agency, to conduct a similar study. The resulting GAO report, “Information Security: Computer Attacks at Department of Defense Pose Increasing Risks,” cited one estimate that the Defense Department “may have experienced as many as 250,000 attacks last year,” two thirds of them successful, and that “the number of attacks is doubling each year, as Internet use increases along with the sophistication of ‘hackers’ and their tools.”

Not only was this figure unlikely (a quarter million attacks a year meant 685 per day, with 457 actual penetrations), it was probably pulled out of a hat: as the GAO authors themselves acknowledged, only “a small portion” of attacks were “actually detected and reported.”
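
The arithmetic behind that skepticism is easy to reproduce; this minimal sketch simply restates the division (the 250,000 total and the two-thirds success rate are the report’s claims, not verified measurements).

```python
# Back-of-envelope check on the GAO estimate cited above.
attacks_per_year = 250_000   # the GAO's upper-bound estimate
success_rate = 2 / 3         # fraction reported as successful

attacks_per_day = attacks_per_year / 365
penetrations_per_day = attacks_per_day * success_rate

print(round(attacks_per_day))       # 685
print(round(penetrations_per_day))  # 457
```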

Still, the study sent a shockwave through certain corridors. Gorelick made sure that Beers knew about the wave’s reverberations and warned him that Nunn was about to hold hearings on the subject. The president, she hinted, would do well to get out in front of the storm.

Nunn scheduled his hearing for July 16. On July 15, Clinton issued Executive Order 13010, creating the blue-ribbon commission that Gorelick’s working group had suggested. The order, a near-exact copy of the draft the working group had proposed five months earlier, began: “Certain national infrastructures are so vital that their incapacity or destruction would have a debilitating impact on the defense or economic security of the United States.” Listing the same eight “critical” sectors that the working group had itemized, the order went on, “Threats to these critical infrastructures fall into two categories: physical threats to tangible property (‘physical threats’) and threats of electronic, radio-frequency, or computer-based attacks on the information or communications components that control critical infrastructures (‘cyber threats’).”

The next day, the Senate Governmental Affairs Committee, where Nunn sat as a top Democrat, held its much-anticipated hearing on the subject. One of the witnesses was Jamie Gorelick, who warned, “We have not yet had a terrorist cyber attack on the infrastructure. But I think that that is just a matter of time. We do not want to wait for the cyber equivalent of Pearl Harbor.”

The cyber age was officially under way.


So, behind the scenes, was the age of cyber warfare. At one meeting of the Critical Infrastructure Working Group, Rich Wilhelm took Jamie Gorelick aside and informed her, in broad terms, of the ultrasecret flip side of the threat she was probing—that we had long been doing to other countries what some of those countries, or certain people in those countries, were starting to do to us. We weren’t robbing their banks or stealing their industrial secrets, we had no need to do that; but we were using cyber tools—“electronic, radio-frequency, or computer-based attacks,” as Clinton’s executive order would put it—to spy on them, scope out their networks, and prepare the battlefield to our advantage, should there someday be a war.

The important thing, Wilhelm stressed, was that our cyber offensive capabilities must be kept off the table—must not even be hinted at—when discussing our vulnerability to other countries’ cyber offensive capabilities. America’s programs in this realm were among the most tightly held secrets in the entire national security establishment.

When Rand Beers met with deputies from various cabinet departments to discuss Clinton’s executive order, John White, the deputy secretary of defense, made the same point to his fellow deputy secretaries, in the same solemn tone: no one can so much as mention America’s cyber offensive capabilities.

The need for secrecy wasn’t the only reason for the ensuing silence on the matter. No one around the table said so, but, clearly, to acknowledge America’s cyber prowess, while decrying the prowess of others, would be awkward, to say the least.


It took seven months for the commission to get started. Beers, who once again served as the White House point man, first had to find a place for the commissioners to meet. The Old Executive Office Building, the mansion next door to the White House, wasn’t sufficiently wired for computer connections (in itself, a commentary on the dismal state of preparedness for a cyber crisis). John Deutch, the new CIA director, pushed for the commissioners to work at his headquarters in Langley, where they could have secure access to anything they needed; but officials in other departments feared this might breed insularity and excessive dependence on the intelligence community. In the end, Beers found a vacant suite of offices in a Pentagon-owned building in Arlington; to sweeten the deal, the Defense Department offered to pay all expenses and provide technical support.

Then came the delicate task of filling the commission. Nearly all of the nation’s computer traffic flowed through networks owned by private corporations; those corporations should have a say in their fate. Beers and his staff listed the ten federal departments and agencies that would be affected by whatever recommendations came out of this enterprise—Defense, Justice, Transportation, Treasury, Commerce, the Federal Emergency Management Agency, the Federal Reserve, the FBI, the CIA, and the NSA—and decided that each agency head would pick two delegates for the commission: one official and one executive from a private contractor. In addition to deputy assistant secretaries, there would be directors or technical vice presidents from the likes of AT&T, IBM, Pacific Gas & Electric, and the National Association of Regulatory Utility Commissioners.

There was another delicate matter. The commission’s final report would be a public document, but its working papers and meetings would be classified; the commissioners would need to be vetted for top secret security clearances. That, too, would take time.

Finally, Beers and the cabinet deputies had to pick a chairman. There were tried-and-true criteria for such a post: he (and it was almost always a he) should be eminent, but not famous; somewhat familiar with the subject at hand, but not an expert; respected, amiable, but not flush with his own agenda; someone with time on his hands, but not a reject or a duffer. They came up with a retired Air Force four-star general named Robert T. Marsh.

Tom Marsh had risen through the ranks on the technical side of the Air Force, especially in electronic warfare. He wound up his career as commander of the electronic systems division at Hanscom Air Force Base in Massachusetts, then as commander of Air Force Systems Command at Andrews Air Force Base, near Washington. He was seventy-one years old; since retiring from active duty, he’d served on the Defense Science Board and the usual array of corporate boards; at the moment, he was director of the Air Force Aid Society, the service’s main charity organization.

In short, he seemed ideal.

John White, the deputy secretary of defense, called Marsh to ask if he would be willing to serve the president as chairman of a commission to protect critical infrastructure. Marsh replied that he wasn’t quite sure what “critical infrastructure” meant, but he’d be glad to help.

To prepare for the task, Marsh read the report by Gorelick’s Critical Infrastructure Working Group. It rang true. He recalled his days at Hanscom in the late 1970s and early 1980s, when the Air Force crammed new technologies onto combat planes with no concern for the vulnerabilities they might be sowing. The upgrades were all dependent on command-control links, which had no built-in redundancies. A few technically astute junior officers on Marsh’s staff warned him that, if the links were disrupted, the plane would be disabled, barely able to fly, much less fight.

Still, Marsh had been away from day-to-day operations for twelve years, and this focus on “cyber” was entirely new to him. For advice and a reality check, Marsh called an old colleague who knew more about these issues than just about anybody—Willis Ware.

Ware had kept up with every step of the Internet revolution since writing his seminal paper, nearly thirty years earlier, on the vulnerability of computer networks. He still worked at the RAND Corporation, and he was a member of the Air Force Scientific Advisory Board, which is where Marsh had come to know and trust him. Ware assured Marsh that Gorelick’s report was on the right track; that this was a serious issue and growing more so by the day, as the military and society grew more dependent on these networks; and that too few people were paying attention.

His chat with Ware filled Marsh with confidence. The president’s executive order had chartered the commission to examine vulnerabilities to physical threats and cyber threats. Marsh figured that solutions to the physical threats were fairly straightforward; the cyber threats were the novelty, so he would focus his inquiry on them.

Marsh and the commissioners first convened in February 1997. They had six months to write a report. A few of the members were holdovers from the Critical Infrastructure Working Group, most notably Brent Greene, the Pentagon delegate, whose briefing on the vulnerability of telecom switches and the electrical power grid had so shaken Gorelick and the others. (Gorelick, who left the Justice Department for a private law practice in May, would later co-chair an advisory panel for the commission, along with Sam Nunn.)

Most of the commissioners were new to the issues—at best, they knew a little bit about the vulnerabilities in their own narrow sectors, but had no idea of how vastly they extended across the economy—and their exposure to all the data, at briefings and hearings, filled them with dread and urgency.

Marsh’s staff director was a retired Air Force officer named Phillip Lacombe, who’d earned high marks as chief of staff on a recent panel studying the roles and missions of the armed forces. Lacombe’s cyber epiphany struck one morning, when he and Marsh were about to board an eight a.m. plane for Boston, where they were scheduled to hold a ten-thirty hearing. Their flight was delayed for three hours because the airline’s computer system was down; the crew couldn’t measure weights and balances (a task once performed with a slide rule, which no one knew how to use anymore), so the plane couldn’t take off. The irony was overwhelming: here they were, about to go hear testimony on the nation’s growing dependence on computer networks—and they couldn’t get there on time because of the nation’s growing dependence on computer networks.

That’s when Lacombe first realized that the problem extended to every corner of modern life. Military officers and defense intellectuals had been worried about weapons of mass destruction; Lacombe now saw there were also weapons of mass disruption.

Nearly every hearing that the commission held, as well as several casual conversations before and after, hammered home the same point. The executives of Walmart told the commission that, on a recent Sunday, the company’s computer system crashed and, as a result, they couldn’t open any of their retail stores in the southeast region of the United States. When a director at Pacific Gas & Electric, one of the nation’s largest utilities, testified that all of its control systems were getting hooked up to the Internet, to save money and speed up the transmission of energy, Lacombe asked what the company was doing about security. The director didn’t know what Lacombe was talking about. Various commissioners asked the heads of railways and airlines how they were assuring the security of computer-controlled switches, tracks, schedules, and air traffic radar—and it was the same story: the corporate heads looked puzzled; they had no idea that security was an issue.

On October 13, 1997, the President’s Commission on Critical Infrastructure Protection, as it was formally called, released its report—154 pages of findings, analyses, and detailed technical appendices. “Just as the terrible long-range weapons of the Nuclear Age made us think differently about security in the last half of the twentieth century,” the report stated in its opening pages, “the electronic technology of the Information Age challenges us to invent new ways of protecting ourselves now. We must learn to negotiate a new geography, where borders are irrelevant and distances meaningless, where an enemy may be able to harm the vital systems we depend on without confronting our military power.”

It went on: “Today, a computer can cause switches or valves to open and close, move funds from one account to another, or convey a military order almost as quickly over thousands of miles as it can from next door, and just as easily from a terrorist hideout as from an office cubicle or a military command center.” These “cyber attacks” could be “combined with physical attacks” in an effort “to paralyze or panic large segments of society, damage our capability to respond to incidents (by disabling the 911 system or emergency communications, for example), hamper our ability to deploy conventional military forces, and otherwise limit the freedom of action of our national leadership.”

The report eschewed alarmism; there was no talk here of a “cyber Pearl Harbor.” Its authors allowed up front that they saw “no evidence of an impending cyber attack which could have a debilitating effect on the nation’s critical infrastructure.” Still, they cautioned, “this is no basis for complacency,” adding, “The capability to do harm—particularly through information networks—is real; it is growing at an alarming rate; and we have little defense against it.”

This was hardly the first report to issue these warnings; the conclusions reached decades earlier by Willis Ware, and adopted as policy (or attempted policy) by the Reagan administration’s NSDD-145, had percolated through the small, still obscure community of technically minded officials. In 1989, eight years before General Marsh’s report, the National Research Council released a study titled Growing Vulnerability of the Public Switched Networks, which warned that “a serious threat to communications infrastructure is developing” from “natural, accidental, capricious, or hostile agents.”

Two years after that, a report by the same council, titled Computers at Risk, observed, “The modern thief can steal more with a computer than with a gun. Tomorrow’s terrorist may be able to do more damage with a keyboard than with a bomb.”

In November 1996, just eleven months before the Marsh Report came out, a Defense Science Board task force on “Information Warfare-Defense” described the “increasing dependency” on vulnerable networks as “ingredients in a recipe for a national security disaster.” The report recommended more than fifty actions to be taken over the next five years, at a cost of $3 billion.

The chairman of that task force, Duane Andrews, had recently been the assistant secretary of defense for command, control, communications, and intelligence—the Pentagon’s liaison with the NSA. The vice chairman was Donald Latham, who, as ASD(C3I) twelve years earlier, had been the driving force behind Reagan’s NSDD-145, the first presidential directive on computer security. In his preface, Andrews was skeptical, bordering on cynical, that the report would make a dent. “I should also point out,” he wrote, “that this is the third consecutive year a DSB Summer Study or Task Force has made similar recommendations.”

But unlike those studies, the Marsh Report was the work of a presidential commission. The commander-in-chief had ordered it into being; someone on his staff would read its report; maybe the president himself would scan the executive summary; there was, in short, a chance that policy would sprout from its roots.

For a while, though, there was nothing: no response from the president, not so much as a meeting or a photo op with the chairman. A few months later, Clinton briefly alluded to the report’s substance in a commencement address at the Naval Academy. “In our efforts to battle terrorism and cyber attacks and biological weapons,” he said, “all of us must be extremely aggressive.” That was it, at least in public.

But behind the scenes, at the same time that Marsh and his commissioners were winding up their final hearings, the Pentagon and the National Security Agency were planning a top secret exercise—a simulation of a cyber attack—that would breathe life into Marsh’s warnings and finally, truly, prod top officials into action.