CHAPTER 12


“SOMEBODY HAS CROSSED THE RUBICON”

GEORGE W. BUSH personally briefed Barack Obama on Olympic Games, rather than leave the task to an intelligence official, because, like all cyber operations, it required presidential authorization. After his swearing-in, Obama would have to renew the program explicitly or let it die; so Bush made a forceful plea to let it roll forward. The program, he told his successor, could mean the difference between a war with Iran and a chance for peace.

The operation had been set in motion a few years earlier, in 2006, midway through Bush’s second term, when Iranian scientists were detected installing centrifuges—the tall, silvery cylinders that spin uranium gas at supersonic speeds—at an enrichment plant near Natanz. The avowed purpose was to produce fuel for nuclear power plants, but if enough centrifuges ran in cascade for a long enough time, the same process could make the stuff of nuclear weapons.

Vice President Cheney advocated launching air strikes on the Natanz plant, as did certain Israelis, who viewed the prospect of a nuclear-armed Iran as an existential threat. Bush might have gone for the idea a few years earlier, but he was tiring of Cheney’s relentless hawkishness. Bob Gates, the new defense secretary, had persuaded Bush that going to war against a third Muslim country, while the two wars in Afghanistan and Iraq were still raging, would be bad for national security. And so Bush was looking for a “third option”—something in between air strikes and doing nothing.

The answer came from Fort Meade—or, more precisely, from the decades-long history of studies, simulations, war games, and clandestine real-life excursions in counter-C2 warfare, information warfare, and cyber warfare, whose innovations and operators were now all centered at Fort Meade.

Like most modern industrial plants, Natanz operated with remote computer controls, and it was by now widely known—within a few months, it would be demonstrated by the Aurora Generator Test at the Idaho National Laboratory—that these controls could be hacked and manipulated in a cyber attack.

With this in mind, Keith Alexander, the NSA director, proposed launching a cyber attack on the controls of the Natanz plant.

Already, his SIGINT teams had discovered vulnerabilities in the computers controlling the plant and had prowled through their network, scoping out its dimensions, functions, and features, and finding still more vulnerabilities. This was digital age espionage, CNE—Computer Network Exploitation—so it didn’t require the president’s approval. For the next step, CNA, Computer Network Attack, the commander-in-chief’s formal go-ahead would be needed. In preparation for the green light, Alexander laid out the rudiments of a plan.

In their probes, the NSA SIGINT teams had discovered that the software controlling the Natanz centrifuges was designed by Siemens, a large German company that manufactured PLCs—programmable logic controllers—for industrial systems worldwide. The challenge was to devise a worm that would infect the Natanz system but no other Siemens systems elsewhere, in case the worm spread, as worms sometimes did.

Bush was desperate for some way out; this might be it; there was no harm in trying. So he told Alexander to proceed.

This would be a huge operation, a joint effort by the NSA, the CIA, and Israel’s signals-intelligence and cyber warfare unit, Unit 8200. Meanwhile, Alexander got the operation going with a simpler trick. The Iranians had installed devices called uninterruptible power supplies on the generators that pumped electricity into Natanz, to prevent the sorts of spikes or dips in voltage that could damage the spinning centrifuges. It was easy to hack into these supplies. One day, the voltage spiked, and fifty centrifuges exploded. The power supplies had been ordered from Turkey; the Iranians suspected sabotage and turned to another supplier, thinking that would fix the problem. They were right about the sabotage, but not about its source.

Knocking out centrifuges by messing with their power supplies was a one-time move. While the Iranians made the fix, the NSA prepared a more durable, more devastating weapon.

Most of this work was done by the elite hackers in TAO, the Office of Tailored Access Operations, whose technical skills and resources had swelled in the decade since Ken Minihan set aside a corner of the SIGINT Directorate to let a new cadre of computer geeks find their footing. For Olympic Games, they took some of their boldest inventions—which astounded even the most jaded SIGINT veterans who were let in on the secret—and combined them into a single super-worm called Flame.

A multipurpose piece of malware that took up 650,000 lines of code (nearly 4,000 times the size of a typical hacker tool), Flame, once it infected a computer, could swipe files, monitor keystrokes and screens, turn on the machine’s microphone to record conversations nearby, and turn on its Bluetooth function to steal data from most smart phones within twenty meters, among other tricks—all controlled from NSA command centers across the globe.

To get inside the controls at Natanz, TAO hackers developed malware to exploit five separate vulnerabilities that no one had previously discovered—five zero-day exploits—in the Windows operating system running on the computers that managed the Siemens controllers. Exploiting one of these vulnerabilities, in a keyboard-layout file, gave TAO elevated user privileges throughout a computer’s functions. Another allowed access to all the computers that shared an infected printer.

The idea was to hack into the Siemens machines controlling the valves that pumped uranium gas into the centrifuges. Once this was accomplished, TAO would manipulate the valves, opening them wider than designed, over-pressurizing the centrifuges and causing them to burst.

It took eight months for the NSA to devise this plan and design the worm to carry it out. Now the worm had to be tested. Keith Alexander and Robert Gates cooked up an experiment, in which the technical side of the intelligence community would construct a cascade of centrifuges, identical to those used at Natanz, and set them up in a large chamber at one of the Department of Energy’s weapons labs. The exercise was similar to the Aurora test, which took place around the same time, proving that an electrical generator could be destroyed through strictly cyber means. The Natanz simulation yielded similar results: the centrifuges were sent spinning at five times their normal speed, until they broke apart.

At the next meeting on the subject in the White House Situation Room, the rubble from one of those centrifuges was placed on the table in front of President Bush. He gave the go-ahead to try it out on the real thing.

There was one more challenge: after the Iranians replaced the sabotaged power supplies from Turkey, they took the additional precaution of taking the plant’s control computers offline. They knew about the vulnerability of digital controls, and they’d read that surrounding computers with an air gap—cutting them off from the Internet, making their operations autonomous—was one way to eliminate the risks: if the system worked on a closed network, if hackers couldn’t get into it, they couldn’t corrupt, degrade, or destroy it, either.

What the Iranians didn’t know was that the hackers of TAO had long ago figured out how to leap across air gaps. First, they’d penetrated a network near the air-gapped target; while navigating its pathways, they would usually find some link or portal that the security programmers had overlooked. If that path led nowhere, they would turn to their partners in the CIA’s Information Operations Center. A decade earlier, during the campaign against Serbian President Slobodan Milosevic, IOC spies gained entry to Belgrade’s telephone exchange and planted devices, which the NSA’s SIGINT teams then hacked, giving them full access to the nation’s phone system. These sorts of joint operations had blossomed with the growth of TAO.

The NSA also enjoyed close relations with Israel’s Unit 8200, which was tight with the human spies of Mossad. If TAO needed access to a machine or a self-contained network that wasn’t hooked up to the Internet, it could call on any of several collaborators—the IOC, Unit 8200, the local spy services, or certain defense contractors in a number of allied nations—to plant a transmitter or beacon that its hackers could home in on.

In Olympic Games, someone would install the malware by physically inserting a thumb drive into a computer (or a printer that several computers were using) on the premises—in much the same way that, around this same time, Russian cyber warriors hacked into U.S. Central Command’s classified networks in Afghanistan, the intrusion that the NSA detected and repelled in Operation Buckshot Yankee.

Not only would the malware take over the valves at Natanz, it would also conceal the intrusion from the plant’s overseers. Ordinarily, the valve controls would send out an alert when the flow of uranium rapidly accelerated. But the malware allowed TAO to intercept the alert and replace it with a false signal indicating that everything was fine.
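That record-and-replay trick—capture sensor readings while everything is normal, then feed them back to the operators’ screens while the attack runs—can be caricatured in a few lines of Python. This is a toy model under invented assumptions (a single flow sensor, a hypothetical nominal value of 100), not the actual malware:

```python
import random

def read_gas_flow_sensor():
    """Stand-in for a real sensor read (hypothetical nominal flow, small noise)."""
    return 100.0 + random.uniform(-1.0, 1.0)

class ReplaySpoofer:
    """Toy model of the trick: record 'normal' readings first,
    then loop them back to the operators while the attack runs."""

    def __init__(self):
        self.recorded = []
        self.attacking = False

    def record(self, samples=10):
        # Phase 1: passively capture what "normal" looks like.
        for _ in range(samples):
            self.recorded.append(read_gas_flow_sensor())

    def reading_for_operators(self):
        # Phase 2: once the attack starts, operators see only the
        # prerecorded "everything is fine" values, looped endlessly.
        if self.attacking and self.recorded:
            value = self.recorded.pop(0)
            self.recorded.append(value)
            return value
        return read_gas_flow_sensor()

spoofer = ReplaySpoofer()
spoofer.record()
spoofer.attacking = True
print(round(spoofer.reading_for_operators(), 1))  # looks nominal
```

The essential design point survives the caricature: the operators’ console is no longer reading the plant at all, only a recording of it.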

The worm could have been designed to destroy every centrifuge, but that would arouse suspicions of sabotage. A better course, its architects figured, would be to damage just enough centrifuges to make the Iranians blame the failures on human error or poor design. They would then fire perfectly good scientists and replace perfectly good equipment, setting back their nuclear program still further.

In this sense, Operation Olympic Games was a classic campaign of information warfare: the target wasn’t just the Iranians’ nuclear program but also the Iranians’ confidence—in their sensors, their equipment, and themselves.

The plan was ready to go, but George Bush’s time in office was running out. It was up to Barack Obama.

To Bush, the plan, just like the one to send fake email to Iraqi insurgents, was a no-brainer. It made sense to Obama, too. From the outset of his presidency, Obama articulated, and usually followed, a philosophy on the use of force: he was willing to take military action, if national interests demanded it and if the risks were fairly low; but unless vital interests were at stake, he was averse to sending in thousands of American troops, especially given the waste and drain of the two wars he inherited in Afghanistan and Iraq. The two secret programs that Bush pressed him to continue—drone strikes against jihadists and cyber sabotage of a uranium-enrichment plant in Iran—fit Obama’s comfort zone: both served a national interest, and neither risked American lives.

Once in the White House, Obama expressed a few qualms about the plan: he wanted assurances that, when the worm infected the Natanz plant, it wouldn’t also put out the lights in nearby power plants, hospitals, or other civilian facilities.

His briefers conceded that worms could spread, but this particular worm was programmed to look for the specific Siemens software; if it drifted far afield, and the unintended targets didn’t have the software, it wouldn’t inflict any damage.
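The briefers’ assurance rested on what amounts to a target-fingerprint gate: the payload arms only when the host matches a very specific configuration, and stays inert everywhere else. A minimal sketch, with every fingerprint field invented for illustration:

```python
# Toy sketch of a target-fingerprint gate. All fingerprint details
# here are hypothetical, not Stuxnet's actual checks.

TARGET_FINGERPRINT = {
    "control_software": "Step7",   # the specific vendor software present
    "plc_model": "S7-315",         # a specific controller attached
    "device_count": 164,           # a specific cascade layout
}

def host_matches(host_profile, fingerprint=TARGET_FINGERPRINT):
    """Arm only on an exact match; any mismatch means stay inert."""
    return all(host_profile.get(k) == v for k, v in fingerprint.items())

def run_payload(host_profile):
    if not host_matches(host_profile):
        return "inert"   # keep spreading quietly, do no damage
    return "armed"       # only on the intended target

# A random infected PC lacks the control software: nothing happens.
print(run_payload({"control_software": None}))  # inert
# The intended configuration: the payload arms.
print(run_payload(dict(TARGET_FINGERPRINT)))    # armed
```

The gate explains both halves of the later story: why the worm could drift far afield without wrecking hospitals, and why its drifting still mattered—an inert worm does no damage, but it can be found and dissected.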

Gates, who’d been kept on by Obama and was already a major influence on his thinking, encouraged the new president to renew the go-ahead. Obama saw no reason not to.

Not quite one month after he took office, the worm had its first success: a cascade of centrifuges at Natanz sped out of control, and several of them shattered. Obama phoned Bush to tell him the covert program they’d discussed was working out.

In March, the NSA shifted its approach. In the first phase, the operation hacked into the valves controlling the rate at which uranium gas flowed into the centrifuges. In the second phase, the attack went after the devices—known as frequency converters—that controlled how quickly the centrifuges rotated. The normal speed ranged from about 800 to 1,200 cycles per second; the worm gradually sped them up to 1,410 cycles, at which point several of the centrifuges flew apart. Or, sometimes, it slowed down the converters, over a period of several weeks, to as few as 2 cycles per second: as a result, the uranium gas couldn’t exit the centrifuge quickly enough; the imbalance would cause vibrations, which severely damaged the centrifuge in a different way.
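The two attack profiles—a gradual over-speed past the rotors’ limits, and a long crawl down to a vibration-inducing 2 cycles per second—can be sketched as simple ramp functions. The 1,410 and 2 cycles-per-second endpoints and the 800-to-1,200 normal band come from the account above; the starting speed, step sizes, and everything else are invented:

```python
def overspeed_profile(start_hz=1000, target_hz=1410, step_hz=25):
    """Ramp rotor frequency up gradually, in small steps,
    until it passes the rotor's structural limit."""
    freq, profile = start_hz, []
    while freq < target_hz:
        freq = min(freq + step_hz, target_hz)
        profile.append(freq)
    return profile

def slowdown_profile(start_hz=1000, target_hz=2, factor=0.7):
    """Crawl the frequency down over many cycles; the gas can't exit
    fast enough, and the resulting imbalance shakes the machine apart."""
    freq, profile = start_hz, []
    while freq > target_hz:
        freq = max(freq * factor, target_hz)
        profile.append(round(freq, 1))
    return profile

SAFE_MAX_HZ = 1200  # top of the normal 800-1,200 band from the text

up = overspeed_profile()
print(up[-1], up[-1] > SAFE_MAX_HZ)  # 1410 True
down = slowdown_profile()
print(down[-1])                       # 2
```

The gradualness is the point of the design: a sudden jump to a destructive speed would trip alarms, while a slow ramp, paired with the spoofed monitor readings, looks like ordinary drift.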

Regardless of the technique, the worm also fed false data to the system’s monitors, so that, to the Iranian scientists watching them, everything seemed normal—and, when disaster struck, they couldn’t figure out what had happened. They’d experienced technical problems with centrifuges from the program’s outset; this seemed—and the NSA designed the worm to make it seem—like more of the same, but with more intense and frequent disruptions.

By the start of 2010, nearly a quarter of Iran’s centrifuges—about 2,000 out of 8,700—were damaged beyond repair. U.S. intelligence analysts estimated a setback in Iran’s enrichment program of two to three years.

Then, early that summer, it all went wrong. President Obama—who’d been briefed on every detail and alerted to every success or breakdown—was told by his advisers that the worm was out of the box: for reasons not entirely clear, it had jumped from one computer to another, way outside the Natanz network, then to another network outside that. It wouldn’t wreak damage—as the briefers had told him before, it was programmed to shut down if it didn’t see a particular Siemens controller—but it would get noticed: the Iranians would eventually find out what had been going on; Olympic Games was on the verge of being blown.

Almost at once, some of the world’s top software security firms—Symantec in California, VirusBlokAda in Belarus, Kaspersky Lab in Russia—started detecting a strange virus randomly popping up around the world. At first, they didn’t know its origins or its purpose; but probing its roots, parsing its code, and gauging its size, they realized they’d hit upon one of the most elaborate, sophisticated worms of all time. Microsoft issued an advisory to its customers and, combining fragments of file names found in the code, called the virus “Stuxnet”—a name that caught on.

By August, Symantec had uncovered enough evidence to release a statement of its own, warning that Stuxnet was designed not for mischievous hacking or even for espionage, but rather for sabotage. In September, a German security researcher named Ralph Langner inferred, from the available facts, that someone was trying to sabotage the Natanz enrichment plant in Iran and that Israelis were probably involved.

At that point, some of the American software sleuths were horrified: Had they just helped expose a highly classified U.S. intelligence operation? They couldn’t have known at the time, but their curiosity—and their professional obligation to inform the public about a loose and possibly damaging computer virus—did have that effect. Shortly after Symantec’s statement, even before Langner’s educated guess about Stuxnet’s true aim, the Iranians drew the proper inference (so this was why their centrifuges were spinning out of control) and cut off all links between the Natanz plant and the Siemens controllers.

When Obama learned of the exposure at a meeting in the White House, he asked his top advisers whether they should shut down the operation. Told that it was still causing damage, despite Iranian countermeasures, he ordered the NSA to intensify the program—sending the centrifuges into wilder contortions, speeding them up, then slowing them down—with no concerns about detection, since its cover was already blown.

The postmortem indicated that, in the weeks after the exposure, another 1,000 centrifuges, out of the remaining 5,000, were taken out of commission.


Even after Olympic Games came to an end, the art and science of CNA—Computer Network Attack—pushed on ahead. In fact, by the end of October 2010, when U.S. Cyber Command achieved full operational capability, CNA emerged as a consuming, even dominant, activity at Fort Meade.

Years earlier, in December 2006, anticipating the directive with which Robert Gates would create Cyber Command, the chairman of the Joint Chiefs of Staff, General Peter Pace, had issued a classified document, the National Military Strategy for Cyber Operations, which expressed the need for “offensive capabilities in cyber space to gain and maintain the initiative.”

General Alexander, now CyberCom commander as well as the NSA director, was setting up forty “cyber-offensive teams”—twenty-seven for the U.S. combatant commands (Central Command, Pacific Command, European Command, and so forth) and thirteen engaged in the defense of networks, mainly Defense Department networks, at home. Part of this latter mission involved monitoring the networks; thanks to the work of the previous decade, starting with the Air Force Information Warfare Center, then gradually extending to the other services, the military networks had so few access points to the Internet—just twenty by this time, cut to eight in the next few years—that Alexander’s teams could detect and repel attacks at those few gateways. But defending networks also meant going on the offensive, through the deliberately ambiguous concept of CNE, Computer Network Exploitation, which could be both a form of “active defense” and preparation for CNA—Computer Network Attack.

Some officials deep inside the national security establishment were concerned about this trend. The military—the nation—was rapidly adopting a new form of warfare, had assembled and used a new kind of weapon; but this was all being done in great secrecy, inside the nation’s most secretive intelligence agency, and it was clear, even to those with a glimpse of its inner workings, that no one had thought through the implications of this new kind of weapon and new vision of war.

During the planning for Stuxnet, there had been debates, within the Bush and Obama administrations, over the precedent that the attack might establish. For more than a decade, dozens of panels and commissions had warned that America’s critical infrastructure was vulnerable to a cyber attack—and now America was launching the first cyber attack on another nation’s critical infrastructure. Almost no one outright opposed the Stuxnet program: if it could keep Iran from developing nuclear weapons, it was worth the risk; but several officials realized that it was a risk, that the dangers of blowback were inescapable and immense.

The United States wasn’t alone on this cyber rocket ship, after all. Ever since their penetration of Defense Department sites a decade earlier, in Operation Moonlight Maze, the Russians had been ramping up their capabilities to exploit and attack computer networks. The Chinese had joined the club in 2001 and soon grew adept at penetrating sensitive (though, as far as anyone knew, unclassified) networks of dozens of American military commands, facilities, and laboratories. In Obama’s first year as president, around the Fourth of July, the North Koreans—whose citizens barely had electricity—launched a massive denial-of-service attack, shutting down websites of the Departments of Homeland Security, Treasury, Transportation, the Secret Service, the Federal Trade Commission, the New York Stock Exchange, and NASDAQ, as well as dozens of South Korean banks, affecting at least 60,000, possibly as many as 160,000 computers.

Stuxnet spurred the Iranians to create their own cyber war unit, which took off at still greater levels of funding a year and a half later, in the spring of 2012, when, in a follow-up attack, the NSA’s Flame virus—the massive, multipurpose malware from which Olympic Games had derived—wiped out nearly every hard drive at Iran’s oil ministry and at the Iranian National Oil Company. Four months after that, Iran fired back with its own Shamoon virus, wiping out 30,000 hard drives (basically, every hard drive in every workstation) at Saudi Aramco, Saudi Arabia’s state-owned oil company, and planting, on every one of its computer monitors, the image of a burning American flag.

Keith Alexander learned, from communications intercepts, that the Iranians had expressly developed and launched Shamoon as retaliation for Stuxnet and Flame. On his way to a conference with GCHQ, the NSA’s British counterpart, he read a talking points memo, written by an aide, noting that, with Shamoon and several other recent cyber attacks on Western banks, the Iranians had “demonstrated a clear ability to learn from the capabilities and actions of others”—namely, those of the NSA and of Israel’s Unit 8200.

It was the latest, most dramatic illustration of what agency analysts and directors had been predicting for decades: what we can do to them, they can someday do to us—except that “someday” was now.

Alexander’s term as NSA director was coinciding with—and Alexander himself had been fostering—not only the advancement of cyber weapons and the onset of physically destructive cyber attacks, but also the early spirals of a cyber arms race. What to do about it? This, too, was a question that no one had thought through, at even the most basic level.

When Bob Gates became secretary of defense, back at the end of 2006, he was so stunned by the volume of attempted intrusions into American military networks—his briefings listed dozens, sometimes hundreds, every day—that he wrote a memo to the Pentagon’s deputy general counsel. At what point, he asked, did a cyber attack constitute an act of war under international law?

He didn’t receive a reply until the last day of 2008, almost two years later. The counsel wrote that, yes, a cyber attack might rise to the level that called for a military response—it could be deemed an act of armed aggression, under certain circumstances—but what those circumstances were, where the line should be drawn, even the criteria for drawing that line, were matters for policymakers, not lawyers, to address. Gates took the reply as an evasion, not an answer.

One obstacle to a clearer answer—to clearer thinking, generally—was that everything about cyber war lay encrusted in secrecy: its roots were planted, and its fruits were ripening, in an agency whose very existence had once been highly classified and whose operations were still as tightly held as any in government.

This culture of total secrecy had a certain logic back when SIGINT was strictly an intelligence tool: the big secret was that the NSA had broken some adversary’s code; if that was revealed, the adversary would simply change the code; the agency would have to start all over, and until it broke the new code, national security could be damaged; in wartime, a battle might be lost.

But now that the NSA director was also a four-star commander, and now that SIGINT had been harnessed into a weapon of destruction, something like a remote-control bomb, questions were raised and debates were warranted, for reasons having to do not only with morality but with the new weapon’s strategic usefulness—its precise effects, side effects, and consequences.

General Michael Hayden, the former NSA director, had moved over to Langley, as director of the CIA, when President Bush gave the go-ahead on Olympic Games. (He was removed from that post when Obama came to the White House, so he had no role in the actual operation.) Two years after Stuxnet came crashing to a halt, when details about it were leaked to the mainstream press, Hayden—by now retired from the military—voiced in public the same concerns that he and others had debated in the White House Situation Room.

“Previous cyber-attacks had effects limited to other computers,” Hayden told a reporter. “This is the first attack of a major nature in which a cyber-attack was used to effect physical destruction. And no matter what you think of the effects—and I think destroying a cascade of Iranian centrifuges is an unalloyed good—you can’t help but describe it as an attack on critical infrastructure.”

He went on: “Somebody has crossed the Rubicon. We’ve got a legion on the other side of the river now.” Something had shifted in the nature and calculation of warfare, just as it had after the United States dropped atom bombs on Hiroshima and Nagasaki at the end of World War II. “I don’t want to pretend it’s the same effect,” Hayden said, “but in one sense at least, it’s August 1945.”

For the first two decades after Hiroshima, the United States enjoyed vast numerical superiority—for some of that time, a monopoly—in nuclear weapons. But on the cusp of a new era in cyber war, it was a known fact that many other nations had cyber war units, and America was far more vulnerable in this kind of war than any likely adversary, than any other country on the planet, because it relied far more heavily on vulnerable computer networks—in its weapons systems, its financial systems, its vital critical infrastructures.

If America, or U.S. Cyber Command, wanted to wage cyber war, it would do so from inside a glass house.

There was another difference between the two kinds of new weapons, besides the scale of damage they could inflict: nuclear weapons were out there, in public; certain aspects of their production or the exact size of their stockpile were classified, but everyone knew who had them, everyone had seen the photos and the film clips, showing what they could do, if they were used; and if they were used, everyone would know who had launched them.

Cyber weapons—their existence, their use, and the policies surrounding them—were still secret. It seemed that the United States and Israel sabotaged the Natanz reactor, that Iran wiped out Saudi Aramco’s hard drives, and that North Korea unleashed the denial-of-service attacks on U.S. websites and South Korean banks. But no one took credit for the assaults; and while the forensic analysts who traced the attacks were confident in their assessments, they didn’t—they couldn’t—boast the same slam-dunk certainty as a physicist tracking the arc of a ballistic missile’s trajectory.

This extreme secrecy extended not only to the mass public but also inside the government, even among most officials with high-level security clearances. Back in May 2007, shortly after he briefed George W. Bush on the plan to launch cyber attacks against Iraqi insurgents, Mike McConnell, then the director of national intelligence, hammered out an accord with senior officials in the Pentagon, the NSA, the CIA, and the attorney general’s office, titled “Trilateral Memorandum of Agreement Among the Department of Defense, the Department of Justice, and the Intelligence Community Regarding Computer Network Attack and Computer Network Exploitation Activities.” But, apart from the requirement that cyber offensive operations needed presidential approval, there were no formal procedures or protocols for top policy advisers and policymakers to assess the aims, risks, benefits, or consequences of such attacks.

To fill that vast blank, President Obama ordered the drafting of a new presidential policy directive, PPD-20, titled “U.S. Cyber Operations Policy,” which he signed in October 2012, a few months after the first big press leaks about Stuxnet.

Eighteen pages long, it was the most explicit, detailed directive of its kind. In one sense, its approach was more cautious than its predecessors. It noted, for instance, in an implied (but unstated) reference to Stuxnet’s unraveling, that the effects of a cyber attack can spread to “locations other than the intended target, with potential unintended or collateral consequences that may affect U.S. national interests.” And it established an interagency Cyber Operations Policy Working Group to ensure that such side effects, along with other broad policy issues, were weighed before an attack was launched.

But the main intent and impact of PPD-20 was to institutionalize cyber attacks as an integral tool of American diplomacy and war. It stated that the relevant departments and agencies “shall identify potential targets of national importance” against which cyber attacks “can offer a favorable balance of effectiveness and risk as compared to other instruments of national power.” Specifically, the secretary of defense, director of national intelligence, and director of the CIA—in coordination with the attorney general, secretary of state, secretary of homeland security, and relevant heads of the intelligence community—“shall prepare, for approval by the President . . . a plan that identifies potential systems, processes, and infrastructure against which the United States should establish and maintain [cyber offensive] capabilities; proposes circumstances under which [they] might be used; and proposes necessary resourcing and steps that would be needed for implementation, review, and updates as U.S. national security needs change.”

Cyber options were to be systematically analyzed, preplanned, and woven into broader war plans, in much the same way that nuclear options had been during the Cold War.

Also, as with nuclear options, the directive required “specific Presidential approval” for any cyber operation deemed “reasonably likely to result in ‘significant consequences’ ”—those last two words defined to include “loss of life, significant responsive actions against the United States, significant damage to property, serious adverse U.S. foreign policy consequences, or serious economic impact to the United States”—though an exception was made, allowing a relevant agency or department head to launch an attack without presidential approval in case of an emergency.

However, unlike nuclear options, the plans for cyber operations were not intended to lie dormant until the ultimate conflict; they were meant to be executed, and fairly frequently. The agency and department heads conducting these attacks, the directive said, “shall report annually on the use and effectiveness of operations of the previous year to the President, through the National Security Adviser.”

No time was wasted in getting these plans up and ready. An action report on the directive noted that the secretary of defense, director of national intelligence, and CIA director briefed an NSC Deputies meeting on the scope of their plans in April 2013, six months after PPD-20 was signed.

PPD-20 was classified TOP SECRET/NOFORN, meaning it could not be shared with foreign officials; the document’s very existence was highly classified. But it was addressed to the heads of all the relevant agencies and departments, and to the vice president and top White House aides. In other words, the subject was getting discussed, not only in these elite circles, but also—with Stuxnet out in the open—among the public. Gingerly, officials began to acknowledge, in broad general terms, the existence and concept of cyber offensive operations.

General James Cartwright, who’d recently retired as vice chairman of the Joint Chiefs of Staff and who, before then, had been head of U.S. Strategic Command, which had nominal control over cyber operations, told a reporter covering Stuxnet that the extreme secrecy surrounding the topic had hurt American interests. “You can’t have something that’s a secret be a deterrent,” he said, “because if you don’t know it’s there, it doesn’t scare you.”

Some officers dismissed Cartwright’s logic: the Russians and Chinese knew what we had, just as much as we knew what they had. Still, others agreed that it might be time to open up a little bit.

In October, the same month that PPD-20 was signed, the NSA declassified a fifteen-year-old issue of Cryptolog, the agency’s in-house journal, dealing with the history of information warfare. The special issue had been published in the spring of 1997, its contents stamped TOP SECRET UMBRA, denoting the most sensitive level of material dealing with communications intelligence. One of the articles, written by William Black, the agency’s top official for information warfare at the time, noted that the secretary of defense had delegated to the NSA “the authority to develop Computer Network Attack (CNA) techniques.” In a footnote, Black cited a Defense Department directive from the year before, defining CNA as “operations to disrupt, deny, degrade, or destroy information resident in computers and computer networks, or the computers and networks themselves.”

This was remarkably similar to the way Obama’s PPD-20 defined “cyber effect”—as the “manipulation, disruption, denial, degradation, or destruction of computers, information or communications systems, networks, physical or virtual infrastructure controlled by computers or information systems, or information resident therein.”

In this sense, PPD-20 was expressing, in somewhat more detailed language, an idea that had been around since William Perry’s counter command-control warfare in the late 1970s.

After all those decades, the declassified Cryptolog article marked the first time that the term CNA, or such a precise definition of the concept, had appeared in a public document.

Within the Air Force, which had always been the military service most active in cyberspace, senior officers started writing a policy statement acknowledging its CNA capabilities, with the intent of releasing the paper to the public.

But then, just as they were finishing a draft, the hammer came down. Leon Panetta, a former Democratic congressman and budget director who’d replaced a fatigued Robert Gates as Obama’s secretary of defense, issued a memo forbidding any further references to America’s CNA programs.

Obama had decided to confront the Chinese directly on their rampant penetrations of U.S. computer networks. And Panetta didn’t want his officers to supply the evidence that might help the Chinese accuse the American president of hypocrisy.