CHAPTER 15


“WE’RE WANDERING IN DARK TERRITORY”

IN the wee hours of Monday, February 10, 2014, four weeks after President Obama’s speech at the Justice Department on NSA reform, hackers launched a massive cyber attack against the Las Vegas Sands Corporation, owner of the Venetian and Palazzo hotel-casinos on the Vegas Strip and a sister resort, the Sands, in Bethlehem, Pennsylvania.

The assault destroyed the hard drives in thousands of servers, PCs, and laptops, though not before stealing thousands of customers’ credit-card charges as well as the names and Social Security numbers of company employees.

Cyber specialists traced the attack to the Islamic Republic of Iran.

The previous October, Sheldon Adelson, the ardently pro-Israel, right-wing billionaire who owned 52 percent of Las Vegas Sands stock, had spoken on a panel at Yeshiva University in New York. At one point, he was asked about the Obama administration’s ongoing nuclear negotiations with Iran.

“What I would say,” he replied, “is, ‘Listen. You see that desert out there? I want to show you something.’ ” Then, Adelson said, he would drop a nuclear bomb on the spot. The blast “doesn’t hurt a soul,” he went on, “maybe a couple of rattlesnakes or a scorpion or whatever.” But it does lay down a warning: “You want to be wiped out?” he said he’d tell the mullahs. “Go ahead and take a tough position” at those talks.

Adelson’s monologue went viral on YouTube. Two weeks later, the Ayatollah Ali Khamenei, Iran’s supreme leader, fumed that America “should slap these prating people” and “crush their mouths.”

Soon after, the hackers went to work on Adelson’s company. On January 8, they tried to break into the Sands Bethlehem server, probing the perimeters for weak spots. On the twenty-first, and again on the twenty-sixth, they activated password-cracking software, trying out millions of letter-and-number combinations, almost instantaneously, to hack into the company’s Virtual Private Network, which employees used at home or on the road.
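Password cracking of this sort is conceptually simple: enumerate candidate strings and test each one against the login (or, offline, against a stolen password hash). A minimal sketch, in which the target, alphabet, and hash function are hypothetical stand-ins for the VPN's real login check:

```python
# Illustrative brute-force sketch -- target_hash, hash_fn, and the
# alphabet are hypothetical stand-ins.  Real attacks test millions of
# guesses per second, usually against stolen hashes rather than a live
# login endpoint.
import itertools
import string

def crack(target_hash, hash_fn, max_len=4,
          alphabet=string.ascii_lowercase + string.digits):
    """Try every letter-and-number combination up to max_len characters."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if hash_fn(guess) == target_hash:
                return guess
    return None  # not found within max_len
```

Even this toy loop shows why the attack scales: against a six-character alphanumeric password it faces roughly 36^6, about 2.2 billion, candidates, which is why attackers automate the search and why long random passwords and lockout policies on VPN endpoints matter.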

Finally, on February 1, they found a weakness in the server of a Bethlehem company that tested new pages for the casino’s website. Using a tool called Mimikatz, which extracted all of a server’s recent records, the hackers found the login and password of a Sands systems engineer who’d just been in Bethlehem on a business trip. Using his credentials, they strolled into the Vegas-based servers, probed their pathways, and inserted a malware program, consisting of just 150 lines of code, that wiped out the data stored on every computer and server, then filled the spaces with a random stream of zeroes and ones, to make restoring the data nearly impossible.

Then they started to download really sensitive data: the IT passwords and encryption keys, which could take them into the mainframe computer, and, potentially more damaging, the files on high-rolling customers—“the whales,” as casino owners called them. Just in time, Sands executives shut off the company’s link to the Internet.

Still, the next day, the hackers found another way back in and defaced the company’s website with a message: “Encouraging the Use of Weapons of Mass Destruction UNDER ANY CONDITION Is a Crime.” Then they shut down a few hundred more computers that hadn’t been disabled the first time around.

After the storm passed, the casino’s cyber security staff estimated that the Iranians had destroyed twenty thousand computers, which would cost at least $40 million to replace.

It was a typical, if somewhat sophisticated, cyber attack for the second decade of the twenty-first century. Yet there was one thing odd about these hackers: anyone breaking into the servers of a Las Vegas resort hotel casino could have made off with deep pools of cash—but these hackers didn’t take a dime. Their sole aim was to punish Sheldon Adelson for his crude comments about nuking Iran: they launched a cyber attack not to steal money or state secrets, but to influence a powerful man’s political speech.

It was a new dimension, a new era, of cyber warfare.

Another notable feature, which the Sands executives picked up on after the fact: the Iranians were able to unleash such a destructive attack, after making such extensive preparations, without arousing notice, because the company’s cyber security staff consisted of just five people.

Las Vegas Sands—one of the largest resort conglomerates in the world, with forty thousand employees and assets exceeding $20 billion—wasn’t ready to deal with the old era of cyber war, much less the new one.

At first, not wanting to scare off customers, the executives tried to cover up just how badly the hack had hurt them, issuing a press release commenting only on their website’s defacement. The hackers struck back, posting a video on YouTube showing a computer screen with what seemed like thousands of the Sands’ files and folders, including passwords and casino credit records, underscored with a text box reading, “Do you really think that only your mail server has been taken down?!! Like hell it has!!”

The FBI took down the video within a few hours, and the company managed to quash much further exposure, until close to the end of the year, when Bloomberg Businessweek published a long story detailing the full scope of the attack and its damage. But the piece drew little notice because, two weeks earlier, a similar, though far more devastating attack hit the publicity-drenched world of Hollywood, specifically one of its major studios—Sony Pictures Entertainment.

On Monday morning, November 24, a gang of hackers calling themselves “Guardians of Peace” hacked into Sony Pictures’ network, destroying three thousand computers and eight hundred servers, carting off more than one hundred terabytes of data—much of which was soon sent to, and gleefully reprinted by, the tabloid, then the mainstream, press—including executives’ salaries, emails, digital copies of unreleased films, and the Social Security numbers of 47,000 actors, contractors, and employees.

Sony had been hacked before, twice in 2011 alone: one of the attacks shut down its PlayStation network for twenty-three days after purloining data from 77 million accounts; the other stole data from 25 million viewers of Sony Online Entertainment, including twelve thousand credit card numbers. The cost, in business lost and damages repaired, came to about $170 million.

But, like many conglomerates, Sony ran its various branches in stovepipe fashion: the executives at PlayStation had no contact with those at Online Entertainment, who had no contact with those at Sony Pictures. So the lessons learned in one realm were not shared with the others.

Now, the executives realized, they had to get serious. To help track down the hackers and fix the damage, they contacted not only the FBI but also FireEye, which had recently purchased Mandiant, the company—headed by the former Air Force cyber crime investigator Kevin Mandia—that had, most famously, uncovered the massive array of cyber attacks launched by Unit 61398 of the Chinese army. Soon enough, both FireEye and the FBI, the latter working with NSA, identified the attackers as a group called “DarkSeoul,” which often did cyber jobs for the North Korean government from outposts scattered across Asia.

Sony Pictures had planned to release on Christmas Day a comedy called The Interview, starring James Franco and Seth Rogen as a frothy TV talk show host and his producer who get mixed up in a CIA plot to assassinate North Korea’s ruler, Kim Jong-un. The previous June, when the project was announced, the North Korean government released a statement warning that it would “mercilessly destroy anyone who dares hurt or attack the supreme leadership of the country, even a bit.” The hack, it seemed, was the follow-up to the threat.

Some independent cyber specialists doubted that North Korea was behind the attack, but those deep inside the U.S. intelligence community were unusually confident. In public, officials said that the hackers used many of the same “signatures” that DarkSeoul had used in the past, including an attack two years earlier that wiped out forty thousand computers in South Korea—the same lines of code, encryption algorithms, data-deletion methods, and IP addresses. But the real reason for the government’s certainty was that the NSA had long ago penetrated North Korea’s networks: anything that its hackers did, the NSA could follow; when the hackers monitored what they were doing, the NSA could intercept the signal from their monitors—not in real time (unless there was a reason to be watching the North Koreans in real time), but the agency’s analysts could retrieve the files, watch the images, and compile the evidence retroactively.

It was another case of a cyber attack launched not for money, trade secrets, or traditional espionage, but to influence a private company’s behavior.

This time, the blackmail worked. One week before opening day, Sony received an email threatening violence against theaters showing the film. Sony canceled its release; and, suddenly, the flow of embarrassing emails and data to the tabloids and the blogosphere ceased.

The studio’s cave-in only deepened its problems. At his year-end press conference, traditionally held just before flying off to his Hawaii home for the holidays, President Obama told the world that Sony “made a mistake” when it canceled the movie. “I wish they had spoken to me first,” he went on. “I would have told them, ‘Do not get into a pattern in which you’re intimidated by these kinds of criminal acts.’ ” He also announced that the United States government would “respond proportionally” to the North Korean attack, “in a place and time and manner that we choose.”

Some in the cyber world were perplexed. Hundreds of American banks, retailers, utilities, defense contractors, even Defense Department networks had been hacked routinely, sometimes at great cost, with no retributive action by the U.S. government, at least not publicly. But a Hollywood studio gets breached, over a movie, and the president pledges retaliation in a televised news conference?

Obama did have a point in making the distinction. Jeh Johnson, the secretary of homeland security, said on the same day that the Sony attack constituted “not just an attack against a company and its employees,” but “also an attack on our freedom of expression and way of life.” A Seth Rogen comedy may have been an unlikely emblem of the First Amendment and American values; but so were many other works that had come under attack through the nation’s history, yet were still worth defending, because an attack on basic values had to be answered—however ignoble the target—lest some future assailant threaten to raid the files of some other studio, publisher, art museum, or record company if their executives didn’t cancel some other film, book, exhibition, or album.

The confrontation touched off a debate inside the Obama White House, similar to the debates discussed, but never resolved, under previous presidents: What was a “proportional” response to a cyber attack? Did this response have to be delivered in cyberspace? Finally, what role should government play in responding to cyber attacks on citizens or private corporations? A bank gets hacked, that’s the bank’s problem; but what if two, three, or a dozen banks—big banks—were hacked? At what point did these assaults become a concern for national security?

It was a broader version of the question that Robert Gates had asked the Pentagon’s general counsel eight years earlier: at what point did a cyber attack constitute an act of war? Gates never received a clear reply, and the fog hadn’t lifted since.

On December 22, three days after Obama talked about the Sony hack at his press conference, someone disconnected North Korea from the Internet. Kim Jong-un’s spokesmen accused Washington of launching the attack. It was a reasonable guess: Obama had pledged to launch a “proportional” response to the attack on Sony; shutting down North Korea’s Internet for ten hours seemed to fit the bill, and it wouldn’t have been an onerous task, given that the whole country had just 1,024 Internet Protocol addresses (fewer than the number on some blocks in New York City), all of them connected through a single service provider in China.
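The arithmetic behind that tiny footprint: an IPv4 block with a /22 prefix contains 2^(32−22) = 1,024 addresses. A short sketch, using the prefix commonly reported in public routing records as North Korea's sole allocation at the time (treat the specific prefix as illustrative; the arithmetic holds for any /22):

```python
# A /22 IPv4 block holds 2**(32 - 22) = 1,024 addresses.  The specific
# prefix below is the one commonly reported for North Korea's single
# provider at the time; the math is the same for any /22.
import ipaddress

block = ipaddress.ip_network("175.45.176.0/22")
print(block.num_addresses)        # 1024
print(block[0], "-", block[-1])   # 175.45.176.0 - 175.45.179.255
```

By comparison, a single large American ISP held tens of millions of addresses, which is what made the country-versus-city-block contrast so stark.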

In fact, though, the United States government played no part in the shutdown. A debate broke out in the White House over whether to deny the charge publicly. Some argued that it might be good to clarify what a proportional response was not. Others argued that making any statement would set an awkward precedent: if U.S. officials issued a denial now, then they’d also have to issue a denial the next time a digital calamity occurred during a confrontation; otherwise everyone would infer that America did launch that attack, whether or not it actually had, at which point the victim might fire back.

In this instance, the North Koreans didn’t escalate the conflict, in part because they couldn’t. But another power, with a more robust Internet, might have.

Gates’s question was more pertinent than ever, but it was also, in a sense, beside the point. Because of its lightning speed and the initial ambiguity of its source, a cyber attack could provoke a counterattack, which might escalate to war, in cyberspace and in real space, regardless of anyone’s intentions.

At the end of Bush’s presidency and the beginning of Obama’s, in casual conversations with aides and colleagues in the Pentagon and the White House, Gates took to mulling over larger questions about cyber espionage and cyber war.

“We’re wandering in dark territory,” he would say on these occasions.

It was a phrase from Gates’s childhood in Kansas, where his grandfather worked for nearly fifty years as a stationmaster on the Santa Fe Railroad. “Dark territory” was the industry’s term for a stretch of rail track that was uncontrolled by signals. To Gates, it was a perfect parallel to cyberspace, except that this new territory was much vaster and the danger was greater, because the engineers were unknown, the trains were invisible, and a crash could cause far more damage.

Even during the darkest days of the Cold War, Gates would tell his colleagues, the United States and the Soviet Union set and followed some basic rules: for instance, they agreed not to kill each other’s spies. But today, in cyberspace, there were no such rules, no rules of any kind. Gates suggested convening a closed-door meeting with the other major cyber powers—the Russians, Chinese, British, Israelis, and French—to work out some principles, some “rules of the road,” that might defuse our mutual vulnerabilities: an agreement, say, not to launch cyber attacks on computer networks controlling dams, waterworks, electrical power grids, and air traffic control—critical civilian infrastructure—except perhaps in wartime, and maybe not even then.

Those who heard Gates’s pitch would furrow their brows and nod gravely, but no one followed up; the idea went nowhere.

Over the next few years, this dark territory’s boundaries widened, and the volume of traffic swelled.

In 2014, there were almost eighty thousand security breaches in the United States, more than two thousand of which resulted in losses of data—a quarter more breaches, and 55 percent more data losses, than the year before. On average, the hackers stayed inside the networks they’d breached for 205 days—nearly seven months—before being detected.

These numbers were likely to soar, with the rise of the Internet of Things. Back in 1996, Matt Devost, the computer scientist who simulated cyber attacks in NATO war games, co-wrote a paper called “Information Terrorism: Can You Trust Your Toaster?” The title was a bit facetious, but twenty years later, with the most mundane items of everyday life—toasters, refrigerators, thermostats, and cars—sprouting portals and modems for network connectivity (and thus for hackers too), it seemed prescient.

President Obama tried to stem the deluge. On February 12, 2015, he signed an executive order titled “Improving Critical Infrastructure Cybersecurity,” setting up forums in which private companies could share data about the hackers in their midst—with one another and with government agencies. In exchange, the agencies—mainly the NSA, working through the FBI—would provide top secret tools and techniques to protect their networks from future assaults.

These forums were beefed-up versions of the Information Sharing and Analysis Centers that Richard Clarke had established during the Clinton administration—and they were afflicted with the same weakness: both were voluntary; no company executives had to share information if they didn’t want to. Obama made the point explicitly: “Nothing in this order,” his document stated, “shall be construed to provide an agency with authority for regulating the security of critical infrastructure.”

Regulation—it was still private industry’s deepest fear, deeper than the fear of losing millions of dollars at the hands of cyber criminals or spies. As the white-hat hacker Peiter “Mudge” Zatko had explained to Dick Clarke fifteen years earlier, these executives had calculated that it cost no more to clean up after a cyber attack than to prevent one in the first place—and the preventive measures might not work anyway.

Some industries had altered their calculations in the intervening years, notably the financial sector. Its business consisted of bringing in money and cultivating trust; hackers had made an enormous dent in both, and sharing information demonstrably lowered risk. But the big banks were exceptions to the pattern.

Obama’s cyber policy aides had made a run, early on, at drafting mandatory security standards, but they soon pulled back. Corporate resistance was too stiff; the secretaries of treasury and commerce argued that onerous regulations would impede an economic recovery, the number-one concern to a president digging the country out of its deepest recession in seventy years. Besides, the executives had a point: companies that had adopted tight security standards were still getting hacked. The government had offered tools, techniques, and a list of “best practices,” but “best” didn’t mean perfect—after the hacker adapted, erstwhile best practices might not even be good—and, in any case, tools were just tools: they weren’t solutions.

Two years earlier, in January 2013, a Defense Science Board task force had released a 138-page report on “the advanced cyber threat.” The product of an eighteen-month study, based on more than fifty briefings from government agencies, military commands, and private companies, the report concluded that there was no reliable defense against a resourceful, dedicated cyber attacker.

In several recent exercises and war games that the panel reviewed, Red Teams, using exploits that any skilled hacker could download from the Internet, “invariably” penetrated even the Defense Department’s networks, “disrupting or completely beating” the Blue Team.

The outcomes were all too reminiscent of Eligible Receiver, the 1997 NSA Red Team assault that first exposed the U.S. military’s abject vulnerability.

Some of the task force members had observed up close the early history of these threats, among them Bill Studeman, the NSA director in the late 1980s and early 1990s, who first warned that the agency’s radio dishes and antennas were “going deaf” in the global transition from analog to digital; Bob Gourley, one of Studeman’s acolytes, the first intelligence chief of the Pentagon’s Joint Task Force-Computer Network Defense, who traced the Moonlight Maze hack to Russia; and Richard Schaeffer, the former director of the NSA Information Assurance Directorate, who spotted the first known penetration of the U.S. military’s classified network, prompting Operation Buckshot Yankee.

Sitting through the briefings, collating their conclusions, and writing the report, these veterans of cyber wars past—real and simulated—felt as if they’d stepped into a time machine: the issues, the dangers, and, most surprising, the vulnerabilities were the same as they’d been all those years ago. The government had built new systems and software, and created new agencies and directorates, to detect and resist cyber attacks; but as with any other arms race, the offense—at home and abroad—had devised new tools and techniques as well, and, in this race, the offense held the advantage.

“The network connectivity that the United States has used to tremendous advantage, economically and militarily, over the past twenty years,” the report observed, “has made the country more vulnerable than ever to cyber attacks.” It was the same paradox that countless earlier commissions had observed.

The problem was basic and inescapable: the computer networks, the panelists wrote, were “built on inherently insecure architectures.” The key word here was inherently.

It was the problem that Willis Ware had flagged nearly a half century earlier, in 1967, just before the rollout of the ARPANET: the very existence of a computer network—where multiple users could gain access to files and data online, from remote, unsecured locations—created inherent vulnerabilities.

The danger, as the 2013 task force saw it, wasn’t that someone would launch a cyber attack, out of the blue, on America’s military machine or critical infrastructure. Rather, it was that cyber attacks would be an element of all future conflicts; and given the U.S. military’s dependence on computers—in everything from the GPS guidance systems in its missiles, to the communications systems in its command posts, to the power stations that generated its electricity, to the scheduling orders for resupplying the troops with ammunition, fuel, food, and water—there was no assurance that America would win this war. “With present capabilities and technology,” the report stated, “it is not possible to defend with confidence against the most sophisticated cyber attacks.”

Great Wall defenses could be leapt over or maneuvered around. Instead, the report concluded, cyber security teams, civilian and military, should focus on detection and resilience—designing systems that could spot an attack early on and repair the damage swiftly.

More useful still would be figuring out ways to deter adversaries from attacking even in the most tempting situations.

This had been the great puzzle in the early days of nuclear weapons, when strategists realized that the atomic bomb and, later, the hydrogen bomb were more destructive than any war aim could justify. As Bernard Brodie, the first nuclear strategist, put it in a book called The Absolute Weapon, published just months after Hiroshima and Nagasaki, “Thus far the chief purpose of our military establishment has been to win wars. From now on its chief purpose must be to avert them.” The way to do that, Brodie reasoned, was to protect the nuclear arsenal, so that, in the event of a Soviet first strike, the United States would have enough bombs surviving to “retaliate in kind.”

But what did that mean in modern cyberspace? The nations most widely seen as likely foes in such a war—Russia, China, North Korea, Iran—weren’t plugged into the Internet to nearly the same extent as America. Retaliation in kind would inflict far less damage on those countries than the first strike had inflicted on America; therefore, the prospect of retaliation might not deter them from attacking. So what was the formula for cyber deterrence: threatening to respond to an attack by declaring all-out war, firing missiles and smart bombs, escalating to nuclear retaliation? Then what?

The fact was, no one in a position of power or high-level influence had thought this through.

Mike McConnell had pondered the question in the transition between the Bush and Obama presidencies, when he set up the Comprehensive National Cybersecurity Initiative. The CNCI set twelve tasks to accomplish in the ensuing few years: among other things, to install a common intrusion-detection system across all federal networks, boost the security of classified networks, define the U.S. government’s role in protecting critical infrastructure—and there was this (No. 10 on the list): “Define and develop enduring deterrence strategies and programs.”

Teams of aides and analysts were formed to work on the twelve projects. The team assigned to Task No. 10 came up short: a paper was written, but its ideas were too vague and abstract to be described as “strategies,” much less “programs.”

McConnell realized that the problem was too hard. The other tasks were hard, too, but in most of those cases, it was fairly clear how to get the job done; the trick was getting the crucial parties—the bureaucracies, Congress, and private industry—to do it. Figuring out cyber deterrence was a conceptual problem: which hackers were you trying to deter; what were you trying to deter them from doing; what penalties were you threatening to impose if they attacked anyway; and how would you make sure they wouldn’t strike back harder in response? These were questions for policymakers, maybe political philosophers, not for midlevel aides on a task force.

The 2013 Defense Science Board report touched lightly on the question of cyber deterrence, citing parallels with the advent of the A-bomb at the end of World War II. “It took decades,” the report noted, “to develop an understanding” of “the strategies to achieve stability with the Soviet Union.” Much of this understanding grew out of analyses and war-game exercises at the RAND Corporation, the Air Force–sponsored think tank where civilian economists, physicists, and political scientists—among them Bernard Brodie—conceived and tested new ideas. “Unfortunately,” the task force authors wrote, they “could find no evidence” that anyone, anywhere, was doing that sort of work “to better understand the large-scale cyber war.”

The first official effort to find some answers to these questions got underway two years later, on February 10, 2015, with the opening session of yet another Defense Science Board panel, this one called the Task Force on Cyber Deterrence. It would continue meeting in a highly secure chamber in the Pentagon for two days each month, through the end of the year. Its goal, according to the memo that created the panel, was “to consider the requirements for effective deterrence of cyber attack against the United States and allies/partners.”

Its panelists included a familiar group of cyber veterans, among them Chris Inglis, deputy director of the NSA under Keith Alexander, now a professor of cyber studies at the U.S. Naval Academy in Annapolis, Maryland; Art Money, the former Pentagon official who guided U.S. policy on information warfare in the formative era of the late 1990s, now (and for the previous decade) chairman of the NSA advisory board; Melissa Hathaway, the former Booz Allen project manager who was brought into the Bush White House by Mike McConnell to run the Comprehensive National Cybersecurity Initiative, now the head of her own consulting firm; and Robert Butler, a former officer at the Air Force Information Warfare Center who’d helped run the first modern stab at information warfare, the campaign against Serbian president Slobodan Milosevic and his cronies. The chairman of the task force was James Miller, the undersecretary of defense for policy, who’d been working cyber issues in the Pentagon for more than fifteen years.

All of them were longtime inside players of an insiders-only game; and, judging from their presence, the Pentagon’s permanent bureaucrats wanted to keep it that sort of game.

Meanwhile, the power and resources were concentrated at Fort Meade, where U.S. Cyber Command was amassing its regiments, and drawing up battle plans, even though broad questions of policy and guidance had barely been posed, much less settled.

In 2011, when Robert Gates realized that the Department of Homeland Security would never be able to protect the nation’s critical infrastructure from a cyber attack (and after his plan for a partnership between DHS and the NSA went up in smoke), he gave that responsibility to Cyber Command as well.

Cyber Command’s original two core missions were more straightforward. The first, to support U.S. combatant commanders, meant going through their war plans and figuring out which targets could be destroyed by cyber means rather than by missiles, bullets, or bombs. The second mission, to protect Defense Department computer networks, was right up Fort Meade’s alley: those networks had only eight points of access to the Internet; Cyber Command could sit on all of them, watching for intruders; and, of course, it had the political and legal authority to monitor, and roam inside, those networks, too.

But its third, new mission—defending civilian critical infrastructure—was another matter. The nation’s financial institutions, power grids, transportation systems, waterworks, and so forth had thousands of access points to the Internet—no one knew precisely how many. And even if the NSA could somehow sit on those points, it lacked the legal authority to do so. Hence Obama’s executive order, which relied on private industry to share information voluntarily—an unlikely prospect, but the only one available.

It was a bitter irony. The growth of this entire field—cyber security, cyber espionage, cyber war—had been triggered by concerns, thirty years earlier, about the vulnerability of critical infrastructure. Yet, after all the commissions, analyses, and directives, the problem seemed intractable.

Still, Keith Alexander not only accepted the new mission, he aggressively pushed for it; he’d helped Gates draft the directive that gave the mission to Cyber Command. To Alexander’s mind, not only did Homeland Security lack the resources to protect the nation, it had the wrong concept. It was trying to install intrusion-detection systems on all the networks, and there were just too many networks: they’d be impossible to monitor, and it would cost way too much to try. Besides, what could the DHS bureaucrats do if they detected a serious attack in motion?

The better approach, to Alexander’s mind, was the one he knew best: to go on the offensive—to get inside the adversary’s networks in order to see him preparing an attack, then deflect it. This was the age-old concept of “active defense” or, in its cyber incarnation, CNE, Computer Network Exploitation, which, as NSA directors dating back to Ken Minihan and Mike Hayden knew well, was not much different from Computer Network Attack.

But Alexander advocated another course, too, a necessary supplement: force the banks and the other sectors—or ply them with alluring incentives—to share information about their hackers with the government; and by “government,” he meant the FBI and, through it, the NSA and Cyber Command. He decidedly did not mean the Department of Homeland Security—though, in deference to the White House, which had designated DHS as the lead agency on protecting critical infrastructure, he would say the department could act as the “router” that sent alerts to the other, more active agencies.

Alexander was insistent on this point. Most private companies refused to share information, not only because they lacked incentives but also because they feared lawsuits: some of that information would include personal data about employees and customers. In response, President Obama urged Congress to pass a bill exempting companies from liability if they shared data. But Alexander opposed it, because Obama’s version of the bill would require them to share data with the Department of Homeland Security. Without telling the White House, Alexander lobbied his allies on Capitol Hill to amend or kill his commander-in-chief’s initiative.

It was an impolitic move from someone who was usually a bit more adroit. First, the White House staff soon heard about his lobbying, which didn’t endear him to the president, especially in the wake of the Snowden leaks, which were already cutting into the reserves of goodwill for Fort Meade. Second, it was self-defeating from a substantive angle: even with exemption from liability, companies were averse to giving private data to the government—all the more so if “government” was openly defined as the NSA.

The information-sharing bill was endangered, then, by an unlikely coalition of civil liberties advocates, who opposed sharing data with the government on principle, and NSA boosters, who opposed sharing it with any entity but Fort Meade.

So, the only coordinated defense left would be “active defense”—cyber offensive warfare.

That was the situation inherited by Admiral Michael Rogers, who replaced Alexander in April 2014. A career cryptologist, Rogers had run the Navy’s Fleet Cyber Command, which was also based at Fort Meade, before taking over the NSA and U.S. Cyber Command. He was also the first naval officer to earn three stars (and now four) after rising through the ranks as a code-breaker. Shortly after taking the helm, he was asked, in an interview with the Pentagon’s news service, how he would protect critical infrastructure from a cyber attack—Cyber Command’s third mission. He replied that the “biggest focus” would be “to attempt to interdict the attack before it ever got to us”—in other words, to get inside the adversary’s network, in order to see him prepare an attack, then to deflect or preempt it.

“Failing that,” Rogers went on, he would “probably” also “work directly with those critical infrastructure networks” that “could use stronger defensive capabilities.” But he knew this was backup, and flimsy backup at that, since neither Fort Meade nor the Pentagon could do much to bolster the private sector’s defenses on its own.

In April 2015, the Obama administration endorsed the logic. In a thirty-three-page document titled The Department of Defense Cyber Strategy, signed by Ashton Carter, a former Harvard physicist, longtime Pentagon official, and now Obama’s fourth secretary of defense, the same three missions were laid out in some detail: assisting the U.S. combatant commands, protecting Defense Department networks, and protecting critical infrastructure. To carry out this last mission, the document stated that, “with other government agencies” (the standard euphemism for NSA), the Defense Department had developed “a range of options and methods for disrupting cyber attacks of significant consequence before they can have an impact.” And it added, in a passage more explicit than the usual allusions to the option of Computer Network Attack, “If directed, DoD should be able to use cyber operations to disrupt an adversary’s command-and-control networks, military-related critical infrastructure, and weapons capabilities.”

A month earlier, on March 19, at hearings before the Senate Armed Services Committee, Admiral Rogers expressed the point more directly still, saying that deterring a cyber attack required addressing the question: “How do we increase our capacity on the offensive side?”

Senator John McCain, the committee’s Republican chairman, asked if it was true that the “current level of deterrence is not deterring.”

Rogers replied, “That is true.” More cyber deterrence meant more cyber offensive tools and more officers trained to use them, which meant more money and power for Cyber Command.

But was this true? At an earlier hearing, Rogers had made headlines by testifying that China and “probably one or two other countries” were definitely inside the networks that controlled America’s power grids, waterworks, and other critical assets. He didn’t say so, but America was also inside the networks that controlled such assets in those other countries. Would burrowing more deeply deter an attack, or would it only tempt both sides, all sides, to attack the others’ networks preemptively, in the event of a crisis, before the other sides attacked their networks first? And once the exchanges got under way, how would anyone keep them from escalating to more damaging cyber strikes or to all-out war?

These were questions that some tried to answer, but no one ever did, during the nuclear debates and gambits of the Cold War. But while nuclear weapons were incomparably more destructive, four features of this new arms race made it more likely to careen out of control. First, more than two players were involved, a few were unpredictable, and some weren’t even nation-states. Second, an attack would be invisible and, at first, hard to trace, boosting the chances of mistakes and miscalculations on the part of the country first hit. Third, a bright, bold firewall separated using nuclear weapons from not using nuclear weapons; the countries that possessed the weapons were constrained from using them, in part, because no one knew how fast and furious the violence would spiral, once the wall came down. By contrast, cyber attacks of one sort or another were commonplace: they erupted more than two hundred times a day, and no one knew—no one had ever declared, no one could predict—where the line between mere nuisance and grave threat might be drawn; and so there was a higher chance that someone would cross the line, perhaps without intending or even knowing it.

Finally, there was the extreme secrecy that enveloped everything about cyber war. Some things about nuclear weapons were secret, too: details about their design, the launch codes, the targeting plans, the total stockpile of nuclear materials. But the basics were well known: their history, how they worked, how many there were, how much destruction they could wreak—enough to facilitate an intelligent conversation, even by people who didn’t have Top Secret security clearances. This was not true of cyber: when Admiral Rogers testified that he wanted to “increase our capacity on the offensive side,” few, if any, of the senators had the slightest idea what he was talking about.

In the Five Guys report on NSA reform, which President Obama commissioned in 2013 in the wake of the Snowden revelations, the authors acknowledged, even stressed, the need to keep certain sources, methods, and operations highly classified. But they also approvingly quoted a passage from the report by Senator Frank Church, written in the wake of another intelligence scandal—that one, clearly illegal—almost forty years earlier. “The American public,” he declared, “should know enough about intelligence activities to be able to apply their good sense to the underlying issues of policy and morality.”

This knowledge, which Senator Church called “the key to control,” has been missing from discussions of policy, strategy, and morality in cyber war. We are all wandering in dark territory, most of us only recently, and even now dimly, aware of it.


I. As a compromise, when Obama issued an executive order imposing new sanctions against North Korea, on January 2, 2015, White House spokesman Josh Earnest pointedly called it “the first aspect of our response” to the Sony hacking. Listeners could infer from the word “first” that the United States had not shut down North Korea’s Internet eleven days earlier. But no official spelled this out explicitly, at least not on the record.

II. In 2013, two security researchers—including Charlie Miller, a former employee at the Office of Tailored Access Operations, the NSA’s elite hacking unit—hacked into the computer system of a Toyota Prius and a Ford Escape, then disabled the brakes and commandeered the steering wheel while the cars were driven around a parking lot. In that test, they’d wired their laptops to the cars’ onboard diagnostic ports, which service centers could access online. Two years later, they took control of a Jeep Cherokee wirelessly, after discovering many vulnerabilities in its onboard computers—which they also hacked wirelessly, through the Internet, cellular channels, and satellite data-links—while a writer for Wired magazine drove the car down a highway. Fiat Chrysler, the Jeep’s manufacturer, recalled 1.4 million vehicles, but Miller made clear that most, maybe all, modern cars were probably vulnerable in similar ways (though none of them were recalled). As with most other devices in life, their most basic functions had been computerized—and the computers hooked up to networks—for the sake of convenience, their manufacturers oblivious to the dangers they were opening up. The signs of a new dimension in the cyber arms race—involving sabotage, mayhem, terrorism, even assassination plots, carried out more invisibly than drone strikes—seemed ominous and almost inevitable.