On May 12, 2017, Patrick Ward was wheeled into a surgical prep room at St. Bartholomew’s Hospital in central London. The sprawling medical complex, known simply as Bart’s to locals, sits a few blocks from St. Paul’s Cathedral and was founded in 1123, when Henry I ruled the kingdom. The hospital operated continuously throughout Hitler’s aerial bombing campaign, proudly weathering the Second World War as bombs rained around and sometimes even on the elegant landmark.1 But in Bart’s nine hundred years of existence, no bomb created greater havoc than the one dropped that Friday morning.
Ward had traveled three hours from his small village in Dorset, near the southern English town of Poole. His family had farmed the maritime land since the late 1800s, a sweeping landscape plucked straight from the pages of a storybook. And Ward had a job that suited his idyllic home. He was the longtime sales director of Purbeck Ice Cream, a gourmet ice cream maker. He loved the job. “I get paid to talk and eat ice cream,” he told us. “And I can do both of those fairly well.”
He’d waited two years for a surgical slot to open up at Bart’s to repair a serious genetic heart condition called cardiomyopathy, which had thickened his heart wall and rendered the sturdy middle-aged Englishman, who’d hiked and played soccer, unable to perform most daily tasks. That morning, Ward’s chest was shaved and he was put through a battery of tests. He lay resting on a gurney, awaiting his long-sought procedure, when the surgeon dropped in. “It’ll be just a few more minutes. I’ll see you soon in the back.” But Ward was never wheeled into the operating room. He just continued waiting.
More than an hour later, the doctor reappeared. “We’ve been hacked. The whole system is down across the hospital. I can’t do the operation.” The hospital that had remained open throughout World War II was suddenly frozen, a target of a sweeping cyberattack. All its computer systems had crashed. Ambulances were diverted, appointments were canceled, and surgical services closed for the day. The attack had paralyzed a third of the country’s National Health Service, which provides most of the health care in England.2
Later that morning in Redmond, Microsoft’s senior leadership team was in the middle of its regular Friday meeting. These weekly gatherings with Satya Nadella and his fourteen direct reports have a routine. They begin at eight a.m. in the company’s boardroom, on the floor where several of us have our offices. We cycle through various discussions on product and business initiatives before adjourning midafternoon. But May 12, 2017, was not a typical meeting.
Before we’d finished the second topic, Satya interrupted the discussion. “I’m getting copied on a bunch of emails about a widespread cyberattack on our customers. What’s going on?”
We quickly learned that Microsoft’s security engineers had been scrambling in response to customer calls and were trying to find the cause and assess the impact of the rapidly spreading attack. By lunchtime, it was clear that this was no ordinary hack. Engineers in the Microsoft Threat Intelligence Center, which we call MSTIC (pronounced “mystic”), quickly matched the malware to code that a group called Zinc had experimented with two months earlier. MSTIC gives each nation-state hacking group a code name based on an element from the periodic table. In this instance, the FBI had connected Zinc to the North Korean government. It was the same group that had overwhelmed Sony Pictures’ computer network in 2014.3
Their latest attack was unusually sophisticated from a technical perspective, with new malware code added to the original Zinc software that allowed the infection to worm its way automatically from computer to computer. Once replicated, the code encrypted and locked a computer’s hard disk, then displayed a ransomware message demanding three hundred dollars for an electronic key to recover the data. Without the key, the user’s data would remain frozen and inaccessible—forever.
The cyberattack started in the United Kingdom and Spain. Within hours it spread around the world, ultimately impacting three hundred thousand computers in more than 150 countries.4 Before it ran its course, the world would remember it by the name WannaCry, a malicious string of software code that not only made IT administrators want to cry but served as a disturbing wake-up call for the world.
The New York Times soon reported that the most sophisticated piece of the WannaCry code was developed by the US National Security Agency to exploit a vulnerability in Windows.5 The NSA had likely created the code to infiltrate its adversaries’ computers. The software was apparently stolen and offered on the black market through the Shadow Brokers, an anonymous group that posts toxic code online to wreak havoc. The Shadow Brokers had made the NSA’s sophisticated weapon available to anyone who knew where to find it. While this group has not been linked definitively to a specific individual or organization, experts in the threat intelligence community suspect that it is a front for a nation-state bent on disruption.6 This time, Zinc had added a potent ransomware payload to the NSA code, creating a virulent cyberweapon that was ripping through the internet.
As one of our security leaders put it, “The NSA developed a rocket and the North Koreans turned it into a missile, the difference being the thing at the tip.” Essentially the United States had developed a sophisticated cyberweapon, lost control of it, and North Korea had used it to launch an attack against the entire world.
A few months earlier, this plot had seemed implausible. Now it was the news of the day. But we didn’t have time to dwell on the irony of the situation. We had to scramble to help customers identify what systems were affected, stop the malware from spreading, and resuscitate computers that were disabled. By midday, our security team concluded that newer Windows machines were protected against the attack by a patch we had released two months earlier, but older machines running Windows XP were not.
This was not a small problem. There were still more than a hundred million computers in the world running Windows XP. For years we had tried to persuade customers to upgrade their machines and install a newer version of Windows. As we pointed out, Windows XP had been released in 2001, six years before the first Apple iPhone and the same autumn the first iPod shipped. While we could release patches for specific vulnerabilities, there was no way technology this old could keep pace with current security threats. Expecting sixteen-year-old software to defend against today’s military-grade attacks was like digging trenches to defend against missiles.
Despite our urgings, discounts, and free upgrades, some customers stuck with the old operating system. As we sought to move the installed base forward, we eventually decided that we would continue to create security patches for the older systems, but unlike with newer versions, we required that customers purchase them as part of a subscription service. Our goal was to create a financial incentive to move to a more secure version of Windows.
While the approach made sense in most circumstances, the May 12 attack was different. The “wormable” nature of WannaCry enabled the malware to move at unusual speed. We had to stem the damage. This led to a vigorous debate within Microsoft. Should we make the Windows XP patch for this attack available to everyone worldwide, outside the security subscription, including for computers that ran pirated copies of our software? Satya cut off the debate, deciding that we would release the patch to everyone free of charge. When some inside Microsoft objected, saying the move would undercut the effort to move people off XP, Satya quelled the dissent with an email saying, “Now is not the time to debate this. This is too widespread.”
While we made technical progress to contain and suppress the WannaCry infection, the political ramifications were heating up. By dinnertime in Seattle that Friday, it was Saturday morning in Beijing. Officials in the Chinese government reached out to our team in Beijing and emailed Terry Myerson, who led the Windows division, asking about the status of patches for Windows XP.
We weren’t surprised by the inquiry, given that China had more Windows XP machines than any other nation. China had been mostly spared from the initial attack because most office computers had been turned off for the weekend when the malware was unleashed on Friday evening local time. But their dated Windows XP machines remained vulnerable.
But XP patches weren’t the only thing on China’s mind. The official who emailed Terry asked about a point made that same day in the New York Times. The story claimed that the US government had been searching for and stockpiling software vulnerabilities, keeping them secret rather than notifying tech companies so they could be patched.7 He wanted our reaction. We said it was a question the Chinese government should raise with the US government, not with us. But not surprisingly, it was not a practice we or other tech companies were enthusiastic about. To the contrary, we had long urged governments to disclose vulnerabilities they had identified so they could be fixed for the common good.
We knew that this was just the first of many questions to come from people around the world. By Saturday morning, we realized that we needed to do more than support our affected customers. We needed to address the emerging geopolitical issues more publicly. Satya and I spent some time on the phone that morning discussing our next step. We decided to go public to address the coming wave of questions about WannaCry.
We stepped back from the specifics of the attack and addressed the broader cybersecurity landscape. We said explicitly that Microsoft and other companies in the tech sector had the first responsibility to protect our customers against cyberattacks. This was a given. But we thought it was important to underscore how cybersecurity had become a shared responsibility with customers. We needed to make it easier for customers to update and upgrade their computers, but one of the episode’s lessons was that our advances would do little good if they weren’t used.
We also made a third point, one that we thought the WannaCry attack had made clear. As more governments developed advanced offensive capabilities, they needed to control their cyberweapons. As we said, “An equivalent scenario with conventional weapons would be the US military having some of its Tomahawk missiles stolen.”8 The fact that cyberweapons could be stored—or stolen—on a thumb drive made their safeguarding both more difficult and more important.
Some officials at the White House and NSA were less than enthused by our reference to a Tomahawk missile. Some of their counterparts in the British government agreed with them, arguing, “It’s more accurate to compare WannaCry to a rifle than a Tomahawk missile.” But has a rifle attack ever hit targets in 150 countries simultaneously? In some ways, all of this was beside the point. If anything, it reflected the degree to which cybersecurity officials were unaccustomed to talking directly about these issues in the press or defending their practices to the public.
What surprised us most was the absence of any broad discussion about why North Korea had launched the attack in the first place. To this day we don’t have a definitive answer, but one theory is especially intriguing.
Just a month before the attack, the North Koreans experienced an embarrassing failure of a high-profile missile launch. As David Sanger and two other reporters at the New York Times wrote at the time, the US government had been seeking to slow the missile program, “including through electronic-warfare techniques.”9
As they noted in the Times, it was impossible to know what caused any specific missile failure, but Defense Secretary James Mattis had been cryptic in commenting on this one, saying only that “The president and his military team are aware of North Korea’s most recent unsuccessful missile launch. The president has no further comment.” This from a president who is seldom known for having no further comment.
What if the North Koreans had responded to a cyberattack on their missile by retaliating with a cyberattack of their own? WannaCry was indiscriminate in its effect, but what if that was the point? What if it was their way of saying, “You can hit me in one particular place, but I can hit back everywhere”?
Several aspects of WannaCry are consistent with this theory. First, it was launched against targets in Europe just about the time that everyone in East Asia was turning off their computers and going home for the weekend. If the North Koreans wanted to maximize the impact on Western Europe and North America while reducing the impact in China, they chose the ideal moment. The infection spread as the sun moved west, as employees in businesses and governments on other continents continued their workday. But the Chinese had the weekend to respond before returning to work on Monday.
In addition, the North Koreans had added what security experts call “kill switches,” which made it possible to stop the malware from spreading further. One kill switch directed the malware to look for a specific web address that did not yet exist. As long as it wasn’t there, WannaCry would continue to spread. But once someone registered and activated the web address, which was a simple technical step, the code would stop replicating.
Late on May 12 a security researcher in the United Kingdom analyzed the code and found this kill switch. For the modest price of $10.69, he registered and activated the URL, stopping WannaCry from spreading further.10 Some speculated that this reflected a lack of sophistication by WannaCry’s creators. But what if the opposite was true? What if WannaCry’s designers wanted to ensure that they could turn off the malware before Monday morning, so they could avoid causing too much disruption in China or North Korea itself?
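For readers who want to see the mechanism concretely, here is a minimal, purely illustrative sketch in Python of the kind of check described above. The domain name and function names are hypothetical placeholders, not WannaCry’s actual code, and the sketch contains only the harmless kill-switch check, none of the worm’s spreading or encryption logic.

```python
import socket

# Hypothetical placeholder, not the real kill-switch domain. The ".invalid"
# top-level domain is reserved and will never resolve.
KILL_SWITCH_DOMAIN = "example-killswitch-domain.invalid"


def kill_switch_tripped() -> bool:
    """Return True if the kill-switch domain resolves, i.e., someone has registered it."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True   # Domain now exists: the malware was designed to stop here.
    except socket.gaierror:
        return False  # Domain does not resolve: the worm kept spreading.


if __name__ == "__main__":
    if kill_switch_tripped():
        print("Kill switch active: stop replicating.")
    else:
        print("Kill switch not registered: the worm would keep spreading.")
```

Because every newly infected machine performed a check along these lines, a single inexpensive domain registration was enough to flip the answer worldwide and halt further spreading.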
Finally, there was something fishy about the ransomware message and approach used by WannaCry. As our security experts noted, North Korea had used ransomware before, but their tradecraft had been different. They had selected high-value targets such as banks and demanded large sums of money in a discreet way. Indiscriminate demands to pay three hundred dollars to unlock a machine represented a departure, to say the least. What if the whole ransomware approach was just a cover to throw the press and public off the real message, which was intended to be more discreetly understood by US and allied officials?
If North Korea was responding with its own cyberattack to a US cyberattack, then the whole episode was even more significant than people have appreciated. It was the closest thing the world has experienced to a global “hot” cyberwar. It would mean that this was an attack in which the impact on civilians represented more than collateral damage. It was the intended effect.
Regardless of the answer, the question reflects a serious issue. Cyberweapons have advanced enormously over the past decade, redefining what is possible in modern warfare. But they are used in ways that obscure what is truly happening. The public doesn’t yet fully appreciate either the risks or the urgent public policy issues that need to be addressed. And until these issues are brought out of the shadows, the danger will continue to grow.
If anyone doubted the threat of cyberwarfare, the cyber bomb that hit just six weeks later should have made them believers.
On June 27, 2017, a cyberattack pummeled Ukraine, using the same software code stolen from the NSA, disabling an estimated 10 percent of all computers in that country.11 The attack was later attributed to Russia by the United States, the United Kingdom, and five other governments.12 Security experts dubbed it NotPetya because it resembled, but was distinct from, an earlier ransomware strain called Petya, itself named after the armed satellite that formed part of the Soviet Union’s fictional GoldenEye weapon in the 1995 James Bond movie of the same name.13 That weapon could knock out electronic communications across a thirty-mile radius.
In the nonfiction world of 2017, the NotPetya attack had a far broader reach. It rippled across Ukraine, crippling businesses, transit systems, and banks, then continued spreading outside the country’s borders, infiltrating multinationals including FedEx, Merck, and Maersk. The Danish shipping giant saw its entire worldwide computer network grind to a halt.14
When Microsoft security engineers arrived at Maersk’s offices in London to help revive their computers, what they encountered was eerie by twenty-first-century standards. Mark Empson, a tall, fast-talking, fast-moving Microsoft field engineer, was one of the first on the scene. “You always hear noise and ambient sound effects from computers, printers, and scanners,” he said. “Here there was nothing at all. It was absolutely silent.”
As Empson walked Maersk’s hallways, he said it felt as if the office had died. “You go through the standard troubleshooting logic of, ‘Okay, well, what’s the situation? What servers are out? What have we got?’ The answer was that everything’s out.” He continued to quiz people. “‘Okay, telephones?’ ‘Out.’ ‘What about the internet?’ ‘No, that’s out as well.’”
It was a stark reminder of the degree to which our economy and lives rely on information technology. In a world where everything is connected, anything can be disrupted. That’s part of what makes it so serious to contemplate a cyberattack targeting today’s electric grid.
If a city loses its electricity, telephones, gas lines, water system, and internet, it can be thrown back into something that can feel like the Stone Age. If it’s winter, people may freeze. If it’s summer, people may overheat. Those who rely on medical devices to survive could lose their lives. And in a future with autonomous vehicles, imagine a cyberattack that penetrates automobile control systems as cars barrel down the highway.
All these are sobering reminders of the new world we live in. Following NotPetya, Maersk took the unusual step of reassuring the public that its ships remained under the control of their captains. The need for such reassurance illustrated the extent of the world’s reliance on computers and the potential for cyberattack-based disruption.
The ubiquity of software in the infrastructure of our societies also helps explain why more governments are investing in offensive cyberweapon capabilities. Compared to early teenage hackers and their successors who operate in international criminal enterprises, governments operate on a completely different scale and level of sophistication. The United States was an early investor and remains a leader in the space. But others are quick studies, including Russia, China, North Korea, and Iran, all of which have joined the cyberweapons race.
The WannaCry and NotPetya attacks represented a massive escalation of the world’s growing cyberweapon capabilities. Yet just a few months later, it became apparent that the world’s governments were not necessarily heeding this wake-up call.
In conversations with diplomats around the world, we heard the same skepticism: “No one has been killed. These aren’t even attacks on people. They’re just machines attacking machines.”
As we also found, perhaps more than with any prior advance in weapons technology, views about cybersecurity fall along generational lines. Younger generations are digital natives. Their entire lives seem to be powered by technology, and an attack on their device is an attack on their home. It’s personal. But older generations don’t always see the impact of a cyberattack the same way.
This leads to an even more sobering question. Can we wake up the world before a digital 9/11? Or will governments continue to hit the snooze button?
After NotPetya, we wanted to show the world what had happened on the ground in Ukraine, a country that had suffered multiple cyberattacks. Although the country had been devastated by NotPetya, media coverage outside Ukraine was sparse. We decided to send a team of Microsoft employees to interview people in Kiev and find out what really happened.15 They captured firsthand accounts of people who had lost their businesses, their customers, and their jobs. They talked to Ukrainians who weren’t able to buy food because credit cards and ATMs stopped working. They spoke to mothers who couldn’t find their children when a communications network failed. It obviously wasn’t a 9/11, but it showed where the world was heading.
The Ukrainians were open about their experiences, but often the victims of cyberattacks remain silent because they’re embarrassed about their network security. This is a recipe that will perpetuate rather than solve the problem. Microsoft has faced this same dilemma. In 2017, our lawyers raised the prospect of prosecuting two criminals in the United Kingdom who had successfully hacked into part of our Xbox network. Although this issue raised a few awkward questions, I gave the green light to go public. We could never hope to offer public leadership if we didn’t exercise more courage ourselves.
But we not only needed to say more, we needed to do more.
Diplomats across Europe agreed. “We know there’s more to be done, but we don’t yet know what we should do,” one European ambassador said to me at the United Nations in Geneva. “And even if we did, it’s not easy to get governments to agree on anything at the moment. This is an issue where tech companies will need to lead. That’s the best way to get governments to follow.”
Soon this opportunity emerged. A group of security engineers concluded that if a handful of companies acted together and simultaneously, we could dismantle an important part of the malware capability of Zinc, the North Korean group responsible for WannaCry. We could distribute patches for the vulnerabilities Zinc was seeking to exploit, clean up impacted PCs, and turn off the accounts across our collective services that the attackers were using. The impact would not be permanent, but it would inflict a blow to the group’s capabilities.
At Microsoft, Facebook, and elsewhere, we talked at length about whether and how to move forward. This move would make us a bigger target. I talked it through with Satya and in November we shared with the Microsoft board of directors our plan to go forward. We concluded that we were on strong ground, both legally and otherwise, and if we acted with other companies, it was a step worth taking.
We also concluded that we needed to inform the FBI, NSA, and other officials in the United States and other countries. We would not ask their permission, but simply inform them of our plans. We wanted to make sure that there wasn’t an intelligence operation underway to address the North Korean threat based on the same user accounts we were disabling.
A few days later, I was in Washington, DC, and spent part of the day at the White House. I met in the basement offices of the West Wing with Tom Bossert, the president’s Homeland Security adviser, and Rob Joyce, the White House cybersecurity coordinator. I told them of our plans, which by then had been set for the following week.
They shared that, with President Trump’s strong support, they were close to formally attributing the WannaCry attack to North Korea. It was a key step in starting to hold governments accountable publicly for cyberattacks. Bossert had concluded that it was important for the US government to formally voice its opposition to cyberattacks that it regarded as “disproportionate and indiscriminate.” And in this case, the White House was actively working with other countries so that, for the first time, they could all stand together to point the finger publicly at North Korea.
Bossert first asked us to postpone. “We won’t be ready to make our announcement until a week later, and it would be better if we all went at the same time.” I said we couldn’t delay our operation because it had to be timed with the release of certain patches that the public expected on December 12. It was well known that we released patches on the second Tuesday of every month, on what we call Patch Tuesday.
I offered an alternative. “Let’s explore postponing our communications about our operation, and maybe we can pursue that together.”
The conversation surfaced an important but ironic aspect of the governmental response to WannaCry. As Bossert explained to me and would later say at a press briefing, the US government had few options left for responding to the incident, given all the sanctions that were already in place. “President Trump has used just about every lever you can use, short of starving the people of North Korea to death, to change their behavior,” he said publicly.16
While the administration would later conclude that it could expand its potential responses to cyberattacks, the tech sector was in a position to take some steps that the government could not. Tech companies could readily dismantle certain key parts of North Korea’s malware capability. So it would make a stronger statement to nation-state actors if we could couple the two announcements.
We set out on two coordinated paths, one to execute the operation and the other to announce it publicly. The security teams at Microsoft, Facebook, and another tech company that didn’t want to be named publicly worked side by side to disrupt Zinc’s cyber capabilities on the morning of December 12, Patch Tuesday. The operation proceeded without a hitch.
But the effort to announce the operation was more complicated. Security professionals in almost any field are typically reluctant to talk publicly about what they do. In part, this is because their culture is about shielding rather than sharing information. And there’s always some risk that action will encourage retaliatory attacks. But we had to overcome this reluctance if we were to take effective action against nation-state cyberattacks.
We also faced the added complexity of the tech sector’s complicated relationship with the Trump White House. We’d spent recent months in renewed immigration battles, among other issues. This made some people reluctant to acknowledge publicly that they were doing anything with the administration. But I felt that we needed to partner where we could even if we needed to stand apart where we should. And cybersecurity was a common cause in which we not only could partner but needed to work together if we were to make real progress.
The White House said it would make its announcement on December 19. We quickly told Facebook and the other company that we would be willing to make our action against Zinc public if we all wanted to move forward together. But on the morning before, we were still standing alone, waiting for the other two to decide. I determined that, if necessary, we would go it alone. I believed that the only way we’d effectively deter countries from engaging in these attacks was if we showed that there was a growing ability to respond. Somebody had to be the first to step forward. It might as well be us.
That evening, the good news arrived that Facebook would go public with us and talk about our collective action. And even better news came the next morning when Bossert spoke at a White House press briefing. He explained that the United States was joined by five other nations—Australia, Canada, Japan, New Zealand, and the United Kingdom—in its public attribution. It was the first time countries had come together multilaterally to hold another nation publicly accountable for a cyberattack. He then announced that Microsoft and Facebook had taken concrete steps the previous week to dismantle part of the group’s cyberattack capability.
Acting together, governments and tech companies had accomplished more than any of us could have separately. It was hardly a panacea for the world’s cybersecurity threats. It was barely a victory. But it was a new beginning.