The trends in the previous four chapters are not new—not the technical realities, not the political and economic trends, nothing. What’s changing is how computers are being used in society: the magnitude of their decisions, the autonomy of their actions, and their interactions with the physical world. This increases the threat along several dimensions.
Information security is traditionally described as a triad consisting of confidentiality, integrity, and availability. You’ll see it called the “CIA triad,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal a copy of it (a confidentiality attack), modify it (an integrity attack), or delete it (an availability attack).
Until now, threats have largely been about confidentiality. These attacks can be expensive, as when the North Koreans hacked Sony in 2014. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the breach of the Ashley Madison adultery site in 2015. They can be damaging, as when the Russians hacked the Democratic National Committee in 2016, or when unnamed hackers stole 150 million personal records from Equifax in 2017. They can even be a threat to national security, as in the case of the Office of Personnel Management data breach of 2015. These are all confidentiality breaches.
Once you give computers the ability to affect the world, though, the integrity and availability threats matter more. Information manipulation is an increasing threat as systems become more capable and autonomous. Denial of service is an increasing threat as systems become more essential. Hacking is an increasing threat as systems have implications for life and property. My car has an Internet connection. And while I am worried that someone will hack into the car and eavesdrop on my conversations through the Bluetooth connection (a confidentiality threat), I am much more worried that they will disable the brakes (an availability threat) or modify the parameters of the automatic lane-centering and following-distance systems (an integrity threat). The confidentiality threat affects my privacy; the availability and integrity threats can kill me.
It’s the same with databases. I am concerned about the privacy of my medical records, but I am even more concerned that someone could change my blood type or list of allergies (an integrity threat) or shut down lifesaving equipment (an availability threat). One way of thinking about this is that confidentiality threats are about privacy, but integrity and availability threats are really about safety.
Larger systems are vulnerable as well. In 2007, the Idaho National Laboratory demonstrated a cyberattack against an industrial turbine, causing it to spin out of control and eventually self-destruct. In 2010, Stuxnet basically did the same thing to Iranian nuclear centrifuges. In 2015, someone hacked into an unnamed steel mill in Germany and disrupted control systems such that a blast furnace could not be properly shut down, resulting in massive damage. And in 2016, the Department of Justice indicted an Iranian hacker who had gained access to the Bowman Dam in Rye, New York. According to the charges, he had the ability to remotely operate the dam’s sluice gates. He didn’t do anything with this access, but he could have.
These are the industrial control systems known as SCADA (supervisory control and data acquisition). Our dams, power plants, oil refineries, chemical plants, and everything else are on the Internet—and vulnerable. And because all of them affect the world directly and physically, the risks increase dramatically as they come under computer control.
These systems will fail, and sometimes they’ll fail badly. They’ll fail by accident, and they’ll fail under attack. Sociologist Charles Perrow studies complexity and accidents, and presciently wrote in 1984:
Accidents and, thus, potential catastrophes are inevitable in complex, tightly coupled systems with lethal possibilities. We should try harder to reduce failures—and that will help a great deal—but for some systems it will not be enough. . . . We must live and die with their risks, shut them down, or radically redesign them.
In 2015, an 18-year-old outfitted a drone with a handgun for a science project and posted a YouTube video of that handgun being fired remotely.
That’s just one way someone could use the Internet+ to commit murder. Someone could also take control of a victim’s car at speed, hack a hospital drug pump to send the victim a fatal dose of something, or compromise electrical systems during a heat wave. These are not theoretical concerns. They’ve all been demonstrated by security researchers.
Cars are vulnerable. So are airplanes, commercial ships, electronic road signs, and tornado sirens. Nuclear weapons systems are almost certainly vulnerable to cyberattacks, as are the electronic systems that warn people of them. Satellites, too.
For society to work, we need to trust the computer processes that affect our lives. Attacks against the integrity of data undermine this trust. There are many examples. In 2016, Russian government hackers broke into the World Anti-Doping Agency and altered data from athletes’ drug tests. In 2017, hackers—possibly hired by the government of the United Arab Emirates—hacked into a news agency in Qatar and planted inflammatory quotes falsely attributed to the country’s emir, praising Iran and Hamas, and precipitating a diplomatic crisis between Qatar and its neighbors.
There is evidence that the Russians targeted voter databases in 21 US states in the run-up to the 2016 election, and breached at least a few of them. The effects were minimal, but a more extensive integrity or availability attack would be devastating.
This is how it was phrased in the US director of national intelligence’s 2015 Worldwide Threat Assessment:
Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decision-making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.
In separate testimonies before multiple House and Senate committees in 2015, then–director of national intelligence James Clapper and then–NSA director Mike Rogers spoke about these sorts of threats. Both considered them far more serious than the confidentiality threat, and believed that the US is vulnerable.
The 2016 Worldwide Threat Assessment describes the threat this way:
Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decisionmaking, reduce trust in systems, or cause adverse physical effects. . . . Russian cyber actors, who post disinformation on commercial websites, might seek to alter online media as a means to influence public discourse and create confusion. Chinese military doctrine outlines the use of cyber deception operations to conceal intentions, modify stored data, transmit false data, manipulate the flow of information, or influence public sentiments—all to induce errors and miscalculation in decisionmaking.
We’re also worried about criminals. Between 2014 and 2016, the US Treasury Department ran a series of exercises to help banks plan for data manipulation attacks related to transactions and trades, and then established a program to help banks restore customer accounts after a widespread attack. Someone inserting fake data into the financial system could wreak havoc; no one would know which transactions were real, and sorting it out manually could easily take weeks.
This all makes security critical in a way it hasn’t been before. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life, even though they might involve the same computer chips, the same operating system, the same software, the same vulnerability, and the same attack software.
At their core, computers run software algorithms. In Chapter 1, I talked about bugs and vulnerabilities, and about how complexity increases them, but there’s a new aspect that makes the problem even worse.
Machine learning is a particular class of software algorithm. It’s basically a way of instructing a computer to learn by feeding it an enormous amount of data and telling it when it’s doing better or worse. The machine-learning algorithm modifies itself to do better more often.
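To make that concrete, here is a deliberately tiny sketch of such a feedback loop, written in Python with made-up data; it is an illustration, not any particular production system. The program is shown labeled examples, is told when it gets one wrong, and adjusts its own parameters in response. Real systems do the same thing with millions of examples and parameters.

```python
# A toy learning loop: show the program labeled examples, tell it when it is
# wrong, and let it adjust its own parameters. Illustrative only.
import random

def train_classifier(examples, epochs=200, learning_rate=0.1):
    """Learn a weight and bias that separate two classes of one-dimensional points."""
    weight, bias = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for x, label in examples:              # label is +1 or -1
            prediction = 1 if weight * x + bias > 0 else -1
            if prediction != label:            # the "doing worse" signal
                weight += learning_rate * label * x   # nudge the model toward
                bias += learning_rate * label         # the right answer
    return weight, bias

# Made-up training data: points below 5 are one class, points above 5 the other.
data = [(x, 1 if x > 5 else -1) for x in range(11) if x != 5]
weight, bias = train_classifier(data)
print(weight, bias)   # a decision rule no programmer wrote by hand
```

The point is that no programmer writes the final decision rule; the program derives it from the data it is fed.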
Machine-learning algorithms are popping up everywhere because they do things faster and better than humans, especially when large amounts of data are involved. They give us our search results, determine what’s on our social network news feeds, score our creditworthiness, and determine which government services we’re eligible for. They already know what we’ve watched and read, and they use that information to recommend books and movies we might like. They categorize photographs and translate text from one language to another. They play Go as well as a master; read X-rays and diagnose cancers; and inform bail, sentencing, and parole decisions. They analyze speech to assess suicide risk and analyze faces to predict homosexuality. They’re better than we are at predicting the quality of fine Bordeaux wine, hiring blue-collar employees, and deciding whether to punt in football. Machine learning is used to detect spam and phishing e-mails, and also to make phishing e-mails more individual and believable, and therefore more effective.
Because these algorithms essentially program themselves, it can be impossible for humans to understand what they do. For example, Deep Patient is a machine-learning system that has surprising success at predicting schizophrenia, diabetes, and some cancers—in many cases performing better than expert humans. But although the system works, no one knows how, even after analyzing the machine-learning algorithm and its results.
On the whole, we like this. We prefer the more accurate machine-learning diagnostic system over the human technician, even though it can’t explain itself. For this reason, machine-learning systems are becoming more pervasive in many areas of society.
For the same reasons, we’re allowing algorithms to become more autonomous. Autonomy is the ability of systems to act independently, without human supervision or control. Autonomous systems will soon be everywhere. A 2014 book, Autonomous Technologies, has chapters on autonomous vehicles in farming, autonomous landscaping applications, and autonomous environmental monitors. Cars now have autonomous features such as staying within lane markers, following a fixed distance behind another car, and braking without human intervention to avert a collision. Agents—software programs that do things on your behalf, like buying a stock if the price drops below a certain point—are already common.
We’re also allowing algorithms to have physical agency. This is what I was thinking about when I described the Internet+ as an Internet that can affect the world in a direct physical manner. When you look around, computers with physical agency are everywhere, from embedded medical devices to cars to nuclear power plants.
Some algorithms that might not seem autonomous actually are. While it might be technically true that human judges make bail decisions, if they all do what the algorithm recommends because they believe the algorithm is less biased, then the algorithm is as good as autonomous. Similarly, if a doctor never contradicts an algorithm that makes decisions about cancer surgery—possibly out of fear of a malpractice suit—or if an army officer never contradicts an algorithm that makes decisions about where to target a drone strike, then those algorithms are as good as autonomous. Inserting a human into the loop doesn’t count unless that human actually makes the call.
The risks in all of these cases are considerable.
Algorithms can be hacked. Algorithms are executed using software, and—as I discussed in Chapter 1—software can be hacked. All the examples in the previous chapters are the result of hacking software.
Algorithms require accurate inputs. Algorithms need data—often data about the real world—in order to function properly. We need to ensure that the data is available when those algorithms need it, and that the data is accurate. Sometimes the data is naturally biased. And one of the ways of attacking algorithms is to manipulate their input data. Basically, if we let computers think for us and the underlying input data is corrupt, they’ll do the thinking badly and we might not ever know it.
In what’s called adversarial machine learning, the attacker tries to figure out how to feed the system specific data that causes it to fail in a specific manner. One research project focused on image-classification algorithms; the researchers were able to create images that are completely unrecognizable to humans yet classified with high confidence by machine-learning networks. A related research project was able to fool the visual sensors on cars with fake road signs in ways that wouldn’t fool human eyes and brains. Yet another project tricked an algorithm into classifying rifles as helicopters, without knowing anything about the algorithm’s design. (It’s now a standard assignment in university computer science classes: fool the image classifier.)
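The mechanics are easier to see in a toy example. The sketch below is not any of the research projects just described; it assumes a trivially simple linear classifier with invented weights and inputs, and shows the core trick: nudge every input feature slightly in whichever direction most hurts the model. The published attacks apply the same idea to deep image classifiers, pixel by pixel, producing changes too small for humans to notice.

```python
# A toy adversarial example against a hypothetical linear classifier.
# The weights and inputs are invented for illustration.

def predict(weights, x):
    """Positive score means class A; negative score means class B."""
    return sum(w * xi for w, xi in zip(weights, x))

def perturb(weights, x, epsilon=0.1):
    """Shift each feature by epsilon in the direction that lowers the score,
    i.e., follow the sign of the score's gradient with respect to that feature."""
    return [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.7 if i % 2 == 0 else -0.7 for i in range(10)]   # "trained" model
x = [0.5 if i % 2 == 0 else 0.4 for i in range(10)]          # a class-A input

x_adv = perturb(weights, x)
print(round(predict(weights, x), 2))      # 0.35: classified as A
print(round(predict(weights, x_adv), 2))  # -0.35: classified as B
```

No single feature moves by more than 0.1, yet the classification flips; that disproportion between the size of the change and the size of the effect is what makes these attacks so hard to spot.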
People deliberately fed the Microsoft chatbot Tay data that made it racist and misogynistic, and hackers can similarly train all sorts of machine-learning algorithms to do unexpected things. Spammers could figure out how to fool anti-spam machine-learning algorithms in the same way. As machine-learning algorithms become more prevalent and more powerful, we should expect more of these kinds of attacks.
There are also new risks in algorithms’ speed. Computers make decisions and do things much faster than people. They can make stock trades in milliseconds, or shut power off for millions of homes at the same time. Algorithms can be replicated repeatedly in different computers, with each instance of an algorithm making millions of decisions per second. On the one hand, this is great because algorithms can scale in ways people can’t—or at least can’t easily, cheaply, and consistently. But speed can also make it harder to put meaningful checks on an algorithm’s behavior.
Often, the only thing that slows algorithms down is interaction with people. When algorithms interact with each other at computer speeds, the combined results can quickly spiral out of control. What makes an autonomous system more dangerous is that it can do serious damage before a human intervenes.
In 2017, Dow Jones accidentally published a story about Google buying Apple. The story was obviously false, and any human reading it would have immediately realized that, but automated stock-trading bots were fooled—and stock prices were affected for two minutes until the story was retracted.
That was just a minor problem. In 2010, autonomous high-speed financial trading systems unexpectedly caused a “flash crash.” Within minutes, a trillion dollars of stock market value was wiped out by unintended machine interactions, and the incident ended up bankrupting the company that caused the problem. And in 2013, hackers broke into the Associated Press’s Twitter account and falsely reported an attack on the White House. This sent the stock markets down 1% within seconds.
We should also expect autonomous machine-learning systems to be used by attackers: to invent new attack techniques, to mine personal data for purposes of fraud, to create more believable phishing e-mails. They will only get more sophisticated and capable in the coming years.
At the DefCon conference in 2016, the US Defense Advanced Research Projects Agency (DARPA) sponsored a new kind of hacking contest. “Capture the Flag” is a popular hacking sport: organizers create a network filled with bugs and vulnerabilities, and teams defend their own part of the network while attacking other teams’ parts. The Cyber Grand Challenge was similar, except teams submitted programs that tried to do the same automatically. The results were impressive. One program found a previously undetected vulnerability in the network, patched itself against the bug, and then proceeded to exploit it to attack other teams. In a later contest that had both human and computer teams, some computer teams outperformed some human teams.
These algorithms will only get more sophisticated and more capable. Attackers will use software to analyze defenses, develop new attack techniques, and then launch those attacks. Most security experts expect offensive autonomous attack software to become common in the near future. And then it’s just a matter of the technology improving. Expect the computer attackers to get better at a much faster rate than the human attackers; in another five years, autonomous programs might routinely beat all human teams.
As Mike Rogers, the commander of US Cyber Command and the director of the NSA, said in 2016: “Artificial intelligence and machine learning . . . is foundational to the future of cybersecurity. . . . We have got to work our way through how we’re going to deal with this. It is not the if, it’s only the when to me.”
Robots offer the most evocative example of software autonomy combined with physical agency. Researchers have already exploited vulnerabilities in robots to remotely take control of them, and have found vulnerabilities in teleoperated surgical robots and industrial robots.
Autonomous military systems deserve special mention. The US Department of Defense defines an autonomous weapon as one that selects a target and fires without intervention from a human operator. All weapons systems are lethal, and they are all prone to accidents. Adding autonomy increases the risk of accidental death significantly. As weapons become computerized—well before they’re actual robot soldiers—they, too, will be vulnerable to hacking. Weapons can be disabled or otherwise caused to malfunction. If they are autonomous, they might be hacked to turn on each other or their human allies in large numbers. Weapons that can’t be recalled or turned off—and also operate at computer speeds—could cause all sorts of lethal problems for friend and foe alike.
All of this comes together in artificial intelligence. Over the past few years, we’ve read some dire predictions about the dangers of AI. Technologists Bill Gates, Elon Musk, and Stephen Hawking, and philosopher Nick Bostrom, have all warned of a future where artificial intelligence—either as intelligent robots or as something less personified—becomes so powerful that it takes over the world and enslaves, exterminates, or ignores humanity. The risks might be remote, they argue, but they’re so serious that it would be foolish to ignore them.
I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine-learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent. So, while I am worried about intelligent and even driverless cars, most of the risks are already prevalent in Internet-connected drivered cars. And while I am worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.
Also, as roboticist Rodney Brooks pointed out, “Long before we see such machines arising there will be the somewhat less intelligent and belligerent machines. Before that there will be the really grumpy machines. Before that the quite annoying machines. And before them the arrogant unpleasant machines.” I think we’ll see any new security risks coming long before they get here.
There’s another class of attacks that we have addressed only peripherally, and that’s supply-chain attacks. These are attacks that target the production, distribution, and maintenance of computers, software, networking equipment, and so on—everything that makes up the Internet+, which means everything.
For example, there is widespread suspicion that networking products made by the Chinese company Huawei contain government-controlled backdoors, and that computer security products from Kaspersky Lab are compromised by the Russian government. In 2018, US intelligence officials warned against buying smartphones from the Chinese companies Huawei and ZTE. Back in 1997, the Israeli company Check Point was dogged by rumors that the Israeli government added backdoors into its products. In the US, the NSA secretly installed eavesdropping equipment in AT&T facilities and collected information on cell phone calls from mobile providers.
All of these hacks target the underlying products and services we use on the Internet, and the trust we have in them. They demonstrate the vulnerability of our thoroughly international supply chain for technological products.
These risks were never considered in the course of the Internet’s evolution, and they’re largely an accidental outcome of its unexpected growth and success. Our hardware is made in Asia, where production costs are low. Our programmers come from all over the world, and more and more programming is done in countries like India and the Philippines, where labor costs less than in the US. The result is a supply-chain mess. A product’s computer chips might be manufactured in one country and assembled in another, run software written in a third, and be integrated into a final system in a fourth before it is quality tested in a fifth and sold to a customer in a sixth. At any of those steps, the security of the final system can be subverted by someone in the supply chain. All of those countries have their own governments with their own incentives, and any one of them can coerce its own citizens into doing its bidding. Adding a backdoor to a computer chip during the fabrication process is straightforward, and resistant to most detection techniques.
One of the ways governments try to defend against some of these attacks is to demand to see the source code for the software they buy. China demands to see source code. So does the US. Kaspersky offered to let any government see its source code after it was accused of having backdoors inserted by the Russian government. Of course, this cuts both ways: countries can use offered source code to find vulnerabilities to exploit. In 2017, HP Enterprise faced criticism because it had given Russia the source code to its ArcSight line of network security products.
Governments aren’t just compromising products and services in their own countries during the design and production process. They’re interdicting the distribution process as well, either individually or in bulk. According to NSA documents from Edward Snowden, the NSA was looking to put its own backdoor in Huawei’s equipment. We know from the Snowden documents that NSA employees would routinely intercept Cisco networking equipment being shipped to foreign customers and install eavesdropping equipment. That was done without Cisco’s knowledge—and the company was livid when it found out—but I’m sure there are other American companies that are more cooperative. Backdoors have been discovered in Juniper firewalls and D-Link routers—and there’s no way to tell who placed them there.
Hackers have introduced fake apps into the Google Play store. They look and act like real apps—and have similar enough names to fool people—but they collect your personal information for malicious purposes. One report said that fake apps were downloaded 4.2 million times by unsuspecting people in 2017. This included a fake WhatsApp app. Users got lucky here; it was just designed to steal advertising revenue, not to eavesdrop on people’s conversations.
Here are more examples from 2017. Hackers linked to China compromised the legitimate download site for a popular Windows tool called CCleaner, resulting in millions of users unsuspectingly downloading a malware-infected version of the software. Unknown hackers corrupted the legitimate software update mechanism for a piece of Ukrainian accounting software to spread NotPetya malware throughout the country. Another group used fake antivirus updates to spread malware. Researchers demonstrated how to hack an iPhone by infecting a third-party replacement screen. And there are enough similar attacks that some people are warning not to buy any used IoT devices from sites like eBay.
Larger systems are vulnerable to these attacks, too. In 2012, China funded, and Chinese companies built, the new headquarters for the African Union in Addis Ababa, Ethiopia—including the building’s telecommunications systems. In 2018, the African Union discovered that China was using that infrastructure to spy on the organization’s computers. I am reminded of the US embassy that Russian contractors built in Moscow during the Cold War; it was riddled with listening devices.
Supply-chain vulnerabilities are an enormous security issue, and one we are largely ignoring. Commerce is so global that it just isn’t feasible for any country to keep its entire supply chain within its borders. Pretty much every US technology company makes its hardware in countries like Malaysia, Indonesia, China, and Taiwan. And while the US government occasionally touches on this issue by blocking the occasional merger or acquisition, or by banning the occasional hardware or software product, those are minor interventions in what is a much larger problem.
Our critical infrastructure’s dependence on the Internet is, well, becoming critical. In a 2012 speech, then–secretary of defense Leon Panetta warned:
An aggressor nation or extremist group could use these kinds of cyber tools to gain control of critical switches. They could derail passenger trains, or even more dangerous, derail trains loaded with lethal chemicals. They could contaminate the water supply in major cities, or shut down the power grid across large parts of the country.
This is from the 2017 Worldwide Threat Assessment:
Cyber threats also pose an increasing risk to public health, safety, and prosperity as cyber technologies are integrated with critical infrastructure in key sectors. These threats are amplified by our ongoing delegation of decisionmaking, sensing, and authentication roles to potentially vulnerable automated systems.
This is from the 2018 version:
The potential for surprise in the cyber realm will increase in the next year and beyond as billions more digital devices are connected—with relatively little built-in security—and both nation states and malign actors become more emboldened and better equipped in the use of increasingly widespread cyber tool-kits. The risk is growing that some adversaries will conduct cyber attacks—such as data deletion or localized and temporary disruptions of critical infrastructure—against the United States in a crisis short of war.
To be sure, some of this is hyperbole. But much of it is not.
In 2015, Lloyd’s of London developed a hypothetical scenario of a large-scale cyberattack on the US power grid. Its attack scenario was realistic, not any more sophisticated than what Russia did to Ukraine in December 2015 and June 2017, combined with the Idaho National Lab demonstration attack against power generators. Lloyd’s researchers envisioned a blackout that affected 95 million people across 15 states and lasted anywhere from 24 hours to several weeks, costing between $250 billion and $1 trillion—depending on the details of the scenario.
The admittedly clickbait title of this book refers to the still-science-fictional scenario of a world so interconnected, with computers and networks so deeply embedded in our most important technical infrastructures, that someone could potentially destroy civilization with a few mouse clicks. We’re nowhere near that future, and I’m not convinced we’ll ever get there. But the risks are becoming increasingly catastrophic.
There’s a general principle at work. Advances in technology allow attacks to scale, and better technology means that fewer attackers can do more damage. Someone with a gun can do more damage than someone with a sword, and that same person armed with a machine gun can do more damage still. Someone armed with plastic explosives can do more damage than someone with a stick of dynamite, and someone with a dirty nuke can do more damage still. That gun-carrying drone will become cheaper and easier to make; maybe someday it will be possible to easily make one on a 3D printer; there’s already a kludgy demo on YouTube.
We’ve already seen this scaling on the Internet. Cybercriminals can steal more money from more bank accounts faster than criminals on foot. Digital pirates can copy more movies, faster, than they ever could when copying meant duplicating VHS tapes. The governments of the world have learned that the Internet allows them to eavesdrop more efficiently than the old telephone networks ever did. The Internet allows attacks to scale to a degree impossible without computers and networks.
Remember in Chapter 1 where I talked about distance not mattering, class breaks, and the ability to encapsulate skill into software? These trends become even more dangerous as our computer systems become more critical to our infrastructure. Risks include someone crashing all the cars—to be fair, it’s more likely crashing all the cars of a particular make and model year that share the same software—or shutting down all the power plants. We’re worried about someone robbing all the banks at once. We’re worried about someone committing mass murder by seizing control of all the insulin pumps of the same manufacturer. These catastrophic risks have simply never been possible before the interconnection, automation, and autonomy afforded by the Internet.
As we move into the world of the Internet+, where computers permeate our lives at every level, class breaks will become increasingly perilous. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. The US has always seen itself as a risk-taking society, and we prefer to act first and clean up afterwards. But if the risks are too great, can we continue in this vein?
This is the risk that keeps me up at night. It’s not a “Cyber Pearl Harbor,” where one nation launches a surprise attack against another. It’s a criminal attack that escalates out of control.
Additionally, there’s an asymmetry between different nations. Liberal democracies are more vulnerable than totalitarian countries, partly because we rely on the Internet+ more and for more critical things, and partly because we don’t engage in heavy-handed centralized control. At a 2016 press conference, President Obama admitted as much: “Our economy is more digitalized, it is more vulnerable, partly because . . . we have a more open society, and engage in less control and censorship over what happens over the Internet.”
This asymmetry makes deterrence more difficult. It makes preventing escalation more difficult. And it puts us in a more dangerous position with respect to other countries in the world.
An interesting thing happens when increasingly powerful attackers interact with society. As technology makes each individual attacker more powerful, the number of them we can tolerate decreases. Think of it basically as a numbers game. Thanks to the way humans behave as a species and as a society, every society is going to have a certain percentage of bad actors—which means a certain crime rate. At the same time, there’s a particular crime rate that society is willing to tolerate. As the effectiveness of each criminal increases, the total number of criminals a society can tolerate decreases.
As a thought experiment, assume the average burglar can reasonably rob a house a week, and a city of 100,000 houses might be willing to live with a 1% burglary rate. That means the city can tolerate 20 burglars. But if technology suddenly increases each burglar’s efficiency so that he can rob five houses a week, the city can only tolerate four burglars. It has to lock up the other 16 in order to maintain that same 1% burglary rate.
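For readers who want the arithmetic spelled out, the numbers work if you read both the burglary rate and the burglars’ pace as annual figures. Here is the calculation, as a quick sketch:

```python
# The thought experiment's arithmetic, assuming both the 1% burglary rate and
# the burglars' pace are measured per year.
houses = 100_000
tolerated_rate = 0.01                               # 1% of houses per year
tolerated_burglaries = houses * tolerated_rate      # 1,000 burglaries per year

for houses_per_week in (1, 5):
    per_burglar = houses_per_week * 52              # roughly 50 or 260 per year
    print(houses_per_week, tolerated_burglaries / per_burglar)
    # 1 house/week  -> about 19 tolerable burglars (call it 20)
    # 5 houses/week -> about 4 tolerable burglars
```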
Society actually does this. People don’t calculate the equation explicitly, but it’s no less real. If the crime rate gets too high, we start complaining that there aren’t enough police. If the crime rate gets too low, we start complaining that we’re spending too much money on police. In the past, with inefficient criminals, we were willing to live with a given percentage of criminals in our society. As technology makes each individual criminal more efficient, the percentage we can tolerate decreases.
This is the real risk of terrorism in the future. Because terrorists can potentially do so much more damage with modern technology, we have to ensure that there are proportionally fewer of them. This is why there has been so much talk about terrorists with weapons of mass destruction. The technologies that we most feared after 9/11 were nuclear, chemical, and biological. Later, radiological weapons were added to those three. Cyberweapons have been invoked in the same breath as the others, mostly because of the high uncertainty as to how bad they can be. Electromagnetic pulse weapons are specifically designed to disable electronic systems. I’m sure that future technological developments will result in still-unimagined technologies of mass destruction, but these are the ones we fear today.
Internet terrorism is still a few years away. Even the 2017 Worldwide Threat Assessment limits concerns about terrorism and the Internet to coordination and control:
Terrorists—to include the Islamic State of Iraq and ash-Sham (ISIS)—will also continue to use the Internet to organize, recruit, spread propaganda, raise funds, collect intelligence, inspire action by followers, and coordinate operations. Hizballah and HAMAS will continue to build on their cyber accomplishments inside and outside the Middle East. ISIS will continue to seek opportunities to target and release sensitive information about US citizens, similar to their operations in 2015 disclosing information about US military personnel, in an effort to inspire attacks.
My guess is that we’re not going to see Internet terrorism until the Internet can kill people in a graphic manner. Shutting off the electricity of a million people doesn’t terrorize in the same way. It happens regularly by accident, and even if some people die, it will only be a footnote to the event. Driving a truck into a crowd of people guarantees top billing on the nightly news, even if it’s low-tech. But Internet attackers are getting more aggressive, ingenious, and tenacious every year, and someday Internet terrorism involving planes or cars will be possible.
What makes these attacks so different from conventional ones is the damage they can cause. The potential consequences are so great that we believe we cannot afford to have even one serious incident. To return to that thought experiment, we fear that technological advances will render each attacker so powerful that we cannot tolerate even one successful attack.
In November 2001, then–vice president Dick Cheney articulated the “One Percent Doctrine,” described by journalist Ron Suskind thus: “If there was even a 1 percent chance of terrorists getting a weapon of mass destruction—and there has been a small probability of such an occurrence for some time—the United States must now act as if it were a certainty.” In essence, I have just supplied a rationale for Cheney’s doctrine.
Some of these new risks have nothing to do with attacks by hostile nations or terrorists. Rather, they arise from the very nature of the Internet+, which encompasses and connects almost everything, making it all vulnerable at the same time. Like large utilities and financial systems, the Internet+ is a system that’s too big to fail. Or, at least, its security is too important to be allowed to fail: the attackers are growing too powerful, and the consequences of a successful attack would be too catastrophic to contemplate.
These failures could come from smaller attacks, or even accidents, that cascade badly. I have long thought that the 2003 blackout that covered most of the northeastern US and southeastern Canada was made worse by malware. It wasn’t a deliberate attack by any stretch of the imagination, but the blackout happened on a day when a Windows worm—Blaster—was spreading virulently and causing computers to crash. The official report on the blackout specifically said that none of the computers directly controlling the power grid were running Windows, but the computers that were monitoring those computers were, and the report said that some of them were offline. I blame the worm for hiding the small initial power outage long enough for it to have catastrophic effects, although the worm’s authors had no idea this would happen and couldn’t have caused it deliberately if they had tried.
Similarly, the authors of the Mirai botnet didn’t realize that their attack against Dyn would result in so many popular websites being knocked offline. I don’t think they even knew what companies used Dyn’s DNS services, and that they were a single point of failure without any backup. In fact, three college students wrote the botnet to gain an advantage in the video game Minecraft.
Damage to computers controlling physical systems radiates outwards. A 2012 attack against the Saudi Arabian national oil company only affected the company’s IT network. But it erased all data on over 30,000 hard drives, crippling the company for weeks and affecting oil production for months—which in turn affected global oil availability. The shipping giant Maersk was hit so badly by NotPetya that it had to halt operations at 76 port terminals around the world.
Devices not normally associated with critical infrastructure can also cause catastrophes. I’ve already mentioned class breaks against systems like automobiles, especially driverless cars, and medical devices. To this we can add mass murder by swarms of weaponized drones, the disruption of critical systems by ever-more-massive botnets, using biological printers to produce lethal pathogens, malicious AIs enslaving humanity, malicious code received from space aliens hacking the planet, and all the things we haven’t thought of yet.
Okay; let’s pause to catch our collective breath. We tend to panic unduly about the future. Think of all the doomsday scenarios throughout history that never happened. During the Cold War, many were sure that humans would kill themselves in a thermonuclear war. They put less money into long-term savings. Some people decided not to have children, because what was the point? In hindsight, there are a lot of reasons why neither the US nor the USSR started World War III, but none of them were obvious at the time. Partly, it turned out our world’s leaders weren’t as fanatical as we thought they were. Over the years, there were plenty of technical glitches in both the US’s and the USSR’s missile detection systems—instances where the equipment clearly showed that the country was under nuclear attack—and in none of those instances did the country retaliate. The Cuban Missile Crisis is probably the closest we came politically to a nuclear war, although the 1983 false alarm is a close second. Yet it didn’t happen.
Our collective fears after the 9/11 terrorist attacks were similar. That singular event, with its 3,000-person death toll and $10 billion cost in property and infrastructure damage, was way out of proportion to every other terrorist attack we have experienced in the history of our planet (although much less damaging than the annual death toll from automobiles, heart disease, or malaria). But instead of regarding it as a singular event unlikely to be repeated anytime soon, people decided it was the new normal. The truth is that the typical terrorist attack looks more like the Boston Marathon bombings: 3 people dead, 264 people injured, and not a lot of ancillary damage. Bathtubs, home appliances, and deer combined kill many more Americans per year on average than do terrorists. But while we seem to be coming out of our collective PTSD reaction to 9/11, we’re still much more fearful of terrorist threats than makes sense, given the actual risk. In general, people are very bad at assessing risk.
For years, I have been writing about what I call “movie-plot threats”: security threats that, while they make great movie plots, are so unlikely that we shouldn’t waste time worrying about them. I coined the term in 2005 to poke fun at all the scary, overly specific terrorism stories the media was peddling: terrorists with scuba gear, terrorists dispersing anthrax from crop dusters, terrorists contaminating the milk supply. My point was twofold. One: we are a species of storytellers, and detailed stories evoke a fear in us that general discussions of terrorism don’t. And two: it makes no sense to defend against specific plots; instead, we should focus more on general security measures that work against any plot. With respect to terrorism, that’s intelligence, investigation, and emergency response. Smart security measures will be different for other threats.
It’s easy to discount the more extreme scenarios in this chapter as movie-plot threats. Individually, some of them probably are. But collectively, they are classes of threat that have precursors in the past and will become more common in the future. Some of them are happening now, with varying frequency. And while I certainly have some of the details wrong, the broad outlines are correct. As with fighting terrorism, our goal isn’t to play whack-a-mole and stop a few particularly salient threats, but to design systems from the start that are less likely to be successfully attacked.