4

Everyone Favors Insecurity

Flaws in the technology are not the only reason we have such an insecure Internet. Another important reason, maybe even the main reason, is that the Internet’s most powerful architects—governments and corporations—have manipulated the network to make it serve their own interests.

Everyone wants you to have security, except from them. Google is willing to give you security, as long as it can surveil you and use the information it collects to sell ads. Facebook offers you a similar deal: a secure social network, as long as it can monitor everything you do for marketing purposes. The FBI wants you to have security, as long as it can break that security if it wants to. The NSA is just the same, as are its equivalents in the UK, France, Germany, China, Israel, and elsewhere.

The reasons differ—and the parties involved will never admit this plainly—but basically, insecurity is in the interests of both corporations and governments. They both benefit from loopholes in security and work to maintain them. Corporations want insecurity for reasons of profit. Governments want it for reasons of law enforcement, social control, international espionage, and cyberattack. The dynamics of all of this are complicated, so we’ll take it a step at a time.

SURVEILLANCE CAPITALISM CONTINUES TO DRIVE THE INTERNET

Corporations want your data. The websites you visit are trying to figure out who you are and what you want, and they’re selling that information. The apps on your smartphone are collecting and selling your data. The social networking sites you frequent are either selling your data, or selling access to you based on your data. Harvard Business School professor Shoshana Zuboff calls this “surveillance capitalism,” and it’s the business model of the Internet. Companies build systems that spy on people in exchange for services.

This surveillance is easy because computers do it naturally. Data is a by-product of computer processes. Everything we do that involves a computer creates a transaction record. This includes browsing the Internet, using—and even just carrying—a cell phone, making a purchase online or with a credit card, walking past a computerized sensor, or saying something in the same room as Amazon’s Alexa. Data is also a by-product of any socializing we do using computers. Phone calls, e-mails, text messages, and Facebook chatter all create transaction records. As I’ve previously written, we’re all leaving digital exhaust as we go through our lives.

Our data used to be thrown away because the value of it was so marginal and using it was so difficult. Those days are over. Today, data storage is so cheap that all of this data can be saved. This is the raw material that has become “big data.” It is fundamentally surveillance data, and it’s being collected and used by corporations, primarily to support the advertising model that underpins much of the Internet.

If you look at lists of the world’s most valuable companies over the past decade, you’ll find the ones that engage in surveillance capitalism: Alphabet (Google’s parent company), Facebook, Amazon, and Microsoft. Apple is the exception; it makes its money primarily by selling hardware rather than data, and that’s why its prices are higher than the competition’s.

The advertising model of the Internet is getting more personal. Companies are trying to figure out your emotions. They’re trying to determine what you’re paying attention to and how you react. They’re trying to learn what images you respond to, and exactly how to flatter you. They’re doing all of this so as to more precisely and effectively advertise to you, and sell things to you.

No one knows how many online data brokers and tracking companies operate in the US; I’ve read estimates from 2,500 to 4,000. These corporations know an amazing amount about us from the devices we use and carry. Our cell phones reveal where we are at all times: where we live, where we work, who we spend time with. They know when we wake up and when we go to sleep—because checking our phones is often the first and last thing we do in a day. And because everyone has a cell phone, they know who we sleep with.

Take a moment to consider who else knows where your smartphone is, and therefore where you are. That list would include any app you’ve given permission to track your location—and some that track your location by other means. There are obvious ones: Google Maps and Apple Maps. There are also less obvious ones. In 2013, researchers discovered that apps like Angry Birds, Pandora Internet Radio, and the Brightest Flashlight—yes, a flashlight app—also tracked their users’ locations.

Smartphones now contain many different sensors. Any Wi-Fi networks your phone connects to can pinpoint your location, even if your phone is just trying to associate with Wi-Fi networks as you walk around. Your phone’s Bluetooth can notify nearby computers that you’re around. The company Alphonso provides apps with the ability to use the phone’s microphone to collect data on what people are watching on television. Facebook has a patent on using accelerometer and gyroscope readings from multiple phones to detect when people are facing each other or walking together. And on and on and on.
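To make the Wi-Fi example concrete: a phone that merely hears nearby access points can be located by anyone with a map of where those access points are. The sketch below is illustrative only—the model parameters and the weighted-centroid method are simplifying assumptions, not any vendor’s actual geolocation system.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exponent=2.7):
    """Estimate distance (meters) from received signal strength using the
    log-distance path-loss model. tx_power_dbm is the expected RSSI at
    1 meter; both parameters vary by hardware and environment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def weighted_centroid(access_points):
    """Rough position fix: average the known coordinates of heard access
    points, weighting nearer (stronger-signal) ones more heavily.
    access_points: list of ((x, y), rssi_dbm) tuples."""
    weights = [1 / rssi_to_distance(rssi) for _, rssi in access_points]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(access_points, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(access_points, weights)) / total
    return (x, y)
```

A phone halfway between two equally loud access points resolves, unsurprisingly, to the midpoint—which is why dense urban Wi-Fi maps can localize a phone to within a building, no GPS required.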

There are other ways to determine your location. Did you use your credit card at a store? Did you use an ATM? Maybe you passed by one of the thousands of security cameras in a city. (And while the camera probably didn’t identify you, soon automatic face recognition will become common enough that it will.) Did an automatic license plate scanner register your car?

Surveillance companies know a lot about us. Google is probably the best example. Internet search is incredibly intimate. We never lie to our search engines. Our interests and curiosities, hopes and fears, desires and sexual proclivities, are all collected and saved by the companies that search the Internet in our name.

To be clear: when I say “Google knows” or “Facebook knows,” I am not implying that the companies are sentient or even conscious. Rather, I mean two very specific things. One: Google’s computers contain data that would allow a person who has access to it—either authorized or unauthorized—to learn the facts, if they chose to do so. Two: Google’s automatic algorithms can use this data to make inferences about us and perform automated tasks based on them.

In the future, our devices will be able to reconstruct a startlingly intimate model of who we are, what we think about, where we go, and what we do. Refrigerators will monitor our food consumption and, by extension, our health. Our cars will know when and how often we violate traffic laws, and might tell the police or our insurance companies. Fitness trackers will try to figure out our moods. Our beds will know how well we’ve slept. Already, all new Toyota cars track speed, steering, acceleration, and braking—even whether a driver has her hands on the wheel.

The twin enticements of surveillance capitalism are “free and convenient.” It has shaped the commercial Internet for over two decades. Soon it will drive much, much more. And it requires insecurity to operate at peak efficiency. As long as companies are free to gather as much data about us as they possibly can, they will not sufficiently secure our systems. As long as they buy, sell, trade, and store that data, it’s at risk of being stolen. And as long as they use it, we risk its being used against us.

CORPORATE CONTROL OF CUSTOMERS AND USERS IS NEXT

Computers don’t just allow us to be surveilled to a degree never before possible; they also allow us to be controlled. It’s a new business model: forcing us to pay for features individually, use only particular accessories, or subscribe to products and services that we previously purchased. This kind of control relies on Internet insecurity.

If you’re a farmer who just bought a tractor from John Deere, you might think that tractor is yours. That might be the way it used to be, but things are different today. Because tractors contain software—because they’re in essence just computers with an engine, wheels, and a tiller attached—John Deere has been able to move from an ownership model to a licensing model. In 2015, John Deere told the copyright office that farmers receive “an implied license for the life of the vehicle to operate the vehicle.” And that license comes with all sorts of rules and caveats. For one, farmers now have no right to repair or modify their tractors; instead, they have to use authorized diagnostic equipment, parts, and repair facilities that John Deere has monopoly control over.

Apple maintains strict control over which apps are available in its store. Before an app can be sold or given away to iPhone customers, it has to be approved by Apple. And the company has some strict rules about what it will and won’t allow. No porn, of course, and no games about child labor or human trafficking—but also no political apps. This latter rule meant that Apple censored apps that tracked US drone strikes and apps containing “content that ridicules public figures.” Such restrictions put Apple in a position to be able to implement government censorship demands. And it has done so: in 2017, Apple removed security apps from its China store.

Apple is an extreme example, but it’s not the only company that censors your Internet. Facebook regularly censors posts, images, and entire websites. YouTube censors videos. Google censors search results. Google has also banned from its Chrome Web Store an extension that randomly clicks on ads, because it interferes with Google’s advertising business model.

Normally, we wouldn’t have a problem with a company making decisions about which products it chooses to carry. If Walmart won’t sell music CDs with a parental warning advisory label, we are all free to buy those albums elsewhere. But many Internet companies can be very powerful, more so than predominantly brick-and-mortar stores, even chains as enormous as Walmart, because they benefit from what is called the network effect. That is, they become more useful as more people use them. One telephone is useless, and two are marginally useful, but an entire network of telephones is very useful. The same thing is true for fax machines, e-mail, the web, text messages, Snapchat, Facebook, Instagram, PayPal, and everything else. The more people use them, the more useful they are. And the more powerful the companies that control them become, the more control those companies can exert over you.
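The telephone example can be made numerical. A common back-of-the-envelope argument—Metcalfe’s law, used here as a rough proxy rather than a precise claim—values a network by the number of distinct pairwise connections it makes possible:

```python
def potential_connections(n):
    """Number of distinct pairwise links among n users: n * (n - 1) / 2.
    Metcalfe's law takes this as a rough proxy for a network's value."""
    return n * (n - 1) // 2

# One phone connects to no one; two phones form one link;
# a thousand phones form nearly half a million possible links.
```

The count grows roughly with the square of the user base, which is why each additional user makes the network disproportionately more valuable—and its owner disproportionately more powerful.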

Unless you know how to jailbreak your phone to remove its restrictions, sideload apps, and live with a warranty-free device that can’t receive updates without a lot of effort, the App Store is the only place you can go for iPhone apps. So if Apple decides not to carry an app, there is no way for ordinary customers to get it.

In most cases, control equals profits. Facebook controls how people get their news, taking power—and ad revenue—away from traditional newspapers and magazines. Amazon controls how people buy their stuff, taking power away from traditional retailers. Google controls how people find information, taking power away from all sorts of more traditional information systems. The battle over net neutrality is all about the telecommunications providers wanting to control your Internet experience.

In older writings, I have described the situation on the Internet as feudal. We give up control of our data and capabilities in exchange for services. I wrote:

Some of us have pledged our allegiance to Google: we have Gmail accounts, we use Google Calendar and Google Docs, and we have Android—probably Pixel—phones. Others have pledged allegiance to Apple: we have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether . . . for Facebook.

These companies are like feudal lords in that they protect us from outside threats, and also in that they have surprisingly complete control over what we’re allowed to see and do.

Companies are eyeing the Internet+ in the same way. Philips wants its controller to be the central hub for your light bulbs and other electronics. Amazon wants Alexa to be the central hub for your entire smart home. Both Apple and Google want their phones to be the singular device through which you control all your IoT devices. Everyone wants to be central, essential, and in control of your world.

And companies will give away services for free to get that access. Just as Google and Facebook give away services in exchange for the ability to spy on their users, companies will do the same thing with the IoT. Companies will offer free IoT stuff in exchange for the data they receive from monitoring the people using it. Companies owning fleets of autonomous cars might offer free rides in exchange for the ability to show ads to the passengers, mine their contacts, or route them past or make an intermediate stop at particular stores and restaurants.

Battles for control of customers and users are going to heat up in the coming years. And while the monopolistic positions of companies like Amazon, Google, Facebook, and Comcast allow them to exert significant control over their users, smaller, less obviously tech-based companies—like John Deere—are attempting to do the same.

This corporate power grab is all predicated on abusing the DMCA—the same law I discussed back in Chapter 2, the one that stymies the patching of software vulnerabilities. The DMCA was designed by the entertainment industry to protect copyright. It’s a pernicious law that has given corporations the ability to enforce their commercial preferences with the rule of law. Because software is subject to copyright, protecting it with DRM copy protection software invokes the DMCA. The law makes it a crime to analyze and remove the copy protection, and hence to analyze and modify the software. John Deere enforces its prohibitions against farmers maintaining their own tractors by copy-protecting the tractors’ embedded computers.

Keurig coffee makers are designed to use K-cup pods to make single servings of coffee. Because the machines use software to verify the codes printed on the K-cups, Keurig can enforce exclusivity, so only companies who pay Keurig can produce pods for its coffee machines. HP printers no longer allow you to use unauthorized ink cartridges. Tomorrow, the company might require you to use only authorized paper—or refuse to print copyrighted words you haven’t paid for. Similarly, tomorrow’s dishwasher could enforce which brands of detergent you use.
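Keurig’s actual verification scheme isn’t public, so the sketch below is a hypothetical illustration of how such a lockout could work: the manufacturer holds a secret key, authorized pod makers get authentication codes derived from it, and the machine refuses anything it can’t verify.

```python
import hmac
import hashlib

# Hypothetical key burned into the machine's firmware; only
# licensees who pay the manufacturer get codes derived from it.
SECRET = b"manufacturer-secret"

def sign_pod(pod_id: str) -> str:
    """What an authorized pod maker would print alongside the pod ID:
    a truncated HMAC-SHA256 tag over the ID."""
    return hmac.new(SECRET, pod_id.encode(), hashlib.sha256).hexdigest()[:16]

def machine_accepts(pod_id: str, code: str) -> bool:
    """The machine recomputes the tag and rejects unauthorized pods."""
    return hmac.compare_digest(sign_pod(pod_id), code)
```

The coffee is irrelevant to the check; what’s verified is whether the pod maker paid. And because the DMCA criminalizes circumventing such a check, reverse-engineering it to brew a competitor’s pod can itself be a legal violation.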

As the Internet+ turns everything into computers, all that software will be covered by the DMCA. This same legal trick is used to tie peripherals to products, to force consumers to only buy authorized compatible components, or only buy repair services from authorized dealers. This affects smartphones, thermostats, smart light bulbs, automobiles, and medical implants. And while some companies have overreached their DMCA claims and have lost in court, such power grabs are still a common tactic.

Often, user control goes hand in hand with surveillance. In order to ensure compliance with whatever restrictions they demand from their customers and users, companies often closely monitor what those customers and users are doing. Then they deny the customers access to that data. Customers are rebelling.

People are increasingly trying to hack their own medical devices. Hugo Campos is one of them. For years, he has had an implanted cardioverter defibrillator, which manages his heart condition but also continuously collects data about his heart. Think of it as something like a Fitbit with electroshock capabilities. But unlike a Fitbit, his implanted device is proprietary, and Campos has been unable to access this data. He has resorted—so far, unsuccessfully—to suing the manufacturer. None of the companies that make implantable devices—Medtronic, Boston Scientific, Abbott Labs, and Biotronik—will allow patients access to their own data, and there’s nothing anyone can do about it. The data is owned by the companies.

Similarly, people have been hacking their Toyota Priuses since 2004 to improve fuel efficiency, disable annoying warnings, get better diagnostic information out of the engine, modify engine performance, and access options available in European and Japanese versions of the car but not in the US version. These hacks may void the warranty, but the car manufacturers can’t stop them. There are hacks and cheat codes for many other car models, too.

It’s no different with automobile black-box data. Police and insurance companies use the data post-crash, but users don’t have access to it. (A California law allowing individuals to access their car’s data stalled because of opposition from car manufacturers.) And John Deere tractor owners have resorted to buying pirated firmware from Ukraine in order to repair their own tractors.

This isn’t a black-and-white issue. We don’t want people to have unfettered ability to hack their own consumer devices. For example, thermostats deliberately have wide control limits. Changing the software to hold the temperature more tightly can damage the heating system by forcing it to cycle on and off too frequently. Similarly, that pirated tractor software from Ukraine might remove—either accidentally or on purpose—a piece of the software that protects the transmission, causing it to fail more often. If John Deere is responsible for transmission repairs, that’s a problem.

Similarly, we don’t want people to hack their cars in ways that break emission control laws, or their medical devices in ways that evade legal restrictions surrounding the use of those devices. For example, some people are hacking their insulin pumps to create an artificial pancreas—a device that will measure their blood sugar levels and automatically deliver the proper doses of insulin on a continuous basis. Do we want to give them the ability to do that, or do we want to make sure that only regulated manufacturers produce and sell those devices? I’m not sure where the proper balance lies.

As the Internet+ permeates more of our lives, this kind of conflict will play out everywhere. People will want access to data from their fitness trackers, appliances, home sensors, and vehicles. They’ll want that data on their own terms, in formats they can use for their own purposes. They’ll want to be able to modify those devices to add functionality. Device manufacturers and governments will try to prevent such enhanced capability—sometimes for profit or anticompetitive reasons, sometimes for regulatory reasons, and sometimes just because vendors didn’t bother making the data or controls accessible.

All of this reduces security. In order for companies to control us in the ways they want, they will build systems that allow for remote control. More importantly, they will build systems that assume the customer is the attacker and needs to be contained. This is a design requirement that runs counter to good security, because it gives outside attackers an avenue to gain access. At the same time, the modifications customers make to wrest back control can introduce insecurities of their own.

GOVERNMENTS ALSO USE THE INTERNET FOR SURVEILLANCE AND CONTROL

Governments want to surveil and control us for their own purposes, and they use the same insecure systems that corporations have given us to do it.

In 2017, the University of Toronto’s research center Citizen Lab reported on the Mexican government’s surveillance of what it considered political threats. The Mexican government had purchased surveillance software—spyware—from the cyberweapons manufacturer NSO Group, and had used it to spy on journalists, dissidents, political opponents, international investigators, lawyers, anti-corruption groups, and people who supported a tax on soft drinks.

Many other countries use Internet spyware to surveil their residents. The products of FinFisher, another commercial spyware company, were found in 2015 to be used by Bosnia, Egypt, Indonesia, Jordan, Kazakhstan, Lebanon, Malaysia, Mongolia, Morocco, Nigeria, Oman, Paraguay, Saudi Arabia, Serbia, Slovenia, South Africa, Turkey, and Venezuela. This software was being deployed against dissidents, activists, journalists, and other individuals these governments wanted to arrest, intimidate, or just monitor.

Government surveillance for political and social control is normal on today’s Internet. The same technologies that gave us surveillance capitalism also enable governments to conduct their own surveillance. The degree to which this has been occurring has come to light only in the past few years, and it shows no signs of slowing down. In fact, the Internet+ will almost certainly bring with it more government surveillance—some of it for good, but a lot of it for ill.

Modern government surveillance piggybacks on existing corporate surveillance. It isn’t that the NSA woke up one morning and said: “Let’s spy on everyone.” It said: “Corporate America is spying on everyone. Let’s get ourselves a copy.” And it does—through bribery, coercion, threats, legal compulsion, and outright theft—collecting cell phone location data, Internet cookies, e-mails and text messages, log-in credentials, and so on. Other countries operate in a similar fashion.

Internet surveillance often involves the cooperation of telecommunications providers, who give the intelligence agencies copies of everything that goes through their switches. The NSA is a master of this, collecting the data that flows across US borders and internationally through its agreements with partner countries. We know that the NSA installs surveillance equipment at AT&T switches inside the US, and has collected cell phone metadata from Verizon and others. Similarly, Russia gets bulk access to data from ISPs inside its borders.

Most countries don’t have either the budget or the expertise to develop this caliber of surveillance and hacking tools. Instead, they buy surveillance and hacking tools from cyberweapons manufacturers. These are companies like FinFisher’s seller Gamma Group (Germany and the UK), Hacking Team (Italy), VASTech (South Africa), Cyberbit (Israel), and NSO Group (also Israel). They sell to countries like the ones I listed at the beginning of this section, allowing them to hack into computers, phones, and other devices. They even have a conference, called ISS World and nicknamed the “Wiretappers’ Ball,” and they explicitly market their products to repressive regimes for this purpose.

Internet surveillance has also been used for the purposes of foreign espionage for as long as the Internet has been around. The NSA might have led the way, but other countries weren’t far behind. Early espionage operations against the US included Moonlight Maze in 1999 (probably Russia), Titan Rain in the early 2000s (almost certainly China), and Buckshot Yankee in 2008 (no idea who was behind this one).

The Chinese have been conducting cyberespionage operations against the US government for decades. Over the years, China has stolen the blueprints and design documents for several weapons systems, including the F-35 fighter plane. In 2010, China hacked into Google to get at the Gmail accounts of Taiwanese activists. In 2015, we learned that China was accessing the e-mail accounts of top US government officials. Also in 2015, the Chinese hacked into the Office of Personnel Management (OPM) and stole detailed personnel files of, among others, every US citizen with a security clearance.

Over the past decade, antivirus companies have exposed sophisticated hacking and surveillance tools from Russia, China, the US, the US and Israel together, Spain, and several unidentified countries. In 2017, North Korea hacked the South Korean military, stealing classified wartime contingency plans.

This isn’t just political or military intelligence, but the widespread theft of intellectual property from corporations by other governments. China, for example, has stolen so much commercial intellectual property from the US that Chinese espionage was one of the key items of discussion between President Obama and Chinese president Xi Jinping in 2015, when the two countries reached an agreement to desist. (China does seem to have toned down its economic cyberespionage as a result.)

All of this is considered normal. Spying is a legitimate peacetime activity, and countries can do whatever they can get away with. Just as the NSA spied on German chancellor Angela Merkel’s smartphone, someone else spied on White House chief of staff John Kelly’s smartphone. Even though the OPM breach affected 21.5 million Americans, we couldn’t really condemn China, because we do the same thing. Indeed, Director of National Intelligence James Clapper said at the time: “You have to kind of salute the Chinese for what they did.”

The country whose activities we know the most about is the US. The NSA is in a class by itself for several reasons. One: its budget is significantly larger than that of any comparable agency on the planet. Two: most of the world’s large tech companies are located inside the US—or in one of its partner countries—giving it greater access to their data. Three: the physical location of the planet’s major Internet cables results in much of the world’s communications going through the US at some point. And four: the NSA has secret agreements with other countries for even greater access to the raw communications networks of the planet.

US law enforcement conducts surveillance as well, but it’s fundamentally different from what the NSA does. Law enforcement officers are governed by a different and more restrictive set of laws, and have to follow due-process laws concerning search and seizure. We can argue about whether those laws are well crafted, and how diligently the police follow them, but they do have important consequences. Law enforcement has to target its surveillance on individual suspects; the NSA does not. Law enforcement needs the evidence it collects to be admissible in court; the NSA does not. Law enforcement usually jumps in after a crime has occurred; the NSA conducts espionage on ongoing activities.

Some countries take surveillance to an extreme, using the Internet to spy on their entire population. China leads the way: the country’s social media platforms are all monitored by the government, and offending statements can be censored. (The government’s goal is not so much to limit speech as it is to limit the ability to create social movements, organize protests, and the like.)

Aside from surveillance, many countries use the Internet for censorship and control of their citizens. Authoritarian governments saw the Arab Spring and the “color revolutions” of the early 2000s as an existential threat, and believe that this kind of control is essential to the regime’s survival. Countries like Russia, China, and Iran directly prosecute people who publish certain material, force companies to do their censorship for them, or steer online discussions in innocuous directions. Here, too, China takes the lead. It has the most extensive censorship regime of any country. The Great Firewall of China is a comprehensive system designed to limit access to the global Internet from inside China. And in 2020, the Chinese government plans to enact a “social credit” system. Each citizen will be given a score based on all their surveilled activities, and that score will be used as a gateway to various rights and privileges. And China exports its expertise in social control to other totalitarian countries.

Not all censorship is nefarious. France and Germany censor Nazi speech. Lots of countries censor speech that is considered to be copyright violation. And pretty much everyone censors child pornography.

To accomplish all this espionage, surveillance, and control, nation-states are making use of the Internet’s insecurities, as I’ll talk about more in Chapter 9. This isn’t going away anytime soon, and will continue to be one of the driving forces behind nations’ Internet+ security policies.

CYBERWAR IS THE NEW NORMAL

Some say cyberwar is coming. Some say cyberwar is here. Some say cyberwar is everywhere. In truth, “cyberwar” is a term that everyone uses and no one agrees on; it has no settled definition. But whatever we’re calling it, countries are using the inherent insecurity of the Internet to attack each other. They’re prioritizing the ability to attack over the ability to defend, which helps perpetuate an insecure Internet for all of us.

Stuxnet, discovered in 2010, was a sophisticated weapon developed by the US and Israel to attack the Natanz nuclear enrichment facility in Iran. It specifically targeted a Siemens brand of programmable logic controllers that automate factory equipment like the centrifuges used to enrich uranium. It spread through Windows computers, looking for specific Siemens centrifuge controllers. When it found them, it repeatedly sped up and slowed down the centrifuges, causing them to tear themselves apart—while at the same time hiding what it was doing from the operators.

Militaries and national intelligence agencies all over the Internet are breaking into foreign computers, and in some cases causing both virtual and physical damage. International rules and norms about what’s allowed and what’s a just and proportional response remain mostly undefined. This environment favors attack over defense, just as Internet security technology makes attack easier than defense. And the dynamics are much different from those of conventional warfare.

Targets are not limited to military sites and systems, but extend to industrial sites for things like oil production, chemical processing, manufacturing, and power generation, all of which are now controlled via the Internet.

A cyberattack can be part of a larger operation. In 2007, Israel attacked a Syrian nuclear plant. This wasn’t a cyberattack; conventional warplanes bombed the place. But there was a cyber component. Before the planes took off, Israeli hackers conducted a cyberattack to disable radar and antiaircraft systems in Syria and neighboring countries. In 2008, Russia coordinated conventional and cyber operations in an attack against Georgia. The US conducted a series of cyber operations during the 1990–91 Gulf War. In 2016, President Obama acknowledged that the US is conducting cyber operations as part of its larger offensive against ISIS.

Sometimes attacks are exploratory or preparatory. In 2017, we learned about a group of Russian hackers who broke into at least 20 power company networks in the US and Europe, in some cases gaining the ability to disable the system. In 2016, the US indicted Iranian hackers who had done the same thing to a small dam in upstate New York. Experts surmise that these operations were reconnaissance for potential future action. This is something known as “preparing the battlefield,” and many countries appear to be doing it to each other.

The risks have increased as our world has become more computerized, more networked, and more standardized. During the Cold War, most military computers and communications systems were distinct from their civilian counterparts, but no more. Millions of Department of Defense computers run Windows, including the computers that control weapons systems. The same computers and networks you have in your home and office control the critical infrastructure of pretty much every country. This makes the Internet itself a potential target.

It’s not just the stronger powers attacking the weaker, as with Russia attacking the networks of Estonia in 2007, Georgia in 2008, and Ukraine repeatedly. A smaller nation-state can inflict disproportionate damage on its target in cyberspace for many of the reasons discussed in Chapter 1. For example, the Syrian Electronic Army attacked US news sites in 2013, and Iran attacked the Las Vegas Sands casino company in 2014.

Countries vary widely in their capabilities. On the high end are countries with fully developed military cyber commands and intelligence agencies that can create their own custom attack tools. These include the US, the UK, Russia, China, France, Germany, and Israel. They are well funded, very skilled, and not easily dissuaded. They are the elite few, although most of their cyber operations are not sophisticated, because security is generally so bad that they don’t have to be. One tier lower than the high end are countries that buy commercial tools and services from the cyberweapons manufacturers mentioned earlier. And even lower are the countries that simply use criminal hacking software they’ve downloaded off the Internet. Both of these tiers of countries can also hire cyber mercenaries. Increasing capabilities seems to require little more than making it a priority. If an isolated and heavily sanctioned country like North Korea can go from a nonentity in cyberspace to a significant threat in less than a decade, anyone can do it.

The risks of nation-state cyberattack are increasing, and governments are taking notice. Every year, the US director of national intelligence submits a Worldwide Threat Assessment document to the Senate and House select committees on intelligence. It’s a good guide to what we’re concerned about. The 2007 document didn’t mention cyber threats at all. Even in the 2009 report, “the growing cyber and organized crime threat” was discussed only at the end of the document, where it felt like an afterthought. By 2010, cyber threats were the first threat listed in the annual report; and since then, they’ve been painted in increasingly dire terms. From the 2017 report:

Our adversaries are becoming more adept at using cyberspace to threaten our interests and advance their own, and despite improving cyber defenses, nearly all information, communication networks, and systems will be at risk for years.

Cyber threats are already challenging public trust and confidence in global institutions, governance, and norms, while imposing costs on the US and global economies. Cyber threats also pose an increasing risk to public health, safety, and prosperity as cyber technologies are integrated with critical infrastructure in key sectors. These threats are amplified by our ongoing delegation of decisionmaking, sensing, and authentication roles to potentially vulnerable automated systems. This delegation increases the likely physical, economic, and psychological consequences of cyber attack and exploitation events when they do occur.

Similarly, the Munich Security Conference—the most important international security policy conference in the world—didn’t have a panel on cybersecurity until 2011. Now, cybersecurity has its own separate event.

We’re all within the blast radius. Even a well-targeted cyberweapon like Stuxnet damaged networks far away from the Iranian Natanz nuclear plant. In 2017, the global shipping giant Maersk had its operations brought to a halt by NotPetya, a Russian cyberweapon used against Ukraine. The company was a bystander caught in the cross fire of an international cyberattack.

So far, most cyberattacks haven’t happened in wartime. There was no war when the US and Israel attacked Iran with Stuxnet in 2010, or when Iran attacked the Saudi national oil company in 2012. There was no war when North Korea used WannaCry to lock up computer systems around the world in 2017, or in the years prior when the US conducted cyber operations against North Korea in an attempt to sabotage its nuclear program. In 2013, a senior Russian general published a paper articulating what became known as the Gerasimov Doctrine, calling for “the use of special-operations forces and internal opposition to create a permanently operating front,” including engagement in “long-distance, contactless actions against the enemy” via “informational actions, devices, and means.” That sounds a lot like the Russian hacking of the 2016 US election process, and other countries seem to share this view of conflict. In today’s world, the lines between war and peace are blurred, and covert tactics—such as the cyber operations discussed in this chapter—have become more important. This is why some people say that we’re already in a cyberwar.

There are cyberattacks that will be considered acts of war. And the US has stated that any response to such attacks won’t necessarily be constrained to cyberspace. Still, most offensive actions in cyberspace have been conducted in a gray zone between peace and war—a state that political scientist Lucas Kello calls “unpeace”—and no one is sure how to respond. The US responded to the North Korean attack against Sony with some minor sanctions. The US responded to Russian hacking of the 2016 elections by closing consulates and expelling diplomats. Most countries respond to attacks with strong words, if that.

There are several reasons for the limited response. The first is that there isn’t a clearly defined line between what is considered an act of war and what is not. International espionage is generally considered to be a valid peacetime activity; killing large numbers of people is generally considered to be an act of war. Everything else is in the middle.

The second reason is that, as I described in Chapter 3, attribution can be difficult. In particular, there is a continuum of government involvement in cyberattack. Cyber policy expert Jason Healey developed an entire spectrum, from state-encouraged to state-coordinated to state-executed attacks, with many other shades of involvement in between. So, even if you can attribute an attack to a geographical location, it can be hard to figure out whether and to what degree a government is responsible.

The final reason why responses to attacks tend to be muted is that it can be hard to tell the difference between cyberespionage and cyberattack until it is too late—because until the last second, when an unauthorized intruder either copies everything or fires off a destructive payload, they look exactly the same.

Military cyberattacks have largely been ineffective over the long term. Espionage is easy. And short-term harmful but fleeting effects, like a power blackout in Ukraine, are easy. But anything more seems to be hard. Although Stuxnet was successful, at best it slowed Iran down by a couple of years, and had minimal effect on any international negotiations. The US also used cyberattacks to thwart North Korea in its attempts to build an atomic weapon and delivery system. Here again, the operations had very little long-term effect. Cyberweapons were used in the recent armed conflict in Ukraine, as well as in Syria’s civil war; again, the effects were minimal.

A few more issues emphasize the importance and prevalence of attack over defense in modern cyberwarcraft. Cyberweapons are unique among weapons in that they are inherently unstable. That is, if you have a cyberweapon that uses a certain vulnerability to deliver its payload, I can disable that weapon by locating and patching the vulnerability. This means that a nation finding itself at a temporary advantage will have to weigh the risks of launching a preemptive attack against the risks of having its arsenal depleted by ongoing defensive research. This instability makes cyberweapons more attractive to use, and to use now, before they are independently discovered.
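
The “use it or lose it” instability described above can be caricatured with a toy expected-value calculation. This is purely illustrative: the payoff and the annual patch probability are invented numbers, not estimates drawn from any real arsenal or doctrine.

```python
# Toy expected-value model of cyberweapon instability.
# Every number here is invented purely to illustrate the incentive;
# none comes from real doctrine or data.

VALUE_IF_WEAPON_WORKS = 100.0   # arbitrary payoff units
P_PATCHED_PER_YEAR = 0.4        # assumed chance the underlying vulnerability
                                # is independently found and patched each year

def expected_value_if_held(years: int) -> float:
    """Expected payoff of waiting: the weapon is only still usable
    if the flaw went undiscovered every intervening year."""
    p_still_usable = (1 - P_PATCHED_PER_YEAR) ** years
    return p_still_usable * VALUE_IF_WEAPON_WORKS

for years in (0, 1, 3):
    print(f"hold {years} year(s): expected payoff {expected_value_if_held(years):.1f}")
```

Under these made-up numbers, waiting three years cuts the expected payoff from 100 to about 22. That decay is the arithmetic behind the pressure to launch early, before a defender’s patch quietly disarms the weapon.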

And cyberweapons can be stolen and put to use in a way that conventional weapons cannot. In 2009, China exfiltrated the blueprints and other data for the US F-35 fighter aircraft from Lockheed Martin and a number of subcontractors. While that intellectual-property theft undoubtedly saved the Chinese government some of the $50 billion the US spent on development, as well as many years of effort, the Chinese military still had to design and build the planes. By contrast, the attackers who stole cyberweapons from both the NSA and the CIA were able to use them with minimal additional time and cost. And when those hacking tools were leaked to the public, both foreign governments and criminals immediately deployed them for their own purposes.

Countries are also getting more brazen in their attacks. The continuing attacks against the US by Russia, China, and North Korea—and the occasional attacks by Iran, Syria, and others—demonstrate that other countries can attack us with impunity.

Honestly, the US has only itself to blame. We prioritized offense over defense. We were the ones who first used the Internet for both espionage and attack. Via the NSA, we undermined confidence in American technology companies. We pushed the envelope of what’s acceptable. And because we felt we had an advantage over other countries, we didn’t try to negotiate any treaties or establish any norms. At the same time, we developed the Internet as a commercial space where security was an afterthought, if that. Our actions were shortsighted, and now they are coming back to bite us.

The result is what foreign-policy scholars call a “security dilemma.” Attack is not only easier than defense; it’s cheaper than defense. So if a country wants to become more powerful in cyberspace, it’s smarter to invest in offense—which means using the Internet’s inherent insecurities. But if everyone does that, the world becomes less stable and the Internet becomes even less secure. This is the cyberwar arms race that nations find themselves in right now.

Western democracies are both the most vulnerable countries on the planet and the most unprepared for cyberattack. This is not to say that other countries aren’t also worried. Sir John Sawers, the former head of the UK’s MI6, said this in 2017: “I think both China and the United States—and probably Russia too—feels more vulnerable to being attacked than they feel the power of being able to attack themselves.”

As national-security reporter Fred Kaplan wrote about the US: “We have better cyber rocks to throw at other nations’ houses, but our house is glassier than theirs.” I’ll talk much more about this in Chapter 9.

The upshot is that countries have found themselves in this new state of perpetual unpeace, where the rules of engagement are still unwritten and everything is off-balance and unfamiliar. The major powers, all perceiving their own vulnerability, are naturally loath to lay down their cyber arms, all of which rely on vulnerabilities in the Internet. To preserve and enhance their offensive capabilities in this unfamiliar theater of war, they work diligently to perpetuate insecurity. In Chapters 9 and 10, I’ll talk more about how they do this, why their logic is exactly wrong, and what they need to do to reverse course.

CRIMINALS BENEFIT FROM INSECURITY

Of course criminals prefer an insecure Internet; it’s more profitable for them.

Willie Sutton famously robbed banks because “that’s where the money is.” Today, the money is online, and increasingly, criminals are, too. Criminals steal money from our bank accounts. They steal our credit card data and use it to commit fraud, or they steal our identity information and use that. They also lock up our data and then try to coerce us into paying for its return—that’s ransomware.

In early 2018, the Indiana hospital Hancock Health was the victim of a cyberattack. Criminals—we have no idea who—encrypted its computers and demanded $55,000 in bitcoin to unlock them. Medical staff had no access to computerized medical records. Even though they had backups, they feared that the time required to restore the data would put patients at risk. They paid up.

Ransomware is increasingly common and lucrative. Victims range from organizations, as in the preceding story, to individuals. Kaspersky Lab reported that attacks on business tripled, and the number of different ransomware variants increased 11-fold, during nine months in 2016. Symantec found that average ransom amounts jumped from $294 in 2015 to $679 in 2016 to over $1,077 in 2017. Carbon Black reported that total sales of ransomware software on the black market increased 25 times from 2016 to 2017, to $6.5 million. Ransomware now comes with detailed instructions on how to pay, and some of the criminals behind the ransomware even have telephone help lines to assist victims. (If you’re thinking that a help line is risky for the criminals, remember the international nature of this. The criminals don’t fear prosecution in their home countries.) All in all, it’s a billion-dollar business.

Cybercrime is a global big business, netting anywhere from $500 billion to $3 trillion annually, depending on whose analysis you trust. Additional losses due to intellectual-property theft are thought to cost another $225 billion to $600 billion per year.

Much of cybercrime involves impersonation: subverting the authentication systems discussed in Chapter 3. Walking into a bank pretending to be someone else is a dangerous way to make money, but doing the same thing on a bank’s website is much easier and less risky. Often, all the criminal needs is the victim’s username and password. It’s no different with credit cards: if a criminal has the victim’s card number and other information—name, address, whatever—he can use the card for whatever he wants. This is identity theft. It has many variants, all based on stolen credentials and impersonation.

CEO fraud, or “business e-mail compromise,” is a specific form of identity theft. A thief pretends to be a company’s CEO or other executive officer and sends an e-mail to accounts payable, telling them to send a check to the criminal. Or to send a copy of every employee’s W-2 tax form, as a precursor to filing a fake tax return. Or to divert the proceeds of a real estate sale. This ploy can be very effective if the criminal does his research; we are all used to treating e-mails from the boss as legitimate and important.

There’s more. A lot of cybercrime follows from this question: I’ve hacked into all of these computers; now what can I do with them? Turns out that the answer is: plenty. Criminals have harnessed large numbers of hacked computers into bot, or zombie, networks. Botnets can be used for all sorts of things: sending spam at high rates, solving CAPTCHAs, and mining bitcoin. Hackers use bots to commit click fraud: repeatedly clicking on ads on sites they control and collecting revenue from the third parties that place them, or clicking on ads placed by competitors and forcing them to pay. They use massive botnets to launch DDoS attacks against other victims.

If you control millions of bots, you can use them to overwhelm the Internet connections of individuals and even companies, and kick them off the Internet. These attacks can be hard to defend against, and it really is a contest of size: whether the defender’s data pipe is large enough to handle all the incoming traffic. Sometimes attackers extort money from companies simply by threatening such an attack.
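
The “contest of size” can be made concrete with a back-of-envelope calculation. The botnet size, per-device bandwidth, and uplink capacity below are illustrative assumptions, not figures from any actual attack:

```python
# Back-of-envelope: why defending against a DDoS is a contest of size.
# All figures are illustrative assumptions.

bots = 500_000               # devices in a hypothetical botnet
mbps_per_bot = 1.0           # modest upstream bandwidth per device

attack_gbps = bots * mbps_per_bot / 1_000   # aggregate flood, in Gbps
defender_gbps = 10                          # a sizable corporate uplink

print(f"attack traffic:  {attack_gbps:,.0f} Gbps")
print(f"defender's pipe: {defender_gbps} Gbps")
print(f"overload factor: {attack_gbps / defender_gbps:,.0f}x")
```

Even though each hacked device contributes only a trickle, half a million of them swamp a 10 Gbps pipe fifty times over, which is why the generic defenses amount to buying a bigger pipe or filtering the flood upstream.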

International criminal organizations exploit legal and jurisdictional loopholes around the globe. They sell attack tools and even offer crimeware-as-a-service, also known as CaaS. According to Interpol:

The CaaS model provides easy access to tools and services across the entire spectrum of criminality, from entry-level to top-tier players, including those with other motivations such as hacktivists or even terrorists. This allows even entry-level cybercriminals to carry out attacks of a scale disproportionate to their technical capability.

Individual criminals specialize in such things as credential stealing, payment fraud, and money laundering. They sell hacking tools and offer botnet services. There are even governments that engage in criminal activities, and governments that turn a blind eye to criminals in their countries who operate internationally. North Korea is particularly egregious. It employs hackers to raise money for government coffers, and in 2016 it stole $81 million from Bangladesh Bank.

Of course, profit isn’t the only criminal motivation. People commit crimes out of hate, fear, revenge, politics, and so on. It’s hard to find data on what percentage of total crime is not financial. We do know that people commit such crimes regularly. And, increasingly, they’re committing them on the Internet—cyberstalking, stealing and publishing personal information for political gain or out of personal spite, and otherwise causing harm.

Every day, there are more computers to hack and control, and more data to steal. We’ve already seen webcams, DVRs, and home routers hacked, made part of botnets, and used to launch DDoS attacks. We’ve seen home appliances like refrigerators used to send spam e-mails. Attackers have bricked IoT devices, rendering them permanently nonfunctional.

We haven’t yet experienced murder committed over the Internet, but the capability exists. Back in 2007, then–vice president Dick Cheney’s heart defibrillator was specially modified to make it harder for him to be assassinated. In 2017, a man sent a tweet designed to cause a seizure in an epileptic recipient. Also in 2017, WikiLeaks published information about the CIA’s work on hacking cars remotely.

Ransomware is also coming to the Internet of Things. Our embedded computers are no more resistant to ransomware than our laptops are, and criminals already understand that one obvious defense against computer ransomware—restoring the data from backup—won’t work when lives are at immediate risk. Hackers have demonstrated ransomware against smart thermostats. In 2017, an Austrian hotel had its electronic door locks hacked and held for ransom. Cars, medical devices, home appliances, and everything else hackers can get into are next. The potential for additional criminal revenue is enormous.

And so is the potential for serious harm. A bricked car displaying a demand for $200 worth of bitcoin is an expensive inconvenience; a similar demand at speed is life-threatening. It’s the same with medical devices. In 2017, the WannaCry ransomware shut down hospitals across the UK. In some cases, hospitals were so incapacitated that they had to delay surgeries, route incoming emergency patients elsewhere, and replace damaged medical equipment. Over the next few years, we’ll watch the attacks shift mostly to IoT devices and other embedded computers. We saw the harbinger of this trend in the Mirai botnet in 2016. It corralled a wide variety of IoT devices into the world’s largest botnet, and while it was not used to spread ransomware, it could easily have done so.