In the previous chapter, I said that governments regulate things that kill people. The Internet+ is about to fall into this category. When policy makers wake up to this, we will no longer have a choice between government regulation and no government regulation. Our choice will be between smart government regulation and stupid government regulation.
It’s the stupid that worries me. Nothing motivates a government like fear: both fear of attack and fear of looking weak. Remember the months after the 9/11 terrorist attacks? The PATRIOT Act was passed almost unanimously and with little debate, and a small-government Republican administration created and funded an entire new government agency that has since mushroomed to nearly a quarter-million employees to protect the “homeland.” Both of these actions were poorly thought out, and we’ll be living with the consequences for years or decades to come.
Whatever Congress does in the wake of an Internet+ security disaster might make headlines, but it probably won’t improve security. Congress could enact laws and policies that inadequately address the underlying threats, while actually exacerbating the problems.
One example of inadequate legislation is the Child Online Protection Act, passed in 1998. Its purpose was to protect minors from Internet porn. Not only were its provisions sweeping and unworkable; they would also have embedded a pervasive surveillance architecture into even the remotest corners of the Internet. Luckily, the courts prevented the law from taking effect. Another example is the DMCA, first discussed in Chapter 2. Not only does it not prevent digital piracy, it harms all of our security.
This chapter is also about what might happen in the near term, and outlines some of the bad policy ideas currently being debated. Any of them could become law quickly after a disaster.
In Chapter 9, I talked about the need to put security ahead of surveillance, and how governments often work against that principle. The NSA does it surreptitiously, by weakening encryption. The FBI wants to do it publicly, by forcing companies to insert backdoors into their encryption systems.
This isn’t a new demand. Pretty much continuously since the 1990s, US law enforcement agencies have claimed that encryption has become an insurmountable barrier to criminal investigation. In the 1990s, the alarm was about encrypted phone calls. In the 2000s, FBI representatives began referring to the perils of “going dark,” and turned their concern to encrypted messaging apps. In the 2010s, encrypted smartphones became the new peril.
The rhetoric is uniformly dire.
Here’s then–FBI director Louis Freeh, scaring the House Permanent Select Committee on Intelligence in 1997: “The widespread use of robust unbreakable encryption ultimately will devastate our ability to fight crime and prevent terrorism.”
Here’s then–FBI general counsel Valerie Caproni, scaring the House Judiciary Committee in 2011: “As the gap between authority and capability widens, the government is increasingly unable to collect valuable evidence in cases ranging from child exploitation and pornography to organized crime and drug trafficking to terrorism and espionage—evidence that a court has authorized the government to collect. This gap poses a growing threat to public safety.”
Here’s then–FBI director James Comey, scaring the Senate Judiciary Committee in 2015: “We may not be able to identify and stop terrorists who are using social media to recruit, plan, and execute an attack in our country. We may not be able to root out the child predators hiding in the shadows of the Internet, or find and arrest violent criminals who are targeting our neighborhoods. We may not be able to recover critical information from a device that belongs to a victim who cannot provide us with the password, especially when time is of the essence.”
And here’s Deputy Attorney General Rod Rosenstein, trying to scare me personally—I was in the audience at the Cambridge Cyber Summit—in 2017: “But the advent of ‘warrant-proof’ encryption is a serious problem. It threatens to destabilize the constitutional balance between privacy and security that has existed for over two centuries. Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection, even when officers obtain a court-authorized warrant. But that is the world that technology companies are creating.”
The Four Horsemen of the Internet Apocalypse—terrorists, drug dealers, pedophiles, and organized crime—always scare people. “Warrant-proof” is a particularly scary phrase, but it just means that a warrant won’t get the information. Papers burned in a fireplace are also “warrant-proof.”
The notion that the world has never seen a technology that is impervious to detection is complete nonsense. Before the Internet, many communications were permanently unavailable to the FBI. Every voice conversation irrevocably disappeared after the words were spoken. Nobody, regardless of legal authority, could go back in time and retrieve a conversation or track someone’s movements. Two people could go for a walk in a secluded area, look around and see no one, and enjoy a level of privacy that is now forever lost. Today, we’re living in the golden age of surveillance. As I said in Chapter 9, what the FBI needs is technical expertise, not backdoors.
Over the decades, the government has proposed a variety of backdoors. In the 1990s, the FBI suggested that software developers provide copies of every encryption key. The idea was called “key escrow,” akin to everyone having to give the police a copy of their house key. In the early 2000s, the FBI argued that software vendors should deliberately insert vulnerabilities into computer systems, to be exploited by law enforcement when necessary. A decade later, demands devolved to a more general “Figure out how to do this.” More recently, the FBI has suggested that tech companies use their update process to push fake updates to specific users and install backdoors into individual software packages on demand. Rosenstein has given this security-hostile proposal the friendly-sounding name of “responsible encryption.” By the time you read this, there might be a different preferred solution.
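To make the key-escrow idea concrete, here is a minimal sketch in Python using the PyNaCl library. It is illustrative only; the names are hypothetical, and real proposals such as the 1990s Clipper Chip were more elaborate. But it shows the essential structure: every message key is wrapped a second time for an escrow authority, whose private key becomes a single point of catastrophic failure.
```python
# A minimal sketch of key escrow, using the PyNaCl library.
# Hypothetical names; real escrow proposals were more complicated.
import nacl.utils
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox

recipient_key = PrivateKey.generate()  # the intended recipient's key pair
escrow_key = PrivateKey.generate()     # held by the government under key escrow

def escrowed_encrypt(plaintext: bytes) -> dict:
    """Encrypt a message, wrapping the per-message key for the recipient AND the escrow agent."""
    message_key = nacl.utils.random(SecretBox.KEY_SIZE)
    return {
        "ciphertext": SecretBox(message_key).encrypt(plaintext),
        "key_for_recipient": SealedBox(recipient_key.public_key).encrypt(message_key),
        "key_for_escrow": SealedBox(escrow_key.public_key).encrypt(message_key),
    }

def escrow_decrypt(bundle: dict) -> bytes:
    """What anyone holding the escrow private key -- warrant or no warrant -- can do."""
    message_key = SealedBox(escrow_key).decrypt(bundle["key_for_escrow"])
    return SecretBox(message_key).decrypt(bundle["ciphertext"])

bundle = escrowed_encrypt(b"meet at noon")
assert escrow_decrypt(bundle) == b"meet at noon"
```
The point of the sketch is the last function: the escrow key decrypts everything, which is why stealing it, or compelling its use, compromises every message ever protected this way.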
This isn’t just playing out in the US. UK policy makers are already implying that the 2016 Investigatory Powers Act gives them the power to force companies to sabotage their own encryption. In 2016, Croatia, France, Germany, Hungary, Italy, Latvia, and Poland called on the EU to demand that companies add backdoors. Separately, the EU is considering legislation that bans backdoors. Australia is also trying to mandate access. In Brazil, courts temporarily shut down WhatsApp three times in 2016 because local police couldn’t access encrypted messages. Egypt blocked the encrypted messaging app Signal. Many countries banned BlackBerry devices until the company allowed governments to eavesdrop on communications. And both Russia and China routinely block apps they can’t monitor.
No matter what it’s called or how it’s done, adding backdoors for law enforcement in computers and communications systems is a terrible idea. Backdoors go against our need to put security before surveillance. While in theory it would be great for police to be able to eavesdrop on criminal suspects, gather forensic evidence, or otherwise investigate crimes—assuming proper policies and warrants were in place—there’s no way to design this securely. It’s impossible to build a backdoor mechanism that only works in the presence of a legal warrant, or when a law enforcement officer tries to use it for legitimate purposes. Either the backdoor works for everyone or it doesn’t work for anyone.
This means that any backdoor will make us all less secure. It’ll be used by foreign governments and criminals, against our political leaders, against our critical infrastructure, against our corporations—against everybody. It’ll be used against our diplomats and spies overseas, and our law enforcement agents at home. It’ll be used to commit crimes, facilitate government espionage, and enable cyberattacks. It’s an incredibly stupid idea that keeps being proposed.
If the US successfully imposes backdoor requirements on US companies, there’s nothing to stop other governments from making the same demands. All sorts of repressive countries, from Russia and China to Kazakhstan and Saudi Arabia, will demand the same level of “lawful access,” even though their laws are designed to punish political dissidents.
Rosenstein’s idea to use the update process is particularly damaging. There are already vulnerabilities in the update process, and we are all more secure when everybody installs updates as quickly as possible. One security measure we’re starting to see is more transparency in updates, so individual systems can verify both that the update they’re receiving is authorized by the company and that it is identical to the update every other user receives. This is important for security; recall from Chapter 5 how attackers have co-opted the update process to deliver malware. The FBI’s requirements would prevent companies from adding this kind of transparency, and other security measures, to the update process.
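For readers who want to see what that kind of transparency looks like in code, here is a minimal sketch in Python using the PyNaCl library. The names and the “public log” are hypothetical stand-ins for real update- and binary-transparency schemes, but the logic is the point: a device installs an update only if it carries a valid vendor signature and matches the update published for everyone, which is exactly the check a secretly targeted update would fail.
```python
# A minimal sketch of signed, transparent software updates (hypothetical names).
import hashlib
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

vendor_signing_key = SigningKey.generate()         # held by the vendor
vendor_verify_key = vendor_signing_key.verify_key  # baked into every device

# Stand-in for a public transparency log: hashes of updates shipped to all users.
public_log: set[str] = set()

def publish_update(update: bytes) -> bytes:
    """Vendor side: sign the update and record its hash in the public log."""
    public_log.add(hashlib.sha256(update).hexdigest())
    return vendor_signing_key.sign(update)          # signature prepended to payload

def install_if_trustworthy(signed_update: bytes) -> bool:
    """Device side: accept only a vendor-signed update that everyone else also got."""
    try:
        update = vendor_verify_key.verify(signed_update)
    except BadSignatureError:
        return False                                # not signed by the vendor
    return hashlib.sha256(update).hexdigest() in public_log

signed = publish_update(b"firmware v2.1")
assert install_if_trustworthy(signed)
```
A targeted backdoor of the kind the FBI wants would require pushing an update that appears in no public log, or appears only for one user, and any device doing this check would rightly refuse it.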
If the update mechanism is a known method for the police to hack into someone’s computers and devices, all sorts of people will turn automatic update off. This loss of trust will take years to regain, and the overall effect on security will be devastating. This is akin to hiding combat troops in Red Cross vehicles. We don’t do it, even if it is an effective tactic.
Finally, using the update process to deliver malware would make that process itself much less secure. If updates happen infrequently, a company can build strong security around the authentication process. If a company like Apple received multiple requests per day to unlock phones—the FBI claimed in 2017 that it had 7,000 phones it couldn’t unlock—the routine procedures it would have to put in place to respond to those requests would be much more vulnerable to attack.
Those who understand security know backdoors are dangerous. A 2016 congressional working group concluded: “Any measure that weakens encryption works against the national interest.” Lord Evans, who ran the UK’s MI5, said: “My personal view is that we should not be undermining the strength of cryptography across the whole of the cyber market because I think the cost of doing that would be very considerable.”
More important than all of this, giving the FBI what it wants won’t solve its problem.
Even if the FBI succeeds in forcing large US companies like Apple, Google, and Facebook to make their devices and communications systems insecure, many smaller competitors offer secure products. If Facebook adds a backdoor to WhatsApp, the bad guys will move to Signal. If Apple adds a backdoor to its iPhone encryption, the bad guys will move to one of the many encrypted voice apps.
Even the FBI is less demanding in private. Conversations I and others have had with FBI officials suggest they are generally okay with optional encryption, since most bad guys don’t bother turning it on. What they object to is encryption by default.
And that’s precisely the reason we need it. Defaults are powerful; most of us don’t know enough, or otherwise don’t bother, to turn on optional security features on our computers, phones, web services, IoT devices, or anything else. And while the FBI is correct that the average criminal won’t turn on optional encryption, the average criminal will make other mistakes that will leave them vulnerable to investigation—especially if the FBI improves its digital investigative techniques. It’s the smart criminals that the FBI should be worried about, and those criminals will use the more secure alternatives.
Backdoors harm ordinary Internet users, good and bad alike. The FBI myopically and histrionically focuses on the bad guys. But once we realize there are far more good guys on the Internet than bad ones, it’s obvious that the benefits of ubiquitous strong encryption outweigh the disadvantages of letting criminals use it as well.
In the US, prior to the mid-1990s, encryption was regulated as a munition. Software and hardware products using encryption were export controlled, just like grenades or rifles. Strong encryption couldn’t be exported, and any product for export had to be weak enough for the NSA to easily break. These controls ended when the Internet—and the international tech community—rendered them obsolete because the notion of “export” of software no longer made sense.
There’s talk of bringing encryption controls back. In 2015, then–UK prime minister David Cameron proposed banning strong encryption in the country entirely. Current prime minister Theresa May echoed this after the 2017 London Bridge terrorist attack.
This is a step beyond mandating backdoors in popular encryption systems like WhatsApp and the iPhone; it would make any computer system, software, or service featuring strong encryption illegal. The problem, of course, is that national laws are domestic and software is international. In 2016, I surveyed the market for encryption products. Of the 865 software products from 55 different countries that I found, 811 would be immune from a UK ban because they were created outside the UK. If the US passed a similar ban, 546 products would be immune.
Keeping those foreign products out of a country would be impossible. It would require blocking search engines from finding foreign encryption products. It would require monitoring all communications to ensure that no one downloaded a foreign encryption product from a website. It would require scanning every computer, phone, and IoT device entering the country, whether carried by a person at the border or sent through the mail. It would require banning open-source software and online code repositories. It would require banning, and interdicting at the border, books that contained encryption algorithms and code. In short, it’s simply crazy.
If attempted, the result of such a ban would be even worse than mandating backdoors. It would force all of us to be much less secure against all threats. It would put domestic companies that had to comply at a competitive disadvantage against those that didn’t. And it would give criminals and foreign governments an enormous advantage.
I don’t think this is likely to happen, but it is possible. In the past year, I have seen a change in rhetoric around encryption. In their attempts to demand backdoors, Justice Department officials are quick to paint encryption as a criminal tool, with evocative references to those Four Horsemen of the Internet Apocalypse and anonymity services like Tor. On a completely different front, “crypto” is starting to be used as a shorthand for cryptocurrencies like bitcoin, and painted as a tool for those who want to buy illegal goods on the scarily named Darknet. The result is that the positive uses for cryptography and the ways it protects all of our security are being crowded out of the conversation. If this trend continues, we might see serious proposals to ban strong encryption.
In 2015, Mike McConnell, Michael Chertoff, and William Lynn, three former senior government officials with extensive experience in these matters, wrote about the importance of computer and Internet security to national security:
We believe that the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.
These sentiments run counter to the official position of their former employers, but safely retired and widely respected, the three felt free to speak out. We need to change the official position of government, so that everyone works towards more security for everyone.
There are regular calls to ban anonymity on the Internet. They come from those who want to control hateful and harassing speech, under the assumption that if you can find the trolls, you can banish them—or, better yet, that they would be too ashamed to troll in the first place. They come from those who want to prevent cybercrime, assuming that being able to identify someone makes it easier to apprehend them. They come from those who want to arrest spammers, stalkers, drug dealers, and terrorists.
Banning anonymity takes various forms, but you can basically think of it as giving everyone the Internet equivalent of a driver’s license. We would all use that license to configure our computers and sign up for Internet services like e-mail accounts, and anyone without a license would have no access at all.
This won’t work, for four reasons.
First, we don’t have the real-world infrastructure to provide Internet user credentials based on other identification systems: passports, national identity cards, driver’s licenses, or whatever. Remember what I wrote in Chapter 3 about identification and breeder documents. Second, a system like this might make identity theft rarer, but would also make it much more profitable.
These two reasons alone would make mandatory identification for Internet usage a bad idea. And it’s not as if we need it: there are already plenty of adequate identification and authentication systems. Banks manage well enough to let you transfer money online, and companies like Google and Facebook manage well enough that they let others use their systems. There are several competing smartphone payment apps. Your cell phone number is turning into a unique identifier that’s good enough for purposes such as two-factor authentication.
Those systems, though, only have to be good enough for everyday use. A mandatory identification system has to work against precisely the people who want to subvert it. Every existing identification system is already subverted by teenagers trying to buy alcohol in face-to-face transactions. Securing a mandatory identification system for the Internet would be much more difficult.
Third, any such system would have to work globally, but anyone from any country can pretend to be from anywhere else. If the US were to outlaw anonymity and mandate that its citizens use driver’s licenses to register in person for an e-mail address, any American could simply get an anonymous e-mail account from a provider in another country with no such requirement. So either we acknowledge that anyone can obtain an anonymous e-mail address, or we prohibit communicating with the rest of the world. Neither option will work.
Fourth, and most importantly, it is always possible to set up an anonymous communications system on top of an identified communications system. Tor is a system for anonymous web browsing, used by both political dissidents and criminals around the world. I’m not going to cover how it works here, but it can provide anonymity even if everyone on the system is positively identified.
Those are the reasons why banning anonymity won’t work. But even if it could work, it would still be a terrible idea, because it would be bad for society. Anonymous speech is valuable—and in some countries, lifesaving. The ability of individuals to maintain multiple personas for different aspects of their lives is valuable. Banning anonymity would sacrifice an essential liberty in exchange for the illusion of temporary safety.
This doesn’t mean everyone deserves anonymity in all things. Society already bans anonymity in many areas. You are not permitted to drive an anonymous car on public roads; all cars must have license plates. Similar rules are coming for drones. The US has imposed know-your-customer rules on banks around the world. The boundary between spaces where anonymity is permitted and where it is forbidden seems to be the point where someone can cause significant physical or economic damage. As the Internet+ crosses that boundary, expect more spaces with less anonymity.
Mass surveillance isn’t limited to totalitarian countries. The US government collected phone call metadata on most Americans until 2015, and still has access to this information on demand. Many local governments keep comprehensive data about people’s movements, collected from license plate scanners mounted on street poles and mobile vans. And, of course, many corporations have us all under surveillance through a variety of mechanisms. Governments regularly demand access to that data in ways that don’t require a warrant, such as subpoenas and national security letters.
I worry that some of the catastrophic risks I wrote about in Chapter 5 will lead policy makers to go beyond backdoors and weakened cryptography, and authorize ubiquitous domestic surveillance. Leaving aside the 1984-like ramifications that make it a terrible idea on its face, the effectiveness of ubiquitous surveillance is very limited. It’s only useful between the moment a new destructive capability becomes possible and the moment it becomes easy.
To understand how this might happen, consider the development curve of any particular destructive technology. In the early days of a technology’s development, super-damaging scenarios just aren’t possible. Right now, for example, despite what television and movies like to portray, we do not have the technical knowledge required to concoct a super-germ that can kill millions of people.
As biological science develops, catastrophic scenarios become possible but extremely expensive. Making them a reality would require concerted effort on the scale of the World War II Manhattan Project, or similar military efforts to develop and build biological weapons.
As technology continues to improve, damaging capabilities become cheaper and available to ever smaller and less organized groups. At some point, it becomes possible for a conspiracy to carry out a catastrophic attack. Both money and expertise would be required, but both can be readily acquired. One could envision a large-scale effort to disrupt the global economy by coordinated attacks on stock exchange IT systems or on critical infrastructure, such as power plants or airline navigational systems.
This is the point at which ubiquitous surveillance might possibly provide security. The hope would be to detect a conspiracy in its planning stages, and to collect enough evidence to connect the dots and disrupt the plot before it happens. This is the primary justification for the NSA’s current ubiquitous anti-terrorism surveillance efforts.
But while ubiquitous surveillance could succeed in the majority of those cases, primarily against less technically savvy attackers, it would fail against the most motivated, most skilled, and best-funded attackers. As technology improves, the number of conspirators and the amount of planning required to unleash havoc shrink further, making surveillance-based detection even less effective. Think of Timothy McVeigh’s fertilizer bomb, and the handful of accomplices who helped him attack the Alfred P. Murrah Federal Building. Maybe ubiquitous surveillance could have detected the plot in the planning and purchasing stages, but probably not. Targeted surveillance, based on old-fashioned follow-the-lead police work, might more effectively identify those who advocate violent overthrow of the US government and go about assembling bomb-making materials.
As technology continues to improve, and catastrophic scenarios can be carried out by only one or two people, ubiquitous surveillance becomes useless. We already know of incidents like this. No amount of surveillance can stop mass shootings like the ones at Fort Hood (2009), San Bernardino (2015), or Las Vegas (2017). No amount of surveillance could have stopped the DDoS attacks against Dyn. The failure to anticipate the Boston Marathon bombing was less a failure of mass surveillance than a failure to follow investigative leads, if indeed it can be considered a failure at all.
At best, mass surveillance can only ever buy society time. Even then, it wouldn’t be very effective. Surveillance is more effective at social control than at crime prevention, which is why it’s such a popular tool among authoritarian governments.
This doesn’t mean we won’t see domestic mass surveillance, especially in the wake of another catastrophic terrorist attack. As bad as we are at defending against what we perceive to be catastrophic threats, we are very good at panicking over specific scenarios involving those threats. And historically, the panic has been much more dangerous to freedom and liberty than the actual threats themselves. Furthermore, there will always be new technological threats at different points along that development curve, each potentially justifying mass surveillance.
Hacking back is another terrible idea, one that frequently rears its ugly head. Basically, it’s private counterattack. It’s an organization going on the offensive to retaliate against its attackers—in the pursuit of a criminal, to acquire evidence, or to recover stolen data. Sometimes it goes by the euphemism “active cyber defense,” but that just serves to hide what it really is: server-to-server combat. It’s illegal today in every country, but there’s constant talk about making it legal.
Proponents like to talk about two specific scenarios that might justify hacking back. The first is when victims know the location of their stolen data; they could hack into that computer and delete the data. The second scenario is an ongoing attack; they could hack into their attacker’s computer and stop the attack in real time.
On the surface this might seem reasonable, but it could quickly result in disaster. First, there are the difficulties of attribution I wrote about in Chapter 3. How can an organization be sure who is attacking it, and what happens if it erroneously penetrates an innocent network while retaliating? It’s easy to disguise the source of an attack, or route an attack through an innocent middleman.
Second, what happens if a “hackbacker” penetrates a network in a foreign country? Or worse, that country’s military? It would almost certainly be considered a crime, and might create an international incident. So many countries use surrogates, front companies, and criminals to do their dirty work on the Internet that the chances of a mistake, miscalculation, or misinterpretation are already high. Authorized hacking back would add to the mess, and we don’t want some company to accidentally start a cyberwar.
Third, hacking back is ripe for abuse. Any organization could go after a competitor by staging an attack against its own servers, or by planting sensitive files in its competitor’s network—and then go hacking back.
Fourth, it would be easy for hostilities to escalate. An enterprising schemer could start a battle between two organizations by spoofing hacks by each against the other.
Fifth and finally, it’s unclear whether this is even an effective tactic. Vengeance is satisfying, but there’s no evidence that hacking back either improves security or has a deterrent effect.
The real reason this is a terrible idea, though, is that it sanctions vigilantism. There are reasons why you are not allowed to break into your neighbor’s house to retrieve an item, even if you know they stole it from you. There’s a reason we no longer issue letters of marque, which authorized private merchant ships to attack and capture other vessels. These sorts of capabilities are rightly the sole purview of governments.
Almost everybody agrees with this. Both the FBI and the Justice Department caution against hacking back. A 2017 bill legitimizing some hackback tactics died with minimal support. The main exception seems to be Stewart Baker—attorney and former NSA and DHS senior official—who regularly recommends hacking back. And some cybersecurity companies around the world are pushing for legal authorization, because they want to offer hackback services to corporate customers. Israel seems to want to be a home country for this industry.
Despite its illegality, hacking back is already happening. Companies that offer hackback services don’t advertise openly, and they’re likely hired through intermediaries and under deniable contracts. Like corporate bribery, it persists in the shadows, and the companies that engage in it are breaking the law.
My guess is that this will always be the case. Regardless of what the US and like-minded countries do, others will be safe havens for this practice. This means we need to treat hacking back like bribery: we need to declare it illegal everywhere in the world, and prosecute US companies that engage in it. We need to push for international treaties and norms against hacking back. And we need to do our best to marginalize the outliers. Right now, there is no official US stance on hacking back, but I believe there will be one soon.
Historically, we have often relied on scarcity for security. That is, we secure ourselves from the malicious uses of a thing by making that thing hard to obtain. This has worked well for some things—polonium-210, the smallpox virus, and anti-tank missiles come to mind—and less well for others: alcohol, drugs, handguns. The Internet+ destroys that model.
The radio spectrum is tightly regulated, and numerous rules govern who can transmit on which frequencies. Some are reserved for the military, others for the police, and still others for communications between aircraft and ground control. There are frequencies you can only broadcast on if you have a license, and others you can only broadcast on if you have a specific communications device.
Before computers, this was all enforced by limiting what sorts of radios were commercially available. A normal off-the-shelf radio would only tune to the legal frequencies. This approach wasn’t perfect—it was always possible to buy or build a radio that could transmit or receive on other channels—but doing so was complicated and required specialized knowledge (or at least access to specialized equipment). That wasn’t a perfect security solution, but it was good enough for most purposes. Today, radios are just computers with antennas attached, and you can buy a software-defined radio card for your PC that will let you broadcast on any frequency.
In Chapter 4, I talked about the risks from people hacking their own computers to evade common and reasonable laws. I’ve also talked about the potential for people to modify their automobile software in violation of emissions control laws, their 3D printers to make objects in violation of copyright laws, and their biological printers to make pathogens in violation of laws against killing large numbers of people.
For each of these new technologies, we’re going to hear levelheaded calls to restrict what users can do with their devices. For example, what Mattel, Disney, censors, and gun-control advocates are going to want is a 3D printer that will let the customer make any object except those on a list of prohibited objects.
This is exactly the same issue as the copyright problem. Digital rights management was the technical solution that failed, and the DMCA was the law that came after. The combination has only been effective at preventing hobbyists from making copies of digital music and movies. It hasn’t prevented professionals from doing the same thing, and it hasn’t prevented the spread of copyrighted works with the DRM protections removed.
The fear of hacked autonomous-car software or printed killer viruses will be much greater than the fear of illegally copied songs. The industries that will be affected are much more powerful than the entertainment industry. Both government and the private sector will look at the entertainment industry’s experience with DRM and correctly conclude that the problem is that computers are, by nature, extensible. They will look at the DMCA and conclude that the law wasn’t sufficiently onerous and restrictive. I worry that analogous laws for 3D printers, bio-printers, cars, and so on will be supported by government and private interests working together against users.
Laws restricting access to software that allows people to modify their IoT computers might work against most people for a while, but in the end, they would be ineffective because the Internet allows the free flow of software and information worldwide, and because a domestic-only law would never keep computers out of a country. This isn’t simply a matter of accepting a mostly effective solution and living with the exceptions. With songs and other digital content, the cost of failure is minimal. With these new technologies, the cost of failure will be much higher.
We need to solve these problems directly: not with laws limiting the use of technology or computer capabilities, but by developing counteracting capabilities.
With respect to radios, one solution would be to enable all the radios to police themselves. Radios could be transformed into a detection grid that located malicious or improperly configured transmitters, and then forwarded that information to the police, who could then investigate alleged violations. Radio systems could be designed to withstand attempts at eavesdropping and jamming so that rogue transmitters couldn’t interfere with their operation. There would, of course, be details to be worked out. My goal here isn’t to solve the technical problem—only to demonstrate that solutions can be found.
This is a general lesson that will apply to many aspects of the Internet+, from 3D and bio-printers to autonomous algorithms and artificial intelligence. If we’re going to live in a world where individuals can cause widespread damage, we’re eventually going to have to figure out how to engineer around the threats inherent in each system. DMCA-like restrictions will buy us some time, but they won’t solve our security problems.