If governments are going to take on this central role in improving Internet+ security, as I’ve been arguing they must, they need to shift their priorities. Right now, they prioritize maintaining the ability to use the Internet for offensive purposes, as I described in Chapter 4. But if we are ever going to make any progress on security, they need to switch their thinking and start prioritizing defense. Governments should support what Jason Healey calls a “defense dominant” strategy.
Yes, offense is essential to defense. Intelligence and law enforcement agencies in liberal democracies have legitimate needs to monitor hostile governments, surveil terrorist organizations, and investigate criminals. They use the insecurities in the Internet to do all of those things, and they make legitimate claims about the security benefits that result. They don’t characterize themselves as being anti-security. In fact, their rhetoric is very pro-security. But their actions undermine the security of the Internet.
The NSA has two missions: surveilling the communications of other countries’ governments, and protecting the US government’s communications from surveillance. In the bygone world of point-to-point circuits, these missions were complementary, because the systems didn’t overlap. The NSA could figure out how to monitor a naval communications link between Moscow and Vladivostok and use that expertise to protect the naval communications link between Washington, DC, and Norfolk, VA. Eavesdropping on Soviet and Warsaw Pact communications systems didn’t affect American communications, because the radio systems were different. Subverting Chinese military computers didn’t affect American computers, because the computers were different. In a world where computers were rare, networks were rarer still, and interoperability was bespoke, the NSA’s actions abroad had no effect inside the US.
This is no longer true. With few exceptions, we all use the same computers and phones, the same operating systems, and the same applications. We all use the same Internet hardware and software. There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There’s no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world.
This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It’s actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals’ safe houses would be more secure, but it’s pretty clear that this downside would be worth the trade-off of protecting everyone’s house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses.
Yes, increasing our security will make it harder for us to eavesdrop on, and attack, our enemies in cyberspace. (It won’t make it impossible for law enforcement to solve crimes; I’ll get to that later in this chapter.) Regardless, it’s worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We’ve got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.
Here’s what I propose the US and other democratic governments should do to emphasize defense over offense. Taking these actions will go a long way towards securing the Internet+. Even more importantly, the shift in priorities from offense to defense will allow governments to fulfill the badly needed role of Internet+ security enablers.
Recall Chapter 2, where I talked about software vulnerabilities. They have both offensive and defensive uses, and when someone discovers one, there’s a choice. Choosing defense means alerting the vendor and getting it patched. Choosing offense means keeping the vulnerability secret and using it to attack others.
If an offensive military cyber unit—or a cyberweapons manufacturer—discovers a vulnerability, it keeps that vulnerability secret so that it can be exploited. If it is used stealthily, it might remain secret for a long time. If unused, it will remain secret until someone else discovers it. This is true for the vulnerabilities the NSA exploits to eavesdrop, and the vulnerabilities US Cyber Command exploits for its offensive weaponry. Eventually, the affected software’s vendor finds out about the vulnerability—the timing depends on when and how extensively the vulnerability is used—and issues a patch to fix it.
Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes: to criminals on the black market and to governments. Companies like Azimuth sell vulnerabilities and hacking tools only to democracies; many others are much less discerning. And while vendors offer bounties for vulnerabilities to motivate disclosure, they can’t match the rewards offered by criminals, governments, and cyberweapons manufacturers. One example: the not-for-profit Tor Project offers a bug bounty of $4,000 for vulnerabilities in its anonymous browser, while the cyberweapons manufacturer Zerodium will pay up to $250,000 for an exploitable Tor vulnerability.
Back to the NSA’s dual mission: it can play either defense or offense. If the NSA finds a vulnerability, it can alert the vendor and get it fixed while it’s still secret, or hold on to it and use it to surveil foreign computer systems. Fixing the vulnerability strengthens the security of the Internet against all attackers: other countries, criminals, hackers. Leaving the vulnerability open makes the agency better able to attack others. But each use runs the risk that the target government will learn of the vulnerability and use it—or that the vulnerability will become public and criminals will start using it. As Harvard law professor Jack Goldsmith wrote, “Every offensive weapon is a (potential) chink in our defense—and vice versa.”
Many people have weighed in on this debate. Activist and author Cory Doctorow calls it a public health problem. I have said similar things. Computer security expert Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both Microsoft’s Brad Smith and Mozilla have commented on this, demanding more vulnerability disclosure by governments.
President Obama’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded that vulnerabilities should only be hoarded in rare instances and for short times.
We recommend that the National Security Council staff should manage an interagency process to review on a regular basis the activities of the US Government regarding attacks that exploit a previously unknown vulnerability in a computer application or system. . . . US policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on US Government and other networks. In rare instances, US policy may briefly authorize using a Zero Day for high priority intelligence collection, following senior, interagency review involving all appropriate departments.
The reason these arguments aren’t obviously convincing is the cyberwar arms race I talked about in Chapter 4. If we give up our own offensive capabilities in order to make the Internet safer, that would amount to unilateral disarmament. Here’s former NSA deputy director Rick Ledgett in 2017:
The idea that these problems would be solved by the U.S. government disclosing any vulnerabilities in its possession is at best naive and at worst dangerous. Such disclosure would be tantamount to unilateral disarmament in an area where the U.S. cannot afford to be unarmed. . . . And this is not an area in which American leadership would cause other countries to change what they do. Neither our allies nor our adversaries would give away the vulnerabilities in their possession.
Moreover, not all vulnerabilities are created equal. Some are what the NSA calls “NOBUS.” That stands for “nobody but us” and is meant to designate a vulnerability that the US has found and can exploit, but that no one else can exploit—because it requires more resources than anyone else has, or its discovery is based on some specialized knowledge that no one else has, or its use requires some unique technology that no one else has. If a vulnerability is NOBUS, the argument goes, then the US can safely reserve it for offense because no one else can use it against us.
This approach seems sensible on the surface, but the details quickly become a morass. In the US, the decision about whether to disclose or use a vulnerability takes place during what’s called the “vulnerabilities equities process” (VEP), a secret interagency process that considers the various “equities”; that is, the reasons for keeping the vulnerability secret. In 2014, then–White House cybersecurity coordinator Michael Daniel wrote a public explanation of the VEP that lacked any real details. In 2016, the official, heavily redacted White House document establishing the policy was released to the public. In 2017, new cybersecurity coordinator Rob Joyce published a revised VEP policy with some more details. So we have some clues, but still not enough information to adequately judge the policy.
We don’t know how the government decides what to disclose and what to hoard. We do know that only organizations with different equities in a particular vulnerability have a say in whether that vulnerability is kept secret; and that no one in the VEP seems to be specifically charged with arguing for increased security through disclosure; and that private citizens concerned with securing data at risk from a given vulnerability are not represented.
It is inevitable that the VEP will result in the nondisclosure of vulnerabilities with powerful offensive potential—no matter how much risk they impose. For example, ETERNALBLUE—the critical Windows vulnerability that the Russians stole from the NSA and then published in 2017—was deemed suitable for hoarding and not for disclosing. That seems crazy. Any process that allows such a serious vulnerability in such a widely used system to remain unpatched for over five years isn’t serving security very well.
This raises the concern that the VEP is leading to much more vulnerability hoarding than is wise. Vulnerabilities are independently discovered far more often than random chance alone would suggest. The reason seems to be that certain types of research fall in and out of vogue, and multiple research groups are often investigating the same areas. This implies that if the US government discovers a vulnerability, there is a reasonable chance someone else will independently discover it. Plus, NOBUS doesn’t take into account countries stealing vulnerabilities from each other, like ETERNALBLUE. Both the NSA and the CIA have had cyberattack tools, including zero-day vulnerabilities, stolen and published. These included some pretty nasty Windows vulnerabilities that the NSA had been exploiting for years. Maybe nobody else could have independently discovered them, but that didn’t matter once they were stolen and published.
We also don’t know how many vulnerabilities go through the process. In 2015, we learned that the US government discloses 91% of the vulnerabilities it discovers, although it is unclear whether this figure refers only to exploitable vulnerabilities, or whether the percentage is padded by the much larger number of total vulnerabilities. Without knowing the denominator, the statistic is meaningless: disclosing 91% of 100 vulnerabilities means keeping 9 secret, while disclosing 91% of 10,000 means keeping 900.
My guess is similar to what Jason Healey writes:
Every year the government only keeps a very small number of zero days, probably only single digits. Further, we estimate that the government probably retains a small arsenal of dozens of such zero days, far fewer than the hundreds or thousands that many experts have estimated. It appears the U.S. government adds to that arsenal only by drips and drabs, perhaps by single digits every year.
Finally, we don’t even know which classes of vulnerabilities go through the process and which don’t. It seems as if all vulnerabilities discovered by the government—probably almost entirely by the NSA—go through the process, but not vulnerabilities purchased from third parties, or based on bad design decisions like having a default password. What about vulnerabilities that the NSA finds after infiltrating foreign networks and stealing their cyberweapons? We don’t know.
We do know that vulnerabilities are reassessed annually; that’s a good thing. And as much as I want the US VEP to improve, at least the US has a process. No other country has anything similar—at least, nothing that’s public. Many countries would never disclose vulnerabilities in order to improve the world’s cybersecurity. We don’t know anything about European countries, although I know Germany is working on some sort of disclosure policy.
There are more wrinkles that affect the VEP. Cyberweapons are a combination of a payload (the damage the weapon does) and a delivery mechanism (the vulnerability used to get the payload into the enemy network). Imagine that China knows about a vulnerability and is using it in a still-unfired cyberweapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or hoard it for attack? If it discloses, it disables China’s weapon, but China could find a replacement vulnerability that the NSA won’t know about. If it hoards, it’s deliberately leaving the US vulnerable to cyberattack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.
An unpatched vulnerability puts everyone at risk, but not uniformly. The US and other Western countries are at high risk because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like North Korea are at much less risk, so they have less incentive to fix vulnerabilities. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyberweapons, and we can decide not to use them even if others do.
It’s clear to many observers that the VEP is badly broken. Despite Joyce’s attempt at transparency, there’s really no way for the public to judge its efficacy. From what we can tell of the results, the secret process isn’t resulting in a balance between the various equities. Instead, it’s making us much less secure.
Rick Ledgett is correct that our enemies will continue to stockpile vulnerabilities regardless of what we choose to do. But if we do choose disclosure, four things will happen. One: the vulnerabilities we disclose will eventually get fixed, depriving everyone of them. Two: security will improve as we all learn from vulnerabilities that are found and disclosed. Three: we will set an example for other countries and can then begin to change the global norm. And four: once organizations like the NSA and the CIA willingly relinquish these attack tools, we will be able to get these agencies firmly on the side of defense over offense. And once those four things happen, we can actually make progress on securing the Internet+ for everyone.
Exploiting vulnerabilities found in existing software systems isn’t the only way governments favor offense. Far too often, they also intervene in security standards—not to ensure that they’re strong, but to weaken them. That is, they prioritize offense over defense.
For example, IPsec is an encryption and authentication standard for Internet data packets. I was around in the 1990s when the Internet Engineering Task Force—that’s the public, multi-stakeholder standards group for the Internet—debated these standards, designed to defend against a broad spectrum of attacks. The NSA participated in the process, and deliberately worked to make the protocol less secure. Specifically, it tried to make minor changes that weakened the security, pushed for a then-weak encryption standard, demanded a no-encryption option, delayed the process in a variety of ways, and generally made the standard so complex that any implementation would be both difficult and insecure. I evaluated the standard in 1999 and concluded that its unnecessary complexity had a “devastating effect” on security. Today, end-to-end encryption still isn’t ubiquitous on the Internet, although it’s getting better.
A second example: in the secret government-only standards process for digital cellular encryption, many believe that the NSA ensured that algorithms used to encrypt voice traffic between the handset and the tower are easily breakable, and that there is no end-to-end encryption between the two communicating parties. The result is that your cell phone conversations can easily be monitored.
Both of these were probably part of NSA’s BULLRUN program, whose aim was to weaken public security standards. (The UK’s analogous program was called Edgehill.) And in both of these cases, the resulting insecure communications protocols have been used by both foreign governments and criminals to spy on private citizens’ communications.
Sometimes the government weakens security by law. CALEA, the Communications Assistance for Law Enforcement Act, is a 1994 law that required telephone companies to build wiretapping capabilities into their phone switches so that the FBI could spy on phone users. Fast-forward to today, and the FBI—and its equivalents in many other countries—are demanding similar backdoors into computers, phones, and communications systems. (More about this in Chapter 11.)
And sometimes, the US government doesn’t have to deliberately weaken security standards. Sometimes, the standards are designed insecurely for other reasons and the government takes advantage of that insecurity—while at the same time hiding the fact and delaying attempts to secure those systems.
“Stingray” is now a generic name for an IMSI-catcher, which is essentially a fake cell phone tower originally sold by Harris Corporation (as StingRay) to various law enforcement agencies. (It’s actually just one of a series of devices with fish names—AmberJack is another—but it’s the name used in the media.) A stingray tricks nearby cell phones into connecting to it. The technology works because the phone in your pocket automatically trusts any cell tower within range; there’s no authentication in the connection protocols between the phones and the towers. When a new tower appears, your phone automatically transmits its international mobile subscriber identity (IMSI), a unique serial number that enables the cellular system to know where you are. This lets a stingray collect identification and location information from the phones in its vicinity and, in some cases, eavesdrop on phone conversations, text messages, and web browsing.
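To make that trust failure concrete, here is a minimal, purely illustrative Python sketch of the tower-selection logic described above. The names (Tower, CellPhone, choose_tower) are invented for this toy model, and real baseband firmware is vastly more complicated, but the core problem is the same: the phone picks whichever tower has the strongest signal and hands over its IMSI without ever authenticating the tower.

```python
# Toy model of why IMSI-catchers work: the phone identifies itself to the
# tower by sending its IMSI, but never authenticates the tower in return.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Tower:
    name: str
    signal_strength: int  # arbitrary units; higher is stronger
    is_legitimate: bool   # the phone has no way to check this

class CellPhone:
    def __init__(self, imsi: str):
        self.imsi = imsi

    def choose_tower(self, towers):
        # The real protocols do essentially this: pick the strongest signal.
        return max(towers, key=lambda t: t.signal_strength)

    def attach(self, tower: Tower):
        # No certificate, no challenge-response: the phone simply announces
        # its identity to whichever tower it picked.
        print(f"Phone {self.imsi} attaching to {tower.name}")
        if not tower.is_legitimate:
            print(f"  [{tower.name}] captured IMSI {self.imsi}")

phone = CellPhone(imsi="310150123456789")
towers = [
    Tower("carrier-tower-7", signal_strength=60, is_legitimate=True),
    Tower("stingray-in-a-van", signal_strength=95, is_legitimate=False),
]
phone.attach(phone.choose_tower(towers))
```

Because the fake tower only needs to out-broadcast the legitimate one, the attack doesn’t have to break any cryptography at all.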
The use of IMSI-catchers by the FBI and other law enforcement agencies in the US was once a massive secret. Only a few years ago, the FBI was so scared of explaining this capability in public that the agency made local police departments sign nondisclosure agreements before using the technique, and instructed them to lie about their use of it in court. When it seemed possible that local police in Sarasota, Florida, might release documents about IMSI-catchers to plaintiffs in civil rights litigation against them, federal marshals seized the documents. Even after the technology became common knowledge, and a key plot point on television shows like The Wire, the FBI continued to pretend it was a big secret. As recently as 2015, St. Louis police dropped a case rather than talk about the technology in court.
Cellular companies could add encryption and authentication to their standards, but as long as most people don’t understand their phones’ insecurities, and cellular standards are still set by government-only committees, that’s unlikely to happen.
It’s basically the NOBUS argument. When the cell phone network was designed, putting up a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would do it. With time, the technology became cheaper and easier. What was once a secret NSA interception program and a secretive FBI investigative tool became usable by less-capable governments, cybercriminals, and even hobbyists. In 2010, hackers were demonstrating their home-built IMSI-catchers at conferences. By 2014, dozens of IMSI-catchers had been discovered in the Washington, DC, area, collecting information on who knows whom, and run by who knows which government or criminal organization. Now, you can browse the Chinese e-commerce website Alibaba.com and buy your own IMSI-catcher for under $2,000. With the right peripherals, you can also download public-domain software that will turn your laptop into one.
Another example: IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance by companies like Facebook and Google that happens at the sites you visit, or surveillance that happens on the Internet backbone, this surveillance happens near the point at which your computer connects to the Internet. Here, someone can eavesdrop on everything you do.
IP intercept systems also exploit existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.
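As a rough illustration of what this kind of eavesdropping looks like in practice, here is a hedged sketch using the Scapy packet library: it prints the payload of any unencrypted HTTP traffic passing the machine it runs on. The function name show_plaintext is mine; sniff, TCP, and Raw are Scapy’s own. It assumes Scapy is installed and that the process has permission to capture packets; run against TLS-protected traffic, the same capture yields only ciphertext.

```python
# A minimal passive-interception sketch using Scapy (pip install scapy).
# It needs root/administrator privileges to capture packets. Unencrypted
# HTTP on port 80 shows up as readable plaintext; encrypted traffic on
# port 443 would appear as ciphertext only.

from scapy.all import sniff, TCP, Raw

def show_plaintext(pkt):
    # Only packets carrying an application-layer payload are interesting.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = pkt[Raw].load
        print(f"{pkt[TCP].sport} -> {pkt[TCP].dport}: {payload[:120]!r}")

# Capture traffic to or from port 80 (plain HTTP) on the default interface.
sniff(filter="tcp port 80", prn=show_plaintext, store=False)
```

An IP intercept system sitting at an ISP or a government monitoring point does essentially this at scale, against everyone’s traffic at once.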
We know from the Snowden documents that the NSA conducts extensive data collection operations in the Internet backbone, and directly benefits from the lack of encryption on the Internet. But so do other countries, cybercriminals, and hackers.
Similarly, when the Internet protocols were first designed, adding encryption would have slowed down those early computers considerably; then, it felt like a waste of resources. Now, computers are cheap and software is fast, and what was difficult to impossible only a few decades ago is now easy. At the same time, the Internet surveillance capabilities once unique to the NSA have become so accessible that criminals, hackers, and the intelligence services of any country can employ them.
It’s no different with cell phone encryption or CALEA-mandated wiretapping systems. That same phone-switch wiretapping capability was used by unknown attackers to wiretap more than a hundred senior members of the Greek government over the course of ten months in the early 2000s. CALEA inadvertently caused vulnerabilities in Cisco Internet switches. And, according to Richard George, the former NSA technical director for information assurance, “when the NSA tested CALEA-compliant switches that had been submitted prior to use in DoD systems, the NSA found security problems in every single switch submitted for testing.”
In each of these stories, the lesson is the same: NOBUS doesn’t last. Even former NSA and CIA director Michael Hayden, who popularized the term in the public press, wrote in 2017 that “the NOBUS comfort zone is considerably smaller than it once was.” In a world where everyone uses the same computers and communications systems, any insecurity we deliberately insert—or even that we find and conveniently use—can and will be used against us. And just like fixing vulnerabilities, we are much safer if our systems are designed to be secure in the first place.
Governments should have the goal of encrypting as much of the Internet+ as possible. There are many facets to this.
One: we need end-to-end encryption for communications. This means that all communications should be encrypted from the sender’s device to the receiver’s device, and that no one in the middle should be able to read that communication. (A minimal code sketch of this property follows this list.) This is the encryption used by many messaging apps, like iMessage, WhatsApp, and Signal. This is how encryption in your browser works. In some cases, true end-to-end encryption isn’t desirable. Most of us want Google to be able to read our e-mail, because that’s how it sorts it into folders and deletes spam. In those cases, we should encrypt communications up to the point of our designated (and presumably trusted) communications processor.
Two: we need to encrypt our devices. Encryption greatly increases the security of any end-user device, but is especially important for general-purpose devices like computers and phones. These are often central nodes in our Internet+ life, and they must be as secure as possible.
Three: we need to encrypt the Internet. Data should be encrypted whenever possible as it moves around the Internet. Unfortunately, we’ve all become accustomed to an unencrypted Internet, and many protocols and services were designed around that assumption. When you log on to an unfamiliar Wi-Fi network, what usually happens is that the router intercepts your browsing and replaces the page you want with a log-in screen. That only works because your data isn’t encrypted. Even though this feature takes advantage of unencrypted communications, we need to encrypt anyway and develop other ways to log in.
And four: we need to encrypt the large databases of personal information that are out there.
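To illustrate the first item on this list, here is a minimal sketch of end-to-end encryption using the PyNaCl library (Python bindings to libsodium); PrivateKey and Box are real PyNaCl classes, while the parties and message are illustrative. It is not how iMessage, WhatsApp, or Signal actually work, since real messaging protocols add key verification, forward secrecy, and much more, but it demonstrates the essential property: only the holder of the recipient’s private key can read the message, no matter how many servers relay the ciphertext.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl),
# the Python bindings to libsodium. Real messaging apps layer key
# verification, forward secrecy, and more on top of primitives like these.

from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Public keys can be exchanged over an untrusted network.
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts to Bob using her private key and his public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"meet at the usual place at 6")

# Any server relaying `ciphertext` sees only random-looking bytes.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_public)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext)  # b'meet at the usual place at 6'
```

The defining design choice is that no relay server ever holds a decryption key; that is what “end-to-end” means.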
Encryption isn’t a panacea. Attacks against authentication often bypass encryption by stealing the password of an authorized user. And encryption won’t prevent government-on-government espionage. All the lessons of Chapter 1 still hold: computers are very hard to secure. We know that the NSA is able to circumvent most encryption by attacking the underlying software. But those attacks are more targeted.
An encrypted communications system or computer isn’t impenetrably secure, and one without encryption isn’t irrevocably insecure. But encryption is a core security technology. It protects our information and devices from hackers, criminals, and foreign governments. It protects us from surveillance overreach of our own governments. It protects our elected officials from eavesdropping, and our IoT devices from being subverted. Increasingly, it protects our critical infrastructure. Combined with authentication, it’s probably the single most essential security feature for the Internet+. Many security failures can be traced to a lack of encryption.
Ubiquitous encryption forces the attacker to specifically target its attacks. It makes bulk surveillance impossible in many cases. This affects government-on-population surveillance much more than government-on-government espionage. And it hurts repressive governments much more than it hurts democracies. Encryption is beneficial to society, even though the evildoers can use it to secure their communications and devices as well as anyone else.
This is not a universally held position. There is strong pressure to weaken encryption, not only from totalitarian governments that want to spy on their citizens but from politicians and law enforcement officials in democracies, who see encryption as a tool used by criminals, terrorists, and—with the advent of cryptocurrencies—people who want to buy drugs and launder money.
I, and many security technologists, have argued that the FBI’s demands for backdoors are just too dangerous. Of course, criminals and terrorists have used, are using, and will continue to use, encryption to hide their plots from the authorities, just as they will use many other aspects of society’s capabilities and infrastructure. In general, we recognize that cars, restaurants, and telecommunications can be used by both honest and dishonest people. Society thrives nonetheless because the honest so greatly outnumber the dishonest. As a thought experiment, compare the idea of mandating backdoors with the tactic of adding a governor to every car engine to ensure that no one ever speeds. Yes, that would help prevent criminals from using cars as getaway vehicles, but we would never accept the burden on honest citizens. Weakening encryption for everyone is harmful in exactly the same way, even if the effects aren’t as obvious. I’ll talk more about this in Chapter 11.
Not only is the NSA’s dual mission at odds with itself; it also doesn’t make sense organizationally. Offense gets the money, attention, and priority. As long as the NSA has responsibility for both offense and defense, it’ll never be fully trusted to secure the Internet+. This means the organization as currently constituted is harmful to cybersecurity.
We need strong government agencies on the side of security, and that means splitting apart the NSA and significantly funding defensive initiatives. In Data and Goliath, I recommended splitting the NSA into three organizations: one to conduct international electronic espionage, one to provide security in cyberspace, and a third—rolled into the FBI—to conduct legal domestic surveillance. If the security organization could work closely, or even become a part of, the Internet+ regulatory agency I described in Chapter 8, it would go a long way to making the world safer.
It’s the same in other countries. As long as the UK’s National Cyber Security Centre is subservient to GCHQ (that’s the Government Communications Headquarters, the country’s surveillance agency), it can never be fully trusted. A better model is Germany. The German Federal Office for Information Security (BSI) reports to the chancellor via a different minister than does the Federal Intelligence Service (BND), its offensive agency.
Separating security from spying (and also attack) has other benefits as well. Disclosing vulnerabilities is very hard for an organization that also wants to use them offensively. Sure, the two agencies might be allocated different amounts of funding, but at least that process is public and subject to some scrutiny. In general, the separation will reduce the secrecy that currently surrounds everything the government does about security in cyberspace. That secrecy comes primarily from the offensive capabilities and mission.
Less secrecy also means more oversight, and that’s a key issue with agencies like the NSA. The more their authorities, capabilities, and programs can be debated in public, the less likely they are to be abused.
Unfortunately, in 2016, the NSA underwent a major reorganization where it combined its offensive and defensive directorates into a single operational directorate. While that makes a lot of sense technically—the same skills and expertise are required for both—it’s the exact opposite of what we need politically. If the NSA is ever to be trusted to secure the Internet+ rather than attack it, defense can’t be commingled with offense. Just as the intelligence and attack capabilities are now separate organizations (the NSA and US Cyber Command, respectively), even though the skills and expertise are the same, defense and offense need to be separate organizations.
If we’re going to prioritize defense over offense, we’re going to have to recognize the challenges this creates for law enforcement. The FBI needs investigative capabilities suitable for the 21st century.
In 2016, the FBI demanded that Apple unlock an iPhone belonging to the dead San Bernardino terrorist Syed Rizwan Farook. Apple encrypts its phones by default, and the FBI couldn’t access the data on its own. Because the phone was an older iPhone 5C, Apple still had the technical ability to bypass its protections. (Apple improved the security of later iPhone models.) Apple resisted the FBI’s demand, primarily because it recognized it as a test case for the agency’s ability to force it—and any tech company—to bypass the security of its systems and devices.
It’s the same backdoor demand we’ve heard from the FBI for decades, and I’ll talk about it more in Chapter 11. For the FBI, this was a good test case, and it thought it would easily prevail in court. Apple, along with pretty much every cybersecurity professional, fought back hard. Eventually, the FBI got some unidentified “third party”—probably the Israeli company Cellebrite—to break into the phone without Apple’s help. No court decided anything.
After it was all over, I and a group of colleagues wrote a paper about this issue with the title “Don’t Panic.” We meant that title literally; the FBI and others should stop panicking about encryption. Crimes won’t suddenly become unsolvable just because the FBI can’t extract data from computers or can’t eavesdrop on digital communications, any more than crime was unsolvable before any of us used computers or communicated digitally. We gave three main reasons not to panic.
The truth is that the FBI has lost a lot of its technical expertise. Before there were cell phones, when people’s conversations would irretrievably evaporate as soon as the words were spoken, the FBI had all sorts of investigative techniques it could bring to bear on an unsolved crime. Starting in the mid-1990s, its work became easier: get data off the cell phone. Now it’s more than 20 years later and that era is coming to a close, but all the FBI agents who remember the old days have retired. All the agency’s current employees are people who only know that there’s important data on smartphones.
This has to change. If we’re going to do the right thing and field ubiquitous security systems without any backdoors, the FBI needs new expertise in how to conduct investigations in the Internet+ era. In her testimony to the House Judiciary Committee, mathematician and cybersecurity policy expert Susan Landau described this:
The FBI will need an investigative center with agents with a deep technical understanding of modern telecommunications technologies; this means from the physical layer to the virtual one, and all the pieces in between. Since all phones are computers these days, this center will need to have the same level of deep expertise in computer science. In addition, there will need to be teams of researchers who understand various types of fielded devices. This will include not only where technology is and will be in six months, but where it may be in two to five years. This center will need to conduct research as to what new surveillance technologies will need to be developed as a result of the directions of new technologies. I am talking deep expertise here and strong capabilities, not light.
There are many pieces to this. In addition to better computer forensics, the FBI needs lawful hacking capabilities that it can use in exceptional circumstances. The FBI also needs to provide technical assistance to state and local law enforcement agencies, which are facing the same problems with technological forensics and evidence gathering. This problem is not going away, and it’s going to change with time. The FBI needs to continuously adapt.
To make this happen, the FBI must establish a viable career path for technical investigators. Right now there is none, and you’d be hard-pressed to find a top computer science undergrad thinking about a career in law enforcement. This is why the FBI’s computer forensic experts tend to come from outside the field. If the FBI is going to attract and retain the best talent, it will need to successfully compete with the private sector.
This won’t be cheap. Landau estimates that it will cost hundreds of millions of dollars per year. But that’s much less than the billions of dollars that Internet+ insecurity will cost society, and it’s really the only solution that will work.
The government can’t do this alone. The private sector can’t do this alone. Any real solutions require government and industry to work closely together.
Many of the recommendations in the previous chapters try to delineate the contours of that partnership. Whether they are software vendors, Internet companies, IoT manufacturers, or critical infrastructure providers, businesses need to understand their responsibilities.
This means more information sharing between government and the private sector. This isn’t a new idea, and the past four US presidents have made attempts at this. Most critical-industry sectors have their own information-sharing and analysis centers, where government and businesses can share intelligence information. Some other countries have similar organizations: the UK’s Centre for the Protection of National Infrastructure, the EU’s European Energy–Information Sharing and Analysis Centre (EE-ISAC), Spain’s Grupo Trabalho Seguridad, and Australia’s Trusted Information Sharing Network for Critical Infrastructure Resilience.
The reality always falls short, because both government and industry tend to value receiving information more than giving it. This is rational; the existing costs and barriers often outweigh the advantages.
Much of what the NSA and FBI know is classified, and the agencies haven’t figured out how to share their data with companies that lack staff with security clearances. Much industry data is proprietary or embarrassing, and won’t be shared without assurances that it won’t go any further. Reducing the amount of secrecy in government cybersecurity will go a long way to making information sharing easier, as will providing some assurances of confidentiality, and perhaps indemnification, to companies that share information.
This is easier for critical infrastructure. Governments have long been involved in regulating these industries, and they have experience dealing with threats against them. However, information sharing needs to extend past what is traditionally considered critical infrastructure.
One option is to create a national cyber incident data repository, which would allow businesses to anonymously report breach information to a database. The FAA maintains an anonymous database of airplane near misses. Reporting is voluntary but expected, and engineers can search the database for trends that help them build safer planes, safer airport runways, and safer procedures.
Another idea is to create a “National Cybersecurity Safety Board” for Internet-related disasters, modeled on the National Transportation Safety Board, the independent transport accident investigation bureau. This NCSB would investigate the most serious incidents, issue findings about fault, and publish information about which security measures actually work (and which don’t). It could also issue something like the NTSB’s annual “Most Wanted List” of the most critical changes needed to prevent future accidents.
Whatever we do, it will have to scale for the Internet+. For example, whenever a car crashes, everyone will want the data: the traffic police, the affected insurance agencies, the automobile manufacturer, the local safety agency, and so on.
Nongovernmental networks like the Cyber Threat Alliance have emerged to fill the huge gap for trusted information sharing. Created in 2014 by five US-based security vendors, the Cyber Threat Alliance has rapidly expanded globally. The idea is to help defenders get ahead of attackers (addressing some of the asymmetries we spoke about in Chapter 1) by sharing intelligence about attack methods and motives. While vital, this informal information sharing is no substitute for models that also include information on documented security failures. Companies are reluctant to share this information with each other, which speaks to the need for a government role in facilitating—or even mandating—more information sharing.
We also need to recognize the limits of any public–private partnership, and figure out what to do when civilians are attacked by governments on the Internet. Imagine that the North Korean military physically attacked a US media company. Or that the Iranian military stormed a US casino. We wouldn’t expect those companies to defend themselves. We would expect the US military to defend those companies, as we expect them to defend all US citizens against foreign attack.
What happens if those two countries attack US companies, as indeed they did, in cyberspace? Regardless of how much information the government shared with those companies, can we really expect Sony to defend itself against North Korea, or the Sands Casino to defend itself against Iran? Do we even want private companies to respond to foreign military attack? I don’t think so.
We also can’t expect corporations like Westinghouse Electric and US Steel to defend themselves against Chinese military hacking. We shouldn’t expect the Democratic and Republican national committees—and certainly not state and local political organizations—to defend themselves against Russian government hacking. In none of these cases is it a fair fight.
One of the core arguments of this book is that businesses need to do more to secure their devices, data, and networks. That would go a long way to defend against incursions by foreign governments, and it would make successful attacks harder. It’s no longer good enough to pretend the threat doesn’t exist. But in the end, militaries will always have better skill and more funds than civilian defenders. There will always be attacks that are beyond the ability of civilian defenders to resist. And government should remain the only institution with the authority and capability to respond to large-scale nation-state attacks in cyberspace. Such response might require coordination with and assistance from the private sector, but it should not be the private sector’s responsibility.
So what’s the model here? Is it a Cyber National Guard? Is it a Cyber Corps of Engineers? Do we expect US Cyber Command to defend civilian networks inside the US? Or should that be the charge of the Department of Homeland Security? Estonia has a volunteer Cyber Defence Unit made up of nongovernment experts that can be called on in times of national emergency. I don’t know if that’s what we need here, but we need something.
Any such organized governmental defense against nation-state attacks on private entities is a long way off, though. Until then, individuals and organizations must take more responsibility for their own security than they have at any time since the closing of the American frontier in 1890.