8

Government Is Who Enables Security

Airplanes should be incredibly dangerous. You’re inside what’s basically a rocket, hurtling through the air at 600 miles per hour. A modern airplane has upwards of six million parts, many of which have to work perfectly. Something fails, and the plane crashes. Common sense says that it’s super risky.

Airlines compete with each other on all sorts of attributes. They compete on price and routes. They compete on seat pitch and legroom. They compete on amenities in their premium cabins. They compete on nebulous “feel good” emotions with evocative branding. But they don’t compete on safety. Safety—and security—is set by the government. Airlines and airplane manufacturers are required to comply with all sorts of regulations. And it’s all invisible to the consumer. No airline ever touts its safety or security records in advertisements. But every time I board a plane—182 times in 2017—I know that the flight will be safe.

It wasn’t always like this. Airplanes used to be incredibly dangerous, and fatal accidents were common. What changed was airplane safety regulation. Over the decades, government has forced improvement after improvement in airplane design, flight procedures, pilot training, and so on. The result is that today, commercial airplanes are the safest way to travel, ever.

We need to do the same for Internet security. There are several models to consider for setting and enforcing the security standards described in Chapter 6. An independent testing agency could judge manufacturers on adherence to the standards. Consumers Union—a not-for-profit organization funded by magazine subscriptions and grants—could be a model for that. Or we could rely on the market—that is, on customers—to demand more security by favoring more-secure products and services.

I’m not optimistic about any of those ideas, for all the reasons discussed in the previous chapter. Government is by far the most common way we improve our collective security, and it is almost certainly the most efficient. It’s how we change business incentives. It’s how we pay for common defense. It’s how we solve collective action problems and prevent free riding.

I can think of no industry in the past 100 years that has improved its safety and security without being compelled to do so by government. This is true for buildings and pharmaceuticals. It’s true for food and workplaces. It’s true for automobiles, airplanes, nuclear power plants, consumer products, restaurants, and—more recently in the US—financial instruments. In every one of those cases, before government regulation, sellers simply kept producing dangerous or harmful products and selling them into a naive market. Even when there was popular outrage, it took government to change the behavior of business owners. From the manufacturers’ point of view, it’s simply rational to hope for the best, rather than spend money up front to make their products safer. After all, the buying public often can’t tell the difference until something goes wrong, and producers are biased to prefer an immediate benefit—cost savings—over a longer-term safety or security benefit.

Whenever industry groups write about this, they stress that any standards should be voluntary. This is their own self-interest talking. If we want a standard enforced, it needs to be mandatory. Anything else is not going to work, because the incentives aren’t aligned properly.

A NEW GOVERNMENT AGENCY

Governments operate in silos. The Food and Drug Administration has jurisdiction over medical devices. The Department of Transportation has jurisdiction over ground vehicles. The Federal Aviation Administration has jurisdiction over aircraft, but doesn’t consider the privacy implications of drones to be part of its mandate. The Federal Trade Commission oversees privacy to a degree, but only in the case of unfair or deceptive trade practices. The Department of Justice gets involved if a federal crime is committed.

For data, jurisdiction can change depending on use. If data is used to influence a consumer, the FTC has jurisdiction. If that data is used to influence a voter, that’s the Federal Election Commission’s jurisdiction. If the same data is used in the same way to influence a student in a school, the Department of Education gets involved. In the US, no agency has authority over harms due to information leakage or privacy violations—unless the company involved made false promises to the consumer. Each agency has its own approach and its own rules. Congressional committees fight over jurisdiction, and the federal departments and commissions all have their own separate domains: everything from agriculture to defense to transportation to energy. Sometimes states have concurrent regulatory authority—California, for example, has long been a leader in Internet privacy issues—and sometimes the federal government preempts state actions.

This is not how the Internet works. The Internet, and now the Internet+, is a freewheeling integrated system of computers, algorithms, and networks. It’s the opposite of a silo. It grows horizontally, destroying traditional barriers so that people and systems that never previously communicated are now able to do so. Whether it’s large personal databases or algorithmic decision-making or the Internet of Things or cloud storage or robotics, these are all technologies that interrelate with each other in very profound ways. Right now on my smartphone, there are apps that log my health information, control my energy use, and interact with my car. That phone has entered the jurisdiction of four different US federal agencies—the FDA, DOE, DOT, and FCC—and it’s barely gotten started.

These electronic platforms are general-purpose, and they need a holistic approach to policy. They all use computers, so any solutions we come up with will have to be general as well. I’m not saying there will be one set of regulations that covers every computer in every application, but there needs to be a single framework, applicable to all computers, whether they’re in your car, plane, phone, thermostat, or pacemaker.

I am proposing a new federal agency: a National Cyber Office (NCO). My model is the Office of the Director of National Intelligence (ODNI), created by Congress in the wake of the 9/11 terrorist attacks as a single entity to coordinate intelligence across the US government. The ODNI’s job is to set priorities, coordinate activities, allocate funds, and cross-pollinate ideas. It’s not a perfect model, and the ODNI has been criticized for how ineffective its cross-agency coordination has been. But this sounds like the model we need for the Internet+.

The initial purpose of this new agency would not be to regulate, but instead to advise other areas of government on issues that touch on the Internet+. Such advice is badly needed by other federal agencies, and by lawmakers at all government levels. The agency could also direct research where needed, convene meetings of stakeholders on different issues, and file amicus curiae briefs in court cases where its expertise would be valued. Instead of an enforcement agency like the FTC or the FDA, think of this agency as more like the Office of Management and Budget or the Department of Commerce: a repository of expertise.

The NCO would recognize that Internet+ policy necessarily spans multiple agencies, and that those agencies need to retain their existing scopes of responsibility. But many solutions need to be centrally coordinated, and someone needs to hold individual agencies accountable, because so much of Internet+ hardware, software, protocols, and systems is shared across wildly different applications.

This new agency could also manage other government-wide security initiatives: updating the NIST Cybersecurity Framework; developing the other types of security standards I listed in Chapter 6; administering the academic grant making and research tax credits I mentioned in Chapter 7; setting the security requirements that should be part of the government’s own procurement process; and maintaining government-wide best practices. It could manage partnerships between government and industry, and help develop strategies that encompass both. It would also serve as a counterweight to the military and national-security government organizations that are already setting policy in this space. Some of this is being done by NIST today, and some by the National Science Foundation, but neither NIST nor the NSF could easily be expanded into this new role. It would make sense to move these functions into the new dedicated agency.

Finally, this agency would be a place for the government to consolidate its expertise. A dedicated Internet+ agency could attract (and pay competitive salaries to) talented individuals to help craft and advise on policy matters. This means that the agency would consist of engineers and computer scientists working closely with experts in law and policy. This is a theme I will return to in the Conclusion: the importance of technologists and policy makers working closely together.

Once the NCO was established, other “centers of excellence” could be created under its umbrella. Again, the ODNI is a good model, with its National Counterterrorism Center and National Counterproliferation Center. I imagine that the NCO might need a National Artificial Intelligence Center and a National Robotics Center, and perhaps even a National Algorithms Center. We might create a National Cyberdefense Academy—an interagency facility with a variety of classes, certifications, and tracks that all agencies could send staff to for training. It would also have to coordinate closely with the Department of Homeland Security, and probably the Department of Justice.

Eventually, in some way, regulation will have to cut across multiple Internet+ domains. Maybe this new agency will be the regulatory body, but more likely the existing agencies that already regulate various industries will continue to do so, adding Internet+ security regulations to their portfolios. Their broad mandates make them more nimble than Congress. They can respond to changes in technology or markets. They can motivate companies to change their behavior.

A model based on the FTC might be useful. The FTC doesn’t have specific rules. Instead, it has vague rules and proscribed outcomes, and it pursues the most flagrant violators. Everyone else watches the FTC’s actions and fines, and tries to be slightly better than the companies that get penalized. The FTC also issues guidance and works with industry to promote compliance. And although it is sometimes described as toothless, the result is public awareness of norms of acceptable business conduct, accountability for those found to have violated them, and continuous improvement across the board.

Here’s one example: in 2006, Netflix published 100 million anonymized movie ratings as part of a contest. Researchers were able to de-anonymize some of that data, which surprised pretty much everyone. The FTC stepped in only when Netflix, having taken no better care with customer data, announced a second contest in 2009.
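To see how that kind of de-anonymization works in principle, here is a minimal sketch of a linkage attack, in the spirit of the Narayanan–Shmatikov research that re-identified Netflix users by matching rating patterns against public IMDb reviews. All of the records below are hypothetical, and the real attack was statistical and tolerated noisy, approximate matches:

```python
# Minimal sketch of a linkage (de-anonymization) attack on an
# "anonymized" ratings release. All data here is made up.

# The published dataset: user names replaced with opaque ID numbers,
# but (movie, date) pairs left intact.
anonymized = {
    1337: {("Brazil", "2005-03-02"), ("Heat", "2005-03-04"), ("Alien", "2005-04-01")},
    2046: {("Brazil", "2005-06-11"), ("Big", "2005-07-09")},
}

# Auxiliary data an attacker can gather publicly: a handful of dated
# reviews the target posted under their real name, e.g. on IMDb.
auxiliary = {("Brazil", "2005-03-02"), ("Alien", "2005-04-01")}

def link(anon_db, aux, threshold=2):
    """Return anonymized IDs whose records overlap the public trail."""
    return [uid for uid, records in anon_db.items()
            if len(records & aux) >= threshold]

print(link(anonymized, auxiliary))  # -> [1337]: the "anonymous" user is identified
```

The point is that a handful of sparse, dated data points is often enough, because individual histories—of movies watched, places visited, things bought—are surprisingly unique.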

Today, both the FTC and the SEC have the authority to require publicly traded companies to audit and then certify their own cybersecurity. Those agencies could adopt an existing security framework, or create one of their own.

I’m not the first to suggest this. A research group advising the European Commission proposed the formation of the European Safety and Security Engineering Agency. Ashkan Soltani, former chief technologist at the FTC, proposed a new “federal technology commission.” University of Washington law professor Ryan Calo proposed a Federal Robotics Commission. And Matthew Scherer of George Mason University proposed an agency to regulate artificial intelligence.

Some other countries are thinking along these lines. Israel created its National Cyber Bureau in 2011 to both increase the country’s defenses in cyberspace and advise the rest of the government on cyber-related issues. The UK created the National Cyber Security Centre in 2016 to “help protect our critical services from cyber attacks, manage major incidents, and improve the underlying security of the UK Internet through technological improvement and advice to citizens and organisations.” To my mind, both of these organizations are too closely tied to the military and therefore to the part of the government that relies on Internet insecurity, but they’re a start.

There is significant historical precedent in the US for this idea. New technologies regularly lead to the formation of new government agencies. Trains did. Cars did. Airplanes did. The invention of radio led to the formation of the Federal Radio Commission, which became the Federal Communications Commission. The invention of nuclear power led to the formation of the Atomic Energy Commission, whose functions were eventually split between the Nuclear Regulatory Commission and what is now the Department of Energy.

We can debate the specifics, and the appropriate limits of this new agency. We can debate the organizational structure. But whatever the format, we need some government agency to be in charge of this.

I do think that Internet+-era regulation will look different from industrial-era regulation. The Internet is already governed on a multi-stakeholder model, whereby governments, industry, technologists, and civil society come together to resolve issues pertaining to its functioning. My guess is that this model is much more suited to Internet+ regulation than the other models we’re used to.

There are reasonable objections to this proposal. Government agencies are inefficient. They often lack the needed expertise. They are bureaucracies, and lack vision and foresight. There are problems of speed, scope, and efficacy, and the potential for regulatory capture. And—of course—there’s the pervasive opinion that government should just get out of the way.

But those worries exist, regardless of whether there is a single new government agency or the authority is delegated among a dozen or so existing government agencies. The value of a single agency is considerable. The alternative is to craft Internet+ policy ad hoc and piecemeal, in a way that adds complexity and doesn’t counter emerging threats.

Of course, the devil is in the details, and I don’t have any of them. My NCO idea might not work, and I’m okay with that. My hope is that, at a minimum, it can start a discussion.

GOVERNMENT REGULATIONS

The computer industry has largely been regulation-free. That’s partly the result of the industry’s nascence. It’s partly the result of the industry’s relative initial harmlessness, and an unwillingness on the part of its leaders to recognize how much things have changed. And it’s largely the result of governmental reluctance to risk disrupting the enormous wealth generator that the industry has become. I think those days are coming to an end, and that Internet+ regulation is inevitable. There are several reasons.

One: governments tend to regulate industries that are choke points for the overall economy, like telecommunications and transportation. The Internet+ is definitely one of these, and it’s increasingly an economic linchpin. Two: governments regulate consumer products and services that can kill people, and the Internet+ is rapidly joining that club. Three: many of the existing industries being permeated by computers—from toys to appliances to automobiles to nuclear power plants—are already regulated.

It’s important to realize that regulation is more than a list of things either required or forbidden. That’s regulation at its bluntest; most of the time it’s far more nuanced. Regulation can create responsibilities and leave the details up to the market. It can push in one direction or the other. It can change incentives. It can nudge instead of force. It can be flexible enough to adapt to changes in both technology and society’s expectations.

The goal isn’t perfection. We don’t demand that automobile manufacturers produce the safest car possible. We mandate safety features like seat belts and air bags, require crash tests, and leave the rest to the market. This approach is essential in an environment as dynamic as the Internet+.

Europe is already significantly increasing its Internet regulations—we’ll talk about the EU’s General Data Protection Regulation in Chapter 10—and some US states are moving in a similar direction. While there’s little appetite in Washington for any sort of regulation, that appetite could increase quickly if a disaster occurs that takes a significant number of lives or destroys a certain portion of our economy.

The US has started to regulate at the federal level, but in fits and starts, and only on an industry-specific basis. For example, the FDA has issued guidance to medical-device manufacturers on regulatory requirements for Internet-connected devices. The agency doesn’t conduct the testing itself; developers test their products and services against the standards and submit their documentation to the FDA for approval. This is serious business. The FDA is not shy about denying approval to products that fail or demanding the recall of products that cause harm.

Rules for privacy of patients’ medical data are substantially different from those governing privacy of consumer data. As you’d expect, medical-data rules are much more stringent. Many developers of new health-related products and services are trying to position their wares as consumer devices, so they don’t require FDA approval. This sometimes works, as with health trackers like Fitbit. And sometimes the FDA fights back, as it did with genetic data collected by 23andMe.

For cars, the Department of Transportation has issued only voluntary security standards. Voluntary standards are never as effective as mandatory ones, but they can help. For example, in a lawsuit, a court will often weigh a manufacturer’s compliance with voluntary DOT guidance when determining whether it was negligent.

The FAA has taken a different approach with drone regulation. It does not require design certification for each new drone that enters the market; instead, it indirectly regulates consumer drones through policies that restrict how and where they can be used.

There have been some successes. In 2015, the FTC sued Wyndham Hotels over its computer security. Wyndham had terrible security practices that allowed hackers to repeatedly break into its networks and steal data about Wyndham customers. The FTC argued that because Wyndham had a privacy policy that made promises the company did not keep, it was deceiving the customer.

The court battle was complicated, and largely hinged on matters of authority that aren’t relevant here. But what’s interesting is that one of Wyndham’s defenses was that the FTC couldn’t fine it for not being secure enough, because the FTC never told Wyndham what “secure enough” meant in the first place. The Third Circuit Court of Appeals sided with the FTC, basically saying that it was Wyndham’s job to figure out what “secure enough” meant, and that it had screwed up by not doing so.

CHALLENGES OF REGULATION

The Internet is the most dynamic environment there is. Regulation, especially wrongheaded or excessive regulation, can retard new technologies and new innovations. In security, it can inhibit the flexibility and agility required to keep up with changing threats.

When it comes to regulating the Internet+, I see four problems: speed, scope, efficacy, and the potential to stifle the industries being regulated.

Speed first: government policy changes slowly compared to the speed of technological innovation. It used to be the other way around; it took almost 40 years after Alexander Graham Bell first commercialized the telephone for it to become a commonplace item. For the television, it took over 30 years. Those days are over. E-mail, cell phones, Facebook, Twitter—these penetrated society much faster than the technologies of previous decades. (It took Facebook just 13 years to amass two billion regular users worldwide.) We’re at the point where law will always lag behind technology. By the time regulations are issued, they’re often laughably out-of-date. A good example is the EU regulation requiring cookie notices on websites: it would have made sense in 1995, but by the time it came into force in 2011, web tracking had become far more sophisticated than cookies. Similarly, courts will always be trying to apply outdated laws to more modern situations, and—even worse—technological changes will result in laws having all sorts of unintended consequences.

Next, scope: laws tend to be written narrowly, focused on specific technologies. These laws can fail when technologies change. Most of our privacy laws were written in the 1970s, and while the concerns haven’t changed, the technology has. Here’s an example: the Electronic Communications Privacy Act was passed in 1986. One of the things the law did was regulate the privacy of e-mail, giving different privacy protections to two types of e-mail. To access newly received e-mail, the government needs a warrant. To access e-mail that’s been sitting on a server for more than 180 days, the government needs only a subpoena, with no judicial finding of probable cause required. In 1986, that made sense. Storage was expensive. People accessed their e-mail by having their e-mail client download it from the server to their computer, deleting the server copy. Anything left on the server for more than six months was considered abandoned, and we have no privacy rights in abandoned property. Today, everybody leaves their e-mail on the server for six months, and even six years. That’s how Gmail, Hotmail, and every other web-based e-mail system works. The law also makes an important distinction between services that provide communication and services that process and store data—a distinction that no longer makes sense. The logic behind that old law has been completely reversed by technology, but it’s still in force.
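To make the absurdity concrete, here is the 180-day rule reduced to a few lines of code. This is my own illustrative sketch, not statutory language, and it deliberately ignores complications such as the opened/unopened distinction and later case law (notably United States v. Warshak, in which a federal appeals court held that a warrant is required regardless of a message’s age):

```python
from datetime import date

# ECPA / Stored Communications Act threshold, fixed in 1986.
WARRANT_WINDOW_DAYS = 180

def process_required(received: date, today: date) -> str:
    """Rough sketch of the 1986 rule for e-mail stored on a server.

    Heavily simplified: ignores the opened-vs-unopened distinction,
    court orders, and post-1986 case law such as U.S. v. Warshak.
    """
    age_in_days = (today - received).days
    return "warrant" if age_in_days <= WARRANT_WINDOW_DAYS else "subpoena"

print(process_required(date(2018, 1, 1), date(2018, 3, 1)))  # warrant
print(process_required(date(2017, 1, 1), date(2018, 3, 1)))  # subpoena
```

The entire privacy protection turns on a single storage-duration constant that made sense for 1986 technology and makes none for webmail.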

This will happen repeatedly until we start writing laws that are technology neutral. If we focus on the human aspects of the law rather than on the technological ones, we can protect laws against both the speed and the scope problems. For example, we could write laws that address “communication,” regardless of whether it happens by voice, video, e-mail, text, private message, or whatever technology comes next. Our technological future is filled with the emergent properties of new technologies, and we will regularly be surprised.

There’s another, and completely different, scope problem with regulations: How general should we make them? On the one hand, it’s obvious that we need different standards for cars and airplanes than we do for toys and other household objects, and different regulations for financial databases than we do for anonymous traffic data. On the other hand, the interconnectedness of everything makes things like toys and pothole data less innocuous than they might appear.

Additionally, it’s hard to know exactly where regulations should stop. Yes, we’re going to regulate things that affect the world in a direct physical manner, but because everything is interconnected and the threats are interlocking, it’s impossible to carve out any portion of the Internet+ and definitively say that it doesn’t matter. Someone might suggest not bothering to regulate low-cost Internet devices, but vulnerabilities in these can affect critical infrastructure. Someone else might suggest exempting pure software systems because they don’t have physical agency, but they can still have real-world effects: think of software that decides who gets released on bail or parole. Regulations should probably cover these systems as well.

The third problem with regulating the Internet+ is efficacy. Large corporations are very effective at evading regulation. The big tech companies are spending record amounts of money lobbying in Washington, to the point that they’re now spending twice what the banking industry does, and many times more than oil companies, defense contractors, and everyone else. Google alone spent $6 million on lobbying in just three months of 2017. And even without such lobbying, these companies are enormous wealth generators for the US, and Congress is loath to risk disrupting that.

We’re already seeing examples. One is the way developers of fitness devices worked to persuade the FDA that their products are not medical devices and therefore not subject to FDA rules. Data brokers have performed similar lobbying maneuvers with respect to personal information held in their databases. As privacy law professor Julie Cohen has said: “Power interprets regulation as damage and routes around it.”

We need to regulate fairly, and we need to regulate well. Both goals are hard to achieve in practice, and lots of regulations don’t work. We’ve already seen examples of this in Internet security: the CAN-SPAM Act, which didn’t stop spam; the Child Online Protection Act, which didn’t protect children; and the DMCA, which didn’t prevent unauthorized copying. We’ll see more ineffective and counterproductive legislative proposals in Chapter 11.

And regulation is only as effective as its enforcement. In the US, the FTC has taken legal action against robocallers, “do not call” list violators, deceptive telco advertisers, and excessive data collection by toys and televisions. FTC fines range from a few hundred thousand dollars to millions. But the agency is hamstrung by limited resources to investigate and bring cases, which allows the cleverer companies to evade FTC regulations—potentially indefinitely. The odds of getting caught and successfully sued are so low that a rational company might simply take the risk.

Regulations are consistently co-opted. Instead of promoting the common good, they end up promoting some private agenda. This happens all the time, but the example closest to my field is the US Copyright Office. It’s not a voice of the people, and its regulations aren’t designed to promote fairness. It’s a voice of the copyright holders—large companies like Disney—and its regulations are largely designed to promote the interests of those companies. I could say the same thing about many industries and the agencies that are supposed to regulate them. This is regulatory capture, and I could spend an entire chapter on it. It’s very common, it happens for a lot of reasons, and I see no reason why Internet regulation would be immune to the same forces. If regulators become an enforcement arm of an entrenched industry group, the results can be worse than doing nothing at all.

The fourth and last problem with regulations is that they can stifle innovation. I think we just have to accept that, and in certain rare cases we may even want to stifle innovation deliberately. Unfettered innovation is only acceptable for benign technologies. We regularly put limits on technologies that can kill us, because we believe the safety and security benefits are worth it. The precautionary principle dictates that when the potential for harm is great, we should err on the side of not deploying a new technology without proof of security. This way of thinking will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. We don’t want to—and can’t—stop technological progress, but we can make deliberate choices between technological futures, and speed up or delay certain technologies with respect to others.

We won’t get new features for our computers and devices at the same furious pace we’re used to, and that will be a benefit when the features could potentially kill you. But, as I’ve mentioned a couple of times, regulations can encourage innovation as well. By providing incentives to private industry to solve security problems, we’re likely to get more security.

We will need to work carefully through this. Regulations can disproportionately burden the small companies where technological innovation tends to happen. They often benefit large incumbents, who have the money to satisfy them, and end up serving as a protectionist barrier against new entrants rather than fostering competition. I don’t want to minimize these problems, but we’ve dealt with them in other industries, and I’m confident that we can find the right middle ground here as well.

NORMS, TREATIES, AND INTERNATIONAL REGULATORY BODIES

Okay. I admit it. I’ve palmed a card throughout this chapter by ignoring the international nature of the problem. How can I propose that US citizens enact domestic regulations to solve what is inherently an international problem? Even if the US and the EU both pass strict regulations on IoT security, what’s to stop cheap, insecure products from coming over the borders from Asia or elsewhere?

It’s a fair criticism. Countries can regulate what is manufactured or sold within their borders. Many countries already do that for pretty much all consumer products. We can create a blacklist of products or manufacturers, and force companies like Amazon and Apple to remove them from their online stores. But that’ll only go so far. Unless we allow ourselves to be subjected to pervasive and invasive searches, we can’t regulate what crosses the border in suitcases, mail-order packages, or Internet downloads. We can’t regulate software services purchased from foreign websites; censoring them is simply not an option. This is nothing new, and something we’ll have to deal with.

Even so, domestic regulations can have a powerful effect worldwide. Unlike cars, which are built differently for different countries to comply with things like emissions-control laws, software is more of a write-once, sell-everywhere kind of business. If a large enough market regulates a software product or service, the manufacturer is likely to make that change worldwide rather than maintain multiple versions. Because the Internet is global, regulating cybersecurity is a bit like imposing emissions standards: if a country regulates on its own, it bears all the costs, while the rest of the world shares in the benefits.

International cooperation is coming. It is often in governments’ interests to harmonize laws. Most countries are concerned about protecting their economies and infrastructure from interruption. States also have an interest in cooperating to fight cybercrime. Smart, organized crime groups engage in what I call jurisdictional arbitrage: specifically basing their criminal activities in countries that have lax cybercrime laws, easily bribable police forces, and no extradition treaties. We know that both Russia and China turn a blind eye to crime that is directed abroad. There are hacker havens in Nigeria, Vietnam, Romania, and Brazil as well. For poor countries, organized cybercrime can actually be a source of wealth and prosperity. Some states, like North Korea, actively engage in state-sponsored cybercrime to boost the regime’s coffers.

There are some promising developments in this area. There are hundreds of national response teams—CERTs or CSIRTs—around the world. These groups often cooperate across borders and provide a way for incident responders to share information. The Budapest Convention on Cybercrime has now been ratified by 52 countries, although notably not by some of the significant players, such as Russia, Brazil, China, and India. The treaty provides a framework for international police and judicial cooperation on cybercrime.

We don’t want governments managing the Internet. Much of the Internet’s innovation stemmed from the US government’s benign neglect. Today, countries worldwide want to be much more involved in how their domestic Internet is managed. At the extreme, large and powerful countries like Russia and China want to control their domestic Internet in ways that increase their surveillance, censorship, and control over their citizens.

The current governance model for the Internet is multi-stakeholder, consisting of governments, companies, civil society, and interested technologists. As dysfunctional as it might feel sometimes, such a model is our best defense against an insecure Internet. It also prevents a splintering—called balkanization—of the Internet that might result from totalitarian countries enforcing their own demands.

Norms—informal rules for individuals, corporations, and nations for what is acceptable behavior—regulate far more of our society than most people realize. However, we currently don’t have established international norms regulating the use of cyberweapons. And the current norm regarding cyberespionage is that it’s okay. As we saw in Chapter 4, countries are in the middle of a cyber arms race. And every country is pretty much making it up as it goes along.

Political scientist Joseph Nye believes that countries can develop norms limiting cyberattacks. For a variety of reasons, it’s in the self-interest of nations to agree on things like not attacking each other’s infrastructure in peacetime, or not using cyberweapons against civilians first in wartime. These norms will eventually find their way into treaties and other more formal agreements.

One roadblock to consensus is that many countries don’t think of cybersecurity simply in terms of preventing attacks by hostile actors. For them, it’s also about ensuring that dissenting ideas don’t influence domestic politics. Viral dissident content can represent just as much of a threat as viral code. This makes multilateral bargaining hard, but not impossible.

The UN had its GGE—the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security—which came up with a good list of internationally agreed-upon norms in 2013. But attempts to extend those norms were blocked by countries like China that didn’t agree with them, and in 2017 the group disbanded in deadlock.

Still, there is probably some common ground. If states can’t agree not to stockpile cyberweapons, for instance, it’s nevertheless plausible that they could agree to some nonproliferation standards for cyberweapons. The Proliferation Security Initiative (PSI) has been relatively successful at addressing illegal trafficking in WMD. Over a hundred countries, including the usually nonconformist Russia, have agreed to become involved. The idea is to prevent weapons proliferation by having better safety standards and export controls, by interdicting WMD materials, and through information-sharing and capacity-building exercises.

With any agreements there will be compliance problems. Cyberweapons are easy to hide from treaty inspectors, and offensive capabilities look a lot like defensive capabilities. But the early nuclear treaties signed in the 1960s had problems, too, yet they started what in retrospect was a successful process that made the world much safer.

Some ideas for how to get started on the path towards an international cybersecurity agreement are already on the table. In a 2014 report, cyber policy expert Jason Healey recommended creating an international regulatory regime similar to the one established after the 2008 global financial crisis. In the same year, Matt Thomlinson of Microsoft proposed a “G20+20 group” of 20 governments and 20 global information and communications technology firms to draft a set of principles for acceptable behavior in cyberspace. Microsoft’s president and chief legal officer, Brad Smith, proposed a “Digital Geneva Convention” for cyberspace that would put civilian infrastructure off limits to government attack. Google has its own proposal. At this point, these ideas are all clearly aspirational.

Setting norms is a long process. We are likely to get further with incremental cooperation and agreements than by striving for a perfect grand bargain straight away. Even so, there will be countries that won’t comply with any rules, standards, or guidelines we come up with. We will deal with them the same way we do in any other area of international law. It won’t be perfect, but we’ll work with it and improve it over time.

Unfortunately, we in the US are setting norms with our own behavior. By using the Internet for both surveillance and attack, we are telling the world that those things are okay. By prioritizing offense over defense, we are making everyone less safe.