Internet+ security looks pretty bleak. The threats are increasing, the attackers are more brazen, and the defenses are increasingly inadequate.
All the blame shouldn’t fall on the technology. Engineers already know how to solve some of the problems I’ve mentioned. Hundreds of companies, and even more academic researchers, are working on new and better security technologies to counter the emerging threats. The challenges are hard, but they’re “send a man to the moon” hard and not “travel faster than light” hard. And while nothing is a panacea, there really isn’t any limit to engineers’ creativity in coming up with novel solutions to hard problems.
Still, I don’t think it will get better anytime soon. My pessimism stems primarily from the policy challenges. The current state of Internet security is a direct result of business decisions made by corporations and military/espionage decisions made by governments—everything I wrote about in Chapter 4. What we’ve learned from the past few decades is that computer security is more a human problem than a technical problem. What’s important is the law and economics, and the psychology and sociology—and what’s critical is the politics and governance.
Consider spam. For years, spam was a problem you had to deal with on your computer, or maybe with help from your ISP if it provided local anti-spam services. The most efficient way to identify and delete spam was in the network, but none of the Internet backbone companies bothered, because they didn’t really care and had no way to bill the user for their effort. The situation only changed when the economics of e-mail changed. Once most users had accounts at one of only a few large e-mail providers and most e-mail passed between them, it suddenly made sense for them to provide anti-spam services to all of their users automatically. The result was a slew of technologies that detect and quarantine spam. Today, spam still constitutes just over half of all e-mail, but 99.99% of it is blocked. It’s one of computer security’s success stories.
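To make the mechanics concrete, here is a minimal sketch of the kind of provider-side filtering that makes those numbers possible, assuming a simple word-probability (naive Bayes) model. The tokens, probabilities, and threshold are illustrative assumptions, not real data; actual provider filters combine many more signals, such as sender reputation, sending volume, authentication results, and user reports.

```python
# A minimal sketch of provider-side spam scoring, assuming a naive
# Bayes-style word-probability model. All numbers here are illustrative.
import math

# Hypothetical per-token probabilities learned from labeled mail:
# P(token | spam) and P(token | legitimate mail).
TOKEN_PROBS = {
    "free":    (0.20, 0.02),
    "invoice": (0.05, 0.04),
    "meeting": (0.01, 0.10),
    "winner":  (0.15, 0.01),
}
PRIOR_SPAM = 0.5  # assume roughly half of incoming mail is spam

def spam_score(message: str) -> float:
    """Return an estimate of P(spam | message) from the tokens we know."""
    log_spam = math.log(PRIOR_SPAM)
    log_ham = math.log(1 - PRIOR_SPAM)
    for token in message.lower().split():
        if token in TOKEN_PROBS:
            p_spam, p_ham = TOKEN_PROBS[token]
            log_spam += math.log(p_spam)
            log_ham += math.log(p_ham)
    # Convert back from log space to a probability.
    return 1 / (1 + math.exp(log_ham - log_spam))

if __name__ == "__main__":
    msg = "You are a winner of a free prize"
    score = spam_score(msg)
    # Quarantine rather than delete, so false positives can be recovered.
    print("quarantine" if score > 0.95 else "deliver", round(score, 4))
```

The design point is the one the economics made possible: once a provider sees billions of messages, it can learn these statistics across its whole user base and apply the filter automatically, which no individual user could do alone.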
Consider credit card fraud. In the early days of credit cards, banks passed most of the costs of fraud on to consumers. The result was that banks did little to prevent fraud. That changed in 1974, when the US enacted the Fair Credit Billing Act, limiting consumer liability to the first $50. By forcing banks to pay the costs of fraud, Congress gave them an incentive to reduce it. The result was all the anti-fraud measures that are now in place: real-time card verification, back-end expert systems that search transaction streams for signs of fraud, manual card-activation requirements, chip cards, and so on. These measures all reduced overall fraud, and, more importantly, they were measures that customers could not have implemented on their own.
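For a sense of what “back-end expert systems that search transaction streams for signs of fraud” can look like in miniature, here is a hedged sketch of a rule-based check. The rules, thresholds, and field names are hypothetical illustrations, not any bank’s actual system.

```python
# A minimal sketch of a rule-based check on a transaction stream, using the
# kinds of signals banks are known to look at (amount, velocity, geography).
# Every rule and threshold here is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Transaction:
    card_id: str
    amount_usd: float
    country: str
    minutes_since_last: float
    last_country: str

def fraud_flags(tx: Transaction) -> list[str]:
    """Return the list of rules this transaction trips."""
    flags = []
    if tx.amount_usd > 5000:
        flags.append("large-amount")
    if tx.country != tx.last_country and tx.minutes_since_last < 60:
        flags.append("impossible-travel")
    if tx.minutes_since_last < 1:
        flags.append("high-velocity")
    return flags

if __name__ == "__main__":
    tx = Transaction("card-123", 7200.0, "RO", 12.0, "US")
    flags = fraud_flags(tx)
    # Two or more flags: decline in real time and queue for manual review.
    action = "decline-and-review" if len(flags) >= 2 else "approve"
    print(action, flags)
```

Again, the point is who can run this: only the bank sees the whole transaction stream, so only the bank can deploy checks like these once it has a financial reason to.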
UK banks were better able to pass the costs of fraud on to consumers, so they were slower to adopt these measures. The EU’s Payment Services Directives have sought to align consumer protection more closely with US standards, but have left wiggle room for banks to claim that customers must have been grossly negligent. (Amazingly, the UK may make this even worse.) Similarly, in the US, debit cards weren’t secured until another law forced banks to pay the costs of fraud, just as an earlier law had done for credit cards.
In both of those examples, once we got the incentives for security right, the technologies came along to make it happen. With spam, it took a change in the e-mail ecosystem to shift the incentives of e-mail providers. With credit cards, it took a law to shift the incentives of banks. Similarly, Internet+ security is primarily a problem of incentives—and of policy.
Until now, we have left both the market and the government largely alone and able to operate in secret, and they have settled on the situation I described in Part I. That’s the unsatisfactory state of security with the current policies we have in place. The market won’t improve things as long as there’s more near-term profit to be had in spying on us and selling our data, keeping security details secret from consumers and users, and ignoring security and hoping for the best. Governments won’t improve things as long as they’re largely controlled by corporate lobbyists, and by organizations, like the NSA and Justice Department, that prefer spying to security.
If we want to change the balance between losses due to poor security and spending on security improvements, we’re going to have to change the incentives. It will be our representative governments, working transparently, that will change things for the better. Government is the missing piece in Internet+ security today. Although there will certainly be all sorts of problems getting it done, I don’t see any other way it will work. Government involvement, whether in the form of regulation, liabilities, or direct funding, isn’t a panacea, but neither is its absence. At its best, government enables us all to overcome collective action problems, to finance efforts that don’t emphasize near-term payoffs, and to establish baselines of what is acceptable behavior. At its worst, government is captured by private interests or becomes an entrenched bureaucracy more concerned with its own survival than with governing. The reality is likely to be somewhere between the two.
In my book on trust, Liars and Outliers, I wrote that “security is a tax on the honest.” I mean that very generally: the additional costs we all incur because some of us are dishonest. We pay for it in higher store prices because the owners have hired guards and installed security cameras to deal with shoplifting.
Security spending is a kind of dead weight. It doesn’t do anything productive; instead, it reduces the bad things that happen. If banks didn’t need to spend money on security, their services could be cheaper. If governments didn’t need to spend money on police or military, they could lower taxes. If you and I didn’t have to worry about burglary, we could save money by not buying door locks, burglar alarms, and window bars. In some countries, something like a quarter of all labor can be defined as “guard labor.”
Internet+ security is no different. The tech analyst firm Gartner estimates 2018 worldwide Internet security spending at $93 billion. If we want more security, we’re going to have to spend money to get it. We’re going to have to pay higher prices for our computers, phones, IoT devices, Internet services, and everything else. There is simply no other option. The policy questions involve how we’re going to pay for it.
Sometimes it makes sense for us to pay for security individually. Home security works that way. We each buy our own door locks, and some of us also purchase burglar alarm systems. Some of us spend our money on guns in our homes. Maybe the wealthiest among us pay for bodyguards, panic rooms, or, if you’re a James Bond villain, henchmen. These are all expenses, but they’re personal. Whatever you do doesn’t affect me, and whatever I do doesn’t affect you.
Sometimes it makes sense for us to pay for security collectively. Policing works that way. We don’t say: “If you want some policing, then pay for it yourself.” Instead, a portion of the taxes we all pay goes towards community police services. We do this because common benefits are most effectively provided through collective decision-making and funding. The police protect society in general (at least theoretically), regardless of whether specific individuals want protection.
In the end, our improved security for the Internet+ will likely be a mixture of individual and collective expenditures, all of which I will talk about in Part II. Individual expenditures will include security programs for our computers and firewalls for our networks. Collective expenditures will include police investigations of cybercrime, military cyberwarfare units, and investments in Internet infrastructure. Companies will build security into their products, either because the market demands it or because government forces them to. There will be lawsuits over insecurity, insurance to protect against losses, and the resulting security improvements made to prevent those lawsuits and reduce those insurance premiums. It won’t be one thing; it will be a patchwork of many things—just like security in the real world.
It’ll be expensive. But here’s the thing: we’re paying it anyway. It’s hard to get good numbers on how much Internet insecurity costs, but we have a range. A 2017 Ponemon Institute report concluded that one in four companies will be hacked at an average cost of $3.6 million each. A Symantec report estimated that 978 million people in 20 countries were affected by cybercrime in 2017, at a cost of $172 billion. A 2018 study by RAND provided the most comprehensive analysis I’ve seen, and the results are all over the map:
We found that resulting values are highly sensitive to input parameters; for instance, using three reasonable sets of parameters from existing research and our own data analysis, we found that cyber crime has a direct gross domestic product (GDP) cost of $275 billion to $6.6 trillion globally and total GDP costs (direct plus systemic) of $799 billion to $22.5 trillion (1.1 to 32.4 percent of GDP).
Regardless of which estimate you use, it’s a lot of money. And that cost will be a drag on the economy, whether we pay it all in losses, or pay some of it in security measures designed to minimize those losses. Anything we pay in losses is wasted. But anything we pay in improved security results in better security technologies, fewer criminals, more secure corporate practices, and so on—all things that will continue to pay off year after year.
There’s a joke that says technologists look to the law to solve their problems, while lawyers look to technology to solve their problems. In truth, to make any of this work, technology and law have to work together. This is the most important lesson of the Edward Snowden documents. We always knew that technology could subvert law. Snowden showed us that law—especially secret law—can also subvert technology. Both must work together, or neither can work.
Part II describes how we can do that.