7

How We Can Secure the Internet+

By and large, technologies already exist to satisfy all of the principles in the previous chapter. Yes, there are vulnerabilities remaining. Yes, there are usability issues with some of the solutions. But for the most part, these are commonsense security principles that could be put in place today—if there were only some incentive for companies to do so.

We need to create that incentive by crafting strong public policies. There are basically four places policy can exert its influence on society. The first is ex ante: rules that attempt to prevent bad things from happening. These include regulations on products and product categories, licensing of professionals and products, testing and certification requirements, and industry best practices. They also include subsidies or tax breaks for doing things right. The second is ex post: rules that punish bad behavior after it’s already happened. This includes fines for insecurity and liabilities when things go wrong. The third is by mandating disclosure: product-labeling laws and other transparency measures, testing and rating agencies, information sharing between government and industry, and breach disclosure laws. (Some of these disclosures are ex ante and others are ex post.) And the fourth is what I would broadly categorize as measures that affect the environment. These include deliberate market design, funding for research and education, and using the procurement process as a means to drive product improvement more broadly. That’s the toolbox. It’s what we have to work with.

The goal of these kinds of policies isn’t to require that everything be made safe, but to create incentives for safe behavior. It’s to put our fingers on the scales by raising the cost of insecurity or (less commonly) lowering the cost of security.

Critical to any policy is the enforcement process. Standards can be enforced by government, by professional organizations, by industry groups, or by other third parties through either coercive or market pressure. There are four basic ways that Internet+ security policies can be enforced. One: through norms such as best practices. Norms provide a reference point that consumer advocacy groups, the media, and corporate shareholders can use to hold companies to account. Two: voluntarily, through self-regulation. Sometimes industry and professional bodies have an interest in creating and enforcing voluntary standards. These serve to increase consumer trust, and form a protective barrier to entry for new competitors. Three: through litigation. If customers or businesses can sue when they suffer damage, companies increase their security to avoid those lawsuits. Four: through regulatory bodies. Government agencies with the power to issue fines, demand recalls, or force companies to redress defects can enforce standards.

Political considerations may push us towards a particular set of policy solutions as a default. I, for example, hope to demonstrate that government needs to play a major role in all of these policy initiatives. Others prefer more market-led initiatives. Additional options include non-binding government guidelines, voluntary best-practice standards, and multi-stakeholder international agreements. But that’s for Chapter 8; in this chapter, I will focus on the how without worrying about who will do it.

On their own, none of the measures I propose in this chapter are sufficient. Minimum security standards won’t solve everything. Liabilities won’t solve everything. That’s okay, though, because none of them will have to work in isolation. All of the suggestions in this chapter will interact with each other, sometimes complementarily and sometimes contradictorily. If we’re going to secure the Internet+, it’ll be through a series of mutually reinforcing policies—just like everything else in society.

CREATE STANDARDS

First and foremost, we need to create actual standards for many of the principles I listed in Chapter 6.

I use the term “standard” deliberately and in the policy sense. There’s a distinction in law between prescriptive rules, which are rigid, and more flexible, principle-based standards. Standards afford choice or discretion, can provide a framework for balancing several different factors, and can adapt to changing circumstances. So, while a rule might be “The speed limit in snow is 35 mph,” a standard might be “Exercise caution when it’s snowing.” In Internet+ security, rigid rules might include “Consumers must have the ability to inspect their personal data” and “Enable secure default operation.” A standard requiring the owner of a database to “Take due care” to protect personal information leaves a lot of room for interpretation, and that meaning can change as technology changes.

Another Internet+ standard might include the principle that IoT vendors need to “make best efforts not to sell insecure products.” This might sound wishy-washy, but it’s a real legal standard. If an IoT device gets hacked and regulators can show that the manufacturer used insecure protocols, didn’t encrypt its data, and shipped with default passwords enabled, then it obviously did not make best efforts. If, on the other hand, the manufacturer did all the right things and more, and a hacker found a vulnerability that couldn’t reasonably have been predicted or prevented, then it might not be considered at fault.

We’re likely to need both rules and standards: rules to specify what must be done, and more flexible standards to govern how it’s carried out. My guess is that in the dynamic world of Internet+ security, most regulations will be in the form of principle-based standards and not rigid rules.

There will necessarily be different standards for different types of things. For example, we’re not going to treat large, costly stuff like a refrigerator the same way we treat low-cost, disposable stuff like a light bulb. If the latter has a vulnerability, the right thing to do is to throw it away and buy a new one—possibly forcing the manufacturer to pay for the swap. Refrigerators are different, but they’re also likely to have fewer producers. It will be easier to impose standards on those few producers.

In general, it’s much more effective to focus on outcomes than on procedures. It’s called “outcomes-based regulation,” and it’s increasingly common in many areas, from building codes to food safety to emissions reductions. For example, a standard should not prescribe the patching methodology that a product should have. That’s too detailed, and something the government isn’t good at doing, especially in a rapidly evolving technological environment. Better to require a specific result—that IoT products should have a secure way of being patched—and let the industry figure out how to achieve it. This approach to regulation can stimulate innovation rather than inhibiting it. Think of the difference between requiring that appliances be x% more efficient next year and specifying a particular engineering design.
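To make that concrete, here is a minimal sketch of one way a vendor might satisfy an outcome-based requirement that devices install only authenticated updates. The specific mechanism (an Ed25519 signature checked with Python’s cryptography library) is an illustrative assumption on my part, not something a standard would prescribe; picking the mechanism is exactly what outcomes-based regulation leaves to the industry.

```python
# A sketch, not a prescription: verify that a firmware update really came from the
# vendor before installing it. Requires the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The vendor keeps the private key; the device ships with the public key baked in.
vendor_key = ed25519.Ed25519PrivateKey.generate()
device_trusted_key = vendor_key.public_key()

def publish_update(firmware: bytes):
    """Vendor side: sign the firmware image before distributing it."""
    return firmware, vendor_key.sign(firmware)

def install_update(firmware: bytes, signature: bytes) -> bool:
    """Device side: install only if the signature checks out against the trusted key."""
    try:
        device_trusted_key.verify(signature, firmware)
    except InvalidSignature:
        return False                      # reject tampered or unsigned images
    # ...flash the verified image here...
    return True

firmware, signature = publish_update(b"firmware v2.0")
assert install_update(firmware, signature)                # legitimate update installs
assert not install_update(b"evil firmware", signature)    # forged image is rejected
```

Whether a vendor uses this scheme or a different one is its own business; the regulation only cares that unauthenticated updates get rejected.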

We also need to standardize the safety protocols that businesses using Internet+ devices should follow. The National Institute of Standards and Technology’s “Framework for Improving Critical Infrastructure Cybersecurity” is a great example of this type of standard. It’s a comprehensive guide for private-sector organizations to proactively assess and minimize their cybersecurity risk.

Standards regulating business processes, like how to prevent, detect, and respond to cyberattacks, are also important. If done right, these can motivate businesses to improve their overall Internet security and make better decisions about which technologies to buy and how to use them. Less obviously, standardizing these types of business processes also makes it easier for business executives to share ideas, impose requirements on third-party partners, and tie security standards to insurance. They also serve as a model for best practices in litigation, and courts can refer to them when making decisions.

The NIST Cybersecurity Framework is, unfortunately, still only voluntary, but it’s gaining traction. In 2017, it became mandatory for federal agencies. Making it compulsory for everyone would be an easy regulatory win.

Along the same lines, the US government has something called FedRAMP, which is a security assessment and authorization process for cloud services. It also uses a NIST standard, and federal agencies are supposed to buy from certified vendors.

Certainly, any standards will evolve over time—as the threats change, as we learn more about what’s effective and what’s not, and as technologies change and Internet-connected devices become more powerful and pervasive.

CORRECT MISALIGNED INCENTIVES

Imagine a CEO with the following choice: spend an additional 5% on the cybersecurity budget to make the corporate network, products, or customer databases more secure, or save that money and take the chance that nothing will go wrong. A rational CEO will choose to save the money, or spend it on new features to compete in the market. And if the worst happens—think of Yahoo in 2016 or Equifax in 2017—most of the costs of the insecurity will be borne by other parties. Equifax’s CEO didn’t get his $5.2 million severance pay, because he resigned, but he did keep his $18.4 million pension and probably his stock options. His failed bet cost the company somewhere between $130 million and $210 million, but that wasn’t relevant to him at the time. Neither was the fact that it was the wrong long-term decision for the company.

This is a classic Prisoner’s Dilemma. If every company spent the extra money on security, Wall Street would just accept the expense as normal, but with everyone choosing their own short-term self-interest, any company that thinks long-term and spends more is immediately penalized, either by shareholders when its profits are lower or by customers when its prices are higher. We need some way to coordinate companies and convince them all to improve security together.
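For readers who like to see the arithmetic, here is a small sketch of that dilemma in Python. The payoffs are invented purely for illustration; the point is the structure, not the numbers.

```python
# Hypothetical payoffs (profit, arbitrary units) for one firm, depending on whether
# it and its rivals invest in security or skimp. All numbers are invented.
payoff = {
    ("invest", "invest"): 8,   # everyone spends; Wall Street treats it as normal
    ("invest", "skimp"):  5,   # you spend while rivals don't: penalized now
    ("skimp",  "invest"): 10,  # you skimp while rivals spend: short-term advantage
    ("skimp",  "skimp"):  6,   # everyone skimps: cheaper today, collectively worse
}

for rivals in ("invest", "skimp"):
    best = max(("invest", "skimp"), key=lambda me: payoff[(me, rivals)])
    print(f"If rivals {rivals}, my best short-term move is to {best}")
# Skimping wins either way for the individual firm, even though mutual investment
# (payoff 8) beats mutual skimping (payoff 6) -- which is why coordination is needed.
```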

The economic considerations go further. Even after deciding to prioritize security over near-term profits, a CEO will only spend enough money to secure the system up to the value of the company. This is important. Disaster recovery models will be built around losses to the company, and not losses to the country or to individual citizens. And while the maximum loss to the company is everything the company is worth, the true costs of a disaster can be much greater. The Deepwater Horizon disaster cost BP about $60 billion, but the environmental, health, and economic costs were much greater. Had that company been smaller, it would have gone out of business long before it paid out all that money. All of those extra costs that a company avoids paying are externalities, and borne by society.

Some of this is psychology as well. We are biased towards preferring sure smaller gains over risky larger gains, and risky larger losses over sure smaller losses. Spending on preventive security is a sure small loss: the cost of more security. Reducing spending is a sure small gain. Having an insecure network, or service, or product, is risking a large loss. This doesn’t mean that no one ever spends money on security, only that it’s an uphill battle to overcome this cognitive bias—and it explains why so often CEOs are willing to take the chance. Of course, this assumes that CEOs are knowledgeable about the threats, which they almost certainly are not.

This willingness to assume the risks of having an insecure network results partly from the lack of clear legal liabilities for producing insecure products, which I’ll talk about more in the next section. Years ago, I joked that if a software product maimed one of your children, and the software manufacturer knew that it would but decided not to tell you because it might hurt its sales, it still would not be liable. That joke only worked because back then, software couldn’t possibly maim one of your children.

There are other reasons security incentives aren’t aligned properly. Big companies with few competitors don’t have much incentive to improve the security of their products, because users have no alternative; they either buy a product—security warts and all—or go without. Small companies don’t have much incentive either, because improving security will slow down product development and constrain their products’ features, and they won’t be rewarded for it by the market.

Worse, companies have strong incentives to treat security problems as PR issues, and keep knowledge about security vulnerabilities and data compromises to themselves. Equifax learned about its 2017 hack in July, but managed to keep the fact secret until September. When Yahoo was hacked in 2014, it kept the fact secret for two years. Uber, for a year.

When this information does become public, it’s still not enough. Despite the bad press, congressional inquiries, and social media outrage, companies generally don’t get punished in the market for bad security. One study found that stock prices of breached companies are unaffected in the long term.

We’ve seen the consequences of badly aligned incentives before. In the years leading up to the 2008 financial crisis, bankers were effectively gambling with other people’s money. It was in their interest to make as much profit as they could in the short run, but they had no incentive to think about the consequences for families who put all their savings into risky products. Most consumers had no choice but to trust their bankers’ advice, because they were not financial experts and could not evaluate their risks. After the crisis, Congress introduced the Dodd-Frank Act to realign incentives. Bankers now face higher statutory duties (for example, they must first consider whether a consumer could reasonably repay a loan before writing one) and increased penalties for misconduct.

Something like 90% of the Internet’s infrastructure is privately owned. Among other things, this means that it is managed to optimize the short-term financial interests of the corporations that can affect it, not the interests of users or the overall security of the network.

We need to change the incentives so that companies are forced to care about the security implications of their products.

One way to do this is by fining companies—and their directors—when they get things wrong. These fines have to be big enough to change the company’s risk equation. The cost of insecurity is generally calculated as threat multiplied by vulnerability multiplied by consequences. If that’s less than the cost to mitigate the risk, a rational company accepts the risk. Fines, assessed either after an incident or as a penalty for insecure practices, raise the cost of insecurity and make paying for security that much more financially attractive.
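As a back-of-the-envelope illustration of that calculation, consider the sketch below. Every number in it is invented; the point is how a fine changes the comparison.

```python
# Expected cost of insecurity = threat x vulnerability x consequences (all assumed).
def expected_loss(threat: float, vulnerability: float, consequences: float) -> float:
    """Annualized expected cost of doing nothing."""
    return threat * vulnerability * consequences

threat = 0.30             # chance of being targeted this year (assumed)
vulnerability = 0.50      # chance an attack succeeds if attempted (assumed)
consequences = 2_000_000  # cost of a successful breach, in dollars (assumed)
mitigation = 400_000      # cost of the security program that would prevent it

print(f"Expected loss:  ${expected_loss(threat, vulnerability, consequences):,.0f}")
# $300,000 < $400,000, so the "rational" company accepts the risk.

fine = 1_500_000          # a regulatory fine added to the consequences of a breach
print(f"With the fine:  ${expected_loss(threat, vulnerability, consequences + fine):,.0f}")
# $525,000 > $400,000 -- the fine flips the calculation in favor of paying for security.
```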

In some cases, the fines might drive companies into bankruptcy. This is severe, but it’s the only way to demonstrate to the rest of the industry that we’re serious about cybersecurity. If a person kills your spouse, they will get sent to jail and might even be subjected to the death penalty. If a corporation kills your spouse, it should face the same end. Author John Greer proposes sending convicted corporations to “pseudo-jail”: they would be taken over by the government, have all investors wiped out, and then be sold off at some later date. If we’re afraid to impose the death penalty on corporations, then corporations will realize they can skimp on security and count on the public’s mercy.

Another way of thinking about this is that if a company can only stay in business by externalizing the cost of security, maybe it shouldn’t stay in business. Those companies don’t ask the public to pay their employees’ salaries, so why should we have to pay for their security failures? It’s like a factory that can only stay in business by illegally polluting; we’d all be better off if we closed it down.

Consider regulated professions like law and accountancy. Firms specializing in these professional services take their legal responsibilities seriously—in part because the consequences can be dire. You’d be hard-pressed to find a partner in an auditing firm for whom the collapse of Arthur Andersen does not loom large. Arthur Andersen was a “Big Five” global accounting firm, with 85,000+ employees, that more or less disappeared overnight after accusations that it had inappropriately audited Enron’s financial accounts—a serious regulatory offense.

And this failure of Arthur Andersen illustrates another point. The Arthur Andersen employees did fine, as pieces of the company were acquired by other companies. A similar company driven out of business because of negligent security practices would also have its departments acquired by other, hopefully more diligent, companies.

But even this isn’t enough. In particular, startup companies would rationally ignore security and risk fines or even pseudo-jail. They’re already risking much more on much less, and know that success depends on luck at least as much as on business skill. It’s smart for them to use their limited time and budget to grow bigger faster, take the chance, and worry about security later. Their investors and board members would counsel that as well.

In 2015, Volkswagen was caught cheating on its emissions control tests. Because software controls engine operation, programmers were able to create an algorithm that detected when an emissions test was being conducted and modified the engine’s behavior in response. The result? From 2009 to 2015, 11 million cars worldwide—500,000 in the US—emitted up to 40 times more pollutants than local laws allowed. The company was hit with fines and penalties totaling almost $30 billion, and that’s significant. But my fear is that the big lesson of the Volkswagen case won’t be that companies that cheat will get caught; it’ll be that companies can get away with cheating for six years. That’s longer than the tenure of most CEOs, who would expect to cash out long before the big fines kicked in. (Note: one VW manager and one engineer received prison sentences for their actions.)

To be sure, this is just one aspect of a much larger problem of incentives inside corporations. The only way to motivate companies is to hold company executives and board members (including venture capitalists, who routinely sit on the boards of companies they invest in) personally responsible for security failures. This will raise the personal costs of insecurity, and make it less likely that those individuals will cut corners out of self-interest.

Such accountability may be coming soon. Under current law in the US and the GDPR (General Data Protection Regulation) in the EU, executives and board members could face liability for data breaches. And the force of public expectation is moving in this direction, too. Equifax’s CEO, CIO, and CSO were all forced into early retirement in the wake of that hack. In the UK, the CEO of TalkTalk resigned after that company was fined £400,000 because it leaked customer data.

There is precedent for holding these people accountable. The Sarbanes-Oxley Act regulates corporate financial conduct and misconduct. It was passed in 2002 as a response to the crimes and abuses of Enron in order to rectify the many conflicts of interest that undermined the effectiveness of a lot of corporate law. According to Sarbanes-Oxley, directors can be held personally responsible for their companies’ behavior, which makes them highly motivated not to let the companies do anything illegal. The law’s reality might be much less than its aspirations, but it’s definitely the right idea. We need to think about doing the same kind of thing with software security.

I’m not going to pretend that changing liability responsibilities won’t be a huge battle. It’s hard to add liabilities where they’re not already required, because it’s a radical change for the affected industries—and they will fight it every step of the way. But not doing it will be worse for society.

Finally, correcting incentives doesn’t have to be just about imposing penalties for getting things wrong. Companies might be more likely to publicly disclose information about security breaches if they received some level of liability exemption. They might also be more inclined to share vulnerability information with competitors or with the government if they receive assurances that their sensitive intellectual property will be protected. And tax credits have their place, too.

CLARIFY LIABILITIES

Fines by government agencies are not the only way to tilt the Internet+ towards security. The government can change the law to make it easier for users to sue companies when their security fails.

SmartThings is a centralized hub that works with compatible light bulbs, locks, thermostats, cameras, doorbells, and more, controlled by a free phone app. In 2016, a group of researchers found a boatload of security vulnerabilities in the system. They were able to steal the codes that would unlock the door, trigger a fake fire alarm, and disable some security settings.

If one of these vulnerabilities enabled a thief to break into your home, whose problem would this be? Yours, of course. If you read SmartThings Inc.’s terms of service, you’ll find that you use SmartThings products entirely at your own risk, that under no circumstances is SmartThings liable for any damages whatsoever in the case of any failure or malfunction, and that you agree to hold SmartThings harmless for any and every possible claim.

Since the beginning of personal computers, both hardware and software manufacturers have disclaimed liability when things go wrong. This made some sense in the early years of computing. The reason we have the Internet is that companies were able to market buggy products. If computers were subject to the same product liability regulations as stepladders, they probably wouldn’t be available on the market yet.

Some of this inequity is enforced by the terms of service that govern the liability relationship between you and the company whose software you use. These are the “terms of service” that you must confirm you’ve read, even though no one reads through them. Not that it would matter if you did; the companies reserve the right to modify terms at will without telling you.

Companies aren’t liable if their programs lose your data, or expose it to criminals, or result in harm. Neither are cloud services. Terms of service more or less force you to assume all the risk when you use companies’ products and services, and they protect the companies from lawsuits when problems arise.

Suing software vendors is also expensive. Most users can’t go it alone; they need class-action lawsuits. To forestall this, many terms of service include binding arbitration agreements. Such agreements force unhappy users to go into arbitration, which is generally much friendlier to the companies than court is. Preventing class-action suits also greatly favors the companies.

All of this is made worse by the exemption of software from normal product liability law. By international standards, the US has pretty tough product liability laws, but only when it comes to tangible products. Users of defective tangible products can sue anyone in the chain of distribution, from the manufacturer who made it to the retailer who sold it. Software manages to evade all of this, both because it’s often licensed rather than purchased, and because code is legally categorized as a service rather than a product. And even when it is a product, the manufacturer can disclaim liability in the end-user license agreement—something the courts have upheld.

There are two other big problems.

First, where defective software has resulted in losses, courts have been reluctant to accept that software companies caused that harm. Judges have tended to blame hackers for exploiting vulnerabilities, not companies for creating the opportunity in the first place. The need for evidence complicates this further. If you live in the US, you were almost certainly a victim of the Equifax breach. But if your information is used for fraud and identity theft, you won’t be able to prove that the Equifax hack was to blame. Your information has probably been stolen on multiple occasions from multiple databases. This is why it is so hard to sue companies like Equifax when they lose your personal data: all of the data Equifax failed to secure is already available on the black market, so—the argument goes—one more breach caused no new harm.

After the Mirai botnet caused the biggest DDoS attack in US history, the FTC tried and failed to hold the router manufacturer D-Link accountable: it could not prove that any individual D-Link routers were part of the Mirai botnet—only that D-Link routers were insecure in general and that some insecure routers were used in the attack.

Second, users consistently struggle to prove that they have suffered “harm,” as the law defines that term. Courts will only hear cases of this type where there are allegations of monetary harm, which is very hard to demonstrate for privacy violations.

In 2016, the FTC issued a finding that LabMD had engaged in “unfair practices” by failing to protect its customers’ sensitive information. The FTC found that LabMD had not implemented even basic data security measures, and had left sensitive medical and financial information exposed for almost a year. LabMD challenged the FTC’s ruling in the US Court of Appeals. It argued that since there were no known instances of the exposed data being used for illicit purposes, its customers had not been “harmed” by its lax security and the FTC had no authority to sanction it. The signs are that the court will rule in LabMD’s favor—a ruling that will hamper the agency’s future ability to penalize organizations that breach customer privacy.

In Chapter 1, I mentioned the electronic-lock company Onity, whose hotel locks in major hotel chains were hacked to enable burglaries. The hotel chains’ 2014 class-action lawsuit was thrown out because the locks still functioned, and the plaintiffs couldn’t point to actual burglaries as a result of the security vulnerability.

Liability law doesn’t have to work this way. We need only look at the history of product liability for manufactured goods. After the Industrial Revolution, the law at first continued the harsh principle of caveat emptor: “Let the buyer beware.” However, as manufacturing became industrialized, and products more complex, courts and legislators slowly recognized that it was unreasonable to expect consumers to assess the safety of products they bought. From the late 1800s, product liability laws gradually emerged. Then, from the mid-20th century, most industrial economies moved to “strict liability” standards. If their product causes physical harm, manufacturers are liable, even if they were not negligent in making that product defective. In the 1940s, the California Supreme Court famously explained why strict liability for manufactured goods makes sense: “Public policy demands that responsibility be fixed wherever it will most effectively reduce the hazards to life and health inherent in defective products that reach the market.” This argument also applies to the Internet+.

Additionally, people shouldn’t have to demonstrate that they suffered monetary harm in order to hold software vendors liable for preventably faulty products. The law could provide for statutory damages in the event that companies have ineffective security for the devices they sell, the services they provide, or the data they keep. Statutory damages would be triggered once poor security was proved, without any further requirements for monetary harm. This is the way wiretap law works; if a police department is proved to have wiretapped someone illegally, that police department has to pay statutory damages. This is also the way copyright law works; an infringer has to pay damages to the copyright holder, even if there was no economic harm at all. It obviously won’t work for all areas of Internet+ security liability, but it will for some.

I believe all these types of arguments will gain traction in relation to the Internet+. Already, regulatory agencies are considering issues of data privacy and computer security. Also, many of the products that are becoming computerized and connected—cars, medical devices, appliances, toys, and so on—are already subject to liability laws. When connected versions of these things start killing people, the courts will take action and the public will demand legislative change.

However, today’s software is still more or less in the dark ages of product liability. When things go wrong, the loss generally must be borne by the user, while companies get off scot-free.

Liability won’t necessarily stifle innovation. Liabilities aren’t meant to be a black-and-white, all-or-nothing means of government intervention. The law regularly establishes carve-outs from liability in particular circumstances. This happened in the 1980s, when the small-aircraft industry was almost bankrupted because of excessive liability judgments. There could also be caps on damages, as there are for some medical malpractice claims, although we need to be careful that any caps don’t undermine the desired incentives of liabilities. And while it’s clear that software manufacturers don’t deserve 100% of the liability for a security incident, it is equally clear that they don’t deserve 0%. Courts can figure this out.

Where there is risk of liability, the insurance industry follows. A properly functioning insurance market will protect companies from being forced out of business by liability claims. It makes them factor in the risk that their products will harm users as a normal cost of doing business.

Insurance is also a self-reinforcing mechanism for improving security and safety, while still allowing companies room to innovate. The insurance industry imposes costs on bad security. A company whose products and services prove to be insecure will face higher premiums, motivating it to spend money to improve security and reduce those premiums. On the other side, a company that adheres to reasonable standards and is hacked anyway will be protected from a large court judgment because its insurance company will pay it.

Insurance also works on the individual level. If we require people who purchase dangerous technologies to also purchase insurance, then we are effectively privatizing the regulation of these technologies. The market will determine what that insurance will cost, depending on the security of the technologies. Manufacturers could make their products cheaper to insure by adding more security. Either way, consumers will pay for the risk inherent in what they buy.

There are challenges to creating these new insurance products. There are two basic models for insurance. One: the fire model, where individual houses catch on fire at a fairly steady rate, and the insurance industry can calculate premiums based on the predicted rate. Two: the flood model, where infrequent, large-scale events affect large numbers of people—but again at a fairly steady rate. Internet+ insurance is complicated because it follows neither of those models but instead has aspects of both: individuals are hacked at a steady (albeit increasing) rate, while class breaks and massive data breaches affect lots of people at once. Also, the constantly changing technology landscape makes it difficult to gather and analyze the historical data necessary to calculate premiums.
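Here is an illustrative calculation, with invented numbers, of why that hybrid is so awkward to price: independent losses average out from year to year, while correlated losses from a single class break do not.

```python
# All figures are invented for illustration.
POLICYHOLDERS = 100_000
LOSS = 50_000              # assumed payout per affected policyholder

# Fire model: independent incidents at a steady rate (say 0.2% of customers a year).
fire_rate = 0.002
fire_expected = fire_rate * POLICYHOLDERS * LOSS      # about the same every year
fire_bad_year = fire_expected * 1.2                   # bad years are only modestly worse

# Class-break model: one vulnerability hits 10% of customers at once, ~1 year in 50.
break_prob, break_share = 0.02, 0.10
break_expected = break_prob * break_share * POLICYHOLDERS * LOSS
break_bad_year = break_share * POLICYHOLDERS * LOSS   # the year it actually happens

print(f"Fire model:  expected ${fire_expected:,.0f}/yr, bad year ${fire_bad_year:,.0f}")
print(f"Class break: expected ${break_expected:,.0f}/yr, bad year ${break_bad_year:,.0f}")
# Identical long-run averages ($10 million a year), wildly different worst cases
# ($12 million versus $500 million) -- and Internet+ risk mixes the two.
```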

Perhaps it would be more accurate to use a health-insurance model: sickness is inevitable, and contagions can spread widely, so insurers should focus on risk prevention and incident response rather than straight reimbursement. Insurance companies are starting to figure out how to price premiums for cybersecurity insurance, though—in some cases scoring companies according to their security practices. More will happen once we better clarify liabilities.

CORRECT INFORMATION ASYMMETRIES

Recently, I had occasion to research baby monitors. They’re surveillance devices by design, and can pick up a lot more than a baby’s cries. Of course, I had a lot of security questions. How is the audio and video transmission secured? What’s the encryption algorithm? How are encryption keys generated, and who has copies of them? If data is stored on the cloud, how long is it stored and how is it secured? How does the smartphone app, if the monitor uses one, authenticate to the cloud server? Many brands are hackable, and I wanted to buy a secure one.

Uniformly, the product marketing materials are minimally informative. The analog monitors say nothing about security. The digital ones make vague statements like: “[Our] technology transmits a secure, encrypted signal so you can rest assured you’re the only one who can hear your baby.” Some claim to follow various wireless standards, and pair sender and receiver via some sort of encryption. Others rely entirely on transmission power and channel switching for security. All toss around the word “secure” without explaining what it means. Basically, comparison shopping is impossible. I couldn’t tell the good from the bad, which means the average consumer has no chance.

Security is complex and largely opaque, and right now there is no way for users to distinguish secure products from insecure ones. Baby monitors are pretty simple; the problem will only get worse as IoT devices—and the interconnections between them—become more complex. The lack of information combined with the complexity of the systems is disempowering to consumers, and almost certainly lulls them into thinking that devices are more secure than they are.

In economics, this is known as a “lemons market.” Vendors only compete on features that buyers can perceive, and ignore features—like security—that they can’t. So, vague, pacifying claims about security are more likely to lead to a sale than detailed explanations of security features.

The result is that insecure products drive secure products out of the market, because there’s no return on investment for security. We’ve seen this over and over again in computer and Internet security, and we’re going to see it in the Internet+ as well. Security must be made meaningful, vivid, and obvious to the consumer; once consumers know more, they will be empowered to make better choices.

Many industries have labeling requirements. Think of nutrition and ingredient labels on food, all the fine print that accompanies pharmaceuticals, fuel efficiency stickers on new cars, and so on. Such labeling enables consumers to make better buying decisions. There is nothing like this today in computer security.

One helpful labeling requirement for computerized products would be a discussion of the threat model that the device is designed to be secure against. Looking again at that baby monitor—random channel switching might thwart a casual eavesdropper, but not a more sophisticated attacker. Other security measures would be stronger than channel switching. If the manufacturer explains the security in a simple manner, consumers can do more comparison shopping. I’m thinking of statements like: “This baby monitor uniquely pairs the transmitters and receivers to each other”; “Transmissions are encrypted between transmitter and receiver, which secures against neighborhood eavesdroppers”; “Transmissions on your wireless network are encrypted, which secures against network eavesdroppers”; or “This product encrypts audio and video sent to the cloud, which secures against Internet eavesdroppers.” And while a lot of that may be gobbledygook to the average consumer, product review sites can use the information to make better recommendations.

Samsung did something like this with its smart television, but it was buried in the fine print of its policy:

Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

Product labels should also explain the user’s security responsibilities. Baby monitors are routinely put in bedrooms. It’s much easier to leave a monitor on all the time than to switch it on and off, which means that it’s very likely going to capture and transmit activity that the user doesn’t want broadcast. I want the product label to put the user on notice that this will happen. “When the transmitter is on, it relays every sound it captures to our headquarters in San Jose.” Or: “This product will need to be updated regularly; registered users will receive updates for at least the next five years.” The general idea is that users need to be informed where the product’s security features start and end, how security needs to be maintained, and when users are on their own.
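Such a label could even be machine-readable, so that review sites and retailers could compare products mechanically. The sketch below is hypothetical: the field names and format are my own invention, not an existing standard.

```python
# A hypothetical security label for a baby monitor. No such standard exists today;
# every field name and value here is invented for illustration.
security_label = {
    "product": "Example Baby Monitor 2000",
    "protects_against": [
        "Neighborhood eavesdroppers (transmitter and receiver are uniquely paired)",
        "Network eavesdroppers (transmissions on your wireless network are encrypted)",
        "Internet eavesdroppers (audio and video sent to the cloud are encrypted)",
    ],
    "does_not_protect_against": [
        "Anyone with access to the vendor's cloud servers",
    ],
    "user_responsibilities": [
        "When the transmitter is on, it relays every sound it captures to the vendor's servers",
        "The device must be connected to the Internet to receive security updates",
    ],
    "update_commitment_years": 5,   # registered users receive updates at least this long
    "requires_changing_default_password": True,
}

# A review site could then rank products on fields like this:
print(f"Updates promised for {security_label['update_commitment_years']} years")
```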

Perhaps the most useful way to give customers information on product labels is a rating system. Secure products and services could get a higher rating, a seal of security, or some other simple marking that could guide customers’ buying decisions. It’s an interesting idea, and lots of government agencies are thinking about this—in the UK, the EU, Australia, and elsewhere.

We need more transparency in cloud services. Right now, you have no idea how Google secures your e-mail. My hope is that liabilities will change this. If a retail company is liable for securing its customers’ data, then it will have to hold its cloud service providers liable. And those service providers will, in turn, have to hold their cloud infrastructure providers liable. This sort of cascading liability will force everyone to be more transparent, if for no other reason than to satisfy demands from insurance companies.

If there were security standards, a government agency or an independent organization could test products and services against them to assign the ratings. Self-reporting could also work. In 2017, members of Congress introduced the Cyber Shield Act, which would have directed the Department of Commerce to develop security standards for IoT devices. Companies could display a label on their products touting their adherence to the standards. The bill went nowhere, but an industry consortium or another third party could easily do the same. The standards could even be tied to insurance.

Alternatively, companies might be rated according to their processes and practices, perhaps using the design principles from Chapter 6. That’s something like what Underwriters Laboratories does. The group was created by the insurance industry in 1894 to test the safety of electrical equipment. It doesn’t demonstrate that a product is safe, but uses a checklist to confirm that the manufacturer followed a set of safety rules.

Consumers Union—the organization behind Consumer Reports—has been looking into doing some kind of security testing of IoT products for years. It may even have a viable rating system by the time this book is published. But while it may be able to rate automobiles and major appliances, the sheer number—and the rapid change—of cheaper consumer devices will be too much for any general-purpose organization to deal with.

In the computer security field, two existing rating programs are worth mentioning. The Electronic Frontier Foundation’s Who Has Your Back? project evaluates companies’ commitments to protecting users when the government seeks private data. And the Open Technology Institute’s Ranking Digital Rights initiative evaluates companies on how they respect freedom of expression and privacy.

When it comes to products, though, there will be some pitfalls in setting up a security rating system. One is that there are no simple tests with simple results. We can’t extensively test a piece of software and pronounce it secure. And any security rating changes with time; what was verifiably secure last year might be demonstrably insecure this year. This is more than a matter of time and expense. We’re butting up against some fundamental limits of computer science: our inability to declare something “secure” in any meaningful way.

Still, there’s a lot we can do. We can test a product against a known suite of attack techniques and declare it resistant. We can test it for behaviors indicative of security failures and demonstrate a corner-cutting development process. We can test against many of the design principles from Chapter 6. And we can do much of this without the consent of the companies developing and selling the software.
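To give a flavor of what such outside testing could look like, here is a minimal sketch of black-box checks for red-flag behaviors. The specific checks and the placeholder address are assumptions for illustration; a real test suite would be far larger.

```python
# A sketch of black-box checks a rating organization might run against a device on
# its own network, without the vendor's cooperation. Illustrative only.
import socket

RISKY_PORTS = {23: "telnet", 21: "ftp", 80: "unencrypted web administration"}

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_device(host: str) -> list:
    """Flag behaviors indicative of a corner-cutting development process."""
    findings = []
    for port, service in RISKY_PORTS.items():
        if port_open(host, port):
            findings.append(f"{service} exposed on port {port}")
    return findings

if __name__ == "__main__":
    device = "192.0.2.1"   # placeholder (TEST-NET) address; substitute the device under test
    print(scan_device(device) or "no obvious red flags -- which is not the same as secure")
```

Passing checks like these says nothing definitive; failing them says a lot.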

We’ll have to find the right balance of manageable testing requirements and security. For instance, the FDA requires testing of computerized medical devices. Initially, its rules required complete retesting if there was any software change, including patches to fix vulnerabilities, but it has since revised those rules so that updates that don’t change functionality don’t require retesting. This is not the most secure way to do things, but it is probably the most reasonable compromise.

We’ll also have to figure out how to educate customers to understand the meaning of the ratings, rankings, descriptions, or seals of approval. How do we explain that a logo on an IoT toy saying “Meets A-1 industry security standards” doesn’t actually mean that the toy is guaranteed to be secure—only that it meets some minimal security standards that may be good enough today against certain threats? In the food world, there are a zillion competing ratings and scales; we don’t want that to happen with Internet+ security.

Despite these problems, some sort of security rating will be essential. I have trouble imagining any market-based security improvements without it.

Beyond product labeling and security ratings, there are two other good ways to give consumers more information. The first is breach disclosure laws, which require companies to notify individuals when their personal information is stolen. Such notifications not only alert people that their data has been stolen, but give all of us information about the security practices of the companies that store personal data. In the US, 48 states have these sorts of laws. They’re all different: what information counts as personal, how long the companies have to disclose, when they can delay disclosure, and so on.

There have been several failed attempts at a national law. In theory, I am in favor of this, but I worry that it will be less comprehensive than the laws of some states—notably Massachusetts, California, and New York—and that it will preempt all the existing state laws. Right now, the state laws are de facto national laws, because every successful company has users in all 50 states.

These state laws need to be expanded. Incident reporting where there’s no loss of personal information remains voluntary, and participation is generally low because companies fear bad press or litigation. Breach disclosure laws need to encompass other sorts of breaches as well. If a piece of critical infrastructure is breached, for example, there should be a requirement for the owner to report it.

The second way to give consumers more information is to improve vulnerability disclosure. In Chapter 2, I talked about how security researchers find vulnerabilities in computer software, and how publishing those research results is critical to motivating software vendors to fix them. Vendors hate the process because it makes them look bad, and in some cases they’ve successfully sued researchers under the DMCA and other laws. This situation needs to be reversed; instead, we need laws protecting researchers who find vulnerabilities, responsibly disclose them to the software vendor, and publish their results after a reasonable time. Not only will improved vulnerability disclosure nudge companies to increase their security to avoid unflattering publicity, but it will also give consumers important information about the security of different products.

INCREASE PUBLIC EDUCATION

Public education is vital for Internet+ security. Citizens need to understand their role in cybersecurity; like every other aspect of personal or public safety, our individual actions matter. Additionally, an educated public will be able to pressure companies to improve their security, either by refusing to use insecure products or services, or by pressuring government to take action where appropriate.

There have been attempts at public-awareness campaigns for Internet security. The Department of Homeland Security unveiled a “Stop.Think.Connect.” campaign in 2010. That you almost certainly aren’t familiar with it demonstrates how effective it has been. Perhaps other campaigns can do better.

Education is hard. We need to educate people that security matters and how to make security choices, without necessarily turning them into security engineers. These are technical issues, yet we don’t want to create a world where only technical experts are guaranteed security. That’s where we are right now with our laptops and home networks. The complexity of these systems is beyond the understanding of average consumers. You have to be an expert to configure them, so most people don’t bother. We have to do better.

Today, a lot of security advice we give to users just covers for bad security design. We tell people not to click on strange links. But it’s the Internet; clicking is what links are for. We also tell them not to insert strange USB drives into their computers. Again, what else would you possibly do with a USB drive? We have to do better: we need systems that remain secure regardless of which links people click on, and regardless of which USB drives they stick into their computers.

Compare this to the automobile. When cars were first introduced, they were sold with a repair manual and a tool kit; you needed to know how to fix one to drive one. As cars became easier to use and service stations more common, even the mechanically disinclined were able to buy one. We’ve reached this place with computers, but not with computer security.

There are areas where public education won’t help. For low-cost devices, there is no market fix, because the threat is primarily botnets, and neither the buyer nor the seller knows enough about them to care (except for people like me, and maybe some of my readers). The owners of the webcams and DVRs used in the denial-of-service attacks can’t tell, and mostly don’t care: their devices were cheap to buy, they still work, and the owners don’t know any victims. The sellers of those devices don’t care: they’re now selling newer and better models, and their customers only care about price and features. Think of it as a kind of invisible pollution.

For more expensive devices, and as safety risks increase, the market works better. Automobile drivers and airline passengers want their modes of transport to be secure. Education might actually help people make better choices about the security of their products, just as they do today about the safety of their automobiles.

We can teach users specific behaviors, as long as they’re simple, actionable, and make obvious sense. In public health, we’ve taught people that they should wash their hands, sneeze into the crook of their arm, and get an annual flu vaccine. Fewer people do those things than we might like, but most know they should.

RAISE PROFESSIONAL STANDARDS

There are many rules you have to follow if you want to build a building. You need to hire an architect to design it. That architect has to be state certified. Any complicated engineering needs to be approved by a certified engineer. The construction company you hire needs to be licensed. The electricians and apprentices that the construction company hires all need to be licensed by state boards. To navigate the complexities of all this, you’ll certainly need to hire an attorney and accountant. Both will be certified and licensed through tests administered by state professional certification boards. Of course, the real estate agent who helped you buy the land was licensed as well.

Professional certification is a higher standard than occupational licensing, but right now there is no system for either certifying or licensing software designers, software architects, computer engineers, or coders of any kind. Creating one is not a new idea; it has been promoted in the industry for decades. Existing organizations for software professionals, like the Association for Computing Machinery and the IEEE Computer Society, have studied this issue at length and have proposed several different licensing schemes and professional development criteria for software engineers. The International Organization for Standardization (ISO) has some relevant standards as well. There has always been fierce pushback from developers, both for personal reasons and because software engineering is not engineering in the traditional sense. It isn’t a discipline where the engineer applies known principles based on science to create something new. This makes it hard to figure out what a professional software engineer is, let alone what competencies they require.

Even so, I believe this will change. There will be some level of software engineer that will be licensed—either by the government or by some professional association with the government’s approval—and that engineer will be required to sign off on a software design in the same way that a certified architect signs off on building plans.

It will take a lot of work to get there, though. You can’t just create a licensed profession out of thin air; an entire educational infrastructure needs to be in place. So before it happens, we’ll need to be able to consistently train software engineers in reliability, safety, security, and other responsibilities. We’ll need a curriculum in colleges and universities, and continuing education for engineers in the middle of their careers. We’ll need professional organizations to step up and define what accreditation looks like, and what sort of recertification is needed in this fast-changing environment. We’ll also need to figure out how to account for the international nature of software development.

None of this will be easy, and it will likely take decades to get it all working. It took three centuries for medicine to become a profession in post-Renaissance Europe; we can’t wait anywhere near that long. Today, anything we can do to move towards increased professionalism will benefit the field in the long run.

CLOSE THE SKILLS GAP

Along with raising professional standards, we need to drastically increase the number of cybersecurity professionals.

The lack of trained people is called the cybersecurity skills gap, and it’s been a major topic of conversation at almost every IT security event I have attended in recent years. Basically, there aren’t enough security engineers to meet the demand. This is true at every level: network administrators, programmers, security architects, managers, and chief information security officers.

The numbers are scary. Various reports forecast 1.5 million, 2 million, 3.5 million, or 6 million cybersecurity jobs going unfilled globally in the next few years because demand is exceeding supply. Whichever estimate is correct—and my prediction is towards the higher side—this could be a disaster. All of the technical security solutions discussed in this book require people, and if we don’t have the people, the solutions won’t get implemented.

Jon Oltsik, an industry analyst who has been following this issue, writes: “The cybersecurity skills shortage represents an existential threat to our national security.” Given current trends, it’s hard to argue with that assessment.

The solution is both obvious and difficult to implement. On the supply side, we need to expose students to cybersecurity as children, graduate more software engineers with a cybersecurity specialty, and create midcareer retraining programs to shift practicing engineers over to cybersecurity. We need to attract more women and minorities to cybersecurity careers. We need to pour money into all of this, and quickly.

On the demand side, we need to automate away those jobs wherever possible. We’re already starting to see the benefits of automation for security, and the situation will improve significantly once the benefits of machine learning and artificial intelligence start to kick in. This leads directly into my next recommendation.

INCREASE RESEARCH

We have some serious technical security problems to solve. And while there is already a lot of research and development under way, we need more strategic long-term, high-risk/high-reward research into technologies that can dramatically alter the balance between attacker and defender. Today, there are far too few resources devoted to this type of R&D.

Most business organizations won’t engage in this sort of research because any payoff is distant and nebulous. The majority of substantive improvements in the coming decades will result from government-funded academic research.

We need smaller-scale, near-term applied research as well. Academic institutions can’t do it all; companies must also step in. A research tax credit could provide the proper incentive for development of secure products and services.

Research has the potential to change some of the fundamental assumptions about Internet+ security discussed in Chapter 1. I’ve seen calls for a cyber Manhattan Project, a cyber moonshot, and other similar buzz-phrases. I don’t know if we’re ready for anything like that, though. Those sorts of projects need specific tangible goals. A generic goal of “improving cybersecurity” just doesn’t cut it.

Whatever the mechanism, we need a concerted and sustained research and development project for new technologies that can secure us against the wide variety of threats we’re facing now and in the coming years and decades. It’s ambitious, yes, but I don’t think we have any alternative. The primary thing holding us back is the tech industry’s severe lack of trust in government.

Again, nothing new here. This call has been made for climate change, food and overpopulation, space exploration, and many other problems we collectively face.

FUND MAINTENANCE AND UPKEEP

There’s a lot of talk in the US about our failing national infrastructure—roads, bridges, water systems, schools and other public buildings—and the need for a massive investment to modernize them all. We also need to be talking about a massive investment in our Internet infrastructure. It’s not as old as our physical infrastructure, but in some ways it’s just as decrepit.

Computers degrade faster than conventional physical infrastructure. You know this is true; you are much more likely to upgrade your laptop and phone because the older models don’t work as well than to upgrade your automobile or refrigerator. Companies like Microsoft and Apple maintain only the most recent few versions of their operating systems. After a decade, it can actually be risky to keep using old computer hardware and software.

We’re not going to replace the Internet with something else; the current technologies are too pervasive for that to work. Instead, we’ll have to upgrade pieces of it, one at a time, all while maintaining backwards compatibility. Someone needs to coordinate that. Someone also needs to fund development and maintenance of the critical pieces of Internet infrastructure. Someone needs to work with technology companies to help secure shared pieces of infrastructure, and respond quickly to vulnerabilities when they occur. In the next chapter, I argue that government is that “someone.”

After we’re done upgrading our critical Internet infrastructure, we’ll need to keep upgrading it. The era where you can build a system and have it work for decades is over (if it ever existed); computer systems need to be upgraded continuously. We need to accept this new, shorter life span; we need to figure out how to keep our systems current; and we need to get ready to pay for it. This will be expensive.