Story one: In the aftermath of the 2017 Equifax hack, there was bipartisan outrage in Congress and a lot of talk about regulating data brokers—and those who collect and sell our personal data in general. Despite some very strong words, a flurry of congressional hearings, and several proposals, nothing came of any of it. Even a bill imposing the tiniest of regulations specifically on credit bureaus went nowhere. The only thing Congress did was pass a law preventing consumers from suing Equifax.
Story two: The 2017 Internet of Things Cybersecurity Improvement Act was a modest piece of legislation. It didn’t prescribe, regulate, or otherwise force any company to do anything; all it did was impose minimum security standards on IoT devices purchased by the US government. Those standards were reasonable and not very onerous, along the lines of the things I discussed in Chapter 6. The bill went nowhere. There were no hearings. It was never voted on by any committee. It barely made the papers after it was introduced.
Story three: In 2016, President Obama established the Commission on Enhancing National Cybersecurity. Its mandate was broad:
The Commission will make detailed recommendations to strengthen cybersecurity in both the public and private sectors while protecting privacy, ensuring public safety and economic and national security, fostering discovery and development of new technical solutions, and bolstering partnerships between Federal, State, and local government and the private sector in the development, promotion, and use of cybersecurity technologies, policies, and best practices. The Commission’s recommendations should address actions that can be taken over the next decade to accomplish these goals.
At the end of that year, the bipartisan group issued its report. It’s a good document based on solid research; it contains 16 recommendations, with 53 specific action items the administration could take to improve Internet security. While there were things I quibbled with, it wasn’t a bad road map for both immediate action and long-term policy planning. It’s almost two years later, and only one of the recommendations has been turned into policy: making the NIST Cybersecurity Framework mandatory for government agencies. No agency has yet followed that policy. The rest of the report has been ignored.
As you read the previous four chapters, you might have found it easy to accuse me of painting a nightmare and responding with daydreams—that while my recommendations might be a good list of what we should do, they bear no resemblance to what we actually will do.
Partly, I agree. I don’t foresee Congress taking on the powerful computer and Internet industries and imposing enforceable security standards. I don’t foresee any increase in spending on our cyberspace infrastructure. I don’t foresee the creation of any new federal regulatory agencies. And I don’t foresee either the military or the police forgoing the offensive uses of cyberspace in order to improve the defensive.
Let me talk about the psychology of this for a minute. Just as the CEOs of companies tend to underspend on security, politicians tend to underplay threats that aren’t immediately salient. Imagine a politician looking at a large budget allocation for mitigating a hypothetical, long-term, strategic risk. She could designate funds for that purpose, or for more immediate political priorities. If she does the latter, she’s a hero with her constituents, or at least with the constituents from her party. If she sticks with spending on security, she risks being criticized by her opponents for wasting money or ignoring those immediate priorities. This is worse if the threat never materializes, even if it’s the spending that prevented it. And it’s worse still if the payoff comes while the other party is in power: they’ll take the credit for keeping people secure.
In all the years I’ve been writing about these issues, I have seen very little serious policy progress. I have seen the ever-more-powerful IT industry digging in its heels and opposing any governmental limits on its behavior, and legislators without the stomach to take it on. I have seen law enforcement groups in multiple countries propose technical changes that weaken security, painting anyone who opposes them as weak on crime and terrorism. I have seen government accused of over- and under-regulating at the same time. And I have seen new technologies become mainstream without any thought about security or regulation.
Meanwhile, the risks have grown more dire, the consequences more catastrophic, and the policy issues more intractable. The Internet has become critical infrastructure; now it’s becoming physical. Our data has moved onto computers managed by other companies. Our networks have become global.
Governments regulate things that kill people, and when the Internet starts killing people, it will be regulated. Fear, after all, is a powerful motivator, one that can overcome both the psychological bias towards doing nothing and the political bias towards smaller government.
What would such an event look like?
That depends on the time frame. Some observers have noted parallels between today’s Internet+ and the pre-1970s automobile industry. Free from regulation, manufacturers were building and selling unsafe cars, and people were dying. It was the 1965 publication of Ralph Nader’s Unsafe at Any Speed that spurred the government into action, resulting in a slew of safety laws covering seat belts, headrests, and so on. A wave of Internet+-related fatalities could prompt a similar regulatory flurry.
On the other hand, companies have been killing people via the environment for decades. Rachel Carson published Silent Spring in 1962, before the EPA was formed in 1970—and almost 50 years later, the EPA’s regulations are still insufficient to combat the threats. Never underestimate the power of industry lobbying groups to push their own agendas, even at the expense of everyone else.
The difference between events that prompt immediate versus sluggish reactions might lie in the ability to connect a fatality to the underlying insecurity. Environmental fatalities are much harder to pin on a specific cause than automobile fatalities are. This is also true on the Internet. When you’re the victim of identity theft, it’s very hard to point to the specific instance of hacking that precipitated it. Even when a power plant is hacked and a city is plunged into darkness, it can be hard to know exactly which vulnerability was to blame.
I’m not optimistic in the near term. As a society, we haven’t even agreed on any of the big ideas. We understand the symptoms of insecurity better than the underlying problems, which makes it hard to discuss solutions. We can’t figure out what the policies should be, because we don’t know where we want to go. Even worse, we’re not having any of these big conversations. Aside from the fight over forcing tech companies to break encryption for law enforcement, Internet+ security isn’t an issue most policy makers are concerned about, beyond the occasional strong words. It’s not debated in the media. It’s not a campaign issue in any country I can think of. We don’t even have a commonly agreed-upon vocabulary for talking about these issues.
Compare this to money laundering, child porn, or bribery. Those are all big international problems with complex geopolitical implications and nuanced policy solutions. But for those issues, at least we all agree about the direction we want the world to head in. With Internet+ security, we’re nowhere close to there yet.
Plus, the threats are all jumbled together. “Cyber” is an umbrella term that encompasses everything from cyberbullying to cyberterrorism. That might make sense from a technological perspective—the Internet+ is the common aspect—but it makes no sense from a policy perspective. Cyberbullying, cybercrime, cyberterrorism, and cyberwar are not the same, and they’re different from cyberespionage and surveillance capitalism. Some threats are properly countered by the police, and some by the military. Other threats are not the government’s business at all, and are properly countered by the affected party. Some need to be countered through legislation. Just as we don’t think about road rage and car bombs in the same way, even though they both involve cars, we can’t treat all cyber threats in the same way. I don’t think US policy makers understand that yet, but they’ll need to if we want them to act reasonably and responsibly.
My prediction is continued legislative inaction in the US in the near term. The regulatory agencies, especially the FTC, will continue to investigate and fine the most egregious violators. And there will be no changes in regulating either government surveillance or surveillance capitalism. Any fatalities will be blamed on specific individuals and products, and not on the system that enables them. Despite the imminent threats, I think it will take the younger generation coming into power before any real change in the US takes place.
There’s more hope in Europe. The EU is the world’s single largest market and is turning into a regulatory superpower. And it has been regulating data, computers, and the Internet. The GDPR—General Data Protection Regulation—is a sea change in privacy law. It’s a sweeping EU-wide regulation that affects any company in the world that handles personal data about EU citizens. The complex law primarily focuses on data and privacy, but also contains requirements for computer and network security. Moreover, it’s a reasonable blueprint for what the EU might eventually do with respect to Internet+ security and safety.
For example, the GDPR mandates that personal data can only be collected and saved for “specific, explicit, and legitimate purposes,” and only with explicit consent of the user. Consent can’t be buried in the terms and conditions, nor can it be assumed unless the user opts in. Users have the right to access their personal data, correct false information, and object to particular uses of their data. Users also have the right to download their data and use it elsewhere, and demand that it be erased. The provisions go on and on.
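The flavor of these provisions is easier to see in miniature. Here is a minimal sketch, purely illustrative: the ConsentRecord type and its purpose strings are my inventions, not anything specified by the regulation, but they show what purpose-specific, opt-in-only consent looks like as a data structure.

```python
# Hypothetical sketch of GDPR-style explicit consent; "ConsentRecord"
# is an illustrative invention, not a real library or legal mechanism.
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    user_id: str
    # Each purpose must be specific and explicit; there is no
    # blanket "all uses" flag.
    purposes_opted_in: set = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        # Consent exists only when the user actively opts in.
        self.purposes_opted_in.add(purpose)

    def withdraw(self, purpose: str) -> None:
        # Users can object to particular uses of their data later.
        self.purposes_opted_in.discard(purpose)

    def may_process(self, purpose: str) -> bool:
        # The default is "no": consent is never assumed or buried
        # in the terms and conditions.
        return purpose in self.purposes_opted_in


record = ConsentRecord(user_id="alice")
record.grant("order-fulfillment")
print(record.may_process("order-fulfillment"))  # True: explicit opt-in
print(record.may_process("ad-targeting"))       # False: never assumed
```

The important design choice is the default: absent an explicit opt-in, the answer to every “may I process this?” question is no.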
The GDPR’s regulations directly affect only European users and customers, but they will have repercussions worldwide. For example, pretty much all Internet companies have European users, and if any of them suffers a data breach, it will have to publicize it quickly. If companies have to explain their data collection and use practices to Europeans, we’ll all learn what they are. Additionally, legislatures worldwide—from Argentina to Colombia to South Korea—are reviewing their privacy laws to ensure they are adequate under the new EU standards, since the EU now links free-trade agreements to the partner nation’s privacy regulations.
The GDPR was passed in 2016 and took effect in May 2018, with enforcement expected sometime in 2019. Organizations are already doing things to comply with the law, but in many cases they’re waiting to see what implementation and enforcement will look like.
I think EU enforcement will be harsh. Fines can be as high as 4% of a company’s global revenue. And in 2017, we saw several demonstrations that the EU isn’t afraid to go after the biggest Internet companies. The EU fined Google 2.4 billion euros (and threatened to further fine the company 5% of its daily revenue) for the manner in which it presented search results for shopping services. Separately, the EU fined Facebook 110 million euros for misleading regulators about its ability to link Facebook and WhatsApp accounts.
Compare potential fines for digital insecurity in the US and the EU. In 2017, the state of New York fined Hilton Hotels $700,000 for two 2015 breaches involving personal information—including credit card numbers—of 350,000 customers. That’s $2 per person and, for a company with roughly $10.5 billion in annual revenue, basically a rounding error. Under the GDPR, with its 4% cap, the fine could have been $420 million.
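Here is that comparison as back-of-the-envelope arithmetic, a sketch that assumes the approximate revenue figure above:

```python
# New York's actual fine, spread across the affected customers:
ny_fine = 700_000
customers = 350_000
print(ny_fine / customers)          # -> 2.0 dollars per person

# The GDPR's ceiling: 4% of global annual revenue.
gdpr_rate = 0.04
annual_revenue = 10_500_000_000     # approximate figure, as above
print(gdpr_rate * annual_revenue)   # -> 420,000,000.0 dollars
```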
The EU is regulating computer security in other ways, too. If you look at product packaging in Europe—and often in the rest of the world—you’ll notice a “CE” mark on the label somewhere. This means that the product conforms to all applicable European standards, including one for responsible vulnerability disclosure. Like the GDPR, the CE mark only applies to products sold in Europe. Nonetheless, these standards will get incorporated into international trade agreements like GATT, and they will affect products everywhere.
My guess is that the EU will turn its attention to security and the Internet of Things and, more generally, cyberphysical systems. As Cambridge University professor Ross Anderson and his colleagues wrote, referring to security from the sorts of damaging attacks I talked about in Chapter 5 as “safety”: “The EU is already the world’s main privacy regulator, as Washington doesn’t care and no one else is big enough to matter; it should aim to become the main safety regulator too—or risk compromising the safety mission it already has.” If the EU starts flexing its regulatory muscle in this way on safety and security, companies will take notice.
The question is how this will affect the rest of the world. There are several possibilities.
Automobiles are often designed for local markets. A car sold in the US is different from the same model sold in Mexico, because the environmental laws are different and manufacturers optimize engines to comply with local laws. The economics of building and selling a vehicle easily allows for this differentiation. Software is different: it’s much easier to maintain one version of a piece of software and sell it everywhere, especially when it’s embedded in a product. If European regulations force minimum security standards on routers or Internet-connected thermostats, those same products are likely to be sold throughout the world, much as California’s automobile emissions standards shape the cars sold all over the US. And if companies have to meet those standards anyway, they’re likely to boast about it on their packaging.
This could go either way for products and services that make their money on surveillance. In April 2018, Facebook announced that it would change its data collection, use, and retention practices for its users worldwide as a result of the GDPR. It remains to be seen what other companies will do.
I don’t think there are any other markets large enough to matter. Singapore has the Personal Data Protection Act, South Korea has the Personal Information Protection Act, and Hong Kong has the Personal Data (Privacy) Ordinance, but it’s hard to tell whether these laws have any teeth. If they were enforced, the affected companies would probably just pull out of those countries rather than change their business practices worldwide. There might be exceptions for larger markets—maybe South Korea falls into that category—and for expensive Internet-enabled devices, but maybe not. I could easily imagine major car manufacturers ignoring the regulations and daring the governments to ban their products, and the governments backing down.
Some other countries are starting to regulate, too. In 2017, India’s Supreme Court recognized a right to privacy for the first time in the nation’s history. Eventually, this might result in some stronger laws in that country. Singapore passed a new Cybersecurity Act in 2018, formalizing minimum standards and reporting obligations for critical infrastructure providers, and establishing a Commissioner for Cybersecurity position with broad investigation and enforcement powers. New Israeli security regulations affecting organizations that run databases came into effect in 2018; they include requirements for encryption, staff security training, security testing, and backup and recovery procedures.
Even the UN is starting to regulate. The United Nations Economic Commission for Europe sets standards for cars. Its regulations affect not only the EU, but also other countries in Europe, Africa, and Asia that make cars. They will certainly affect the computers in any future autonomous cars.
In the US, some states are trying to fill the regulatory gap left by the federal government by prosecuting companies with weak security. New York, California, and Massachusetts lead the way here. In 2016, New York fined Trump Hotels for data breaches, and California investigated companies that abuse student data. In 2017, Massachusetts sued Equifax, and Missouri began investigating Google’s data-handling practices. Thirty-two state attorneys general joined the FTC to penalize computer manufacturer Lenovo for installing spyware on its laptops. Even the city of San Diego is suing Experian over a 2013 data breach.
In 2017, New York State’s Department of Financial Services issued security regulations affecting banks, insurers, and other financial services companies. The rules require these corporations to have a chief information security officer, conduct regular security testing, provide security awareness training to employees, and implement two-factor authentication on their systems. In 2019, these standards will also apply to their vendors and third-party contractors.
In 2017, California temporarily tabled a bill requiring IoT manufacturers to disclose the data they were collecting on customers and users. Ten other states debated legislation on IoT privacy in the absence of any federal movement on this issue. I expect more of this sort of thing in 2018 and beyond.
In January 2018, California’s Senate passed the “Teddy Bears and Toasters” bill that would require manufacturers to equip all Internet-connected devices sold in California with security features appropriate to the device. As this book went to press, the bill was before the legislature. California’s legislature is also considering a proposal to create a “California Data Protection Authority” inspired by the GDPR.
So there is some progress. But in the absence of meaningful regulation, what do we do?
We can try to comparison shop for security, but that’s difficult. Corporations don’t make their security practices public, precisely because they don’t want them to factor into consumers’ buying decisions or future lawsuits. If you want to buy a DVR with better security because you don’t want it to become part of a botnet, you can’t. If you want to buy a thermostat or a doorbell with better security because you don’t want anyone hacking it, you can’t. You can’t study Facebook’s or Google’s privacy and security practices—only their vague promises. As long as corporations don’t use security and privacy as market differentiators, you cannot base your buying decisions on them. And while an organization like Consumers Union might be able to help, it can only ever be part of a larger solution.
There are some things concerned consumers can do. We can research different IoT products, try to determine which ones take security seriously, and refuse to purchase products that don’t. We can see what sorts of smartphone permissions an app demands, try to research what data is being collected and what is being done with it, and refuse to install apps that seek irrelevant and invasive access. I admit that this is a tall order, and most people won’t bother.
In some cases, we can opt out, but that option is going to become increasingly rare. Soon, just as it is now effectively impossible to live a normal modern life without an e-mail address, a credit rating, or a credit card, it will be impossible not to be connected to the Internet of Things. These are the tools for living a normal life in the early 21st century.
We can—and should—shore up our own personal cybersecurity. There’s plenty of good advice on the Internet, mostly related to data privacy. But in the end, a significant portion of our cybersecurity is out of our hands, because our data is in others’ hands.
Organizations have more options, because of their size and budgets. Out of pure self-interest—both economic and reputational—they need to make cybersecurity a board-level concern. Yes, the risks are technical, but attacks can already do grievous damage to a company. I am the CTO of IBM Resilient and a special advisor to IBM Security, and I have repeatedly seen that companies make smarter security decisions when senior management is involved.
Organizations need to know about the security of the devices and services they’re using, both on their networks and in the cloud. They should make any decisions about the Internet+ deliberately, and ensure that new equipment acquisitions don’t inadvertently compromise their networks. This is going to be an uphill battle. My prediction is that organizations will find the Internet+ creeping into their networks in ways they don’t expect or even know about. Someone will buy a coffee machine or a refrigerator with an Internet connection. The smart lighting system, or the elevators, or the overall building control system will connect to the internal corporate network.
Organizations need to know where their data is. Already it can be a battle to keep your data under your control on your own network. The cloud is enticing; it’s easy to park your data on other people’s computers without really understanding the ramifications. An instructive story from Sweden came to light in 2017. Two years previously, the Swedish Transport Agency had moved all of its data to the cloud, including classified information that should never have left the government’s internal networks. My guess is that the person who made the decision never considered the security ramifications.
Organizations need to use their buying power to make the Internet+ a more secure place, both for themselves and for everyone else. They should apply pressure on manufacturers to improve security, both through their own purchasing decisions and through industry associations. They should engage with policy makers, and lobby their governments for regulations to improve security. Although corporations are almost pathologically anti-regulation, this is one area where smart regulations can create new incentives that actually lower the overall cost of security, and the cost of security failures.
We have to accept that we’re stuck with the government we have, and not the government we wish we had. And if we can’t rely on government as the first mover here, our only hope is that some companies step up and make the Internet+ secure anyway. It’s not much, but it’s what we have.
What I will say about trust in Chapter 12 is probably the most important thing to remember. We are forced to trust everyone whose products and services we use. Try to understand who you are trusting and to what degree, and make those trust decisions as intelligently as possible. Make your decisions about the cloud, the Internet of Things, and everything else with as much knowledge and forethought as you can.
This can mean making some hard choices. Who will you allow to violate your privacy and security in exchange for a service? Would you prefer to give Google or Apple access to your e-mail? Would you prefer to give Flickr (owned by Yahoo) or Facebook your photos? Do you prefer Apple’s iMessage or Facebook’s WhatsApp—or the independent Signal or Telegram—for your text messages?
It also means deciding on the countries to which you prefer to make yourself vulnerable. US companies are subject to US law, and will almost certainly relinquish data in response to court orders. Storing your data in another country might insulate you from US law, but will subject you to that country’s laws. And while the NSA’s global surveillance is unmatched in the world, US law constrains the NSA far more than similar agencies are constrained elsewhere—and you have more legal protection of your data when it’s stored in the US than when it’s stored elsewhere.
Making these decisions can be impossible and—honestly—most people won’t bother. Facebook might be headquartered in California, but it maintains data centers all over the world; your data will likely be stored in several of them. Many companies you deal with use cloud services with data scattered around the world. Whatever service provider you decide to patronize, you’re not likely to know for sure which countries’ laws will apply to your data. But some companies push back against data requests, depending on the customer’s location—for example, Microsoft’s ongoing battle with the Department of Justice about turning over data from an Irish customer stored in Ireland to the FBI.
I see two ways to think about this. The first is that you are already subject to the laws of your home country, and the most prudent course is to minimize the number of additional countries whose laws your data is subject to. Conversely, any incriminating data is more likely to get you in trouble in your home country, so making it harder for your domestic law enforcement agency to access it is the more prudent course. For myself, I choose the first option. Elsewhere I have argued that, given the choice, I would rather have my data under the jurisdiction of the US government than under pretty much any other government on the planet. I’ve taken a lot of flak for that opinion, but I still stand by it.