Human beings operate on trust. No other species trusts at anywhere near the scale we do. Society would collapse without trust; indeed, society would have never formed without it. We trust continually, throughout our days, without even a second thought. And it’s not as if we have any choice. We trust the food in our supermarkets not to make us sick. We trust the people we pass on the street not to attack us. We trust banks not to steal our money. We trust other drivers not to hit us. Of course, as you read this you’re thinking about caveats and exceptions, but the reason you’re thinking about them is that they’re so rare. Unless you’re living in a lawless part of the planet, every day you’re blindly trusting millions of people, organizations, and institutions. That we hardly think about it is testament to how well the system actually works.
Think about your computer and all of the companies you are forced to trust, simply by using it. You trust the designers and manufacturers of the chips inside it, and the company that assembled it. In fact, you trust the entire supply chain, from the manufacturer to the company that sold it to you. You trust the company that wrote your operating system—likely Microsoft or Apple—and the companies that wrote the software you’re using. That includes applications like your browser and word processor, and security software like your antivirus program. You trust the Internet services you’re using: your e-mail provider, your social networking platforms, and any cloud services handling your data. You trust your Internet service provider, and the companies that designed, built, and installed your home router. There are easily dozens of companies you have no choice but to trust, along with the governments of the countries those companies are from. Any one of them has the capability to subvert your security and take advantage of you. Any one of them can have insecure processes that allow others to do the same.
You trust all of those entities because you have to, not because you think any of them are trustworthy. And on the Internet, that trust is eroding. A 2017 survey found that 70% of Americans believe it is at least somewhat likely that their phone calls and e-mails are being monitored by the government. People all over the world mistrust the NSA and the US in general.
That 2016 Obama cybersecurity report I talked about in Chapter 10 put it this way:
The success of the digital economy ultimately relies on individuals and organizations trusting computing technology and trusting the organizations that provide products and services that collect and retain data. That trust is less sturdy than it was several years ago because of incidents and successful breaches that have given rise to fears that corporate and personal data are being compromised and misused. Concern is increasing, too, about the ability of information systems to prevent data from being manipulated; the 2016 US election heightened public awareness of that issue. In most cases, data manipulation is a more dangerous threat than data theft.
Right now, this mistrust isn’t too great. We’re still mostly able to ignore the risks and trust those governments and companies—or, at least, act like we do—because we don’t have much choice. We pretend that our Facebook feed is filled with posts from friends, not paid advertisements insinuated into our personal communications. We pretend that our search engines aren’t being manipulated by algorithms surreptitiously promoting commercial products. We pretend that the companies entrusted with our data aren’t using that data against our interests. We accept all of this because we really have no choice. And we try to ignore all the secrecy, even though secrecy breeds suspicion.
So far, that’s worked. We use our computers and phones. We store our data in the cloud. We have private conversations on Facebook and email. We buy things over the Internet. We buy and use Internet-connected things. And we don’t think about it too much.
That could flip at any time. And if it does, it’s going to be bad. The ill effects from living in a low-trust society are considerable. Economies suffer. People suffer. Everything suffers.
In 2012, I published Liars and Outliers, which looked at security through the lens of trust. Security systems are mechanisms to enforce trust: to ensure that people cooperate with each other and do what’s expected. Informally, we enforce that internally with our own moral codes and externally by learning and remembering others’ reputations. More formally, we enforce it with rules, laws, and penalties. And we enforce it with security technologies like fences, locks, security cameras, audits, and investigations.
In Chapter 4, I said that everyone wants you to have security, except from them. This is not sustainable. Over the long term, government mass surveillance is not sustainable. We’ve got to limit it if we want a trustworthy Internet and, by extension, a trustworthy society.
Surveillance capitalism is not sustainable. We’ve got to limit it, too. As long as surveillance is the business model of the Internet, the companies you entrust with your data and capabilities will never fully support keeping you secure. They will make design decisions that weaken your security against both criminals and governments. We need to change the fabric of the Internet so that it doesn’t provide governments with the tools to create a totalitarian state. This isn’t going to be easy, and it’s not going to happen within the decade. It’s not even clear how it could happen in the US; free-speech issues will hamper any legislative efforts to limit commercial surveillance. Even so, I believe it will happen eventually. Perhaps the change will be driven by changing norms. We’re starting to chafe under the unremitting extraction of data about both our public and our inner lives—data available to both governments and corporations but not to us. Surveillance capitalism is pervasively damaging to society; sooner or later, society will demand reform.
In order for corporations and governments to be trusted, they need to be trustworthy. This underpins much of what I wrote in Chapter 9. It’s not enough for governments to prioritize defense over offense; their priorities must be evident. Government secrecy and duplicity hurt trust.
It’s not enough for companies to secure their systems; they need to do it transparently, so that they’re seen to be working for the public’s benefit and not abusing their positions of power. Every suggestion in this book should be implemented and enforced publicly. Standards should be open. Details of breaches should be disclosed. Enforcement and fines should be public. An insecure Internet+ won’t be trusted, and even a secure Internet+ won’t be trusted unless its security is publicly visible.
All the suggestions in this book are intended to move us towards an Internet that is fundamentally trustworthy, one where the most powerful actors are prevented from preying on unsuspecting ordinary users. We’ve got a lot of work to do to get there, and the goal itself may seem a bit utopian; but while we’re at it, let’s talk about two other key attributes of the ideal Internet we’re working towards: resilience and, finally, peace.
According to sociologist Charles Perrow’s theory of complexity, complex systems are less secure than simpler ones and, as a result, attacks and accidents involving complex systems are both more prevalent and more damaging. But Perrow demonstrates that not all complexity is created equal. In particular, complex systems that are both nonlinear and tightly coupled are more fragile.
For example, the air traffic control system is a loosely coupled system. Both individual air traffic control towers and airplanes have failures all the time, but because the different parts of the system only mildly affect one another, the results are rarely catastrophic. Yes, you can read headlines about this or that airport being in chaos as a result of computer problems, but you rarely read about planes crashing into buildings, mountains, or each other.
A row of standing dominoes is a linear system. When one topples over, it hits the next domino and causes that one to topple in turn. However long the chain, the dominoes fall in an orderly, predictable sequence.
The Internet is the opposite: it’s both nonlinear in that pieces can have wildly out-of-proportion effects on each other, and tightly coupled in that these effects cascade immediately—characteristics that make catastrophes much more likely. It’s so complex that no one understands everything about how it works. It’s so complex that it just barely works. It’s so complex that we can’t predict how it will work in many cases.
We need better security in our large-scale sociotechnical systems, but most of all we need more resilient security.
I have long liked the term “resilience.” If you look around, you’ll see it used in human psychology, in organizational theory, in disaster recovery, in ecological systems, in materials science, and in systems engineering. Here’s a definition from Aaron Wildavsky’s 1988 book Searching for Safety: “Resilience is the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back.”
I have been talking about resilience in IT security for over 15 years. In my 2003 book, Beyond Fear, I spent pages on resilience. I wrote:
Good security systems are resilient. They can withstand failures; a single failure doesn’t cause a cascade of other failures. They can withstand attacks, including attackers who cheat. They can withstand new advances in technology. They can fail and recover from failure.
In 2012, the World Economic Forum described cyber resilience as an enabling capability—one that provides physical safety, economic security, and competitive business advantage.
In 2017, the US National Intelligence Council—part of the Office of the Director of National Intelligence—published a comprehensive document looking at long-term security trends. It talked about resilience:
The most resilient societies will likely be those that unleash and embrace the full potential of all individuals—whether women and minorities or those battered by recent economic and technological trends. They will be moving with, rather than against, historical currents, making use of the ever-expanding scope of human skill to shape the future. In all societies, even in the bleakest circumstances, there will be those who choose to improve the welfare, happiness, and security of others—employing transformative technologies to do so at scale. While the opposite will be true as well—destructive forces will be empowered as never before—the central puzzle before governments and societies is how to blend individual, collective, and national endowments in a way that yields sustainable security, prosperity, and hope.
Tactically and technologically, resilience means many different things: multiple layers of defense, isolation, redundancy, and so on. We need to be resilient as a society as well. Much of the damage caused by cyberattacks is psychological. Russia shut off electric power in Ukraine twice, and now Ukrainian citizens have to live with the knowledge that their access to power is fragile. A more resilient power grid would mean a more resilient society.
We can prevent some attacks, but we have to detect and respond to the rest of them after they happen. That process is how we achieve resilience. It was true 15 years ago and, if anything, it is even more true today.
In Chapter 4, I wrote that we’re in the middle of a cyberwar arms race. Arms races are always expensive. They’re fueled by ignorance and fear: ignorance of our enemies’ capabilities, and fear that theirs are greater than ours. That ignorance and fear are magnified in cyberspace. Remember how hard it was for the US to discern Iraq’s nuclear and chemical weapons capabilities? Cyber capabilities are even easier to hide.
This arms race harms our security in two ways. First, it directly reduces security by ensuring that the Internet+ remains insecure. As long as there are countries that need vulnerabilities for cyberweapons, and are willing to discover them themselves or buy them from others, there will be vulnerabilities that don’t get patched.
Second, it increases the chances of a cyberwar. Weapons beg to be used, and the more weapons there are in the world, the greater the risk they might be used. The inherent perishability of cyberweapons that I discussed in Chapter 4 makes them attractive to use. The offensive nature of battlefield preparations increases the chance of retaliation, even if that retaliation is based on a misunderstanding. And the attribution gap increases the possibility of misunderstanding—and of deliberate deception—especially for countries not privy to the US’s intelligence capabilities.
We need to work towards demilitarizing the Internet. That might seem impossible, and in today’s geopolitical climate it may well be for now, but it is achievable over the long term. It is the only sustainable path forward.
A start would be to move beyond military metaphors for Internet security. For example, conceptualizing it as a public hygiene or pollution problem will lead us towards different sorts of solutions. A 2017 report by the New York Cyber Task Force suggested that governments could tax harmful “emissions” by ISPs—malware, DDoS traffic, and so on—and even implement some sort of cap-and-trade regime. International laws regarding pollution might also provide a useful comparison when wrestling with the international-security issues for the Internet+.
Even more than either of these two things, we need to actively create a peaceful Internet+. The term “cyber peace” has been advanced as an alternative to the increasingly martial rhetoric about cyberspace. Here’s Indiana University cybersecurity law professor Scott Shackelford’s attempt at defining this nebulous term:
Cyber peace is not the absence of attacks or exploitations, an idea that could be called negative cyber peace. Rather, it is a network of multilevel regimes working together to promote global, just, and sustainable cybersecurity by clarifying norms for companies and countries alike to help reduce the risk of conflict, crime, and espionage in cyberspace to levels comparable to other business and national security risks. Working together through polycentric partnerships, and with the leadership of engaged individuals and institutions, we can stop cyber war before it starts by laying the groundwork for a positive cyber peace that respects human rights, spreads Internet access, and strengthens governance mechanisms by fostering multi-stakeholder collaboration.
Political scientist Heather Roff agrees, arguing that “cyber peace must be grounded in a conception of positive peace that eliminates structural forms of violence” based on four necessary factors: “a society, trust, governance, and the free flow of information.”
In some ways, this sounds like a UN Security Council for the Internet, and we can learn from the successes and failures of that organization. It’s a worthy goal, and one we should strive towards.
On a smaller, more immediate scale, there are ways we can work to promote a more just and equitable Internet right now. For all the problems with government and corporate surveillance we have in the US and other Western democracies, and for all the looming dangers to our lives and liberties that I’ve been describing, it’s important to remember that billions of people in countries like Egypt, Ethiopia, Myanmar, and Turkey enjoy considerably less digital freedom and face far graver risks from their Internet use.
I am on the board of an organization called Access Now. Our mission is to defend and extend the digital rights of users at risk around the world. One of the services we offer is a Digital Security Helpline, which provides immediate tech support for civil society members who are being spied on and attacked on the Internet. We also provide policy analysis and advocacy on government proposals around the world, advocate for policy changes in different countries, and convene an annual conference on human rights in the digital age.
I’ve had that organization and its work in the back of my mind as I’ve been writing this book. Both the problems and the solutions I’ve been talking about focus on the world’s liberal democracies. They don’t apply as well to countries that use the Internet to find and arrest dissidents, or that arrest people who give security training to dissidents. Even so, the recommendations I make would benefit people in those countries as well, even if only the democracies follow through on them. In the meantime, there are groups like Access Now working to improve digital rights around the globe: Paradigm Initiative in Nigeria, SMEX in Lebanon, KICTANet in Kenya, Derechos Digitales in Chile, and so on.
The Internet is often talked about as a societal equalizer, and that’s a fair characterization. It circulates and amplifies important ideas and human ideals. It connects people across borders. And it has sparked and enabled a dozen street-level revolutions led by people seeking greater freedoms and a better future. Who knows what the positive potential of the Internet+ may turn out to be? Of course, the Internet has a dark underbelly as well, and I’ve spent most of this book talking about the problems lurking there. But as with most human endeavors, we need to keep hammering away at the emerging Internet+ to shape it into a medium that embodies and enables, as best it can, the human ideals of trust, security, resilience, peace, and justice.