Of all the security mechanisms provided by cryptography, encryption is the most politically charged because it protects the confidentiality of information. Not only do we all wish to have secrets, but we all harbor a desire, at least to some extent, to learn what others want to keep secret. However, what should or should not be confidential is neither objective nor stable over time. We might all agree that personal financial data should generally be kept confidential. But should the financial data of a corporation accused of massive tax evasion be kept secret? This type of conflict creates social dilemmas, which continue to cause political storms over the use of encryption.
Naughty Crypto
Cryptography, I hope you’ll agree, is immensely useful. By providing the basis for security, it enables us to do amazing things in cyberspace. But cryptography is not always benign. Here are six potentially unwelcome scenarios.
1. You are wise to the dangers of unprotected data, so you encrypt all the data on your laptop just before you go on vacation. You return after three weeks in the sun, rested and refreshed. Unfortunately, your mind is so refreshed that you can’t recall the passphrase used to derive the key to decrypt the data. No passphrase, no key, no data.
2. You switch on your home computer, to be greeted by this message: Oops, your files have been encrypted! Many of your documents, photos, videos, databases, and other files are no longer accessible. Do not waste time looking for your files. Nobody can recover them without our decryption service. You have three days to submit payment in bitcoin only, after which you won’t be able to recover your files—forever. Disaster! You’ve been infected by ransomware, a nasty computer program that has encrypted all your files, and now someone is demanding payment in exchange for the decryption key required to recover them.1
3. You are a network manager who has configured some rules governing the type of internet traffic that is allowed to enter your network. You have a blacklist of web addresses, keywords, and malware. Any connections from outside are inspected to see whether they relate to anything on the blacklist and, if so, are prevented from connecting. Alas, one day you discover that a known piece of malware is infecting many users on your system. How did it get past your checks? Apparently, it was encrypted, which made its true nature so hard to identify that it slipped through your protective net.
4. You are a detective, investigating a suspect accused of murder. You have seized the suspect’s phone, on which you believe are a number of incriminating photographs. Unfortunately, the images stored on the phone are inaccessible, because the suspect has encrypted them. You are sure these images are critical to proving the case, but you just can’t see them.2
5. You are a police investigator who has seized a web server that hosts indecent images of children. You can see from the logs that this server has a large number of visitors every day, whom you want to bring to justice. Unfortunately, the visitors are all using the cryptographic software Tor, designed to make it hard to trace their origin. Who are they? Where are they?3
6. You are an intelligence officer monitoring a cell of suspected terrorists. You have managed to obtain access to the communications to and from the mobile phone of one of the targets. The suspect is communicating by using an internet messaging service renowned for its strong cryptographic security. You can see that the suspect is in regular contact with another of the cell members, but you cannot access the details of their conversations.4
As these six scenarios show, really only two functions of cryptography are problematic. Confidentiality allows everyone to hide their data, but “everyone” includes blackmailers, murderers, and terrorists. Anonymity allows everyone to be untraceable in cyberspace, but “everyone” includes child abusers. When cryptography hits the news, it tends not to be about hash functions, MACs, digital signatures, or perfect passwords. Instead, when debates rage about cryptography, they invariably concern the use of encryption, since encryption not only protects secrets but can also be used to build technologies, such as Tor, that provide anonymity.
A Dilemma
Let’s unpack the issues behind our six examples of encryption causing problems. There is a big difference between the first three and the last three.
The first scenario is the only one relating to an accident, rather than a deliberate act. You forgot the password necessary to decrypt your disk. Such a lapse could be disastrous, but is cryptography at fault? I don’t think so; it was your mistake. Encrypting your disk does more good than harm, but it does come with the caveat that you will need to be able to recall the key in order to retrieve your data. This key is so important that you must put in place a process for making sure you don’t lose it. If you are prone to forgetting passwords, then this password should probably be stored securely on a separate device, or written down and stored somewhere physically secure. This scenario is not a problem with cryptography itself.5 We don’t often blame cars when we have accidents; it’s the driver who tends to be at fault.
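To make the first scenario concrete, here is a minimal sketch of how a disk encryptor might derive its encryption key from a passphrase, using PBKDF2 from Python’s standard library. The passphrase, salt handling, and parameter choices are illustrative assumptions, not any particular product’s design.

```python
# A minimal sketch of passphrase-based key derivation via PBKDF2 from
# Python's standard library. Salt and iteration count are illustrative;
# a real disk encryptor stores the salt alongside the encrypted data.
import hashlib, os

salt = os.urandom(16)  # random, non-secret, stored with the ciphertext
key = hashlib.pbkdf2_hmac(
    "sha256",                          # underlying hash function
    b"correct horse battery staple",   # the passphrase you must remember
    salt,
    600_000,                           # iterations, to slow down guessing
    dklen=32,                          # 32 bytes = a 256-bit key
)
# Forget the passphrase and this 32-byte key cannot be reconstructed:
# the encrypted disk remains ciphertext, exactly as in scenario 1.
```

The salt and the deliberately slow iteration count defend against passphrase guessing; nothing, however, defends against a forgotten passphrase.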
Ransomware, the culprit in the second scenario, is a problem created by cryptography. If there were no cryptography, there would be no ransomware. Analogously, if there were no electricity, there would be no electrocutions. The value of electricity far exceeds the dangers that come with its widespread use. Likewise, I would argue, the benefits of encryption far exceed the problems created by ransomware. Besides, there are things that can be done about ransomware. Just as for other types of malware, controls such as keeping backups, keeping systems up to date, installing and maintaining antivirus software, and educating users not to click on unsolicited links and attachments all go a long way toward reducing the risk of a ransomware infection. While cryptography can be used against you, you can also take simple steps to prevent bad things from happening.
Cryptography can sometimes pose difficulties for network security. In the third scenario, a possible precaution would be to inspect any incoming data that appears to be encrypted, treating it with the same level of suspicion as other blacklisted items.6 Legitimate uses of cryptography could then be identified and allowed to pass through. Let’s face it: it is very hard to defend a network from all the ills of cyberspace. Even highly secure networks that are not connected to the internet can become infected if someone manually introduces malware on the likes of a memory stick.7 Networks should be managed in ways that ensure the use of cryptography is a defense, not a threat.
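As a rough illustration of the inspection idea, here is a toy heuristic of my own construction, not a production technique: encrypted data tends to look statistically random, so a payload whose byte entropy approaches the maximum of 8 bits per byte can be flagged for closer scrutiny. Compressed data trips the same flag, which is one reason such inspection is hard to do well.

```python
# A toy flag for "this payload may be encrypted": byte-level Shannon
# entropy close to the maximum of 8 bits per byte. Threshold and minimum
# size are illustrative guesses, not tuned values.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy of the byte distribution, in bits per byte (0.0 to 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    return len(payload) >= 256 and shannon_entropy(payload) > threshold

print(looks_encrypted(os.urandom(4096)))              # True: random, like ciphertext
print(looks_encrypted(b"GET /index.html\r\n" * 300))  # False: repetitive plaintext
```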
The last three scenarios concerning problematic cryptography are quite different. They all feature (suspected) “bad” people using cryptography. However, they are using encryption for the same kinds of things you do.
The suspected murderer encrypted the photos on their phone, just like you probably do (most modern phones encrypt all stored data by default, which protects your privacy if your phone is stolen).
The child abusers connected to the image server using Tor to preserve their anonymity. There are many different reasons why you might, quite legitimately, wish to use Tor. Perhaps you are not a privacy advocate yourself and can only imagine that anyone desiring anonymity in cyberspace must be up to no good. But what if you were an investigative journalist or a whistle-blower, or you worked for law enforcement, or you were simply a regular citizen living in an oppressive regime?
The suspected terrorists used encrypted messaging so that nobody could learn the details of their conversation. What about you? Are you happy that anyone (not just the government, but potentially all your friends) could read every conversation you have on a messaging service? Today, messaging services increasingly encrypt all conversations by default, using state-of-the-art encryption algorithms. You now have to make an effort not to encrypt your messages.
The problem is that encryption works, regardless of the nature of the data it’s applied to. These “bad” users of encryption are all doing things that you might quite legitimately want to do. Use of encryption thus presents society with a dilemma. If society allows widespread use of encryption, then cryptography will be used to protect data relating to illegitimate activities. If, on the other hand, society somehow tries to restrict the use of encryption, then attempts by honest citizens to protect data relating to legitimate activities might be thwarted.8
Should Something Be Done?
Should society do anything to control the use of encryption? Well, there are many disparate views, and I suspect there always will be.
The case for taking action to control the use of cryptography has been argued with passion by certain figures of authority. Speaking to law enforcement officers in 2014, Sir Bernard Hogan-Howe, former head of London’s Metropolitan Police (the largest police force in the UK), warned: “The levels of encryption and protection that we are seeing in the devices and methods used to communicate are frustrating the efforts of police and intelligence agencies to keep people safe. . . . The Internet is becoming a dark and ungoverned space where images of child abuse are exchanged, murders are planned, and terrorist plots are progressed. . . . In a democracy we cannot accept any space—virtual or not—to become anarchic where crime can be committed without fear.”9
In 2015, James Comey, then director of the FBI, raised similar concerns: “As all of our lives become digital, the logic of encryption is all of our lives will be covered by strong encryption, and therefore all of our lives—including the lives of criminals and terrorists and spies—will be in a place that is utterly unavailable to court-ordered process. And that, I think, to a democracy should be very, very concerning.”10
US senator Tom Cotton was even more forceful in expressing a need for action against unfettered use of encryption: “The problem of end-to-end encryption isn’t just a terrorism issue. It is also a drug-trafficking, kidnapping, and child pornography issue.”11
In contrast, others have been outspoken on the need for widespread access to encryption technology. Speaking in response to widely expressed concerns about the use of strong encryption, Zeid Ra’ad Al Hussein, high commissioner for human rights at the United Nations, warned: “Encryption and anonymity are needed as enablers of both freedom of expression and opinion, and the right to privacy. Without encryption tools, lives may be endangered.”12
Writing in a previous era of heated debate about the use of encryption, Esther Dyson, US journalist and businesswoman, argued in 1994: “Encryption . . . is a powerful defensive weapon for free people. It offers a technical guarantee of privacy, regardless of who is running the government. . . . It’s hard to think of a more powerful, less dangerous tool for liberty.”13
Computer science professor and cryptographer Matt Blaze has expressed an opinion shared by many academic researchers: “It may be true that encryption makes certain investigations of crime more difficult. It can close down certain investigative techniques or make it harder to get access to certain kinds of electronic evidence. But it also prevents crime by making our computers, our infrastructure, our medical records, our financial records, more robust against criminals. It prevents crime.”14 This viewpoint has been expressed more succinctly by Edward Snowden: “We need to think about encryption not as this sort of arcane, black art. It’s a basic protection.”15
Eating Cryptocake
A huge question is whether we can all benefit from the security protection provided by encryption but still have a means to, in specific circumstances, remove this protection. In other words, can we have our cryptocake and eat it too?
Some figures of authority believe we can. This argument is often made in terms of a need to balance competing goals. For example, the security and privacy that a messaging service such as WhatsApp or Signal provides its many legitimate users needs to be weighed against the cover the same encryption gives the few customers who use the service to support undesirable activities. Former UK home secretary Amber Rudd argued for the need to “balance encryption and counter-terrorism.”16 Former GCHQ director Sir David Omand has commented that he feels the UK is getting the balance between (national) security and privacy about right: “2017 is the year of reconciliation, in which we recognize as a mature democracy, it is possible to have sufficient security and sufficient privacy.”17
The idea of there being such a “balance” may well be alluring, and those who call for it surely have good intentions. But what does balancing the use of encryption mean? What is the unit of measurement? How do we know when a state of balance has been achieved? Who decides? And perhaps most importantly of all, is such balance even technically feasible?
Considering this issue from another angle, encryption has long been regarded as a dual-use technology. This term recognizes that certain technologies have both civilian and military applications and, broadly speaking, can be used to do good as well as harm. Cryptography joins an illustrious list that includes various nuclear materials, chemical processes, biological agents, thermal imaging, night vision cameras, lasers, and drones—all technologies that bring society a complex mix of gain and pain. Dual-use technologies are often subject to various government controls.18
The dual-use label is rather general and, I think, unhelpful when it comes to consideration of cryptography. It suggests that this technology is safe in the hands of government scientists but everything should be done to prevent it from being acquired by terrorists. This might be true of highly enriched uranium, but what about encryption? Once upon a time, when cryptography was mainly a military technology, this argument had some validity. But today, when cryptography underpins everyone’s security in cyberspace, is it appropriate to be so concerned about who is capable of protecting their data by using strong encryption?
To me, cryptography has more in common with a seat belt than with a bomb. A terrorist driving to conduct an attack might wear a seat belt, just like the rest of us. Seat belts thus save terrorists’ lives, yet few would argue we shouldn’t continue to strive to make seat belts more effective. The benefits of cryptography to the many far exceed the drawbacks of its use by the few.
Cryptography may well be used much more today than ever before, but it’s not a new technology. Nor is the debate new: arguments over the use of encryption have raged ever since cryptography became widely available.19 It’s worth reflecting on how the dilemma presented by encryption has been addressed historically. Such a review reveals not just that all attempts to “balance” the use of encryption are at best temporary fixes, eventually swept aside by technological advances, but that most techniques for doing so are, inevitably, problematic.20
Breakable Unbreakable Cryptosystems
When authorities call for some way to circumvent the cryptographic protection of data, let’s be very clear about what they’re asking for. The cryptosystem should, under normal circumstances, be secure enough to protect data. In other words, for all practical intents and purposes, the cryptosystem should be unbreakable. However, in special situations there should be some means of accessing data that has been encrypted using the cryptosystem. This requirement effectively demands a known means by which the cryptosystem can be “broken.”21 The design challenge is problematic from the outset. What is required is a “breakable unbreakable” cryptosystem.
Recall that there are many different ways of breaking a cryptosystem. An “attacker,” which in this case let’s assume is a legitimate authority (loosely referred to from here on as the state), could break a cryptosystem by exploiting any aspect of it: the underlying cryptographic algorithm, the implementation, key management, or the security of the endpoints. In fact, as we will see shortly, all of these approaches have been used in the past.
In most circumstances it seems paradoxical that a normally unbreakable cryptosystem could be breakable. However, this idea is at least conceivable if there exists a significant imbalance between the capabilities of the state and those of “normal users”22 of a cryptosystem. This imbalance could be in terms of knowledge (of cryptography or system design), computing power, or the ability to enforce behavior. If the state can do something nobody else can, then breakable unbreakable cryptosystems are at least a possibility.
Suppose that the state is believed to have such an advantage over everyone else, and a cryptosystem is designed that can be broken by exploitation of this advantage. Regardless of how it works, let’s refer to this ability to break an unbreakable cryptosystem as the magic wand. Users of this cryptosystem can protect their data using encryption, believed to be strong enough to provide confidentiality against all potential attackers. However, should a user become the legitimate target of investigation, the state can wave the magic wand and—presto!—plaintext is revealed.
This magic wand scenario raises many issues. Let’s set aside all the thorny political questions, including those of cross-border jurisdictions, and assume that we accept the state’s need to have a magic wand. Let’s also ignore the myriad procedural and implementation concerns, and trust the state to wave the wand responsibly. By far the most significant remaining question is this: Given that the magic wand exists, can we be sure nobody else can wield it? After all, a breakable unbreakable cryptosystem is breakable, so is it really safe to assume that its breakability will ever be exploitable only by the state? Keep this question in mind as some candidate magic wands are reviewed.
The Tradesman’s Entrance
The Second World War provides a benchmark in our consideration of breakable unbreakable cryptography. Up until the end of the war, the only significant users of encryption were states, particularly the military, deploying their own encryption algorithms for their own private use. Since the use of encryption was restricted to top secret communications within tightly managed organizations, it made perfect sense for these encryption algorithms to be kept secret. Nobody else used them or even needed to know how they worked.
After the war, advances in communications led to an increased interest in encryption technology around the world, particularly from governments. Expertise in cryptography, however, was extremely limited, with only a few organizations able to build encryption machines. Demand for cryptography outstripped supply, and cryptography became a marketable product, albeit a highly specialized and sensitive one.23
Consider now a hypothetical scenario from, say, the late 1950s. The technologically advanced state of Freedonia manufactures and sells encryption devices. The Freedonians receive a request from the government of the less technologically advanced Ruritania for a set of encryption devices to protect Ruritanian diplomatic communications. Freedonia and Ruritania are not at war, but they have a combative relationship, and Ruritania tends to be less politically stable than Freedonia would like. Should Freedonia sell Ruritania some of its state-of-the-art encryption devices? It’s a chance to make some money, sure, but it would also be a blow to Freedonian intelligence gathering.
Spot the capability imbalance here. Freedonia has knowledge and technology that Ruritania does not. Thus there’s a potential opportunity for Freedonia to make some minor alterations to its normally unbreakable encryption technology, converting its encryption devices into breakable unbreakable devices. In other words, the devices will encrypt and decrypt as expected, but there is an additional magic wand trick, known only to Freedonia, that provides a means of decrypting ciphertext generated by the devices. Such a trick is sometimes referred to as a backdoor, because it provides a means of accessing the plaintext that is not apparent to most users of the cryptosystem.
The most natural place to establish a backdoor is in the encryption algorithm itself. For example, a very crude backdoor would be to reset the encryption key to a fixed value before encrypting the plaintext. The Ruritanians would think they were using different keys to encrypt, without realizing that the algorithm always resets the key to this fixed value. The Freedonians would know this fixed value and thus be able to decrypt Ruritanian communications.
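A minimal sketch of this crude backdoor follows, written with Python’s cryptography package purely for illustration; the historical scenario involved 1950s hardware, of course, and AES-GCM here merely stands in for whatever cipher such a device might claim to implement.

```python
# A deliberately malicious sketch: the device advertises encryption under
# the customer's key but silently substitutes a fixed key known to the
# manufacturer. AES-GCM stands in for whatever cipher the device claims.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

FIXED_KEY = bytes(32)  # Freedonia's secret, baked into every exported device

def device_encrypt(customer_key: bytes, plaintext: bytes) -> bytes:
    """Claims to encrypt under customer_key; actually ignores it."""
    nonce = os.urandom(12)
    return nonce + AESGCM(FIXED_KEY).encrypt(nonce, plaintext, None)

# Ruritania believes it is encrypting under a fresh secret key...
ciphertext = device_encrypt(os.urandom(32), b"diplomatic cable contents")

# ...while Freedonia, knowing FIXED_KEY, decrypts without that key.
nonce, body = ciphertext[:12], ciphertext[12:]
print(AESGCM(FIXED_KEY).decrypt(nonce, body, None))
```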
We’d like to think such a backdoor would quickly be discovered by the Ruritanians. However, the Ruritanians know very little about cryptography, so they are probably not even aware of the possibility that the device will not function as intended. Even if they harbor suspicions, the Ruritanians probably lack the skills they would need to disassemble this device and determine how it works.
How can these Freedonians act so immorally! Well, security tends to be something that states take very seriously. In this case, Freedonia’s concerns about its own security have edged out any ethical concerns about revealing exactly how its export encryption devices function. Most importantly, Freedonia is confident it won’t get caught. Freedonia wants to continue selling encryption devices around the world. Freedonia has rigged its exported encryption devices because . . . it can.24
Backdoors Become Front Doors
There are two very strong reasons why placing backdoors in cryptographic algorithms was an option for Freedonia in the 1950s but is not viable today as a means of addressing the dilemma created by the use of encryption.
First, cryptography is far too important today for the components at the heart of any cryptosystem, the cryptographic algorithms, to be “dodgy.” If there is any justification for creating a breakable unbreakable cryptosystem, the algorithms are the wrong place to introduce a backdoor. Imagine the situation today if the Ruritanian government were unwittingly sold an encryption algorithm with a backdoor. Freedonia might have intended to exploit the backdoor to acquire diplomatic intelligence, but with today’s much more widespread use of cryptography, what would happen if the Ruritanians decided to use this same algorithm to protect the medical records of their citizens? Freedonia’s intention was to gain a diplomatic edge, not risk the security of the sensitive personal data of the entire Ruritanian population.
Perhaps more fundamentally, while the Freedonians could quite possibly get away with “backdooring” an algorithm in the 1950s, they probably wouldn’t be able to do so today. Knowledge of cryptography and of cryptographic algorithm design is much greater now than it was then. There are more experts, all over the world, who can evaluate algorithms. Indeed, we have come to expect the details of cryptographic algorithms to be published, scrutinized, and approved for public use.25 Even when algorithms are sold in hardware devices, they can often be inspected and tested. If an algorithm contains a backdoor, then experts will cry foul and nobody will want to use the algorithm. More worryingly, anyone already using the algorithm will be put at risk.
Perhaps the most infamous attempt to place a backdoor in a twenty-first-century cryptographic algorithm is Dual_EC_DRBG. This is not an encryption algorithm, but rather a pseudorandom number generator that can be used to generate cryptographic keys. The algorithm was shepherded into an international standard by representatives of the US National Security Agency (NSA), but quickly questioned by cryptographers.26 There was, it was pointed out, a means by which anyone who knew a hidden trapdoor could predict the output of this generator. Consequently, keys that it generated could be predicted, and ciphertext protected by those keys could be decrypted. In the end, and after significant furor, Dual_EC_DRBG was withdrawn from the standards.27
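The real trapdoor relies on elliptic-curve mathematics, but a toy analogue built from ordinary modular exponentiation conveys the essence. Everything below, from the modulus to the constants, is my simplification for illustration, not the standardized algorithm.

```python
# A toy analogue of a trapdoored generator in the spirit of Dual_EC_DRBG,
# with elliptic-curve points replaced by modular exponentiation. All
# constants are illustrative; this is not the real algorithm.
p = 2**127 - 1       # a prime modulus (a Mersenne prime, for convenience)
Q = 5                # first public constant
e = 123456789        # the designer's secret trapdoor exponent
P = pow(Q, e, p)     # second public constant; secretly P = Q^e mod p

def drbg_step(state: int):
    """One step of the toy generator: update the state, emit an output."""
    next_state = pow(P, state, p)    # plays the role of s' = x(s * P)
    output = pow(Q, next_state, p)   # plays the role of r = x(s' * Q)
    return next_state, output

state = 42                        # the user's secret seed
state, out1 = drbg_step(state)    # first "random" output
state, out2 = drbg_step(state)    # second "random" output

# The designer sees only out1 = Q^s1 and knows e, so computing
# out1^e = Q^(s1*e) = P^s1 recovers the generator's next internal state...
recovered = pow(out1, e, p)
# ...from which every subsequent output can be predicted.
assert pow(Q, recovered, p) == out2
```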
Placing a backdoor in a modern cryptographic algorithm is a reckless act, with a high danger of unforeseen and undesirable consequences. The imbalance in knowledge about cryptographic algorithm design that existed in the 1950s no longer exists. Today, hidden backdoors readily become blatantly obvious front doors,28 thereby defeating the purpose of using encryption in the first place.
Deploying the Long Arm of the Law
There is one area where the state tends to hold an advantage over others: its ability to make and enforce regulations. One approach to addressing the “problem” created by cryptography is to regulate its use.
Once upon a time, there was a technology that helped people exchange ideas in a new way. This technology soon came to the attention of authorities of the state, who regulated it through licensing and applying import and export controls. Some states simply banned it. The subsequent period was one of struggle against restrictions on its use. States argued for the need to control the technology to maintain order. Both users and suppliers of the technology called for an end to regulations in the name of political freedom and human rights.
This was the story of the printing press, which was invented in the mid-fifteenth century and was a political hot potato for well over three centuries.29 A combination of social pressure and technological evolution eventually forced a relaxation of controls on the printing industry throughout most of the world, although some countries, such as Japan, have only relatively recently done so. Without changing a word, however, this story could have been about cryptography since the Second World War.
The crudest regulatory response to an unwelcome technology is an outright ban. Some states, such as Russia and the Ottoman Empire, were so fearful of the spread of ideas in books that they opted to ban the printing press. In similar fashion, some states today, such as Morocco and Pakistan, decree it illegal to use or sell encryption technology without prior approval by the government.30 It’s very hard to justify, or indeed enforce, contemporary bans on encryption. Cryptography is too widespread and useful to suppress.
A more common approach to controlling encryption is to regulate the export and import of encryption technology, as was done with the printing press. This approach makes the most sense in a world of limited providers of encryption technology—a world that no longer exists. An importer, such as Ruritania, could control who uses encryption by overseeing the entry of encryption technology into the country. An exporter, such as Freedonia, could control who buys its encryption technology. Export controls also enable a state to manage the strength of encryption permitted to enter or leave the country. In the early 1990s, an infamous US export policy permitted a maximum key length of only 40 bits on exported symmetric encryption technology. It is safe to assume that the NSA deemed an exhaustive search for a 40-bit key feasible. The early Netscape web browsers, for example, permitted strong 128-bit encryption on the internal US version of the software, but the international export version controversially provided only 40-bit security.31
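The feasibility arithmetic behind such a policy is straightforward. Here is a back-of-envelope calculation, assuming (my assumption, purely for scale) hardware that tests one billion keys per second:

```python
# Back-of-envelope cost of exhaustive key search, assuming hardware that
# tests one billion keys per second (an illustrative rate, not a real spec).
rate = 10**9                                  # keys tested per second

print(2**40 / rate / 60)                      # ~18 minutes for a 40-bit key
print(2**128 / rate / (3600 * 24 * 365.25))   # ~10^22 years for a 128-bit key
# On average the correct key turns up halfway through the search,
# so a 40-bit key falls in roughly nine minutes.
```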
Export and import controls are a viable means of governing the distribution of tangible objects, since these can be inspected at borders. Up until the 1970s, encryption happened only in objects that either were too heavy to lift or would at least hurt your foot if you dropped them. Restrictions on the movement of encryption around the world could thus, at least in theory, be enforced at borders.
This situation radically changed toward the end of the twentieth century, when encryption became more readily available in software. Since software consists of just a series of instructions for a computer to perform, its movement around the world is almost impossible to control. As a protest against US export controls, software for RSA encryption was documented in a book and even printed on T-shirts, instantly converting them from innocent items of clothing into illegal export-restricted munitions.32 Nowadays, software is transferred around the world at the mere click of a mouse or press of a button.
The use of export and import controls as a means of addressing the state’s concerns about the use of cryptography has considerably weakened over time, leading to almost farcical situations. In the late 1990s, I was working on a multipartner European project that developed a piece of software to show how cryptography could support making small payments on a mobile phone. The software ran on a standard personal computer and was implemented by a partner in southern Germany. The European Commission required that the software be demonstrated at an event in Como, in northern Italy. For the Germans, this should have been a short, four-hour hop south through Switzerland. However, Switzerland at this time had strict regulations on the export of cryptography, which would have required the acquisition of a special license to permit movement of the software across its border. My German colleagues thus embarked on an epic, although admittedly highly scenic, twelve-hour detour through the Austrian Alps in order to avoid crossing the Swiss border. What a waste of time and energy, all because of a somewhat archaic solution to the “problem” of cryptography.
Toward a Cryptopia
In the search for balance between national security and privacy, the 1990s was an awkward decade to be a state controller of cryptography. Export controls had served their purpose well, but things were changing. The problem wasn’t that strong cryptography, including asymmetric encryption, had entered the public domain. This knowledge had been out there for almost two decades. What radically changed in the 1990s is that people sat up and took notice.
Advancements in computers and the networks connecting them led some people to imagine very different futures, enabled by the power of connected machines. Some simply spotted commercial opportunities. Others, however, dreamt of a new society, freed from the shackles of conventional governance.
States determined to maintain control of encryption could probably strike deals with aspiring businesspeople. However, a much more powerful force to counter is social change. The internet and nascent World Wide Web opened many people’s eyes to an entirely new universe where amazing things could be done: ideas could be shared with like-minded strangers anywhere in the world; items could be traded globally without a sales counter; virtual societies could be established among people who could never physically meet.
Behind the more extreme end of these visions were enthusiasts who realized that these new activities could happen without conventional societal constraints. New rules could be established, set by “We the People” in cyberspace. This wasn’t conventional anarchism, since its aim wasn’t the elimination of central state governance. Rather, it imagined cyberspace as a parallel existence within which certain aspects of state rule could be bypassed.
What all these visions fundamentally relied on was an emerging cyberspace where secrets could be kept. These future worlds required encryption, not just for confidentiality but also for its ability to facilitate anonymity. Somewhat surprisingly, cryptography suddenly found itself to be the flag around which proponents of these ideas rallied. Groups such as cypherpunks and crypto-anarchists published manifestos declaring how important cryptography was to achieving their visions of a different society.33 For example, Timothy May, in his “Crypto Anarchist Manifesto,” referred to asymmetric encryption (presumably with RSA in mind) as a “seemingly minor discovery out of an arcane branch of mathematics” that will “come to be the wire clippers which dismantle the barbed wire around intellectual property.”34 Powerful stuff!
Such utopian views of how cryptography could transform the world arose, in part, from a deep distrust of the state. However, concerns about existing controls on cryptography did not come from only the fringes of mainstream society. Many technologists could see just how important cryptography would be in the future. They were concerned that state restrictions on cryptography would hamper the development of security in cyberspace.
States were nervous, and rightly so. Widespread access to encryption and anonymity technologies threatened the effectiveness of several aspects of current state governance, including intelligence gathering and dealing with crime. Export controls on encryption technology now seemed like a weak dam wall, close to bursting point, and the champions of a less inhibited access to cryptography could see the cracks forming. A perhaps unlikely alliance of freethinkers, technologists, corporations, and civil libertarians began to argue vocally for the relaxation of controls over cryptography. They released free cryptographic software, such as Phil Zimmermann’s Pretty Good Privacy (PGP).35 They issued legal challenges, such as Bernstein v. United States.36 They even declared war.
Crypto War
The so-called crypto war began in the 1990s and continues to this day (some would argue the war is much older). War is, of course, too strong a term, since barely a shot has been fired in anger, but the arguments about control of cryptography have been heated, and occasionally framed in terms of life and death.
The center of the crypto war has been, and remains, the US, although it is a global conflict. Many attribute the start of the crypto war to President Bill Clinton’s administration, which sought to control the use of cryptography in a time of rapid technological change. The essential idea was simple, in both description and sophistication: Want to use cryptography? Use as much as you want, for whatever you like; just make sure you slip us a copy of the decryption key. Ouch! Really?
According to this proposal, officially termed key escrow, users of encryption would need to use approved algorithms and hand over a copy of the decryption key to the government. This decryption key would be accessed by the state only if a legal warrant to do so had been granted by the courts.
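In cryptographic terms, escrow can be pictured as wrapping each message key twice: once for the recipient and once for the escrow authority. The sketch below is my own hybrid-encryption rendering of the idea, using Python’s cryptography package; the actual US proposal of the era, the Clipper chip, embedded an equivalent escrow field in tamper-resistant hardware.

```python
# A minimal sketch of key escrow via hybrid encryption. The per-message
# AES key is wrapped under two RSA public keys: the recipient's and the
# escrow authority's. Names and structure are illustrative.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def escrowed_encrypt(plaintext: bytes):
    session_key = os.urandom(32)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    return (nonce, ciphertext,
            recipient.public_key().encrypt(session_key, OAEP),  # for recipient
            escrow.public_key().encrypt(session_key, OAEP))     # escrow copy

nonce, ct, for_recipient, for_escrow = escrowed_encrypt(b"private message")

# Under a court warrant, the authority unwraps its copy of the key...
session_key = escrow.decrypt(for_escrow, OAEP)
# ...and reads the traffic, without involving either correspondent.
assert AESGCM(session_key).decrypt(nonce, ct, None) == b"private message"
```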
Well, you can just imagine how well key escrow went down as an idea for controlling the use of cryptography! There were the security concerns: Can the state really be trusted to look after decryption keys? There were the logistical concerns: How would key escrow systems be built and integrated into business processes? There were the cost concerns: Who is paying for key escrow?37 Most fundamentally, however, was this question: What problem did key escrow really solve?
If the ultimate goal was to enable the state to access data that had been encrypted by targets of an investigation, why would a potential target use an escrowed cryptosystem in the first place? By the 1990s, encryption software was widely available, so anyone really wishing to hide their data could do so without using an approved algorithm and escrowing the key. Would using non-escrowed encryption now become a crime? As the cypherpunks paraphrased it: “If crypto is outlawed, only outlaws will have crypto.”38
Key escrow was not adopted. Export controls on cryptography were slackened. As the century turned, we entered an era of apparent pragmatism with respect to control of cryptography. States such as the UK, unable to license and escrow cryptography, instead passed laws requiring suspects who possessed encrypted data to release decryption keys under warrant. This is quite a cumbersome type of control to apply from the state’s perspective. First there are legal technicalities to be followed. Then the suspect has to be persuaded to cooperate. Finally, the suspect has to actually find their decryption key and make it available, and not “forget” it, “lose” it, or simply not know how to begin going about finding it.39
Meanwhile, use of cryptography increased rapidly. Encryption software became widely available. Cryptography was embedded into our daily technologies. Devices containing strong cryptography were traded around much of the world without legal obstruction.
One day, not long after key escrow had faded as an initiative, I sat in the office of a wise colleague, a cryptography pioneer. “There’s no going back, is there?” I observed. “Cryptography is out there, and you can’t stop anyone from using it. The crypto war has been won.” I was expressing a commonly held sentiment among those who had been following the struggles over key escrow. My colleague leaned back in his chair, eyes smiling at my naivete, and chuckled. “It will all be back; you’ll see.”
What he knew then, we all know now. The crypto war continues, and it will never have a victor.
The Age of Mass Encryption
The first decade of the twenty-first century saw few public skirmishes in the crypto war. Not because the war was over, but because one side was much too distracted by playing with all the cryptography it had fought so hard to liberate from state control. The other side, however, was far from idle.
In an age of widespread knowledge and mass use of cryptography, where anyone can invent their own strong encryption algorithm and freely use it, it might be tempting to believe the state has no advantage when it comes to tackling cryptography. It’s probably true that, in terms of cryptographic algorithm expertise, the state no longer has the significant edge it once enjoyed. Nor, it seems, does the state have an advantage in raw computing power, since even the most powerful supercomputer in the world can’t do much against AES encryption.
However, the state retains several significant advantages that can be used against cryptography. One is that the state tends to be in control of the critical physical infrastructure, such as backbone network technology, that cyberspace depends on. Another is that the state has an ability to influence and regulate the organizations whose products and services provide the means by which we engage with cyberspace, such as internet service providers. The state also has extensive resources, both computing and human, to devote to tackling the “cryptography problem.” But perhaps the state’s greatest advantage is its unique capability to see the big picture concerning how and where data flows and rests as it journeys through the networks connecting cyberspace. The state sees the whole forest, while we, at best, see trees.
It is often claimed that complexity is the enemy of security.40 We have created an incredibly complex cyberspace, and we continue to develop it further. We use cryptography in many sophisticated ways, to secure the things we do there. This cryptography needs to be carefully implemented and integrated. The keys need to be managed. Attention needs to be paid to where unprotected data resides before encryption and after decryption. Remember, there are many ways to break a cryptosystem, and few of them these days have anything to do with breaking cryptographic algorithms.
The crypto war resumed, full tilt, in 2013. Like some of the most sordid conflicts in history, it was triggered by an attempted assassination.
Kiss and Tell
When discussing Edward Snowden, it’s important to disentangle the ethical questions around what he did from the issues concerning what he revealed. Snowden, a contractor with the NSA, publicly released extensive sensitive information about, among other things, methods by which the NSA had been dealing with the surveillance challenge posed by encryption. It was a devastating series of leaks, which exposed many of the NSA’s tools, techniques, and tactics, and forced Snowden into exile.41
Whether you believe that Snowden deserves a pedestal and plinth, or handcuffs and a dungeon, is a discussion for another day (or book). What is indisputable is that we now have an indication of how certain states (in particular, the US and the UK) reacted following the failure to establish key escrow. The state could have done many things to combat the use of cryptography. What we learned from Edward Snowden is that the state did all of them.
Of course, we don’t know exactly what has been going on. Snowden released a substantial number of documents and presentations,42 but much of the information lacks detail and is hard to confirm. The overall picture, however, is clear. For intelligence purposes, the state has been doing everything it can to try to overcome cryptography.
Rather than focusing on particular allegations, which may or may not be true, it is perhaps more informative to consider the wide range of things that a state could do. Let’s imagine what might be well within the capabilities of a state, particularly one with some influence over many of the most powerful technology companies. Some of the following techniques do appear among the Snowden revelations.
A state with extensive funds, reach, and facilities could store as much of the data whizzing around in cyberspace as possible, encrypted or not. This data could include copies of all communications as they arrive at a major entry hub of the state’s national computer networks (in the UK, for example, many of the international communications flow in via undersea cables, which reach land at a few remote locations). The state could then attempt to analyze this data in order to build a comprehensive picture of a target individual’s engagement with cyberspace. Even if messages and phone calls are encrypted, linking information like who was communicating with whom (and when) to data such as web browsing history could yield a very detailed picture of a suspect’s life.
A state could come to an agreement with a company providing internet access and email services to millions of citizens. Suppose this company uses encryption to protect email and connections to the servers where client email is stored. The company could give the state access to all the metadata associated with its customers’ activities, decrypt email on behalf of the state, or provide the state with the necessary decryption keys.
A state could employ some of its cybersecurity experts to hack into a company’s networks and try to obtain data surreptitiously—for example, by seeking plaintext on unprotected internal networks.
A state could fool a computer network switch into encrypting traffic using a public encryption key belonging to the state instead of the public encryption key belonging to the intended recipient. The state could then decrypt the traffic using its private decryption key, take a copy of the plaintext, and re-encrypt the plaintext by using the intended recipient’s public encryption key. The recipient would receive correctly encrypted traffic and be none the wiser (a sketch of this key substitution follows at the end of this list of capabilities).
A state could influence encryption standardization processes in order to have a cryptographic algorithm with a backdoor approved for widespread use.
A state could develop, or purchase, cyberattack techniques that the wider world is not yet aware of and thus has no defenses against (sometimes referred to as zero-day exploits). The state could fool a suspect using encryption into clicking on a web link or opening an attachment that launched an attack on the suspect’s smartphone. Such an attack might, for example, read data before it was encrypted, steal decryption keys, or switch on the smartphone’s microphone to record the content of encrypted calls.43
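To make the key-substitution capability above concrete, here is a minimal local simulation. It uses RSA-OAEP from Python’s cryptography package purely for illustration; in practice the substitution would happen inside compromised network equipment, and the message and keys below are invented for the demonstration.

```python
# A minimal sketch of decrypt-and-re-encrypt key substitution. The sender
# believes it has the recipient's public key, but has been handed the
# state's key instead; the state relays the traffic so nothing looks amiss.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
state = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The sender encrypts with what it thinks is the recipient's public key.
ciphertext = state.public_key().encrypt(b"meet at noon", OAEP)

# The state decrypts, keeps a copy, and re-encrypts for the real recipient.
plaintext_copy = state.decrypt(ciphertext, OAEP)
forwarded = recipient.public_key().encrypt(plaintext_copy, OAEP)

# The recipient decrypts correctly and is none the wiser.
assert recipient.decrypt(forwarded, OAEP) == b"meet at noon"
```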
Frankly, Snowden’s revelations haven’t left much to our imagination.
Life after Snowden
Whether the world is a more secure place as a result of the Snowden revelations depends on your point of view. Michael Hayden, former director of the NSA, described Snowden’s action as “the greatest hemorrhaging of legitimate American secrets in the history of my nation.”44 Strictly from an intelligence perspective, Hayden may well be correct, but I would argue that, from the point of view of our security in cyberspace, we are possibly better off because wider society is now able to better appreciate and debate the relevant issues.
Perhaps most fundamentally, the revelations are a spectacular, and timely, reminder of how full of vulnerabilities cyberspace is. The internet is not a carefully designed network with strong built-in security features. The cyberspace we interact with today has evolved in a somewhat disorganized, piecemeal manner, with security often provided (if at all) as an afterthought. As an overall system, cyberspace is full of security gaps, created by weaknesses in technology, poor integration of different technologies, process failures, and bad governance. Even when we encrypt data, these gaps provide myriad opportunities for anyone trying to learn something about the data we have encrypted. I think most cybersecurity professionals were well aware of these vulnerabilities before the Snowden revelations. But as with many hidden truths, not enough was done about them until a light was shone on what was really going on.45
Snowden’s revelations have had many consequences. From a cryptographic perspective, the most significant has been the increased adoption of end-to-end encryption by many technology providers, including the likes of Apple for its messaging service. All encryption has “ends,” in the sense that data is encrypted from only one point to another. However, end-to-end encryption is a term used to indicate that the endpoints for an encrypted communication should be devices under the control of the communicating parties, and not a server in between. In particular, it should mean that the provider of the service (such as Apple in the case of its messaging service) is unable to decrypt the communications.
Much to the frustration of some state authorities, end-to-end encryption eliminates some of the routes to obtaining plaintext, such as doing a deal with (or compelling) the corporate provider of the service to cooperate. During the 2016 dispute between Apple and the FBI over access to an encrypted iPhone, supporters of the FBI claimed that Apple had chosen to “protect a dead ISIS terrorist’s privacy over the security of the American people,”46 whereas Apple’s Tim Cook claimed that giving in to the FBI’s demands would be like creating the “software equivalent of cancer.”47
At a more basic level, the revelations have at least resulted in society discussing the issue of encryption and its impact on state governance. Commenting on the challenge of accessing encrypted data, former Australian prime minister Malcolm Turnbull announced, somewhat bravely, that “the laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.”48
The very fact that so many prominent people have spoken out about encryption, whether in favor of or against the control of its use, has to be a good thing. Some states, such as Australia, the UK, and the US, have been revising relevant legislation. Many users of technology have been made aware of encryption, thus creating a demand for services deploying it. The Tor network, for example, has more than quadrupled in size since 2010.49 Many influential companies have reacted by enhancing their cryptographic security.50
Only time will tell whether we will see substantial changes to the way cryptography is implemented and used in the light of Snowden’s revelations. They have certainly given society plenty to contemplate regarding the data we generate, the digital surveillance of that data, and the dilemma arising from the use of encryption. While simply talking about these issues doesn’t solve anything, being ignorant of them is far worse.
Cryptopolitik
In one sense, the crypto war has been well and truly won by proponents of the unrestricted use of cryptography. Today, we all use strong cryptography, and there is no going back to an era in which states can control precisely what strength of cryptography is used, and for what purpose. Many governments openly recognize cryptography as being vital to establishing a secure digital society.51
Yet, the dreams of those who regarded cryptography as the facilitating technology for a new world remain unfulfilled. Cryptography creates genuine difficulties for states, who will always, perhaps justifiably, seek means of addressing them. Backdoors in cryptographic algorithms and export controls on cryptographic devices are no longer regarded as appropriate ways of dealing with the cryptography dilemma. However, some of the breakable unbreakable cryptosystems that we use today seem, at least to me, to be far too breakable. The complexity of cyberspace leaves too many points of vulnerability, which are not just routes to plaintext for state authorities but also potential attack points for others. By embracing cryptography without building into it an ability to apply state control, we have forced states to adopt approaches to tackling encryption that are not widely acknowledged, are at times disproportionate, and potentially place our computer systems at risk.52
I’m sure you are hoping for an optimistic close to this discussion. Perhaps a proposal for a crypto war peace accord? I wish I had an elegant suggestion for a way forward, but I don’t, and I’m not sure anyone else does either. I’ll at least offer some thoughts on what the future could look like, but they are certainly not proposals for a full cessation of hostilities.
Perhaps the biggest obstruction to resolving the crypto war is a combination of the behavior and nature of the two sides in the argument. Each uses inflammatory and ambiguous dialogue, and arguably fails to acknowledge the very genuine concerns that the other holds about the use of cryptography, both now and in the future. This sort of entrenchment is dangerous since, as former US president Barack Obama observed when considering regulatory change in the US, “If everybody goes to their respective corners, and the tech community says ‘either we have strong perfect encryption or else it’s Big Brother and an Orwellian world,’ what you’ll find is that after something really bad happens, the politics of this will swing and it will become sloppy and rushed and it will go through Congress in ways that are dangerous and not thought through.”53 This is kind of where we are right now.
Addressing this lack of mutual understanding requires the development of shared trust and an ongoing series of very clear conversations. The problem in the case of cryptography tends to be that one of the two sides, the intelligence community, is traditionally somewhat unknowable and reluctant to be transparent. This issue needs to be overcome, at least to some extent, if we are going to make real progress.
Another reason we find ourselves in such a predicament regarding encryption is that the architecture of cyberspace, particularly the internet, is a complete mess. This messiness can be bad for security, and it also creates opportunities for exploiting weaknesses in cryptographic infrastructure. It would surely be preferable to have a tidier, more transparent architecture, but this is not something that can easily be retrofitted. If provision were made, somehow, within any redesigned architecture for lawful access to plaintext, then it is at least vaguely conceivable that the processes and risks involved could be understood, debated, and agreed.54 Maybe.
There’s also a disproportionate percentage of technology and services under the control of just a few states. Is it any surprise that the United States, home of the internet and the most influential companies in cyberspace, exploits this advantage when encryption gets in the way of other matters? Will it ever be possible to develop a more geopolitically fair and democratic cyberspace?
We are, quite rightly, employing cryptography to defend ourselves in cyberspace. I don’t think we should stop doing this, but it could be argued that, just occasionally, perhaps, we are using too much cryptography.
Think about mobile phones for a moment. Your mobile calls are encrypted between your phone and the nearest base station, in order to prevent the call from being intercepted over the air by anyone with a simple receiver. After this point, traditionally the data has been decrypted and has entered the standard telephone network.55 It is re-encrypted only for the final part of its journey back over the air between a base station close to the recipient and their mobile phone. In other words, for most of this journey the call is not encrypted. It doesn’t need to be, because the standard telephone network is relatively hard to break into. We don’t ever hear the state complaining that we are using mobile phones to make calls. This is because, if the state really needs to, and follows due legal process, it can intercept a mobile call after the call has been decrypted. Nor, indeed, do we hear many people complaining that the state can do this. We seem to accept that the state will apply this capability responsibly, which hopefully it does.
Now consider sending a message from your mobile phone using a secure messaging app. If you use an app that supports end-to-end encryption, then your message is encrypted for the entire journey from phone to phone. This is stronger confidentiality protection than you get when you make a phone call, send an email, or even mail a letter. In one sense, this is a wonderful thing. But, is this level of confidentiality really necessary? If we were to negotiate a new relationship between the state and the individual with respect to the use of cryptography, might the users of encryption be willing to make some concessions with respect to the strength of cryptographic security they enjoy today?56
There is past precedent for restraining the use of cryptography. During the Cold War, as part of the second round of Strategic Arms Limitation Talks (SALT II), the United States and the Soviet Union agreed not to use encryption during particular types of weapons testing so that the other side could gather intelligence on the function and capability of the armaments.57 Relaxing their use of cryptography might appear to be a backward step in terms of data security, but it reassured each side about the other’s capability in order to help relieve tensions. Phone messaging and weapons testing are clearly very different applications, but the point is that sometimes security can be weakened for justifiable reasons.
Don’t get me wrong; I’m all in favor of end-to-end encryption. While cyberspace remains a messy space, with states indiscriminately capturing data, and infrastructure companies exploiting user data in less-than-transparent ways, end-to-end encryption seems the safest way to ensure that data on the move is adequately protected. I’m just proposing that in a future cyberspace, it might be possible to reimagine what is strictly necessary.
One possibility, which is more a reframing of the cryptography dilemma than a resolution, is for cyberspace to become more partitioned than it is today. Cyber “subspaces” might form that would be deemed “safer” than the rest of cyberspace. Users could join these virtual gated communities and gain a level of protection by doing so. If sufficient trust were established in the design and governance of these safe spaces, then users might accept a level of state capability to access encrypted traffic within them, as long as this government access took place openly and within the terms of the law. Outside of safe spaces, in the badlands, the crypto war would rage uncontrolled.
Elements of this partitioning concept have already emerged. Apple, for example, has created a somewhat restricted space for users of its devices, where only certain approved software can be installed. Some people criticize Apple for being overly controlling, while others embrace Apple technology because they believe it to be more secure as a result.
Restricting software downloads is one thing, but providing the capability to access encrypted traffic is quite another. Is it, in fact, even possible to engineer a system in which such access capability is guaranteed to remain under the control of the relevant authorities? It’s far from clear whether this type of system could be built. And, of course, most of the people the authorities are really concerned about would probably never use such a system.
The crypto war will cease only if we engage in careful and constructive debate about the type of cyberspace we wish to inhabit in the future. Daniel Moore and Thomas Rid offer the following argument:
The future design of cryptosystems should be informed by hard-nosed political and technical considerations. A principled, yet realistic, assessment of encryption and technology more broadly is needed, informed by empirical facts, by actual user behavior and by shrewd statecraft—not by cypherpunk cults, an ideology of technical purity and dreams of artificial utopias. Pragmatism in political decision-making has long been known as realpolitik. Too often, technology policy has been the exception. It is high time for cryptopolitik.58
Cyberspace certainly has its dangers, but most of us, taking moderate precautions, muddle through. Even those of us who know we are potentially exposed to state surveillance programs simply press Return and carry on. When we choose to deploy cryptography, I think it would be better to know precisely what security we’re getting for our efforts, and not be left to wonder what’s going on behind the scenes. We should accept that using encryption creates a dilemma, but the state’s response to this dilemma should be transparent and acceptable to us. What’s wrong with dreaming?