In 2010 the government of India approached Research In Motion (RIM), the manufacturer of BlackBerry devices, with a demand. India wanted to monitor the encrypted e-mails and BlackBerry messages (a form of Internet chat) that passed across RIM's servers between corporate clients. And, it wanted help in decrypting those messages. This was, the Indian government argued, essential to allow it to combat terrorism. And, they added, if you don't give us this access, we'll pull your wireless license and close down BlackBerry in India. Faced with the loss of more than one million Indian corporate customers, RIM compromised—it found a way to share with the Indian government details on where to find the encrypted messages the government wanted (in effect, identifying the servers where the information originated) without actually decrypting the messages itself.1
In making this arrangement (and, by all reports, placating the Indian government), RIM nicely illustrated two distinct, yet linked, issues that relate to the security of cyber communications and are deeply embedded in all aspects of the conflict in cyberspace. One is the issue of encryption—when and how communications and information can be encoded and decoded so that only the people you want to read the information have access to it. The other is wiretapping—that is, whether and under what rules someone can intercept messages in transit and divert or copy them for their own purposes. The linkage between the two seems apparent. Wiretapping a message you cannot decrypt still leaves the content of the message concealed, and even unencrypted information is safe if the transmission channels are absolutely secure. Those engaged in a conflict in cyberspace want both capabilities—to intercept or divert information and to decode it so that they can read its contents.
And, therein hangs a tale. India is not alone in its interest in being able to read people's encrypted mail. Other governments from Dubai and China to the United States have the same interests—for good or for ill. Indeed, late in 2010 and again in 2012, the U.S. government disclosed tentative plans to expand its wiretapping laws to apply to encrypted e-mail transmitters like BlackBerry, social networking websites like Facebook, and software that allows direct peer-to-peer messaging like Skype.2 How well (or poorly) a nation achieves this objective bears directly on its ability to win conflicts of espionage, crime, and war in cyberspace—and also on how much or how little the government intrudes into the communications of its private citizens.
The Internet is a means, essentially, of transmitting information across large distances at a ridiculously rapid pace. All of the various types of attacks we have seen described in the preceding chapters are, fundamentally, based on the ability to corrupt the flow of accurate information—whether by stealing a portion of it for misuse, disrupting the flow so that accurate information doesn't arrive in a timely manner, or inserting false information into an otherwise secure stream of data. If the confidentiality and integrity of the information being transmitted cannot be relied on, then the system or network that acts based on that data is vulnerable. That, in a nutshell, is the core of much of cyber warfare—the ability to destroy or corrupt the flow of information from your enemies through espionage or attack—and the collateral real-world effects of that destruction.
What if your data could not be deciphered or altered? Or (slightly less useful, but almost as good) what if you could make your data tamper-evident, so that any corruption or interception was known to you? If your goal is to protect your own information from attack, there are a number of ways by which you might achieve that objective. One of the earliest defensive measures taken in cyberspace was a method as old as human history—protecting data and information by encryption.
Encryption as a defensive mechanism is not, as we shall see, a perfect solution. But, what it can do, if done properly, is ensure that your information is confidential and can't be read by anyone else. In contemporary usage, it can also provide you with a means of confirming that the information has not been tampered with in any way (with modern algorithms, any alteration of the encrypted data yields gibberish or causes decryption to fail outright). Encoding information can keep it secret and sealed. Properly used, it can also allow you to share information with a trusted partner while excluding others.
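A short sketch shows what this tamper-evidence looks like in practice, using the authenticated-encryption recipe in the third-party Python cryptography package; the library and the sample message are illustrative choices only, not anything specific to the systems discussed here:

```python
# A minimal sketch of tamper-evident (authenticated) encryption,
# assuming the third-party "cryptography" package is installed.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # a fresh random symmetric key
f = Fernet(key)

token = f.encrypt(b"move the fleet to Salamis")
print(f.decrypt(token))       # b'move the fleet to Salamis'

# Flip one byte of the ciphertext to simulate tampering in transit.
tampered = bytearray(token)
tampered[10] ^= 0x01
try:
    f.decrypt(bytes(tampered))
except InvalidToken:
    print("tampering detected: decryption refused")
```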
But, this expansion of cryptographic capabilities to protect cyber networks comes with an uncertain cost to order and governance. Advances in cryptographic technology have made it increasingly difficult to crack a code. Code breaking is as old as code making, naturally. But, as technology has evolved, encryption has gained an increasing advantage over decryption, and recent advances have brought us to the point where decryption can, in some cases, be effectively impossible. This has the positive benefit of allowing legitimate users to protect their lawful secrets, but it also has the inevitable effect of distributing a technology that can protect malevolent uses of the Internet. If the U.S. government can encrypt its data, so can China, or the Russian mob, or a Mexican drug cartel.
An alternative strategy that works in concert with encryption is to make your information transmission immune to interception. Here, too, the changes wrought by Internet technology have made interception more difficult and enhanced the security of communications. In the world of telephone communications, for example, intercepting a communication was as simple as attaching two alligator clips to the right wire—hence the word wiretapping. Communications through the Internet are wholly different; the information being transmitted is broken up into small packets that are separately transmitted along different routes and then reassembled when they arrive at their destination. This disassembly of the data makes effective interception appreciably more difficult.
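A toy simulation makes the mechanics concrete (this is an illustration of packet switching in miniature, not a real network stack):

```python
import random

message = "attack at dawn on the eastern ridge"

# Sender: split the message into numbered four-character packets.
packets = [(seq, message[i:i + 4])
           for seq, i in enumerate(range(0, len(message), 4))]

# The packets take different routes and arrive out of order.
random.shuffle(packets)

# Receiver: reassemble the message by sequence number.
reassembled = "".join(chunk for _, chunk in sorted(packets))
assert reassembled == message

# A tap that sees only some of the packets captures only fragments.
```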
These two technological developments have led to controversy over critical policy issues that bear on cyber conflicts today: Can governments require the manufacturers of encryption technology to limit its distribution, to prevent strong cryptography from falling into malevolent hands? Can they require communications transmission companies to assure the government access to communications? Can they, in effect, require that code makers build in a back door by which they can access and decrypt encrypted messages? And, can they require ISPs to provide them access to the data as it transits the net?
And, if they can, under what rules would these back doors be accessed? At the whim of a government? Or, only with an appropriate court order? Under what sorts of standards?
Encryption, or secret writing, is one of the oldest forms of human activity. Secret coding has been around almost as long as there have been secrets worth keeping. One of the earliest instances is recorded in The Histories by Herodotus, who described how secret writing techniques saved the Greeks from being conquered by Xerxes, the King of Persia. A Greek, witnessing the buildup of the Persian fleet, sent a message to Sparta warning of Xerxes' plans. To keep the message from being intercepted, he concealed the writing beneath the wax covering of a wooden tablet. Forewarned and prepared, the Greeks were able to assemble their navy and defeat the Persian fleet at the Battle of Salamis.3
But, simply hiding a message (beneath wax on a tablet, in a hollowed-out pen, or in the heel of a shoe) isn't, strictly speaking, encoding the message. It merely provides a means of preventing the message from being detected and intercepted while en route. Encryption, or encoding, is intended to keep the message secret even if it is physically intercepted.
Conceptually, encryption involves three separate components that come together—the plaintext, the algorithm, and the key. The plaintext is the substance of the message that the sender wants to convey. Of course, this information doesn't have to be a text at all; it can be the firing code for a nuclear missile or the formula for Coca-Cola; quite literally, it can be any data of any form that is more valuable to the sender if kept from others.
The algorithm is a general system of encryption; in other words, a general set of rules for transforming a plaintext. An example of an algorithm is a cipher where, say, each letter of the plaintext (assuming it is actually a written text) is replaced with another letter. The algorithm here is "replace each letter with another." The third, and most vital, component of an encryption system is the key, that is, the specific set of instructions that will be used to apply the algorithm to a particular message. A cipher key might therefore be "replace each letter with the letter that comes five places after it in the English alphabet." Using this simple algorithm and key, the plaintext "cat" would be converted to "hfy," and that result is known as the ciphertext. The critical feature, of course, is that, as an initial premise, only someone who has the algorithm and the key can decrypt the ciphertext, so even if it is physically intercepted, the contents remain confidential.
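This toy algorithm is simple enough to capture in a few lines of Python; the sketch below illustrates the algorithm-plus-key idea and is not a cipher anyone should actually use:

```python
def caesar(text: str, key: int) -> str:
    """Apply the shift algorithm: move each letter 'key' places along the alphabet."""
    shifted = []
    for ch in text.lower():
        if ch.isalpha():
            shifted.append(chr((ord(ch) - ord("a") + key) % 26 + ord("a")))
        else:
            shifted.append(ch)  # leave spaces and punctuation untouched
    return "".join(shifted)

print(caesar("cat", 5))    # hfy -- the ciphertext
print(caesar("hfy", -5))   # cat -- the same algorithm with the key reversed
```

Note that decryption is simply the same algorithm run with the inverse key, which is why keeping the key secret is everything.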
We've been creating ciphertexts for quite a long time. The earliest mentions of coded writing can be found in the Kama Sutra (which counseled women to record their liaisons in secret writing). Julius Caesar's use of codes was so common that the type of algorithm he used (the letter-shift system mentioned previously) is sometimes called the Caesar shift cipher.4
Of course, where some seek to keep secrets, others seek to reveal them. It is one of the truisms of encryption that the key to keeping a secret is the key—not the algorithm. The algorithm—the general method—is often too widely known to be kept usefully secret. So, the strength of the key—how hard it is to guess—determines how good the encryption is.
To return to the Caesar shift cipher: if we restrict ourselves to simply shifting the alphabet, there are only 25 possible keys, depending on how far down the alphabet we shift the letters. That's a pretty weak key—if someone knows the general algorithm, the key can be cracked by brute force in a short time. If we loosen the algorithm a bit, however, and instead of a shift rule apply a rule that allows any rearrangement of the 26 letters of the English alphabet, then the number of keys increases astronomically to more than 400 septillion (!) possible arrangements, making a brute force effort to discover the key difficult indeed.
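The arithmetic behind these claims is easy to check:

```python
import math

caesar_keys = 25                        # only 25 nontrivial shifts exist
substitution_keys = math.factorial(26)  # any rearrangement of the alphabet
print(substitution_keys)                # 403291461126605635584000000 (~4 x 10**26)

# Even testing a billion keys per second, exhausting the substitution
# keyspace would take on the order of ten billion years.
years = substitution_keys / 1e9 / (60 * 60 * 24 * 365.25)
print(f"{years:.2e}")                   # about 1.3e+10
```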
Technically, a cipher replaces plaintext at the level of the letter, while a code replaces higher-level items (e.g., words or phrases). Thus, a code might be "apple = President."
Codes have the advantage of being much less subject to frequency analysis of the sort developed in the ninth century (and described below). On the other hand, codes are much less flexible than ciphers (since they are limited to the predefined code substitutions), and the loss of a codebook can be catastrophic for the secrecy of every message it has been used to encode.
By contrast, cipher keys are more flexible in encrypting content and more readily created, distributed, and, if need be, changed. As a general rule, one can say that ciphers are of greater utility than codes.
Of course, brute force is not the only method of breaking a cipher. Since at least the ninth century (when the method of frequency analysis was first reported by Arab scholars),5 it has been well established that a cipher can be broken by analysis rather than by brute force. Frequency analysis is relatively simple to describe. It rests on the knowledge that, for example, in English the letter "e" is the most common letter. Other common letters in regular usage include "a," "i," "n," "o," and "t." With this knowledge, derived from analysis external to the code, the deciphering of a ciphertext is made much easier. It is far more likely than not that the most frequently used cipher letter, whatever it may be, represents one of these common English letters. In a ciphertext of any reasonable length, there is virtually no chance, for example, that the most common cipher letter is being used to signify a "q" or a "z."
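Frequency analysis is also trivially easy to automate. The sketch below counts letters in a short sample ciphertext (the Caesar-shift encryption, with a key of five, of an English sentence); the most frequent cipher letters point straight at common English letters:

```python
from collections import Counter

# The Caesar-5 ciphertext of "assemble one thousand ships and sail for salamis".
ciphertext = "fxxjrgqj tsj ymtzxfsi xmnux fsi xfnq ktw xfqfrnx"

# Count how often each cipher letter appears, ignoring spaces.
counts = Counter(ch for ch in ciphertext if ch.isalpha())

# The most frequent cipher letters almost certainly stand for common
# English letters (e, t, a, o, i, n, s...), never for "q" or "z".
for letter, n in counts.most_common(5):
    print(letter, n)   # "x" (cipher for s) and "f" (cipher for a) top the list
```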
This sort of knowledge makes decryption easier and reduces the need for a brute force approach. Indeed, it is a fair assessment of the art of cryptography that, until the dawn of the computer era, those decrypting ciphers had the upper hand. Either the keys themselves could be stolen, or the ciphers could be broken using sophisticated techniques like frequency analysis. Even the notoriously difficult German Enigma cipher from World War II eventually yielded to analysis.
There things stood for a number of years. Those who wanted to keep secrets were at a fundamental disadvantage—in order to transmit a secret message, they first had to exchange a (secret) key to the message. Besides the possibility of backward analysis to determine what the key was, there were all sorts of problems with the exchange of keys in the first instance. Keys could be lost, stolen, revealed, or compromised in any number of ways. By their very nature, secret keys were only good for as long as they stayed secret.
In the late 1970s, however, enterprising cryptographers developed a way to encrypt information using the multiplication of two impossibly large prime numbers and certain one-way mathematical functions. (A one-way function is one that works easily in only one direction; most mathematical operations, like addition and subtraction, work in both directions—you can get the results from the precursors or the precursors from the results, so to speak.) With one-way functions, a recipient can publish the result of his impossibly large multiplication as a public key. People who want to send the recipient a message can use the public key to encrypt it. And, since only the recipient knows how to break down this impossibly large number into its original primes, only he can decrypt the message.6 Today, you can embed this type of encryption into your e-mail system using a program that can be purchased over the Internet for less than $100. If the users at both ends of a message use this form of public key encryption, the secret message they exchange becomes, effectively, undecryptable by anyone other than the key's creator,7 unless, of course, a hacker attacks the creation of the key at its source by compromising the key generation process in some way.
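The flavor of the mathematics can be conveyed with a toy version of the RSA scheme, using absurdly small primes (real systems use primes hundreds of digits long, which is what makes the function effectively one-way; the sketch requires Python 3.8 or later for the modular-inverse step):

```python
# Toy RSA with tiny primes -- an illustration only, never a real key.
p, q = 61, 53
n = p * q                  # 3233 -- published as part of the public key
phi = (p - 1) * (q - 1)    # 3120 -- computable only by someone who can factor n

e = 17                     # the public exponent; (n, e) is the public key
d = pow(e, -1, phi)        # 2753 -- the private exponent, kept by the recipient

message = 65                   # a message, encoded as a number smaller than n
cipher = pow(message, e, n)    # 2790 -- anyone can encrypt with the public key
assert pow(cipher, d, n) == message   # only the holder of d can decrypt
```

Everything an eavesdropper sees is the pair (n, e) and the ciphertext; recovering d requires factoring n, which for a 600-digit n is beyond any known computation.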
This last scenario was thought to be entirely theoretical. Nobody could break into the key generation process—until someone did. In March 2011, the leading manufacturer of public encryption security key devices (those little key fobs that people carry around, which generate random six-digit numbers every 60 seconds), the company RSA—named after its founders Rivest, Shamir, and Adleman, who invented the RSA public-key algorithm—was hacked.8 The product, known as the SecurID token, was a way of granting remote access to a set of servers for people working offsite. Later, in May 2011, someone who had access to the stolen RSA data used that knowledge to attack the systems of Lockheed Martin (a major defense contractor).9 Given the sophistication of the breach at RSA and the focus of the attack on a defense contractor, espionage rather than theft was suspected, and inferences have been drawn that a state actor (probably the Chinese) was behind the attack. Whatever the source, this experience demonstrates that even for the strongest of encryption keys, key security is vital.
But, the U.S. government is not going to hack RSA or any other key generator. And so, the government's solution: if we can't decrypt information by analysis or brute force, we should force those who manufacture encryption software to build into the system a back door decryption key that would allow the government to read any encrypted messages. These decryption keys would be stored (or escrowed) with a trusted third party (say, a judge at a federal court) who would only release the key under specified, limited circumstances. Hence, this cluster of issues often goes by the name of key escrow—a system where the makers of technology would be required by law to include decryption keys that governments can get access to. Needless to say, many privacy advocates opposed this effort—and their opposition was successful.
In the 1990s, the FBI sought to require encryption technology manufacturers to include such a back door, an initiative that went by the name of the Clipper Chip.10 In part, opposition to Clipper was based on civil liberties objections, but it was also based on a practical realization that the government itself was a beneficiary of strong encryption to protect its own secrets, and on the recognition that the United States had no monopoly on the development of encryption algorithms. If the United States required back doors in American products, the market would naturally tend to favor non-American products, and our own national security systems would have a back door into our own secrets.11
Indeed, at this juncture, encryption technology with exceedingly strong encryption keys is widely available. Free software with a 256-bit key size (for example, that provided by TrueCrypt.org) is readily available and easy to install, as are open source public key encryption programs. In effect, with the death of the Clipper Chip back door movement, it is now possible to encrypt data in a way that cannot be decrypted even after a year of effort.12
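Back-of-the-envelope arithmetic shows why a 256-bit key defeats brute force entirely:

```python
keyspace = 2 ** 256    # possible keys for a 256-bit cipher

# Suppose a fantastically powerful adversary tests 10**24 keys per second
# (a trillion machines making a trillion guesses each). Exhausting the
# keyspace would still take on the order of 10**45 years.
years = keyspace / 1e24 / (60 * 60 * 24 * 365.25)
print(f"{years:.1e}")  # about 3.7e+45
```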
Pre-Internet, wiretapping was an easy physical task. Early telephony worked by connecting two people who wished to communicate through a single, continuous wire (typically made of copper). The image that captures this concept most readily is of a telephone operator moving plugs around on a board and, by that effort, physically establishing an end-to-end wire connection between the two speakers.
That made wiretapping easy. All that was required was attaching a wire to a terminal post and then hooking the connection up to a tape recorder. The interception didn't even need to be made at the central Public Switched Telephone Network (PSTN) switching station. Any place on the line would do. And, there was (with limited exceptions) only one American telephone company, AT&T, and only one system, so coordination with the PSTN was easy if it was authorized.
Things became a little more complicated when AT&T was broken up into the Baby Bells, but the real challenge came with the development of new communications technologies. As microwave, FM, and fiber optic technologies were introduced, the technical difficulty of intercepting communications grew rapidly.13
Today, the problem is even more complex. In addition to cellular telephones, we now have instant messaging, e-mail, and text messaging for written communications. If you want to communicate by voice, you can use Skype (a web-based video conferencing system) or Google Chat (a browser-based chat program).14 Businesses use web-conferencing tools for meetings, and many people (particularly of the younger generation) communicate while present in virtual worlds through their avatars. Twitter and Facebook allow instant communication among large groups of people.
In short, we have created an almost infinite number of ways in which one can communicate.15 When combined with the packet-switching nature of Internet transmissions and the development of peer-to-peer networks (which completely do away with centralized servers), the centralized PSTN has gone the way of the dodo. Indeed, the Internet Engineering Task Force (the organization that sets standards for the operation of the Internet) has rejected requests to mandate an interception capability within the architecture of Internet communications protocols, making interception of Internet-based transmissions even more difficult.16 With these changes, the laws and policies for authorized wiretapping have, effectively, become obsolete.
A small sampling of these communications platforms makes the point:

Skype
Xfire
Google Chat
Google Apps
GoToMeeting
Quick Connect
Tumblr
MySpace
Second Life
EVE Online
Chat Anywhere
Napster
Grokster
LimeWire
The law enforcement and intelligence communities face two challenges in administering wiretap laws in the age of the Internet—one of law and one of technology. The legal issue is relatively benign and, in some ways, unencumbered by technical complexity. We need a series of laws that define when and under what circumstances the government may lawfully intercept a communication. For the most part, the authorization issues are ones involving the updating of existing authorities to apply explicitly to new technologies. The technical issue is far harder to solve—precisely how can the desired wiretap be achieved?
In Katz v. United States,17 the Supreme Court held that the Fourth Amendment applied to electronic communications, and that a warrant was required for law enforcement-related electronic surveillance conducted in the United States. Katz was codified in the Omnibus Crime Control and Safe Streets Act of 1968, with particular requirements for such interceptions laid down in Title III.18 In general, Title III prohibits the interception of “wire, oral, or electronic communications” by government agencies without a warrant and regulates the disclosure and use of authorized intercepted communications by investigative and law enforcement officers.
Reflecting its pre-Internet origins, Title III originally covered only wire and oral communication. It has since been modified to take account of technological changes and now covers all forms of electronic communication (including, for example, e-mails).19 The law also regulates the use of pen register and trap and trace devices (i.e., devices designed to capture the addressing information of a call, such as the dialing information of incoming and outgoing phone calls). In general, this noncontent information may be collected without a warrant or showing of probable cause, unlike the content portions of a message.20
As a core part of its structure, Title III also incorporates certain privacy and civil liberties protections. It permits issuance of an interception warrant only on a judicial finding of probable cause to believe that the interception will reveal evidence that “an individual is committing, has committed, or is about to commit” certain particular criminal offenses.21 Title III has minimization requirements, that is, it requires the adoption of procedures to minimize the acquisition and retention of nonpublicly available information concerning nonconsenting U.S. persons who are not the targets of surveillance, unless such a person's identity is necessary to understand the law enforcement information or assess its importance. In other words, if while investigating a terrorist case, the wiretap intercepts a conversation with a doctor, or a lover, or a pizza salesman that is not relevant to the investigation, that conversation must be minimized and information not meeting that standard may not be disseminated. Most significantly, electronic evidence collected in violation of Title III may not be used as evidence in a criminal case.
Just as Title III applies in the law enforcement context, the Foreign Intelligence Surveillance Act (FISA) authorizes the collection of communications for certain intelligence purposes. Passed in 1978, the Act creates the mechanism by which orders permitting electronic surveillance can be obtained from a specialized court—the Foreign Intelligence Surveillance Court (FISC). This court was initially authorized to issue orders for targeting electronic communications in the United States of both U.S. and non-U.S. persons, based on a showing of probable cause of clandestine intelligence activities, sabotage, or terrorist activities on behalf of a foreign power. The law was subsequently expanded to authorize the court to issue warrants for physical searches (1994), the use of pen registers/trap and traces (1999), and the collection of business records (1999).
To obtain a FISC order authorizing surveillance, the government must meet the same probable cause standard as in a criminal case; it must make a showing of probable cause to believe that the target of the electronic surveillance is a foreign power or an agent of a foreign power. As with Title III, the law imposes minimization obligations on the agency intercepting the communications.22
While amending the laws authorizing wiretaps to accommodate changes in technology has been, for the most part, a ministerial exercise of amending legislation, the same cannot be said of maintaining the technical capacity to tap into the ever-changing stream of communications.
In 1994, Congress attempted to address this problem through the Communications Assistance for Law Enforcement Act, known as CALEA.23 CALEA's purpose was to ensure that law enforcement and the intelligence agencies would not be left behind the technology curve, by requiring telecommunications providers to build the ability to intercept communications into their evolving communications systems.
CALEA, in effect, imposed a new technical requirement on communications providers. Initially, many digital telephone systems did not have interception capabilities built in.24 CALEA required providers to change how they built their telecommunications systems so that they had that capacity—an effort that could be achieved, generally, without interfering with subscriber services. (As an aside, CALEA also provided for a federal monetary subsidy to the telecommunications providers to pay for the changeover.)
As originally adopted, CALEA's requirements were applicable only to facilities-based telecommunications providers, that is, companies who actually owned the lines and equipment used for the PSTN and Internet. Information services providers (in other words, those who provide e-mail, instant messaging, chat, and other communications platforms that are not dependent on traditional telecommunications) were excluded, at least in part because those forms of communication were still in their infancy and of relatively little importance.25
Finally, and perhaps most importantly, CALEA didn't say that telecommunications providers had to give the government a way of decrypting encrypted messages placed on their networks for transmission. A telecommunications provider had to decrypt messages only if it provided the encryption services itself. So, if an individual independently encrypted a message at its origin, all that CALEA required was that the telecommunications provider have a means of intercepting the encrypted message when authorized to do so.
Hence the problem, which is twofold: cybercriminals, cyber spies, and cyber warriors are increasingly migrating to alternative communications systems—ones like Skype and virtual worlds that are completely disconnected from the traditional PSTN networks covered by CALEA. And, along the way, they are increasingly using encryption technology that prevents law enforcement, counterespionage, and counterterrorism experts from listening in on communications.26 On the wiretapping front, the problems are both technical and legal.
Technologically, the distributed nature of these networks makes true interception extremely difficult. In a peer-to-peer network, there is no centralized switching point. In a packet-switching system, where the message is broken into many parts, there is no place on the network where the whole message is assembled, save at the two endpoints. While peer-to-peer systems can be used for illegal activity (e.g., illegal file sharing27), they are also an integral part of legitimate file-sharing activities.28
Instead, the government must use sampling techniques to intercept portions of a message and then, when a problematic message is encountered, use sophisticated techniques to reassemble the entire message (often by arranging for the whole message to be redirected to a government endpoint). The FBI developed such a system in the late 1990s, called Carnivore.29 It was designed to sniff packets of information for targeted messages. When the Carnivore program became public, the uproar over this sort of interception technique forced the FBI to end the program.
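Conceptually, a Carnivore-style sniffer is not complicated. The sketch below, using the third-party scapy library, flags packets containing a keyword; the port filter and the keyword are illustrative assumptions, and the program sees only traffic that actually passes the machine on which it runs, which is precisely the placement problem discussed below:

```python
# A bare-bones packet sniffer in the spirit of Carnivore, using scapy
# (pip install scapy); it must run with administrative privileges on a
# machine that the targeted traffic actually traverses.
from scapy.all import sniff, TCP, Raw

KEYWORD = b"target-term"   # a hypothetical selector

def inspect(pkt):
    # Examine only TCP packets that carry a data payload.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and KEYWORD in pkt[Raw].load:
        print("flagged:", pkt.summary())

# Watch 100 packets of unencrypted web traffic on the local interface.
sniff(filter="tcp port 80", prn=inspect, count=100)
```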
It is said that the NSA uses a packet sniffing system, called Echelon, for intercepting foreign communications traffic that is significantly more effective than Carnivore ever was.30 Indeed, according to the New York Times, the Echelon system was at the core of the NSA's post-9/11 domestic surveillance system.31 While little is publicly known about the capacity of the Echelon system, one observer (an EU Parliamentary investigation) has estimated that the system could intercept three million faxes, telephone calls, or e-mails per minute.32
In order for a system like Carnivore or Echelon to work, however, the routing system must ensure either that traffic is routed to the sniffer along the way or that the sniffer is physically located between the two endpoints. But, therein lies the problem: many of the peer-to-peer systems are not configured to permit routing traffic to law enforcement sniffers.
To address these problems, the U.S. government has spoken publicly of its intent to seek an amendment to CALEA. According to public reports, the government would seek to extend CALEA's wiretapping requirements for traditional telecommunications providers to digital communications technologies. Doing so would, according to the government, close a growing gap in existing surveillance capabilities that increasingly places criminal or espionage activity behind a veil that the government cannot pierce—a phenomenon the government calls Going Dark.
The proposed changes would have three components: (1) an expansion of CALEA's decryption requirement to all communications service providers who give their users an ability to encrypt their messages; (2) a requirement that foreign-based service providers doing business in the United States have a domestic office to which the government may go where interceptions can take place; and (3) a requirement that providers of peer-to-peer communications systems (like Skype) alter their software to allow interception. The government, speaking through Valerie Caproni, the General Counsel for the FBI, has argued that these proposed changes (which are expected to be the subject of legislative consideration in the coming years) would not give additional wiretapping authority to law enforcement officials, but simply extend existing authority “in order to protect the public safety and national security.”33
The principal legal issues in this proposal will, as before, involve authorization rules and standards for operation. Presumably, if the government is to be taken at its word, it will seek no greater interception authority than exists today for wire communications, that is, routinized access to noncontent header information joined with a probable cause standard for access to content.
In some conceptions, the CALEA expansion might also implicate the Fifth Amendment protection against self-incrimination. In general, the Constitution says that an individual cannot be compelled to give evidence against himself through testimony. When questioned he is, by law, entitled to stand mute and say nothing.
Imagine an individual who encrypts messages he sends across the Internet. The courts have yet to determine whether or not an effort to compel that individual to disclose the decryption key constitutes a violation of his Fifth Amendment privilege. In general, the answer to the question will turn on whether disclosing the decryption key is thought of more like the production of a physical object (such as the physical key to a lock box), which may be compelled, or like the production of a person's mental conceptions (such as the memorized combination to a safe), which may not be.34
These Fifth Amendment considerations are likely to be of limited applicability. Even in many peer-to-peer applications (like Skype), the encryption keys are held by a centralized provider, which uses the user-generated keys to enable encrypted communications from the variety of different platforms where the user might log in. In effect, to make the system more convenient, the user allows a third-party coordinator (here, Skype) to have access to the key. By doing so, the user likely waives his Fifth Amendment protections.
Moreover, despite the widespread availability of encryption tools that provide solid security, it seems that almost nobody uses them—not even the bad guys. Every year, the U.S. court system publishes a report on the wiretapping activities of the federal government. In the 2010 report,35 of the more than 3,100 wiretaps authorized, only six involved any encryption. And even those six cases didn't stop the government's efforts; it was able to get the evidence it was seeking every time.
At the bottom, however, the issues raised by the nascent proposal are more policy questions than legal questions. Consider a short list of these sorts of questions:
Is implementation of an expanded CALEA even technically feasible in all cases? How will software developers who are providing peer-to-peer services provide access to communications when there is no centralized point in the network through which the data will have to pass? Presumably, this will require developers to reconfigure their software products in ways that permit the interception and decryption.
Think, for example, of an open-platform encryption program like TrueCrypt (that is, one developed by a public consortium, where the underlying code is open to all for inspection), in which users retain sole possession of their own generated keys. Here, the users might retain Fifth Amendment rights against self-incrimination that would protect them against the compelled disclosure of their keys, but could CALEA be amended to require that commercial software vendors who manufacture such programs include decryption back doors? The answer is unclear.
And, even if they could, what then? Depending on how broad the modified CALEA requirements are, the economic costs of modifying the existing platforms could run into the millions, if not billions, of dollars. When CALEA was first implemented, the federal government made funds available to offset the costs of the upgrades.36 Would it do so again, and to what degree?
More significantly, what would be the security implications of requiring interception capabilities in new technologies? Building in these capabilities would necessarily introduce potential vulnerabilities that could be exploited, not by those with authorized access, but by hackers who find a way to crack the interception capability itself.37
And, finally, there are issues to be considered in connection with international perceptions of American conduct. In recent years, efforts have been made by various foreign governments to secure access to Internet communications.38 It is difficult, if not impossible, for the United States to oppose such efforts in international forums when its own policy favors expansions of interception capabilities domestically. Indeed, our stated public policy favors Internet freedom, in large part as a way of energizing democracy movements around the world39—a policy that is difficult to square with a domestic move toward greater governmental interception capabilities.
Jay Wack has the air of an evangelical preacher. He looks the part too, with black wrap-around glasses that shade his eyes. But, Wack isn't preaching religion in the classical sense. Instead, he is preaching a new conception of encryption. Wack works with Ed Scheidt, who is one of the grand old men of cryptography. When the CIA commissioned a new sculpture for its offices in Langley, Virginia, it wanted one that stood out—so it asked a sculptor, Jim Sanborn, to construct a sculpture with a puzzle in it. Sanborn, in turn, collaborated with Scheidt to put four separate code sections on the Kryptos sculpture. Today, more than 20 years after the sculpture was first unveiled, only three of the code sections have been decrypted. The fourth remains unsolved, and Ed Scheidt has moved on to new encryption challenges.
The problem with encryption systems, as Wack and Scheidt will tell you, is that they are based on static encryption keys, that is, keys that apply only at either end of the communication chain. In other words, the sender uses a particular key to encrypt the information on the front end, and the recipient uses the same key (or a related one) to decrypt it on the other. The key is independent of the message and can travel separately from it.
The direction Wack and Scheidt are headed in is a novel one. They are creating dynamic keys that actually travel with the information being transmitted. What this means, in practice, is that an encryption header can travel with any digital object—and so authorize access to the content contained in a data set. In the end, instead of the encryption protecting the transmission of information from point to point, the encryption will be directly attached to the content and protect the information itself. Even when the data is at rest in a computer system, the dynamic encryption will limit who can see it (and even allow people with different permissions to see different parts of the data set). One beauty of this approach is that it will solve a persistent problem in communications security—how to deal with different levels of secrecy and access. Using dynamic keys, each bit of data is associated with a level of permission access.
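A rough sketch conveys the flavor of the idea, though it is an illustration of per-section keys and a traveling access header only, not Wack and Scheidt's actual design; it again assumes the third-party Python cryptography package:

```python
# Each section of a data set is encrypted under its own key. A header
# travels with the object listing its sections; a reader can open only
# the sections whose keys he actually holds.
from cryptography.fernet import Fernet

keys = {"public": Fernet.generate_key(), "secret": Fernet.generate_key()}

document = {
    "header": ["public", "secret"],  # the access map travels with the data
    "public": Fernet(keys["public"]).encrypt(b"summary: supplies adequate"),
    "secret": Fernet(keys["secret"]).encrypt(b"detail: 4,200 troops at grid NK-7"),
}

def read(document, held_keys):
    """Decrypt only the sections for which the reader holds a key."""
    return {
        section: Fernet(held_keys[section]).decrypt(document[section])
        for section in document["header"]
        if section in held_keys
    }

print(read(document, {"public": keys["public"]}))  # sees the summary only
print(read(document, keys))                        # sees everything
```

Note that the document remains protected at rest and in transit alike, and no central authority need be consulted at the moment of reading.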
This new approach also provides a way of getting around the problem of trust. As encryption is commonly practiced today, you have to trust the entity that is issuing the encryption keys. But, in the real world, as Wack will tell you, “trust is not a transitive event.”40 Just because Jim trusts Steve and Steve trusts Bob, doesn't mean that Jim should necessarily trust Bob. They may well have different reasons to trust people and apply those reasons in different ways. But, the modern system of key issuance is based on transitive trust; everyone accepts encryption standards that are accompanied by a certificate of issuance. Right now, more than 600 different companies and institutions issue those sorts of certificates that are used to verify the authenticity of messages and websites.
What happens when someone hacks one of the certificate-issuing authorities? Sadly, we now know the answer. As we discussed in chapter 6, an Iranian hacker (whom some suspect of working for the Iranian government) hacked DigiNotar, one of the certificate issuers. He used his access to DigiNotar's certificate systems to create certificates for various websites. Among those subject to attack were sites operated by major Internet players like Google, Microsoft, and Mozilla. The certificates would let the hacker impersonate the sites to their users and redirect traffic to places the users thought were legitimate—and then allow him to collect, for example, their e-mail traffic.41
The use of dynamic keys offers the promise of breaking this cycle of the abuse of trust. When encryption follows the data, permission to access the information will be allowed locally, without the need for reference to a centralized certificate authority. From a user's perspective, that looks like a great improvement.
From the perspective of governments worldwide, however, it may be a problem. Dynamic encryption may make wiretapping irrelevant. If data can be encrypted at both ends, and also while in transit, intercepting it may not really advance your interests, since it will be undecipherable. In the never-ending tug-of-war between those who want the ability to destroy or corrupt the flow of their enemies' information through espionage or attack, and those who want to protect that information from destruction or corruption, the protectors seem to be gaining the upper hand. Whether that's a good thing or a bad thing depends on where you sit. It's great for democracy advocates in the Middle East but problematic for signals intelligence experts at the NSA.