Two graduate students stood silently next to a lectern, listening as their professor presented their work to a conference. This wasn’t the done thing: usually, the students themselves would get to bask in the glory. And they’d wanted to, just a couple of days previously. But their families had talked them out of it. It wasn’t worth the risk.
A few weeks earlier, the Stanford researchers had received an unsettling letter from a shadowy agency of the United States government. If they publicly discussed their findings, the letter said, that would be deemed legally equivalent to exporting nuclear arms to a hostile foreign power. Stanford’s lawyer said he thought they could defend any case by citing the First Amendment’s protection of free speech. But the university could cover the legal costs only for professors. That’s why the students’ families persuaded them to keep silent.1
What was this information that U.S. spooks considered so dangerous? Were the students proposing to read out the genetic code of smallpox or lift the lid on some shocking conspiracy involving the president? No: they were planning to give the humdrum-sounding International Symposium on Information Theory an update on their work on public-key cryptography.
The year was 1977. If the government agency had been successful in its attempts to silence academic cryptographers, it might have prevented the Internet as we know it.
To be fair, that wasn’t what the agency had in mind. The World Wide Web was years away. And the agency’s head, Admiral Bobby Ray Inman, was genuinely puzzled about the academics’ motives. In his experience, cryptography—the study of sending secret messages—was of practical use only for spies and criminals. Three decades earlier, other brilliant academics had helped win the war by breaking the Enigma code, enabling the Allies to read secret Nazi communications. Now Stanford researchers were freely disseminating information that might help adversaries in future wars encode their messages in ways the United States couldn’t crack. To Inman, it seemed perverse.
His concern was reasonable: throughout history, the development of cryptography has indeed been driven by conflict. Two thousand years ago, Julius Caesar sent encrypted messages to far-flung outposts of the Roman Empire—he’d arrange in advance that recipients should simply shift the alphabet by some predetermined number of characters.2 So, for instance, “jowbef Csjubjo,” if you substitute each letter with the one before it in the alphabet, would read “invade Britain.”
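Caesar’s scheme is trivial to run in either direction once you know the shift. Here’s a minimal sketch in Python (the function name is mine; the shift of one matches the example above):

```python
def caesar(text, k):
    """Shift each letter k places through the alphabet, wrapping around."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + k) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("invade Britain", 1))   # -> "jowbef Csjubjo"
print(caesar("jowbef Csjubjo", -1))  # -> "invade Britain"
```

Note that the same secret—the shift—both locks and unlocks the message.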
That kind of cipher wouldn’t have taken the Enigma codebreakers long to crack, and encryption is now typically numerical: first, convert the letters into numbers; then perform some complicated mathematics on them. The recipient still needs to know how to unscramble the numbers by performing the same mathematics in reverse. That’s known as symmetrical encryption. It’s like securing a message with a padlock, having first given the recipient a key.
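In those modern, numerical terms, a symmetric scheme might combine each byte of the message with a shared secret key, using an operation that undoes itself. A toy illustration (XOR with a one-byte key is my illustrative choice here, far too weak for real use):

```python
key = 42                      # the shared secret, agreed in advance
plain = b"invade Britain"
cipher = bytes(b ^ key for b in plain)           # scramble with the key
assert bytes(b ^ key for b in cipher) == plain   # the same key reverses it
```

The catch, as the padlock analogy suggests, is that sender and recipient must somehow share the key before they can exchange a single secret message.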
The Stanford researchers were interested in whether encryption could be asymmetrical. Might there be a way to send an encrypted message to someone you’d never met before, someone you didn’t even know—and be confident that they, and only they, would be able to decode it?
It sounds impossible, and before 1976 most experts would have said it was.3 Then came the publication of a breakthrough paper by Whitfield Diffie and Martin Hellman; it was Hellman who, a year later, would defy the threat of prosecution by presenting his students’ paper. That same year, three researchers at MIT—Ron Rivest, Adi Shamir, and Leonard Adleman—turned the Diffie-Hellman theory into a practical technique. It’s called RSA encryption, after their surnames.*
What these academics realized was that some mathematical operations are a lot easier to perform in one direction than the other. Take a very large prime number—one that’s divisible only by itself and one. Then take another. Multiply them together. That’s simple enough, and it gives you a very, very large semiprime number—a number whose only factors, apart from one and itself, are those two primes.
Now challenge someone else to take that semiprime number and figure out which two prime numbers were multiplied together to produce it. That, it turns out, is exceptionally hard.
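You can see the asymmetry even at a small scale. Here’s a sketch in Python, using illustrative seven-digit primes (real RSA moduli run to hundreds of digits):

```python
p, q = 1_000_003, 1_000_033   # two small primes, chosen for illustration
n = p * q                     # 1_000_036_000_099 -- computed instantly

def factor(n):
    """Naive trial division: test every candidate divisor up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # no divisor found: n itself is prime

print(factor(n))  # (1000003, 1000033), after about a million trial divisions
```

Multiplying took a single step; undoing it took a million, and the gap widens explosively as the primes grow. The best known classical factoring algorithms are far cleverer than trial division, but they remain hopeless against the 600-odd-digit semiprimes used in practice.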
Public-key cryptography works by exploiting this difference. In effect, an individual publishes his semiprime number—his public key—for anyone to see. And the RSA algorithm allows others to encrypt messages with that number, in such a way that they can be decrypted only by someone who knows the two prime numbers that produced it. It’s as if you could distribute open padlocks for the use of anyone who wants to send you a message—padlocks only you can then unlock. They don’t need to have your private key to protect the message and send it to you; they just need to snap one of your padlocks shut around it.
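Here is what that padlock looks like in miniature—a textbook RSA sketch with tiny numbers (real keys use primes hundreds of digits long, plus padding schemes this toy omits):

```python
p, q = 61, 53                  # the two secret primes
n = p * q                      # 3233: the public modulus, published for all
phi = (p - 1) * (q - 1)        # 3120: easy to compute only if you know p and q
e = 17                         # public exponent; (n, e) is the open padlock
d = pow(e, -1, phi)            # 2753: the private key (modular inverse of e)

message = 65
locked = pow(message, e, n)    # anyone can encrypt: 65**17 % 3233 == 2790
unlocked = pow(locked, d, n)   # only the keyholder can decrypt: back to 65
assert unlocked == message
```

Publishing n and e gives away nothing useful, because deriving the private key d requires phi, and phi requires the two primes—which is exactly the factoring problem above.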
In theory, someone could pick your padlock by figuring out the right combination of prime numbers. But that takes infeasible amounts of computing power. In the early 2000s, RSA Laboratories published some semiprimes and offered cash prizes to anyone who could figure out the primes that produced them. Someone did scoop a $20,000 reward—but only after setting eighty computers to work on the number nonstop for five months. Larger prizes for longer numbers went unclaimed.4
No wonder Admiral Inman fretted about this knowledge reaching America’s enemies. But Professor Hellman had understood something the spy chief had not.5 The world was changing. Electronic communication would become more important. And many private-sector transactions would be impossible if there were no way for citizens to communicate securely.
Professor Hellman was right, and you demonstrate it every time you send a confidential work e-mail, or buy something online, or use a banking app, or visit any website that starts with “https.” Without public-key cryptography, anyone at all would be able to read your messages, see your passwords, and copy your credit card details. Public-key cryptography also enables websites to prove their authenticity—without it, there’d be many more phishing scams. The Internet would be a very different place, and far less economically useful. Secure messages aren’t just for secret agents anymore: they’re part of the everyday business of shopping online.
To his credit, the spy chief soon came to appreciate that the professor had a point. He didn’t follow through on the threat to prosecute. Indeed, the two developed an unlikely friendship.6 But then, Admiral Inman was right, too—public-key cryptography really did complicate his job. Encryption is just as useful to drug dealers, child pornographers, and terrorists as it is to you and me when we pay for some printer ink on eBay. From a government perspective, perhaps the ideal situation would be if encryption couldn’t be easily cracked by ordinary folk or criminals—thereby securing the Internet’s economic advantages—but government could still see everything that’s going on. The agency Inman headed was called the National Security Agency, or NSA. In 2013, Edward Snowden released secret documents showing just how the NSA was pursuing that goal.
The debate Snowden started rumbles on. If we can’t restrict encryption only to the good guys, what powers should the state have to snoop—and with what safeguards?
Meanwhile, another technology threatens to make public-key cryptography altogether useless. That technology is quantum computing. By exploiting the strange ways in which matter behaves at a quantum level, quantum computers could potentially perform some kinds of calculation orders of magnitude more quickly than regular computers. One of those calculations—as the mathematician Peter Shor showed back in 1994—is taking a large semiprime number and figuring out which two prime numbers you’d have to multiply to get it. If that becomes easy, the Internet becomes an open book.7
Quantum computing is still in its early days. But forty years after Diffie and Hellman laid the groundwork for Internet security, academic cryptographers are now racing to maintain it.