“here’s something interesting. . . .”
A casual handoff of an academic paper from a graduate student to a professor. Ron Rivest, a twenty-nine-year-old assistant professor at the Massachusetts Institute of Technology, had no reason to believe that this paper was any more interesting than the hundreds of papers, articles in journals, and technical memos he had already seen in his nascent career in academia. One of its authors, Whit Diffie, had worked in the same building—Tech Square in Cambridge, where the AI lab was one floor above Rivest’s office at the Laboratory for Computer Science. But neither that name nor that of the coauthor, Martin Hellman, was familiar to him. And actually, Rivest knew very little about encryption and virtually nothing about how sensitive a topic it was. Nor did the paper contain any breakthroughs in mathematical reasoning; the spirit of Fermat was nowhere to be found in its equations.
Even so, “New Directions in Cryptography” turned out to be more than interesting to Rivest: it thrilled him. Ultimately, it changed his life.
The paper appealed to Rivest’s heart as well as his head. Rivest was a theoretician, but one for whom simple abstractions were not enough. The ideal for him was actually putting the ethereal mechanics of math to work, of making a tangible difference in the world of flesh and dirt. Diffie and Hellman’s breakthrough wedded the spheres of abstraction and reality, applying an original mathematical formula to meet a need in society. Ron Rivest wanted to spend his time in the neighborhood where those two realms met.
Despite a prodigious talent for math, Rivest did not grow up as a classic numbers nerd. His father had been an electrical engineer at the General Electric lab at Schenectady, New York, and Rivest had taken advantage of the strong science programs in the public high school there. For one summer, he’d attended a special math program at Clarkson College. But as high school graduation loomed, he mulled over careers in psychology or law. He wound up majoring in mathematics at Yale but only, he remembers, because “it had the fewest course requirements, and it allowed me to take a lot of other courses.” These included plenty of classes in psychology, history, and other sojourns sans slide rule. Mathematics, he says, was “just one of many things I was doing.”
He speaks of this in his characteristic soft, thoughtful cadence, a ruminative mumbling that draws a listener closer. Rivest is a balding man with pleasantly plump cheeks, neatly bearded. He certainly does not appear to be the sort of man who poses a threat to national security. While at Yale, Rivest attended a few marches protesting the Vietnam conflict, but he was far from a flaming activist. Thoughts of sedition had never truly crossed his mind.
At Yale, Rivest discovered computer science. While taking courses offered by the engineering department, he realized that programming offered an opportunity to merge theory with tangible effect, and he fell in love with that form of instant karma. He used his programming skills in a part-time job for an economics professor. Working on a huge punch-card-munching IBM mainframe, Rivest hacked away at arcane subjects like price indices in Latin America or New Zealand—and felt just as powerful as if he were moving mountains. If Yale had offered a computer science major back then, Rivest would have signed up in a minute. In any case, after graduating from Yale in 1969 with a math degree, he went on to graduate school at Stanford, in the four-year-old computer science department.
Rivest spent much of his time at Stanford’s cutting-edge artificial intelligence lab, helping with a fairly quixotic project involving an autonomous robot rover. The idea was to get the electronic beast to roam the parking lot with no human intervention, a typical overly optimistic task for AI workers in the 1960s. He had terrific fun with this, and was fascinated with the idea of making computers “smart.” But the problems of making robots behave forced him to concentrate on hard-core engineering problems, and he didn’t want to get too far from theory. He increasingly became drawn to understanding the mathematics of computation itself. His guru was not the AI elder John McCarthy but Don Knuth, Stanford’s Jedi Master of algorithms. But Rivest’s goal was always applying theory.
“Artificial intelligence gets to be a bit mushy—it’s hard to tell what it is you’re doing, and hard to tell when you’ve done something right,” Rivest explains. “But with theory you can make a crisp model and say, ‘This is what I want to do and here’s the solution to it.’” There was nothing like using the beauty of mathematics to solve a problem. Not only was it possible to pull a cerebral arrow from your quiver and hit the bull’s-eye dead center, but you had the equivalent of a celestial arbiter—your proof—ringing the buzzer to let you know you’d scored. So while Rivest enjoyed writing AI software programs, his doctoral thesis involved database retrieval algorithms and search techniques. Very Knuth-ish. And in a yearlong postdoc at the Institut National de Recherche en Informatique et en Automatique (INRIA) outside Paris, he concentrated on other theoretical problems.
In the fall of 1974, Rivest accepted his post as an assistant professor on a tenure track at MIT. It was an ideal job, one that would enable him to pursue his theoretical interests in a department that also allowed him the freedom to work on programming problems. Rivest had been married since graduating from Yale. At twenty-seven, he seemed poised to begin a productive yet quiet life as an academic in one of America’s best scientific institutions. From his eighth-floor window in the boxlike Tech Square building in Cambridge, he would watch the gorgeous campus sunsets, their drama enhanced by pollution spewed out by Boston-area industry. And then he would return to his algorithms.
In December 1976, and throughout that entire winter, the algorithms Rivest grappled with were the ones suggested by Diffie and Hellman’s “interesting” paper. It might be more accurate to say that he was consumed by the formulas missing from that cryptologic manifesto. While the two Stanford researchers had indeed presented a mathematical outline for a new way of passing secret messages—and also digitally “signing” messages so that a communication could be definitively associated with its author—when it came to an implementation that one could really use, they’d come up dry. The Diffie-Hellman key exchange approach allowed two parties to set up a common key, but there was no obvious way that it could be extended to signatures. (Merkle’s not-yet-published knapsack solution also fell short of this.) Diffie and Hellman had speculated on various ways that one might eventually come up with a workable system where each individual could have his or her own key pair, one public and one kept secret. But without the proper mathematical scaffolding, it was really nothing more than a suggestion. It all hinged on finding sufficiently powerful one-way functions. Was there indeed a set of these that could stand as the reliable foundation of a volks-cryptosystem? A set of functions so sound that the system based on them would be impervious to all sorts of eavesdroppers and codebreakers, even highly motivated ones equipped with high-speed computers, deep cryptographic experience, and a touch of genius themselves?
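The key-exchange half that Diffie and Hellman did deliver can be sketched in a few lines of modern Python. This is a toy illustration only, with numbers chosen purely for readability; a real exchange uses a prime hundreds of digits long.

```python
# Toy Diffie-Hellman key exchange -- tiny, insecure numbers for illustration.
p = 23              # a publicly known prime modulus
g = 5               # a publicly known base

a = 6               # one party's secret exponent
b = 15              # the other party's secret exponent

A = pow(g, a, p)    # first party announces g^a mod p
B = pow(g, b, p)    # second party announces g^b mod p

# Each side raises the other's public value to its own secret exponent;
# both arrive at g^(a*b) mod p without ever transmitting it.
shared_1 = pow(B, a, p)
shared_2 = pow(A, b, p)
assert shared_1 == shared_2
```

An eavesdropper sees only p, g, A, and B; recovering the shared value from those requires undoing the exponentiation, the one-way step that makes the scheme work.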
Answering those questions became Rivest’s obsession. Though the mathematical component of the quest was exciting in itself, the process was charged with a thrilling frisson, in that a successful solution could potentially kick off an entirely new kind of commerce—business done over computer networks. This is important, Rivest thought, and immediately began evangelizing the challenge to his colleagues.
Leonard Adleman was the first one to fall victim to Rivest’s exhortations. He was a young mathematician who also split his time between the computer science lab and the math department. One day that December, he recalls, he walked into Rivest’s office just a few doors down from his own at Tech Square. “Did you see this paper?” Rivest asked. “It shows how you can build this secret code, where if I wanted to send you something and we wanted it to be secret, and somebody was listening . . .”
As Rivest gushed about the workings of public key, Adleman asked himself, Do I care about this? Unlike Rivest, Leonard Adleman worshipped theory, pure and simple. He often thought about Gauss, Euler, Fermat . . . giants of previous centuries who had discovered the foundations of mathematical truth, blue-sky brainiacs without regard for any practical applications their constructs may have had. These geniuses were as gods to Adleman, and he longed for nothing less than to play in the same arenas of pure mind. This stuff about cryptography that so excited Rivest sounded to Adleman like some problem about how to build a better automobile or something. Not the sort of intellectual gauntlet that a math god like Carl Friedrich Gauss would have jumped at. So Adleman waited patiently until Rivest was finished, then remarked, “That’s very interesting, Ron.” And changed the subject.
Rivest had more luck with another recent addition to MIT’s computer faculty. Just that month, Adi Shamir, a rail-thin, witty Israeli, had arrived at MIT for a visiting professorship in the Laboratory for Computer Science. Shamir was having a hectic time. Though he was a world-class mathematician, he had yet to learn much about computer algorithms. So he had been unhappily surprised when, several weeks earlier, Rivest had sent him a letter “to discuss the contents of the advanced algorithm course you will teach this spring term.” Shamir winced: bad enough an algorithm course—but an advanced one? To doctoral candidates? Fortunately, Shamir was a lightning-quick study. As soon as he arrived at Tech Square he zoomed to the library and checked out a shelf full of books on the subject; in the next two weeks, he learned everything he needed to know about algorithms. It was sometime during that remedial reading period that his new colleague, Ron Rivest, popped into his office and enlisted him in the effort to implement public key cryptography.
Once he got a look at it, Shamir agreed with Rivest that the Diffie-Hellman paper was significant. Not that it was groundbreaking from a mathematical point of view. He figured that if you took anyone experienced in number theory and tried to explain the Diffie-Hellman scheme to him, it would have taken exactly two minutes. The novelty was how the Stanford guys took something that had absolutely no relation to cryptography in the past and suddenly applied it to a new field. Shamir quickly became Rivest’s partner in the search for the perfect mix of one-way functions.
As the winter progressed, Rivest and Shamir became friends; with Adleman they formed a jolly threesome. Adleman, at first almost as a social concession, joined in the algorithmic hunt. “We were roughly the same age, we were all in the same discipline, and we liked each other, so we became not only colleagues and collaborators but hung out all the time,” Adleman says. Adleman and Shamir were bachelors, and Rivest’s more domestic existence served as a sort of anchor to the group, both at work and in his home in Belmont, a warm, open apartment with access to a nice yard. (Adleman lived in an apartment in Arlington and Shamir had a place in Cambridge.) As the weeks progressed, the young men, with adjoining offices on the eighth floor of Tech Square, began working seriously on their quest.
Not surprisingly, Rivest was the most focused of the group. Though he taught classes during this period, his mental efforts never strayed far from crypto. “Whatever Ron decides to do, he does extremely well,” says Adleman. “If he decided, say, to start building rocket ships, I’d put my money on it that in five years he’d be one of the five best rocket builders on earth.” Shamir was similarly dogged. “Adi’s like an intellectual lion; you just throw some meat in front of him and he’ll chew it up,” says Adleman.
Adleman himself acted as more of a foil. Of the three, he was the one who most looked and acted like a classic, dreamy mathematician—the kind of shaggy-haired young guy who would be the helpless prey of a wacky heroine in a screwball comedy (by the end of the movie, though, we’d learn that he had his own devilish streak). Perhaps once or twice a week, Rivest and Shamir would come up with a scheme, and then present it to Adleman, the group’s Mr. Theory, who would then set about to identify its flaws and break the scheme, sending the other two mathematicians back to the blackboard. To Adleman the exercise was like swatting flies, and not much more intriguing. Even weeks into the effort, he was convinced that the whole project was not really worth his while—it was too grounded in the real world. He understood that both his friends had this sense that the potential practical applications made the quest desirable. That didn’t matter to Adleman. He loved math because its beauty transcended earthly concerns.
At first, every scheme they came up with was easily obliterated by an Adleman attack. Frustratingly so. “We experimented with a lot of different approaches, including variations on things that Diffie and Hellman suggested,” says Rivest. “We weren’t happy with the approaches we came up with.” At one point, they got so discouraged that they wondered whether an answer existed at all. Maybe Diffie and Hellman’s apparent breakthrough was a dud. So for a little while, they switched gears and attacked the problem from the opposite end, trying to come up with a proof to show that public key cryptography was impossible. “We didn’t get very far at that,” says Rivest.
In February, the three MIT mathematicians went to the Killington ski resort in Vermont. It was definitely a working holiday. Even as the three computer scientists tried to teach themselves to ski, their minds were never far from the problem. For Shamir, and even more for Rivest, it was almost a biological drive; Adleman was literally along for the ride. “All the way up in the car, around the fire, riding the ski lifts, that’s what they were talking about, so that’s what I was talking about,” he says. Of course, when actually schussing down a mountain on skis, they couldn’t continue the discussion—so they thought about it. Shamir later recalled, only half facetiously, that they settled into a routine of each racing down the hill for a half hour devising a new public key cryptography scheme. And then the others would break the scheme. On only the second day that the Israeli had ever been on skis, he felt he’d cracked the problem. “I was going downhill and all of a sudden I had the most remarkable new scheme,” he later recalled. “I was so excited that I left my skis behind as I went downhill. Then I left my pole. And suddenly . . . I couldn’t remember what the scheme was.” To this day he does not know if a brilliant, still-undiscovered cryptosystem was abandoned at Killington.
In a way, their difficulties were only to be expected. Why would anyone think that three young computer science assistant professors could ever come up with a sound cryptosystem, let alone a bulletproof scheme that for the first time in history allowed people to communicate with each other in total secrecy without having to make arrangements beforehand? A reasonable mind would conclude that this could only be done by someone intimately familiar with the field. If you had a magical instrument that measured cryptographic knowledge, the combined experience of the MIT Three wouldn’t have moved the needle even a tickle.
But such ignorance was perhaps their most valuable asset. “We were extremely lucky,” Shamir later said. “If we’d known anything about cryptography and known about differential sequences and Lucifer and DES we probably would have been misled into expanding those ideas and using them for public key cryptography. But we were rank amateurs—we knew nothing about cryptography. And as a result we were just exploring the ideas we were taught at university.”
These ideas were a mathematical grab bag that suggested all sorts of possibilities—everything from linear algebra to equation sets. And they went through them all. Generally they’d meet in Rivest’s office, scrawling equations on the blackboard. Someone would come up with an idea and they’d think about it for a while, and then maybe they’d see a flaw with it. “Sometimes I would break my own scheme, or Adi would break his, or I would break Adi’s,” says Rivest. The more promising possibilities would go to Adleman, who, despite his initial lack of interest, was developing quite a talent for locating, then tugging at, the threads that would unravel a given scheme.
Eventually, they found a system that looked like it might fly. It was about the thirty-second candidate. Adleman immediately thought this one looked more interesting than the predecessors. He pulled an all-nighter before he broke it—“It took real research to break it, as opposed to observation,” he says—and discovered that he had mixed feelings about his success. He was now hooked, too. (Several years later, some researchers published a paper proposing an almost identical scheme, only to be embarrassed when other mathematicians rediscovered Adleman’s “scheme 32” attack.)
By then their solutions were beginning to utilize the idea of a promising one-way function: factoring. Though Knuth had suggested this to Diffie and Hellman, the Stanford researchers hadn’t followed up on it; by coincidence, Rivest was settling on his former mentor’s hunch.
Once again, factoring is a mathematical problem tied to the use of prime numbers. A prime number, of course, is one that cannot be produced by multiplying two smaller whole numbers together; its only divisors are itself and the number one. If you multiply two large primes together, then, you get a much larger number that isn’t a prime. To factor that number, you have to somehow reverse the process, identifying the two original seeds that produced it. This had been understood as a hard problem since the third century B.C., when Eratosthenes of Alexandria devised a mathematical process called a “sieve” to try to perform this task. At that time, people considered factoring to be virtually the same problem as trying to figure out whether a number was a prime or not. Some fourteen hundred years later, Fibonacci improved the method somewhat, but by no means did he offer a way to reasonably break down a large product into its two parent primes. When Gauss in 1801 recognized that factoring and finding primality were two different problems, he identified the former conundrum as a vexing but critical challenge:
The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. . . . The dignity of the science itself seems to require that every possible means be explored for the solution of a problem so elegant and celebrated.
Gauss never did find an efficient solution to the factoring problem, and no one else did either, though no proof existed that a solution was impossible. Not that it was a very hot topic in the mid-1970s. “Factoring at the time was not a problem that people cared about very much,” Rivest says. “Publications were few and far between.”
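The sieve Eratosthenes devised, and the brute-force factoring it hints at, can both be sketched in a few lines of Python. The point of the illustration is the asymmetry: listing small primes and multiplying two of them is trivial, while trial division, essentially the best idea available for millennia, must grind through every candidate divisor.

```python
def sieve(limit):
    """Sieve of Eratosthenes: cross out multiples to find all primes up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

def trial_factor(n):
    """Recover the prime factors of n by trial division -- quick for small n,
    hopeless when n is the 200-digit product of two large primes."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```

For example, `trial_factor(53 * 61)` recovers the two primes almost instantly, but the number of divisions needed grows with the square root of n, which is why the method collapses for numbers hundreds of digits long.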
Still, as the MIT Three continued trying different variations of schemes to implement the Diffie-Hellman concept, they became increasingly drawn to using factoring in their system.
On April 3, 1977, a graduate student named Anni Bruce held a Passover seder at her home. Rivest was there, and Shamir, and Adleman. For several hours ideas of mathematical formulas and factoring were put aside for a recapitulation of the escape of the Jewish people from Egypt. As is customary with seders, people downed a lot of wine. It was nearly midnight when Rivest and his wife returned home. While Gail Rivest got ready for bed, Ron stretched out on the couch and began thinking about the problem that had consumed him and his colleagues for months. He would often do that—lie flat on the sofa with his eyes closed, as if he were deep in sleep. Sometimes he’d sit up and flip through the pages of a book, not really looking, but reworking the numbers. He had a computer terminal at home, but that night he left it off. “I was just thinking,” he says.
That was when it came to him—the cognitive lightning bolt known as the Eureka Moment. He had a scheme! It was similar to some of their more recent attempts in that it used number theory and factoring. But this was simpler, more elegant. Warning himself not to get overexcited—Shamir and Adleman, after all, had broken many of his previous proposals—he jotted down some notes. He did allow himself the luxury of saying to his wife that he’d come up with an idea that just might work. He doesn’t remember phoning the guys that night. Adleman, though, insists that he received a call sometime after midnight.
“I’ve got a new idea,” Rivest announced, and explained it.
Essentially, Rivest’s idea was to strip the factoring problem down to almost naked essentials. A public key is generated by multiplying two large (over 100 digits), randomly chosen prime numbers. Easy. Then another simple step (if you have a computer): randomly choose yet another large number, one with certain specified, easy-to-verify properties. This would be known as the encryption key. The complete public key consists of both that encryption key and the product of those two primes.
Rivest then provided a simple formula by which someone who wanted to scramble a message could use that public key to do so. The plaintext would now be ciphertext, profoundly transformed by an equation that included that large product. Finally, using an algorithm drawn from the work of the great Euclid, Rivest provided for a decryption key—one that could only be calculated by using the two original prime numbers. Using the decryption key, one could easily revert the ciphertext to the plaintext message.
Thinking of it another way, on its way to ciphertext, the original message was intimately intertwined with the product of the two primes. What made the information in the plaintext unreadable was a mathematical transformation involving that large product—a transformation that could only be reversed if you knew what those two primes were. Then everything would become clear.
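The whole mechanism fits in a few lines of modern Python. The primes below are absurdly small, chosen purely to show the moving parts; a real key uses primes of a hundred digits or more, and the exponent arithmetic is the same.

```python
# Toy RSA -- tiny primes for illustration only.
p, q = 61, 53
n = p * q                   # 3233: the public product of the two primes
phi = (p - 1) * (q - 1)     # computable only by someone who knows p and q

e = 17                      # public encryption exponent, chosen coprime to phi
d = pow(e, -1, phi)         # private decryption exponent, found via Euclid's algorithm

message = 65
ciphertext = pow(message, e, n)    # anyone holding the public key (e, n) can do this
recovered = pow(ciphertext, d, n)  # only the holder of d can undo it
assert recovered == message
```

Everything an eavesdropper sees is e, n, and the ciphertext; computing d requires phi, and computing phi requires the primes p and q, which is exactly the factoring problem.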
Some of the mathematics of the decryption key—which works as the private key in this system—was derived from the work of another legendary mathematician, Leonhard Euler, who in 1763 devised an equation that dealt in the remainders of numbers obtained after dividing whole numbers. Almost two hundred years after its Swiss inventor first conceived it, an idea that had been deemed valuable only in theoretical math had found an application in the real-world mechanics of codemaking.
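The result in question is Euler’s theorem: for any number a sharing no factor with n,

```latex
a^{\varphi(n)} \equiv 1 \pmod{n}, \qquad \varphi(pq) = (p-1)(q-1),
```

so if the encryption and decryption exponents are chosen with $ed \equiv 1 \pmod{\varphi(n)}$, then $(m^e)^d \equiv m \pmod{n}$ and the ciphertext unwinds back to the original message.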
The scheme satisfied all of Diffie and Hellman’s requirements. A user could confidently broadcast a public key, because its essential component was only the product of the two primes. If snoops wanted to unscramble an intercepted message that had been encrypted with the public key, that information would be useless. In order to cook up a decryption key, they’d need the original primes. How could they do that? Only by factoring, and even Gauss couldn’t crack that nut. This was the beauty of the one-way function: easy to do if you’re going in the right direction, next to impossible if you approach it from the wrong end. If the people using the system used primes as big as Rivest was specifying, factoring that product would require hunkering down with some supercomputers for a long winter—and for some billions of winters thereafter. As long as factoring remained difficult, this new scheme was secure.
The scheme wasn’t limited to encryption, either. If you used the decryption (private) key to scramble a number, that jumbled result could be unscrambled by using the encryption key and the product of the primes—the public key. Since only the owner of the closely held private key could do this, this process would reliably authenticate the source of the message. What Diffie and Hellman had first imagined now seemed real: a solid formula for digital signatures, the enabler for new kinds of commerce, and a means to establish trust on an electronic network.
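Running the same toy numbers in the opposite direction shows the signature at work: scramble with the private exponent, verify with the public one.

```python
# Toy RSA signature -- same tiny, illustration-only parameters as above.
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent

document = 123
signature = pow(document, d, n)   # sign with the PRIVATE key
check = pow(signature, e, n)      # anyone can verify with the PUBLIC key
assert check == document          # only the key's owner could have produced this
```

Because the two exponents undo each other in either order, the same key pair serves for both secrecy and authentication.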
The formulas sounded beautiful to Adleman. It was a much less messy system than any they’d been dealing with. Others had used relatively convoluted schemes involving multiplication, division, addition. But Rivest had hit the target dead on. “I think that’s it, Ron,” said Adleman. “I think that’s going to work.” But Adleman, too, held off on popping a champagne cork. Too often, midnight excitement dissipates when a scheme is examined in cold morning light.
When morning broke, though, the elegance of Rivest’s solution hadn’t dimmed. When the three researchers convened in Tech Square as usual, a flushed and breathless Rivest presented a manuscript to his colleagues with the whole shebang written out in a near-publishable format. It was signed Adleman, Rivest, Shamir. “I looked at this,” said Adleman, “and it was the description of what he’d said the night before.” He felt it was Rivest’s breakthrough, not his.
“Take my name off,” he said. “It’s your work.”
Rivest insisted that it was a joint project, that Shamir’s and Adleman’s contributions were crucial, that the scheme was the final point in an evolutionary process. To Rivest, it was as if the three of them had been in a boat together, all taking turns rowing and navigating in search of a new land. Rivest might have stepped out of the boat first, but they all deserved credit for the discovery. Still, Adleman objected again. Maybe Shamir had contributed conceptually, but Adleman had mostly stuck pins in various algorithmic trial balloons. No way he could take credit.
Rivest urged Adleman to reconsider overnight. “So I went home and thought about it,” said Adleman. He was, after all, a logical man. Though he felt in his bones that he didn’t deserve to share credit, he knew that as an aspiring academic, any publication credit might help when he came up for tenure. And after all, breaking their “Scheme 32” hadn’t been trivial. What if he hadn’t been around to break it, and Rivest and Shamir had gone on to publish a faulty paper—they certainly would have looked like morons if some pimply grad student cracked their scheme. Given that he had made a contribution, why fight Ron on the matter? After all, Adleman thought, it wasn’t as if this was a paper anyone would actually see. “I thought that this would be the least important paper my name would ever appear on,” he recalls. So Adleman agreed to keep his name on it, if it were listed last. Meanwhile, Adi Shamir agreed with Adleman that Rivest’s name should go first. This order determined the name of the algorithm itself: RSA.
With input from his collaborators, Rivest quickly turned his original draft into MIT Laboratory for Computer Science Technical Memo Number 82: “A Method for Obtaining Digital Signatures and Public Key Cryptosystems.” It was dated April 4, 1977. Though Adleman might still have dismissed the outcome as mathematically unimportant, a quick glance at the “key words and phrases” offered for indexing purposes demonstrated that this was at the least an unusual effort for three number crunchers from MIT. In fact, the words offered a remarkable blueprint for a network society that would not be widely discussed for twenty years:
. . . digital signatures, public key cryptosystems, privacy, authentication, security, factorization, prime number, electronic mail, message-passing, electronic funds transfer, cryptography.
With fanfare reminiscent of the Diffie-Hellman work that had first triggered the project, the paper’s first words proclaimed, “The era of electronic mail may soon be upon us; we must insure that two important properties of the current ‘paper mail’ system are preserved.” These properties were that messages remain private and able to be signed. And then the authors promised to unveil a means by which these characteristics, long accepted as only the domain of hard copy, could be used in the coming, networked era.
The paper was also notable for a more whimsical touch. Instead of what had been the standard form of delineating the recipient and sender of a message by alphabetic notation—A for the sender, B for the recipient, for instance—Rivest personified them by giving them gender and identity. Thus the RSA paper marks the first appearance of a fictional “Bob” who wants to send a message to “Alice.” As trivial as this sounds, these names actually became a de facto standard in future papers outlining cryptologic advances, and the cast of characters in such previously depopulated mathematical papers would eventually be widened to include an eavesdropper dubbed Eve and a host of supporting actors including Carol, Trent, Wiry, and Dave. The appearance of these dramatis personae, however nerdly, would be symbolic of the iconoclastic personality of a brand-new community of independent cryptographers, working outside of government and its secrecy clamps.
Despite their confident language, Rivest wasn’t sure how significant the discovery was. “It was unclear at the time whether [the scheme] would be broken within a few months,” he says. “It was also unclear whether there were better approaches.” Still, he initiated a journal publication process, with an eye to the Communications of the ACM, where he was a contributing editor. He sent copies to colleagues for peer review. One to Don Knuth. And, in his first contact with the authors of “New Directions in Cryptography,” on whose system his own was built (a connection made explicit in his paper), he sent one to Whitfield Diffie and Martin Hellman. (Rivest later explained that among researchers it is not particularly unusual for a group of academics to build upon previous work without notifying the original team until a result is obtained.)
There were still some things that needed to be nailed down before the paper was submitted to a journal. One of them was definitively pinpointing the current state of factoring—the system, after all, relied on the difficulty of extracting two long primes from their product. Through Marty Hellman, they got in touch with Rich Schroeppel, the former MIT hacker whom Diffie had visited on his transcontinental crypto adventure. (Ironically, Schroeppel had been pessimistic about the prospect of cryptosystems based on one-way functions.) Schroeppel was among the few people on earth still doing very serious thinking on factoring.
Schroeppel now was ready to discard his skepticism of one-way functions and was eager to contribute. After reading what Don Knuth had offered as the best available formula for factoring, Schroeppel had done a timing analysis of it and had a deep realization of how truly knotty the problem was: no matter how you tackled it, it seemed that the work required to factor something was many, many times larger than the effort expended on the initial multiplication. “I think it was the first time anybody had looked at how hard it was to factor,” he says. Schroeppel was impressed with the RSA paper and sent some suggestions, including an analysis of how long it would take the fastest factoring scheme (an unpublished one by Schroeppel himself) to crack keys. Conclusion: plenty long enough for a good cryptosystem.
Rivest also sent a paper to Martin Gardner, who wrote the “Mathematical Games” column for Scientific American. “He was always writing these columns about big numbers, and looking for primes,” says Rivest. Gardner had a loyal following among both amateur figure twiddlers and serious mathematicians: it was not unusual for one of his monthly dispatches to catapult a hitherto obscure problem into an international obsession.
On April 10, 1977, less than a week after Rivest’s breakthrough had occurred, Gardner wrote back. “Your digital signature scheme is indeed fascinating,” he wrote. “The whole idea behind it is new to me, and I think a very interesting column could be written around it.” He invited Rivest to explain the scheme to him personally.
An excited Rivest headed out to Gardner’s home in Hudson, New York. Gardner was an old-school gentleman and something of a scamp. The columnist performed a few card tricks; years later Rivest was still wondering how the hell he did them. The magic show completed, Gardner asked for examples of how the RSA system worked, and it was Rivest’s turn to produce magic. Eventually they decided to offer a challenge to readers of the column. Rivest would generate a public key of 129 digits and use it to encode a secret message. If the system worked as promised, no one in the world would be able to read that message, with two exceptions. One would be someone who had both a powerful computer set to break the message with brute force and a very large amount of time on his hands: if the computer was, for instance, a million-dollar PDP-10, the effort would take somewhere in the neighborhood of a quadrillion years. (This estimate, based on Rivest’s apparent misinterpretation of Schroeppel’s factoring time analysis, was an error on his part; what he meant to say was that it would take merely hundreds of millions of years to crack the code by calculation. Still not an undertaking for mortals.) The other exception, of course, was the person holding the private key matching that particular 129-digit public key. That person could decode the message in a few seconds.
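The mechanics behind the challenge can be sketched with textbook-sized numbers. This toy version uses the classic small primes 61 and 53 rather than anything resembling a 129-digit key, and it shows why the holder of the private exponent decodes in an instant while everyone else is left facing the factoring problem:

```python
# Toy RSA, for illustration only. A real key pair starts from
# primes hundreds of digits long; these fit on a napkin.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120; computable only if you can factor n
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: e * d = 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can reverse it

print(ciphertext, recovered)
```

Publishing (e, n) lets the whole world send sealed messages; recovering d without the factors p and q means deriving phi from n alone, which is exactly the factoring problem Schroeppel had judged intractable at scale.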
And if the RSA system didn’t work as promised? Then some bright, motivated reader might figure it out. In that case, Rivest, Shamir, and Adleman would present that person a $100 prize. And the RSA system would be given a quick funeral, as it would be useless for protecting people’s privacy and authenticating their identities.
Gardner’s column appeared in the August 1977 edition of Scientific American. It was spiked throughout with enthusiasm for the achievement of the three young MIT scientists. Gardner, in fact, predicted that the breakthroughs by Diffie-Hellman, and then RSA, meant an end to an entire era of codebreaking: “[They are] so revolutionary,” he wrote, “that all previous ciphers, together with the techniques for cracking them, may soon fade into oblivion.” From now on, he wrote, armed with RSA and similar systems, we would enter a golden age of secure electronic communications, where all messages could be secure, unreadable even by the masters of cryptanalysis. In fact, Gardner used the moment to declare void Edgar Allan Poe’s contention that “human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.” In Gardner’s view, the ingenuity of the Stanford and MIT “outsiders” had concocted that very cipher. The columnist, while excited by the discovery, confessed to a wistfulness at the new reality, where the spy vs. spy aspects of encryption would be relegated to antiquity. “All over the world there are clever men and women, some of them geniuses, who have devoted their lives to the mastery of modern cryptanalysis. . . .” he wrote. “Now these people are standing on trapdoors that are about to spring open and possibly drop them completely from sight.”
Gardner completed the column by printing the message encoded by Rivest with the RSA system using a 129-digit key, inviting anyone to try his or her luck, skill, and cryptanalytic prowess at breaking the code. Readers were invited to begin the process, or simply learn more about the system, by sending a self-addressed, stamped envelope to MIT and requesting a copy of the technical paper.
Though the three professors were all on summer break, the secretaries at Tech Square could attest to the instant impact of Gardner’s column—thousands of letters began pouring in. When Shamir finally returned to Cambridge after spending the summer backpacking in Alaska, he encountered a near avalanche as the stacks of envelopes that had been stored in his office engulfed him on his way to his desk.
But that was only the first indication of the excitement that Gardner’s column inspired. This was the first public notice of the movement that began with Whit Diffie’s iconoclastic quest, and it seemed to have unleashed all the pent-up frustrations of anyone who once had been temporarily obsessed with the dark art of codes, only to have sublimated that attention elsewhere, since all the good stuff in the crypto world existed only behind the Triple Fence or, perhaps, its international counterparts. Reading Gardner’s account of what seemed like a turning point in the history of cryptography—not only in terms of what the tools were but who had forged them—was like the sun breaking through after decades of gray gloom.
Len Adleman first saw the evidence of this that August, when he was browsing in a bookstore in Berkeley. Waiting to pay for his purchase, he overheard a conversation between a clerk and a customer buying a new copy of Scientific American. “Did you see the thing in here about this new code system?” asked the customer.
“Yeah, I read about it,” said the clerk. “Isn’t it wild?”
Adleman could not contain himself. “That’s the stuff we did,” he exclaimed, identifying himself as one of the three MIT professors in Gardner’s column. When the magazine buyer understood that Adleman was on the level, he held out the issue. “Would you sign this for me?” he asked.
As an instrument of crypto’s liberation, Len Adleman was suddenly being asked for autographs à la Tom Cruise. Even Fermat hadn’t gotten that kind of treatment!
And what about the people who were supposedly standing on those trapdoors Gardner mentioned—namely, the codemakers, codebreakers, analysts, and outright spooks who disappeared each day into the Cone of Silence at Fort George Meade? How did they view the work of Rivest, Shamir, and Adleman and the advances of Diffie and Hellman?
As one might expect: with sheer horror.
The midseventies had already been traumatic for the NSA. For twenty-five years, its relationship with Congress had proceeded with nary a legislative speed bump. The agency addressed only the few representatives who sat on classified intelligence oversight committees. After briefing sessions held in shielded rooms swept for bugs, the legislators routinely rubber-stamped all of The Fort’s requests. But in 1975 and 1976, the NSA found itself the focus of a fearlessly insolent investigation of its eavesdropping practices by Senator Frank Church’s Intelligence Committee. The committee was shocked to discover the extent of the NSA’s snooping efforts, particularly a strategy called Project Shamrock that included surveillance of American citizens. Church was incensed at the agency’s blithe insistence that such eavesdropping, performed without benefit of warrants, was still within its authority. The senator’s final report concluded with an almost biblical admonition on what could happen if the agency continued on its course without restraint, warning that its monitoring capabilities “could at any time be turned around on the American people and no American would have any privacy left, such [is] the capability to monitor everything. . . . There would be no place to hide.” While the NSA avoided any serious repercussions, this “indecent exposure” (as described by an NSA official in an internal memo) was sobering.
The wiser heads of the NSA obviously knew that if there was ever a time to lie low, this was it. Still, Diffie-Hellman’s work, and its alarmingly practical follow-ups, represented an encroachment into what the NSA had regarded as its birthright: the domination of cryptography. This was something that the agency could not ignore. After all, if people had access to the means to encrypt their private communications, there could be a place to hide—and a universal means to privacy was exactly what an agency charged with eavesdropping is hell-bent on preventing. Though the realization of such a threat to its mission was slow to filter through the complex bureaucracy at Fort Meade, clearly some officials recognized the problem. As early as 1975 the NSA began to work behind the scenes (where else?) to restrict the nascent academic field.
Its first efforts were directed at the National Science Foundation. The NSF was an independent government agency designed to foster research into all sorts of scientific inquiries; it was extremely common for mathematicians and computer scientists to have work funded, at least in part, by NSF grants. (These would come to include Diffie, Hellman, and the RSA team.) In June 1975, the NSF official in charge of monitoring such grants, Fred Weingarten, was warned that the NSA was the only government agency with the authority to fund research on cryptology. Weingarten was alarmed that he might have been breaking the law. So he held off awarding any new grants while he sought to clarify the matter.
What he found was interesting. Neither the NSF lawyers nor the National Security Agency itself, when pressed for documentation, could come up with any statutory justification for the agency’s claim. So Weingarten felt free to ignore the warnings and resume his grants.
Marty Hellman, for one, always appreciated Weingarten’s backbone. “When the NSA told him that he couldn’t fund cryptography, that the NSA had a monopoly on that funding, Fred not only was courageous but he handled it very well,” says Hellman. “He didn’t say, ‘You’re full of shit,’ but asked them to put it in writing so he could take it to his counsel for an opinion.”
But then came the Diffie-Hellman paper, followed by the RSA discovery. Together, of course, these created the underpinnings for the NSA’s worst fear: a communications system in which everyone used a secure code. So it seemed hardly a coincidence that on April 20, 1977—barely three weeks after Rivest dashed off his MIT technical memo—the NSA’s assistant deputy director for communications security, Cecil C. Corry, ventured from Fort Meade to the capital to meet with Weingarten. He was accompanied by a colleague. Once again the officials attempted to ax any NSF grants that might involve crypto, invoking what they portrayed as a presidential directive giving them “control” over such research. Weingarten reminded them of his previous experience, which established that no such directive was ever issued. While he did agree to forward relevant proposals to the NSA so that the security agency could offer a technical evaluation to use in considering the grant, he insisted that the process be conducted openly, with no decisions made under the shroud of silence.
The NSA people weren’t happy with that compromise, offhandedly remarking to Weingarten that “they would have to get a law passed”—presumably to ban such academic research unless the Diffies, Hellmans, and Rivests of the world were willing to deep-six their work under the classified seal. Later, Corry wrote to John R. Pasta, Weingarten’s boss, thanking him for a concession that the NSF never made—agreeing to consider “security implications” when evaluating grant proposals. Pasta made it clear that the NSF made no such promise.
In a memo he wrote at the time, Fred Weingarten summarized his views of the agency’s motives:
NSA is in a bureaucratic bind. In the past the only communications with heavy security demands were military and diplomatic. Now, with the marriage of computer applications with telecommunications . . . the need for highly secure digital processing has hit the civilian sector. NSA is worried, of course, that public domain security research will compromise some of their work. However, even further, they seem to want to maintain their control and corner a bureaucratic expertise in this field. . . .
It seems clear that turning such a huge domestic responsibility, potentially involving such organizations as banking, the U.S. mail, and cable televisions, to an organization such as NSA should be done only after the most serious debate at higher levels of government than represented by peanuts like me.
Clearly, NSA wasn’t going to slink away.
As the skies darkened inside the Beltway, the MIT professors, crypto virgins all, were unaware of anything but sunshine. They certainly didn’t know of anything in the nation’s export laws and agreements that could conceivably affect the dissemination of their work. They had no idea that while the first half of 1977 was marked by their major contribution to the field of cryptography, the latter portion of that year would be marked by the government’s efforts to stop people from knowing about such work.
That summer a letter dated July 7, 1977, arrived at the New York offices of the IEEE, addressed to E. K. Gannett, the staff director of the organization’s publications board. “I have noticed in the past months,” the correspondent began, “that various IEEE groups have been publishing and exporting technical articles on encryption and cryptology—a technical field which is covered by federal regulations. . . .” There followed detailed citations, down to the proper subsections of individual regulations that may have already been violated, not only by the publishing of certain articles in IEEE publications, but at various symposia sponsored by the group, including the event in Ronneby, Sweden, where Hellman had first presented public key crypto. As further documentation, the letter writer included photocopies of “a few pages of the relevant law,” namely the International Traffic in Arms Regulation (ITAR) code. These regulations were drawn to “control the import and export of defense articles and defense services.” While people like Ron Rivest had always assumed that defense articles were things like nuclear detonating devices, Stinger missiles, and aircraft carriers, it turned out that these “instruments of war” were joined on the United States munitions list by “privacy devices [and] cryptographic devices.” None of these was allowed to be shipped overseas without specific permission from the State Department. Furthermore, these restrictions did not cover merely the actual devices, but any “technical data” covering these “weapons.” This was defined as “any unclassified information that can be used . . . in the design, production . . . or operation” of a restricted weapon. If you disseminated that information to a foreign national, or even allowed such a person to get his or her hands on your matériel (so to speak), you were in violation of the law—an enemy of the state.
The letter writer noted that in October the IEEE planned an International Symposium on Information Theory at Cornell that would include papers on encryption. Under current law, he warned, such presentations or publications were restricted, and if preprints were sent abroad, “a difficulty could arise, because, according to ITAR, an export license is required.” His implication seemed to be that such a violation of the law could lead to fines, arrests, and even jail terms. At the Ronneby conference, the letter darkly noted, “this formality was skipped.”
The message was clear: You academic cryptographers may believe that your ideas were conceived under the protection of academic freedom and that your mathematical formulas belonged to no one but perhaps the God who first crunched them . . . but that is not the case when it comes to ideas and algorithms that can be used to encrypt information. Those ideas should be kept under close watch—and government control. Clearly, the letter implied, by allowing the Cornell conference to proceed, the IEEE would be illegally providing the equivalent of heavy-duty military equipment to our nation’s foes. “As an IEEE member,” the writer concluded, “I suggest that IEEE might wish to review this situation, for these modern weapons technologies, uncontrollably disseminated, could have more than academic effect.”
The letter was signed by a J. A. Meyer, who identified himself only by his home address in Bethesda, Maryland, and his IEEE membership number.
Who was this concerned member? It turns out that in January 1971 this same Joseph A. Meyer had written an article for an IEEE publication called Transactions on Aerospace and Electronics Systems, a paper so unusual that the editors felt compelled to include an introductory note on its controversial nature. Entitled “Crime Deterrent Transponder System,” it proposed a system whereby “small radio transponders would be attached to criminal recidivists, parollees, and bailees to identify them and detect their whereabouts.” By tagging likely lawbreakers, Meyer claimed, we could create “an electronic surveillance and command-control system to make crime pointless.” The biographical material described Meyer as a New Jersey native born in 1929 who got a math degree from Rutgers, spent two years in the air force in the early 1950s, and, from that point, “joined the Department of Defense, where he has worked primarily in the field of mathematics, computers, and communications in the United States and overseas.”
Even a moderately seasoned observer could guess that the unspoken branch of the Defense Department was a three-letter agency whose name seldom appeared in print in 1971. Indeed, several weeks after the Meyer letter was received, Science magazine confirmed the rumors: Joseph A. Meyer worked at the National Security Agency.
The timing of Meyer’s missive aroused deep suspicions about the NSA’s involvement in crushing independent work on crypto. It was sent almost at the moment that Vice Admiral Bobby Inman assumed the NSA directorship and began waging the very war that Meyer had declared against academic cryptographers. In the succeeding years, however, nothing has emerged to contradict Meyer’s claim (vociferously seconded by the NSA) that he had received no orders from Inman or anybody else to send his notorious letter. (Inman now says that on the day Meyer was writing his letter, he was getting a “turnover” briefing from the outgoing director, Lewis Allen—and the topic of public cryptography never even came up.) The Senate Intelligence Committee, looking into the matter, came to that same conclusion in 1978, and now even Marty Hellman believes that it’s probable that Meyer was simply a loose cannon. On the other hand, the NSA conspicuously refused to repudiate the letter, and Inman later asserted to Congress that he believed that Meyer’s comments were valid ones.
In any case, the Meyer letter had an immediate effect. Certainly, the organizers of the Cornell conference took the letter seriously—after all, if Meyer was right, they and the speakers at their conference could wind up in jail for simply presenting their research! It turned out, however, that the issue of technical data and the export regulations had come up a decade before at the society, and, as E. K. Gannett, the recipient of the letter, wrote back to Meyer in a fawning letter dated July 20, 1977, “All IEEE conference publications and journals are exempted from export license requirements under [ITAR] Section 125.11 (a) (1).” He went on to cite a footnote to that section that “places the burden of obtaining any required government approval for publication of technical data on the person or company seeking publication.” In other words, he was saying, it’s not our problem—it’s the problem of those members who dare perform research in the field. He expressed his gratitude to Meyer for “bringing this potentially important question to our attention,” and promised to bring the problem to the attention of “potentially interested parties.” Sure enough, on the same day, Gannett wrote a memo to Dr. Narenda P. Dwivedi, the organization’s director of technical activities, suggesting that the IEEE should perhaps ensure that the researchers “are aware of the rules of the game.”
On August 20, Dwivedi wrote to researchers at six institutions. “A concerned and good-meaning member has drawn our attention to a possible violation by authors of ITAR regulations. . . . It appears that IEEE and its groups/societies/councils are exempt but the individuals (and/or their employers) have to watch out.” Dwivedi then offered some advice for the new breed of researchers in cryptography: they “should refer the paper to the Office of Munitions Control, Dept. of State, Washington, D. C., for their ruling.”
What Dwivedi was suggesting was neatly in line with J. A. Meyer’s wishes. But if a researcher submitted a paper to the State Department, he or she would effectively yield control of the work to the government. As far as the MIT researchers were concerned, there would be, as Science put it, “a censorship system by the NSA over the research of the MIT Information Theory Group.”
One of the recipients of Dwivedi’s letter was Marty Hellman. He quickly showed it to Ron Rivest, who was spending his summer break at Xerox PARC in Palo Alto, just down the road from Stanford. “It was probably my first realization that our work might involve sensitivities,” he says. As soon as he got back to MIT, a worried Rivest consulted the institution’s lawyers.
Rivest, of course, was concerned about the legal implications of stuffing copies of Technical Memo Number 82 into the self-addressed envelopes with 35-cent stamps as part of the Scientific American “contest.” Was distribution of the RSA paper to the publication’s readers an illegal act? Could MIT be held at fault? Could Rivest and Adleman be jailed? And what about Shamir—he wasn’t even a U.S. citizen! Could MIT be cited for distributing a paper to one of its coauthors?
“The requests for our paper were from all over the world,” says Rivest. “Some were from foreign governments. It wasn’t clear to me what we should do. When you receive this sort of ominous note from the NSA that this stuff is illegal, you want to be conservative and get it checked out.” Rivest even considered the possibility that some of the foreign requests for the memo might have been planted to entrap him under the export regulations, making him a poster boy for mathematicians who ventured too deeply into the forbidden turf of spy agencies.
An answer came back quickly from the MIT administration—don’t send out those papers until this mess is resolved. To their credit, however, the heads of the university, sensitive to principles of academic freedom, worked diligently to clear the path for a free distribution of the tech memo. Despite MIT’s long history of working with national security agencies, often in top-secret research, this wasn’t easy. This time it was dealing with the National Security Agency—and at least some NSA officials, now face-to-face with an open challenge to their crypto monopoly, were themselves running scared. But the agency now faced clear-eyed foes who believed that intellectual freedom should not be compromised on the basis of unproved claims of national security. In this new academic research area, the ground rules would be laid and most of the major decisions made in the early days. Once the precedents were set, the MIT researchers believed, it would be much harder to change things in a fundamental way.
At Stanford, Marty Hellman also wasted no time getting an opinion from the university lawyers. On October 7, university counsel John J. Schwartz assured him that “it is our opinion that the dissemination of the results of the research you describe is not unlawful.” Of course there was the danger that the lawyers were wrong, and the views of J. A. Meyer reflected those of the federal government; if so, Hellman might be prosecuted for delivering his paper. Schwartz promised that if that were the case, the university would defend him. “Nevertheless,” he added, “there would always remain a risk to you personally of fine or imprisonment if the government prevailed in such a case.”
In the end, the Cornell conference—the ostensible focus of Meyer’s letter—went on as scheduled, including the very talks that Meyer had tagged as potential violations of the export rules and a threat to national security. It turned out that the professors had more backbone than the IEEE, which had urged them to vet their papers with the government. When two of Hellman’s graduate students fretted over the implications of getting cited by the government in the tender beginnings of their careers, he volunteered to read their papers himself. “I have tenure at Stanford,” Hellman told the New York Times, “and if the NSA should decide to push us in court, Stanford would back me. But for a student hoping to begin a career, it would not be so pleasant to go job hunting with three years of litigation hanging over his head.”
Ralph Merkle spoke at a panel discussion, too. And Whit Diffie, who was not scheduled to speak at the conference, went out of his way to give a presentation at an informal session. “There was no trouble at the meeting,” he says. “My attitude was that the Meyer letter should be ignored.”
Meanwhile, MIT’s lawyers were still wrangling with the National Security Agency over the legality of stuffing Tech Memo No. 82 into the 7000 self-addressed, stamped envelopes moldering in Shamir’s office and dropping them off at the post office. The academics had pointed out that a clause in the ITAR rules put them in the clear: a specific exemption on “published materials.” What did The Fort say to that?
“As usual with NSA, it was hard to get any complete answer from them,” Shamir later recalled. More to the point, it became increasingly clear that the NSA could not come up with a legal rationale for its actions. So MIT allowed its professors to proceed. In December 1977, half a year after Gardner’s column appeared and the requests began tumbling in, the namesakes of the RSA algorithm invited grad students to a pizza and envelope-stuffing party. And then the papers were mailed. The RSA algorithm had gone global.
Perhaps the existence of these thousands of papers circulating around the world, in addition to thousands of reprints and photocopies of the Diffie-Hellman papers, should have been a signal to the NSA that the crypto toothpaste was out of the tube, and no decrees or scare tactics could generate the requisite physics to squeeze it back in. But for the next few years the agency, perhaps more from reflex than an expectation of success, kept trying to suppress the intellectual activity in the crypto world that now seemed to be exploding outside the Triple Fence.
In retrospect, the institutional behavior seems strange and conflicted. But what else could the NSA do? The CIA may have had a rich and sordid history of bag jobs, honey traps, and other nut-squeezing enterprises, but the Fort Meade culture was dramatically different. Though the agency had certainly stepped over the line at times (as the Church committee documented), the organizational ethos always seemed to regard heroism in terms of the highly intellectual tasks of sucking up signals, concocting ciphers, and cracking codes. During the years that Whit Diffie crisscrossed the nation seeking guidance in his crypto efforts, there hadn’t been even a veiled threat against him, and certainly no indication that anyone would sneak up behind him in a Palo Alto coffeehouse and quietly use the end of a doctored umbrella to inject him with some exotic, slow-acting poison. That just wasn’t the NSA’s style.
A better question would be, “Given that the law might not back up the agency, why bother to fight the movement toward research in crypto?” Surely some of the smarter strategists within the Triple Fence recognized that, in some ways at least, an independent crypto movement would not be so bad for Fort Meade. Who was better positioned to exploit the revolutionary advances in cryptography than the NSA, whose expertise and knowledge of the field was infinitely ahead of anything resembling competition in either the private or public sectors?
This was the dilemma facing Vice Admiral Bobby Inman literally within days after he took his post as director in July 1977. Though he had considerable experience with crypto as the director of naval intelligence—and years before that as a military recipient of signals intelligence—the idea of outsiders making important cryptologic advances was new to him. He had believed, along with most of his peers in the intelligence community, that “the NSA had a monopoly on talent,” he now says. “If there were incredibly bright people who wanted to work on cryptographic problems, the odds were high that they either worked inside the NSA, or worked with one of the scientific advisory groups [whose work was classified].” This insurgent revolt hit him like a fighter sucker punched at the instant the bell rang to begin the fight—especially since the furor over Meyer’s letter drew articles in the New York Times and the Washington Post. Inman understood immediately that not only was this a new sort of threat to his agency, but that new, perhaps unprecedented, responses were called for.
Nonetheless, during the first few months of Inman’s tenure, the NSA kept acting as if the rules had not changed. In October 1977, an electrical engineering professor at the University of Wisconsin named George Davida applied for a patent for a device that used mathematical techniques to produce stream ciphers. He had produced the plans for this invention without any access to classified information, and his funding from the National Science Foundation had no strings attached to require him to clear his work with any defense agency. The patent itself was filed in the name of the university’s Alumni Research Foundation, conforming to a process whereby the university community retains the bulk of any invention profits by Wisconsin professors funded by the NSF. Davida next heard from the government on April 28, 1978, not with a patent approval but with a piece of paper marked SECRECY ORDER. The National Security Agency had declared his invention classified material.
It was bad enough that the NSA had banned production of his device. Worse was the dilemma in which Davida found himself. The order put a clamp of secrecy not only over his device, but over the intellectual material behind the patent application as well. In effect, the NSA regarded Davida’s actual ideas as a sort of poison, a forbidden substance he was banned from circulating. Davida had little guidance as to how he might adhere to the ban, since his materials had already been well distributed. Was he really expected to follow the requirement to report all the people who might have seen his work—in effect, to drag his colleagues into this Kafkaesque realm of ideas too dangerous to share? On the other hand, if he refused to comply with the secrecy order, he was subject to a $10,000 fine and two years in the pokey.
Davida was not alone. On that same day in April, the NSA had slapped a secrecy order on the “Phasorphone,” a voice-scrambling device created by a team of scientists led by thirty-five-year-old Seattle technician Carl Nicolai. Five months after applying for a patent for an invention that he hoped would make him a fortune, Nicolai was not only prevented from selling his invention, but also from even using it.
In spook parlance, Davida and Nicolai had become “John Does,” stripped not only of their work but of the credit due to them. As James Bamford explained in The Puzzle Palace, theirs were the relatively rare cases in which objectionable inventions were not independently discovered duplications of devices that already existed behind the Triple Fence but original creations that the government unilaterally regarded as too dangerous to be produced.
But as the NSA was to learn, the days were gone when it could casually apply a secrecy order to the work of an academic or entrepreneur and have the matter closed. Davida and Nicolai went public, organizing well-placed letter-writing campaigns, educating their representatives in Congress, and spilling the story to the press. Davida, in particular, a compact, scrappy man who was disinclined to take the U.S. government at its word, was strident in his own defense. In his case, a quick meeting of university officials led the chancellor to write a furious letter to the NSF, demanding due process. The chancellor also brought the matter before Commerce Secretary Juanita Kreps, who was apparently dismayed at how easily her patent office could become an instrument of censorship. Meanwhile, Davida raged to Science magazine that the NSA’s actions were a form of academic McCarthyism.
The NSA backed down. On June 13, it rescinded the order. Vice Admiral Inman’s later explanation, offered during a House hearing on “The Government’s Classification of Private Ideas,” was that the Davida decision was a mistake by a middle-level employee.
Several months later, the restrictions on the Nicolai patent were also reversed. Since Inman himself had signed off on that secrecy order, he later offered a “heat of battle” excuse to the House subcommittee. “From dealing day to day with the Invention Secrecy Act, you have to make snap decisions,” he explained. Overall, he insisted that the problem with those two orders was “not a faulty law but inadequate government attention to its application.” Still, that double rebuke made it clear that the NSA no longer had free rein in using the law to keep crypto in government-approved sealed containers.
By then Inman had decided to take his concerns directly to the institutions he was worried about. In what David Kahn called a “soft sell” attempt to quash work in cryptography, he embarked on a tour of research institutions. One memorable session occurred in the faculty club at the UC Berkeley campus, where Inman’s attempts at explaining his point of view were met by relentless, hostile questioning. “It was a dialogue of the deaf,” he says. Still, some comments made at the session led him to believe that a more productive relationship was possible. In an extraordinary move for an NSA director, he phoned Marty Hellman and asked for a meeting. “I liked him,” says Inman of the coinventor of public key crypto and DES’s most virulent critic. “I think he was impressed that I had driven down to see him, so his answer [to the request to begin a dialogue on how public crypto should be handled] was a tentative yes.”
Inman tried to defuse the most blatant of the NSA’s restrictive acts against researchers, many of whom believed that, more than ever, the NSA was trying to lure them behind the Triple Fence, where their findings could be restricted. One of those who learned this firsthand was Len Adleman, the once-reluctant “A” in the RSA algorithm. For years Adleman had been receiving research funds from the NSF, routinely renewing his grants every three years. In the first proposal he filed after being involved with the RSA algorithm, he included a section outlining some work involving mathematics that might apply to cryptography. After fielding the normal questions on such a proposal—budget questions and the like—Adleman was startled by a phone call from an NSF official informing him there would be additional changes. Specifically, the portion of the work that involved crypto would be funded by the National Security Agency.
“I didn’t submit a proposal to the NSA,” Adleman told him. “I submitted it to the NSF, right?”
The official conceded that this was so. But, he said, “It’s an interagency matter,” and ended the conversation.
Adleman was incensed. He understood that there might be legitimate national security concerns about the direction of academic cryptography. (What if someone suddenly released a means to crack an important code?) But this was over the line. It meant that the country’s most secretive intelligence agency was influencing the premier scientific funding agency. “In my mind this threatened the whole mission of a university, and its place in society,” he says. Adleman decided to go public with his concerns. He called Gina Kolata, the reporter for Science who had been covering the conflict, and told her the story.
Not long afterward, Adleman got another call—from Bobby Inman himself. The whole thing, explained the director of the National Security Agency, was a misunderstanding. “He was very nice,” recalls Adleman. The researcher wound up getting his entire grant funded by the NSF.
For Inman, such compromises were in the service of eventually reaching some sort of détente with the academics that would satisfy both national security concerns and the researchers’ insistence on academic freedom. He believed that, ultimately, he held the trump card—one that would not only force the academics to play ball but actually stem the potential tide of crypto implementations from covering the world. This winning hand lay in the laws known as the International Traffic in Arms Regulations. When Inman first arrived at The Fort, he told Congress at a hearing some years later, “I didn’t even know what an ITAR was.” But, he added, “my education went at a pretty fast pace.”
Specifically, he now says, he came to realize that when it came to controlling crypto late in the twentieth century, “the whole issue is export.” Those laws were all that prevented a disastrous free-for-all in the distribution of cryptography—the equivalent of a national security meltdown. Inman recognized that restrictions on what could be shipped overseas, and the threat of prosecution if those laws were broken, would force people to deal with the NSA not only in what they were permitted to export, but in what they produced for domestic use. Those regulations would become the linchpin of the agency’s efforts to stop worldwide communications from becoming ciphertext.
Ironically, the NSA’s own attempts to control private research about cryptography had set events in motion that threatened to thwart those regulations. The then–White House science advisor was a man named Frank Press. The controversy over public crypto had piqued his interest, and he asked the Justice Department to provide a legal opinion as to whether the ITAR laws violated First Amendment free-speech protections. The job fell to an assistant attorney general named John Harmon, who carefully analyzed the way the regulations were drafted. He discovered that ITAR required a license not only from arms dealers, but also from “virtually any person involved in a presentation or discussion, here or abroad, in which technical data could reach a foreign national.” Presentations and discussions? That was the First Amendment turf! On May 11, 1978, the Office of the General Counsel issued its opinion. It was a bombshell:
It is our view that the existing provisions of the ITAR are unconstitutional insofar as they establish a prior restraint on disclosure of cryptographic ideas and information developed by scientists and mathematicians in the private sector.
Inman was furious at this analysis, and he set about to fight it. He recruited “a brilliant new lawyer that I had persuaded to come work for NSA” to argue against the opinion. One gambit was to claim that a recent legal precedent had rendered the Harmon opinion moot. But a Justice official rebuffed that interpretation. “We do not believe that [the precedent] either resolves the First Amendment issues presented by restrictions of the export of cryptographic ideas or eliminates the need to reexamine the ITAR,” wrote deputy assistant attorney general Larry Hammond.
Meanwhile, the NSA was treading a fine line. It was attempting to threaten crypto researchers who circulated their findings and ideas while it was fully aware that the Justice Department had concluded that such threats violated the Constitution.
All of this wrangling was conducted out of the public eye. And none of it seemed to have affected the way that the NSA chose to interpret the export laws. So even though Vice Admiral Inman’s sharp young counsel was legally unable to overturn John Harmon’s findings, the attack against his opinion was effective: by not circulating its judgment in the matter, the Justice Department was effectively colluding with the NSA to ignore the possibility that its enforcement of the ITAR regulations violated the Bill of Rights.
All of this came out in 1980, when the government operations subcommittee of the House of Representatives held hearings on “The Government’s Classification of Private Ideas.” At one point, the committee staff director, Tim Ingram, posed a pretty good question. “How would I know, as a private litigant somehow ensnarled in the ITAR regulations, that I am being involved in a matter that the Justice Department, two years previously, has declared unconstitutional?” he asked. A Justice official explained that the opinion hadn’t been offered for the benefit of such citizens, but simply as advice to the department itself.
This was not acceptable to Ingram. Perhaps thinking of the Rivests and Hellmans who had been threatened with jail for presenting their papers, or the Davidas and Nicolais who had been confronted with secrecy orders, or all the current researchers like Adleman who were now encountering more subtle pressures, Ingram had another question to ask:
You have this two-year-old opinion finding the regulation unconstitutional. There has been no change in the regulations. Is there any obligation on the department at some point to go to the president and force the issue and to tell the president that one of his executive agencies is currently in violation of the Constitution?
No satisfactory answer was forthcoming. In any case, Bobby Inman was worried about the new movement in cryptography and his limited power to stem it. His worst fear was that public adoption of encryption “would very directly impact on the ability of the NSA to deliver critical information.” He became convinced the agency needed more formal authority to regain control over crypto. In his attempt to obtain this, he did something no one in his place had ever done. He went public.
His chosen venue for this debut was Science magazine, the most aggressive press watchdog over the past few years. Of course, the very fact that the interview was granted was news in itself. The article quoted F. A. O. Schwarz, who had been chief counsel in the Church investigation, as saying, “I’m flabbergasted. Back when we dealt with the NSA, they considered it dangerous to have even senators questioning them in closed session.” But there was news in Inman’s message, too—the NSA director was now openly extending his invitation for researchers to engage in “dialogue” with him and his people. “One motive I have in this first public interview,” he said, “is to find a way into some thoughtful discussion of what can be done between the two extremes of ‘that’s classified’ and ‘that’s academic freedom.’ ” But in almost the next breath, he conceded that if he got his way—and was able to censor academic research that involved national security—his proposed “thoughtful discussion” would probably end in “a debate between the Administration and the academic community” (one in which presumably the pissed-off college professors wouldn’t have much of an impact on making the government change its national security policy).
A few weeks later, Inman made an even more extraordinary break with the NSA’s tradition of secrecy. He actually delivered a public speech in defense of his agency. True, the venue wasn’t exactly hostile—it was the January 1979 gathering of a trade association of electronics manufacturers who dealt largely in defense contracts. Yet the very fact that he was doing it represented a sea change that could provoke vertigo in even a vice admiral like Bobby Inman. He acknowledged this in his very first words: “A public address by an incumbent director of the National Security Agency on a subject relating to the agency’s mission,” he said, “is an event which—if not of historic proportions—is, at least to my knowledge, unprecedented.” In fact, just a few years earlier, merely uttering the name of the agency would have been unprecedented.
Now Inman was frankly admitting that the world had changed, and not by his choice. He referred wistfully to the days, only now gone, when his people “enjoyed the luxury of relative obscurity,” remaining closemouthed about their work to spouses and even office mates . . . the days when NSA “could perform its vital functions without reason for public scrutiny or public dialogue.” But now, in what he called “the encounter between the NSA and the rest of the world,” a new era had begun, where the NSA’s happy life spent “entirely in the shadows” was replaced by an era of “complex tensions” between the government and those wishing to communicate securely. Inman’s hope for his talk was to explain the NSA’s point of view on those tensions, the better for people to understand why it was, well, necessary to do things his way.
Trust the NSA? Yes, said Inman. His people had gotten a bad rap recently, and he wanted to set the record straight. Did his agency cook the specifications for DES, perhaps inserting a trapdoor? No way. Did the NSA use export regulations to suppress scholarly work? Uh-uh. Exert influence to quash research grants? Please. The NSA, he insisted, was anything but “some kind of all-powerful secret influence.” In fact, that was the problem: while outsiders griped about a mighty spy agency with too much power over cryptography, “My concern,” said Bobby Inman, “is that the government has too little.”
In a way, Inman had an excellent point; despite being the richest intelligence agency on the planet, the NSA was relatively toothless. But for its first decades of existence, the agency hadn’t needed laws of its own. Its advantages included not only the force of law but the fact that sophisticated cryptography was a devilishly specialized field, one that few people attempted to engage in, and fewer still could master well enough to be a player. It was nearly inconceivable that outsiders, or even small governments, could compete with its fire-breathing computers, its world-class mathematicians, its unparalleled experience, its understanding of crypto history. But then came the Whit Diffies of the world—mathematically knowledgeable, with access to computers, and knowledge gleaned from books like David Kahn’s, books that the NSA had failed to suppress. Now there were dozens of them, academics like Ron Rivest and potential entrepreneurs like Carl Nicolai. These outsiders were backed by a cadre of civil libertarians, screeching that crypto breakthroughs could strike a blow to Big Brother. And suddenly, even the weak-hearted attempts of the NSA to stop the tide were being demonized on the front page of the New York Times. In Inman’s view, the victim was not free speech, but national security.
But Inman’s proposed solution—a national sacrifice of free speech to preserve the national security—was doomed. He wanted trust. If he were to get academics to consciously forgo their freedom of speech, he needed trust. If trust were currency, though, the NSA’s balance would be roughly zero. It had never even bothered to open a bank account! It would take more than historic speeches by a sitting director for the NSA to figure out how to manipulate the increasingly out-of-control beast of nongovernmental crypto.
As far as stopping academic research in cryptography went, Inman lost that round. Despite his attempts to get Congress to grant the NSA legal authority to suppress publications, the First Amendment prevailed. Most impressively, the exemption in the ITAR for “technical publications” was clarified to the point that even a Fort Meade apparatchik couldn’t call it ambiguous. “Provision has been added,” went a 1980 revision of the rules, “to make it clear that the export of technical data does not purport to interfere with the First Amendment rights of individuals.”
Bob Inman ultimately did forge a sort of compromise with the research community. At the NSA’s request, the American Council on Education organized a Cryptography Study Group to seek common ground. The group, which included both the NSA’s general counsel and a host of academics, including critics Marty Hellman and George Davida, held its first meeting in March 1980 to consider Inman’s proposal that some sort of statutory review process be imposed on private crypto researchers. The group rejected the idea, citing First Amendment considerations and the NSA’s inability to show evidence that such laws were absolutely necessary to defend the nation. The group’s alternative solution was a two-year experimental process by which those publishing work with relevance to cryptography could voluntarily submit papers to the NSA for review. If the NSA read the paper and felt that the information would somehow compromise national security, the researcher could consider such warnings and decide for himself whether or not to publish. Meanwhile, the agency would continue to fund the research of professionals willing to follow its rules, while allowing others to pursue funding by the NSF or any other agency.
George Davida issued his own minority report, rejecting even voluntary review. He dismissed the NSA’s concerns outright, including its worry that research results might help foes crack our own cryptosystems. “This is not likely,” he wrote, “because researchers do not engage in cryptanalysis.” His conclusion was “the NSA’s effort to control cryptography [is] unnecessary, divisive, wasteful, and chilling. The NSA can perform its mission the old-fashioned way: STAY AHEAD OF OTHERS.”
Nonetheless, the policy worked quite well from the point of view of researchers, since this meant that there was a way to deal with the NSA—or ignore it—without having to worry about getting their work deemed a government secret. The two-year trial period of this policy passed peacefully, after which the NSA quietly dropped any pretense of demanding a presubmission of anything produced by an American academic. It faithfully read papers in the field submitted voluntarily, and one of its scientists would occasionally address a question to an author, even pointing out a mistake here and there. It was all done cordially, because the NSA had no authority to go further than that.
As the 1980s began, the first decade in the NSA’s existence when it had private competition, no one understood the challenge better than Bobby Inman, whose agency was charged with routinely intercepting foreign communications concerning the Iran hostage crisis and the Russian war in Afghanistan. He was haunted by the idea that one day Fort Meade would not be able to deliver such high-quality intelligence—because cryptosystems conceived and developed in the United States would be put into widespread commercial use. “I began to appreciate the export concern much more strongly,” he says. In a world where the basic concepts behind sophisticated encryption were found in public libraries and articles in Scientific American, and where a cryptosystem endorsed by the government itself—DES—was turning out to be more popular than the NSA expected, it was more important than ever to stop crypto at the border. The NSA director had it pegged: the whole issue is export.
Diffie, Hellman, and the MIT trio might have broken the NSA monopoly, but Inman and his successors were not without their weapons. In a way, the war over crypto was only beginning.