to Ray Ozzie the whole thing was a no-brainer. He was creating a product by which people exchanged information that they might want to protect. Including encryption in the product was simply a means of providing them that protection. It was simple business. It was common sense. But now that Lotus was actually preparing to include RSA as an essential component of Notes, he found himself waist deep in a thicket of red tape concerning its export—almost as if he were a virtual enemy of the state. To his horror, he discovered that as far as the export rules were concerned, even a strictly commercial program that helps people run their businesses is considered a weapon. Not a handgun or a stiletto, either, but a weapon of mass destruction, like a Stinger missile or a nuclear bomb trigger.
Ozzie could have simply avoided the whole mess by not exporting his product. On a practical level, though, limiting sales to America was unthinkable. It would mean cutting potential revenues at least in half. Software for personal computers was a global market, particularly when it came to big corporations that were the prime consumers of Notes. But such a market hadn’t existed when the export regulations were created. When Ozzie and the Lotus lawyers did their research, they found that crypto export licenses were generally issued only when the exporter (typically some company with ties to the military establishment) was able to identify and vouch for the friendliness and trustworthiness of the final users. The process was called an “end-user certification.” But Notes was a mass-market product, sold shrink-wrapped like a cassette tape. The users would be . . . just plain people. To their dismay, the Lotus lawyers were unable to find any previous case where a crypto export license had been issued in those circumstances.
To wend one’s way through the political, technical, and spookified minefield of these regulations and restrictions, you needed a white-shoed D.C. lawyer-minesweeper, so Lotus went out and got one. His name was Dave Wormser. His first piece of advice was to go directly to what would be the source of all objections: the NSA. The law didn’t require this—the specified avenue was the State Department—but Wormser knew that even filling out an application would be a waste of time unless they knew what the minds behind the Triple Fence might find troublesome in the product.
So, in mid-1986, not long after inking the deal with RSA, Ray Ozzie went to Fort Meade, Maryland, to see what he was up against. He was accompanied by Wormser and Alan Eldridge, the Iris engineer who was in charge of the security components in Notes. Ozzie was thirty years old at the time, just a bit too young to have been swept up in the sixties rebellion but still old enough to have a skeptical attitude toward the military. As a heads-down engineer and product developer, though, he had little idea of what he had stumbled into.
Ray Ozzie, of course, knew nothing about the similar journey made over a decade earlier by Walt Tuchman of IBM. Tuchman, too, had been an outsider with a plan that would extend the powers of crypto beyond the area that The Fort had cordoned off for itself. The NSA, confident that a company like IBM would never defy a request made in the name of national security, had originally felt it had risen to that challenge, but in the years after the approval of the Data Encryption Standard, it had become clear that the problem had not gone away. As crypto edged its way more and more into the public sector—and DES became more and more common within U.S. borders—certain forces within the NSA now saw the approval of DES, despite IBM’s extraordinary concessions, as a horrible mistake. Who knew that everybody from middle managers to grandmas was going to be using computers strong enough to do industrial-strength encryption? To some in the agency, the arrival of the Lotus team was probably the strongest indication yet that crypto was already leaching out into the mainstream. To those NSA people, Ray Ozzie’s visit meant that the crypto barbarians were indeed at the gate.
Fort Meade, with its fences, its guardhouse, the long hallway with pictures of obscure generals, the generic meeting room into which visitors were ushered, with furniture that looked like it had been there since the McCarthy era, was pretty intimidating. It made Ray Ozzie think, These people are obviously in control and they know it.
The meeting began when several NSA officials came in. One of them, apparently the case officer on this matter, began questioning the trio. (This particular functionary—Ozzie is loath to disclose his name—wound up following the progress of Notes for more than ten years.) What was the product? When would it be ready? What sort of cryptography did they hope to use? Ozzie and his team described their hybrid crypto scheme: RSA for the key exchange and DES for the actual encryption.
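(For the technically inclined, the hybrid architecture is simple enough to sketch. A public key algorithm is used once, to transport a randomly chosen session key; a fast symmetric cipher, keyed with that session key, scrambles the actual traffic. The sketch below is a toy and only illustrative: the RSA primes are absurdly small, and a hash-derived keystream stands in for DES, which did the symmetric work in Notes.)

```python
# A toy sketch of the hybrid pattern: public key crypto (textbook RSA,
# tiny primes) transports a session key; a symmetric cipher encrypts
# the bulk data. A hash-based XOR keystream stands in for DES here
# and is NOT a real cipher.
import hashlib
import secrets

p, q = 61, 53                       # toy primes; real ones are enormous
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def stream_cipher(key: bytes, data: bytes) -> bytes:
    """Placeholder symmetric cipher: XOR data with a hash-derived keystream."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender: pick a random session key, wrap it with RSA, encrypt the message.
session_key = secrets.randbelow(n - 2) + 2
wrapped_key = pow(session_key, e, n)    # travels alongside the ciphertext
ciphertext = stream_cipher(session_key.to_bytes(2, "big"), b"Meet at noon.")

# Receiver: unwrap the session key with the private key, then decrypt.
recovered_key = pow(wrapped_key, d, n)
assert stream_cipher(recovered_key.to_bytes(2, "big"), ciphertext) == b"Meet at noon."
```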
But the very mention of DES made the NSA people go nuts. “I’ll tell you right now,” one of them said. “You’re not going to export DES, no way, under no circumstances . . . you will never export DES.” This seemed strange: hadn’t the NSA put its seal of approval on DES? Not to be exported to anyone with a couple hundred bucks to spend, baby. The NSA functionary explained that DES was not merely a cryptosystem but a red-hot political issue at The Fort, with implications that a private-sector engineer would not understand and had no need to understand.
Ozzie didn’t know it then, but the NSA was going through a period of post–Data Encryption Standard remorse. In fact, the agency was just then working on a project of its own called the Commercial COMSEC Endorsement Program, which it hoped would kill off the Lucifer-based cipher and replace it with a cryptosystem of its own, dubbed Project Overtake. The ostensible reason was that widespread use of DES “could motivate a hostile intelligence organization to mount a large scale attack” on the cipher. This in itself was sort of ironic, since it was the NSA that mandated the smaller key size for the code, thus making it vulnerable to such an attack. The real problem wasn’t that DES was weak, but that it was sound, too sound for a cryptosystem used by the general public. DES now threatened to fall into much wider use than the agency had estimated—and if mass-market public key systems like Notes used DES, the problem would get far worse. So Fort Meade now viewed the cipher as a rogue element in its global mission. The solution was for the NSA to come up with its own cipher, which would be strictly under its control.
Yet Project Overtake was a doomed initiative because its potential private-sector customers weren’t buying. For one thing, its technology was expensive and clunky. It involved audiocassette-sized devices built to snap into computers. The boxes cost well over $1000 each. Worse, the banks and other financial institutions asked to participate in this project were given no control over the system. The algorithms themselves were kept secret. The boxes would be tamperproof. Even the keys were to be generated and distributed by the NSA itself. What assurances did the NSA give that the agency would not be keeping copies of the keys for itself? In a rare public interview in the Wall Street Journal, an NSA representative sniffed, “We have better things to do with our time.” In other words: Trust us. Elsewhere in that article, the NSA’s neo-Stalinistic marketing tactics were examined. A banking executive described a typical Project Overtake sales call: “An NSA guy stands up and makes pronouncements. ‘You guys have to do this.’ It’s a directive. You can imagine how far this gets them.” No, thank you, said the banks. They’d stick with DES.
Though Ray Ozzie was unaware of all this, he was beginning to realize that the idea of exporting crypto was a very big deal for these guys. As the ostensibly amiable interrogation continued that day, it became clear that the NSA people did not even have the vocabulary to deal with a mass-marketed product with strong security like Lotus Notes. “They had dealt with people who knew their customers, and could vouch for them with end-user certifications,” says Ozzie. “But we had to explain to them that our industry didn’t work that way.” When Ozzie tried to elaborate on this, his attorney began kicking him under the table—this wasn’t the kind of thing that the NSA wanted to hear. But Ozzie felt it important to defend the crypto component in Notes, explaining that if people were going to use the product, they’d be risking their entire businesses on the security of the information. That argument didn’t seem to impress the spooks.
Flying back to Boston after that first meeting, Ozzie asked himself, Would it really be so bad to distribute Lotus Notes only within the United States, and avoid this whole battle? But that approach would be financial suicide. You simply could not compete if you wrote off the global marketplace.
So Ozzie had the lawyers arrange another meeting, this time in Cambridge. Had the National Security Agency softened its position at all? “Just to make sure you know where we stand,” said one of the NSA representatives to the Lotus people, “we’ve long known you’ve had encryption in Lotus 1-2-3, and from our standpoint that’s within our jurisdiction. We could stop your shipments of 1-2-3 tomorrow if we felt like it.”
Lotus 1-2-3, of course, was the spreadsheet that provided the lion’s share of the company’s revenues. It was the most popular software product in the world and a huge percentage of its sales was overseas. What was the “encryption” to which the NSA referred? Lotus’s spreadsheet program contained a simple password option that blocked access to unauthorized users. Now, it was highly unlikely that the U.S. government would dare halt all shipments of software that used passwords, an act that would cause the entire personal computer software industry to collapse. Still, the threat had its effect. Ozzie glanced over at his lawyer, and saw a look of sheer panic.
In the course of that meeting and several others over the next three years, it became very clear to Ray Ozzie that no matter how crucial Lotus Notes might be to his company or even to the U.S. economy, any approval he got for export would be on the government’s terms only. On the other hand, he was relieved that no one dealing on behalf of the NSA ever made any demands on what encryption might be sold within the borders of the United States. (Such a demand would have been a violation of the Computer Security Act, but who knew where those guys would stop?) Whenever Ozzie indicated that export restrictions might force Lotus to release two versions of Notes, one with strong encryption for domestic use and the other for approved export, the government negotiators would shrug and say, “Well, that’s your decision.” At times Ozzie would wonder whether the NSA wanted Lotus to create some secret skeleton key by which the spooks could quickly unscramble messages encrypted by Notes. He once probed to see if that was the case. “What the hell do you want?” he asked his tormentors. “Are you waiting for me to offer you a back door?” The response was immediate: No, we don’t want you to compromise the security of the product. “So what the hell do you want?” Ozzie would ask, and he’d get no good answer. And the stalemate would continue.
Finally, around the middle of 1987, Ozzie and his team got a concession from the NSA: If Lotus dropped DES and found a replacement cipher, the government would evaluate that cipher’s strength and allow Notes to be exported, with a key length that the parties would then negotiate. Lotus immediately hired Ron Rivest to cook up a new encryption algorithm. After a few weeks of intense work, he came up with his own cipher that he named RC-2, for Rivest Cipher 2. (A first effort was shelved.) Rivest’s system was similar to DES in that it was a block cipher that used complicated substitutions, but unlike DES, it had a variable key length. Lotus paid for all the development costs but allowed RSA to hold the patents. Rivest submitted the code to the NSA in 1987; not long afterward, he heard that the Triple Fence crypto wizards required a couple of tweaks.
“How do you know they’re not doing something to weaken it?” Ozzie asked him.
Rivest replied that the government’s comments actually made good sense, so he felt safe making their changes. That took a month or so, and the negotiations picked up again. Not that they were getting anywhere. “The content of the meetings was getting very thin,” says Ozzie. “I believe we were definitely being stalled.” His impression was that there was strife within the NSA itself on how to proceed. During 1987 and 1988, the lack of an export license wasn’t that much of a crisis for Lotus, because Notes was one of those ambitious software efforts that were years late in production. So the encryption issue wasn’t holding up the product itself. But as 1989 rolled around, it looked like the program might finally be ready to ship. Now an export solution was essential.
The only thing that Lotus had going for it, really, was perseverance. Not that Ozzie had any alternatives. Every time he’d mention the possibility of shipping a product only in the United States, the marketing people insisted such a course was just not financially viable. So he kept pressing. Kept asking for more meetings with the NSA. Kept supplying any and all information the government requested. So much information, he figured, that if he ever did get an export license, there wouldn’t be a chance in hell that the government could come back and say, “Hold on, you didn’t tell us that the system works like this.” That would give it an opportunity to stop shipments. So Ozzie made sure that Lotus completely fulfilled even the Defense Department’s most trivial requests.
While Ozzie was definitely the supplicant, he did have some leverage. “Are you telling me that I have to go to my congressman and tell him you’re preventing me from shipping my product overseas?” he’d ask the export gatekeepers. “How much of an issue do I have to make of this?” Lotus may not have been a multibillion-dollar company, but it was the biggest company in the software industry at the time, and it wouldn’t have looked very good to have some faceless spooks barring the door to the darling of the business press.
Suddenly, inexplicably, the ice broke in mid-1989. Ozzie is convinced that the struggle within the NSA had finally ended in a compromise. “It was clear that there were people for us and people against us,” he says. “Originally they’d been meeting with us because it was their job and they were curious about what we in this new personal computer industry wanted. Then I believe there were severe internal battles, with some people in favor of letting a little crypto out, to make us go away. And others who didn’t want a precedent set, and wanted nothing out.” Apparently the former prevailed. An offer materialized. Verbally, of course. A written offer would be akin to a binding promise, an animal that does not exist in the export control menagerie.
Here was the offer: Lotus Notes could ship overseas with RSA and RC-2 encryption built in, with a key size of 32 bits. The NSA people thought that was a major concession on their part. After all, their job was to break codes. So they had to be very concerned about what might happen if the president or the National Security Council came and asked them to break a message encrypted in a program they’d allowed exported. Their first instinct had been to permit only a 24-bit key. But “after serious leaning on NSA senior policy people,” said one of the government reps, they were willing to “go the extra mile” and allow what the agency considered unusually strong 32-bit keys.
Unusually strong? The Lotus team was appalled. That meant that the keys one chose to encrypt and decrypt data were limited to a universe of just over four billion keys. While you wouldn’t want to try to crack this by hand, it was totally lame in the age of supercomputers. For the silicon sweathogs in the basement of Fort Meade, finding a key among four billion was a definite yawner. In the meeting, the NSA folks admitted that their supercomputers could indeed crack such keys inside of a couple of days (an estimate that seemed rather modest). But potential data thieves didn’t really need supercomputers to crack a code scrambled with a 32-bit key. If they were determined enough, and had serious dollars to spend as well as time to kill, they’d be able to throw enough personal computing power at the problem to find the keys. According to RSA estimates, this could be accomplished within 60 days. The government officials insisted that this was plenty of security. “Who would go to the trouble to break a single corporate message or several of them at 60 days a pop?” they asked.
This seemed to ignore the guiding high-tech principle of Moore’s Law, which dictated that personal computers would double in power every eighteen months or so. So, that 60 days would soon be less than a month. By 1995, the time to crack a 32-bit key would be less than a week. But all of that was almost beside the point. True, for most relatively innocuous messages sent on Lotus Notes, spending days or weeks on decryption was excessive. But some of the information transmitted by these multimillion-dollar companies was bound to be valuable. And how would Lotus be able to reassure those firms about the safety of that information if the key length was limited to 32 bits? It couldn’t say that breaking the code was unimaginable—or even a challenge. Basically, getting hold of a secret message would be little more than a nuisance.
There was no legal reason, however, to stop Lotus from producing two versions of the product: an export version with 32 bits and a much more secure version for use only within the United States. The latter used Lotus’s preferred key length of 64 bits, a degree of strength over four billion times more difficult to crack than the export version. (Remember, each single bit doubles the size of the keyspace. A key that’s twice as hard to guess as the 32-bit version would not be 64 bits long, but only 33 bits. The domestic version, then, was like doubling the difficulty 32 separate times, changing the time frame to crack a key from days to aeons. The bottom line was that it required no stretch of the imagination to use brute force to come up with a 32-bit key. But considering 1989 computer power, one could reasonably declare such an attack on a 64-bit key next to impossible.)
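(The arithmetic is worth seeing on paper. The only assumed figure in the sketch below is RSA’s 60-day estimate for exhausting a 32-bit keyspace on 1989-era hardware; everything else is just repeated doubling.)

```python
# Back-of-envelope math behind the 32-bit vs. 64-bit argument.
keys_32 = 2 ** 32        # export keyspace: 4,294,967,296 keys
keys_64 = 2 ** 64        # domestic keyspace

# Each added bit doubles the work: a 33-bit key is only twice as hard.
assert 2 ** 33 == 2 * keys_32
# Going from 32 to 64 bits doubles the work 32 separate times.
assert keys_64 == keys_32 * 2 ** 32

# Moore's Law: attacker hardware doubles in speed roughly every 18
# months, halving the search time. Start from the 60-day estimate.
days = 60.0
for year in ("1989", "mid-1990", "1992", "mid-1993", "1995"):
    print(f"{year}: ~{days:.0f} days to exhaust a 32-bit keyspace")
    days /= 2            # by 1995, under four days

# A 64-bit search at the 1989 rate: 60 days times four billion,
# i.e., on the order of 700 million years.
print(60 * 2 ** 32 / 365, "years for 64 bits at the 1989 rate")
```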
The drawbacks of producing two products of different key strengths were daunting. The obvious logistical costs—two packages, two sets of disks, two inventories of products—were only the beginning. Ozzie and his team had to make sure that both versions interoperated. Because the target customer base for Notes included multinational companies like General Motors, the software had to be written so that companies with some users in the United States and others overseas could communicate securely. So Lotus had to have the product work in such a way that people didn’t have to worry whether or not some of the recipients of an e-mail might be in Spain or Kansas City. Essentially (though none of this was apparent as one used the product), each person who used Notes was given two sets of keys—an international pair and a domestic pair. Implementing this was a programming nightmare. But, says Ozzie, “we were not going to compromise in this country,” so Lotus went ahead and did the work.
The one problem that simply could not be coded around was that the government-imposed limitation made the international product much, much weaker than its American cousin. You could view it as a bug, but one that was built into the product. Would international customers reject it for that reason?
At first, they didn’t—mainly because the entire idea of buying a product with built-in encryption was so novel that customers weren’t attuned to the nuances of security. “We were trying to sell a product that was for uses they didn’t know they had,” says Ozzie. “It required a network card they didn’t have, a graphical interface they didn’t have. Only after we convinced them to put these things in did they ask, ‘Is it secure?’ And we’d tell them, ‘Yeah, it’s secure; not as much as the version in the U.S., but it’s secure.’ And they’d ask, ‘Can someone break in?’ And we’d go, ‘Well, if you ganged together thirty or forty personal computers, maybe you could. But you’d have to write special software and all.’ It was a customer education process to let them know we were trying to protect their data. It wasn’t for a few years that the questions began coming about why the international version isn’t as strong, and why didn’t we use DES.”
Lotus’s hope was that by the time international customers got wise to the fact that their version of the software offered significantly weaker protection, the government would bend its restrictions and allow larger keys. Thirty-two-bit keys were just a compromise Ozzie made to get the product out the door. “Once we were shipping, and we had customers who had pull, we could [have the clout to argue for] a change to forty-eight-bit keys [in the export version],” says Ozzie. “That was what we were pushing for.”
But the government seemed to be pushing in the opposite direction. The NSA believed that the export version, even with that lame key size, was still too strong because of certain design elements. These concerned the possible reencryption of already-encrypted information—something that Ozzie figured would, at worst, make decrypting messages only slightly more difficult. Without explaining its reasons, the government suggested design changes that might satisfy its concerns. The best Ozzie could figure was that the issue probably related to the way that NSA cryptanalysts broke codes. But settling the matter took months of further negotiations, ultimately resulting in significant product redesign that made the program run more slowly in certain instances.
Ozzie couldn’t help but wonder: what was the point of all this? Did shipping Lotus Notes overseas only in a 32-bit version really improve national security?
The struggle with Lotus over software exports was only one sign that after years of inaction, the National Security Agency had to wake up and face the challenge of a crypto revolution. After the mild panic following the first breakthroughs in the late 1970s, officials at The Fort thought things were under control. Though Bobby Ray Inman’s compromise—the scheme by which crypto researchers would voluntarily submit their work to the NSA for a once-over—was not foolproof, an impressively high percentage of the top independent cryptographers actually went through the process. Because the choice was theirs, they could justify their decision to comply with the principles of academic freedom. Besides, these academics had no desire to destabilize national security. Correspondence with the spooks was also fun, in a way. It provided a certain frisson, not to mention an implicit validation that one’s work was indeed serious. More than nine times out of ten, the NSA made no suggestions; other times, a minor adjustment would be requested—typically, this would be when the researcher inadvertently stumbled on some issue that was related to the NSA’s techniques in either its codes or its cryptanalysis.
Furthermore, in at least one case, the NSA actually appeared to have intervened on behalf of a researcher. This was none other than Adi Shamir. In the years since leaving MIT, Shamir had been extraordinarily productive. Using the ideas of public key as a starting point, he and various colleagues had come up with new ideas for crypto. Some of them were amazing. One that he worked on with Adleman and Rivest involved a way to play “mental poker . . . played just like ordinary poker, except there are no cards.” A more significant creation was “secret sharing.” Only two years after helping invent RSA, Shamir had been intrigued by what he considered to be a problem looking for a solution—how do you share a single key among several parties, particularly when mistrust and suspicion fester among them? The classic situation is an electronic equivalent of what happens in nuclear missile silos: in order to launch, multiple keys must be turned simultaneously, requiring more than one person. Could you replicate this safeguard in cyberspace? It turns out you could, and once Shamir got to thinking about it, he came up with the idea of secret sharing, a means to parcel out a decryption key among several people. If a foe got hold of any individual’s share of the key (known as a “shadow”), he or she would have no advantage in an attempt to retrieve the entire key. Implementing that was only the beginning, though. It was obvious how to do it in a way requiring the cooperation of all the participants to reconstruct the key. But then Shamir thought a little. . . . What would happen if one of those people disappeared or died or was kidnapped? This led to the idea of building in tolerance, so that any predetermined subset of the shadows would be enough to reconstruct the secret. This came to be known as a “threshold scheme,” and its uses were endless. A trade secret like the recipe for Coca-Cola, for instance, could be distributed among ten people, and then you could prearrange any number of complicated combinations to retrieve the key. If, say, the six least trusted people holding shadows of the key got together, they might not be able to reconstruct the key. But the most trusted shadow holder might be able to build the key with any two other people in the consortium.
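(Shamir’s published construction is remarkably compact. The secret becomes the constant term of a random polynomial over a prime field; each shadow is a point on that polynomial’s curve, and any k points pin down a degree-(k-1) polynomial by interpolation, while k-1 points reveal nothing. The sketch below is a bare-bones illustration with a toy secret, not a hardened implementation.)

```python
# A minimal sketch of Shamir's k-of-n threshold secret sharing.
import secrets

PRIME = 2 ** 127 - 1   # field prime; must exceed any secret being shared

def make_shadows(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Hide the secret as f(0) of a random degree-(k-1) polynomial;
    the shadows are the points (x, f(x)) for x = 1..n."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shadows: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shadows:
        num = den = 1
        for xj, _ in shadows:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shadows = make_shadows(secret=42, k=3, n=10)   # a 3-of-10 threshold scheme
assert recover(shadows[:3]) == 42              # any three shadows suffice
assert recover(shadows[4:7]) == 42
assert recover(shadows[:2]) != 42              # two shadows are (almost surely) useless
```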
In 1986, Shamir and two of his colleagues at the Weizmann Institute came up with another innovative and potentially valuable technique, known as “zero-knowledge proofs of identity.” Using one-way functions, these allowed Alice to prove that she knew a number (typically something that identified her, like a social security or credit-card number) without revealing that number to the interrogator. Using this system, Shamir later said, “I could go to a Mafia-owned store a million successive times and they would still not be able to misrepresent themselves as me [and use that information to buy goods, etc.].” Recognizing the value of this scheme in future e-commerce transactions, Shamir and his coinventors applied for a patent. But in early 1987, the patent office informed the cryptographers that, by order of the U.S. Army, their invention was now an official secret; circulating information on it “would be detrimental to the national security.” Not only were the Israeli scientists prevented from discussing it, but they were instructed to warn anyone who had seen the paper that sharing the idea could put one in jail for two years. Since they had already presented the paper at several universities as well as the Crypto ’86 conference, and had submitted it to the Association for Computing Machinery for publication that May, this seemed a difficult, if not futile, task. Furthermore, since the authors weren’t even Americans themselves, how could the U.S. government tell them what they could and could not talk about?
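(The flavor of such a proof can be shown in miniature. Below is a toy round in the spirit of the Fiat–Shamir identification protocol that grew out of this work; the numbers are far too small to be secure, and the published scheme has refinements this sketch omits. Alice convinces a verifier that she knows a secret s whose square, v, is public, without ever sending s; a cheater survives each round with probability only one half.)

```python
# One toy round of Fiat-Shamir-style zero-knowledge identification.
import secrets

n = 61 * 53          # public modulus; its prime factors stay secret
s = 123              # Alice's secret number
v = pow(s, 2, n)     # Alice's public identity value

def one_round() -> bool:
    r = secrets.randbelow(n - 2) + 1
    x = pow(r, 2, n)              # Alice commits to a random square
    b = secrets.randbelow(2)      # verifier flips a coin: challenge bit
    y = (r * pow(s, b, n)) % n    # Alice answers with r, or with r*s
    # Verifier checks y^2 == x * v^b (mod n), learning nothing about s;
    # a cheater who guessed the bit wrong cannot produce a valid y.
    return pow(y, 2, n) == (x * pow(v, b, n)) % n

# Twenty honest rounds all verify; a cheater's odds of surviving
# twenty rounds would be 2**-20, under one in a million.
assert all(one_round() for _ in range(20))
```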
The NSA apparently wasn’t involved in that secrecy order, but soon heard about it from concerned American scientists—and from the New York Times, which had been tipped off about the controversy. Within two days the order was quietly lifted. It was weeks before Shamir learned about the reprieve, and he became convinced that the NSA had intervened on his behalf. Why? As Susan Landau, an academic researching crypto policy, later guessed, the agency had intervened to preserve its prepublication submission program. If the perception was that submitting a good crypto idea could lead to a sudden embargo, the flow of papers to the NSA would end. And, as Landau wrote, “it is much easier to find out what the competition is doing if they send you their papers.”
As the 1980s came to a close, however, it was clear that the voluntary submission system had reached the end of its usefulness. The turning point came, significantly, with a paper written by Ralph Merkle. Merkle had gone to work at the Xerox Corporation, in its famed Palo Alto Research Center (PARC). His main area of study—indeed, his passion—was nanotechnology, a new science based on molecule-sized machines. But he kept up with the crypto world. In 1989, he wrote a paper that introduced a series of algorithms that would speed up cryptographic computation, driving down the price of encryption. This in itself was threatening to the NSA’s mission. But Merkle’s paper was particularly worrisome to the agency because it included a discussion of the technology of S-box design. Ever since Lucifer, this had been a hot-button issue at The Fort.
Xerox sent the paper off to the NSA for a prepublication review. (Apparently, it had hopes of one day getting an export license for a product based on Merkle’s research.) As usual, the NSA itself circulated it to experts both inside and outside the Triple Fence. But this time the result was not a helpful correction or gentle request for a change in wording. The agency wanted the whole paper suppressed, claiming—without explaining why, of course—that circulating Merkle’s scheme would be a national security risk.
Xerox, as a huge government contractor, quietly agreed to the agency’s request. Normally, that might have been the end of it. But in this case, apparently one of the outside reviewers of Merkle’s paper was upset that the agency had spiked it—so upset that he or she slipped it to an independent watchdog, a computer-hacker millionaire named John Gilmore.
Gilmore had a weapon that wasn’t available a decade earlier, when the prepublication process was initiated: the Internet. One of the most popular Usenet discussion groups on this global web of computers was called sci.crypt. It was sort of an all-night-diner equivalent of the yearly Crypto feasts in Santa Barbara, featuring a steady stream of new ideas, criticism of old schemes, and news briefs from the code world. Gilmore posted Merkle’s paper to the group, and in an instant, it went out to readers on 8000 different computers around the world. Cyberspace had made the NSA’s prepublication system irrelevant.
The agency rescinded its request to withhold publication. Anyway, by then even the bureaucrats at The Fort were getting wise to a new reality: the agency’s real challenges weren’t coming from academic papers but from the marketplace. And the prime example was that once moribund public key software company, now rejuvenated by Jim Bidzos.
As the 1990s approached, Bidzos was dancing a complicated pas de deux with the National Security Agency. Though he had no real proof of it, he now imagined that behind the scenes it was working overtime to sabotage him and his company. It seemed that a lot of his potential customers showed enthusiasm at first, but then mysteriously stopped returning his calls. There were also government agencies whose interest in deploying his products suddenly evaporated. Bidzos felt in his bones that the silence resulted not from a failure of his sales prowess, but from clandestine pressure from Maryland.
He even came to wonder about the nature of a relationship he had with a woman who for some reason spontaneously began giving him inside dope on the NSA. It had seemed plausible at the time, but later he wondered whether she was being paid to feed him disinformation. “I believe in the intelligence community they call it a ‘honey trap,’ ” he later said. It was ironic that from time to time people would still wonder whether Bidzos was some sort of double agent, putting on a charade of fighting the NSA while secretly implanting back doors in his company’s technology. He, for his part, truly believed that he was the single greatest thorn in the agency’s cybernetic paw.
But what really scared Jim Bidzos circa 1990 was not the National Security Agency, but a far more immediate threat to his business: the public key cryptography patents that were the foundation of his technology. The problem came from a company whose products didn’t compete directly with those of RSA—but whose patents threatened RSA’s existence.
The company was named Cylink, and its own history was considerably more placid than the roller-coaster ride of RSA. Its cofounder, Jim Omura, was a Stanford Ph.D. who became a UCLA professor in electrical engineering. His main field was information theory. Like just about everyone in computer science back then who didn’t work for the NSA, he knew almost nothing about cryptography. But he knew of a young associate professor at Stanford who was interested in the subject. “I used to ask him, ‘Why waste your time in cryptography?’ It seemed like there was nothing there,” says Omura. Fortunately for the invention of public key cryptography, the professor—Marty Hellman—didn’t take Omura’s advice.
By the late 1970s, Omura’s views had changed, however, and he became an expert in the field. For extra money he would teach a five-day cryptography course to people in industry, mainly government contractors who wanted to develop products for the military. It covered the basic principles of crypto, and he taught it not only in the United States but also in places like Switzerland. “We had to be careful not to include any classified knowledge,” he says. Omura himself had never been briefed with classified material, but who knows what the government might consider verboten?
After a few years, Omura and a friend began tinkering with actual code, and they came up with a hardware product: a silicon-chip implementation of public key, using the Diffie-Hellman key exchange. He went to another friend, Lew Morris, who was an early participant in Sun Microsystems, and they began to explore the idea of making a business out of it. They wrote a business plan, and started making the rounds of venture capitalists.
This was in 1984, about the same time that RSA was going through its roughest period. Omura and Morris didn’t find the going any easier. “The venture community then couldn’t have cared less about information security,” says Omura. It was only through a private referral that the business plan fell into the hands of Jim Simons, who was not only a mathematician and cryptographer (he’d been one of the early reviewers of Lucifer) but dabbled in venture capital as well. He agreed to help put the newly dubbed Cylink company on its feet.
Unlike RSA, which had a mission of getting crypto into the hands of the general public, Cylink focused on securing the communications of big companies, typically those that were government contractors. Cylink wasn’t about to push the envelope of what the NSA would or would not permit. Its first product, shipped in 1986, was dubbed the CIDEC-HS (so much for sexy branding). It was a chip-stuffed metal box that scrambled telephone communications within a company, using a hybrid crypto system: Diffie-Hellman to generate keys, DES to encrypt the data. Since many of Cylink’s customers were financial institutions that had already won clearance to use DES-based cryptography (including SWIFT, the international clearinghouse for bank transactions, which handled over a trillion dollars on a slow day), Cylink didn’t run into the export problems plaguing software companies like Lotus. It quickly became profitable.
From the start, of course, Cylink had gone to Stanford University to license the Diffie-Hellman patent. At first, the arrangement was nonexclusive. “Stanford was deliriously happy,” says Robert Fougner, Cylink’s general counsel. “They’d finally found someone who was going to actually use the patent, and we made a very, very good deal with Stanford.” During the mid-1980s, in fact, while RSA was struggling to establish itself, Cylink seemed to be the only company turning a buck from public key. The relationship with Stanford flourished. Eventually, Cylink proposed that the university give the company additional rights to the public key patents. Essentially, it wanted to control all the patents itself. When others sought to devise and market potential public key crypto schemes, they would go not to Stanford for the licensing rights, but to Cylink for sublicensing rights.
Stanford agreed to this, but there was a significant wrinkle: a continuing conflict over its patent rights and those of MIT, which owned the RSA patent. Stanford believed that its patents were, essentially, the public key patents, since they embodied the broad idea of split-key cryptography. By this logic, anyone who wanted to use the RSA scheme would also have to license the Stanford patents. MIT’s lawyers, however, believed that RSA could stand alone. This disagreement triggered tension between the universities that went on for several years. It was (pardon the expression) a low-key dispute, since there wasn’t much money involved at the time.
Even so, everyone felt that a dispute between two august institutions was unseemly, and finally the parties reached a compromise. Stanford bundled all its public key patents and sublicensed them to MIT. MIT in turn transferred those rights to RSA Data Security, Inc. This removed a huge cloud hanging over RSA, whose system really did depend on the original public key idea of Whit Diffie and Marty Hellman. Now its software was not only fully covered by patent protection, but there was no question of infringing on the Stanford patent.
While this was fine for RSA, it put Cylink at a disadvantage. Now if someone wanted to license public key crypto, they could go either to Cylink or to RSA Data Security. But only from RSA could they acquire the rights to the public key system created by its founders. This didn’t become a problem immediately, since the two companies were pursuing different customers. While both championed public key and were located within ten miles of each other, Cylink was, in Fougner’s words, “very insular, very inward . . . focused on our technology, on making a good product, on selling that product to a [limited, but] nice portfolio of customers.” RSA’s marketplace, on the other hand, was the broader world of personal computing, with its eyes on a mass market.
Almost inevitably, though, the companies found themselves up against each other. Because of the way the patents were divided, each company had an interest in encouraging a certain approach to public key software—and disparaging the other approach. Because Cylink didn’t have access to MIT’s patents, it aggressively promoted the idea of using the Diffie-Hellman key exchange. Previously, people in the field had thought that, in a practical sense, the Stanford-derived work provided only a way for two parties to agree upon secret keys; unlike RSA, it didn’t outline the means for a full and efficient public key cryptosystem. But Cylink believed that by cleverly using the Diffie-Hellman patents, users could do everything that RSA did, just as elegantly: privacy, authentication, the whole works. Jim Omura had written a paper about it in 1987. “You could use the Stanford patents to do the same thing as RSA,” says Omura. “I think this upset Jim Bidzos because suddenly his technology wasn’t the unique technology.”
“In order for RSA to succeed, it had to promote its software implementations, which were really focused on the MIT software,” says Fougner. “And here was Cylink having obvious commercial success with the Stanford-type technology. There was going to be a fight, or there was going to be a business deal.”
Fougner himself joined Cylink as counsel in 1989 specifically to deal with this issue. On his second day of work, he met with Jim Bidzos. He had little idea what to expect. Would Bidzos, who already had gained a reputation within the budding industry as a pressure artist, play tough? Far from it. As Fougner recalls, Bidzos took pains to appear submissive, acting as if he were almost in awe of Cylink’s financial success. RSA, he told Fougner, was still struggling to keep its head above water: Cylink had nothing to worry about from RSA. On the other hand, both companies faced an uphill battle getting crypto established more widely. Both of them, Bidzos said, were evangelizing a technology that nobody understood, that nobody wanted to pay for. On top of that, here were the two top public key companies, each promoting a different implementation, and confusing the hell out of everybody!
Let’s not fight each other, said Bidzos. Why not pool all the patents, work together, agree on a public key standard, and license the hell out of it? We’ll make a gazillion dollars!
It made a lot of sense to Fougner. Why not join forces? For one thing, he figured, it would probably make Stanford’s lawyers happy. They had long regretted granting MIT the sublicensing rights to the Stanford patents. By making RSA a one-stop shop for public key, Stanford had cut itself out of the loop! “The joke at Stanford,” says Fougner, “was that the MIT deal was often used in their seminars as an example of what not to do in patent licensing.” So Bidzos’s idea of putting all the patents in one pot (with the promise of more fees for the public key patents) sounded very attractive to the Stanford people, and they urged Cylink to go along with it.
On October 17, 1989—the same day that an earthquake registering 7.0 on the Richter scale rocked the Bay Area—the two companies and the two universities came to an understanding. (The formal contract was signed the following April.) The patents would all belong to a new corporation jointly owned by RSA and Cylink. Control of the new entity, called Public Key Partners (PKP), would be shared equally between the two parent firms. Bidzos, arguing that the MIT rights were worth more (RSA had already gained some access to Stanford’s patents whereas Cylink had no rights to use RSA’s technology), negotiated a favorable revenue split: 55–45 in his company’s favor. Meanwhile the universities themselves got only a fraction of the potential cash: out of every dollar paid to PKP by sublicensees for patent rights, Stanford University would get nine cents and MIT would take in a little under fourteen cents.
Omura recalls that after the partnership was established, Bidzos tried to get Cylink to downplay the idea that people could perform public key functions without the RSA algorithm. “He essentially said to me, ‘Now that we’re partners, I hope you’ll stop promoting the Diffie-Hellman approach and support RSA.’ ” Omura told him that his company would still use the alternative method, but didn’t see why that should be a problem. “It doesn’t matter what technology we use,” he said to Bidzos. “We’re partners.”
“In 1990, who cared?” explains Fougner. “Within a couple of years, though, a lot of people cared.”
Initially, the two executives of Public Key Partners, Fougner and Bidzos, worked well together. Technically, Fougner was head of licensing and Bidzos the president. But the bylaws dictated unanimous consent on any decisions. For Fougner, an unassuming corporate lawyer teamed with a swashbuckling deal-maker like Bidzos, the enterprise was sort of a mad adventure. Two wild and crazy guys, trying to set a global standard for public key cryptography—and make tons of money for their respective companies.
So enamored was Fougner of the idea that he tended to shrug off the almost immediate signs that in many ways the interests of RSA and Cylink remained divergent. The first order of business for PKP was to send a letter to the National Institute of Standards and Technology (NIST), the government agency that acted as the ultimate referee of what protocols the marketplace should agree upon as a standard. In large part, the success of the partnership between the two companies would depend on whether NIST adopted as standards the techniques covered by the patents now jointly controlled by Bidzos and Fougner. There were actually several different cryptographic standards that NIST would have to approve: one for digital signatures, one for encryption, one for key exchange, and so on. Once these were determined, the crypto revolution would be poised for liftoff. All the software developers would know exactly which algorithms were required for privacy and authentication, and they would build them into their programs. All the programs would then interact with each other: once this got going, a user of Lotus would be able to send encrypted mail to someone using WordPerfect, and a Microsoft Word user could stamp a digital signature on his or her Intuit account ledger. It was a crucial step for a crypto society, and NIST knew it.
The government decided to establish the digital signature technology as the first standard. Uh-oh. Cylink and RSA had different approaches to signatures, each one based on their separate public key religions: Stanford or MIT. Which one would PKP offer to the government as its official candidate for a standard? Jim Bidzos had the answer: Let’s make this one RSA, he said. The Cylink people were unsure; after all, they’d been working on Diffie-Hellman signatures for six years. Bidzos had an answer to that: We’ll do RSA for signature, and when it comes to a key-management standard (the means of handling and verifying the zillions of digital keys that a large-scale system would handle), we’ll do Diffie-Hellman. The Cylink people agreed. Public Key Partners’ letter to NIST, under Fougner’s signature, went out on April 20, just two weeks after PKP was formally established. It urged that the agency adopt the RSA scheme as a standard. “Public Key Partners,” the letter said, “hereby gives its assurance that licenses to practice RSA signatures will be available under reasonable terms and conditions on a nondiscriminatory basis.”
But when it came to digital signatures, the government had its own ideas.
In the midst of all that wrangling, Jim Bidzos was still concerned with keeping his company afloat. He was now working on his biggest licensing deal yet—a broad arrangement with the most powerful software company on earth: Microsoft, the White Whale of high tech. For the previous few years, its wizards had become increasingly aware that their customers might need cryptography built into Microsoft products. From the company headquarters in Redmond, Washington, its chief technical officer, Nathan Myhrvold, had begun to circulate memos on how crucial this would become. Myhrvold often invoked his grandmother, who lived in a small farm community where people left their doors unlocked: This was fine in an isolated setting where strangers were seldom seen, but simply would not do in an urban setting. It was the same with computers, he would say; they were moving from isolated, unconnected units on desktops to networked nodes in a large infrastructure. To protect everything from taxes to medical records, you needed locks, and Myhrvold understood that public key cryptography would provide those locks.
Myhrvold had been in college when Martin Gardner’s Scientific American article about RSA appeared. “I thought it was infinitely cool,” he said, and the future physicist (who would study under Stephen Hawking at Cambridge University) devoured the RSA paper as well as the Diffie-Hellman paper that inspired it. A decade later, after a software company Myhrvold had started was bought out by Microsoft, he had become one of Bill Gates’s most trusted lieutenants. He was excited about his opportunity to help get public key into the mainstream. As was the case with Ray Ozzie and Lotus, he wound up dealing with the obvious person: Jim Bidzos.
The Microsoft license was crucial to Bidzos. It would make his technology a security standard for the hundreds of millions of customers who used Microsoft’s DOS and Windows operating systems as well as its applications like the word processor Word and the spreadsheet Excel. Nonetheless, Bidzos approached the negotiations with his usual aggressiveness, boasting that, as the patent holder, he was the only game in town for crypto supplicants. Myhrvold wasn’t intimidated. If RSA is so great, he wanted to know, why isn’t anybody else using it? He conceded that public key systems might be inevitable, but joked with Bidzos that they might not catch on until the patents ran out toward the end of the century.
Bidzos wasn’t fazed, and the negotiations proceeded—two major egos, each giving as good as he got. The issues were complicated because Microsoft wanted the right to modify the code of RSA’s crypto toolkits to suit its products. Inevitably, though, as Ray Ozzie had already learned, there was an even bigger hurdle facing all of them: the export laws.
Anticipating that including crypto in its products would be problematic, Microsoft had begun a dialogue with the NSA. Though cordial, the new relationship was uneasy. The first few times representatives from Fort Meade ventured to the Redmond headquarters, they wouldn’t even reveal their last names; to get them building passes, Myhrvold had to go to the reception desk to approve badges with first names only. “They were reflexively secretive,” says Myhrvold, half amused and half annoyed. Worse, they never seemed to be explicit about what was and was not permitted. But they were vocal about one thing: RSA Data Security. They seemed to have it in for the company.
Obviously, the NSA people did not relish the prospect of this upstart company providing a surveillance-proof shield to hundreds of millions of Microsoft customers. As Myhrvold tells it, they tried to turn him against Jim Bidzos and his company. Their method of dissuasion was interesting. Without saying it outright, they began dropping broad hints that behind the Triple Fence, the cipher devised by Rivest, Shamir, and Adleman had already been broken. Myhrvold was worried about giving his customers reasonable security—if the government could crack the code, why not a crook?—so he grilled Bidzos about the NSA’s claim.
Bidzos was stunned: he’d felt the Microsoft deal was almost completed. He sprang into action to refute the charges. “We contacted every number theorist, every mathematician, every researcher in this field we knew, and within twenty-four hours had gotten back,” he says. “[Microsoft was] blown away by what we had done and they said that obviously the charge isn’t true.”
Myhrvold’s recollection is different. He says that the refutation was superfluous: he always did believe the RSA algorithm was sound. But Myhrvold does say that he teased Bidzos by noting that no system short of a one-time pad could be provably impervious to cryptanalysis. Bidzos answered, quite reasonably, that one could trust a publicly published cipher—open to challenge from anyone in the community—more than one of the NSA’s secret algorithms. RSA’s future was totally linked to the strength of its codes, so it had every incentive to make sure those codes were strong. “If somebody breaks it,” Bidzos said, “what you’ve got are the remnants of a once-valuable company.” In any case, Bidzos convinced Myhrvold. To Myhrvold the NSA’s antipathy toward RSA was in a sense an endorsement: why would the agency want it stopped so much unless it was actually hard to break?
But the NSA wasn’t through. According to Myhrvold, the agency made another eleventh-hour attempt to discourage Microsoft from licensing RSA, this time questioning the validity of the company’s patents. In addition, its people speculated that future government standards would not use RSA technology, and Microsoft might have an orphaned set of algorithms. Bidzos rushed back to Redmond to orchestrate a presentation that conclusively proved the solidity and breadth of his patent rights.
According to Bidzos, the final NSA attempt at sabotaging the deal came when an agency official called Myhrvold and said, basically, “Don’t do it.” (Myhrvold says that he doesn’t recollect those words specifically, but confirms the NSA conveyed to Microsoft that it believed licensing RSA would be a mistake: a powerful disincentive for the software giant to link up with this unproven company.)
Bidzos was furious. As he recollects now, he dialed up the highest-ranking person he knew behind the Triple Fence and laid out what he had heard. Then, before his contact could utter a word in reply, he demanded that the official fix the problem and call Microsoft back to tell them that the agency had made a big mistake. “If that doesn’t work, you’re going to answer to the congressman in my district,” he said. “If that doesn’t work, you’re going to answer to a district attorney, because I’m going to file a complaint. If that doesn’t work, I’ll try the New York Times. But one way or another, if you don’t fix this, I’m gonna make you answer for it.” Bidzos more or less expected his contact to deny everything, or at least insist that he knew nothing of the sabotage. Instead, Bidzos claims, the man said, “I’ll call them.” And, according to Bidzos, his contact called Microsoft and recanted.
The path was now clear for a deal. One small point holding up the arrangement had been Bidzos’s insistence that Bill Gates personally sign the contract. Bidzos wanted to display that final page of the contract on his wall, and what would it look like without the John Hancock of Microsoft’s famous CEO? By implying that Gates’s signature might be a problem, Myhrvold brags that he was able to get a few deal sweeteners from Bidzos. (But Bidzos got a sweetener, too—Gates’s presence at an RSA event.)
A few days later, over Memorial Day weekend in 1991, Bidzos called Fougner to boast about the now-completed deal. Fougner recalls being blown away. “Jim, that’s amazing,” he said. “You got Microsoft to license your proprietary toolkit, and they’re going to put it in their operating system? That’s unbelievable! How did you do that?”
“Salesmanship, Bob,” said Jim Bidzos. “I’m a great salesman.”
Salesmanship or not, by early 1991, the future of the public key patents was very much in doubt because of the lack of a government endorsement. Bidzos was, of course, desperate to have RSA established as the standard. Early in the process, NIST, the arbiter, had been enthusiastic about doing just that. RSA, wrote a senior scientist at the agency, was “a most versatile public key system.” Indeed, as late as December 1990, NIST was trying to convince Bidzos’s foe, the NSA—whose voice in the process was crucial—that the system should be adopted. Not only was it commercially effective, said its representatives in meetings with the intelligence agency, but there was no reasonable technical argument for anything else.
But then progress stalled. None of the entreaties from Bidzos or Fougner to establish RSA as the standard seemed to have been effective. And on August 30, 1991, it became clear why. The National Security Agency had devised its own scheme.
Publishing in the Federal Register, NIST proposed a new set of algorithms as the prime candidate for a standard. The government’s product, known as the Digital Signature Algorithm (DSA), was written by an NSA employee named David Kravitz. In many ways, it was similar to the RSA signature scheme. Both schemes employed a public-private key pair. In both, when Alice wishes to prepare a digitally signed message, she first applies an algorithm known as a hash function, which boils the content down to a compressed “message digest”—essentially, the message reduced to a short fingerprint, for easy processing. Then, by way of a mathematical function that uses Alice’s unique private key, that message digest is scrambled, or “signed.” Both the original message and the signed digest are then sent off to Bob. When Bob—or anyone else—gets the message, he now has a way to verify that it was indeed Alice who sent it and that the message itself wasn’t tampered with in transit. He uses Alice’s public key to “unsign” the digest, recovering the fingerprint Alice computed. Then he applies the same hash function to the message he received, producing a fingerprint of his own. Only if the letter came from Alice and only if the content was unchanged will the two fingerprints match.
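(The mechanics are easy to demonstrate. Here is a bare-bones sketch of the hash-then-sign pattern the two camps shared, with textbook RSA arithmetic doing the signing (the DSA’s own mathematics differs) and SHA-256 standing in, anachronistically, for the digest functions of the era.)

```python
# Hash-then-sign in miniature: sign a message digest with a private
# key; verify by unsigning with the public key and re-hashing.
import hashlib

# Alice's toy RSA key pair (Mersenne primes keep the example compact;
# they would be a terrible choice in practice).
p, q = 2 ** 89 - 1, 2 ** 107 - 1
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def digest(message: bytes) -> int:
    """The 'message digest': a hash of the content, reduced into the modulus."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)   # scramble the digest with the private key

def verify(message: bytes, signature: int) -> bool:
    # "Unsign" with Alice's public key, then recompute the digest from
    # the received message; they match only for her untampered message.
    return pow(signature, e, n) == digest(message)

msg = b"Pay Bob $100"
sig = sign(msg)
assert verify(msg, sig)                    # genuine signature verifies
assert not verify(b"Pay Bob $1000", sig)   # tampering breaks the match
```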
The government method differed from RSA’s signature scheme in one profound way: its public-private key pair could be used only for authentication, not encryption. In other words, this was a public key system that couldn’t keep a secret. Thus it presented no threat to national security or law enforcement—literally, it was just what the government ordered. “Our underlying strategy,” an NIST official would testify to Congress, “was to develop encryption technologies that did not do damage to national security or law enforcement capabilities in this country. And our objective . . . was to come out with a technology that did signatures and nothing else very well.”
But NIST, which originally looked favorably on adopting the RSA solution, came to adopt this objective only after pressure from Fort Meade. During the last months of 1990, the NSA had been pushing hard for its system, and in February 1991, its new director, Vice Admiral William O. Studeman, forced the issue, urging NIST to “cut short the debate and get on with the things that need to be done to provide the necessary protection.” At the next meeting of the two agencies’ joint technical working group, NIST representatives raised the white flag, and indicated that their management “has accepted the NSA’s proposal.” But when NIST publicly signed off on the NSA-created algorithm in April, nothing was mentioned about the involvement of the secret intelligence agency.
Bidzos wasn’t fooled, though, and was furious about the government’s choice of the DSA as its standard. He contended that the NSA had completely subverted the Commerce Department, the agency to which NIST belonged. Instead of helping American industry, he charged, the Commerce Department was now working against it, totally in service to the spooks. (This suspicion was later bolstered by a congressional investigation that led the House Government Operations Committee to declare, “NSA is the wrong agency to be put in charge of this important program.”) The next step, Bidzos warned, would be the unveiling of an encryption standard that didn’t adopt the familiar algorithms—his algorithms!—but some new ones that the government could break.
Bidzos had a lot of ammunition for his attack. In purely technical terms, it was clear that the DSA was inferior to RSA. It was, as one observer put it, “an oddball standard,” much slower to verify signatures than RSA’s system (though faster to sign messages) and more complicated to implement. And, of course, it didn’t do encryption. Unlike RSA, it had no track record. The government scheme did offer one advantage over RSA, however, something that Bidzos was hard-pressed to match. It was free. Indeed, in the August 30 announcement, the government had proclaimed its intention to make its signature standard available worldwide on a royalty-free basis.
Bidzos felt he could fight the proposed standard by way of a patent challenge. But that would not be easy. Public Key Partners, of course, controlled the Stanford patents that covered the first digital signatures. But the government claimed that its scheme bypassed those patents by relying on a different implementation of digital signatures, one designed by another Stanford cryptographer, Taher ElGamal. A former student of Hellman’s, ElGamal had refined the idea of using a hash function and message digest for digital signatures. But he had made the mistake of publishing before applying for a patent (his paper appeared in 1985), thus forfeiting his rights. So if the government’s claim was correct, the DSA was free and clear of any patent claims.
Bidzos disagreed, but he understood that staking his claim would be time-consuming and costly. Still, there was one other way to accuse the government of pilfering intellectual property. It involved yet another patent.
This one was based on the work of a German cryptographer named Claus Schnorr, who’d patented his own digital signature scheme in February 1991. After hearing about the DSA, Schnorr insisted that it infringed upon his patent, and demanded $2 million from the United States. To many observers, this was overstepping: the conventional wisdom was that both Schnorr’s and Kravitz’s systems were variations of ElGamal’s work. Nonetheless, the government was concerned. In its own patent application, it took pains to assert that the ideas behind the DSA were independent of Schnorr. Still, Schnorr had at the least a “scarecrow” patent: a claim that might not prove to be defensible in a long, drawn-out lawsuit, but one that nonetheless gave its holder a plausible reason to attack a similar concept. As long as Schnorr was unhappy, the government had a problem.
Bidzos saw this as a great opportunity. While the government dithered, he would try to add the German’s patent to the Public Key Partners portfolio. It would be like landing on Park Place after already owning Boardwalk: patent monopoly! Bidzos found out that Schnorr was attending a conference in Marseilles, so he flew there with Fougner in tow. They arranged to have lunch at a one of the fanciest restaurants in town. The meal lasted for hours, with multiple bottles of fine wine delivered to the table. Schnorr was in his midforties, a conservative scientist who was proud of his most recent triumph—winning the lucrative Leipzig Prize. Bidzos quickly figured out the way to handle him. “I talked to him like a coach would to a tennis player,” says Bidzos. “That he could do it himself, or he could let me negotiate his deals and manage his contracts and endorsements, so he could work on his game.” Fougner was impressed at the hard sell. “Bidzos regaled him with tales of his friendship with Bill Gates and his global vision of public key cryptography and the universe,” he says.
The meal finally wound down, the waiters standing around, anxious to clear this final table, and the party moved to a pub by the waterfront. There Fougner quickly sketched out on a piece of paper a transfer by which PKP would receive all rights to Schnorr’s patent. In the shadow of a fifteenth-century galleon, Schnorr, whether captivated by Bidzos’s promises of riches or just plain exhausted, signed the paper.
When Bidzos got back to the States, he had another in his endless series of meetings with NIST. His contacts were Dennis Branstad and Lynn McNulty, two computer scientists at the agency who were often caught between the demands of the public and those of their bosses. Hoping to resolve the government’s patent problems, they had been desperately urging NIST management to buy the Schnorr patent. They also wanted to pay off RSA to clear up any alleged conflict with the Stanford patents, and they assumed the meeting would focus on such an offer. Instead, Bidzos began by declaring, “I represent Claus Schnorr and you’re infringing on my patent.”
Bidzos was exultant. “I had never seen two guys look more tired,” he later boasted.
Meanwhile, Bidzos was helping engineer opposition to the DSA on other fronts. As a response to the August 30 Federal Register announcement, NIST had received 109 comments on the scheme, the vast majority of them critical. Companies already using RSA, including Microsoft and Lotus, were unhappy that their investment in that scheme would be lost, and they would have to develop new software for the new standard. Other complaints dealt with the relatively laggardly computation rate of the DSA. Also, critics were concerned about the vulnerability of the scheme. Because the proposed standard used only 512-bit keys to calculate the signatures (RSA used 1024 bits), there was a question about whether the powerful computers inside the Triple Fence might be able to churn out forgeries. How could anyone assert that a signature was valid beyond question when an intelligence agency had the potential to create counterfeits? To Ron Rivest, the whole thing was symbolic of the government’s policy in general: “What crypto policy should this country have?” he asked at a 1992 conference held in D.C. “Codes which are breakable or not?”
Though the controversy never caused major debate among the general public, it did ignite some civil liberties groups, which had been closely watching the relationship between the NSA and NIST. In fact, any talk of a balance of power between the two agencies was risible—one was the flagship of our multibillion-dollar intelligence operation, the other a dime-store government backwater. While the liberals and the libertarians hoped that the latter organization would protect the interests of ordinary citizens, they had little confidence it would do so.
Their fears were justified. The prior history of the two organizations had set the blueprint for an imbalance of power. After the Church hearings in the seventies, the NSA had felt chastened. But in 1984, at the apex of Ronald Reagan’s presidential power, the agency showed signs of reentering the realm of domestic policy. At the apparent behest of Fort Meade, Reagan issued a National Security Decision Directive intended to monitor information in databases—both in- and outside government—that fell into the vague category of “sensitive, but unclassified, government or government-derived information.” This caused a minor firestorm, and the NSA’s congressional nemesis, Representative Jack Brooks of Texas, gave the agency a tongue-lashing: “The basement of the White House and the back rooms of the Pentagon,” he said in a hearing, “are not places in which national policy should be developed.” Eventually, the government backed down.
The experience led some in Congress, urged on by frantic lobbying from civil liberties groups, to craft a law that would set boundaries for the government in the computer age. In what was an unusual act of independence from the demands of an intelligence agency, Congress in 1987 passed the Computer Security Act, which specifically shifted the responsibility for securing the nation’s computer infrastructure—particularly in recommending the standards to which industry would adhere—from the NSA to the National Bureau of Standards (which was about to take on the higher-tech appellation of National Institute of Standards and Technology).
Why did Congress flout the spooks? True, the civil liberties groups had lobbied hard. But more to the point, says Marc Rotenberg, who was then a staffer for Senator Patrick Leahy, “U.S. business didn’t particularly like the NSA setting the standards. The NSA’s concerns about computer security are not the concerns that businesses face—they weren’t worried about the Kremlin, they were worried about their competitors.”
Bolstered by industry support, the lawmakers moved fast and the NSA was caught flat-footed. Not even an appearance by then–NSA director General William E. Odom could stop the bill. His complaint that shifting security responsibilities to the civilian agency would be an unnecessary “duplication” of functions really missed the point: industry preferred that the Commerce Department, and not the spies, set standards for the national computer infrastructure. As one NSA official later wrote in a memo, “By the time we fully recognized the implications . . . [Brooks] had it orchestrated for a unanimous-consent voice-vote passage.”
Of course, The Fort was not shut out of the process entirely. As the undisputed world capital of crypto, it had invaluable expertise in computer security, and Congress outlined an advisory role in which Fort Meade would assist NIST. The question was, how would the two work together? In negotiations to determine that, the NSA sat across the table from the acting director of NIST, a bureaucrat named Raymond Kammer. Not only was Kammer sympathetic to the National Security Agency, he was actually the son of two of its veterans! The official Memorandum of Understanding reached between the two agencies did preserve the concept that NIST would take the lead in establishing standards, but it formalized an NSA role as well. In “all matters related to cryptographic algorithms and cryptographic techniques,” said the memo, NIST would solicit the NSA’s help. To implement this, the two agencies would work through a “technical working group.” Though NIST was supposedly in charge of the process, it would not hold a majority in the group, which consisted of three people from each agency.
Though both agencies insisted that NIST was really in the driver’s seat, skeptics suspected otherwise. Even with its zippy new name, NIST was the nerdy Mr. Peepers of government agencies, suddenly thrust into the center of a huge political and national security battle. At least one high-ranking official of the agency later admitted that NIST not only hadn’t sought the powers granted by the Security Act, but it didn’t want them once the bill was passed. “It put us in charge of what we didn’t want to be in charge of,” he says.
The skirmishes over the digital signature standard seemed the ultimate proof that NIST was pretty much Fort Meade’s stooge. In the years to follow, investigations would bear this out; one General Accounting Office report concluded that, contrary to congressional intent, “NIST follows NSA’s lead in developing certain cryptographic standards.” Declassified documents outlining the discussions in the monthly meetings of the two agencies’ technical working group clearly illustrated this. At every step, the NIST people seemed to be waiting for the NSA’s verdict on the signature issue.
Even NIST’s own oversight group, the Computer System Security and Privacy Advisory Board, had serious problems with the relationship between the two agencies. In March 1992, it determined that “a national-level public review of the positive and negative implications of the widespread use of public and private key cryptography is required.” But the NSA wanted no part of a discussion or review, and squelched that idea. In a classified memo, the new NSA head, Admiral Mike McConnell, put it bluntly: “The National Security Agency has serious reservations about a public debate on cryptography.”
Still, the government was beginning to feel some heat. Once again, Representative Jack Brooks held hearings. They featured scorching testimony by the NSA’s critics. Nathan Myhrvold of Microsoft testified that “the government’s late publication of its proposed signature standard, together with its serious technical flaws . . . made it impossible for the computer industry to adopt the government standard for commercial use.” Addison Fischer, an early RSA Data Security investor who used the company’s algorithms in the mainframe computer products of his eponymous company, invoked a powerful metaphor that would reappear in crypto debates to come: “Cryptography, especially public key cryptography, is entering the mainstream,” he said. “It is simply another of a long line of technological genies which is exceedingly useful, and which cannot be put back into the bottle—even if there may be some unpleasant side effects.”
All of this criticism, of course, was music to Jim Bidzos’s ears. While he had become a crusader for the free rein of crypto, his main goal had always been strengthening his company. If the pressure on the government continued—and he kept threatening to assert the Schnorr patent against the government’s candidate—he figured that eventually the standards process might go his way, and RSA technology would at least win approval as the official digital signature standard.
And then, astonishingly, the feds caved. Or at least seemed to.
As Bidzos tells it, the government finally concluded that its own standard would fail not on crypto grounds but on patent grounds. At a June 1993 meeting at the Commerce Department, a NIST lawyer said the words Bidzos longed to hear: “We want to work with you.” While Bidzos and his attorneys sat stunned, the official continued. “Why don’t you make us a proposal for a licensing situation if you want to be compensated?”
Bidzos said he would get back to them in writing. And a negotiation began, with the government offering an amazing concession to Public Key Partners: an exclusive license to the patent on the government’s algorithm, the DSA. The United States would use the DSA as its standard and pay PKP a royalty fee, estimated to run as high as a dollar a user. Since millions of dollars would potentially come from this—every citizen would use the standard to communicate with the government, in everything from signing contracts to filing IRS returns—there was a huge incentive for Bidzos to accept. So he did. In this sense, he was acting on behalf of his company’s bottom line and against the interests of the general public. After all, his company would now be party to the use of the NSA’s product as a standard, an algorithm Bidzos himself had gleefully trashed in public.
Some people began to question whether RSA’s strategy of protecting crypto by patents was itself a path that retarded the progress of computer privacy. Maybe Bidzos was in league with the spooks. After all, as one observer noted, “One of the purposes of the patent system is to cause technology to be exploited. . . . Public key cryptography was invented almost twenty years ago, and yet is not yet in widespread use. A visit to the supermarket checkout counter reveals no digital signatures. Why not?”
But the deal would never be closed. In its haste to head off a nasty patent battle, the government had underestimated the outrage that would greet its abandoning a commitment to make the algorithm royalty-free. When the government solicited comment on the deal, the criticism was withering. Critics called it a $2 billion giveaway to Public Key Partners. The Canadian government and the European Commission indicated that they wouldn’t pay the royalties, and to hell with the patents claimed by the United States government. It was a revolt that the government didn’t need. So NIST reneged on its offer to Bidzos and reaffirmed that whatever standard it chose would be royalty-free. And so, once again, it was back to square one on the digital signature standard.
Bidzos was philosophical about the turnaround. He did regret losing all that potential cash. But with the plan killed, Bidzos could once again take the side of the angels, a foe of a government that wanted to crush individual privacy, even if it meant impoverishing American software companies.
In any case, the bickering over the signature standard was to continue for another year. It wasn’t until October 1994 that NIST finally made its choice: it dismissed the patent issue, ignored the overwhelmingly negative public response, and endorsed the DSA as the official standard for digital signatures. “NIST reviewed all the asserted patents and concluded that none of them would be infringed,” it stated in a fact sheet. (To reassure those who still had qualms, the agency took the extraordinary step of assuming liability for anyone using the standard who might later be sued for patent infringement.) While NIST made some beneficial technical changes to its original proposal, most notably extending the key length from 512 to 1024 bits, the result was essentially an authentication system created in secret by a government intelligence agency, one that virtually no one in industry had found attractive enough to adopt. This instead of a system already implemented by Microsoft, Apple, IBM, and Novell. Is it any wonder that years later, the digital signature standard would still be an orphan—and that in the midst of an electronic boom, there would exist no universal means of authenticating e-mail?
The funny thing is, as NIST scientist Lynn McNulty later said, “We thought that the digital signature would be the easy one.” But as contentious as it was, the battle over signatures was only a warm-up for the main event in the cryptography war: the war over encryption.