the standard

On March 17, 1975, a dry government document produced a shock wave that just about tore the plaster off the walls of Martin Hellman’s little cipher operation at Stanford University. It was a Federal Register posting from the National Bureau of Standards (NBS), ostensibly just one of the countless protocols proposed by that agency that, if adopted, would become the officially endorsed means of doing things for the federal government. By extension, it would become the no-brainer choice for private industry and just plain folks as well. This proposal involved something seldom ventured in the public literature: a brand-new encryption algorithm. And a strong one to boot. It was to be called the Data Encryption Standard, or DES.

The Stanford team had known that the unprecedented move was in the offing—the NBS had been issuing requests for such a standard—and Hellman knew that his old and trusted colleagues at IBM had been cooking up a system designed to satisfy the government’s criteria. So at first they welcomed the announcement. “This was big news,” recalls Hellman. “We were happy to see a standard. We thought it was a wonderful thing.”

Then they began to actually examine the DES system—and learned that the National Security Agency apparently had a hand in its development. And their enthusiasm turned to dismay. Right away, it was glaringly obvious that the flaw in the DES was the size of the encryption key, a metric that directly determines the strength of a cryptographic system. It was 56 bits long. That’s a binary number of 56 places. You could envision this as a string of 56 switches, each of which could be on or off. Though 2 to the 56th power was a hell of a big number in most circumstances—it meant that there were 2^56, or about 70 quadrillion, possible keys—Hellman and Diffie believed that it was too small for high-grade encryption. Sophisticated computers, they insisted, could eventually work hard enough to find solutions to such encrypted messages by “exhaustive search”: trying out billions of key combinations at lightning speed until the proper key was discovered and the message suddenly resolved itself into the orderly realm of plaintext. This would be a classic “brute-force” attack. “A large key is not a guarantee of security,” says Hellman, “but a small key is a guarantee of insecurity.”

Diffie wrote as much in an otherwise respectful initial analysis of the standard, submitted in May 1975 as part of the NBS’s public comment process. “The key size is at best barely adequate. Even today, hardware capable of defeating the system by exhaustive search would strain but probably not exceed the budget of a large intelligence organization.” He postulated that a free-spending agency could feasibly build a customized machine that would crack such a key within a day. “Although cryptanalysis by exhaustive search is far from cheap, it is also far from impossible,” he wrote, “and even a small improvement in cryptanalytic technique could dramatically improve the cost performance picture. We suggest doubling the size of the key space to preclude searching.”
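The arithmetic behind these dueling estimates is easy to sketch. The few lines of Python below are only a back-of-the-envelope illustration—the search rates are assumptions chosen for the example, not figures from Diffie’s analysis or from any real machine—but they show how the size of a keyspace trades off against raw searching speed:

```python
# Back-of-the-envelope exhaustive-search arithmetic; the rates are illustrative assumptions.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def worst_case_years(key_bits, keys_per_second):
    """Time to try every one of the 2**key_bits possible keys."""
    return 2 ** key_bits / keys_per_second / SECONDS_PER_YEAR

print(f"{2 ** 56:,} keys")                        # 72,057,594,037,927,936 -- the "70 quadrillion"
print(worst_case_years(56, 1_000_000))            # ~2,285 years at a million keys per second
print(worst_case_years(56, 1_000_000_000_000))    # ~0.002 years -- under a day -- at a trillion keys per second
```

At an assumed million keys per second, a 56-bit keyspace holds out for millennia; at an assumed trillion keys per second—the sort of throughput a lavishly funded, purpose-built machine might reach—it falls in under a day, which is the shape of the argument Diffie was making.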

Naively, the Stanford duo believed that such advice might be heeded by the United States government: Well, damn, you guys are right! Let’s double that silly key size! Instead, the government’s response was sufficiently evasive for Hellman to suspect that a smoke screen lay behind the NBS’s actions. In subsequent months, in fact, Hellman would publicly begin to question whether the DES algorithm might have been a daring ruse on the government’s part to lull citizens and perhaps even foreign foes into an illusion that they were protecting information—while that supposedly secure data was easily accessible to the NSA. At his most paranoid, Hellman wondered whether the DES had a “back door” implanted in it by Fort Meade’s clever cryptographers. While there was no direct proof of that, there was reason for suspicion. If everything was on the up-and-up, Hellman wanted to know, why was it that the design principles of the algorithm, as well as its inner workings, were being treated as government secrets? If the government had nothing to hide, why were they hiding something?

Diffie and Hellman were only the first to question the murky origins of the Data Encryption Standard. The debate would continue even as the DES became a kind of gold standard for strong commercial cryptography—and an object of continued suspicion among the outsiders of the crypto and civil liberties world. Only with the passage of time would it become clear that the development and certification of DES was in a sense an inspiring story of its own, one that had elements in common with the quest of Diffie and Hellman themselves.

 

The story began with one of IBM’s most enigmatic researchers, Horst Feistel. He was the German-born cryptographer who had done the work on Identification Friend or Foe (IFF) protocols that Whit Diffie had learned from Alan Tritter. Feistel had been working at IBM’s research division in Yorktown Heights since the late sixties. It was one of the few jobs in the private sector that involved work in cryptographic research.

In fact, some of his colleagues suspected that Feistel had been in the NSA’s employ and was somehow still hooked up with it, even while working for IBM. In any case, his biography is somewhat sketchy. Born in 1914, he had left Germany as a young man. His aunt had married a Swiss Jew living in Zurich, and on the concocted pretext of tending to his aunt’s illness, Feistel joined them just before the Third Reich began a military conscription that would have prevented his escape. After studying in Zurich, Feistel came to the United States in 1934. He was about to become a naturalized citizen when America was thrust into World War II. Feistel was put under what he once described as “house arrest,” his movements restricted to the Boston area where he was living. But in January 1944, Feistel’s circumstances changed abruptly. He was not only granted citizenship but also given a security clearance and a job at a highly sensitive facility: the Air Force Cambridge Research Center.

What he did there is unclear. Codes had fascinated him since his boyhood, but in the early 1990s he told Whit Diffie that while crypto work was indeed his desire, he was informed that this was not suitable wartime work for a German-born engineer. On the other hand, in a 1976 interview with David Kahn, Feistel said that during the war he had worked on Identification of Friend or Foe systems—not cryptography per se at that time, but close.

There are other contradictions in Feistel’s various accounts of his activities. He told Diffie that before he was granted U.S. citizenship, he had to report to authorities every time he left Boston to visit his mother in New York. But he once told a coworker that his mother didn’t emigrate until the Cold War began. The U.S. had spirited her out of East Berlin, he reportedly said, just in case the Soviets discovered that Feistel was doing crypto and decided to pressure her.

There was no doubt, however, that after the war, Feistel began to specialize in IFF. He headed a crypto group at the Cambridge Research Center, and part of his job was testing an advanced IFF system that depended on an amazing new invention, the transistor. This tiny marvel would enable an IFF system to be built so compactly that it could fit into the nose of a fighter plane. Another important project of Feistel’s was a longtime passion: constructing a strong cryptosystem based on block ciphers. (This kind of system encrypted messages by processing them in chunks, or “blocks,” as opposed to stream ciphers, which did their scrambling on text as it flowed, or “streamed,” by.)

Did the NSA embrace Feistel’s work, or did it see his work as a threat, and try to stifle it? According to what Feistel told Diffie, the people at The Fort had closely monitored his air force work and used the NSA’s power to influence the direction Feistel’s work took. But the agency also regarded the project as a threat and eventually managed to kill the entire crypto effort at the Cambridge lab. When Feistel left for another job in the mid-1960s at Mitre (the same military contractor that would later put Whit Diffie on its payroll), he unsuccessfully tried to organize a group there that would resume his crypto work. He blamed the failure on more NSA pressure.

So Feistel took the advice of his friend, A. Adrian Albert, and went to work for IBM, which seemed more open to such pursuits. (Albert was a mathematician, a onetime head of the American Mathematical Society, who had himself done extensive cryptography work for the government.) IBM was an amazingly rich company with little competition, and its research division was an intellectual playground where incredibly bright scientists were encouraged to explore whatever interested them. “If they hired you at Yorktown, you’d do what you wanted, as long as you did something,” says Alan Konheim, who became Feistel’s boss in 1971. “And Feistel did something—he formalized this idea for a cryptosystem.”

The most remarkable aspect of Feistel’s creation was not its mathematics or its technology—or even its resistance to codebreakers—but the motivation behind it. His superstrong cipher wasn’t intended to defend government secrets or diplomatic dispatches, but to protect people’s privacy—specifically, to protect databases of personal information from intruders who might steal the contents to create detailed dossiers on individuals. “Computers,” wrote Feistel in a 1973 article for Scientific American, “now constitute, or will soon constitute, a dangerous threat to individual privacy. . . . It will soon be feasible to compile dossiers in depth on an entire citizenry.” Feistel declared that the antidote was cryptography, traditionally the domain “of military men and diplomats.” He proposed that computer systems be adapted “to guard [their] contents from anyone but authorized individuals by enciphering the material in forms highly resistant to cipher-breaking.” Considering Feistel’s familiarity with the government’s zeal for keeping cryptography to itself, this was a significant position to take. So important was privacy in the computer era, Feistel believed, that the knee-jerk national security arguments would have to be shelved.

Meanwhile, Feistel was concocting a system that would grant people that privacy.

The system was called Demon, so dubbed because file names in the computer language he used (APL) could not handle a word as long as his unimaginative choice for the first version, “Demonstration.” Later, in a burst of inspiration, an IBM colleague would change the name to “Lucifer,” keeping the satanic theme of Demon while slipping a cryptographic pun into the name itself: Lu-cifer.

As a block cipher, Lucifer was a virtual machine that sucked in blocks of plaintext data and spit out blocks of ciphertext. Feistel created several versions; the best known used a digital key of 128 bits, an enormously tough target for a brute-force attack. Impossibly tough. Of course, the issue of key length would be of little importance if a codebreaker could quickly crack the system by detecting and exploiting structural weaknesses that would recover plaintext without having to bother with brute-force attacks. If even the most subtle pattern were discernible in the ciphertext, a codebreaker would be on his way to breaking the system. Lucifer’s strength, like that of any other cipher, depended on denying potential foes any such shortcuts. Feistel’s cipher avoided telltale patterns by subjecting the plaintext characters to a tortuous mathematical journey, leading them through a complicated whirl of substitutions. Ultimately, after sixteen “rounds” of furious swapping with other letters in the alphabet, the actual plaintext words and sentences would appear only as a block of seemingly random letters: an oblique ciphertext.

The crucial rules of substitution took place by means of two substitution boxes, or “S-boxes.” These, of course, were not physical boxes, but sets of byzantine nonlinear equations dictating the ways that letters should be shifted. (At least one colleague of Feistel’s, Alan Konheim, believes that the idea of S-boxes had been given to Feistel by the NSA at a summer workshop, supposedly to get a technology well understood by Fort Meade into the mainstream. “Horst is a very clever guy, but my guess is he was given guidance,” says Konheim.)

The S-boxes did not merely initiate a set of predictable substitutions in the letters; they used information drawn from a series of numbers that comprised a secret key to vary the sequence as the bits passed through the boxes. The security of the system ultimately rested with this key. Without knowing this key, even a foe who understood all the rules of Lucifer would have no advantage in transforming ciphertext into plaintext by some reverse-engineering technique.
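The overall shape of such a cipher can be sketched in a few lines, though what follows is only a toy: it borrows the Feistel structure that Lucifer made famous—split each block in half, scramble one half with a round key, fold the result into the other half, swap, and repeat for sixteen rounds—while replacing Feistel’s actual S-boxes, permutations, and key schedule with a made-up stand-in function:

```python
# A minimal, purely illustrative Feistel-network sketch -- not Lucifer's or DES's real internals.

def toy_round_function(half, round_key):
    # Stand-in for the real nonlinear scrambling (S-boxes and permutations).
    return ((half * 0x9E3779B1) ^ round_key) & 0xFFFFFFFF

def feistel_encrypt(block, round_keys):
    left, right = block >> 32, block & 0xFFFFFFFF        # split a 64-bit block in half
    for k in round_keys:                                  # one pass per round
        left, right = right, left ^ toy_round_function(right, k)
    return (left << 32) | right

def feistel_decrypt(block, round_keys):
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(round_keys):                        # same rounds, keys in reverse order
        right, left = left, right ^ toy_round_function(left, k)
    return (left << 32) | right

round_keys = list(range(1, 17))                           # sixteen toy round keys
ciphertext = feistel_encrypt(0x0123456789ABCDEF, round_keys)
assert feistel_decrypt(ciphertext, round_keys) == 0x0123456789ABCDEF
```

Because each round merely folds the output of the round function into one half of the block, the whole process runs backward with the same machinery and the round keys in reverse order—and, as the passage above notes, an eavesdropper who knows every line of this structure still gets nowhere without the key.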

Such knowledge of the rules was to be assumed; the nuts and bolts of a well-distributed commercial cipher were much more likely to be accessible to eavesdroppers than the workings of military codes, which could be more tightly controlled. A cryptanalyst trying to crack an army code would often have no clue as to the system used to produce the ciphertext, a problem that required not only plenty of extra time to break the code, but also a huge amount of resources in the black art of undercover intelligence. Huge spy networks devoted themselves to learning the sorts of codes the enemy used. On the other hand, if Chase Manhattan Bank decided to use IBM’s brand-name code to encrypt its financial transactions, a potential crook would find it relatively simple to discover what cryptosystem the bank used. Since IBM might license the cryptosystem to others, the rules of that system would probably be circulated fairly widely. So in this new era of non-military crypto, all the secrecy would rely on the key.

IBM applied for, and received, several patents for Lucifer. As an innovation of its Watson Research Lab, Lucifer fell into the research category. But unlike some blue-sky schemes at Watson that were way ahead of their time, an invention that provided an instant answer to a pressing problem—data security in the communications age—was naturally positioned on a fast track to commercialization. Lucifer’s first serious implementation came quickly, in Lloyds Bank’s Cashpoint system, a means for dispensing cash to bank customers. Undoubtedly, this was a harbinger of bigger things to come for both IBM and crypto. It was only a matter of time before Horst Feistel’s baby would no longer be a research project; it would be a major IBM initiative. And that would change everything.

 

As Feistel was refining Lucifer, a thirty-eight-year-old engineer named Walter Tuchman was working at IBM’s Kingston, New York, division. He was a Big Blue lifer, having first gotten his feet wet during a three-month period at IBM in 1957 between college and the army. When he finished his stint, IBM not only rehired him but sent him off to Syracuse to pursue a doctorate in information theory. Most of his classmates remained in academia, but Tuchman wanted to use his knowledge to actually create sophisticated technology, so he stuck with IBM and wound up heading product groups.

Tuchman’s most recent IBM task involved an odd sort of computer security vulnerability. When computer terminals are in operation, they leak out faint electronic impressions that a sophisticated eavesdropper can use to reconstruct the information being shown on the screen. In effect, those blips represent an unauthorized computer-data wiretap. The government wanted a special means to shield its computers from such potential leaks, and IBM responded by devising what came to be known as Tempest technology. It was considered a big win, and when Tuchman’s team finished its work around 1971, people in the group wanted to stay together rather than disperse to other projects, a routine known internally as “volkerwanderung.” To do this, they needed a new mission. Tuchman’s boss knew there were some interesting things going on in the banking division that might require innovative advances in computer security, and suggested Tuchman and his team look into it.

IBM’s banking division was fortuitously located just across the road from Tuchman’s offices in Kingston. He quickly found that his boss’s instinct was sound in sending him there. Building on the Lloyd’s project, IBM had decided to advance the idea of cash-issuing terminals, where bank customers could get money from their accounts without having to see a teller. The first cash-issuing machines had been giant safes that held not only the money but also all the electronic and computer equipment necessary to process the transaction. This was both costly and unwieldy. The better solution would be to spread the computer application between a terminal and the bank’s mainframe computer, which could do all the heavy-duty processing. This solution was not only efficient, but hewed to IBM’s recent, painful realization that the standard model of computing was headed to the junkyard. “Before then, data processing was all done on the mainframe. The security model was that you locked your door, you locked your desk, and you had a guy with a gun guarding the building,” explains Tuchman. But now, even the most tradition-bound minds in Armonk understood that in the future, as Tuchman puts it, “data processing was leaving the building.” And since a guard with a gun couldn’t be everywhere, the security model would have to change.

Of course, a system that actually doled out cash would represent a trial by fire for whatever new type of security IBM employed. The crucial commands that flashed a green light to spit out twenty-dollar bills would be sent over the phone line. Tuchman was quick to understand how precarious this could be. Imagine if some techno-crook managed to elbow his way on to the phone line and mimic the messages that said, “Lay on the twenties!”

The answer was cryptography. Though Tuchman had a background in information theory, he had never specifically done any crypto work. But he soon found out about the system that the guys in IBM research at Yorktown Heights had cooked up. He ventured down to Watson Labs one day and heard Feistel speak about Lucifer. He immediately set up a lunch with Feistel and Alan Konheim. The first thing Tuchman asked Feistel was where he had gotten the ideas for Lucifer. Feistel, in his distinctive German accent, mentioned the early papers of Claude Shannon. “The Shannon paper reveals all,” he said.

Meanwhile, Tuchman’s colleague Karl Meyer was exploring whether Lucifer might be a good fit for an expanded version of the Lloyd’s Cashpoint system. Ultimately he and Tuchman concluded that it would probably need a number of modifications before it was strong enough to rely upon. But it would be a fine beginning. And so, they made an arrangement with Alan Konheim and his Information Theory Group. Tuchman and Meyer’s team at Kingston would build a revised algorithm for Lucifer. Then they would send it to Yorktown for evaluation and testing.

The internal name for the cipher was the DSD-1.

Before this arrangement was approved, however, a top IBM executive demanded to know why they were even bothering with Lucifer when he knew of a cheaper, faster algorithm. Tuchman took this supposedly superior algorithm home and broke it over the course of a weekend. (He and Meyer eventually published the break in the trade magazine Datamation.) Tuchman would often cite this triumph as proof that his team knew what it was doing—and to ensure that the work wouldn’t be disrupted by clueless interference from upstairs. “We can’t deal with amateurs in the field,” he remembers telling the muckety-mucks high on the corporate food chain. “There’s no cheap way out of doing a crypto algorithm. You’ve gotta work, work, work. Qualify, qualify, qualify. It’s going to take a long time.”

This was a fairly difficult process because, as Whit Diffie could have told the Kingston group, there was pathetically little information available on how one could construct a modern, military-strength cryptosystem. “All of it was classified,” sighs Tuchman. “But we understood from our mathematics classes what makes a cipher hard to solve.” His group read everything they could in the library, and, as Feistel had predicted, the most helpful papers were those of Shannon. And they talked a lot to Feistel himself. But mainly they reinvented a lot of what must have been common knowledge among the algorithm weavers at Fort George Meade. “We sat around in our conference rooms working on the blackboard, teaching ourselves,” says Tuchman.

Ideally, Feistel himself would have been recruited to temporarily move to Kingston. Tuchman kept asking Konheim, “What does Horst want to do? I’ll give him a nice desk and his own office, and he can come up here.”

And Konheim would say, “Nah, I don’t think it’ll work out.”

Tuchman eventually came to understand why. “Horst was like a European version of James Stewart in the movie Harvey,” he later said. “He was sort of living in a little magical world between what happens in a commercial business like IBM and his hobbies. I never quite felt that Horst understood what the business world—especially the high-tech business world—was all about. He was cloistered in research in Yorktown, and here we were, these crazy guys from Kingston who were actually willing to make products, to see if we could do something that made money.”

Konheim agrees that Feistel was oddly misplaced in the corporate world and, as time went on, even in the research division of that universe. According to Konheim, as Lucifer became less and less Feistel’s invention and more the commercial product of an IBM division, Feistel would arrive at Yorktown later and later in the day. And even then, he wouldn’t seem to be working on the project, but rather spending a lot of time on the phone speaking German. Konheim says that Feistel’s elderly aunt had promised him a considerable inheritance, and a lot of that phone time was spent cultivating her almost fanatically. (According to Konheim, it was a bitter disappointment years later when she died and left him nothing.)

And Feistel’s 1973 article for Scientific American—one of the most explicit scientific descriptions of crypto presented to the public in years—could have been interpreted as a rebellion of sorts. Certainly in some quarters such frankness about the cryptographic innards of a potential IBM product could have more than raised an eyebrow. Apparently, the NSA itself objected to the article; years later, Feistel would allude to the agency’s unhappiness with it, also remarking that if it hadn’t been for the Watergate scandal then turning Washington upside down, the NSA might have tried to shut down the entire Lucifer project, as it had with his previous ventures.

The Kingston group was blissfully unaware of such intrigues. To them, the Lucifer effort was simply a product ramp-up. They focused on their goal of modifying the system, increasing its complexity and difficulty so that its ciphertext would pass the Shannon tests for apparent information randomness. The first step was to set up a list of what they called “heuristic qualifiers,” a series of mathematical tests to confirm that the cryptosystem’s output—the scrambled message—bore no apparent relationship to the original message and appeared to be a random collection of letters. In Claude Shannon’s terminology, the apparent information content would be zero.

Feistel’s version of Lucifer certainly attempted to reach this ideal but didn’t go far enough. Its strongest feature was its two S-boxes, where the trickiest substitutions took place—the nonlinear transformations designed to drive cryptanalysts batty. So the Kingston team decided that the new, improved Lucifer—DSD-1—would have even more devious S-boxes. And the number of those would increase from Lucifer’s two to a much more formidable eight.

Complicating that effort were the requirements for compactness and speed: “It had to be cheap and it had to work fast,” says Tuchman. To fulfill those needs, the entire algorithm had to fit on a single chip. So another part of the team was a VLSI (Very Large Scale Integration) group, split between Kingston and IBM’s Burlington, Vermont, labs, whose job was to put the entire scrambling system on a 3-micron, single wiring layer chip. If everything worked out, IBM would have the tiniest strong-encryption machine ever known.

Working under those constraints, the Kingston team constructed the complicated DSD-1, still informally referred to as Lucifer. If all went well, their new Lucifer would take a 64-bit block of plaintext, put those bits through a torturous process of permutation, blocking, expansion, blocking, bonding, and substitution involving a digital key, and then repeat the process fifteen times more, for a total of sixteen rounds. The result would be 64 bits of what appeared to be total digital anarchy, a Babel that could be returned to order only by reversing the encryption process with the digital key that determined how the scrambling had been done.

Then the Watson Lab team would try to attack it, to see if things really had gone well.

 

Though Horst Feistel was not involved in the actual reconstruction of DSD-1, he did help bring his colleagues in research up to speed for the testing process. On January 11, 1973, he gathered five fellow members of the Data Security Group at Yorktown Heights and gave them their first exposure to the Lucifer cipher. One of the group, Alan Tritter (the same eccentric computer scientist who had told Whit Diffie about IFF protocols), raised questions as to the wisdom of the entire enterprise. Was IBM putting itself at risk by vying to be a power in the new world of commercial cryptography? What if Lucifer could be cracked?

Tritter’s comments drew interest because they seemed to echo some remarks made, but not proven, by a professor at Case Western Reserve University named Edward Glaser. A blind man who was one of the endless consultants IBM routinely hired with its bottomless budget, Glaser, according to Konheim, had blustered that if he were given twenty examples of ciphertext, along with the original plaintext (what cryptographers call a known plaintext attack), he could break Lucifer’s system. (It turned out to be a specious claim.)

But the point was well taken, and Tritter repeated it in a memo written later that year. “We were/are in an unusually exposed position,” he wrote. Noting that the first use of Lucifer was already implemented in a Lloyd’s cash terminal, he ticked off the consequences that could come if the system, like so many seemingly “unbreakable” ones before it, was somehow compromised. If someone was able to produce a valid key for a Lucifer cipher, he wrote, “a clever, resourceful, highly organized attempt to remove illicitly but without the use of force the entire cash contents of all the terminals in the ‘Cashpoint’ system, say over a single bank holiday weekend, would certainly succeed.”

But such a possible loss was only the beginning of the sorts of perils IBM was courting by drawing on crypto’s implicit promise of security. With Big Blue’s fat cash reserves, it would be no problem replacing even a steep stack of twenties to reimburse Lloyd’s. More troublesome would be restoring public confidence. And then would come the lawsuits.

“Were the security of [Lucifer] or of any other crypto product we may subsequently field to be breached publicly, the harm it would do us in the marketplace would be incalculable,” wrote Tritter. “And this is in addition to actual damages and the very real possibility of exemplary damages awarded against us in a lawsuit which would give the press, the industry, and the public a field day.”

On the other hand, how could IBM not pursue cryptography? Its business was the information age, and without a means of protecting data as they moved from one computer to another, IBM would not sell nearly as many computers. The lack of cryptography was a potential roadblock to the computerization of America—and the computerization of the world itself. So on February 5, 1973, a high-level meeting was held to review “the status and plans of cryptography within the entire IBM corporation.” As Tritter later summarized the meeting, “It appeared to be broadly agreed . . . that IBM was apparently in the crypto business for keeps, and would have to acquire a corporate expertise in the area. In the meanwhile, attacks on Lucifer were to be intensified.”

An outside expert, Jim Simons of the math department at the State University of New York at Stony Brook—who had also practiced cryptography at the Institute for Defense Analyses, the NSA satellite in Princeton—was recruited to organize a concentrated attack on Lucifer. He worked with three researchers from Yorktown Heights for about seven weeks in the late spring of 1973. Even before he issued his report, IBMers were buzzing with the good news: Simons and his team hadn’t cracked it.

“The Lucifer machine is certainly stronger than I had originally thought,” Simons wrote in his report of August 18, 1973. But he didn’t exactly bestow a crypto seal of approval on it. “It seems highly improbable that Lucifer will be broken by two high school students as part of their science fair project,” concluded Simons. “On the other hand, there isn’t nearly enough evidence to feel confident that it won’t succumb to sophisticated attacks by a professional cryptanalyst.” Simons worried that if Lucifer, as currently constituted, was put into commercial use, it would almost inevitably be used to protect “traffic of genuine importance” (like money, or trade secrets), providing the incentive to encourage an intense, ultimately successful effort to break it. So while Lucifer seemed to be a good start for IBM, Simons warned, the company should work harder to come up with an improved product. “There really is no choice,” he concluded.

Meanwhile, IBM itself kept wondering if Lucifer was up to the task. In a confidential memo in May 1973, its chief scientist Lewis Branscomb, summarizing the consensus of the firm’s Scientific Advisory Committee, emphasized the need for the company to “establish a single cryptographic architecture, technology and product strategy.” Lucifer, he wrote, was not the only candidate. But later in the month, another memo deemed the Kingston scheme superior, with one caveat: “Unless there is a clear evidence of a significant threshold of vulnerability.”

The tests continued for months, conducted by private-sector researchers hired by IBM. “Alan would give them the algorithm and say, ‘Break it. Just go break it.’ And Alan kept reporting back that nobody could find a shortcut,” says Tuchman. “Finally I reached that magical psychological place where I figured this thing doesn’t have a shortcut, so there is just no shortcut solution. Forget it, guys, let’s concentrate on implementing the product now.”

Still, compared to the world-class codebreakers behind the Triple Fence, most of the math professors hired to bang their heads against Lucifer were Little Leaguers. How could IBM be sure the scheme was really sound? They certainly didn’t want to find out its vulnerabilities by discovering that one day some former KGB cryptanalyst hired by the Mafia had cleaned out their virtual cash vault.

 

At the beginning of 1974, Tuchman figured his team was about halfway through its work. “We had a pretty good idea how much algorithm we could get on a single chip,” he says. And much of that algorithm was written. But two things happened that year that would profoundly affect the project. The first would throw it open to the public. The second would cast a clandestine shadow over it that would last for a generation.

IBM was not the only institution aware of the vital need for cryptographic protection in the computer age. That view was also shared at the National Bureau of Standards, the government agency in charge of establishing commonly accepted industry standards for a wide variety of commercial purposes. The bureaucrats and scientists there believed that digital protection should be centered in a single system, one well-tested means of encrypting information that would be accessible by all. So NBS decided to solicit proposals for a standard cryptographic algorithm. (The NSA declined to submit one of its own ciphers, since allowing outsiders to examine its work was unthinkable.) In the May 15, 1973, Federal Register, the NBS listed a number of exacting criteria that such a standard should meet.

Not surprisingly, the NBS received no submissions at that time that even vaguely met the criteria. By and large the only cryptographers in this country who had the wherewithal and expertise to meet this challenge were working behind the Triple Fence. And the work done there was never published, never revealed.

But there was one cryptosystem in development that seemed to fit a lot of the government’s needs: Lucifer, the DSD-1. Lewis Branscomb, IBM’s chief scientist—who, not coincidentally, was himself a former head of the NBS—in particular felt that this work in progress might be an excellent candidate for the encryption standard for the next generation.

Walt Tuchman was against the idea, primarily because of the trade-off involved in submitting the revised Lucifer as a federal standard: IBM would be required to relinquish its patent rights, essentially giving—not selling—the algorithm to the world. “I was this typical capitalistic product manager,” he explains. “I’m in this thing to make money, not to foster some great social improvement.” He argued his point before IBM’s high-level executive Paul Rizzo, who was then Big Blue’s number two. Branscomb presented the other point of view: make it public. Finally, Rizzo weighed in. Lucifer, he argued, was like a safety component that benefited all of society. If the Ford Motor Company came up with a seat belt superior to those of its competitors, one that saved the lives of moms and dads, would they allow General Motors to use it? You better believe they would, because it was the right thing to do. Jimmy Stewart couldn’t have topped that homily. You could almost hear the violins playing. The speech convinced not only the IBM board, but Tuchman himself, who called a staff meeting when he returned to Kingston. “Well, guys,” he said, “we’re going to give the stuff away.”

Not completely, of course. The ways they built Lucifer into a chip, the ways they would implement it within a full-featured solution, the little tricks to get the most out of it . . . these would be great selling points for IBM-created versions of the DSD-1. Other companies would get access just to the algorithm itself. So maybe it wasn’t such a bad idea from a business perspective to give the thing away.

The feeling at IBM was that merely submitting its work to the NBS was sufficient to fast-track DSD-1 toward a coronation as the standard. Even though the response date for the NBS’s 1973 request for crypto algorithms had long expired, Branscomb wrote to his NBS successor Ruth Davis in July 1974, offering what he described as the “Key-Controlled Cryptographic Algorithm,” developed at Kingston, as a candidate. With this favored new candidate already in hand, the NBS, somewhat superfluously, reissued its request for crypto algorithms in the August 27, 1974, Federal Register. No serious competitor emerged. And thus the revised Lucifer, a.k.a. DSD-1, was destined to be known by a lofty, though generic, moniker: the Data Encryption Standard. The title would eventually become so familiar among the digital cognoscenti that it would be pronounced not as a string of initials but as a word: Dez.

 

By then, the other crucial process in Lucifer’s transformation was well under way. It had been fairly early in 1974 when Walt Tuchman received what he later would refer to as “that deadly phone call.” It was his boss, telling him he had to take a trip down to the National Security Agency to cool them down about Lucifer.

Tuchman didn’t like it. But he understood the importance of playing ball with Uncle Sam. By creating a cryptographic product for the commercial sector, IBM was treading on strange turf. If the company didn’t get export clearance to send its crypto chip to its international customers, the whole product might as well be scrapped. What good was a product for a global company like IBM if you couldn’t sell it to the global market?

So Tuchman went on his first visit to The Fort. He eyeballed the Triple Fence, contemplated the armed marine guards, parked in the visitors’ lot, and entered the small concrete building where outsiders lacking previous clearance fill in a stack of papers and wait to be called. Then an elderly woman appeared and guided him through a labyrinth of hallways to the second-level manager assigned to the case, a guy just below the deputy-director level. He was not in a military uniform or even in a suit. And he quickly proposed a quid pro quo: We want to control the implementation of this system. You will develop it in secret, and we will monitor your progress and suggest changes. We don’t want it shipped in software code—just chips. Furthermore, we don’t want it shipped to certain countries at all, and we will allow you to ship it to countries on the approved list only if you obtain a license to do so. That license will be dependent on customers we approve signing a document vowing that they will not subsequently ship the product to anyone else.

This went on for a while, until Tuchman finally had a chance to speak. “What’s the pro quo of the quid pro quo?” he asked. After all, the NSA man had focused entirely on restrictions and conditions, and had neglected to mention what IBM would receive for its troubles.

“The pro quo will be something very useful to you,” said the NSA man. The agency itself would qualify the algorithm. Their all-star cryptanalysts would analyze it and bang away at it. If there was a weakness, it could be noted and corrected. And when the mathematical dust settled, IBM would have a priceless imprimatur, one that would assure the instant confidence of its customers: the National Security Agency Good Secret-Keeping Seal.

This was a powerful offer. It spoke directly to Tuchman’s greatest fear—that outlaw codebreakers would discover a shortcut solution that would allow them to steal secrets and even money from IBM customers, thus exposing the fabled computer giant to international embarrassment and a legal Armageddon. Instead of having to rely on the smart but inexperienced amateurs at Yorktown and the random consultants they hired, IBM would have the ultimate in due diligence: the cryptanalysis gold standard. As soon as he returned from Fort Meade, he went to see his boss and urged him, “Let’s do it. Let’s work with these guys.” It was a solution that felt good to the top IBMers, who, after all, were virtually synonymous with the “Establishment.” So, just like that, the country’s single most important cryptographic effort in the private sector—save for that of Whit Diffie, still in obscurity struggling at Stanford with his weird ideas about one-way functions—came under the friendly but firm embrace of the National Security Agency.

Unspoken was the question as to whether the NSA—which after all was not an arm of the Commerce Department but an intelligence agency, the ultimate spook palace—might discover a gaping weakness in DES but keep its collective mouth shut, smug in the knowledge that it could use that shortcut to quickly break messages encrypted in the IBM code. Tuchman understood the risk of this. As the development process unfolded over the next few months and years, he watched for signs that this might be happening. Ultimately, he was convinced of the NSA’s sincerity. “If they fooled me,” he says, “I will go to my grave being fooled. I looked at those guys eyeball to eyeball. I’m a bit of a film buff, and I’ve seen good acting and poor acting. And if the NSA people fooled me, they missed their profession. They should’ve gone to Hollywood and become actors.”

From that point on, DES’s development process became, for all practical purposes, a virtual annex within the Triple Fence. The government issued a secrecy order on Horst Feistel’s Lucifer patent, known as “Variant Key Matrix Cipher System.” On April 17, 1974, an IBM patent attorney sent a memo to the crypto teams at Yorktown Heights and Kingston explaining that this meant there would be not only no publishing on the subject, but no public discussion whatsoever without the written consent of the Commissioner of Patents. Even the fact that a secrecy order existed was itself considered a secret, and talking about that was just as serious a crime as handing out encryption algorithms in the departure lounge at Kennedy Airport. A loose lip could result in a $10,000 fine, two years in prison, or both. Fortunately, the memo explained, “IBM has been granted a special permit which allows the disclosure of the subject matter in the application to the minimum necessary number of persons of known loyalty and discretion, employed by or working with IBM, whose duties involve cooperation in the development, manufacture, or use of the subject matter.” Without that exemption, of course, IBM could not have continued its effort, because of the obvious difficulty of collaborating on a project when one risked a jail term for admitting its existence to a co-worker.

The NSA’s demands for secrecy were particularly rigid concerning the agency’s cryptanalysis of DES. Anything—anything—that shed light on the way that The Fort’s codebreakers went about their business was regarded as the blackest of black information. The agreement drawn up between the agency and the corporation clearly outlined the limited nature of what IBM’s scientists could glean from the collaboration. IBM was strictly required to limit those who were involved in the evaluation, and to keep up-to-date lists of those people. Any contact between Big Blue and Big Snoop would come at a series of briefings with rules as circumscribed as a Kabuki performance: IBM would essentially present information, and the NSA people would silently evaluate it. No geeky chatter: the NSA people were formally prohibited “from entering into technical discussions with IBM representatives in regard to the information presented.” Afterward, the NSA folks would hold postmortems to determine whether the IBM scientists might have stumbled on information or techniques “of a sensitive nature.” In that case, the NSA would formally notify the company, and IBM would keep the information under wraps.

The NSA certainly did know its stuff. It was particularly interested in a technique discovered by the IBM researchers that was referred to at Watson Labs as the “T Attack.” Later it would be known as “differential cryptanalysis.” This was a complicated series of mathematical assaults that required lots of chosen plaintext (meaning that the attacker needed to get messages of his own choosing encrypted, and then study the matched sets of original dispatches and encrypted output). Sometime that year, the Watson researchers had discovered that, under certain conditions, the IBM cipher could fall prey to a T Attack—a successful foray could actually allow a foe to divine the bits of the key. To prevent such an assault, the IBM team had redesigned the S-boxes. After the redesign, under even the most favorable conditions, a T Attack would provide a cracker only a slight, virtually insignificant advantage.
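The basic bookkeeping behind such an attack can be shown with a toy example. The sketch below tabulates, for a small S-box, how often each difference between two inputs produces each difference between the corresponding outputs—the “difference distribution table” at the heart of differential cryptanalysis. The 4-bit S-box here is an arbitrary permutation invented for illustration; it is not one of Lucifer’s or DES’s boxes, and this is the general technique, not IBM’s or the NSA’s actual analysis:

```python
# Toy difference-distribution table -- the central object of differential cryptanalysis.
# TOY_SBOX is an arbitrary 4-bit permutation made up for this illustration.
TOY_SBOX = [7, 12, 1, 9, 14, 0, 5, 11, 3, 8, 15, 2, 10, 4, 13, 6]

def difference_distribution(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]   # how an input difference propagates to an output difference
            table[dx][dy] += 1
    return table

table = difference_distribution(TOY_SBOX)
# An attacker armed with plenty of chosen plaintext hunts for the most lopsided entry
# (ignoring the trivial all-zero input difference); the bigger it is, the more key
# information leaks. Resistant S-box design keeps every such count small.
worst = max(count for row in table[1:] for count in row)
print("most biased differential occurs", worst, "times out of", len(TOY_SBOX))
```

Keeping every entry in that table as flat as possible is, in rough terms, what IBM’s S-box redesign accomplished—and spelling out why the boxes were built that way was precisely what the NSA did not want published.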

Hearing about this unhinged the NSA crowd. Apparently, the T Attack was very well known—and highly classified—behind the Triple Fence. So imagine the agency’s dismay when the IBM team not only discovered the trick (which, presumably, the NSA had been merrily employing to crack enemy codes) but had created a set of design principles to defend against it. The crypto soldiers at Fort Meade could not tolerate the possibility that such information might leak into the general literature. And so the NSA put its secrecy clamp down harder on IBM.

“They asked us to stamp all our documents confidential,” says Tuchman. “We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it.”

The man who probably did the most work for IBM on the T Attack, Don Coppersmith, would not discuss the issue for twenty years. It was not until 1994, long after other researchers had independently discovered and described the technique, that he divulged the S-box design principles. “After discussions with the NSA,” he explained in a technical article for the IBM Journal of Research and Development, “it was decided that the disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that can be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography.”

Ultimately, IBM got what it wanted for DES—a clean bill of health from the NSA. (This was also a crucial factor in the process by which the National Bureau of Standards would place its imprimatur on DES as a federal standard.) But IBM paid a steep price for adhering to the NSA’s demands to keep its S-box design principles secret. The behavior of the S-boxes in the DES system involved complicated substitutions and permutations that put Rube Goldberg to shame. The best way that outsiders could evaluate whether those bizarre transformations were done simply to produce a tougher cipher—or were clandestinely jimmied to put in a back door by which the NSA could secretly get a head-start on codebreaking—was to know why the designers chose their formulas. So IBM’s refusal to explain the logic behind the S-box design encouraged critics like Diffie and Hellman to let their suspicions run wild and entertain all sorts of theories about secret back doors.

Telling people that a presumably public algorithm was based on secret designs was a recipe for paranoia, and indeed, the resulting dish nourished critics for years. But to the NSA, this point was nonnegotiable. The Fort Meade brain trust might have considered it a necessary evil to allow a strong crypto algorithm into the world of banks and corporations. But permitting the release of sophisticated techniques that might encourage outsiders to bulletproof their own codes . . . well, that was quite unacceptable.

The whole episode turned out to embody in a nutshell a dilemma that the NSA had yet to acknowledge, even to itself. For years, people at The Fort could be reasonably confident that when they devised a breakthrough technique like differential cryptanalysis, such information would be unlikely to tumble into the public domain. Those days were over. Consider that the IBM group had come across the T Attack on its own, without the help of government. Differential cryptanalysis was ultimately a mathematical technique just waiting to be rediscovered by someone outside the Triple Fence interested in sophisticated codes. The NSA couldn’t hold on to such mathematical machinations any more than an astronomer discovering a previously unknown nebula could cover up the skies to mask its presence to future stargazers.

This was to be the reality of the dawning era of public crypto: whether the NSA liked it or not, bright minds were inevitably going to reinvent the techniques and ideas that had been formerly quarantined at Fort Meade—and maybe come up with some ideas never contemplated even by the elite cryptographers behind the Triple Fence.

 

S-boxes aside, the most controversial feature of DES would be its key length. Horst Feistel’s Lucifer specified a 128-bit key. But clearly the National Security Agency did not want the national encryption standard—even if it were used only by financial institutions and corporations—to lock information within such a mighty safe. By the time the algorithm had threaded its way through the Triple Fence and was released as a potential NBS standard, the key length had been cut in half, and then cut some more, down to the relatively paltry 56 bits.

It’s hard to exaggerate the difference this makes. Assume that a codebreaker trying to crack DES is unable to discover any shortcuts to cracking. The only way that an intruder can recover an encrypted message, then, is to launch a brute-force attack, experimenting with every possible key combination until he finds the one that was used to scramble the original. Such a search is the equivalent of a safecracker painstakingly twisting the dial to stumble upon the exact series of numbers that would align the tumblers. Even with a computer twisting the virtual dials at high speed, a very large “keyspace” (a numerical range that contains all possible key combinations) can make such a search impossible to pull off. A 128-bit key is very, very large. If a computer tried one million keys every second—a million different combinations of the numbers on the safe dial—it would take aeons to try every possible key.

So what would be the effect of cutting the key size in half? To assess this, you have to keep in mind the nature of digital numbers. Each bit in a binary key is like a fork in the road that a codebreaker must negotiate in order to get to the destination of the correct combination of ones and zeros. Every fork presents a random choice between the correct turn and the wrong turn; a 128-bit key means that you have to guess the correct way to turn 128 times in a row. To make the course twice as difficult, you simply have to add one more fork; then you’ve created twice as many possible paths to negotiate, but still only one is correct. But to make the course half as difficult, you don’t divide the number of forks by two, but simply remove one.

That’s why removing a single bit from the key size means that the encrypted message is only half as safe as it was before. Switching from a 128-bit key to a 127-bit key means you’re cutting in half the work factor needed to break it. Cut the key size one more bit, to 126 bits, and you’ve halved the work factor again. And so on.

According to Tuchman, the Kingston group figured that a 128-bit key was not only overkill but would require too much chip space and computation. “We had to fit the whole algorithm on there,” says Tuchman. “The S-boxes, everything. We were using two-micron CMOS chips, and the data coming in could only be 8 bytes wide [one byte equals eight bits]. So our first key length was 64 bits.” Sixty-four bits was a good fit for a chip—a number that broke down evenly into eight-bit bytes.

This was quite a dramatic reduction. On the theoretical million-keys-a-second computer, it cut the expected time to find a key by search—on average, the right key turns up about halfway through the keyspace—from trillions of trillions of years to around 300,000. Still, a 64-bit key length was considerable in the mid-1970s, especially since it was agreed that computer technology would not be sufficiently advanced to conduct searches at such speeds for the next couple of decades.

But then the Kingston group made a seemingly inexplicable second cut, to the mathematically awkward key length of 56 bits. And suddenly, the possibility of a brute-force attack was smack in the picture. Why did a lousy eight bits make such a difference? Remember, every time the key is reduced by a single bit, it becomes twice as easy to crack. So this eight-bit loss made the cipher 256 times easier to crack: from 300,000 years to a little over a thousand. Put another way: the percentage of key space that formerly would have occupied a foe’s computers from January to August could now be scanned in less than a day.
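Those figures are easy to verify. The short sketch below redoes the arithmetic under the same assumptions the estimates above rest on—a hypothetical machine testing a million keys per second, with the average search finding the key about halfway through the keyspace:

```python
# Average-case exhaustive search: on average the right key turns up after
# trying half the keyspace, i.e. 2**(bits - 1) keys.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
RATE = 1_000_000  # hypothetical million keys per second, as assumed above

def average_years(bits, rate=RATE):
    return 2 ** (bits - 1) / rate / SECONDS_PER_YEAR

print(average_years(64))                      # ~292,000 years -- the "around 300,000"
print(average_years(56))                      # ~1,142 years -- "a little over a thousand"
print(average_years(64) / average_years(56))  # 256.0 -- eight dropped bits, 2**8 = 256 times easier
```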

What was IBM’s explanation for this? According to Tuchman, it was standard company practice in hardware design to allow a certain number of extra bits for “parity checks,” a sort of synchronization to make sure that the electronic signals were being properly read. “It was an IBM internal spec,” he says, at the same time admitting that it was a “foolish” requirement. “We don’t do that anymore, but at the time we had a standard—so I had to reduce the key size [to accommodate the extra bits].”

Tuchman didn’t think that this further cut really compromised DES. (Privately disagreeing with this was Horst Feistel, who still preferred a 128-bit key. But he was no longer actively involved with the project and would soon be quietly eased out of IBM itself.) Tuchman and his colleague Karl Meyer believed that a 56-bit key, with its 70 quadrillion variations, was more than sufficient for the commercial, even the financial, secrets that DES would protect. The idea of DES, Tuchman would argue, was to provide computer networks the level of security that people had in their physical workplaces: “locked desk drawers, locked doors on computer rooms, and loyal, well-behaved employees.” Not the military secrets customarily transported in exploding briefcases handcuffed to couriers or entrusted to spies who were taught to ingest poison pills upon capture.

Others, however, have always believed that the reduction was caused by NSA pressure. This even included skeptics inside IBM, like Alan Konheim, who headed the mathematical team on the DES project. “Fifty-six bits is very unnatural,” says Konheim, obviously disregarding Tuchman’s “parity check” explanation. “The government [must have] said, ‘Listen, 64 bits is too much—make it 56.’” Why would IBM go along with it? “You see, IBM does business all over the world. It can’t send a pencil outside the United States without an export license. Not only that, when [the NSA invokes] patriotism and national security, well, these are not things you can argue about.”

To outsiders like Martin Hellman and Whit Diffie, of course, the key size was a smoking gun that proved the NSA had weakened the standard for its own nefarious purposes. In the months after the standard was first announced, the Stanford cryptographers wrote a steady stream of suggestions and objections to their contact at the National Bureau of Standards—and became increasingly frustrated that the officials kept insisting that there was no problem. Hellman came to believe that the NBS wasn’t speaking for itself but was acting as a stooge for Fort Meade.

To prove his point about the weakness of the key size, Hellman challenged an executive he knew at IBM to contradict his and Diffie’s contention that this DES key could actually fall in a day to a sophisticated, high-powered machine. At this point, the Stanford researchers were postulating that such a machine could be built for $20 million. Thus, if one key were broken each day, over a five-year period the price of breaking each key would be around $10,000. Not a bad investment if some of the broken messages included precious data like oil reserve locations and corporate merger plans—such information was worth millions. “But even if we were off by a whole order of magnitude, and it would cost $100,000, that wouldn’t matter,” says Hellman. “Because in five years computers would be ten times faster, and the solution would cost only a tenth as much as it would now.” According to Hellman, the IBM executive ordered his own researchers to investigate. “He called me back and said that their numbers were in the same ballpark as ours,” says Hellman. “That was his exact word, the ‘ballpark.’ But he told me that the key size was set by the NBS, not IBM.”

Meanwhile, officials at the NBS were assuring Hellman, in their responses to his frequent, increasingly pointed letters, that their own studies showed that a machine like the one envisioned by Hellman would take all of ninety-one years to search through a DES keyspace. Obviously, they were not playing in the same ballpark.

Hellman believed that all of this was bald evidence that the Data Encryption Standard was a swindle from the start. It was all the NSA’s master plan. The supposedly benign NBS—acting as the NSA’s public face—allowed IBM to construct its algorithm independently. This gave it deniability: Hey, it wasn’t us spooks who cooked it up, Big Blue did. But by getting IBM to cut the key size to an infuriatingly puny 56 bits, the spooks got what they wanted anyway. “They knew they could control the key size, which would ultimately control the strength of the standard,” complains Hellman.

And that was the kindest interpretation. If you wanted to be skeptical—and like any good cryptographer, Hellman and his colleagues were plenty skeptical—you’d still wonder about the possibility of an actual trapdoor that would allow the Fort Meade tricksters to decode a DES message within seconds. Why else were they keeping the design principles a secret?

In any case, Hellman rejected the government’s ninety-one-year estimate and decided to go over the heads of the NBS functionaries with whom he was corresponding. On February 23, 1976, Hellman stated his complaints in a letter to Elliot Richardson, who, as secretary of commerce, was the ultimate boss of the NBS:

I am writing to you because I am very worried that the National Security Agency has surreptitiously influenced the National Bureau of Standards in a way which seriously limited the value of a proposed standard, and which may pose a threat to individual privacy. I refer to the proposed Data Encryption Standard, intended for protecting confidential or private data used by non-military federal agencies. It will also undoubtedly become a de facto standard in the commercial world.

. . . I am convinced that NSA in its role of helping NBS design and evaluate possible standards has ensured that the proposed standard is breakable by NSA.

The response Hellman received from Ernest Ambler, the acting director of the NBS, did little to cool him down. Instead of answering Hellman’s charges directly, Ambler gave some general comments defending DES, and praised the NSA for its contributions in certifying the algorithm. He helpfully attached an executive order which outlined “the functions and responsibilities of NSA.” Monkeying with private-sector algorithms didn’t make the list.

That summer, Hellman, Diffie, and five other academics took a month to bang on the system and produced a paper called “Results of an Initial Attempt to Cryptanalyze the NBS Data Encryption Standard.” They were straightforward about their concerns: any algorithm approved by the NSA was “mildly suspect a priori” because “the NSA does not want a genuinely strong system to frustrate its cryptanalytic intelligence operations.” It was not surprising, then, that while falling far short of actually breaking a DES key, they concluded that the system could not be trusted. Besides the key strength, they found what they considered a “suspicious structure” in the S-boxes—possibly, they wrote, “the result of a . . . deliberately set trapdoor.”

To IBM’s Walt Tuchman, though, the Diffie-Hellman complaints were a travesty born of paranoia and ignorance. He was no secret agent—he was a product guy—and to the best of his ability, he’d led a team to create a good product! It had been a happy day for his team when the first two DES devices were completed. They were shoe-box-sized metal cases stuffed with chips that went between a mainframe computer and a modem. Such a device on each end of a data transfer would allow two computers to communicate in a secret stream, impervious to eavesdroppers—no matter what Marty Hellman said. One box was sent to IBM’s Paris headquarters, the other to Lew Branscomb’s office in Armonk. Then they made some history. The Paris office sent off an encrypted message to the Armonk machine. The Armonk machine, having been previously fed the symmetrical key that performed both encryption and decryption, deciphered the message back to its original form. “It went to a little printer and the message was printed in all the IBM newspapers,” recalls Tuchman. “It was some innocuous little message, of course, because everybody knew it was going to be published in the clear.”

All that happiness, though, was tempered by the attacks that came from Hellman and friends. Tuchman and his colleague Carl Meyer had to defend themselves at two public workshops sponsored by the NBS. The second, held in September 1976 at the NBS’s Gaithersburg, Maryland, headquarters, was the most contentious. I didn’t do anything wrong! insisted Tuchman. The key size was plenty big enough, and building a machine to crack DES would not take Hellman’s low-eight-figure price tag, but a cool $200 million. And if that key size wasn’t large enough, people could design devices to run DES through its paces twice, with two different keys. Though such a process might be difficult to set up, this would effectively double the key size to 112 bits—enough keyspace to confound every damned computer on the planet for the next gajillion years. (Eventually, a process would emerge called “Triple DES,” which would use three keys and rule out even the most extravagantly brutish of attacks. But all of this was a moot point because the version of DES with the allegedly hobbled 56 bits was the one proposed for the standard.)
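
A minimal sketch of the multiple-encryption idea Tuchman was describing, with a toy XOR cipher standing in for single DES purely to show the composition (the 112-bit figure is the nominal claim made above, not a property of the toy):

```python
# Sketch of the multiple-encryption idea described above. DES itself is not
# implemented here; a toy XOR cipher stands in for the single-DES block
# operation purely to show the composition, not to model its strength.

def toy_encrypt(block: int, key: int) -> int:
    return block ^ key          # placeholder for one DES encryption pass

def toy_decrypt(block: int, key: int) -> int:
    return block ^ key          # placeholder for one DES decryption pass

def double_encrypt(block, k1, k2):
    # Two passes under two independent keys: nominally 56 + 56 = 112 key bits.
    return toy_encrypt(toy_encrypt(block, k1), k2)

def triple_ede_encrypt(block, k1, k2, k3):
    # The encrypt-decrypt-encrypt form later standardized as Triple DES.
    # Setting k1 == k2 == k3 collapses it to a single DES pass, which is what
    # made the scheme backward compatible with plain-DES hardware.
    return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

# Backward-compatibility check, using the toy primitives:
assert triple_ede_encrypt(0b1010, 7, 7, 7) == toy_encrypt(0b1010, 7)
```

The encrypt-decrypt-encrypt ordering is the detail that later let Triple DES hardware interoperate with plain DES simply by setting all three keys equal.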

Tuchman’s appeal failed to quiet the critics. Why didn’t you publish the design heuristics? they wanted to know. Did you put a trapdoor in DES?

Then came the newspapers. “Those professors told the New York Times and the Washington Post,” Tuchman complains. The next thing he knew, at IBM’s request, Tuchman himself was being interviewed. After taking a gander at the newly famous desks of Woodward and Bernstein, he told the Post reporter the same thing he told the Times reporter: The NSA didn’t modify the algorithm. They didn’t put a trapdoor in. Look, you guys, it’s ridiculous; we’re not going to risk the entire IBM company by putting a trapdoor in its product.

Even so, the publicity took its toll. It was bad enough that the Times, the Post, and the Wall Street Journal were listening to Hellman and the critics. Worse came when Tuchman’s own mother called him from her retirement home in Florida, concerned with what friends had been telling her after reading the New York papers. She pleaded with her son, who had started life so wonderfully as a whip-smart college boy from Brooklyn: Please, Walter, leave IBM and stop hanging around with those bad people. Tuchman had to explain to her that he wasn’t going to wind up in a jail cell with Ehrlichman and Haldeman—he was a good guy!

After the publicity came hearings by the Senate Intelligence Committee. The top-secret sessions were closed and the final report was classified, but a summary was issued for the general public. Its contents provided ammunition to both sides.

On one hand, Hellman was proved correct about who had dictated the 56-bit key: “The NSA convinced IBM that a reduced key size was sufficient,” the report read. The reduction wasn’t, as Tuchman still insists, due to the rigor of chip design or the need for parity checks: it was the fact that the government wouldn’t tolerate anything more. IBM knew that it would need export licenses for approved customers. But the NSA, which had been charged to collaborate with the National Bureau of Standards in evaluating DES as a government standard, certainly was not going to rubber-stamp an algorithm that used, in its view, too long a key. Apparently, the 56-bit key length provided the NSA a certain comfort level. Though the work factor to break a cipher of that length seemed dauntingly high, it was clear that if anyone could contemplate a brute-force attack on DES, it was the National Security Agency itself, with what were assumed to be literally acres of computers in its top-secret basement. Obviously, while an ideal code for users was the strongest one possible, the ideal code for the NSA’s purposes would be one that was too powerful for criminals and other foes to break, but just weak enough to be broken by the billions of subterranean computer cycles at Fort Meade. Did a 56-bit key fit into that sweet spot? The NSA didn’t say. And never would.

Despite its finding that the key size was a result of NSA demands, the committee concluded that there was no wrongdoing by either IBM or the government. The Data Encryption Standard had been determined fairly. Like it or not, this was something that Marty Hellman and his friends would have to accept.

It took years, but eventually they not only accepted it, but came to eat some crow. As Walt Tuchman proudly notes, for more than two decades after the algorithm was formally accepted as a standard in 1977, no one had been successful at finding a significant shortcut to cracking a DES-encrypted message. (Of course, if the NSA had done so, it would never have admitted it.)

In 1990, outside cryptanalysts revealed the technique of what was called differential cryptanalysis, proving that under certain (admittedly rare) conditions, one could crack a DES key using slightly less computation than a brute-force attack would require. But this was essentially the “T Attack,” discovered by IBM during the development process in time to fortify the algorithm against such an assault, and kept confidential at the NSA’s request. (A second theoretical attack on DES, linear cryptanalysis, was introduced in 1993; it did not truly compromise the cipher either.)

So if the key size was indeed the only point of attack in DES—if one had to devote massive computational resources to breaking a single message and then wait for days, weeks, or months for the cipher to crumble—then the National Security Agency had certified what could be an extraordinarily powerful tool for the spread of strong encryption throughout the land, and maybe even the world. It had always been the impression of the folks behind the Triple Fence that the users of DES would be conservative, trustworthy institutions like banks and financial clearinghouses. They misjudged the situation. Instead, the development of DES marked the beginning of a new era of cheap, effective means of using computer power to keep personal information private. It was used not only in banks but in all sorts of commercial communications, and it was widely available for private communications, too. Though the NSA still controlled its export, its use quickly grew unfettered within U.S. borders. And while U.S. producers could not market DES overseas, the algorithm itself would find its way abroad, allowing foreign developers to make their own versions.

The dawning of this era of increased protection might have pleased some of the people in the communications security branch of the NSA, which was in charge of securing American data as they moved around the globe. But it was already causing conniptions among those in the signals intelligence area, the people whose job it was to make sure that our guys could quickly intercept and circulate all the rich and fascinating information buzzing around the world as electronic blips. If those blips were encrypted, and thus not easily read, well, then, that would be a problem. Making things even worse were the faster and cheaper computer technologies that made it feasible—made it the rule, in fact—for DES users to switch keys not every few months as the NSA assumed they might, but on a daily basis or even more often than that.

Yes, the Data Encryption Standard was a problem for The Fort. Years later even Martin Hellman came to realize that his attacks sometimes were based more on bravado than substance. “They were Darth Vader and I was Luke Skywalker,” he says. “I was bearding the NSA, and that’s a pretty heady thing for a young guy to be involved in.” Now, however, he admits that there were two sides to the issue: that DES, despite its key size, was strong enough to provide a measure of security to people, and that even though the NSA could presumably marshal the resources to brute-force a DES key into submission, the process was certainly more cumbersome and costly than simply reading an unencrypted intercept. DES was the NSA’s first lesson that the new age of computer security was going to complicate its life considerably—perhaps even to the point of shaking the entire institution.

Alan Konheim thinks that the bottom line on DES came from Howard Rosenblum. He was the deputy director for research and development at the NSA, where football fields of mainframe computers cracked the codes of the country’s friends and enemies and tested the codes that potentially protected our own secrets. One day, Rosenblum and Konheim were talking about DES, and the NSA official made an off-the-cuff remark that stayed with Konheim for years. “You did too good a job,” he said.

“It was not,” Konheim says delightedly, “a comment of flattery.”