Cryptography and the Emergence of Truly Unbreakable Codes
IN THOSE LONG-AGO TIMES OF THE EARLY 1990s, BEFORE e-commerce was commonplace, before most people even thought about the problems of keeping data secret, of typing credit card numbers into Web sites and assuming they would be secure, the federal government and scientists engaged in an epic debate.
Mathematicians had devised a clever way to make coding systems that could not be compromised. The most popular relied on factoring large numbers. The idea behind it was that it was simple to multiply prime numbers together and get a huge composite number, but if all you had was the composite, there was no feasible way to recover the primes that make it up. One could simply try dividing by every prime up to the square root of the composite, and there were a few shortcuts, but if the composite was big enough, it would simply be infeasible, even with the fastest computers, to break it down.
So encoding would, essentially, involve multiplying primes and using that method to encrypt a digital file. Decoding, for anyone who did not know the primes, would be impossible because it would be so hard to find the original set of primes.
Mathematicians called these “one-way functions.” They were easy in one direction—multiply primes—and almost impossible in the other direction—find the primes that were multiplied.
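The asymmetry is easy to see in miniature. The short Python sketch below (toy-sized numbers only; real systems rely on primes hundreds of digits long) multiplies two primes in a single step, then recovers them by nothing cleverer than trial division, which already takes thousands of steps even at this small scale:

    p, q = 10007, 10009          # two modest primes; real keys use far larger ones
    n = p * q                    # the easy direction: one multiplication

    def trial_division(n):
        # The hard direction: try every candidate divisor up to the square root of n.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1

    print(n)                     # 100160063
    print(trial_division(n))     # (10007, 10009), after roughly ten thousand trial divisions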
Yet if the method was that good, the codes truly unbreakable, did we really want the system to be public so anyone, including terrorists or enemy countries, could use it? The National Security Agency, which makes codes used by the federal government, including the military, and breaks codes used by other countries or terrorists, stepped in and asked to review mathematicians’ papers before they were published, lest they unintentionally reveal secrets and methods that could imperil the N.S.A.’s mission.
What ensued was a battle and a debate over how public this work should be and whether, once an idea is mentioned, it is even possible to prevent others from developing it.
The Times articles tell the story of the wrenching arguments and the research that ensued—attempts to find good ways of breaking large composite numbers into primes, attempts to probe the codes and see how good they really are. It is a story that still resonates, even as secure codes have become part of everyone’s life.
Harassment Alleged over Code Research
Computer scientists and mathematicians whose research touches on secret codes say they have been subjected by the National Security Agency to growing harassment and the threat of sanctions or even prosecution for publishing articles about their work.
The scientists, some working for universities, some for private industry and some for the federal government, charge that research in the United States faces a muted but growing threat from the National Security Agency, the government’s supreme authority on secret codes.
The agency’s tactics, they say, may eventually mean that a scientist working outside the government could suddenly be informed that his work had been officially classified as secret, or he could even be constrained from participating in international professional meetings.
Norman Boardman, spokesman for the N.S.A., told The New York Times that neither he nor any other employee of the agency could comment in any way on the accusations made by the scientists.
Complaining scientists, lawyers for scientific institutions and government experts agree that laws covering such potential constraints are ambiguous. But because of the vagueness of the current legal position, the security agency’s pressure must be taken seriously, they assert.
Prosecution Reported Threatened
Scientists said in interviews that threats leveled against them by employees of the agency included the possibility of official action to get research grants canceled or even criminal prosecution for violation of security laws.
A source within the National Science Foundation, a federal agency that underwrites many research programs by private organizations and individuals, said that his organization was being subjected to increasing “systematic, bureaucratic sniping” from the security agency with a view to bringing certain kinds of research under the agency’s control.
The informant, who asked that his identity not be disclosed, said there was a danger of intimidating a large segment of the private American scientific community if the government pressure was not resisted.
Such allegations have been simmering in the scientific press for about six months, but the controversy was brought to a boil by a symposium on information theory held last week at Cornell University at Ithaca, N.Y.
Among the scientists who presented papers at the meeting was a group from Stanford University headed by Dr. Martin E. Hellman and an associate, Whitfield Diffie.
The Stanford group had been concentrating its studies on a kind of mathematical problem known as the Nondeterministic Polynomial Complete Problem (N-P Complete).
A Computer “Lock”
The N-P Complete problem’s special interest to computer scientists is that, at this point, no computer can be programmed to solve it in any practical amount of time. Dr. Hellman and his associates have therefore proposed it as a device for “locking” data in the memory banks of computers against unauthorized use or theft.
“The right to privacy of the American citizen is what this is all about,” he said.
However, the N-P Complete problem could also be used as a system for devising secret communications codes that, Dr. Hellman contends, would also be virtually impossible to break, even by giant intelligence organizations, including the National Security Agency.
The implication, Dr. Hellman and others said, was that any government, private organization or individual could devise and use a code that would be immune to eavesdropping, whether from the Central Intelligence Agency, the Soviet Union’s counterpart, the K.G.B., or any other group.
These developments, say the experts involved, created anxiety within the United States intelligence community. In an interview, a senior government intelligence official described the issues raised by the Hellman team as “extremely serious” to the national security interests of the country.
The Stanford work and related projects by the Massachusetts Institute of Technology and others, he said, could enable foreign powers to develop virtually impenetrable command-and-control military communications systems.
Most computer scientists and mathematicians in the United States are members of the Institute of Electrical and Electronics Engineers, which publishes their papers and distributes them abroad, including to the Soviet Union.
Letter Regarded as Threat
Several weeks before the Cornell conference, the institute received a letter from one of its members, Joseph A. Meyer, who is listed, well-placed scientists say, in the National Security Agency directory as an employee.
The letter warned the institute that publication and distribution of scientific papers by the Stanford team and other groups planning to participate in the Cornell conference could result in prosecution under the 1954 Munitions Control Act, known in its current revision as the Arms Export Control Act.
Lawyers of the institute studied Mr. Meyer’s letter and the law itself, and finally concluded that they were within the law in publishing the Stanford research. A similar view was taken by lawyers representing Stanford University and officials interviewed by The New York Times in the Department of Commerce, the agency responsible for issuing licenses for exported technology.
Denial by Security Agency
Mr. Boardman, the security agency spokesman, denied that the N.S.A. had directed any employee to bring pressure on Mr. Hellman or the others. But an informant in the National Science Foundation said the letter from Mr. Meyer to the engineering institute was merely one of a number of similarly threatening letters that had been sent to scientists and their organizations by known employees of the security agency.
Efforts by The New York Times to reach Mr. Meyer at his home in Bethesda, Md., or at his N.S.A. office at Fort Meade, Md., were not successful. Telephone calls were referred to the N.S.A. spokesman, who declined comment.
October 19, 1977
Researchers to Permit Pre-Publication Review by U.S.
A group of mathematicians and computer scientists has tentatively decided to try to dispel a concern of the National Security Agency by voluntarily submitting research papers to the agency for review before they are published in the scientific literature.
The National Security Agency, which has the responsibility for collecting intelligence information, is concerned because, it says, cryptographic research in universities has become so advanced in recent years that it is viewed as a security hazard. Until a few years ago the government had a virtual monopoly on such research.
Cryptography, the art of writing or deciphering messages in code, has increasingly become a focus of academic interest, in great measure because of the need to protect industrial secrets. There is said to be apprehension within the National Security Agency over whether an academic researcher might write a paper on methods of analytically attacking and breaking a code system similar to one used by the government.
The decision to attempt to cooperate with the agency without violating the tradition of academic freedom was made at a recent meeting of the Public Cryptography Study Group. Details of the precise form the cooperation will take, beyond allowing agency review, have not been formulated.
Concern about Possible Controls
The group was set up last year by the American Council on Education, which is made up of university administrators. The study group itself consists of nine members representing professional societies in computer sciences and mathematics. It does not represent all cryptographers, and some remain acutely concerned that the cooperation may become the first phase of federal control over cryptography in American universities.
Daniel Schwartz, general counsel of the agency, said yesterday that nothing of the sort was contemplated. Asked what the government would do if the voluntary program did not work, Mr. Schwartz said the agency “would consider as one option seeking legislation if the problem became serious enough and there was no other way to resolve it.”
But he emphasized the agency considered such an action a last resort and added: “We have no interest in going through an enormous fight in the Congress on this particular issue.” He said the agency would probably not object to most papers and if it did, the problem might be resolved by the deletion of an explanatory footnote.
Conflict concerning Financing
Dr. Ronald Rivest, a computer scientist at the Massachusetts Institute of Technology, said he thought the study group’s recommendations might be workable. “But I think it is important not to forget that the need for any sort of recommendations has not been made to anybody’s satisfaction outside of the National Security Agency,” Dr. Rivest added.
He was asked how he would feel about the financing of his work by the agency. Agency financing of academic research has been a disputed issue in recent years.
“I feel it is institutionally inappropriate,” he said. “There would be a conflict of principles within the National Security Agency. On the one hand it would be concerned with maintaining national security, and on the other hand it would be concerned with maintaining the principle of open and academic research.”
Dr. Martin Hellman, professor of electrical engineering at Stanford University, said he was also ready to try to cooperate. “If it doesn’t work,” he said, “we can back off.” Dr. Hellman said he thought the agency had become easier to work with in recent years.
The relationship between academic cryptographers and the agency became a matter of some controversy last August when Leonard Adleman, a computer scientist at M.I.T. and at the University of Southern California, learned that the National Science Foundation had passed his research proposal along to the National Security Agency, which then approached him about the possibility of providing some funds.
Dr. Adleman said he wanted no money from the agency. Since then he has been allowed to re-apply for money from the National Science Foundation.
November 1, 1980
Tighter Security Rules for Advances in Cryptology
The Japanese Navy was defeated in World War II largely because American code-breakers had deciphered Japanese intentions before the pivotal Battle of Midway. Cryptography played a key role several other times in that war.
Today, however, it is confronted with a two-edged crisis, and there is some doubt that it will ever play so significant a role again. Some specialists believe that thanks to computers, cryptography has become so impenetrable that the days when great powers could “read each other’s mail” may be nearing an end. At the same time, as interlocking computer networks proliferate, secure coding systems are becoming essential to safeguard the national economy, individual businesses and personal privacy.
These developments have led to a confrontation between the National Security Agency, responsible for deciphering the communications of other nations and safeguarding those of the United States, and academic specialists seeking to meet the needs of the private sector. Although a voluntary compromise has been reached, it is unclear how wide the compliance will be.
Some academicians contend that increased federal control of their work will have a “chilling effect.” Officials of the security agency fear that public disclosure of coding systems that are increasingly difficult to break will give away its secrets or enable foreign powers to frustrate American code-breaking efforts.
The Background
In 1976, Martin E. Hellman of Stanford University and his colleagues, Whitfield Diffie and Ralph Merkle, conceived a “public key” approach to cryptography, in which one key that can be made public is used to encode the information and a second key, kept secret, is needed to decipher it.
At the Massachusetts Institute of Technology, Ronald L. Rivest, Adi Shamir and Leonard Adleman then proposed a system employing the “public key” approach. Known as the R.S.A. system, it is widely regarded as unbreakable, given present computer capabilities.
In this system the message can be typed into a computer that converts it into numbers and alters them, using a special mathematical key. The recipient must use another key to retrieve the original numbers and text. To reverse the encoding process without the deciphering key, it is said, would require computer processing lasting thousands or even billions of years.
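The mechanics of that description can be sketched in a few lines of Python. This is a toy illustration of the R.S.A. idea, not a usable implementation: real keys are built from primes hundreds of digits long, and real messages are padded before encryption.

    from math import gcd

    p, q = 10007, 10009                 # the secret primes (toy-sized)
    n = p * q                           # the public modulus anyone may see
    e = 65537                           # the public encryption exponent
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)                 # the deciphering key, computable only with p and q

    message = 1234567                   # a message already converted to a number smaller than n
    ciphertext = pow(message, e, n)     # anyone can encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)   # only the holder of d can reverse the process
    assert recovered == message

Without the primes, an eavesdropper who wants the deciphering key must factor n, which is exactly the hard direction described above.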
When officials of the National Security Agency learned of these developments, they feared that the academic community was arming foreign cryptographers with ways to foil its deciphering efforts. They sought to take over from the National Science Foundation the awarding of some research grants in this area, including one sought by Dr. Adleman. The agency also asked the Federal Patent Office to impose secrecy classification on some encoding or voice-scrambling devices intended for commercial use, including one developed by Dr. George I. Davida of the University of Wisconsin.
Last year, in response to a suggestion by Vice Adm. Bobby R. Inman, then the director of the security agency, the American Council on Education formed a Public Cryptography Study Group to explore the situation. Dr. Hellman and Dr. Davida were members of the study group.
For Control
Two years ago, as reported by the study group in February, Admiral Inman “publicly indicated his deep concern that some information contained in published articles and monographs on cryptography endangered the mission of N.S.A. and thus the national security.” Admiral Inman, who was succeeded by Lieut. Gen. Lincoln Faurer of the Air Force, favored some form of legislated control.
It was argued that foreign powers were aided by the publication of such material as the detailed accounts of public-key cryptography that appeared in Transactions of the Institute of Electrical and Electronic Engineers and then in Scientific American.
Although some scientists always voluntarily offer their work for review, the security agency cannot require that reports be submitted to it for approval, unless the agency has financed the work. The National Science Foundation refers sensitive information developed under its grants to the security agency or to the intelligence officials of one of the armed forces.
One suggested way to achieve the control sought by the security agency would be to make it a crime to reveal sensitive cryptologic information as defined in a statute. Another approach would be to require prepublication review. Compensation would be paid to those who suffered financial loss when their inventions were made secret.
Against Control
The nation is becoming increasingly vulnerable to clandestine penetration of its computer networks. Information of commercial value can be extracted without leaving any evidence of the theft. Money transactions can be manipulated in subtle forms of embezzlement. Personal privacy can be invaded.
According to Michael Dertouzos, director of M.I.T.’s Laboratory for Computer Science, hostile agents could cause chaos by putting false information in the computers that control the nation’s monetary system. Such hazards could be avoided by the use of cryptographic methods to protect legitimate information and authenticate additions.
It has been argued that development and introduction of such methods in the private sector was more important to the nation than concealing them from foreign cryptographers. Dr. Adleman even suspects that such advanced countries as the Soviet Union are already using codes that cannot be broken by any foreseeable methods.
Dr. Davida, however, is less certain of this. Methods like the public key system are not intrinsically undecipherable; there is always the possibility that someone will find a shortcut to their solution, he says. He doubts that cryptography will become so undecipherable that the role of the security agency will become obsolete.
Dissent on Papers
As a member of the study group, Dr. Davida was the sole dissenter from its recommendation for voluntary submission of cryptography papers to the security agency. Computer nets, he wrote, constitute “electronic windows into the most intimate details of people’s lives.” Even more disturbing, he added, “is that it is usually impossible to know who is looking in.” Encryption can serve as a curtain.
In a reference to the American Telephone and Telegraph Company, as well as other communications companies, he said the need for such curtains “is crystal clear in view of the remarkable insensitivity of the common carriers to the public’s concern about privacy.”
Dr. Dertouzos opposes any control over academic research. “A university, to be effective, has to be free,” he said in a recent interview. “We are responsible people.” Like many of those working in this field, he sends his papers to the National Security Agency. He says, however, that this is “for information only,” not for clearance.
Dr. Hellman said that if he was persuaded that publication of a paper would hinder the operations of the security agency more than it would aid the private sector he would be willing to withhold it.
The Outlook
The study group, on which the security agency was represented by its general counsel, Daniel C. Schwartz, has agreed to a compromise, a voluntary arrangement for a trial period. Under the procedure, which so far has won only limited acceptance in the academic community, the security agency would define the nature of papers it wished to review and “invite” their submission. If its recommendations were not accepted by the author, the issue would be referred to a five-member advisory committee.
Two of the advisory panel’s members would be from the security agency and three nominated by the president of the National Academy of Sciences. The procedure is just getting under way. Formation of the committee will be time-consuming because of the need to obtain security clearances for those members not representing the security agency.
Each researcher or academic institution will be faced with a decision. If this procedure wins wide compliance it may serve the security agency’s needs. On the other hand, it may prove ineffective if even a few sensitive papers are not submitted for screening.
June 1, 1981
A New Approach to Protecting Secrets Is Discovered
Mathematicians and cryptologists have discovered a way for people to prove that they possess secret information—from their credit card numbers to military access codes—without giving any of the information away.
This keenly paradoxical discovery has upset the traditional understanding of mathematical proof. Beyond that, however, it may also hold the power to transform the many aspects of modern life where processes of identification are subject to abuse, from everyday financial transactions to encounters between enemy aircraft.
Blending pure logic with computer technology, the researchers are developing an area of mathematics called zero-knowledge proof. Where a conventional proof conveys information, a zero-knowledge proof is meant to convey only the assurance that the information is in hand. The goal is to convince a second party without providing any of the knowledge that would allow him, or an eavesdropper, to convince a third party.
“You want to be able to prove some fact and not disclose why the fact is true,” said Silvio Micali of the Massachusetts Institute of Technology, one of the originators of the idea. “You want to convince without giving more knowledge than is strictly necessary.” Ideally, that is no knowledge at all.
Although zero-knowledge proof began as an abstraction, computer scientists quickly realized its applicability to many everyday uses of secrecy. The issue arises whenever someone tears up credit-card carbons, looks over his shoulder while signing onto a computer or worries about the photocopying of a passport left with a hotel concierge.
“Everybody has secret information that he needs to show to somebody to identify himself,” said Adi Shamir of Israel’s Weizmann Institute of Science. With zero-knowledge proofs, “instead of giving away the secret, he proves he knows the secret,” he said.
Dr. Shamir and two co-workers, Amos Fiat and Uriel Feige, have designed a way to use zero-knowledge proof cheaply and quickly in electronic chips that could be implanted, for example, in credit cards. Credit cards using chips, “smart cards,” are already in development, but a smart card using zero-knowledge proof could identify the cardholder to a merchant without giving the merchant the knowledge—the card number—that would let him make illicit purchases or forge a new card.
“I can go to a Mafia-owned store a million successive times and they still will not be able to misrepresent themselves as me,” Dr. Shamir said. Because this invention also affects military uses of identification procedures, the United States recently imposed a secrecy order on it; the order was quickly rescinded amid an outcry from American mathematicians. The fundamental mathematical ideas of zero-knowledge proof that underlie the invention are firmly in the public domain, and many researchers are pursuing other applications.
The essence of zero-knowledge proof lies in an interactive exchange of information between the “prover” and the “verifier.” That is a break with the traditional practice of simply writing a proof down, once and for all—or simply revealing one’s password.
“What I find most exciting is that the whole notion of what constitutes a mathematical proof is broadening,” said Manuel Blum of the University of California at Berkeley, who has played a central role in making such proofs workable. “This interactive exchange has never existed before. In Euclid the proofs are written down—from Euclid to us—just one way.”
The other new element is a role for chance. In effect, the verifier asks a series of questions, choosing the questions by some electronic equivalent of flipping a coin.
Suppose, for example, the prover has a bafflingly complex maze the size of New York City. She points to two locations—Times Square and Shea Stadium—and asserts that she knows paths through the maze connecting them. How can she convince a verifier without actually showing him how to get from one point to the other?
Process of Interrogation
She picks a third point on a path—say, the Brooklyn Promenade—and offers to reveal either the path from Times Square to the promenade or the path from the promenade to the stadium. The verifier flips a coin and chooses one half-path or the other. If she is lying, he already has a 50 percent chance of catching her.
Now she picks yet another point—Rikers Island—and they repeat the process of interrogation. The verifier checks either the path from Times Square to Rikers or from Rikers to the stadium. Again, if the prover is lying, she has a 50 percent chance of being caught.
“Eventually you are convinced, because of my willingness to show you either of two halves,” Dr. Shamir said. “You chose which half you wanted to see.”
The verifier can repeat the process as long as he likes. After 300 inquiries, the chance that she is lying has fallen to one part in 2 to the 300th power, more than the number of atoms in the universe.
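The arithmetic behind that claim is simple to check. A cheating prover survives each coin flip with probability one half, so the chance of surviving every round shrinks geometrically; the figures below are exact (the comparison with atoms assumes the usual estimate of about 10 to the 80th atoms in the observable universe):

    from fractions import Fraction

    for rounds in (10, 40, 300):
        chance = Fraction(1, 2) ** rounds    # a liar slips past each coin flip half the time
        print(rounds, "rounds: 1 chance in", chance.denominator)

    # After 300 rounds the denominator is 2**300, roughly 2 x 10**90,
    # comfortably more than the estimated 10**80 atoms in the observable universe.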
The maze example is not quite a zero-knowledge proof, however. Even though the verifier never actually sees a full path, the many fragments of paths can eventually enable him to piece together an understanding of the maze.
A more precise example involves a collection of lines joined at nodes, which mathematicians call a graph. A well-known problem is to ask whether it is possible to paint the nodes of a certain graph, using just three different colors, so that no line has the same color at both ends.
To construct a large graph that can be colored this way is easy. But the reverse problem, to find a way of coloring a given graph, has been shown to be very hard—a member of a class of problems that, as far as anyone knows, cannot be solved in a reasonable amount of time. The difficulty of the problem rises rapidly as the graph becomes bigger.
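The lopsidedness is easy to demonstrate. Checking a proposed coloring takes one quick pass over the lines, while finding a coloring by brute force may have to examine an assignment for every combination of colors; the toy Python sketch below (a five-node ring, purely for illustration) shows both directions:

    from itertools import product

    def is_valid_coloring(edges, coloring):
        # The easy direction: every line must join two differently colored nodes.
        return all(coloring[a] != coloring[b] for a, b in edges)

    def find_coloring(nodes, edges, colors=(0, 1, 2)):
        # The hard direction: brute force may try up to 3**len(nodes) assignments.
        for assignment in product(colors, repeat=len(nodes)):
            coloring = dict(zip(nodes, assignment))
            if is_valid_coloring(edges, coloring):
                return coloring
        return None

    nodes = ["a", "b", "c", "d", "e"]                                    # five nodes joined in a ring
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")]
    print(find_coloring(nodes, edges))                                   # {'a': 0, 'b': 1, 'c': 0, 'd': 1, 'e': 2}

For five nodes the search is instant; for a graph with a few hundred nodes the same brute force becomes hopeless, which is the lopsidedness the zero-knowledge schemes exploit.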
Practical Applications
Dr. Micali and other experts in the field, who include Shafi Goldwasser, Charles Rackoff, Oded Goldreich and Avi Wigderson, have shown that graph colorings—and all other problems in the same hard class—can be proved in ways that convey zero knowledge of the proof.
Similar problems of pure mathematics are used electronically in the practical applications. Instead of a credit-card number, a user may have a computer chip containing, in effect, a unique, complicated graph and its coloring. A verifying computer knows the graph associated with each user, but does not know how to color it.
For practical applications, the mathematics of graph coloring is too slow. But using techniques of number theory, researchers have come up with zero-knowledge proofs that adapt well to electronic circuitry. Dr. Shamir’s device carries out the whole back-and-forth exchange in less than one second.
“It’s about 100 times faster,” he said. “If you had to wait 100 seconds for a smart card to prove its identity, no one would use such a scheme.”
Most important to many applications is that the technology of zero-knowledge proof so effectively separates the ability to generate a proof from the ability to verify it.
Protecting Aircraft Signals
That addresses a security problem that arises in the most sensitive areas. For example, “friend or foe” recognition signals used by military aircraft can, in theory, be intercepted electronically or analyzed from captured aircraft. A system based on zero-knowledge proof would be impervious to both threats, mathematicians suggest.
The same applies to emergency destruct signals sent in encoded form from ground controllers to missiles. By separating the ability to generate an identification from the ability to verify the identification, a system would remain secure even when missiles with their electronics were captured and examined.
“There are going to be thousands of those boxes in missiles to verify those instructions, and it’s going to be very difficult to keep the secrecy of the verification device,” said one researcher. “It’s going to be much easier to keep the secrecy of the instruction-generation device that’s going to be stored in the basement of the White House.”
As the technology advances, however, the less apocalyptic applications will come first. Some mathematicians speculate about automated procedures for signing contracts or communicating orders to stockbrokers.
Signing on to Computers
It may even be possible for users to dispense with the electronics. Dr. Blum is designing a procedure for signing onto computer systems that would remove the possibility of intercepting a password. He envisions users being presented with a string of about 10 letters, and typing a response of the same length—but a different response every time.
In effect, the sequence of letters is like a sequence of 10 questions to the user, and the “password” is actually a procedure for changing some letters to others. For the system to work, the procedure has to be so easy that no calculation will be involved.
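The article does not spell out Dr. Blum’s actual procedure, so the Python sketch below is only a guess at the general shape: the secret is modeled, hypothetically, as a fixed substitution of letters, the computer issues a fresh random challenge, and the user answers by applying the secret rule. (A substitution this simple would in fact leak after a handful of observed logins; it is meant only to show why a recorded answer is useless the next time.)

    import random
    import string

    # Hypothetical secret "procedure": a fixed letter-for-letter substitution.
    SECRET = dict(zip(string.ascii_lowercase, "qwertyuiopasdfghjklzxcvbnm"))

    def challenge():
        # The computer poses a fresh ten-letter question at every sign-on.
        return "".join(random.choice(string.ascii_lowercase) for _ in range(10))

    def respond(text, secret=SECRET):
        # The user applies the secret rule to each letter of the challenge.
        return "".join(secret[c] for c in text)

    c = challenge()
    print(c, "->", respond(c))    # yesterday's recorded answer does not fit today's challenge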
“But somebody who’s watching you won’t be helped a bit,” Dr. Blum said, “because when he tries to log on as you, he’ll be asked different questions.”
February 17, 1987
Brief U.S. Suppression of Proof Stirs Anger
No one at Israel’s Weizmann Institute of Science knew more about secrets than Adi Shamir.
Nevertheless, Dr. Shamir, one of the world’s best-known cryptologists, was caught off guard last month when the United States imposed a blanket secrecy order on a breakthrough he had made in the rapidly expanding mathematical field of zero-knowledge proofs. Declaring the Israeli work “detrimental to the national security” and warning of penalties that included imprisonment, the government ordered Dr. Shamir and his co-workers to recover and destroy all related material.
Within days, as the order began to cause a public furor among American mathematicians, the government agreed to reverse its action on the narrow ground that secrecy orders may not be imposed on the work of foreigners.
The incident has alarmed some mathematicians, who say they consider it an unusual and potentially harmful effort by the government to block the dissemination of a theoretical advance in science. In its aftermath, they see it as a threat to an uneasy working relationship that has developed over the last six years between mathematicians and the National Security Agency.
“I think it’s a farce,” Zvi Galil, a Columbia University mathematician and computer scientist, said as the incident began to develop. “It was already a concession from the scientists to agree to a review process giving some power to these guys from N.S.A.”
Math and Cryptography
Officials of the Army, which requested the order, and the Patent and Trademark Office, which issued it Jan. 6, describe it as a simple mistake and say that, to their knowledge, the highly secretive National Security Agency had no part in the decision.
The agency would not comment publicly, but one official said it had been “blind-sided.” That was also the impression the agency sought to convey privately to many mathematicians.
The relationship between the nation’s security agencies and mathematicians goes back many decades, but it has never been so intimate as it is today. Modern cryptography has become inseparable from the abstract development of mathematics, and advances in many corners of mathematics can prove to have sensitive application to encoding and decoding information.
The N.S.A., believed to be the world’s largest employer of mathematicians, has the delicate and sometimes irreconcilable missions of improving its own coding techniques while not giving foreign countries the ability to keep their communications secure from American decryption. The agency—sometimes known as No Such Agency—thrives on secrecy, and the number of mathematicians who work for it far exceeds the number who say they do.
The Shamir incident began after the Weizmann Institute, Israel’s major civilian research center, applied last July for an American patent titled “Method, Apparatus and Article for Identification and Signature.” The application described “unforgeable ID cards” based on the breakthrough by Dr. Shamir and two colleagues in zero-knowledge proofs, which allow a person to prove that he possesses sensitive information without actually revealing any of the information.
“That Word Military”
The application cited “numerous commercial and military applications.”
As a result, the application was sent to the Army. “That word military automatically kicked it into the system,” said Patricia Ann James of the Army’s Legal Services Agency. After experts at Fort Belvoir in Virginia reviewed the matter, she put through the secrecy order on the ground that the research, though unclassified, belonged on the military critical technology list.
While the Army was studying the application, Dr. Shamir was describing the work at conferences in the United States and Europe, and written material was circulating among mathematicians. He was scheduled to present a full version in May at a major computer science conference to be held in New York, and he had submitted a paper to be published in connection with the conference.
After receiving the secrecy order and consulting with the Weizmann Institute’s lawyers, Dr. Shamir formally notified the conference organizers that the paper could not be published. All copies of the preliminary papers in the hands of scientists or libraries “must be promptly destroyed,” he told them.
The conference organizers considered halting publication. “Clearly I don’t want to jeopardize the conference, and I wish to abide by the law,” said Alfred Aho, a computer scientist who is program chairman for the conference. “But it seems to me that this is rather outrageous that the United States government is trying to put a clamp of secrecy on this work.”
Some mathematicians called the National Security Agency to register their objections.
To the patent office, the secrecy order may have seemed routine. Several hundred such orders are issued each year. But to mathematicians, it recalled a furor that broke out in 1977 when a secrecy order was imposed on an encryption device invented by George Davida, a professor at the University of Wisconsin. That order—also ultimately rescinded—helped lead to the current system of voluntary cooperation with the National Security Agency.
Under this system, many mathematicians submit copies of technical papers to the agency before publication, giving the agency an opportunity to request the suppression of sensitive material. Not all mathematicians abide by the arrangement, seeing it as a threat to the free exchange of ideas on which research depends. But many mathematicians believe that the government’s security concerns are significant and that the bargain is a fair one. Many universities and corporate research departments cooperate with the N.S.A., as does the National Science Foundation.
“Tricky and Delicate”
“It’s tricky and it’s delicate and so far it’s worked well,” said Dr. Ronald L. Rivest of the Massachusetts Institute of Technology, one of the architects of the arrangement, along with Adm. Bobby R. Inman, then the agency’s director.
Since 1980, the result has been a softening of tensions, which is one reason mathematicians were concerned about the new secrecy order. “It turns out that N.S.A., which several years ago were the bad guys, now are not the bad guys,” Dr. Galil said.
Even foreign mathematicians tend to comply. In fact, according to several of those involved, Dr. Shamir submitted his latest breakthrough to the agency and it made no objection.
Soon after mathematicians began protesting to the agency, and within hours of inquiries by a newspaper reporter, the patent office said it would reverse the secrecy order. Some of those involved suggest that the agency had intervened forcefully behind the scenes. The agency would not say.
February 17, 1987
A Most Ferocious Math Problem Tamed
By piecing together the output of hundreds of computers on three continents, a team of mathematicians succeeded yesterday in solving a monster computational problem that had defied all previous efforts. The achievement is likely to force cryptographers to reassess the future application of some codes used by governments and banks.
At 2:03 a.m., Pacific daylight time, the last sequence of numbers required for the solution popped up in a computer laboratory in Palo Alto, Calif., and news of the triumph was flashed to collaborators around the world. The team had succeeded for the first time in splitting a number 100 digits long into two large, prime factors.
The factors of a number are smaller numbers which, when multiplied by each other, yield the larger number. A prime number is one that is evenly divisible only by one or by itself. The prime factors of 15, for example, are 3 and 5.
Maximum Possible Difficulty
The two factors found for the 100-digit number, which was selected by an elaborate mathematical screening process to pose the maximum possible difficulty, are respectively 41 digits and 60 digits long.
The organizers of the project were Dr. Mark S. Manasse of the Digital Equipment Corporation’s Systems Research Center in Palo Alto and Dr. Arjen K. Lenstra of the University of Chicago. But a dozen users of some 400 computers in the United States, the Netherlands and Australia were recruited to join in the project, donating computing time from intervals when the computers were not needed for their regular work.
Several of the most secure cipher systems invented in the past decade are based on the fact that large numbers are extremely difficult to factor, even using the most powerful computers for long periods of time. The accomplishment of factoring of a 100-digit number “is likely to prompt cryptographers to reconsider their assumptions about cipher security,” Dr. Lenstra said in a telephone interview.
“We Have Upped the Ante”
His colleague, Dr. Manasse, added: “What this shows is that a cryptographer should avoid basing a cipher on any factorable number smaller than about 200 digits. The cipher system still works, but we have upped the ante.” Using larger numbers makes the work of cryptographers more cumbersome and time-consuming.
One cryptographic system based on the difficulty of factoring large numbers was invented in 1977 by Ronald L. Rivest, Adi Shamir and Leonard Adleman, all of the Massachusetts Institute of Technology. The three mathematicians patented their system and now market computer chips especially designed to apply it. In this and similar systems, digits replace each letter in a text message, and the entire sequence of digits is treated as a single large number. A mathematical operation is then performed on this number, and deciphering the result requires either that the receiver possess the key or that the code be broken by factoring the large number.
This kind of cipher system is regarded as too slow and cumbersome for such routine cipher messages as those used for secret government messages and the transfer of funds between banks. But according to Dr. Ron Graham, a mathematician at A.T.&T. Bell Laboratories, the “R.S.A. System” (which takes its initials from the names of its inventors) is used fairly extensively for the secure transmission of encryption and decryption keys from one organization to another.
Guarding Secret Keys
Encryption and decryption keys must be protected more securely than any other secret message, because these are the keys that allow either the intended recipient of a cipher message or a spy to decipher it. Since such a key is relatively short, the slowness of encrypting and decrypting it is a disadvantage more than offset by the method’s relative security.
The R.S.A. method can accommodate factorable numbers of virtually any length as cipher keys, so yesterday’s achievement does not compromise the method itself.
“Even if a user were to adopt a key 100 digits long,” Dr. Manasse said, “it would take a gigantic organization working with single-minded dedication to accomplish what our network has done. I doubt that any organization using this kind of cipher is seriously worried by what we have done.”
Potential of Supercomputer
Theoretically, a costly supercomputer such as the Cray, operating continuously, could have solved the problem in about one week, an expert said. But running such supercomputers costs many thousands of dollars an hour. By operating many simpler computers, when they were idle for a few minutes or hours, the problem was solved at virtually no cost.
Progress toward factoring a 100-digit number has been rapid this year, even though no new methods of factoring have been developed. Just three weeks ago, the same group led by Dr. Manasse and Dr. Lenstra established another record by factoring a number 96 digits long, and last spring a Netherlands mathematician, Herman te Riele, factored a 92-digit number. In the 1980s the record has been broken about once a year, Dr. Lenstra said.
Technique Is Not New
Dr. Andrew M. Odlyzko, a mathematician at A.T.&T. Bell Laboratories in Murray Hill, N.J., said that the basic technique used for the calculation was not new.
The algorithm, or computer method, the team applied, “which is called the ‘quadratic sieve,’ was invented some years ago by Dr. Carl Pomerance of the University of Georgia at Athens,” Dr. Odlyzko said. “In a way, it’s slightly disappointing that we have not come up with any real advance over that method in factoring large numbers. There are a lot of variations of the same general method, but they are all roughly equal in efficiency.”
The great advantage of the quadratic sieve is that it enables its user to break an enormously complex computational problem down into a large number of relatively simple computations. Each of these computations can be carried out independently and in any sequence.
The strategy, Dr. Lenstra explained, is to calculate a “matrix” of numbers, a square consisting of 50,000 rows and 50,000 columns, in each of which is a number that contributes some information about the solution of the problem. Ideally, a computer could be assigned to calculate each square in the matrix, but in this case, only about 400 computers were available.
“In any case, however,” Dr. Manasse said, “we approached the problem as a massively parallel computation, one in which many processors were working along parallel lines simultaneously. As each small task was completed, the computer user would send us the result by electronic mail.
It Was All There
“Last weekend, we realized that we had enough of these preliminary computations to put them together, using a technique called Gaussian elimination. That gave us the final result last night,” he said.
Gaussian elimination is a procedure analogous to the elimination of variables in an algebraic expression by canceling out like values.
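The overall strategy, many small independent computations knit together at the end by linear algebra, can be seen in a much smaller relative of the quadratic sieve. The Python sketch below is a Dixon-style toy, not the method the team actually ran: it hunts for values x whose squares, reduced modulo n, factor completely over a tiny set of primes, then uses Gaussian elimination over the two-element field to combine those relations into a congruence of squares, from which a greatest common divisor often exposes a factor.

    from math import gcd, isqrt

    def gf2_dependency(rows):
        # Gaussian elimination over GF(2): find rows whose exponent vectors
        # combine to all-even entries (their XOR is zero), or report None.
        basis = {}                                    # pivot bit -> (reduced row, combination mask)
        for i, row in enumerate(rows):
            combo = 1 << i
            while row:
                pivot = row.bit_length() - 1
                if pivot not in basis:
                    basis[pivot] = (row, combo)
                    break
                row, combo = row ^ basis[pivot][0], combo ^ basis[pivot][1]
            else:                                     # row reduced to zero: a dependency found
                return [j for j in range(len(rows)) if combo >> j & 1]
        return None

    def toy_factor(n, base=(2, 3, 5, 7, 11, 13), search=50000):
        relations = []                                # pairs (x, exponents of x*x mod n over the base)
        for x in range(isqrt(n) + 1, isqrt(n) + 1 + search):
            value, exps = x * x % n, []
            for p in base:
                e = 0
                while value and value % p == 0:
                    value //= p
                    e += 1
                exps.append(e)
            if value == 1:                            # x*x mod n factors entirely over the base
                relations.append((x, exps))
                if len(relations) > len(base) + 2:
                    break
        rows = [sum((e & 1) << i for i, e in enumerate(vec)) for _, vec in relations]
        dep = gf2_dependency(rows)
        if dep is None:
            return None
        x_prod, exp_sum = 1, [0] * len(base)
        for i in dep:
            x_prod = x_prod * relations[i][0] % n
            exp_sum = [a + b for a, b in zip(exp_sum, relations[i][1])]
        y = 1                                         # square root of the combined smooth product
        for p, e in zip(base, exp_sum):
            y = y * pow(p, e // 2, n) % n
        g = gcd((x_prod - y) % n, n)
        return g if 1 < g < n else None               # roughly half of all dependencies split n

    print(toy_factor(84923))                          # 84923 = 163 x 521; prints 163, 521 or None

Each relation can be found on a different machine with no coordination at all; only the final elimination step needs everything in one place, which is why hundreds of idle computers plus one last combining run were enough.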
Dr. Manasse said that the factoring program had attracted the interest of mathematicians and computer scientists in Canada, West Germany and other countries, and that many had volunteered to participate in a continuation of the program.
Next: 106 Digits
The factoring of a 100-digit number took less than one month, and the pace is likely to continue. “By the end of the winter I would expect that we will have factored a number of 106 digits,” Dr. Manasse said.
Asked whether mathematicians had sighted new factoring techniques that might speed the process, he replied: “I’m not anxious to see any radically new techniques emerge. Of course, if they’re out there, we’ll have to use them. But I would like to see the R.S.A. encryption system continue as a secure safeguard.”
All encryption systems are under the continuous scrutiny of the National Security Agency, the federal agency chiefly responsible for safeguarding American ciphers and cracking the ciphers of other nations. Experts from the agency attend meetings of mathematicians and computer scientists and follow academic developments closely.
“It’s a safe bet that the N.S.A. knows all about what we’ve done,” Dr. Manasse said, “although we’ve had no communication from them. In any case, this is not going to give them nightmares. It may just make the code makers a little more cautious. We’ve done something that once would have been regarded as practically impossible.”
THE QUESTION: 9,412,343,607,359,262,946,971,172,136,294,514,357,528,981,378,983,082,541,347,532,211,942,640,121,301,590,698,634,089,611,468,911,681
THE ANSWER: 86,759,222,313,428,390,812,218,077,095,850,708,048,977 × 108,488,104,853,637,470,612,961,399,842,972,948,409,834,611,525,790,577,216,753
(Source: Dr. Andrew M. Odlyzko, A.T.&T. Bell Laboratories, Murray Hill, N.J.)
October 12, 1988
Biggest Division a Giant Leap in Math
In a mathematical feat that seemed impossible a year ago, a group of several hundred researchers using about 1,000 computers has broken a 155-digit number down into three smaller numbers that cannot be further divided.
The number is about 50 digits longer than any that mathematicians have reported being able to break down in the same way, an unusually long leap in this area of mathematics.
The latest finding could be the first serious threat to systems used by banks and other organizations to encode secret data before transmission, cryptography experts said yesterday.
These systems are based on huge numbers that cannot be easily factored, or divided into numbers that cannot be divided further. For example, the prime factors of 10 are 2 and 5.
First Break in the System
This is the first time that mathematicians have factored a number of the size used in these coding systems, said Dr. Arjen Lenstra, director of the project, who is at Bellcore Inc. in Morristown, N.J., the research arm of the Bell operating companies.
To break the huge number into three smaller numbers, which are 7, 49 and 99 digits long, the mathematicians had to find a new method because the one used in recent years was not up to the job. If someone had asked him to break up a 155-digit number a year ago, Dr. Lenstra said, “I would have said it was impossible.”
Dr. Andrew Odlyzko, a mathematician at the American Telephone and Telegraph Bell Laboratories in Murray Hill, N.J., said: “This is a great achievement. From the standpoint of computational number theory, it represents a breakthrough.”
The number itself was famous among mathematicians as a factoring challenge. In October 1988, mathematicians reported the factoring of a 100-digit number. It is a rule of thumb in mathematics that for every 10-digit increase in the size of a number, the amount of computing needed to factor it increases tenfold. Until now, factoring advances had come in increments of 10 digits or less.
Secrets Are at Stake
But the practical importance of the result, experts said, is what it might mean to cryptography. In 1977, a group of three mathematicians devised a way of making secret codes that involves scrambling messages according to a mathematical formula based on factoring. Now, such codes are used in banking, for secure telephone lines and by the Defense Department.
In this system, a string of letters in the message is replaced by a number. That number is multiplied by itself many times, making a bigger number that helps mask the message. Then the big number is divided by a large number whose factors are secret. The remainder of that division—the amount left over—is the coded message. It can only be decoded by a person who knows the secret factors of the large number.
In making these codes, engineers have to strike a delicate balance when they select the numbers used to scramble messages. If they choose a number that is easy to factor, the code can be broken. If they make the number much larger, and much harder to factor, the calculations used to scramble a message take much longer.
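The slowdown is easy to observe directly: the scrambling step is a single modular exponentiation, and its cost grows quickly with the size of the secret modulus. The Python sketch below uses arbitrary random moduli purely to show the trend; absolute timings depend on the machine.

    import random
    import time

    for bits in (512, 1024, 2048, 4096):
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # a random odd modulus of the chosen size
        m = random.randrange(2, n)
        start = time.perf_counter()
        for _ in range(200):
            pow(m, 65537, n)                                   # one RSA-style scrambling operation
        print(bits, "bits:", round((time.perf_counter() - start) * 5, 2), "ms per operation")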
For most applications outside the realm of national security, cryptographers have settled on numbers that are about 150 digits long, said Dr. Gus Simmons, a senior fellow at Sandia National Laboratories in Albuquerque, N.M., who advises the Defense Department on how to make coding secure.
Broader Application Seen
Dr. Lenstra, who also led in the breaking of the previous 100-digit number, said: “For the first time, we have gotten into the realm of what is being used in cryptography. It means it is impossible to guarantee security.”
Although the number the group factored had a special mathematical structure, Dr. Lenstra and his colleagues say the factoring method can be modified so it would have broad application.
Others are more circumspect. Dr. Simmons said that although he agrees that the method is generally applicable, he is waiting to see whether it can break down other numbers quickly enough to be practical. The method, he said, “may become of concern to cryptographers, but that depends on how efficiently it can be implemented.”
Nonetheless, Dr. Simmons said, he would not feel comfortable advising the use of a 150-digit number to maintain security. “If national security were hidden behind a 150-digit number, we’re getting very close to a situation where it would be feasible to factor that,” he said. “Do I advise the government to use bigger numbers? You bet.”
The newly factored number was the largest number on a list mathematicians keep of the 10 Most Wanted Numbers, which are large numbers that are set up as a challenge to factoring experts. And it is so large that it would be inconceivable even to attempt factoring it without special mathematical tricks.
Answer in a Few Months
Dr. Mark Manasse of the Digital Equipment Corporation’s Systems Research Center in Palo Alto, Calif., calculates that if a computer could perform a billion divisions a second, it would take 10 to the 60th years, or 10 with 59 zeros after it, to factor the number simply by trying out every smaller number that might divide into it easily. But with a newly discovered factoring method and with a worldwide collaborative effort, the number was cracked in a few months.
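That estimate is a straightforward back-of-the-envelope calculation. A 155-digit number has a square root of roughly 10 to the 77th power, so that is about how many candidate divisors naive trial division would have to try:

    candidates = 10 ** 77                    # roughly the square root of a 155-digit number
    per_second = 10 ** 9                     # a billion trial divisions per second
    seconds_per_year = 60 * 60 * 24 * 365
    print(candidates / per_second / seconds_per_year, "years")   # on the order of 10**60 years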
The new factoring method was discovered last year by John Pollard of Reading, England, and Dr. Hendrik Lenstra Jr. of the University of California at Berkeley, the brother of Dr. Arjen Lenstra. The two mathematicians found a shortcut to factoring numbers of a particular form that happened to fit the form of many large numbers that were purposely derived so as to be difficult to factor.
Then Dr. Manasse and Dr. Arjen Lenstra recruited computer scientists and mathematicians from around the world to help in the factoring effort. Each person who agreed to help got programs sent electronically to their computers and a piece of the problem to work on.
Like a Jigsaw Puzzle
It was like solving a giant, and twisted, jigsaw puzzle, Dr. Manasse said. Each computer was set to work doing the mathematical equivalent of sorting through a box with about 50 million pieces “including all sorts of useless stuff that look like jigsaw pieces that are not,” Dr. Manasse said, adding: “Each person has to find the real pieces in the box. Some boxes don’t have any and some have just one or two.”
After about a month, the researchers got back the equivalent of about two and a half million pieces of the puzzle. To speed up the search and the final putting together of the pieces that would allow them to factor the number, the researchers used a powerful computer at the University of Florida that finished the job for them in three hours.
The current factoring landmark is the latest in a series of what to mathematicians have been breathtaking feats. In 1971, mathematicians scored a coup by factoring a 40-digit number. Ten years ago, a 50-digit number was thought to be all but impossible to factor. Then, with advances in research that led to unexpected shortcuts, 60-, 70- and 80-digit numbers fell. A year and a half ago, the 100-digit number was cracked.
June 20, 1990
Scientists Devise Math Tool to Break a Protective Code
Two Israeli scientists have developed a powerful mathematical technique that for the first time makes it possible under certain circumstances to break the code used by many businesses for encrypting highly sensitive information.
The code, a government standard known as the Data Encryption Standard, now appears to have a significant chink in its armor, even though experts say it is not immediately threatened by the new mathematical attack.
Information that is often encrypted includes designs for high-tech products, data about oil and other energy reserves and records of financial transactions. Government officials have become increasingly concerned that this material is vulnerable to manipulation or theft as more of it is transmitted between computers.
The encryption standard “is clearly showing its age, and over the next few years we’re going to have to figure out something else,” said Ronald Rivest, a computer scientist and cryptographic expert at the Massachusetts Institute of Technology.
The two Israeli scientists, Adi Shamir and Eli Biham, have not yet published their results, but Dr. Shamir recently sent electronic mail messages notifying several American colleagues of the breakthrough. The new approach relies on a systematic mapping of subtle variations in very large samples of statistics.
No System Switch Is Seen
Several corporate officials said that the limited breakthrough would not make the Data Encryption Standard obsolete immediately and that switching to other encryption systems would be risky because they had not been extensively verified.
“If I were going to buy cryptographic gear tomorrow I will still buy D.E.S.,” said Leslie Chalmers, chairman of the security and risk management division of the American Banking Association. “Using D.E.S. is still preferable to running bare.”
Ms. Chalmers said the banking industry chose to rely on government standards because such cryptographic approaches were the most thoroughly tested.
Dr. Shamir, who is a professor of applied mathematics at the Weizmann Institute in Israel and one of the world’s leading cryptographers, said in a telephone interview that he did not wish to comment on his work until its publication in a scientific journal later this year.
Although other experts are thus unable to review his work, a number of government and academic scientists said that because of Dr. Shamir’s standing in the field they were confident that he was accurately describing his achievement.
News by Computer Message
In a computer message describing his attack on the encryption standard, Dr. Shamir wrote that he was able to “finally break the full 16-round D.E.S.” He said his approach was now considerably faster than an earlier method in which a computer tests every possible key to unlock the code. Dr. Biham worked with Dr. Shamir as a student and is now at the Technion, the Israeli research university.
The encryption standard uses a system based on a secret number, or key, that is used to perform a series of mathematical scrambling operations on a message or on other computer data. When the code message is received, the same secret key is used to reverse the process and unscramble the data.
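The structure behind that description is a Feistel network, the round pattern on which D.E.S. is built. The Python sketch below is emphatically not D.E.S. (the real cipher’s 16 rounds use specific expansions, S-boxes and permutations, and the round function here is an arbitrary stand-in), but it shows the property the article describes: the same key material, applied in reverse round order, undoes the scrambling exactly.

    import hashlib

    def round_function(half, subkey):
        # An arbitrary stand-in mixing step; D.E.S. uses expansion, S-boxes and permutations.
        return int.from_bytes(hashlib.sha256(subkey + half.to_bytes(4, "big")).digest()[:4], "big")

    def feistel(block, subkeys):
        # Split a 64-bit block into halves; each round swaps them and mixes one into the other.
        left, right = block >> 32, block & 0xFFFFFFFF
        for k in subkeys:
            left, right = right, left ^ round_function(right, k)
        return (right << 32) | left            # final swap, so deciphering mirrors enciphering

    subkeys = [bytes([i]) * 8 for i in range(16)]       # sixteen toy round keys derived from one secret
    plaintext = 0x0123456789ABCDEF
    ciphertext = feistel(plaintext, subkeys)
    assert feistel(ciphertext, list(reversed(subkeys))) == plaintext   # same keys, reverse order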
The decoding technique is referred to as a “chosen clear-text attack” in which the cryptographer is assumed to possess the coded text of any message of his choice. Such an approach is realistic only if the code breaker can trick his opponent into encrypting an already known message.
When the known message has been unraveled from its encrypted form by the new method, the original secret key can be derived, enabling the code breaker to solve all messages sent with that key.
Cryptographic research takes place in two separate lines of work. Academic researchers think of themselves as being on the “light side” and insist on openly sharing results. The “dark side” group of cryptographic researchers works behind closed doors at intelligence and military agencies like the National Security Agency. Little is known about their activities or scientific progress.
Original Work at I.B.M.
The Data Encryption Standard is based on research that was originally done by experts at the International Business Machines Corporation’s Thomas J. Watson Research Laboratory in the 1970s as part of a project code-named “Lucifer”; it was adopted as a national standard in 1977.
The standard has been a source of controversy because the final version, developed by scientists at the National Institute of Standards and Technology in cooperation with the National Security Agency, shortened the mathematical key used to code and decode information.
Some academic scientists have suggested that the security agency shortened the key so that organizations like itself, and other American intelligence agencies with sufficient computing power, could break the code if they really needed to by brute computer force, trying all the possible combinations. That would put code breaking beyond the reach of anyone without access to multimillion-dollar supercomputers. An exhaustive search means trying as many as 2 to the 56th power keys, roughly 72 quadrillion.
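The brute-force figure is easy to check with a back-of-the-envelope calculation. The sketch below assumes, purely for illustration, a high-end machine testing one billion keys a second.

```python
# The size of the D.E.S. key space, and what an exhaustive search would
# mean for an assumed high-end machine testing one billion keys a second.
keys = 2 ** 56
print(f"{keys:,} possible keys")                 # 72,057,594,037,927,936

rate = 1_000_000_000                             # assumed keys per second
years = keys / rate / (3600 * 24 * 365)
print(f"About {years:.1f} years at {rate:,} keys per second")
```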
The government recertified the D.E.S. in 1988 for commercial and nonclassified government use.
An I.B.M. cryptographer said that based on his familiarity with Dr. Shamir’s earlier work it was unlikely that the standard was in any immediate danger.
“It depends on the numbers,” said Donald Coppersmith, a scientist who participated in the original I.B.M. research. “It still seems to be perfectly adequate for the purposes it was designed for.”
Martin Hellman, a cryptographer at Stanford University, said: “There have been questions about the security level of D.E.S. since it was advanced; now we have the final verification that the system can be broken. I would encourage people to buy equipment that can be upgraded in the future.”
October 3, 1991
Tied Up in Knots, Cryptographers Test Their Limits
Edgar Allan Poe thought there was no such thing as an unbreakable code. “It may be roundly asserted that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve,” he wrote.
In this spirit, cryptographers have spent nearly 15 years picking at a code that the National Security Agency, the nation’s code-making and code-breaking agency, helped design for business and government to use in banking and in transmitting sensitive, but unclassified, data over computer lines. Yet, to the surprise of many mathematicians and computer scientists, the code, known as the Data Encryption Standard, or D.E.S., has stubbornly resisted their efforts to penetrate it. Only recently has anyone found a dent in the code and that dent by no means breaks it. In fact, the new system is only a slight improvement over the laborious method of trying every possible key to the code until the correct one is chanced upon.
The encryption standard is “a Gordian knot,” said David Kahn, author of The Codebreakers and Seizing the Enigma.
Poking away at the code has become an unofficial hobby for many civilian cryptographers, whose frustrations have led them to ask, What is the N.S.A.’s secret in building codes? The N.S.A., whose initials, some say, stand for Never Say Anything, is not commenting, of course.
“N.S.A. sure knows a hell of a lot more than we do,” said Dr. Cipher Deavours, a cryptographer at Kean College in New Jersey, the son of a cryptographer and the editor of Cryptologia, a journal for the field. “Whoever had his finger in designing the D.E.S. sure knew what he was doing,” Dr. Deavours said.
Dr. Thomas A. Berson, a communications security expert in Palo Alto, Calif., agreed. “The fact that the D.E.S. has withstood attack from all quarters for so long shows it has amazing strength,” he said. In fact, the slightest change in the code makes it easier to crack. “That demonstrates to me that there is theory about making codes that is not known to the outside world,” he said.
To mathematicians and computer scientists, the encryption standard is a nagging challenge, a publicly available code that seems to thumb its nose at their attempts to crack it. These researchers believe that by exposing a weakness, they could protect those who rely on the code, warning them that others who are not so friendly might have discovered for themselves the Achilles heel and may already be reading their mail.
The D.E.S. takes a message, translated into the computer language of zeros and ones, and scrambles it by repeatedly applying mathematical operations. Like a lock with many possible combinations, the code has a family of many possible operations that can be used to scramble a message. Once one of these keys has been chosen, anyone who has it can encode or decode a message. To break the D.E.S., a spy would have to figure out the user’s key.
The difficulty is that there are 2 to the 56th possible keys, which is about 72 thousand million million. A computer built especially to break the code could check all the keys in two hours, said Dr. Martin Hellman of Stanford University. But the machine would cost $10 million. For most people who wanted to decrypt unclassified data, an exhaustive search is likely to be too much trouble and expense. So mathematicians and computer scientists have been looking for shortcuts.
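Dr. Hellman’s two-hour estimate implies an extraordinary search rate. The short sketch below, with assumed round numbers, makes that rate explicit and contrasts it with a single ordinary computer testing a million keys a second.

```python
# Implied throughput of a purpose-built machine that exhausts the D.E.S.
# key space in two hours, per Dr. Hellman's estimate, compared with an
# assumed ordinary computer testing one million keys a second.
keys = 2 ** 56

machine_rate = keys / (2 * 3600)
print(f"Purpose-built machine: about {machine_rate:.1e} keys per second")

ordinary_rate = 1_000_000
years = keys / ordinary_rate / (3600 * 24 * 365)
print(f"Ordinary computer: roughly {years:,.0f} years for the same search")
```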
The recent attack on the D.E.S. was announced by Dr. Adi Shamir of the Weizmann Institute in Israel, and his former student, Dr. Eli Biham. In a tantalizing message that they sent to friends through electronic mail, they claimed to have found a weakness in the code that makes breaking it slightly easier than trying every combination. The attack is still not practical because it requires nearly as many calculations as an exhaustive search. And it requires an encoded copy of a known message, meaning that a spy would have to somehow plant a message to be encrypted.
Dr. Shamir, in a telephone interview, said he will not reveal the details of his work until it is published in a few months. He said, however, that it is a refinement of a previously published method in which he was able to attack a simplified version of the D.E.S. by comparing unencrypted blocks of text with encrypted blocks and looking for patterns.
Dr. Shamir’s reluctance led Whitfield Diffie, a cryptography expert at Sun Microsystems in Mountain View, Calif., to comment, “Adi is a master of the suspense campaign.”
Not only did Dr. Shamir not break the D.E.S., but he found, in probing the code, that it is actually far stronger than anyone had imagined. In fact, Dr. Shamir said, it seems to be the strongest possible code of its kind. Dr. Shamir’s method devastates similar codes. He and Dr. Biham reported in a previous paper that they can use their methods to crack a competing family of codes that were proposed recently by the Japanese and that were on the verge of being accepted internationally.
Without a Clue
What this means, said Dr. Deavours, is that “we do not have the slightest idea of what distinguishes a good code from a bad one.” The Japanese codes, he said, “were claimed to be as secure as the D.E.S. and more appropriate for regular computers,” because they did not require a special purpose chip for encoding and decoding. “Imagine the standards organization getting ready to accept this stuff and the next day cracking them turns out to be a footnote in Shamir’s paper,” he said.
Although reluctant to generalize, Dr. Berson agreed that “there’s no theory for developing a cryptosystem.” Except, apparently, within the N.S.A.
When they first began working on the D.E.S., mathematicians and computer scientists predicted an easy victory. Some argued that the N.S.A. would never have promulgated a code, to be widely used and sold, that it could not break. If an enemy used the code, the N.S.A. would want to read the enemy’s messages. So some civilian cryptographers argued that the code must have a trap door—a secret way of breaking it. Once they found the trap door, they would be in.
Now many doubt that there is such a secret entry. “Not only did no one demonstrate a trap door, but no one has been able to show how you could even make one,” Dr. Berson said.
Dr. Shamir agreed. “I would say that, contrary to what some people believe, there is no evidence of tampering with the D.E.S. so that its basic design was weakened.”
Asked whether people who use the D.E.S. should be wary, Mr. Diffie said that a lot depends on how and why they use it. In banking, for example, people typically change the key every day. But a message that you hope will be secure for years, even if it falls into enemy hands, might be very vulnerable.
Mr. Diffie said that code breakers have been known to work for decades trying to decipher important messages. For example, in the 1930s, the British intelligence service got hold of an encrypted Soviet message thought to include the identities of spies. “They continued trying to read it for 30 years,” Mr. Diffie said.
“If you are designing cryptosystems, you’ve got to think about long-term applications,” he said. “You’ve got to try to figure out how to build something that is secure against technology in the next century that you cannot even imagine.”
October 13, 1991
A Public Battle over Secret Codes
An issue long relegated to testy grumbling between software engineers and intelligence agents has suddenly grown into a public dispute between the Bush administration and business executives.
In a digital age that finds more and more information protected by elaborate coding techniques, both sides are asking: Who should hold the keys to the codes?
Not the government, say members of an increasingly militant computer and software industry. Apple Computer, Microsoft and Sun Microsystems are among the companies vowing to oppose federal efforts to keep tight control on the use of coding technology, known as encryption.
“There is really no way to control this technology,” said Nathan P. Myhrvold, vice president for advanced technology at Microsoft. “What are you going to do, call up John Gotti and tell him that it’s illegal to use coding technology? All regulations do is hurt people who are trying to be law-abiding, and it’s a nightmare for business users who are trying to protect information.”
Technique Available to All
Once a tool only of diplomats, military officials and spies, advanced encryption techniques have become available to anyone with access to cheap computer chips.
Nowadays, virtually all information can be translated into digital form and protected with electronic codes—whether it is a cellular telephone conversation, electronic memo, medical record, corporate payroll, television program or cash from the automated teller machine. And advances in hardware and software have made these codes virtually uncrackable to anyone not knowing the precise string of letters or numbers that represents the key to translating the encrypted information.
A House Judiciary panel will hear testimony on the issue today, for the second time in nine days, as Congress ponders whether to resurrect legislation that would give intelligence officials a greater ability to monitor the use of encryption by businesses and individuals.
An alternative move, favored by a growing number of industry executives, would be to scale back government control of computer encryption by curtailing the powerful National Security Agency’s broad jurisdiction over the private use and export of encryption technology. The president has vowed to veto such a bill.
In his concluding remarks at last week’s hearing, Representative Jack Brooks, Democrat of Texas, chairman of the subcommittee, indicated that business executives would have a chance in the next session to present their case. “We need to examine closely the claims by industry that the current attempts by U.S. intelligence and law-enforcement agencies to restrict this technology will seriously impair privacy and technological development in our country,” he said.
Working on Differences
The computer industry had been trying for months to work out its differences quietly with the N.S.A., the secretive Pentagon branch whose job it is to protect the military’s computers and conduct global electronic intelligence gathering. But dissent within the industry and the convening of Congressional hearings have brought the dispute into the open.
Since the first computers appeared in the 1940s, old-line manufacturers like I.B.M. have traditionally cooperated—however grudgingly—with national-security and law-enforcement officials to keep computer codes out of the wrong hands and the keys in the right ones. Such cooperation was wise in the days when the military and the federal government were the largest computer customers. And during the cold war, it was easier for the government to defend its policies.
New Attitude among Companies
But lately, the younger generation of companies that grew up selling business computers, including Apple, Microsoft and Sun, has dug in its heels. The companies argue that government efforts to stifle encryption technology are not only wrong but futile, putting American business at a disadvantage to foreign competitors who face few constraints in creating and using encryption.
“The most important security measures in which any of us engage in our daily business no longer have anything to do with safes, locks, guards or badges,” said Whitfield Diffie, a computer researcher at Sun Microsystems and one of the nation’s leading cryptographers. “Modern security technology has transplanted written signatures and the simple act of recognizing a colleague from the traditional world of face-to-face meetings and pen-and-ink communications to a world in which digital electronic communications are the norm.”
The banking industry is also concerned about the threat of legislation that would force software makers to provide a “trap door,” making it easier for federal agents to decipher encrypted files. The industry currently transmits $350 trillion a year via encrypted wire transfers, and each day United States banks transmit 350,000 encoded messages transferring funds to other nations.
Changing encryption techniques not only would compromise the security of computerized banking transactions but would cost many millions of dollars, said John Byrne, general counsel for the American Banking Association.
But government officials see any costs or inconveniences to business as the trade-off for maintaining law and order. “This is a very real problem,” said Kier Boyd, deputy assistant director of technical services of the F.B.I. “Somebody who has a rudimentary knowledge of cryptography can generate something on a personal computer which would give us fits as far as reading it.”
The genie, indeed, is out of the bottle. Officials at the Information Technology Association of America, a computer industry trade group, recently obtained a copy of a commercial program called Cryptos written by programmers in Moscow.
Cryptos, which runs on a standard I.B.M. personal computer, encodes data by the two most popular techniques. One is the United States Data Encryption Standard, which the N.S.A. established in 1977 to foster a commercial encryption format that businesses could use (and the government could read, as needed). The second is known simply as R.S.A., an encryption format widely adopted for business use; it was developed outside the N.S.A.’s sphere by academic researchers. The Cryptos package, which is available in software stores in Moscow, sells for about $200.
The encryption war between American industry and government has been waged on several fronts in recent months. Just last month, for instance, the Justice Department began pressing Congress for a bill that would require telephone companies to install equipment making wiretaps easier to conduct in today’s network. The limited introduction of encryption into the telephone network, along with the widespread use of fiber optics and digital transmitters, has made electronic eavesdropping much more difficult than in the days when snoopers needed to do little more than hot-wire a copper phone line.
And it was also last month that some industry executives began charging that the N.S.A. had played a quiet role in limiting the strength of a proposed cryptographic standard for future cellular telephones.
Opposing Needs
“The N.S.A.’s needs run almost directly counter to the economic needs of the country, which include the development of high-technology products based on cryptography,” said Marc Rotenberg, national director for the Computer Professionals for Social Responsibility, a public interest group.
Before the International Business Machines Corporation introduced its newest family of mainframes in 1991, I.B.M. tried for more than a year to persuade the N.S.A. to allow it to build a special piece of hardware that would automatically encode information processed by the new computers. Finally, after unsuccessful meetings that went as high as Adm. William O. Studeman, director of the agency at the time, and high-ranking I.B.M. officials, the company threw up its hands. It now submits individual license requests for each export sale—and is frequently turned down.
Corporate executives and industry consultants who have experience in dealing with the N.S.A. say that despite the end of the cold war and the growing importance of cryptography for business purposes, they expect the agency to continue resisting any fundamental challenges to its control.
Still, industry pressure has apparently led the agency to attempt to negotiate a compromise with American software publishers.
The industry had originally backed trade legislation, still pending, which would move control over cryptography exports from the N.S.A. to the Commerce Department. But because the Bush administration has threatened to veto the legislation, the software publishers have quietly attempted to arrange a deal.
But the largest American computer makers, including I.B.M. and the Digital Equipment Corporation, have refused to participate in the negotiations, saying privately that a weakened version of R.S.A. would not be acceptable to their customers.
The nominee for the job of director of the N.S.A., Rear Adm. John M. McConnell, declined to be interviewed for this article. Michael S. Conn, chief of information policy for the agency, said that national security concerns with cryptography could not be dealt with in a public debate. But the agency, he said, remains confident that it can “continue to meet our mission demands in light of advances in technology.”
May 7, 1992
U.S. Code Agency Is Jostling for Civilian Turf
The National Security Agency is trying to establish a standard for electronically scrambling computer communications, a move that would go far beyond the agency’s usual military and intelligence domain to include civilian activities like electronic tax returns and computerized medical payments.
The plan by the N.S.A., which may be announced as early as today, worries business executives and privacy advocates, who fear government encroachment. And some officials in the Clinton administration believe that the N.S.A. is overstepping its bounds.
The highly secret federal agency is responsible for electronic surveillance of global communications, though usually not civilian communications, within the United States.
But in an era when everyday business is increasingly conducted over computer networks, and when much of that electronic commerce is transmitted in scrambled form to prevent eavesdropping or theft of information, the agency is intent on having government and civilian computer users employ a standard approach to scrambling.
That way, after obtaining a court’s permission, law enforcement officials would have a way of cracking codes.
The National Security Agency will seek bids from companies to produce circuit cards based on its technology, which would be used to scramble electronic messages for government agencies and, eventually, private companies. Agency employees confirmed the plan late Friday, though no agency officials could be reached over the weekend for further details.
I.R.S. Testing Technology
The Internal Revenue Service, the government agency that has the most electronic communication with the public, has already started testing the system. “We need to know what the administrative issues are with this technology,” said Henry Philcox, the tax agency’s chief information officer.
Many computer industry executives oppose the National Security Agency’s effort, saying there is no way for industry specialists and outsiders to determine the reliability and security of the underlying scrambling technology, which the agency intends to keep secret.
Privacy-rights advocates, meanwhile, are wary of the system because of the electronic “back door” it contains, permitting government eavesdropping. And some other administration officials say that the agency is going too far by pushing the standard into civilian computing.
“What these guys are trying to do is run ahead of the blocking,” an administration official who spoke on the condition of anonymity said. “Trying to sell this as the wave of the future is premature as administration policy.”
The circuit card, which is designed to fit into a personal computer and which the agency calls Tessera, is based on technology similar to a device known as the Clipper Chip, a telephone voice-scrambling chip that provides a back-door means for letting law enforcement officials eavesdrop.
The Clipper plan, developed by the National Security Agency in cooperation with the National Institute of Standards and Technology, a Commerce Department agency, was announced in April by the Clinton administration. It has been almost universally opposed by computer and telecommunications executives and by public policy groups.
Letter Protests Secrecy
In a letter to be sent to President Clinton today, which was released on Friday to The New York Times, a group of 38 of the nation’s leading computer scientists, computer-security specialists and privacy experts has urged that the Clipper program be stopped.
“The current proposal was developed in secret by Federal agencies primarily concerned about electronic surveillance, not privacy protection,” the letter states. “Critical aspects of the plan remain classified and thus beyond public review.”
The letter was signed by most of the civilian pioneers of modern cryptography, including Whitfield Diffie of Sun Microsystems, Ralph C. Merkle of the Xerox Corporation, Martin Hellman of Stanford University and Ronald Rivest of the Massachusetts Institute of Technology.
While there has been no other indication so far that the government wants to force private industry to use Clipper or Tessera technologies, their adoption as government and military standards could go a long way toward making them de facto standards. The federal and military markets are some of the largest for the computer and communications industries, and the government has the power to determine what sorts of advanced technology can be exported.
Moreover, the government could insure widespread use of the Clipper and Tessera technologies by insisting that they be used by businesses and individuals when communicating electronically with federal agencies.
Official Reasoning
Law enforcement officials say the technologies are intended to resolve a longstanding problem of the information age: how to preserve the right of businesses and citizens to use codes to protect all sorts of digital communications without letting criminals and terrorists conspire beyond the law’s reach. Businesses and individuals who often communicate over computer networks already make use of a variety of scrambling systems—either of their own devising or those commercially available.
Many of these scrambling systems are unbreakable by anyone who does not hold the electronic keys to the code, something generally known only by the sender and the recipient of scrambled messages.
That is a problem for the National Security Agency, which routinely listens to many of the world’s telephone and computer conversations—although it has no jurisdiction for monitoring non-government conversations within the United States. The N.S.A.’s Tessera and Clipper systems would have an independent agency hold master keys to the codes, which could be obtained with a court’s permission for surveillance by law enforcement officials.
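One way to picture the escrow arrangement is a split key: the device’s key is divided into two shares, each useless on its own and each held by a different agency, so that both must be released, under a court order, before investigators can rebuild it. The sketch below shows the idea in miniature with a simple XOR split; the actual Clipper and Tessera escrow machinery was considerably more elaborate.

```python
# Minimal sketch of split-key escrow: the device key is divided into two
# random-looking shares whose XOR reconstructs it. Each escrow agent alone
# learns nothing about the key; only when both shares are released, say
# under a court order, can investigators rebuild it. The real Clipper and
# Tessera escrow protocol differed in its details.
import secrets

device_key = secrets.token_bytes(10)                 # hypothetical 80-bit unit key

share_a = secrets.token_bytes(len(device_key))       # held by escrow agent A
share_b = bytes(k ^ a for k, a in zip(device_key, share_a))   # held by agent B

rebuilt = bytes(a ^ b for a, b in zip(share_a, share_b))
assert rebuilt == device_key
```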
Agency Has Wider Vistas
The agency plans initially to purchase 10,000 to 70,000 of the Tessera cards for its use and that of the Pentagon. In an industry briefing held earlier this month, however, N.S.A. officials proposed the eventual use of the secure communications card in a vast range of civilian and government applications including some by the Internal Revenue Service, the Departments of Health and Human Services, Justice and State, and in the Senate and the House.
The agency also suggested that the card could be used for civilian functions like electronic mail and in the scrambling systems employed in cable television.
The National Security Agency’s new standard-setting effort is being introduced a couple of weeks before the Clinton administration completes a classified review of the Clipper proposal, and several industry executives said the announcement had been timed to apply pressure to the administration’s decision making.
Government-Industry Argument
The proposal angers industry executives who believe that the agency is rushing to establish a de facto standard that will undercut efforts to adopt a competing commercial standard without a built-in back door. That standard, being developed by R.S.A. Data Security, a Redwood City, Calif., software company, has been endorsed by the nation’s leading computer makers, software developers and telecommunications companies.
These companies are particularly troubled by the National Security Agency’s refusal to disclose the mathematical formula, or algorithm, on which its scrambling technology is based.
“The issue here is: Should a secret algorithm developed by the intelligence community be used for unclassified civilian uses?” said Stephen Walker, a computer security industry executive and a member of the government’s Computer System Security and Privacy Advisory Board. “I think the answer is it should not.”
The agency has increasingly come into conflict with industry and public policy groups who argue that independent and public coding technology is essential if the nation is to develop a viable electronic commerce system.
“These government surveillance plans focus on limiting public privacy at a time when everyone is calling for more privacy,” said Marc Rotenberg, Washington director of Computer Professionals for Social Responsibility, a public interest group that organized the letter that will be sent to President Clinton today. “Privacy is a key part of the national information infrastructure, and the decisions the administration is making are leaning in the wrong direction.”
Fragile Job Security
The new security standard is being proposed at a time the National Security Agency is trying to redefine its role after the cold war, and it raises questions in critics’ minds about whether the agency is overstepping its authority. The 1988 Computer Security Act limited the N.S.A.’s computer security role to military and intelligence agencies.
“These guys are fighting for job security,” said William Ferguson, vice president of Semaphore Inc., a Santa Clara, Calif., computer network security firm. “Now that the K.G.B. has gone commercial, the N.S.A. is trying to start its own initiatives that say, ‘all we’re trying to do is keep up with the K.G.B.’”
White House officials said the agency’s actions would not necessarily force the administration to authorize an unpopular coding technology.
One official said the administration policy review was likely to establish a permanent working group that would limit the National Security Agency’s role in policy making.
The agency originally planned to announce its request for proposals on Friday. But the notice was delayed because the government shut down Thursday in response to the frigid weather that disrupted the supply of electricity in Washington and other parts of the East. The agency tentatively plans to award contracts for the Tessera card by March 25.
January 24, 1994
Researchers Demonstrate Computer Code Can Be Broken
An international team of researchers demonstrated this week that it was possible to break the security codes commonly used to protect financial transactions over the Internet.
The actual feat involved finding the two prime factors for a number about 155 digits long. The answer enables one to obtain the key used to decode a particular piece of encrypted data.
While the seven-month effort required a significant amount of computing power—involving 292 computers at 11 sites in 6 countries—it showed that the standard level of encryption for Internet transactions, known as 512-bit encryption, was not secure.
“We have said for a long time that 512 is not a secure standard,” said Burt Kaliski, chief scientist for R.S.A. Research in San Mateo, Calif., a part of Security Dynamics Technologies’ R.S.A. Data Security Inc., which makes encryption technology and sponsored a challenge to break the 512-bit code. “I don’t think this means that overnight, Web sites will be attacked, but we need to make the transfer to a larger key size.”
The feat also raises questions about the government’s policy of export controls on encryption. The 512-bit system has become the standard because, with some exceptions, it is the most secure encryption allowed for export.
Although higher levels of encryption exist, experts estimate that 95 percent of Internet commerce uses the 512-bit standard.
Last September, the administration announced that the export policy would be reviewed and an update announced within one year.
William A. Reinsch, the Under Secretary of Commerce for Export Administration, said today that the administration would make an announcement by mid-September. He could not say, however, what impact the recent demonstration would have on the encryption policy of the United States.
“Our policy is intended to track market developments, but it’s too soon to come to a conclusion on this,” he said.
Being able to factor a 512-bit number enables one version of what is known as the R.S.A. public key encryption system to be broken. R.S.A., designed in the mid-1970s by Ronald Rivest, Adi Shamir and Leonard Adleman, is used extensively in software to protect electronic data traffic.
R.S.A. and similar encryption systems are based on the fact that it is immensely difficult and time-consuming for even the most powerful computers to factor large numbers.
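The asymmetry is easy to see with deliberately tiny numbers. The toy sketch below, in Python, offers no real security; it simply shows that building the public key from two primes and encrypting with it is trivial, while anyone who can factor the modulus recovers the private key outright.

```python
# Toy R.S.A. with tiny primes, for illustration only; keys this small
# offer no security whatsoever.
p, q = 61, 53
n = p * q                        # 3233, published openly
phi = (p - 1) * (q - 1)
e = 17                           # public exponent
d = pow(e, -1, phi)              # private exponent, kept secret

message = 65
ciphertext = pow(message, e, n)            # anyone can encrypt with (e, n)
assert pow(ciphertext, d, n) == message    # only the key holder can decrypt

# An attacker who factors n rebuilds the private key outright.
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(ciphertext, d_cracked, n) == message
```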
Although the factoring task took the researchers seven months, they were using a relatively small amount of computing power, mostly overnight and on weekends.
“I think there is a huge potential to speed this up a lot more,” said Arjen Lenstra, a computer scientist at Citigroup, and one of the researchers involved in the project. He said that harnessing the computational power available to government agencies or major corporations could enable such codes to be broken in less than one week.
“Microsoft could do this over a Thanksgiving weekend,” he said.
In May, Mr. Shamir, a computer scientist at the Weizmann Institute of Science in Rehovoth, Israel, presented a paper describing a device, not yet built, that could break a 512-bit code. But until this week, the feat was only theoretical. Mr. Lenstra and Mr. Shamir are working on a way to use the device to further speed the factoring process.
August 27, 1999
Nick Patterson; A Cold War Cryptologist Takes a Crack at Deciphering DNA’s Deep Secrets
Thirty years ago, Nick Patterson worked in the secret halls of the Government Communications Headquarters, the code-breaking British agency that unscrambles intercepted messages and encrypts clandestine communications. He applied his brain to “the hardest problems the British had,” said Dr. Patterson, a mathematician.
Today, at 59, he is tackling perhaps the toughest code of all—the human genome. Five years ago, Dr. Patterson joined the Broad Institute, a joint research center of Harvard and the Massachusetts Institute of Technology. His dexterity with numbers has already helped uncover startling information about ancient human origins.
In a study released in May, scientists at the Broad Institute scanned 20 million “letters” of genetic sequence from each of the human, chimpanzee, gorilla and macaque monkey genomes. Based on DNA differences, the researchers speculated that millions of years after an initial evolutionary split between human ancestors and chimp ancestors, the two lineages might have interbred again before diverging for good.
The controversial theory was built on the strength of rigorous statistical and mathematical modeling calculations on computers running complex algorithms. That is where Dr. Patterson contributed, working with the study’s leader, David Reich, who is a population geneticist, and others. Their findings were published in Nature.
Genomics is a third career for Dr. Patterson, who confesses he used to find biology articles in Nature “largely impenetrable.” After 20 years in cryptography, he was lured to Wall Street to help build mathematical models for predicting the markets. His professional zigzags have a unifying thread, however: “I’m a data guy,” Dr. Patterson said. “What I know about is how to analyze big, complicated data sets.”
In 2000, he pondered who had the most interesting, most complex data sets and decided “it had to be the biology people.”
Biologists are awash in DNA code. Last year alone, the Broad Institute sequenced nearly 70 billion bases of DNA, or 23 human genomes’ worth. Researchers are mining that trove to learn how humans evolved, which mutations cause cancer, and which genes respond to a given drug. Since biology has become an information science, said Eric S. Lander, a mathematician-turned-geneticist who directs the Broad Institute, “the premium now is on being able to interpret the data.” That is why quantitative-minded geeks from mathematics, physics and computer science have flocked to biology.
Scientists who write powerful DNA-sifting algorithms are the engine driving the genomics field, said Edward M. Rubin, a geneticist and director of the federal Joint Genome Institute in Walnut Creek, Calif. Like the Broad, the genome institute is packed with computational people, including “a bunch of astrophysicists who somehow wandered in and never left,” said Dr. Rubin, originally a physics major himself. Most have never touched a petri dish.
Dr. Patterson belongs to this new breed of biologist. The shelves of his office in Cambridge, Mass., carry arcane math titles, yet he can converse just as deeply about Buddhism or Thucydides, whose writings he has studied in ancient Greek. He is prone to outbursts of boisterous laughter.
He was born in London in 1947. When he was two his Irish parents learned that he had a congenital bone disease that distorted the left side of his skull; his left eye is blind. He became a child chess prodigy who earned top scores on math exams, and later attended Cambridge, completing a math doctorate in finite group theory. In 1969, he won the Irish chess championship.
In 1972, Dr. Patterson began working at the Government Communications Headquarters, where his research remains classified. He absorbed through his mentors the mathematical philosophy of Alan Turing, the genius whose crew at Bletchley Park—the headquarters’ predecessor—broke Germany’s encryption codes during World War II. The biggest lesson he learned from Dr. Turing’s work, he said, was “an attitude of how you look at data and do statistics.”
In particular, Dr. Turing was an innovator in Bayesian statistics, which regards probability as dependent on one’s opinion about the odds of something occurring and allows for updating that opinion with new data. In the 1970s, cryptographers at the communications headquarters were harnessing this approach, Dr. Patterson said, even while academics considered flexible Bayesian rules heretical.
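The habit of mind is simple to state: begin with a prior belief, observe data, and update. The sketch below applies Bayes’ rule to an invented, purely illustrative question of the kind a code breaker might face.

```python
# Bayes' rule in miniature: a prior belief is updated by new evidence.
# The numbers are invented: a prior chance that an intercepted stream is
# enciphered traffic rather than noise, and how strongly each hypothesis
# predicts the pattern that was actually observed.
prior = 0.10                      # initial belief that the stream is a cipher
p_pattern_if_cipher = 0.80        # P(observed pattern | cipher)
p_pattern_if_noise = 0.05         # P(observed pattern | noise)

evidence = prior * p_pattern_if_cipher + (1 - prior) * p_pattern_if_noise
posterior = prior * p_pattern_if_cipher / evidence
print(f"Updated belief: {posterior:.2f}")    # about 0.64
```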
In 1980, Dr. Patterson moved with his wife and children to Princeton, N.J., to join the Center for Communications Research, the cryptography branch of the Institute for Defense Analyses, a nonprofit research center financed by the Department of Defense. His work earned him a name in the cryptography circle. “You can probably pick out two or three people who’ve really stood out, and he’s one of them,” said Alan Richter, a longtime scientist at the defense institute.
In 1993 Dr. Patterson moved to Renaissance Technologies, a $200 million hedge fund, at the invitation of its founder, James H. Simons, a mathematician and former cryptographer at the institute. The fund made trades based on a mathematical model. Dr. Patterson knew little about money, but the statistical methods matched those used in code breaking, Dr. Simons said: analyzing a series of data—in this case daily stock price changes—and predicting the next number. Their methods apparently worked. In Dr. Patterson’s time with the hedge fund, its assets reached $4 billion.
By 2000, Dr. Patterson was restless. One day, he ran into Jill P. Mesirov, another former defense institute cryptographer, and mentioned his interest in biology. Dr. Mesirov, then director of computational biology at the Whitehead/M.I.T. Center for Genome Research, which later became the Broad Institute, hired him.
“Really, what we do for a living is to decrypt genomes,” Dr. Mesirov said. Cryptographers look at messages encoded as binary strings of zeros and ones, then extract underlying signals they can interpret, Dr. Mesirov said. The job calls for pattern recognition and mathematical modeling to explain the data. The same applies for analyzing DNA sequences, she said.
One common genomic analysis tool—the Hidden Markov Model—was invented for pattern recognition by defense institute code breakers in the 1960s, and Dr. Patterson is an expert in that technique. It can be used to predict the next letter in a sequence of English text garbled over a communications line, or to predict DNA regions that code for genes, and those that do not.
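A minimal sketch of the idea, with invented probabilities: two hidden states, “gene” and “non-gene,” emit DNA letters at different frequencies, and the Viterbi algorithm recovers the most likely sequence of hidden states from the observed letters. Real gene finders are far richer than this.

```python
# Tiny Hidden Markov Model: infer gene vs. non-gene regions from a DNA
# string with the Viterbi algorithm. All probabilities are invented for
# illustration.
import math

states = ("gene", "nongene")
start = {"gene": 0.5, "nongene": 0.5}
trans = {"gene": {"gene": 0.9, "nongene": 0.1},
         "nongene": {"gene": 0.1, "nongene": 0.9}}
emit = {"gene":    {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
        "nongene": {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30}}

def viterbi(seq):
    # best[s] = (log-probability, path) of the best path ending in state s
    best = {s: (math.log(start[s]) + math.log(emit[s][seq[0]]), [s]) for s in states}
    for base in seq[1:]:
        best = {s: max(
                    (lp + math.log(trans[prev][s]) + math.log(emit[s][base]), path + [s])
                    for prev, (lp, path) in best.items())
                for s in states}
    return max(best.values())[1]

print(viterbi("ATATGCGCGCGCATAT"))   # labels each base gene or non-gene
```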
Dr. Patterson said he also has a well-honed instinct about which data is important, after seeing “a lot of surprising stuff that turned out to be complete nonsense.” Dr. Lander of the Broad Institute describes him as a great skeptic, with the statistical insight to tell whether a signal is “simply random fluctuation or whether it’s a smoking gun.”
Making that distinction is one of the great difficulties of interpreting DNA. In studying the human-chimp species split, the genomics researchers strove to rule out possible errors and biases in the data.
Dr. Reich, with Dr. Patterson and Dr. Lander, and two other colleagues, used computer algorithms to compare the primate genomes and count DNA bases that did not match, like the C base in gorillas that had become an A in humans. Because such mutations naturally arise at a set rate, the researchers could estimate how long ago the human and chimp lineages separated from an ancient common ancestor.
A DNA base can mutate more than once, however. To correct for that, Dr. Patterson worked out equations estimating how often it occurred; Dr. Reich revised their computer algorithms accordingly. Two strange patterns emerged. Some human DNA regions trace back to a much older common ancestor of humans and chimps than other regions do, with the ages varying by up to four million years. But on the X chromosome, people and chimps share a far younger common ancestor than on other chromosomes.
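A standard textbook way to make such a correction is the Jukes-Cantor formula, shown below with invented numbers purely as an illustration; it is not the specific set of equations Dr. Patterson worked out for the primate study.

```python
# Correcting an observed divergence for bases that mutated more than once.
# The Jukes-Cantor formula below is a standard textbook correction, used
# here with invented numbers; it is not the specific correction derived
# for the primate study.
import math

observed_diff = 0.0123            # hypothetical fraction of mismatched bases
corrected = -0.75 * math.log(1 - (4.0 / 3.0) * observed_diff)
print(f"Observed divergence {observed_diff:.4f} -> corrected {corrected:.4f}")

# With an assumed mutation rate per site per million years, the corrected
# divergence gives a rough split time; differences accumulate along both
# lineages, hence the factor of two.
rate_per_million_years = 1e-3
split_time = corrected / (2 * rate_per_million_years)
print(f"Implied split: about {split_time:.1f} million years ago")
```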
After the researchers tested various evolutionary models, the data appeared best explained if the human and chimp lineages split but later began mating again, producing a hybrid that could be a forebear of humans. The final breakup came as late as 5.4 million years ago, the team calculated.
The project was “our hobby,” Dr. Reich said of himself and Dr. Patterson. Their main work, in medical genetics, includes devising a shortcut to scan the genome for prostate cancer genes.
Whether studying disease or evolution, Dr. Patterson noted, genomics differs from code breaking in one key respect: no adversary is deliberately masking DNA’s meaning. Still, given its complexity, the code of life is the most open-ended of cryptographic challenges, Dr. Patterson said. “It’s a very big message.”
December 12, 2006
Adding Math to List of Security Threats
One of the world’s most prominent cryptographers issued a warning on Friday about a hypothetical scenario in which a math error in a widely used computing chip could place the security of the global electronic commerce system at risk.
Adi Shamir, a professor at the Weizmann Institute of Science in Israel, circulated a research note about the problem to a small group of colleagues. He wrote that the increasing complexity of modern microprocessor chips is almost certain to lead to undetected errors.
Historically, the risk has been demonstrated in incidents like the discovery of an obscure division bug in Intel’s Pentium microprocessor in 1994 and, more recently, in a multiplication bug in Microsoft’s Excel spreadsheet program, he wrote.
A subtle math error would make it possible for an attacker to break the protection afforded to some electronic messages by a popular technique known as public key cryptography.
Using this approach, a message can be scrambled using a publicly known number and then unscrambled with a secret, privately held number.
The technology makes it possible for two people who have never met to exchange information securely, and it is the basis for all kinds of electronic transactions.
Mr. Shamir wrote that if an intelligence organization discovered a math error in a widely used chip, then security software on a PC with that chip could be “trivially broken with a single chosen message.”
Executing the attack would require only knowledge of the math flaw and the ability to send a “poisoned” encrypted message to a protected computer, he wrote. It would then be possible to compute the value of the secret key used by the targeted system.
With this approach, “millions of PCs can be attacked simultaneously, without having to manipulate the operating environment of each one of them individually,” Mr. Shamir wrote.
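Mr. Shamir did not publish the details, but the danger of a single bad multiplication is illustrated by a well-known earlier result, the fault attack of Boneh, DeMillo and Lipton on R.S.A. signatures computed with the Chinese Remainder Theorem. The toy simulation below, with deliberately small numbers, shows how one corrupted answer betrays a secret prime; it is an analogy, not a reconstruction of Mr. Shamir’s attack.

```python
# Toy simulation of a fault attack on R.S.A. signatures computed with the
# Chinese Remainder Theorem, in the spirit of Boneh, DeMillo and Lipton:
# one corrupted half of the computation lets gcd(s**e - m, n) expose a
# prime factor of n. Deliberately tiny parameters, for illustration only.
from math import gcd

p, q = 1009, 1013                  # the secret primes
n = p * q                          # the public modulus
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # the private exponent

m = 4242                           # the attacker's chosen message

# The device signs m using CRT, but a simulated hardware bug corrupts
# the mod-q half of the computation.
s_p = pow(m, d % (p - 1), p)
s_q = (pow(m, d % (q - 1), q) + 1) % q      # <-- the faulty arithmetic
h = (pow(q, -1, p) * (s_p - s_q)) % p
faulty_sig = s_q + q * h                    # CRT recombination

# Because the mod-p half is still correct, p divides faulty_sig**e - m,
# and the attacker recovers it with a single gcd.
recovered = gcd(pow(faulty_sig, e, n) - m, n)
assert recovered == p
print("Recovered secret prime:", recovered)
```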
The research note is significant, cryptographers said, in part because of Mr. Shamir’s role in designing the R.S.A. public key algorithm, software that is widely used to protect e-commerce transactions from hackers.
“The remarkable thing about this note is that Adi Shamir is saying that R.S.A. is potentially vulnerable,” said Jean-Jacques Quisquater, a professor and cryptographic researcher at the Université Catholique de Louvain in Belgium.
Mr. Shamir is the S in R.S.A.; he, Ronald Rivest and Leonard Adleman developed it in 1977.
Because the exact workings of microprocessor chips are protected by laws governing trade secrets, it is difficult, if not impossible, to verify that they have been correctly designed, Mr. Shamir wrote.
“Even if we assume that Intel had learned its lesson and meticulously verified the correctness of its multipliers,” he said, “there are many smaller manufacturers of microprocessors who may be less careful with their design.”
The class of problem that Mr. Shamir described has been deeply explored by cryptography experts, said Paul Kocher, who is president of Cryptography Research, a consulting and design firm in San Francisco. However, he added that it illustrated how small flaws could subvert even the strongest security.
An Intel spokesman noted that the flaw was a theoretical one, an attack that depended on a long chain of contingencies.
“We appreciate these and we look at everything,” said George Alfs, an Intel spokesman.
In e-mail correspondence after he sent the note, Mr. Shamir said he had no evidence that anyone is using an attack like the one he described.
November 17, 2007
Prizes Aside, the P-NP Puzzler Has Consequences
The September cover article in Communications of the Association for Computing Machinery touched off a distinct buzz last month when more than 10 times the usual number of readers downloaded the article in the first two days it went online.
The subject? A survey of progress being made (not much, apparently) in solving the grand challenge for the fields of theoretical computer science and complexity theory. The problem is described rather opaquely as P vs. NP, and it has to do with real-world tasks like optimizing the layout of transistors on a computer chip or cracking computer codes.
As with earlier grand mathematical challenges like Fermat’s Last Theorem, there is a lot at stake, not least a $1 million cash prize offered for the solution as one of the seven Mount Everest–style “Millennium Problems” posed by the Clay Mathematics Institute almost a decade ago.
So far no one appears to be close to picking up a check. The challenge, in its simplest if not its most illuminating phrasing, is to determine whether or not P equals NP. P stands for the class of problems that can be solved in polynomial time, or reasonably quickly. NP stands for the class of problems whose solutions can be verified in polynomial time—quickly. If it could be shown that P equals NP, the world might be a very different place.
The issues center on a group of classic computer science problems, like the traveling salesman problem—how to figure out the most efficient route through a number of cities given certain constraints. Factoring a large number is also a good example of what is referred to as an NP problem: at present, there is no known method for doing it in polynomial time, but given an answer it is easy to check whether it is correct.
The most efficient layout of transistors on a computer chip is another example of problems that are said to be NP.
The allure of the challenge, in addition to the fame and the money, is that if it is possible to prove that P does in fact equal NP, some of the hardest computing challenges may collapse, leading to a burst of new economic and technological productivity.
All of the tasks above, like the traveling salesman problem, are similar in that as the problems grow in size—one more city for the salesman, one more transistor for the chip—the computing time required to solve them grows explosively; roughly speaking, each added city multiplies the number of possible routes to consider.
Checking whether any particular solution is correct is simple, but finding the best solution to a very large problem is, in practice, out of reach.
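The asymmetry can be seen in a deliberately small sketch: checking a proposed sales route against a mileage budget takes a moment, while finding the best route by brute force means trying every ordering of the cities, a number that grows factorially. The distances below are invented.

```python
# P vs. NP in miniature: verifying a proposed traveling-salesman tour is
# quick, but the brute-force search examines every ordering of the cities.
# Distances are invented for illustration.
from itertools import permutations

dist = {frozenset(pair): d for pair, d in {
    ("A", "B"): 4, ("A", "C"): 7, ("A", "D"): 3,
    ("B", "C"): 2, ("B", "D"): 5, ("C", "D"): 6,
}.items()}

def tour_length(tour):
    legs = zip(tour, tour[1:] + tour[:1])           # return to the start
    return sum(dist[frozenset(leg)] for leg in legs)

# Verification: one pass over a claimed solution (fast).
claimed = ("A", "D", "C", "B")
print(tour_length(claimed) <= 16)                   # easy to check

# Search: factorially many tours to examine as cities are added.
best = min(permutations("ABCD"), key=tour_length)
print(best, tour_length(best))
```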
“There are a lot of smart people who have tried to solve this problem and failed,” said Patrick Lincoln, the director of the computer science laboratory at SRI International, a research group based in Menlo Park, Calif. “But they also failed to prove the problem can’t be solved.”
The editors of the journal said they were tickled to have a hit article on their hands.
“I don’t think we’ve ever had an article that started off with this kind of a bang,” said Moshe Y. Vardi, a Rice University computer scientist who is editor in chief of the journal. “Our e-mail blast went out and thousands of people felt they had to click.”
The author of the article, Lance Fortnow, a computer scientist at Northwestern University’s McCormick School of Engineering, initially told Dr. Vardi that the article would be a short one. “Still open,” he writes, was his first reaction to writing about the state of the work on solving the problem.
There remains a glimmer of hope, he noted. An esoteric branch of mathematics known as algebraic geometry may offer an avenue toward settling the question, which was first formulated by Stephen A. Cook, a University of Toronto computer scientist, in 1971.
That prospect feels a bit intimidating. As Dr. Vardi said, “It’s a bit scary because we have to start learning a very difficult mathematical field.”
October 7, 2009