15

THE POST-SPUTNIK EFFECT

DUDLEY BUCK BURST THROUGH THE DOOR OF THE LAB AT MIT, flustered and excited, with bags under his eyes. He had been up all night, but this time it wasn’t because his baby son Douglas had been crying too much. He had spent the night glued to his ham radio gear, listening to an eerie stream of bleeping noises coming from space.

It was Saturday, October 5, 1957. The night before, the Russians had launched Sputnik 1, the first-ever artificial satellite. It was a polished metal sphere, weighing about eighty kilograms and trailing four long, spring-loaded aerials that were pulsing a signal back to Earth. With Sputnik traveling at about eighteen thousand miles per hour, each orbit of Earth took only ninety-six minutes.

“At MIT, everyone I knew was stunned and wildly excited,” recalls Bert Korkegaard, a physics student in Buck’s lab at the time who had been a radar operator in the Korean War. “I remember Dudley showing up excited and exhausted after listening to its radio signals on ham radio gear. Somehow that helped make us realize it was really happening.”

The Soviets had drawn first blood in the battle to conquer space. Not only had they succeeded in getting something into orbit, but it was communicating with Earth. If they could do that, they could launch a missile—or suspend one in orbit. Indeed, the rocket that had been used to launch Sputnik had already been through successful tests for use as an intercontinental ballistic missile.

Sputnik generated its own propaganda. For twenty-two days it beamed radio signals that were picked up by ham operators all over the world. Anyone with a telescope, or even a pair of binoculars, was able to see the shiny ball pass overhead.

Four weeks later, the Russians went one better, launching Sputnik 2. It carried Laika, a part-terrier, part-husky stray picked from the streets of Moscow, who became the first animal to orbit Earth. She died within a few hours of launch, as she was always destined to: going into space was still a one-way ticket, since no means of reentering Earth’s atmosphere had yet been devised.

President Dwight D. Eisenhower tried to react coolly in public to the Sputnik launches, claiming that he knew all about the Soviet program thanks to information gleaned from U-2 missions over the Soviet Union. The calm was largely a front, however, according to NASA historian Roger Launius, who maintains that while America was still better placed, both technologically and militarily, Sputnik was a phenomenal coup in the propaganda wars.

On the night of the launch America’s satellite experts had been at a cocktail reception at the Soviet embassy in Washington, DC. Scientists from both sides of the Iron Curtain had come together for a six-day conference on space under the auspices of the Comité Spécial de l’Année Géophysique Internationale—a neutral international organization that had been created to oversee the informal competition to get a satellite into space first.

According to their official schedules, both the Americans and the Russians were on track for their first launches to come early the following year. It was a reporter from the New York Times attending the party who broke the news of Sputnik to the American delegation. His editor had called the embassy to say that the launch had been announced by TASS, the Soviet news agency.

Sputnik had already orbited Earth twice, without any of the US systems detecting it. Suddenly Soviet premier Nikita Khrushchev’s boasts that the USSR was the greatest country on Earth were starting to carry more weight. The Project Vanguard team had been comfortably beaten. It would be another six months before Vanguard 1 was launched successfully. The first attempt saw the rocket exploding in flames about three feet off the ground; the next got to an altitude of about four miles before breaking apart.

In the end, Vanguard did not even carry America’s first satellite into orbit. Eager to put something into space, Wernher von Braun and his team launched the satellite Explorer 1 atop a modified Jupiter-C rocket on January 31, 1958—just to prove that they knew how to do it.

The political impact of Sputnik was overwhelming. Eisenhower was under pressure. He had come to power on the back of his military credentials following discontent with the Korean War. Now he was viewed as a lazy president who spent too much time on the golf course. Sputnik had cemented the idea of the “missile gap” in the public debate, and the Democrats were out to cash in.

“The only appropriate characterization that begins to capture the mood on 5 October involves use of the word hysteria,” wrote NASA’s Launius. “Almost immediately, two phrases entered the American lexicon to define time, ‘pre-Sputnik’ and ‘post-Sputnik.’”

Lyndon B. Johnson, the Texan Democrat and future president, was the Senate majority leader at the time. He immediately launched hearings of the Senate Armed Services Committee to review America’s space and defense programs. The committee concluded that America’s space efforts had been woefully underfunded and pinned the blame directly on the president and the Republican Party.

“The simple fact is that we can no longer consider the Russians to be behind us in technology,” said George Reedy, one of Johnson’s aides who would serve as his press secretary once they eventually reached the White House. “It took them four years to catch up to our atomic bomb and nine months to catch up to our hydrogen bomb. Now we are trying to catch up to their satellite.”

Eisenhower had started to scramble for ideas before the Johnson camp went on the attack. Four days after Laika was blasted into space, he called MIT for help. James Killian, the MIT president who had helped devise the U-2 (and wrote the blueprint for finding future scientists in Life magazine), was called into government full-time as Eisenhower’s special assistant for science and technology.

He also instructed the US Department of Defense to set up a new technological research agency, the Advanced Research Projects Agency, to coordinate the science being developed in different branches of the military.

After a degree of badgering from Killian, the newly created President’s Science Advisory Committee recommended that a new civilian agency be created to handle America’s space programs—the National Aeronautics and Space Administration, or NASA.

In the first days after the first Sputnik launch, Buck’s diaries and notebooks go quiet. The usual routine of entries about lab experiments, developments in projects being run by his students, and progress in finding key materials simply stopped.

He appears to have been diverted to a greater cause. When the satellite first went up, many of America’s top scientists were told to drop their routine work and turn their attention to Sputnik—calculating the orbit of the tiny metal sphere and trying to predict how long it would take before it started to lose height and then burn up in the atmosphere.
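
The orbital arithmetic itself is simple for an idealized case, even if predicting atmospheric decay was not. As a rough illustration (and not the method those scientists actually used), Kepler’s third law links a satellite’s period to the size of its orbit. A minimal sketch in Python, assuming a perfectly circular orbit at an illustrative average altitude of about 580 kilometers:

```python
import math

# Earth's standard gravitational parameter (m^3/s^2) and mean radius (m).
MU_EARTH = 3.986004418e14
R_EARTH = 6.371e6

def orbital_period(altitude_m: float) -> float:
    """Circular-orbit period via Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

def orbital_speed(altitude_m: float) -> float:
    """Circular-orbit speed: v = sqrt(mu/a)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

altitude = 580e3  # illustrative average altitude for Sputnik 1, in meters
print(f"Period: {orbital_period(altitude) / 60:.1f} minutes")   # ~96 minutes
print(f"Speed:  {orbital_speed(altitude) * 2.23694:,.0f} mph")  # ~17,000 mph
```

The ninety-six-minute period falls straight out of the geometry (Sputnik’s real, slightly elliptical orbit made it faster still at its low point). Predicting the decay was the genuinely hard part, since it depended on upper-atmosphere densities that were themselves poorly known at the time.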

As the “hysteria” that Launius described evolved into a more pragmatic response, the need for America to raise its scientific game became a national priority. Better computers were a big part of the agenda, as computers were clearly going to underpin all technological progress. They could run the calculations to put satellites in the air and they could guide missiles; they could also be used for their original purpose of cracking codes and processing intelligence.

The quicker the computer, the more it could do. The cryotron was seen as the answer—certainly in some quarters. The National Security Agency was still the dominant force in the American hierarchy when it came to all things computing.

As David Brock from the Computer History Museum explains,

There were two things going on at this time. The NSA wants the biggest, fastest, most-powerful computers it can get. They don’t care if they are huge. They don’t care if they need a special building. They don’t care if you have to freeze it until it’s colder than the outer reaches of space. They don’t care how much it costs.

On the other hand, there is all the stuff going on in aerospace: ICBMs [intercontinental ballistic missiles], supersonic jets. The computerization of aerospace introduces another set of pressures. In aerospace, they were saying we care that you can shake it and it doesn’t break. We care how much it weighs. It actually has to be superreliable, because if we put it in a satellite or an ICBM we can’t change the part. It has to be heat resistant. It has to fly. Smaller, more reliable, more rugged. This makes everyone interested in microelectronics and microcircuitry. The cryotron was ticking off many of these boxes. The refrigeration was the only thing.

At this point, nobody really expected the silicon integrated circuit—what we would broadly refer to as the microchip—to evolve into meaningful technology. It was only in about 1964, five years after Buck’s death, that silicon-based semiconductors started to take over from the cryotron.

“What first attracted me to the Cryotron and to Dudley Buck was that there was a time when he had a considerable lead,” explains Brock. “These integrated thin film cryotron arrays were ahead of where silicon integrated circuits were. More complex cryotron devices had been made. They were more integrated than silicon integrated circuits.”

A lot of the most enthusiastic research into Buck’s cryogenic switch was being undertaken by IBM. The company had been working on different variations of the cryotron ever since it had first encountered the device in the summer of 1955.

After Buck refused their invitations to join the company, IBM set its own researchers to the task of turning the cryotron into a revolutionary computer component. One such researcher was James Crowe, a former college classmate of Buck’s from the University of Washington, who worked for IBM’s Military Products Division and had developed a cryotron memory system that could switch in ten nanoseconds, about one hundred times faster than the magnetic core memories installed in IBM’s commercially available machines at the time.

Crowe came to MIT and showed Buck an even quicker switch that could flip from zero to one in eight nanoseconds. It was a secretive project; Crowe waited ten years to file the patent on his discovery.

In October 1956 a full IBM cryotron research program was set up under the physicist Richard Garwin, who had joined from the renowned Los Alamos laboratory in New Mexico. It was Garwin who had refined the design of the Ivy Mike hydrogen bomb. Soon he had a team of researchers running experiments at the company’s Watson research lab at Columbia University, another at the IBM headquarters in Poughkeepsie, New York, and two other teams in temporary research facilities that had been set up across New York State.

By that time IBM had won the contract to build an upgraded version of the Semi-Automatic Ground Environment (SAGE) air defense system—the project that had evolved from the Whirlwind computer. Garwin had been seconded to the Whirlwind program during 1953–54 while he was still on the military payroll. He would have encountered Buck then, had they not met previously. IBM had been set a tough specification by the air force for the SAGE system, and believed that cryotrons could be the key to building a computer capable of the task.

Garwin spoke about the cryotron research project in a 1986 interview on his career with the American Institute of Physics: “I had a hundred people working for me at various IBM locations by the end of 1956, to build superconducting computers out of thin film cryotrons.”

Buck had filed the patent for an upgraded high-speed cryotron in January 1957; three more variations on the design were filed in the following three months. By the time of the Sputnik launch in October 1957, the device was becoming more sophisticated.

The day after Killian was appointed to the Eisenhower government in response to Sputnik, Buck gave a lecture on his invention to the American Institute of Electrical Engineers in New York. It spawned yet more research projects.

A few days later he gave a more advanced version of his speech to the NSA at Arlington Hall, the former wartime codebreaking station. Solomon Kullback, the agency’s director of research and development, was so keen for his staff to hear about the cryotron that he opened up Buck’s part of an all-day seminar to “all agency personnel,” according to a memo in Buck’s files.

Building a supercomputer was now a top priority for the NSA. An earlier scheme that had been more or less forgotten about was given fresh attention. Project Lightning had been dreamed up in 1956 by the then head of the NSA, General Ralph Canine, who wanted to chalk up one last milestone before retiring.

At a cocktail party in July of that year he had blurted out, “Build me a thousand megacycle machine! I’ll get the money!” The machine he demanded was about ten times quicker than anything in existence at the time.

Eisenhower signed off on the budget for Project Lightning personally. Buck had been loosely involved in the project from the start, having been summoned to see Canine two days after the general got the budget approved.

There were a number of concerns about Project Lightning, however. As a concept, it sounded rather like Project Nomad, the giant costly flop that Buck had been sent to monitor. Both machines were conceived as a means to process large volumes of data, such as the intelligence that came flooding in from NSA listening posts. An expansion of intelligence-gathering had created an overwhelming bureaucratic burden.

Declassified NSA documents reveal the problem: “Sites around the world were sending [redacted] intercepts to NSA each month in the 1950s; conventional machines were not equal to the task of sorting, standardizing and routing this tonnage.”

Post-Sputnik, Project Lightning got a new injection of energy and interest. The belief was that “with an adequate budget and a genuine ‘free hand’ NSA could create a new generation of super-fast computers, perhaps tripling processing speed at a stroke.”

Project Lightning was not about building a single machine but instead devising the components that could allow lots of new machines to be built. The reinvigorated second phase of the project was mostly about cryotrons. IBM, which was by this stage dedicating considerable resources to Buck’s gadget and its own incarnations of the device, was one of the main contractors hired to work on the scheme.

IBM historians have previously declared that by 1958 roughly 85 percent of the Project Lightning funding was being directed to cryotron technology.

As Brock explains,

Project Lightning was the NSA saying, “Let’s find the thing that’s better than the transistor that will give us more gigantic computers that are faster and use less power.” Lightning was a huge part of getting cryotrons going. The main points of interest were superconducting electronics and the cryotron. And a device called tunnel diodes, which burned pretty brightly then died out.

The cryotron effort in Lightning, a lot of the money, went to IBM; a lot went to RCA. A lot of other companies saw this and jumped in—like GE—even though they did not get any Lightning money. Lightning was an important shot in the arm to this microcircuitry area.

There was now so much work being conducted on the cryotron that concerns were emerging about the stocks of helium being consumed by the experiments. When big companies wrote to Buck to inquire about the cryotron, he would ask that they contact their congressman to alert him to the need for conserving helium in America’s public interest.

Although there were now huge numbers of scientists researching the cryotron, the most advanced work was still taking place at MIT. It was not just the device itself that was groundbreaking, but the way in which it was made.

Efforts to build quicker and smaller cryotrons had moved in a new direction, technically. Buck and his team were trying to draw cryotrons by firing beams of electrons, using a device similar to the equipment you would find inside an old television set.

The electron guns inside a cathode-ray TV set are used to excite the phosphor on the back of the screen to create a picture. In Buck’s experiments, the fine beam of electrons fired by the gun was used instead to start a chemical reaction. Where the beam struck, it would leave behind pathways of superconducting metal that could be arranged in different patterns to replicate the effect of winding the two tiny wires around one another.

The chemicals were deposited first on thin layers of quartz or silicon, and those layers could be stacked on top of one another. Repeating the process with different chemicals, layer upon layer, allowed Buck and his lab colleagues to create an incredibly small version of the cryotron. As with the earlier experiments, the smaller the device could be made, the quicker it should become.
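
The switching behavior that those patterned films had to reproduce can be captured in a toy model: the gate wire carries current with zero resistance until the magnetic field generated by the control current exceeds the critical field of the gate metal. A minimal sketch in Python, using made-up illustrative numbers rather than Buck’s actual operating parameters:

```python
# Toy model of a cryotron: the magnetic field from a control current drives
# the gate wire out of its superconducting state. Numbers are illustrative.

class Cryotron:
    def __init__(self, critical_field_gauss: float = 50.0,
                 gauss_per_amp: float = 400.0):
        # Field at which the gate metal stops superconducting, and the
        # field produced per ampere of control current (both invented).
        self.critical_field = critical_field_gauss
        self.gauss_per_amp = gauss_per_amp

    def gate_is_superconducting(self, control_amps: float) -> bool:
        """The gate stays resistance-free until the control field exceeds
        the critical field; beyond that it behaves as an ordinary resistor."""
        return self.gauss_per_amp * abs(control_amps) < self.critical_field

c = Cryotron()
for amps in (0.0, 0.05, 0.2):
    state = "on (superconducting)" if c.gate_is_superconducting(amps) else "off (resistive)"
    print(f"control current {amps:.2f} A -> gate {state}")
```

Flipping the gate between those two states is what makes the device a binary switch, and the principle is the same whether the control field comes from a wire wound around the gate or from a deposited film crossing over it.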

The ideas they were using were not wholly original. The German scientist Gottfried Möllenstedt at the University of Tübingen had used electron guns to change the properties of chemical films. His work was focused on optics technology, however.

Using an electron gun to make a computer circuit was a new concept. It is ultimately how silicon-based integrated circuits—the microchip—later came into being. Chuck Crawford, Buck’s former lab assistant, believes it was Ken Shoulders, his lab partner, who delved into Möllenstedt’s work and decided it could be adapted for their purposes. It was Buck who then pulled the concept apart, breaking down what must have seemed a rather fanciful notion into a practical series of experiments.

Shoulders became renowned in later years for his wacky ideas. He had a contract with the CIA to build jet packs and primitive drones. He built a flying car, the Girodyne Convertiplane, but no one would let him test it on the roads. The association with Shoulders, Brock believes, has been bad for Buck’s reputation over the decades since his death. The broad sweep of history has relayed the narrative of how the silicon microchip changed the world. Those with a hand in its development have been heaped with praise. Buck’s work has been mostly forgotten or ignored by everyone other than those who continued the research. Yet it was Buck and Shoulders who led the pack, in some regards.

In more recent years, thanks partly to Professor Karl Berggren of MIT’s Department of Electrical Engineering, the role of Buck and Shoulders in the evolution of computer chips has started to be a little more widely acknowledged. Since the mid-2000s, Berggren has been teaching MIT students about Dudley Buck and the work he did on campus. Berggren refers to all forms of superconducting chips as nanocryotrons “in Buck’s honor.”

Berggren also insists that credit is properly apportioned to Buck and Shoulders for devising the process of electron-beam lithography—the writing of circuits with electron beams. For decades the origin of the concept had been attributed to Nobel laureate Richard Feynman. Berggren has set essay questions asking students to read Feynman’s paper and the Buck–Shoulders research and then identify whose vision reflects how the technology evolved. “The ‘correct’ answer is for the students to observe that Buck’s vision was the one followed,” says Berggren. He wrote an editorial for the journal Nanoscale in 2011 to make this point specifically:

It has become a cliché to reference Richard Feynman’s There’s Plenty of Room at the Bottom lecture in nearly every commentary published on nanoscale engineering and fabrication. However, less well known, but perhaps more accurately visionary, was a paper written by Buck and Shoulders in 1958, a year before Feynman’s speech, which laid out a procedure by which nanostructured electronics might be written by electron beams. Although their vision was narrower, it was closer to the path that the nanotechnological revolution actually followed in the ensuing 50 years.

AS BUCK AND SHOULDERS worked on their new electron-gun cryotron, they were evidently far from certain it would work. While developing the Cryotron Mark 2 with this cutting-edge electron-beam technology, they continued to run a parallel set of experiments based on improved versions of the original wound-wire design.

Their attempts to manufacture the new thin-film cryotron, as they called it, required some new equipment. The electrons were concentrated into a beam a fraction of a millimeter in diameter using two condensing lenses. Buck and Shoulders were working on such a small scale by this stage that they did not have a microscope powerful enough to see what they were doing. Buck sank a considerable sum of the lab’s money into buying an electron microscope from RCA—a necessary tool for the experiments he planned to do, but something no one at MIT had seen before. It could magnify images one thousand times more than standard microscopes could.

“Much to everybody’s amazement, not the least of which mine, Dudley said to me, ‘Chuck, you are in charge of running this microscope,’” explains Crawford. “I was an undergraduate, and this fancy new electron microscope is put in my charge. The service technician from RCA was appalled. It was like he was a car salesman who had just sold a man a new car who then took his five-year-old kid, put him in the driver’s seat, and said, ‘Okay, you’re the driver.’”

Soon Crawford was running experiments with a variety of chemicals in every possible combination to try to take the thin-film cryotron from concept to reality. The trick was to find chemicals that could be successfully converted into fine lines of superconducting metal after being blasted with a beam of electrons.

Crawford reflects on his work at this time as being similar to the research performed by the early pioneers of photography. Just as photographic film reacts to light to leave an image behind, so the different chemicals reacted to the electrons.

“For a century and a half, the human race has made silver halide–type compounds to make photographic film,” explains Crawford. “That silver halide process has to be pretty carefully prepared to make it work. There were decades of experimentation learning to make first black and white pictures, then color pictures, then Polaroid instant pictures before digital photography took over. For every reaction you can make go by light optics, there are probably thousands of reactions you can make by electron optics. There’s a broad spectrum of things you can do with electron optics. To figure out which reactions were worthwhile, you had to begin learning organic chemistry.”

For a group of electronic engineers and physicists, this was straying into less familiar territory. They did not know too much about the finer points of chemistry.

“Dudley decided he had to understand organic chemistry better to make some of these things work,” Crawford recalls. “To get the chemistry, Dudley started sitting in on organic chemistry courses. If you are a member of the staff at MIT you can audit any courses you like, so he started sitting in on organic chemistry courses.”

The experiments went on endlessly. Writing circuits with an electron gun was not as easy in practice as it sounded. There were lots of botched attempts.

Other scientists around America were also starting to find ways to make smaller and smaller computer components—most notably, Jack Kilby at Texas Instruments. By the summer of 1958, Kilby was trying to make all the components of a computer circuit on one lump of either germanium or silicon. It was an attempt to miniaturize the transistor, the revolutionary switch that had been devised by Bell Laboratories in 1947. Kilby had not yet worked out how to insulate the components, which Buck was doing by layering circuits on top of one another.

No one at MIT thought these semiconductor computer components would take off—especially not Buck and Ken Shoulders. As Crawford recalls,

There were a couple of papers around explaining that you could never make really small transistors. The reason given was that power dissipation in the semiconductor would be high to the point where if you tried to pack a large number of them into a small space the heat would be overwhelming. That was because the quality of the silicon semiconductors was pretty lousy. It was an industry that took a number of years to come up to speed.

At that point in time, for the group at MIT, making the computer components in solid state and thin layers was almost an obvious idea. It was apparently blocked by what was then regarded to be valid physics calculations that suggested it would be very hard to do.

By the spring of 1959, Robert Noyce, an MIT alumnus and one of the cofounders of Fairchild Semiconductor, had come up with an idea similar to Buck’s, only using ultraviolet light rather than electron beams to write the circuit. Noyce later cofounded Intel Corporation, one of the world’s biggest manufacturers of microchips.

Buck knew that he was part of a generation of scientists that would leave a mark on history. One of these competing teams of scientists would soon make a monumental breakthrough. He suspected he would get there first. Buck’s chips were superconducting, whereas Kilby and Noyce were using semiconductors. The process of making them was similar. Where Kilby and Noyce had an advantage was that their transistor-based invention did not need to be suspended in liquid helium in order to work.

In December 1957, more than a year before Noyce conceived his version of the microchip, Buck wrote in his notes, “I feel that there is a revolution in digital computer fabrication available to us in the next decade and that our present work with cryotrons and other vacuum-deposited computer components ranks high among the possible ways in which that revolution will come about.”