4
The Chip and Silicon Valley
After a decade of slow but steady progress in transistor development, a breakthrough occurred at the end of the 1950s: inventors in Texas and California, working independently, devised a way of placing multiple transistors and other devices on a single chip of silicon. That led rapidly to circuits that could store ever-increasing amounts of data—the storage component of computing. The integrated circuit (IC) was a breakthrough, but it was also part of a long evolutionary process of miniaturization of electronic circuits. Well before the invention of the transistor, manufacturers of hearing aids sought ways to make their products small and light enough to be worn on the body and even concealed if possible.
That impetus to miniaturize vacuum tubes had a direct influence on one of the most famous secret weapons of World War II, the Proximity Fuze—a device that used radio waves to detonate a shell at a distance calculated to be most effective in destroying an enemy aircraft, but not requiring a direct hit. Combined with the analog gun directors mentioned earlier, the Fuze gave the Allies a potent weapon, especially against the German V-1 buzz bomb. The Proximity Fuze had to fit inside a shell (“about the size of an ice-cream cone”) and be rugged enough to withstand the shock and vibration of firing.1 From that work came not only rugged miniature tubes, but also the printed circuit: a way of wiring the components of the device by laying down conducting material on a flat slab of insulating material rather than connecting the components by actual wires. Descendants of these printed circuits can be found inside almost all modern digital devices. The concept of printing a circuit would have much in common with the invention of the integrated circuit over a decade later.
One characteristic of computing that is most baffling to the layperson is how such astonishing capabilities can arise from a combination of only a few basic logical circuits: the AND, OR, NOT circuits of logic, or their mathematical equivalents: the addition, multiplication, or negation of the digits 1 and 0. The answer is that these circuits must be aggregated in sufficient numbers: a few dozen to perform simple arithmetic, a few hundred to perform more complex calculations, tens of thousands to create a digital computer, millions or billions to store and manipulate images, and so on. Besides the active components like vacuum tubes or transistors, a computer also requires many passive devices such as resistors, diodes, and capacitors. The key element of computer design, software as well as hardware, is to manage the complexity from the lower levels of logical circuits to ever-higher levels that nest above one another. One may compare this to the number of neurons in the brains of animals, from the flatworm, to a cat, to Homo sapiens, although the history of artificial intelligence research has shown that comparing a human brain to a computer can distort as much as clarify. A better analogy might be to compare the complexity of a computer chip to the streets of a large metropolis, like Chicago, which can support amenities not found in smaller cities: for example, symphony orchestras, major league sports teams, and international airports.
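To make that aggregation concrete, the following minimal sketch (in Python, not part of the original text) shows how the AND, OR, and NOT operations on the digits 1 and 0 can be combined into a circuit that adds two 4-bit numbers. The function names and the 4-bit width are illustrative choices; the point is only that a few dozen gate operations suffice for simple arithmetic, just as the text describes.

# A hypothetical illustration: basic logic operations on the digits 1 and 0.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def half_adder(a, b):
    # The sum bit is an exclusive-OR built from AND, OR, and NOT.
    carry = AND(a, b)
    total = AND(OR(a, b), NOT(carry))
    return total, carry

def full_adder(a, b, carry_in):
    # Two half adders plus an OR gate handle the incoming carry.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def add_4bit(x, y):
    # Chain four full adders, least significant bit first.
    carry, result = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 6 (0110) + 3 (0011) = 9 (1001), with bits listed least significant first.
print(add_4bit([0, 1, 1, 0], [1, 1, 0, 0]))  # ([1, 0, 0, 1], 0)

Each full adder consumes roughly a dozen gate operations, so adding two small numbers already requires a few dozen of them; storing and manipulating images multiplies that count into the millions and billions mentioned above.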
In designing the ENIAC, Eckert and Mauchly recognized the need to manage complexity, for which they designed standard modules containing a few dozen tubes and other components. These performed a basic operation of adding and storing decimal numbers. If a module failed, it could be quickly replaced by a spare. In the early 1960s, IBM developed what it called the standard modular system of basic circuits, mounted on a printed circuit board about the size of a playing card, for its mainframes. The first products from the Digital Equipment Corporation were also logic modules, which performed complex operations likely to be used in a digital computing system. Those who designed computers without such modularity found their systems almost impossible to maintain or troubleshoot—the systems were literally “haywire.”
The Invention of the Integrated Circuit
The next logical step in this process was to place all of the elements of one of those modules on a single chip of material, either germanium or silicon. That could not happen until the transistor itself progressed from the crude device of 1947 to a reliable device that emerged around 1959. Another step was required, and that was to understand how to integrate the transistors with the passive components, such as resistors and capacitors, on the same piece of material. Passive components were cheap (they cost only a few pennies each) and rugged. Why make them out of the same expensive material as the transistors?
The reason is that making the passive devices out of germanium or silicon meant that an entire logic circuit could be fashioned on a single chip, with the interconnections built in. That required a way of depositing circuit paths on the chip and insulating the various elements from one another. The conceptual breakthrough was really the first step: to consider the circuit as a whole, not as something made of discrete components. Jack Kilby, working at Texas Instruments in Dallas, took that step in 1958. Before joining Texas Instruments, Kilby had worked at a company named Centralab in Milwaukee, an industry leader in printed circuits. He moved to Texas Instruments, which at the time was working on a government-funded project, Micro-Module, that involved depositing components on a ceramic wafer. Kilby did not find this approach cost-effective, although IBM would later use something like it for its mainframes. In summer 1958, Kilby arrived at the idea of making all the components of a circuit out of the same material. He first demonstrated the feasibility of the idea by building an ordinary circuit of discrete components, all of them, including the resistors and capacitors, made of silicon instead of the usual materials. In September he built another circuit, an oscillator, this time with all components made from a single thin wafer of germanium. Fine gold wires connected the elements on the wafer to one another. In early 1959 he applied for a patent, which was granted in 1964.
Robert Noyce was working at Fairchild Semiconductor in Mountain View, California, when he heard of Kilby’s invention. In January 1959 he described in his lab notebook a scheme for doing the same thing, but with a piece of silicon. One of his coworkers, Jean Hoerni, had literally paved the way by developing a process for making silicon transistors that was well suited for mass production. He called it the planar process. As the name implies, it produced transistors that were flat (other techniques required raised metal lines or, in Kilby’s invention, wires attached to the surface). The process was best suited to silicon, where layers of silicon oxide could be used to isolate one device from another. Noyce applied for a patent in July 1959, a few months after Kilby (see figure 4.1). Years later the courts adjudicated the dispute over the competing claims, giving Kilby and Noyce, and their respective companies, a share of the credit. For his work, Kilby was awarded the 2000 Nobel Prize in Physics. Noyce had passed away in 1990 at the age of sixty-two; had he lived, he no doubt would have shared the prize.2
Figure 4.1
Patent for the integrated circuit, as invented by Robert Noyce of Fairchild Semiconductor.
The invention of the IC had two immediate effects. The first came from the U.S. aerospace community, which always had a need for systems that were small and lightweight and, above all, reliable. Developers of guided and ballistic missiles were repeatedly embarrassed by launch failures, later traced to the failure of a simple electronic component costing at most a few dollars. And as systems became more complex, the probability of wiring errors introduced by the human hand increased, no matter how carefully the assembly process was organized. In the late 1950s the U.S. Air Force was involved with the design of the guidance system for the Minuteman solid-fueled ballistic missile, where reliability, size, and weight were critical. From the Minuteman and related projects came the critical innovation of the “clean room,” where workers wore gowns to keep dust away from the materials they were working with, and air was filtered to a degree not found in the cleanest hospital. At every step of the production of every electronic component used in Minuteman, a log was kept that spelled out exactly what was done to the part, and by whom. If a part failed a subsequent test, even a test performed months later, one could go back and find out where it had been. If the failure was due to a faulty production run, then every system that used parts from that run could be identified and removed from service. Although these requirements were applied to circuits made of discrete components, they were immediately applicable to the fabrication of ICs as well, where obtaining what engineers called the yield of good chips from a wafer of silicon was always a struggle.
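The record-keeping just described amounts to a simple set of cross-references, and a minimal sketch (not from the original text, with invented part and run identifiers) shows the reasoning: given a failed part, look up its production run, then flag every system containing any part from that run.

# Hypothetical records: which production run each part came from,
# and which parts went into each fielded system.
part_to_run = {
    "P-1001": "RUN-A", "P-1002": "RUN-A",
    "P-2001": "RUN-B", "P-2002": "RUN-B",
}
system_to_parts = {
    "MISSILE-1": ["P-1001", "P-2001"],
    "MISSILE-2": ["P-1002"],
    "MISSILE-3": ["P-2002"],
}

def systems_affected_by(failed_part):
    # Trace the failed part back to its production run, then flag every
    # system that used any part from the same run.
    bad_run = part_to_run[failed_part]
    return [system for system, parts in system_to_parts.items()
            if any(part_to_run[p] == bad_run for p in parts)]

# A failure of P-1002, traced to RUN-A, pulls MISSILE-1 and MISSILE-2 from service.
print(systems_affected_by("P-1002"))  # ['MISSILE-1', 'MISSILE-2']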
In the early 1960s the air force initiated development of an improved Minuteman, one whose guidance requirements were far greater than the existing missile’s computer could handle. The reengineering of Minuteman’s guidance system led, by the mid-1960s, to massive air force purchases of the newly invented integrated circuit, primarily from Texas Instruments. Those purchases helped propel the IC into the commercial marketplace. The Minuteman contracts were followed shortly by NASA contracts for the guidance computer for the Apollo spacecraft, which carried astronauts to the moon between 1969 and 1972. It is no exaggeration to say that in the mid-1960s, the majority of all the ICs in the world could be found either in the guidance systems of Minuteman intercontinental ballistic missiles or in the command and lunar modules of the Apollo spacecraft.
Route 128 and Silicon Valley
The invention of the IC led to a rapid acceleration of the minicomputer market. The first model of the PDP-8 used discrete components, but later models used ICs, and the Digital Equipment Corporation followed with other products, notably the PDP-11, with capabilities reaching into the mainframe market. DEC was soon joined by a host of competing minicomputer companies, which found it relatively easy to enter the market with designs based on the availability of standardized logic chips offered by Fairchild, Texas Instruments, and their brethren. Their entry was helped by a particular product offered by Fairchild: a circuit that could be used as a computer’s memory in place of the magnetic cores then in common use. Cores—small doughnut-shaped pieces of magnetic material with wires threaded through them—were compact but typically had to be hand assembled, whereas IC memories could be produced along with the logic circuits. And the power needs, size, and electrical characteristics of IC memories fit well with the printed circuit boards that made up a computer.
Many of these minicomputer companies were located along Route 128 in the Boston suburbs, but others were located near Fairchild in the Santa Clara Valley south of San Francisco. Don Hoefler, a local journalist, dubbed the region “Silicon Valley” in 1971, and it has been known by that name ever since. Fairchild was the leading company, but from the beginning of its involvement with the IC, many of its best employees began leaving the company to found rivals nearby. A running joke in Silicon Valley was that an engineer might find himself with a new job if he accidentally turned into the wrong parking lot in the morning. The exodus of Fairchild employees was poetic justice: Fairchild itself had been founded by defectors from a company established in Palo Alto by William Shockley, one of the inventors of the transistor. Although disruptive in the short term, this fluidity ultimately contributed to the success of Silicon Valley as a dynamo of innovation.
Among the Fairchild spin-offs was a company called Intel, founded in 1968 by Robert Noyce, the IC coinventor, and Gordon Moore, who in 1965 observed the rapid doubling of the number of components that could be placed on a chip. They were soon joined by Andrew Grove, also a Fairchild alumnus. Intel scored a major success in 1970 with its introduction of a memory chip, the 1103, that stored about 1,000 bits of information (128 “bytes,” where a byte is defined as 8 bits). With that announcement, core memories soon became obsolete. Intel’s emphasis on a memory chip was no accident and pointed to a potential problem with the invention: one could construct a complex circuit on a single chip of silicon, but the more complex the circuit, the more specialized its function, and hence the narrower its market. That has been a fundamental dilemma of mass production ever since Henry Ford tried to standardize his Model T. Memory chips avoided this problem, since their circuits were regular and filled a need for mass storage that every computer had. But for other circuits the problem remained. We look at how it was solved in the next chapter.
IBM’s System/360
In the midst of this revolution in semiconductor electronics, IBM introduced a new line of mainframe computers that transformed the high end as much as these events transformed the low end of computing. In 1964 IBM announced the System/360 line of mainframes. The name implied that the machines would address the full circle of scientific and business customers, who previously would have bought or leased separate lines of products. The System/360 was not just a single computer but a family of machines, from an inexpensive model intended to replace the popular IBM 1401 to high-end computers that were optimized for numerical calculations. Because each model had the same instruction set (with exceptions), software written for a small model could be ported to a higher-end model as a customer’s needs grew, thus preserving customers’ investment in the programs they had developed.
With this announcement, IBM “bet the company,” in the words of a famous magazine article of the day.3 It invested enormous resources—unprecedented outside the federal government—in not only the new computers but also tape and disk drives, printers, card punches and readers, and a host of other supporting equipment. IBM also invested in a new operating system and a new programming language (PL/I) that eventually were delivered but were less successful. Fortunately, extensions to existing FORTRAN and COBOL software, as well as operating systems developed independently of IBM’s main line, saved the day. The announcement strained the company’s resources, but by the end of the 1960s, IBM had won the bet. The company not only survived; it thrived—almost too much, as it became the target of a federal antitrust suit as a result of the market share it had gained by the late 1960s.
Did the System/360 in fact cover the full range of applications? Certainly there was no longer a need for customers to choose between a scientific-oriented machine (like the IBM 7090) and a comparable business data processing machine (the corresponding IBM product was the 7080). Low-end System/360 models did not, however, extend downward into the PDP-8 minicomputer range. High-end models had trouble competing with so-called supercomputers developed by the Control Data Corporation under the engineering leadership of the legendary designer Seymour Cray. In particular, the CDC 6600, designed by Cray and announced around the time of IBM’s System/360 announcement, was successfully marketed to U.S. laboratories for atomic energy, aerodynamics, and weather research. In 1972 Cray left Control Data to found a company, named after himself, that for the following two decades continued offering high-performance supercomputers that few competitors could match. IBM did not worry much about the minicomputer threat, but it was concerned about Control Data’s threat to the high end of the 360 line.
Another blow to the System/360 announcement was especially galling to IBM: researchers at MIT’s Project MAC concluded that the System/360’s architecture was ill suited for time-sharing. Although MIT had been using IBM 7090 mainframes for initial experiments, for the next phase of Project MAC, researchers chose a General Electric computer, which they thought would better suit their needs. This choice hardly affected IBM’s sales—the company was delivering as many System/360s as it could manufacture. But some within IBM believed that the batch-oriented mode of operation, and by implication the entire 360 architecture, would soon be obsolete in favor of conversational, time-shared systems. IBM responded by announcing the System/360 Model 67, which supported time-sharing, but its performance was disappointing. Eventually IBM was able to offer more robust time-sharing systems, but it was the networked personal workstation, not time-sharing, that overturned batch processing, and that did not come until many years later. MIT had trouble scaling up the time-sharing model, and we saw that the Advanced Research Projects Agency (ARPA), initially a strong supporter of time-sharing, turned to other avenues of research that it thought were more promising, including the networking that led to the ARPANET (to which a number of System/360 computers were connected). Time-sharing did not go away; it evolved into the client-server architecture, an innovation that came from a laboratory located in Silicon Valley and run by the Xerox Corporation. Large, dedicated mainframes, now called servers, store and manipulate massive amounts of data, delivering those data over high-speed networks to powerful personal computers, laptops, and other smart devices, which are nothing like the “dumb” terminals of the initial time-sharing model. These clients do a lot of the processing, especially graphics. The origins of client-server computing are intimately connected with the transition from the ARPANET to the Internet as it exists today, a story examined in more detail in chapter 6.
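The division of labor just described can be illustrated with a minimal sketch, not drawn from the original text: a server process that holds the data, and a client that fetches the data over the network and does its own processing locally. The hostname, port number, and sample data are arbitrary choices for illustration.

import socket
import threading

HOST, PORT = "localhost", 9090  # arbitrary values for this sketch

def serve(listening_socket):
    # The "server" role: hold the data and hand it out on request.
    readings = [3, 1, 4, 1, 5, 9, 2, 6]
    conn, _ = listening_socket.accept()
    with conn:
        conn.sendall(",".join(str(r) for r in readings).encode())

# Bind and listen before the client tries to connect.
server_socket = socket.create_server((HOST, PORT))
server_thread = threading.Thread(target=serve, args=(server_socket,))
server_thread.start()

# The "client" role: fetch the raw data over the network, then process it locally.
with socket.create_connection((HOST, PORT)) as sock:
    raw = sock.recv(1024).decode()  # a real client would loop until the stream closes
server_thread.join()
server_socket.close()

readings = [int(x) for x in raw.split(",")]
print("average computed on the client:", sum(readings) / len(readings))

In the time-sharing model, by contrast, the terminal simply sent each keystroke to the central machine and displayed whatever came back, doing essentially no computation of its own.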
IBM was aware of the invention of the integrated circuit and its implications for computer design, but for the System/360 it chose instead to develop its own type of circuits, called Solid Logic Technology, in which individual components were deposited on a ceramic wafer. The decision was based on IBM’s ability to manufacture these devices reliably and in quantity, which mattered more than the potential of the IC to surpass Solid Logic Technology in performance. That was the case in 1964 when the announcement was made; however, the rapid pace of IC technology coming out of Silicon Valley caused IBM to reconsider that decision. It shifted to ICs for the follow-on System/370, introduced in 1970. In most other respects, the System/370 was an evolutionary advance, keeping the basic architecture and product line of the 1964 announcement. By the mid-1970s, the mainframe world had made the transition to ICs, with IBM dominating, followed by a “BUNCH” of competitors: Burroughs, Univac, National Cash Register, Control Data, and Honeywell (Honeywell had bought General Electric’s computer line in 1970, and UNIVAC, now Sperry-UNIVAC, took over RCA’s customer base in 1971). This world coexisted somewhat peacefully with the fledgling minicomputer industry, dominated by the Digital Equipment Corporation, whose main competitors were Data General, Hewlett-Packard, a separate division of Honeywell, Modular Computer Systems, and a few others.