2

The First Computers, 1935–1945

In summer 1937, Konrad Zuse was a twenty-seven-year-old mechanical engineer working at the Henschel Aircraft Company in Berlin. Under the Nazi regime, Germany was arming itself rapidly, although Zuse recalled that neither he nor his fellow young engineers foresaw the war and destruction that would come two years later. He was occupied with tedious calculations relating to the design of aircraft. He began work, on his own time, on a mechanical calculator that would automate that process. Zuse was one of several engineers, scientists, and astronomers across Europe and in the United States who were thinking along the same lines as they struggled with the limitations of existing calculating devices. In June 1937, however, he made a remarkable entry in his diary: “For about a year now I have been considering the concept of a mechanical brain. . . . Discovery that there are elementary operations for which all arithmetic and thought processes can be solved. . . . For every problem to be solved there must be a special purpose brain that solves it as fast as possible.”1

Zuse was a mechanical engineer. He chose the binary, or base-2, system of arithmetic for his proposed calculator because he recognized the inherent advantages of switches or levers that could assume one of only two, instead of ten, positions. But as he began sketching out a design, he had an insight that is fundamental to the digital age that has followed: he recognized that the operations of calculation, storage, control, and transmission of information, until that time traveling on separate avenues of development, were in fact one and the same. In particular, the control function, which had not been mechanized as much as the others by 1937, could be reduced to a matter of (binary) arithmetic. That was the basis for his use of the terms mechanical brain and thought processes, which must have sounded outrageous at the time. These terms still raise eyebrows when used today, but with each new advance in digital technology, they seem less and less unusual. Zuse realized that he could design mechanical devices that could be flexibly rearranged to solve a wide variety of problems: some requiring more calculation, others more storage, and each requiring varying degrees of automatic control. In short, he conceived of a universal machine. Today we are familiar with its latest incarnation: a handheld device that, thanks to numerous third-party application programs called “apps,” can do almost anything: calculate, play games, view movies, take photographs, locate one’s position on a map, record and play music, send and process text—and, by the way, make phone calls (see figure 2.1).

Figure 2.1

Programs for a computer built in Berlin by Konrad Zuse, circa 1944, using discarded movie film. Zuse’s program-controlled calculators may have been the first to realize Babbage’s vision of an automatic calculator. Most of Zuse’s work was destroyed during World War II, but one of his machines survived and was used into the 1950s. (Credit: Konrad Zuse)

Zuse recalled mentioning his discovery to one of his former mathematics professors, only to be told that the theory Zuse claimed to have discovered had already been worked out by the famous Göttingen mathematician David Hilbert and his students.2 But that was only partially true: Hilbert had worked out a relationship between arithmetic and binary logic, but he did not extend that theory to a design of a computing machine. The Englishman Alan M. Turing (1912–1954) had done just that, in a thirty-six-page paper published in the Proceedings of the London Mathematical Society the year before, in 1936.3 (Zuse was unaware of Turing’s paper until years later; he learned of Babbage only when he applied for a German patent, and the patent examiner told him about Babbage’s prior work.) So although Zuse took the bold step of introducing theoretical mathematics into the design of a mechanical calculator, Turing took the opposite but equally bold step: introducing the concept of a “machine” into the pages of a theoretical mathematics journal.

In his paper, Turing described a theoretical machine to help solve one of the problems that Hilbert himself had proposed at the turn of the twentieth century.4 Solving it placed Turing among the elite of mathematicians. Mathematicians admired his solution, but it was his construction of this “machine” that placed Turing among the founders of the digital age. I put the term in quotation marks because Turing built no hardware; he described a hypothetical device in his paper. It is possible to simulate the workings of a Turing machine on a modern computer, but only in a restricted sense: Turing’s machine had a memory of infinite capacity, which Turing described as a tape of arbitrary length on which symbols could be written, erased, or read. No matter. The machine he described, and the method by which it was instructed to solve a problem, was the first theoretical description of the fundamental quality of computers: a computer can be programmed to perform an almost infinite range of operations if human beings can devise formal methods of describing those operations. The classical definition of a machine is of a device that does one thing and does it well; a computer is by contrast a universal machine, whose applications continue to surprise its creators. Turing formalized what Zuse had recognized from an engineer’s point of view: a general-purpose computer, when loaded with a suitable program, becomes “a special purpose brain,” in Zuse’s words, that does one thing—whatever the programmer wants it to do.5
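
Turing’s tape, head, and table of instructions are concrete enough to sketch in a few lines. The simulation below is illustrative only: the rule table is an invention of this example, not one of Turing’s own machines, and a Python dictionary stands in for the infinite tape, growing only as far as the head actually travels. The sample table increments a binary number written on the tape.

```python
# A minimal Turing machine simulation. The tape is a dict with a blank
# default, so only visited cells exist; the head starts at the rightmost
# symbol of the input.
from collections import defaultdict

def run(rules, tape_str, state="start"):
    tape = defaultdict(lambda: " ", enumerate(tape_str))
    head = len(tape_str) - 1
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip()

# (state, symbol read) -> (symbol to write, head move, next state).
# This table increments a binary number: a carry ripples leftward.
increment = {
    ("start", "1"): ("0", "L", "start"),  # 1 + carry -> 0, keep carrying
    ("start", "0"): ("1", "L", "done"),   # absorb the carry
    ("start", " "): ("1", "L", "done"),   # carry off the left end
    ("done",  "1"): ("1", "L", "done"),   # walk left over the result
    ("done",  "0"): ("0", "L", "done"),
    ("done",  " "): (" ", "R", "halt"),
}

print(run(increment, "1011"))  # 1011 (eleven) becomes 1100 (twelve)
```

The essential point survives the simplification: the hardware (the `run` loop) is fixed, and all the problem-specific behavior lives in the table of rules fed to it.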

Beginning in the mid-1930s, as people began designing machines that could carry out a sequence of operations, they rediscovered Babbage’s prior work. In fact it had never been lost. Descriptions of Babbage’s designs had been published and were available in research libraries, and fragments of his machines had been preserved in museums. But because Babbage had failed to complete a general-purpose computer, many concluded that the idea itself was flawed. Babbage had not gotten far enough in his work to glimpse the theories that Turing, and later John von Neumann, would develop. He did, however, anticipate the notion of the universality of a programmable machine. In a memoir published in 1864 (for many years out of print and hard to find), Babbage remarked: “Thus it appears that the whole of conditions which enable a finite machine to make calculations of unlimited extent are fulfilled in the Analytical Engine. . . . I have converted the infinity of space, which was required by the conditions of the problem, into the infinity of time.”6

A contemporary of Babbage, Ada Augusta, had the same insight in annotations she wrote for an Italian description of Babbage’s engine. On that basis Augusta is sometimes called the world’s “first” programmer, but that gives her too much credit. Nevertheless, she deserves credit for recognizing that a general-purpose programmable calculator by nature is nothing like the special-purpose devices that people associated with the term machine. One can only speculate how she might have elaborated on this principle had Babbage gotten further with his work.

There was a renewed interest in such machines in the mid-1930s. But it took another twenty years before hardware reached a level of reliability and performance at which these theoretical properties could matter. Until about 1950, it was a major accomplishment if one could get an electronic computer to operate without error for even a few hours. Nonetheless, Turing’s insight was significant.7 By the late 1940s, after the first machines based on ad hoc designs began working, there was a vigorous debate about computer design among engineers and mathematicians. From those debates emerged a concept, known as the stored program principle, that extended Turing’s ideas into the design of practical machinery. The concept is usually credited to the Hungarian mathematician John von Neumann (1903–1957), but von Neumann’s description of it came only after a close collaboration with the American engineers J. Presper Eckert and John Mauchly at the University of Pennsylvania. And von Neumann was familiar with Turing’s work at least as early as 1938.8 Modern computers store both their instructions—the programs—and the data on which those instructions operate in the same physical memory device, with no physical or design barrier between them. They do so for practical reasons: each application may require a different allocation of memory between instructions and data, so partitioning the storage beforehand is unwise. And they do so for theoretical reasons: programs and data are treated the same inside the machinery because fundamentally they are the same.
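
The principle can be made concrete with a toy machine. The three-cell instruction format below is invented for illustration; what matters is that the instructions are just numbers sitting in the same array as the data they operate on.

```python
# A toy stored-program machine: one memory array holds both the program
# and its data. Each instruction occupies three cells: opcode, operand a,
# operand b. The instruction set is invented for this example.

def run(memory):
    pc = 0  # program counter: instructions are just numbers in memory
    while True:
        op, a, b = memory[pc], memory[pc + 1], memory[pc + 2]
        if op == 0:                  # HALT
            return memory
        elif op == 1:                # ADD: mem[b] += mem[a]
            memory[b] += memory[a]
        elif op == 2:                # SET: mem[b] = the constant a
            memory[b] = a
        pc += 3

# The program occupies cells 0-8; its data lives in cells 9-10 of the
# very same memory.
memory = [2, 5, 9,    # mem[9] = 5
          1, 9, 10,   # mem[10] += mem[9]
          0, 0, 0,    # halt
          0, 0]       # cells 9 and 10: data
print(run(memory)[10])  # prints 5
```

Nothing but convention marks cells 0 through 8 as program and cells 9 and 10 as data; an instruction could just as well overwrite another instruction, which is precisely the absence of barrier the text describes.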

Zuse eventually built several computing devices, which used a mix of mechanical and electromechanical elements (e.g., telephone relays) for calculation and storage. He used discarded movie film, punched with holes, to code a sequence of operations. His third machine, the Z3, completed in 1941, was the first to realize Babbage’s dream. Its existence was little known outside Germany until well after the war’s end, but Zuse eventually was given credit for his work, not only in building this machine but also in his theoretical understanding of what a computer ought to be. The mainstream of computing development, however, now moved to the United States, although the British had a head start. The early American computers were designed with little theoretical understanding. The notion of basing a design on the principles of symbolic logic, whether from Boole, Hilbert, or others, was not adopted until the early 1950s, when California aerospace companies began building digital computers for their specialized aerospace needs.

Along with Zuse, other mathematicians, astronomers, and engineers began similar efforts in the late 1930s. Some adopted punched cards or perforated paper tape—modifications of existing Teletype or IBM storage media—to encode a sequence of operations, which a corresponding control mechanism would read and transmit to a calculating mechanism.9 Many used IBM tabulators or telephone switching equipment, which transmitted signals electrically at high speed but calculated mechanically at a lower speed. Other projects used vacuum tubes to do the actual counting, which increased the overall speed of computation by several hundred-fold or more. It is from these latter experiments that the modern electronic digital computer has descended, but the early attempts to compute electronically must be considered in the context of the broader effort to automate the procedure for solving a problem.

Of the many projects begun in that era, I mention only a few. I have selected these not so much because they represent a first of some kind, but because they illustrate the many different approaches to the problem. Out of these emerged a configuration that has survived into this century through all the advances in underlying technology.

In 1934, at Columbia University in New York City, Wallace Eckert founded a laboratory that used IBM equipment to perform calculations related to his astronomical research, including an extensive study of the motion of the moon, drawing on the earlier use of punched cards by the British astronomer L. J. Comrie. The two astronomers used the machines to do calculations that the machines were not designed for, taking advantage of the ability to store mathematical tables as decks of punched cards that could be used over and over without errors of transcription that might occur when consulting printed tables.10 At first Eckert modified that equipment only slightly, but in the following decade, he worked with IBM to develop machinery that used cables and switches, controlled by cards that had commands, not data, punched on them. In essence, those cables and switches replicated the actions of the human beings who operated a 1930s-era punched card installation.11

While Eckert was at Columbia University, Howard Aiken, a physics instructor at Harvard University, faced a similar need for calculating machinery. He proposed a machine that computed sequences directly, as directed by a long strip of perforated paper tape. Aiken’s machine was thus more in line with what Babbage had proposed, and indeed Aiken’s proposal acknowledges Babbage’s prior work. By the late 1930s, Aiken was able to take advantage of electrical switching technology, as well as the high degree of mechanical skills developed at IBM. Aiken’s proposal, written in 1937, described in detail why he felt that existing punched card machines, even when modified, would not suit his needs.12 Eckert had considered such a special-purpose calculator but resisted having one built, believing that such a device would be very expensive and would take several years to build. Eckert was correct: although Aiken’s proposal came to fruition as the Automatic Sequence Controlled Calculator, unveiled publicly at Harvard in 1944, it did take several years to design and build (at IBM’s Endicott, New York, laboratories). And it would not have been built had a world war not been raging and had Aiken not received financial support from the U.S. Navy.13

The Advent of Electronics

All of the machines described used electricity to carry signals, but none used electronic devices to do the actual calculation. Among those who first took that crucial step, around 1938, was J. V. Atanasoff, a professor of physics at Iowa State College in Ames. He came to that step from an investigation of ways to mechanize the solution of large systems of linear algebraic equations, which appear throughout many fields of physics and related sciences. The equations themselves were relatively simple—“linear” implying that they form a straight line when graphed. What was most intriguing about this problem was that a method of solution of the systems had been known and described mathematically for at least a century, and the method could be written as a straightforward sequence of operations. In other words, the procedure for their solution was an algorithm—a recipe that, if followed, guaranteed a solution. The solution of a large system required many such steps, however, and above a small threshold of complexity, it was impractical for human beings to carry them out. What was needed was, first of all, a way to perform arithmetic more rapidly than was done with mechanical calculators and, second, a method of executing the sequence of simple steps that would yield a solution.
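
The century-old method alluded to is, in all likelihood, Gaussian elimination. A bare sketch (omitting the pivoting that a robust implementation needs) shows how mechanical the recipe is: every step is a simple multiply-and-subtract, repeated an impractical number of times for a large system.

```python
# Gaussian elimination as a fixed recipe of simple arithmetic steps.
# This is a sketch without pivoting, so it assumes no zero appears on
# the diagonal during elimination.

def solve(a, b):
    """Solve a*x = b by forward elimination, then back-substitution."""
    n = len(b)
    for col in range(n):
        # Eliminate this column from every row below the diagonal.
        for row in range(col + 1, n):
            factor = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= factor * a[col][k]
            b[row] -= factor * b[col]
    # Back-substitute, starting from the last equation.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (b[row] - s) / a[row][row]
    return x

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3.
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```

Each inner step is arithmetic a desk calculator could do; the difficulty Atanasoff faced was purely one of volume, since the number of steps grows roughly with the cube of the number of equations.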

It was to address the first of those problems that Atanasoff conceived of the idea of using vacuum tubes. Like his contemporary Zuse, he also saw the advantages of using the binary system of arithmetic, since it made the construction of the calculating circuits much easier. He did not carry that insight over to adopting binary logic for sequence control, though. Atanasoff’s machine was designed to solve systems of linear equations, and it could not be programmed to do anything else. The proposed machine would have a fixed sequence coded into a rotating drum.14

In a proposal he wrote in 1940 to obtain support for the project, Atanasoff described a machine that computed at high speeds with vacuum tubes. He remarked that he considered “analogue” techniques but discarded them in favor of direct calculation (that is probably the origin of the modern term analog referring to computing machinery). With financial support from Iowa State College and engineering support from Clifford Berry, a colleague, he completed a prototype that worked, although erratically, by 1942. That year he left Iowa for the Washington, D.C., area, where he was pressed into service working on wartime problems for the navy. Thus, while the onset of World War II made available large sums of money and engineering resources for some computer pioneers, the war was a hindrance for others. Atanasoff never completed his machine. Had it been completed, it might have inaugurated the computer age a decade before that happened. Howard Aiken was also called away by the navy, but fortunately for him, IBM engineers and Harvard staff were well on their way to finishing his Sequence Controlled Calculator. In Berlin, Zuse was able to learn of the prior work of Babbage and study the international effort to develop mathematical logic, but after 1940, he worked in isolation, finding it difficult to get skilled workers and money to continue his efforts.

If the onset of war hindered Atanasoff’s attempts to use vacuum tubes, it had the opposite effect in the United Kingdom, where at Bletchley Park, a country estate located in Buckinghamshire, northwest of London, multiple copies of a device called the “Colossus” were in operation by 1944. Details about the Colossus, indeed its very existence, remained a closely guarded secret into the 1970s. Some information about its operation and use remains classified, although one of the first tidbits of information to appear was that Alan Turing was involved with it. The work at Bletchley Park has an odd place in the history of computing. By the 1970s, several books on this topic had appeared, and these books set a pattern for the historical research that followed. That pattern emphasized calculation: a lineage from Babbage, punched card equipment, and the automatic calculators built in the 1930s and 1940s. The Colossus had little, if any, numerical calculating ability; it was a machine that processed text. Given the overwhelming dominance of text on computers and the Internet today, one would assume that the Colossus would be heralded more than it is. It has not been, in part because it was unique among the 1940s-era computers in that it did no numerical calculation, which is implied by the definition of the word computer. By the time details about the Colossus became known, the notion of “first” computers had (rightly) fallen out of favor among historians. The Colossus did operate at electronic speeds, using vacuum tubes for both storage and processing of data. And its circuits were binary, with only two allowable states, exploiting the ability to follow the rules of symbolic logic.15

The Colossus was not the only computing machine in use at Bletchley. Another machine, the Bombe, used mechanical wheels and electric circuits to decode German messages that had been encrypted by the Enigma machine. The Bombes were the result of an interesting collaboration between the mathematicians at Bletchley and engineers at the American firm National Cash Register in Dayton, Ohio, where much of the hardware was manufactured. The Enigma resembled a portable typewriter, and the Germans used it to scramble text by routing typed characters through a series of wheels before transmission. The Bombes were in a sense Enigma machines running in reverse, with sets of wheels that tested possible code combinations. The Colossus, by contrast, attacked messages that the Germans had encrypted electronically by a German version of the Teletype. One could say that the Bombes were reverse-engineered Enigmas, while the Colossi were protocomputers, programmed to decode teletype traffic.

The work at Bletchley shortened the war and may even have prevented a Nazi victory in Western Europe. The need for secrecy limited, but did not prevent, those who built and used the machines from transferring their knowledge to the commercial sector. The British did establish a computer industry, and many of those who worked at Bletchley were active in that industry, including pioneering work in marketing computers for business and commercial use. But the transfer of the technology to the commercial world was hindered, as the value of cryptography to national security did not end in 1945. It is as critical, and as secret, today as it ever was.

The Colossus machines may have been destroyed at the end of the war, but even if they were not, there was no easy path to build a commercial version of them. The United States did a better job in transferring wartime computing technology to peaceful uses, although National Cash Register did not exploit its experience building the Bombes to further its business machines technology. After the war, American code breakers working for the navy used their experience to develop an electronic computer, later marketed and sold commercially by the Minneapolis firm Engineering Research Associates. The ERA 1101 was a general-purpose electronic computer, the marketing of which did not divulge secrets of how it may have been used behind security walls. In the United States, such code-breaking work is concentrated at the National Security Agency (NSA), whose headquarters is at Fort Meade, Maryland. The ERA computers were not the only examples of technology transfer from the secret world of code breaking. In spite of its need for secrecy, the NSA has published descriptions of its early work in computing, enough to show that it was at the forefront of research in the early days.16 We do not know where it stands today or how its work dovetails with, say, the parallel cutting-edge work on text analysis going on at places like Google, on the opposite coast of the United States.

Fire Control

At the same time that the development of machinery for code breaking was underway, an intensive effort in the United States and Britain was also directed toward the problem of aiming antiaircraft guns, or “fire control”: the topic of the secret meeting of the National Defense Research Committee (NDRC) that George Stibitz attended, where he suggested the term digital for a class of devices.

The NDRC was established in June 1940. Its chair was Vannevar Bush, an MIT professor of electrical engineering who had moved to Washington, D.C., to assume the presidency of the Carnegie Institution. In the late 1930s, Bush was among those who saw a need for new types of calculating machinery to assist scientists and engineers, and who made the even more radical observation that such machines, when completed, would revolutionize large swaths of pure as well as applied mathematics.

While at MIT, Bush developed an analog computer called the Differential Analyzer, one of many devices that used a spinning disk to solve differential equations (power companies use a similar wheel in the meters that compute kilowatt-hours consumed by a residence). He and his students explored a variety of other mechanical and electronic devices for fire control and for solving other mathematical problems, including cryptography. As early as 1938, Bush proposed a “rapid arithmetical machine” that would calculate using vacuum tubes. With his move from Cambridge to Washington coinciding with the outbreak of war in Europe in 1939, the priorities shifted. Work on the rapid arithmetical machine continued, and although its design was quite advanced, a working system was never completed. A bachelor’s and later master’s thesis by Perry Crawford, an MIT student, described what would have been a very sophisticated electronic digital computer had it been implemented.17 Another MIT student, Claude Shannon, had a part-time job operating the differential analyzer, and from his analysis of the relays used in it, he recognized the relationship between the simple on-off nature of relay switching and the rules of binary arithmetic. That became the basis for his master’s thesis, published in 1938.18 It has been regarded as a foundational document in the emergence of the digital age. It mirrored what Zuse had independently discovered in Berlin around the same time, and the thesis put on solid theoretical grounds what had been discovered on an ad hoc basis elsewhere. George Stibitz independently discovered this principle in 1937, and after building a breadboard circuit at his home, he went on to oversee the construction of several digital fire-control devices at Bell Labs.
And long before 1937, railroads had come up with the idea of an “interlocking”: a mechanical or electromechanical apparatus that ensured that in the complex switching in a rail yard, no two trains would be sent along the same track at the same time—an example of what computer scientists would later call an “exclusive-or” circuit.
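
The correspondence Shannon formalized can be stated in a couple of lines of code: treat each relay as a Boolean value, and simple combinations of them obey the rules of binary arithmetic. This sketch, in Python rather than relays, builds the “half adder” at the heart of that correspondence: exclusive-or yields the sum digit, AND yields the carry.

```python
# Each "relay" is simply on (True) or off (False). Wiring two of them
# through exclusive-or and AND realizes one digit of binary addition.

def half_adder(a, b):
    total = a != b    # exclusive-or: exactly one relay is closed
    carry = a and b   # AND: both relays are closed
    return total, carry

# The full truth table for two one-bit inputs.
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(int(a), "+", int(b), "=", int(c), int(s))
```

Chaining such stages, with each carry fed into the next, adds numbers of any width, which is why the on-off behavior of relays was enough to put binary arithmetic on a solid circuit-theoretic footing.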

As the war progressed, especially after December 1941, any project that could address the problem of aiming and directing guns, especially against enemy aircraft, received the highest support. Among those pressed into service was Norbert Wiener, an MIT mathematician who worked out mathematical theories of how to track a target in the presence of both electrical and other noise in the tracking system, and in the face of an enemy pilot’s ability to take evasive action. Wiener proposed building special-purpose machinery to implement his ideas, but his proposals were not pursued. His mathematical theories turned out to have a profound impact, not only on the specific problem of fire control but on the general question of control of machinery by automatic means. In 1948 Wiener coined the term cybernetics and published a book of that name. It was one of the most influential books about the coming digital era, even if it did not directly address the details of digital electronic computing.19

Two researchers at Bell Laboratories, David Parkinson and Clarence A. Lovell, developed an antiaircraft gun director that used electronic circuits as an analog computer. The M-9 gun director was used effectively throughout the war and, in combination with the proximity fuse, neutralized the effectiveness of the German V-1 robot “buzz bomb” (because the V-1 had no pilot, it could not take evasive action under fire). One interesting feature of the M-9 was that it stored the equations used to compute a trajectory as wire wound around a two-dimensional camshaft, whose geometry mimicked the equation. Its most significant breakthrough was its ability to modify the aiming of the gun based on the ever-changing data fed to it by radar tracking, an ability to direct itself toward a goal without human intervention. The success of the M-9 and other analog fire-control devices may have slowed research on the more ambitious digital designs, but the notion of self-regulation and feedback would be embodied in the software developed for digital systems as they came to fruition.

The Digital Paradigm

A central thesis of this narrative is that the digital paradigm, whose roots lay in Turing’s 1936 paper, Shannon’s thesis, and elsewhere, is the key to the age that defines technology today. The previous discussion of the many devices, digital or analog, special or general purpose, the different theories of computation, and the emphasis on feedback and automatic control is not a diversion. It was out of this ferment of ideas that the paradigm emerged. While Bush’s preference for analog techniques would appear to put him on the wrong side of history, that view neglects the enormous influence that his students and others at MIT and Bell Laboratories had on the articulation of what it meant to be “digital.” The 1940s was a decade when fundamental questions were raised about how human beings should interact with complex control machinery. Do we construct machines that do what is technically feasible and adapt the human to their capabilities, or do we consider what humans cannot do well and try to construct machines that address those deficiencies? The answer is to do both, or a little of each, within the constraints of the existing technological base. Modern tablet computers and other digital devices do not bear much physical resemblance to the fire-control machines of the 1940s, but the questions of the human-machine interface stem from that era.

At the end of the war, Bush turned to a general look at what kinds of information processing machines might be useful in peacetime. He wrote a provocative and influential article for the Atlantic Monthly, “As We May Think,” in which he foresaw the glut of information that would swamp science and learning if it were not controlled.20 He suggested a machine, which he called the “Memex,” to address this issue. As proposed, the Memex would do mechanically what humans do poorly: store and retrieve large amounts of facts. It would also allow its human users to exploit what humans do well: make connections and jump from one thread of information to another. The Memex was never built, but the vision laid out in that magazine article directly influenced the developers of the World Wide Web decades later. Likewise, Norbert Wiener’s Cybernetics did not articulate the digital world as much as the writings of others did. The term, however, was adapted in 1982 by the science-fiction author William Gibson, who coined the word cyberspace—a world of bits. Wiener contributed more than just the word: his theories of information handling in the presence of a noisy environment form a large part of the foundation of the modern information-based world.

The ENIAC

Much of the work described here involved the aiming of antiaircraft guns or guns mounted on ships. The aiming of large artillery also required computation, and to that end, Bush’s differential analyzer was copied and heavily used to compute firing tables used in the field. Human computers, mostly women, also produced these tables. Neither method was able to keep up with wartime demand. From that need emerged a machine called the ENIAC (Electronic Numerical Integrator and Computer), unveiled to the public in 1946 at the University of Pennsylvania’s Moore School of Electrical Engineering in Philadelphia. With its 18,000 vacuum tubes, the ENIAC was touted as being able to calculate the trajectory of a shell fired from a cannon faster than the shell itself traveled. That was a well-chosen example, as such calculations were the reason the army spent over a half-million dollars for the risky and unproven technique of calculating with unreliable vacuum tubes. The ENIAC used tubes for both storage and calculation, and thus could solve complex mathematical problems at electronic speeds.

The ENIAC was designed by John Mauchly and J. Presper Eckert (no relation to Wallace Eckert) at the Moore School. It represented a staggering increase in ambition and complexity over the most ambitious computing machines already in use. It did not arise de novo. In the initial proposal to the army, Mauchly described it as an electronic version of the Bush differential analyzer, taking care to stress its continuity with existing technology rather than the clean break it made. And Mauchly had visited J. V. Atanasoff in Iowa for several days in June 1941, where he most likely realized that computing with vacuum tubes at high speeds was feasible.21 The ENIAC’s design was nothing like either the Babbage or the Atanasoff machines. It used the decimal system of arithmetic, with banks of vacuum tubes that replicated the decimal wheels of an IBM tabulator. The banks of tubes were used for both calculation and storage—there was no separation of the two as Babbage, Zuse, and others had proposed and as is common today. The flow of numbers through the machine was patterned after the flow through the analog differential analyzer.

Of the many attributes that set the ENIAC apart, the greatest was its ability to be programmed to solve different problems. Programming was difficult and tedious. Today we click a mouse or touch an icon to call up a new program—a consequence of the stored program principle. Eckert and Mauchly designed the ENIAC to be programmed by plugging its various computing elements together in different configurations, effectively rewiring the machine for each new problem. It was the only way to program a high-speed device until high-speed memory devices were invented: there was no point in having a device that could calculate at electronic speeds if the instructions were fed to it at mechanical speeds. Reprogramming the ENIAC to do a different job might require days, even if, once rewired, it could calculate an answer in minutes. For that reason, historians are reluctant to call the ENIAC a true “computer,” a term they reserve for machines that can be flexibly reprogrammed to solve a variety of problems. But remember that the “C” in the acronym stood for “computer,” a term that Eckert and Mauchly deliberately chose to evoke the rooms in which women computers operated calculating machines. The ENIAC team also gave us the term to program as applied to a computer. Today’s computers do all sorts of things besides solve mathematical equations, but it was that function for which the computer was invented and from which the machine got its name.