Introduction

A familiar version of Zeno’s paradox states that it is impossible for a runner to finish a race. First, he must traverse one-half of the distance to the finish, which takes a finite time; then he must traverse one-half of the remaining distance, which takes a shorter but also finite time; and so on. Reaching the finish line would thus require an infinite number of finite time intervals, and so the race can never be won.

The history of computing likewise can never be written. New developments transform the field while one is writing, rendering obsolete any attempt to construct a coherent narrative. A decade ago, historical narratives focused on computer hardware and software, with an emphasis on the IBM Corporation and its rivals, including Microsoft.1 Those topics remain important, but they no longer seem so central. Five years ago, narratives focused on “the Internet,” especially in combination with the World Wide Web and online databases. Stand-alone computers were important, but the network and its effects were the primary topics of interest. That has changed once again, to an emphasis on a distributed network of handheld devices linked to a cloud of large databases, video, audio, satellite-based positioning systems, and more. In the United States the portable devices are called “smart phones”—the name coming from the devices from which they descended—but making phone calls seems to be the least interesting thing they do. The emphasis on IBM, Microsoft, and Netscape has given way to narratives that place Google and Apple at the center. Narratives now seem compelled to mention Facebook and Twitter at least once in every paragraph. Meanwhile the older technologies, including mainframe computers, continue to hum along in the background. And the hardware on which all of this takes place continues to rely on a device invented in the early 1970s: the microprocessor.

Mathematicians have refuted Zeno’s paradox: the infinitely many intervals form a geometric series, one-half plus one-fourth plus one-eighth and so on, whose sum is finite, so the runner does finish. This narrative will likewise attempt to refute the paradox as it tells the story of the invention and subsequent development of digital technologies. It is impossible to guess what the next phase of computing will be, but whatever form it takes, it will likely manifest four major threads that run through the story.

The Digital Paradigm

The first of these is a digital paradigm: the notion of coding information, computation, and control in binary form, that is, in a number system that uses only two symbols, 1 and 0, instead of the more familiar decimal system that human beings, with their ten fingers, have used for millennia. It is not just the use of binary arithmetic, but also the use of binary logic to control machinery and encode instructions for devices, and of binary codes to transmit information. This insight may be traced at least as far back as George Boole, who described laws of logic in 1854, or before that to Gottfried Wilhelm Leibniz (1646–1716). The history that follows discusses the often-cited observation that “digital” methods of calculation prevailed over “analog” methods. In fact, both terms came into use only in the 1930s, and the two were never sharply distinct in that formative period. The distinction is nonetheless valid and worth a detailed look, not only at its origins but also at how it has evolved.
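
To make the paradigm concrete, the following short Python sketch (an illustration added here, not drawn from the history itself) shows a year and a letter of text reduced to the same two-symbol alphabet:

    # Illustrative only: numbers, text, and instructions alike reduce
    # to strings of the two symbols 0 and 1 inside a digital machine.
    n = 1854                      # the year of Boole's laws of logic
    print(format(n, "b"))         # prints 11100111110, the binary digits of 1854

    c = "A"
    print(format(ord(c), "08b"))  # prints 01000001, the letter 'A' as eight bits

The same string of bits can stand for a number, a character, or a machine instruction; what it means depends entirely on how the machine is told to interpret it.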

Convergence

A second thread is the notion that computing represents a convergence of many different streams of techniques, devices, and machines, each coming from its own separate historical avenue of development. The most recent example of this convergence is found in the smart phone, a merging of many technologies: telephone, radio, television, phonograph, camera, teletype, computer, and a few more. The computer, in turn, represents a convergence of other technologies: devices that calculate, store information, and embody a degree of automatic control. The result, held together by the common glue of the digital paradigm, yields far more than the sum of the individual parts. This explains why such devices prevail so rapidly once they pass a certain technical threshold, for example, why digital cameras, almost overnight around 2005, drove chemical-based film cameras into a small niche.

Solid-State Electronics

The third has been hinted at in relation to the second: this history has been driven by a steady advance of the underlying electronics technology. That advance has been going on since the beginning of the twentieth century; it accelerated dramatically with the advent of solid-state electronics after 1960. The shorthand description of the phenomenon is “Moore’s law”: an empirical observation made in 1965 by Gordon Moore, a chemist working in what later became known as Silicon Valley in California. Moore observed that the storage capacity of computer memory chips was increasing at a steady rate, doubling every eighteen months. That rate has held steady over the following decades. Moore was describing only one type of electrical circuit, but variants of the law are found throughout this field: in the processing speed of computers, the capacity of communications lines, the storage capacity of disks, and so forth. He was making an empirical observation; the law may not continue to hold, but as long as it does, it raises interesting questions for historians. Is it an example of “technological determinism,” the idea that technological advances drive history? The cornucopia of digital devices that prevails in the world today suggests that it is. The concept that technology drives history is anathema to historians, who argue that innovation also works the other way around: social and political forces drive inventions, which in turn shape society. The historical record suggests that both are correct, a paradox as puzzling as Zeno’s but even harder to disentangle.
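
As a rough back-of-the-envelope illustration (added here, using the eighteen-month figure cited above), steady doubling compounds quickly; in Python:

    # Illustrative only: compound growth implied by doubling every 18 months.
    DOUBLING_PERIOD_MONTHS = 18

    def growth_factor(years):
        """How many times larger a capacity becomes after the given number of years."""
        return 2 ** (years * 12 / DOUBLING_PERIOD_MONTHS)

    print(round(growth_factor(10)))   # roughly a hundredfold in one decade
    print(round(growth_factor(20)))   # roughly ten-thousandfold in two decades

Whatever quantity a given variant of the law describes, the arithmetic is the same: a steady doubling period yields growth of several orders of magnitude within a working lifetime.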

The Human-Machine Interface

The final thread of this narrative concerns the way human beings interact with digital devices. The current jargon calls this the user interface. It goes to the philosophical roots of computing and is one reason that the subject is so fascinating. Are we trying to create a mechanical replacement for a human being, or a tool that works in symbiosis with humans, an extension of the human’s mental faculties? These debates, once found only among science-fiction writers, are not going away and will only intensify as computers become more capable, especially as they acquire an ability to converse in natural language. This theme is a broad one: it ranges from philosophical questions about humanity to detailed questions of machine design. How does one design a device that humans can use effectively, one that takes advantage of our motor skills (e.g., using a mouse or touch screen) and our ability to sense patterns (e.g., icons), while providing us with information that we are not good at retaining (e.g., Google or Wikipedia)? We shall see that these detailed questions of human use were first studied intensively during World War II, when the problem of building devices to compute the paths of enemy aircraft and direct fire to intercept them became urgent.

With those four themes in mind—the digital paradigm, convergence, solid-state electronics, and the human-machine interface—what follows is a summary of the development of the digital information age.