On July 1, 1948, Bell Telephone Laboratories announced the creation of a tiny new electronic device that might just change the world. Making such an announcement required the device to have a name. And so, earlier that year, the team of scientists responsible had set up a committee to do just this. On May 28, they circulated an internal memorandum titled “terminology for semiconductor triodes,” which explained that “on the subject of a generic name to be applied to this class of devices, the committee is unable to make a unanimous recommendation.” A ballot sheet was thus included, inviting recipients to “designate by the numbers 1, 2 and 3, the order of your preference for the names listed below: semiconductor triode, surface states triode, crystal triode, solid triode, iotatron, transistor, (Other suggestion).”4
There are no prizes for guessing which was the winner: transistor, a term mixing varistor (a semiconductor diode whose resistance depends on the applied voltage) and transconductance (the ratio of the change in current at an output terminal to the change in voltage at an input terminal) into a distinctly more manageable whole. “It may have far-reaching significance in electronics and electrical communication,” Bell Labs noted in its press release—and, as author James Gleick tells the tale in his 2011 book The Information, “for once the reality surpassed the hype.”5
By replacing the fragile vacuum tubes of early computing with something robust, efficient, and tiny, the transistor ushered in an age of ever-faster and ever-smaller electronic devices. Transistor radios—first glimpsed at Bell’s July 1948 press conference, when a prototype was used to demonstrate the component’s potential—would go on to become the most popular electronic devices in history, a title they held until mobile phones surpassed them.
The word transistor itself hadn’t been the product of a committee, however. John Robinson Pierce, a Bell Labs engineer, created it for the ballot. A man with more sensitivity to language than most of his peers, he wrote science fiction under the pseudonym J. J. Coupling—and, among other distinctions, pioneered coast-to-coast television transmission by using balloons to bounce signals.
Truly portable electronic communications had arrived, built from hundreds, then thousands, then millions, and then billions of transistors (as of 2015, the most advanced commercially available CPUs have over 5 billion transistors apiece). And these remarkably steady advances in miniaturization have given the Digital Age one of its most iconic observations: Moore’s law. This originated in a 1965 paper by Gordon E. Moore, who would go on to cofound Intel, published under the title “Cramming More Components onto Integrated Circuits,” which observed that the number of components within circuits looked set to double annually at least until 1975.6
Annual doubling may have been slightly ambitious—the actual trend worked out at more like a doubling every two years—but, remarkably, this approximate rate of growth has held true for the fifty years since Moore’s observation. It’s an astonishing thought, not least because every two years brings as great an increase in computing power as the entire previous history of computing to that point. Will it end? Someday, yes. Soon? Perhaps—although how fast and how smart computers will be by then is one of technology’s great debates (and some technologists’ great anxieties).
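The arithmetic behind these claims is easy to check. The sketch below assumes an idealized, perfectly clean doubling every two years (real transistor counts only approximate this), and shows both the fifty-year growth factor and why each doubling outweighs all previous growth combined:

```python
# Back-of-envelope arithmetic for Moore's law, assuming an idealized
# doubling of transistor counts every two years.

def moores_law_factor(years: int, doubling_period: int = 2) -> int:
    """Total growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years // doubling_period)

# Fifty years of two-year doublings: 2**25, roughly a 33-million-fold increase.
print(moores_law_factor(50))  # 33554432

# Each doubling adds more than all previous growth combined: after n
# doublings the total is 2**n, while every earlier stage together sums
# to 1 + 2 + 4 + ... + 2**(n - 1) = 2**n - 1.
n = 25
print(2 ** n > sum(2 ** k for k in range(n)))  # True
```

The second check is just the geometric-series identity: because the series sums to one less than the next power of two, each step alone exceeds the entire history before it.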