One day ladies will take their computers for walks in the park and tell each other, ‘My little computer said such a funny thing this morning.’

ALAN TURING

I think the world market for computers is maybe … five.

THOMAS J. WATSON, Chairman of IBM, 1943

‘Computers are useless,’ said Pablo Picasso. ‘They can only give you answers.’ But what answers they give! In the past half-century, those answers have dramatically changed our world.1 The computer is unlike any other human invention. A washing machine is a washing machine is a washing machine. It is impossible to change it into a vacuum cleaner or a toaster or a nuclear reactor. But a computer can be a word processor or an interactive video game or a smartphone. And the list goes on and on and on. The computer’s unique selling point is that it can simulate any other machine. Although we have yet to build computers that can fabricate stuff quite as flexibly as human beings, it is merely a matter of time.2

Fundamentally, a computer is just a shuffler of symbols. A bunch of symbols goes in – perhaps the altitude, ground speed, and so on, of an aeroplane; and another bunch of symbols comes out – for instance, the amount of jet fuel to burn, the necessary changes to be made in the angle of ailerons, and so on. The thing that changes the input symbols into the output symbols is a program, a set of instructions that is stored internally and, crucially, is infinitely rewritable. The reason a computer can simulate any other machine is that it is programmable. It is the extraordinary versatility of the computer program that is at the root of the unprecedented, world-conquering power of the computer.

The first person to imagine an abstract machine that shuffles symbols on the basis of a stored program was the English mathematician Alan Turing, famous for his role in breaking the German ‘Enigma’ and ‘Fish’ codes, which arguably shortened the Second World War by several years.3

Turing’s symbol shuffler, devised in the 1930s, is unrecognisable as a computer. Its program is stored on a one-dimensional tape in binary – as a series of 0s and 1s – because everything, including numbers and instructions, can ultimately be reduced to binary digits. Precisely how it works, with a read/write head changing the digits one at a time, is not important. The crucial thing is that Turing’s machine can be fed a description of any other machine, encoded in binary, and then simulate that machine. Because of this unprecedented ability, Turing called it a Universal Machine. Today, it is referred to as a Universal Turing Machine.
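None the less, the flavour of the idea can be captured in a few lines of code. What follows is a minimal sketch in Python – a toy example, not Turing’s own construction: a table of rules, a tape of symbols, and a ‘head’ that reads and writes one symbol at a time. The little machine it runs simply flips every binary digit it meets and then halts.

```python
# A toy Turing machine simulator: a finite rule table, a tape, and a
# read/write head that changes one symbol at a time. The 'flipper'
# machine below is an invented example, not one of Turing's originals.

def run_turing_machine(rules, tape, state="start", head=0, blank=" "):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape).strip()

# Rule table: (current state, symbol read) -> (symbol to write, move, next state)
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine(flipper, "1101"))  # prints 0010
```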

Bizarrely, Turing devised his machine-of-the-mind not to show what a computer can do but what it cannot do. He was at heart a pure mathematician. And what interested him, even before the advent of nuts-and-bolts hardware, was the ultimate limits of computers.

Remarkably, Turing very quickly found a simple task that no computer, no matter how powerful, could ever do. It is called the halting problem and it is easily stated. Computer programs can sometimes get caught in endless loops, running around the same set of instructions for ever like a demented hamster in a wheel. The halting problem asks: given a computer program, can a computer tell, ahead of actually running the program, whether it will eventually halt – that is, whether it will avoid getting caught in an interminable loop?

Turing, by clever reasoning, showed that deciding whether a program eventually halts or goes on for ever is logically impossible and therefore beyond the capability of any conceivable computer. In the jargon, it is ‘uncomputable’.4
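The flavour of that clever reasoning can be sketched with a scrap of hypothetical code. Suppose, for the sake of argument, that a function halts(program, data) existed and always gave the right answer; the self-referential program below (a sketch only – the oracle it relies on is precisely what Turing proved impossible) would then tie it in knots.

```python
# A sketch of Turing's argument. halts() is a hypothetical 'oracle' that is
# supposed to return True if program(data) eventually stops and False if it
# loops for ever. Turing showed no correct version of it can exist.

def halts(program, data):
    raise NotImplementedError("no such oracle is possible")

def contrary(program):
    # Ask the supposed oracle what the program does when fed its own text...
    if halts(program, program):
        while True:      # ...and loop for ever if the oracle says 'halts'
            pass
    else:
        return           # ...or stop at once if the oracle says 'loops'

# Now imagine feeding contrary to itself: contrary(contrary).
# If the oracle answers True, contrary(contrary) loops for ever;
# if it answers False, contrary(contrary) halts straight away.
# Either way the oracle has given the wrong answer, so it cannot exist.
```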

Thankfully, the halting problem turns out not to be typical of the kind of problems we use computers to solve. Turing’s limit on computers has, therefore, not held us back. And, despite their rather surprising birth in the abstract field of pure mathematics as machines of the imagination, computers have turned out to be immensely practical devices.

A vast numerical irrigation system

Computers, like the Universal Turing Machine, use binary. Binary was invented by Gottfried Leibniz, a seventeenth-century German mathematician who clashed bitterly with Isaac Newton over who had invented calculus. Binary is a way of representing numbers as strings of 0s and 1s. Usually, we use decimal, or base 10. The right-hand digit represents the 1s, the next digit the 10s, the next the 10 × 10s, and so on. So, for instance, 9217 means 7 + 1 × 10 + 2 × (10 × 10) + 9 × (10 × 10 × 10). In binary, or base 2, the right-hand digit represents the 1s, the next digit the 2s, the next the 2 × 2s, and so on. So, for instance, 1101 means 1 + 0 × 2 + 1 × (2 × 2) + 1 × (2 × 2 × 2), which in decimal is 13.
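The same place-value sums can be checked in a couple of lines of Python, using the language’s built-in base conversions (a trivial illustration rather than anything essential to the argument):

```python
# The place-value sums from the text, checked in Python. int() with an
# explicit base and bin() do the conversions automatically.
assert 9217 == 7 + 1 * 10 + 2 * (10 * 10) + 9 * (10 * 10 * 10)
assert int("1101", 2) == 1 + 0 * 2 + 1 * (2 * 2) + 1 * (2 * 2 * 2) == 13

print(int("1101", 2))  # 13       -- binary to decimal
print(bin(13))         # '0b1101' -- and back again
```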

Binary can be used to represent not only numbers but also instructions. It is merely necessary to specify that this particular string of binary digits, or bits, means add; this one means multiply; this one execute these instructions and go back to the beginning and execute them again, and so on. And it is not only numbers and program instructions that can be represented in binary. Binary can encode anything – from the information content of an image of Saturn’s rings sent back by the Cassini spacecraft to the information content of a human being (although this is somewhat beyond our current capabilities). This has led some physicists, drunk on the information revolution, to suggest that binary information is the fundamental bedrock of the Universe out of which physics emerges. ‘It from bit,’ as the American physicist John Wheeler memorably put it.5
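As a toy illustration of bits standing for instructions as well as numbers, here is a made-up miniature machine in Python; its three opcodes and the little program it runs are invented purely for this sketch:

```python
# A made-up machine with three instructions, purely to illustrate that
# strings of bits can stand for instructions as well as numbers.
PROGRAM = ["01 0011",   # LOAD 3 -> put 3 in the accumulator
           "10 0100",   # ADD 4  -> add 4 to it
           "11 0000"]   # HALT

accumulator = 0
for word in PROGRAM:
    opcode, operand = word.split()
    value = int(operand, 2)
    if opcode == "01":      # LOAD
        accumulator = value
    elif opcode == "10":    # ADD
        accumulator += value
    elif opcode == "11":    # HALT
        break

print(accumulator)  # 7
```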

Binary is particularly suitable for use in computers because representing 0s and 1s in hardware requires only devices that can be switched between two distinct states – and such two-state devices are simple to build and relatively immune to noise and error. Take the storage of information. This can be done with a magnetic medium, tiny regions of which can be magnetised in one direction to represent a 0 and in the opposite direction to represent a 1. Think of an array of miniature compass needles. To manipulate the information, on the other hand, an electronic device that has two distinct states is necessary. Such a device is the transistor.

Imagine a garden hose through which water is flowing. The water comes from a source and ends up in a drain. Now imagine stepping on the middle of the hose. The flow of water chokes off. Essentially, this is all a transistor in a computer does.6 Except of course it controls not a flow of water but a flow of electrons – an electrical current. And, instead of a foot, it has a gate. Applying a voltage to the gate controls the flow of electrons from the source to the drain as surely as stepping on a hose controls the flow of water.7 When the current is switched on, it can represent a 1 and when it is off a 0. Simples.

A modern transistor (on a microchip) actually looks like a tiny T. The top crossbar of the T is the source/drain (the hose) and the upright of the T is the gate (the foot).

Now imagine two transistors connected together – that is, the source of one transistor is connected to the drain of another. This is just like having a hose beside which you and a friend are standing. If you step on the hose, no water will flow. If your friend steps on it, no water will flow either. If you both stand on the hose, once again no water will flow. Only if you do not stand on the hose and your friend does not stand on the hose will water flow. In the case of the transistor, electrons will flow only if there is a certain voltage on the first gate and the same voltage on the second gate.

It is also possible to connect up transistors so that electrons will flow if there is a particular voltage on the first gate or on the second gate. Such AND and OR gates are just two possibilities among a host of logic gates that can be made from combinations of transistors. Just as atoms can be combined into molecules, and molecules into human beings, transistors can be combined into logic gates, and logic gates into things such as adders, which sum two binary numbers. And, by combining millions of such components, it is possible to make a computer. ‘Computers are composed of nothing more than logic gates stretched out to the horizon in a vast numerical irrigation system,’ said Stan Augarten, an American writer on the history of computing.8
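The logic can be mimicked in a few lines of Python, treating each arrangement of transistors as an ideal on/off switch rather than real electronics. The AND and OR functions below follow the hose picture above (switches in series for AND, in parallel for OR), a NOT, or inverter, is added for completeness, and the ‘half adder’ at the end is the standard textbook combination that sums two binary digits – a sketch of the logic, not of any real chip:

```python
# Transistors modelled as ideal on/off switches (1 = conducting, 0 = not).
# AND corresponds to two switches in series, OR to two in parallel,
# NOT to an inverter. The half adder is the standard textbook combination.

def AND(a, b):
    # Current flows only if both switches are on (series).
    return a and b

def OR(a, b):
    # Current flows if either switch is on (parallel).
    return a or b

def NOT(a):
    # An inverter: on exactly when the input is off.
    return 0 if a else 1

def XOR(a, b):
    # 1 when exactly one input is 1, built from the gates above.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # Adds two binary digits, returning (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {carry}")
```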

Cities on chips

Transistors are made from one of the most common and mundane substances on the planet: sand. Or, rather, they are made from silicon, the second most abundant element in the Earth’s crust and one component of the silicon dioxide of sand. Silicon is neither a conductor of electricity – through which electrons flow easily – nor an insulator – through which electrons cannot flow. Crucially, however, it is a semiconductor. Its electrical properties can be radically altered merely by doping it with a tiny number of atoms of another element.

Silicon can be doped with atoms such as phosphorus and arsenic, which bond with it to leave a single leftover electron that can be given up, or donated. This transforms it into a conductor of negative electrons, or an n-type material. But silicon can also be doped with atoms such as boron and gallium, which bond with silicon and leave room for one more electron. Bizarrely, the empty space where an electron isn’t can move through the material exactly as if it were a positively charged electron. This transforms the silicon into a conductor of positive holes, or a p-type material.

A transistor is created simply by making a pnp or an npn sandwich – most commonly an npn. You do not need to know any more than this to grasp the basics of transistors (in fact, you already know more than you need).9

In the beginning, when transistors were first invented, they had to be linked together individually to make logic gates and computer components such as adders. But the computer revolution has been brought about by a technology that creates, or integrates, billions upon billions of transistors simultaneously on a single wafer, or chip, of silicon. The ‘Very Large Scale Integration’ of such integrated circuits is complex and expensive.10 But, in a nutshell, it involves etching a pattern of transistors on a wafer of silicon, then, layer by layer, depositing doping atoms, microscopic wires, and so on.

To make a computer you need a computer. It is only with computer-aided design that it is possible to create a pattern of transistors as complex as a major city. Such a pattern is then made into a mask. Think of it as a photographic negative. By shining light through the mask onto a wafer of silicon, it is possible to create an image of the pattern of transistors. But a pattern of light and shadows is just that – a pattern of light and shadows. The trick is to turn it into something real. This can be done if the surface of the silicon wafer is coated with a special chemical – a photoresist – that undergoes a chemical change when struck by light. Crucially, light makes the photoresist resistant to attack by acid.11 So, when acid is applied to the silicon wafer in the next step of the process, the silicon is eaten away, or etched, everywhere except where the light fell. Hey presto, the image of the mask has been turned into concrete – or, rather, silicon – reality.

There are many other ingenious steps in the process, which might involve using many masks to create multiple layers, spraying the wafer with dopants and spraying on microscopic gold connecting wires, and so on. But, basically, this is the idea. The technique of photolithography quickly and elegantly impresses the pattern of a complex electric circuit onto the wafer. It creates a city on a chip.

Probably, most people think that microchips originate in the US or in Japan or South Korea. Surprisingly, a great many of them are conceived in Britain. The company behind the processor designs inside the overwhelming majority of the world’s mobile electronic devices is based in Cambridge. ARM grew out of Acorn Computers, whose engineers designed the first ARM processor in 1985. While the big chip-makers like Intel in the US concentrated on making faster and more compact chips for desktop computers, or PCs, ARM struck out in a different direction completely. It put entire computers on a chip. This made possible the vast numbers of compact and mobile electronic devices, from satnavs to games consoles to mobile phones. It moved chips from dedicated and unwieldy computers into the everyday world.

Big bang computing

The limit on how small components can be made on a chip is determined by the kind of light that is shone through a mask. Chip-makers have made ever smaller components – packing in more and more transistors – by using light with a shorter wavelength, such as ultraviolet or X-rays, which can squeeze through smaller holes. They have even replaced light with beams of electrons since electrons have a shorter wavelength than light.12 And chips have become ever more powerful.

In 1965, Gordon Moore, who would go on to co-found the American computer chip-maker Intel, pointed out that the computational power available at a particular price – or, equivalently, the number of transistors on a chip – appears to double roughly every eighteen months.13 ‘If the automobile had followed the same development cycle as the computer, a Rolls-Royce would today cost $100, get a million miles per gallon, and explode once a year, killing everyone inside,’ observed Robert X. Cringely, technology columnist on InfoWorld.14
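Compound doubling is deceptively powerful. A one-line sum – using the eighteen-month figure quoted above and an illustrative thirty-year span – makes the point:

```python
# What 'doubling every eighteen months' adds up to, as a rough sum.
# Thirty years is an illustrative span; 1.5 years is the doubling
# time quoted in the text.
years = 30
doublings = years / 1.5
print(f"{2 ** doublings:,.0f}-fold increase in {years} years")  # about a million-fold
```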

Every decade since it was formulated, people have claimed that Moore’s law is about to break down. So far, they have all been wrong.

Undoubtedly, however, Moore’s law will break down one day. It is a sociological law – a law of human ingenuity. But even human ingenuity cannot do the impossible. Ultimately, what a computer can do is bounded by physical limits set by the laws of nature, and those limits cannot be circumvented.

The speed of a computer – the number of logical operations it can perform a second – turns out to be limited by the total energy available.15 Today’s laptops are so slow because they use only the electrical energy in transistors. But this energy is totally dwarfed by the energy locked away in the mass of the computer, which provides nothing more than the scaffolding to keep a computer stable. The ultimate laptop would devote all of its available energy to processing and none of it to its mass. In other words, it would have its mass-energy converted into light-energy, as permitted by Einstein’s famous E = mc² formula.16
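The scale of the prize can be estimated with a rough back-of-the-envelope sum. Assuming the quantum-mechanical bound that underlies Seth Lloyd’s calculation – roughly 2E/(πħ) elementary operations per second for available energy E, the Margolus–Levitin limit – a single kilogram of mass converted entirely into usable energy gives:

```python
import math

# A back-of-the-envelope estimate, assuming the Margolus-Levitin bound of
# roughly 2E/(pi*hbar) elementary operations per second for energy E.
# The one-kilogram 'laptop' and the rounded constants are illustrative.
c = 3.0e8          # speed of light, metres per second (approximate)
hbar = 1.05e-34    # reduced Planck constant, joule-seconds (approximate)
mass = 1.0         # kilograms of 'laptop' turned entirely into energy

energy = mass * c ** 2                        # E = mc^2, in joules
ops_per_sec = 2 * energy / (math.pi * hbar)   # Margolus-Levitin rate
print(f"about {ops_per_sec:.0e} operations per second")   # ~5e+50
```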

The computing power of such a device would be formidable. In a ten-millionth of a second it would be able to carry out a calculation that would take a state-of-the-art computer today the age of the Universe to complete. But it would carry out the calculation at a price. If all the available energy is converted into light-energy for computing, a computer would not be anything like a familiar computer. Far from it. It would be a billion-degree ball of light. It would be like a nuclear fireball, a blindingly bright piece of the big bang. Though it might be nice to have the most powerful computer imaginable on your desk, it might be just a little inconvenient.

Notes

1 Of course, computers have their downside. ‘Imagine if every Thursday your shoes exploded if you tied them the usual way. This happens to us all the time with computers, and nobody thinks of complaining,’ said Jef Raskin, an expert in human–computer interaction (Geoff Tibballs, The Mammoth Book of Zingers, Quips, and One-Liners).

2 With computer power increasing remorselessly, some have predicted that it will be one day possible to simulate a universe. In fact, philosopher Nick Bostrom thinks it is likely that such simulations have already been carried out by advanced beings an enormous number of times. If so, it is very likely that we are living in a Matrix-like computer-generated artificial reality! (Nick Bostrom, ‘Are You Living In a Computer Simulation?’, Philosophical Quarterly, vol. 53 (2003), pp. 243–55).

3 The first true all-purpose computer was imagined by the British engineer Charles Babbage in 1837. However, his ‘analytical engine’ was not built in his lifetime because of the difficulty and expense of implementing the design with mechanical cogs and wheels. Babbage worked on the project with Augusta Ada King, Countess of Lovelace and the daughter of the poet Lord Byron. She is considered the first programmer, and the computer language Ada is named in her honour.

4 There is a deep connection between Turing’s discovery of uncomputability and undecidability, another great discovery in mathematics. In 1931, the Austrian logician Kurt Gödel showed that there were mathematical statements (theorems) that could never be proved either true or false. They were undecidable. Gödel’s undecidability theorem – more usually known as his incompleteness theorem – is one of the most famous and shocking results in the history of mathematics. See ‘God’s Number’, Chapter 6 of my book The Never-Ending Days of Being Dead.

5 Physicists have a tendency to imagine nature to be like the technological world in which they live. In the nineteenth century, in an industrial world powered by coal, for instance, they speculated that the Sun was a giant lump of coal. Today, they speculate that the Universe is a giant computer. The lesson of history suggests they are likely to be wrong, as they have been before.

6 The ‘a transistor is just like a garden hose with your foot on it’ image comes from Computer Science for Fun by Paul Curzon, Peter McOwan and Jonathan Black of Queen Mary, University of London, http://www.cs4fn.org.

7 See Chapter 8, ‘Thank goodness opposites attract: Electricity’.

8 Stan Augarten, State of the Art: A Photographic History of the Integrated Circuit.

9 The transistor was invented by a trio of physicists at Bell Laboratories in New Jersey, USA, in 1947. For their achievement, John Bardeen, Walter Brattain and William Shockley won the 1956 Nobel Prize for Physics.

10 The integrated circuit was patented in 1959.

11 Actually, illuminated areas will remain if using a negative photoresist; they will be dissolved if using a positive photoresist.

12 See Chapter 15, ‘Magic without magic: Quantum theory’.

13 Gordon Moore, ‘Cramming More Components onto Integrated Circuits’, Electronics, vol. 38 no. 8 (19 April 1965).

14 Robert X. Cringely is actually the pen name that the journalist Mark Stephens and a number of other technology writers adopted for a column in InfoWorld, a one-time computer newspaper.

15 Seth Lloyd, ‘Ultimate physical limits to computation’, Nature, vol. 406 no. 6799 (31 August 2000), p. 1047.

16 See Chapter 16, ‘The discovery of slowness: Special relativity’.