TWO

OUR IMAGINARY REALITY

“Theoretical physicists live in a classical world, looking out into a quantum-mechanical world.”
— John Bell30

“Describing the physical laws without reference to geometry is similar to describing our thoughts without words.”
— Albert Einstein31

THE SCHOOL OF ATHENS, by Raphael, is one of the most breathtaking paintings of the Italian Renaissance. It represents a key moment in human history: the flowering of free thinking in Classical Greece. Somehow, the people of the time were able to look at the world with fresh eyes, to set aside traditional superstitions and beliefs in dogma or high authority. Rather, through discussion and logical argument, they began to figure out for themselves how the universe works and what principles human society should be based upon. In doing so, they changed Western history forever, forming many of the concepts of politics and literature and art that underlie the modern world.

Raphael’s picture is full of philosophers like Aristotle, Plato, and Socrates engaged in discussion. There is also the philosopher Parmenides, in some ways the ancient Greek version of Stephen Hawking. Like Hawking, Parmenides believed that at its most fundamental level, the world is unchanging, whereas Heraclitus, also in the portrait, believed that the world is in ceaseless motion as a result of the tension between opposites. There are mathematicians too. At front right is Euclid, giving a demonstration of geometry, and at front left is Pythagoras, absorbed in writing equations in a big book. Beside Parmenides is Hypatia, the first woman mathematician and philosopher. The whole scene looks like a sort of marvellous university — which I, for one, would have loved to attend — full of people exploring, exchanging, and creating ideas.

An odd figure is peering over Pythagoras’s shoulder and scribbling in a notebook. He looks as if he is cheating, and in some ways he is: he has one eye on the mathematics and the other on the real world. This is Anaximander, who some consider to be the world’s first scientist.32 He lived around 600 B.C. in Miletus, then the greatest Greek city, in the eastern Aegean on the coast of modern Turkey. At that time, the world was dominated by kings and traditional rulers with all kinds of mystical and religious beliefs. Yet somehow Anaximander, his teacher Thales, and his students — and the thinkers who followed — trusted their own powers of reason more than anyone had ever done before.

Almost every written record of their work has been lost, but what little we know is mind-boggling. Anaximander invented the idea of a map — hence quantifying our notion of space — and drew the first known map of the world. He is also credited with introducing to Greece the gnomon, an instrument for quantifying time, consisting of a rod set vertically in the ground so that its shadow showed the direction and altitude of the sun. He is said to have used the gnomon, in combination with his knowledge of geometry, to accurately predict the equinoxes.33

Anaximander also seems to have been the first to develop a concept of infinity, and he concluded, although we do not know how, that the universe was infinite. He also proposed an early version of biological evolution, holding that human beings and other animals arose from fish in the sea. Anaximander is considered the first scientist because he learned systematically from experiments and observations, as when he developed the gnomon. In the same way he was taught by Thales, he seems to have taught Pythagoras. Thus was built the scientific tradition of training students and passing along knowledge.

Just think of these phenomenal achievements for a moment, and imagine the transformations they eventually brought about. How often have you arrived in a strange city or neighbourhood without a map or a picture of your location? With nothing but your immediate surroundings, with no mental image of their context, you are lost. Each new turn you take brings something unexpected and unpredictable. A map brings a sense of perspective — you can anticipate and choose where you want to go and what you want to see. It raises entirely new questions. What lies beyond the region shown in the map? Can we map the world? And the universe?

And how would you think of time without a clock? You could still use light and dark, but all precision would be lost in the vagaries of the seasons and the weather. You would live far more in the present, with the past and the future being blurred. The measurement of time opened the way to precise technologies, like tracking and predicting celestial bodies and using them as a navigational tool. Yet even these matters were probably not Anaximander’s primary concerns. He seems to have been more interested in big questions, such as what happened if you traced time back into the distant past or far forward into the future.

What about Anaximander’s idea that the universe is infinite? This seems plausible to us now, but I distinctly remember that when I was four years old, I thought the sky was a spherical ceiling, with the sun and stars fixed upon it. What a change it was when I suddenly realized, or was told, that we are looking out into an infinite expanse of space. How did Anaximander figure that out? And what about his idea that we arose from fish in the sea? His ideas suggested above all a world with potential. If things were not always as they are now, they might be very different in the future.

It was no accident that these beginnings of modern science occurred around the same time as many new technologies were being invented. Nearby, on the island of Samos, Greek sculptor and architect Theodorus developed, or at least perfected, many of the tools of the building trade: the carpenter’s square, the water level, the turning lathe, the lock and key, and the craft of smelting.

Hand in hand with these developments was a flowering of mathematics, philosophy, art, literature, and, of course, democracy. But the civilization of ancient Greece was fleeting. Throughout its existence, it was ravaged by wars and invasions: the Greco-Persian Wars, the war between Athens and Sparta, the invasion by Alexander the Great, and the chaos following his death. Finally, there was the triumph of Rome and then its decadent decline, which snuffed out civilization in Europe for a millennium. The great libraries of the ancient world, like the one at Alexandria, were lost. Only fragments and copies of their collections survived.

In the fifteenth century, Aldus Manutius, Italy’s leading printer, made it his personal mission to reproduce cheap and accurate pocket editions of the ancient classics and make them widely available. In this way, the ideas of ancient Greece directly seeded the Renaissance and the Scientific Revolution that followed.

MORE THAN FOUR HUNDRED years later, we come to a modern counterpart of Raphael’s masterpiece: a black-and-white photograph of the Fifth Solvay International Conference on Electrons and Photons, held in Brussels in 1927.

Towards the end of the nineteenth century, physicists had felt they were close to converging on a fundamental description of nature. They had Newton’s laws of mechanics; Maxwell’s theory of electricity, magnetism, and light; and a very successful theory of heat founded by Maxwell’s friend William Thomson (Lord Kelvin), among others. Physics had provided the technical underpinning of the Industrial Revolution and had opened the way to global communication. A few small details remained to be wrapped up, like the inner structure of the atom. But the classical picture of a world consisting of particles and fields moving through space and time seemed secure.

Instead, the early twentieth century brought a series of surprises. The picture became increasingly confused and was only resolved by a full-scale revolution between 1925 and 1927. In this revolution, the physicists’ view of the universe as a kind of large machine was completely overturned and replaced by something far less intuitive and familiar. The Fifth Solvay Conference was convened just as this new and abstract representation of the world had formed. It might be considered the most uncomfortable conference ever held in physics.

In 1925, the young German prodigy Werner Heisenberg launched quantum theory with a call to “discard all hope of observing hitherto unobservable quantities, such as the position and period of the electron,” and instead to “try to establish a theoretical quantum mechanics, analogous to classical mechanics, but in which only relations between observable quantities occur.”34 Heisenberg’s work replaced the classical picture of the electron orbiting the atomic nucleus with a far more abstract, mathematical description, in which only those quantities that were directly observable in experiments would have any literal interpretation. Soon after, in 1926, the Austrian physicist Erwin Schrödinger found an equivalent description to Heisenberg’s, in which the electron was treated as a wave instead of a classical particle. Then, in early 1927, Heisenberg discovered his famous uncertainty principle, showing that the central concept in Newton’s classical universe — that every particle had a definite position and velocity — could not be maintained.

By the time the physicists got to Brussels for the Solvay Conference, the classical view of the world had finally collapsed. They had to give up any notion of making definite predictions because there was, in a sense, no longer a definite world at all. As Max Born had realized in 1926, quantum physics could only make statements about probabilities. But it wasn’t even a case of little demons playing dice in the centre of atoms: it was far stranger than that. There was an elegant mathematical formalism governing the world’s behaviour, but it had no classical interpretation. No wonder all the physicists at Solvay are looking so glum!

Front and centre in the photo, of course, is Einstein. Next to him, with his legs crossed, is Dutch physicist Hendrik Lorentz. And then there is Marie Curie — the only woman in the picture and also the only one among them to win two Nobel prizes. Curie, with her husband Pierre, had shown that radioactivity was an atomic phenomenon. Their discovery was one of the first hints of the strange behaviour in the subatomic world: radioactivity was finally explained, a year after the Fifth Solvay meeting, as the quantum mechanical tunnelling of particles out of atomic nuclei. Next to Curie is Max Planck, holding his hat and looking sad. Planck had been responsible for initiating the quantum revolution in 1900 with his suggestion that light carries energy in packets, which only much later came to be called “photons.” His ideas had been spectacularly confirmed in 1905, when Einstein developed them to explain how light ejects electrons from metals.

In the middle of the next row back, between Lorentz and Einstein, is Paul Dirac, the English genius and founder of modern particle physics, with Erwin Schrödinger standing behind him. Werner Heisenberg is standing at the back, three in from the right, with the German-British mathematician Max Born sitting in front of him. Heisenberg and Born had together developed the matrix mechanics formulation of quantum theory, which Dirac had brought to its final mathematical form. Next to Born is the Danish physicist Niels Bohr, a towering figure who had extended Planck’s quantum idea to the hydrogen atom and who had since played the role of godfather to quantum theory. Bohr founded the Institute for Theoretical Physics of the University of Copenhagen and became its director. There, he mentored Heisenberg and many other physicists; it became a world centre for quantum theory. Heisenberg would later say, “To get into the spirit of the quantum theory was, I would say, only possible in Copenhagen at that time [1924].”35 Bohr was responsible for developing what became the most popular interpretation of quantum theory, known as the Copenhagen interpretation. On Heisenberg’s right is Wolfgang Pauli, the young Austrian prodigy who had invented the Pauli exclusion principle, stating that two electrons could not be in the same state at the same time. This principle, along with the quantum theory of spin that Pauli also developed, proved critical in understanding how electrons behave within more complicated atoms and molecules. Dirac, Heisenberg, and Pauli were only in their mid-twenties and yet at the forefront of the new developments.

The participants came from a wide range of backgrounds. Curie was more or less a refugee from Poland.36 Einstein himself had worked at a patent office before making his sensational discoveries in 1905. He, Born, Bohr, their great friend Paul Ehrenfest (standing behind Curie in the photo), and Pauli’s father were representatives of a generation of young Jews who had entered maths and science in the late nineteenth century. Before that time, Jews had been deliberately excluded from universities in western Europe. When they were finally allowed to enter physics, maths, and other technical fields, they did so with a point to prove. They brought new energy and ideas, and they would dispel forever any notion of intellectual inferiority.

So there we have many of the world’s leading physicists meeting to contemplate a revolutionary new theory — and to figure out its repercussions for our view of the universe. But they seemed none too happy about it. They had discovered that, at a fundamental level, the behaviour of nature’s basic constituents is truly surreal. They just don’t behave like particles or billiard balls or masses sliding down planes, or weights on springs or clouds or rivers or waves or anything anyone has ever seen in everyday life. Even Heisenberg saw the negative side: “Almost every progress in science has been paid for by a sacrifice; for almost every new intellectual achievement, previous positions and conceptions had to be given up. Thus, in a way, the increase of our knowledge and insight diminishes continually the scientist’s claim on ‘understanding’ nature.”37 On the other hand, objectively speaking, the 1920s were a golden age for physics. Quantum theory opened up vast new territories where, as Dirac told me when I met him many years later, “even mediocre physicists could make important discoveries.”

It would take most of the remainder of the twentieth century for physicists to fully appreciate the immense opportunities that quantum physics offers. Today, we stand on the threshold of developments through which it may completely alter our future.

· · ·

THE STRANGE STORY OF the quantum begins with the humble electric light bulb. In the early 1890s, Max Planck, then a professor in Berlin, was advising the German Bureau of Standards on how to make light bulbs more efficient so that they would give out the maximum light for the least electrical power. Max Born later wrote about Planck: “He was by nature and by the tradition of his family conservative, averse to revolutionary novelties and skeptical towards speculations. But his belief in the imperative power of logical thinking based on facts was so strong that he did not hesitate to express a claim contradicting to all tradition, because he had convinced himself that no other resort was possible.”38

Planck’s task was to predict how much light a hot filament gives out. He knew from Maxwell’s theory that light consists of electromagnetic waves, with each wavelength describing a different colour of light. He had to figure out how much light of each colour a hot object emits. Between 1895 and 1900, Planck made a series of unsuccessful attempts. Eventually, in what he later called an “act of despair,”39 he more or less worked backward from the data, inferring a new rule of physics: that light waves could accept energy only in packets, or “quanta.” The energy of a packet was given by a new constant of nature, Planck’s constant, times the oscillation frequency of the light wave: the number of times per second the electric and magnetic fields vibrate back and forth as an electromagnetic wave travels past any point in space. The oscillation frequency is given by the speed of light divided by the wavelength of the light. Planck found that with this rule he could perfectly match the experimental measurements of the spectrum of light emitted from hot objects. Much later, Planck’s energy packets became known as photons.
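
In modern notation (a standard textbook form, not Planck’s own), the rule reads:

$$E = nh\nu, \qquad \nu = \frac{c}{\lambda}, \qquad n = 1, 2, 3, \ldots$$

With Planck’s constant h ≈ 6.63 × 10⁻³⁴ joule-seconds, a single quantum of green light (wavelength about 530 nanometres, frequency about 5.7 × 10¹⁴ hertz) carries only about 3.7 × 10⁻¹⁹ joules, which is why the graininess of light goes unnoticed in everyday life.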

Planck’s rule was less ad hoc than it might at first seem. He was a sophisticated theorist, and well appreciated a powerful formalism that had been developed by the Irish mathematical physicist William Rowan Hamilton in the 1830s, building on earlier ideas of Fermat, Leibniz, and Maupertuis. Whereas Newton had formulated his laws of motion as rules for following a system forward from one moment in time to the next, Hamilton considered all the possible histories of a system, from some initial time to some final time. He was able to show that the actual history of the system, the one that obeyed Newton’s laws, was the one that minimized a certain quantity called the “action.”

Let me try to illustrate Hamilton’s idea with the example of what happens when you’re leaving a supermarket. When you’re done with your grocery shopping, you’re faced with a row of checkouts. The nearest will take you less time to walk to, but there may be more people lined up. The farthest checkout will take longer to walk to but may be empty. You can look to see how many people have baskets or trolleys, how much stuff is in them, and how much is on the belt. And then you choose what you think will be the fastest route.

This is, roughly speaking, the way Hamilton’s principle works. Just as you minimize the time it takes to leave the supermarket, physical systems evolve in time in such a way as to minimize the action. Whereas Newton’s laws describe how a system edges forward in time, Hamilton’s method surveys all the available paths into the future and chooses the best among them.

Hamilton’s new formulation allowed him to solve many problems that could not be solved before. But it was much more than a technical tool: it provided a more integrated picture of reality. It helped James Clerk Maxwell develop his theory of electromagnetism, and it guided Planck to an inspired guess that launched quantum theory. In fact, Hamilton’s version of mechanics had anticipated the future development of quantum theory. Just as you find when leaving the supermarket that there may be several equally good options, Hamilton’s action principle suggests that in some circumstances the world might follow more than one history. Planck was not ready to contemplate such a radical departure from physics’ prior perspectives but, decades later, others would. As we will see in Chapter Four, by the end of the twentieth century all the known laws of physics were expressed in terms of the quantum version of Hamilton’s action principle.

The point of all this for our story is that Planck knew that Hamilton’s action principle was a fundamental formulation of physics. It was therefore natural for him to try to relate his quantization rule to Hamilton’s action. The units in which Hamilton’s action is measured are energy times time. The only time involved in a light wave is the oscillation period, equal to the inverse of the oscillation frequency. So Planck guessed that the energy of an electromagnetic wave times its oscillation period is equal to a whole-number multiple of a new constant of nature, which he called the “action quantum” and which we now call “Planck’s constant.” Because Planck believed that all the laws of physics could be included in the action, he hoped that one day his hypothesis of quantization might become a new universal law. In this guess, he would eventually be proven right.
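
Stated compactly, in modern symbols rather than Planck’s, the guess was:

$$E \times T = nh, \qquad T = \frac{1}{\nu}, \qquad \text{so} \qquad E = nh\nu .$$

The left-hand side, energy times time, has precisely the units of action, which is what allowed Planck to tie his quantization rule to Hamilton’s formalism.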

Two of Planck’s colleagues in Berlin, Ferdinand Kurlbaum and Heinrich Rubens, were leading experimentalists of the time. By 1900, their measurements of the spectrum of light emitted from hot objects had become very accurate. Planck’s new guess for the spectrum, based on his quantization rule, fitted their data beautifully and explained the changes in colour as an object heats up. For this work, Planck came to be regarded as the founder of quantum theory. He tried but failed to go further. He later said: “My unavailing attempts to somehow reintegrate the action quantum into classical theory extended over several years and caused me much trouble.”40 Physics had to wait for someone young, bold, and brilliant enough to make the next leap.

PLANCK WAS GUIDED TO his result in part by theory and in part by experiment. In 1905, Albert Einstein published a clearer and more incisive theoretical analysis of why the classical description of electromagnetic radiation failed to describe the radiation from hot objects.

The most basic notion in the theory of heat is that of thermal equilibrium. It describes how energy is shared among all the components of a physical system when the system is allowed to settle down. Think of an object that, when cool, is perfectly black in colour, so it absorbs any light that falls on it. Now heat up this object and place it inside a closed, perfectly insulating cavity. The hot object emits radiation, which bounces around inside the cavity until it is reabsorbed. Eventually, an equilibrium will be established in which the rate at which the object emits energy — the quantity Planck wanted to predict — equals the rate at which it absorbs energy. In equilibrium, there must be a perfect balance between emission and absorption, at every wavelength of light. So it turns out that in order to work out the rate of emission of light from a perfectly black object when it is hot, all you need to know is how much radiation of each wavelength there is inside a hot cavity, which has reached equilibrium.

The Austrian physicist Ludwig Boltzmann had developed a statistical method for describing thermal equilibrium. He had shown that in many physical systems, on average, the energy would be shared equally among every component. He called this the “principle of equipartition.” Einstein realized that electromagnetic waves in a cavity should follow this rule, and that this created a problem for the classical theory. The trouble was that Maxwell’s theory allows electromagnetic waves of all wavelengths, down to zero wavelength. There are only so many ways to fit a wave of a given wavelength inside a cavity of a certain size. But for shorter and shorter waves, there are more and more ways to fit the waves in. When we include waves of arbitrarily short wavelength, there are infinitely many different ways to arrange them inside the cavity. According to Boltzmann’s principle, every one of these arrangements will carry the same average amount of energy. Together, they have an infinite capacity to absorb heat, and they will, if you let them, soak up all the energy.

Again, let me try to draw an analogy. Think of a country whose economy has a fixed amount of money in circulation (not realistic, I know!). Imagine there are a fixed number of people, all buying and selling things to and from each other. If the people are all identical (not realistic, either!), we would expect a law of equipartition of money. On average, everyone would have the same amount of money: the total amount of money divided by the total number of people.

Now imagine introducing more, smaller people into the country. For example, introduce twice as many people of half the size, four times as many of a quarter the size, eight times as many of one-eighth the size, and so on. You just keep adding people, down to zero size, with all of them allowed to buy and sell in exactly the same way. Now, I hope you can see the problem: if you let the tiny people trade freely, because there are so many of them they will absorb all the money and leave nothing for anyone else.

Planck’s rule is like imposing an extra “market regulation” stating that people can trade money only in a certain minimum quantum, which depends inversely on their size. Larger people can trade in cents. People half as big can trade only in amounts of two cents, people half as big again in four cents, and so on. Very small people can trade only in very large amounts — they can buy and sell only very expensive, luxury items. And the smallest people cannot trade at all, because their money quantum would be larger than the entire amount of money in circulation.

With this market regulation rule, an equilibrium would be established. Smaller people are more numerous and have a larger money quantum. So there is a certain size of people that can share all the money between them, and still each have enough for a few quanta so they aren’t affected by the market regulation. In equilibrium, people of this size will hold most of the money. Increase the total money in circulation, and you will decrease the size of the people holding most of the money.

In the same way, Einstein showed, if you imposed Planck’s quantization rule, most of the energy inside a hot cavity would be held by waves just short enough to each hold a few quanta while sharing all the energy between them. Heat up the cavity, and shorter and shorter waves will share the energy in this way. Therefore if a hot body is placed inside the cavity and allowed to reach equilibrium, the wavelength of radiation it emits and absorbs decreases as the cavity heats up.

And this is exactly how hot bodies behave. If you heat up an object like a metal poker, as it gets hotter it glows red, then yellow, then white, and finally blue and violet when it becomes extremely hot. These changes are due to the decrease in wavelength of the light emitted, predicted by Planck’s quantization rule. Human beings have been heating objects in fires for millennia. The colour of the light shining at them was telling them about quantum physics all along.

In fact, as we understand physics today, it is only Planck’s quantization rule that prevents the short wavelength electromagnetic waves from dominating the emission of energy from any hot object, be it a lighted match or the sun. Without Planck’s “market regulation,” the tiniest wavelengths of light would be like the “Dementors” in the Harry Potter books, sucking all the energy out of everything else. The disaster that Planck avoided is referred to as the “ultraviolet catastrophe” of classical physics, because the shortest wavelengths of visible light are violet. (In this context, “ultraviolet” just means “very short wavelength.”)
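
A small numerical sketch makes the “market regulation” concrete. The following script is illustrative only, with the temperature and frequencies chosen arbitrarily; it compares the classical equipartition energy per wave with the average energy Planck’s rule allows:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, in joules per kelvin
H = 6.62607015e-34  # Planck's constant, in joule-seconds

def classical_energy(temperature):
    """Equipartition: every wave holds kT on average, whatever its frequency."""
    return K_B * temperature

def planck_energy(frequency, temperature):
    """Planck's average energy per wave: hf / (exp(hf / kT) - 1)."""
    return H * frequency / math.expm1(H * frequency / (K_B * temperature))

T = 5000.0  # kelvin, roughly the temperature of a hot filament
for f in (1e13, 1e14, 1e15, 1e16):  # hertz, from infrared to ultraviolet
    print(f"f = {f:.0e} Hz: classical {classical_energy(T):.1e} J, "
          f"Planck {planck_energy(f, T):.1e} J")
```

At low frequencies the two agree, but at high frequencies Planck’s rule chokes off the energy exponentially: the quantum is too expensive for the available heat, and the catastrophe is averted.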

It is tempting to draw a parallel between this ultraviolet catastrophe in classical physics and what is now happening in our modern digital age. As computers and the internet become increasingly powerful and cheap, the ability to generate, copy, and distribute writing, pictures, movies, and music at almost no cost is creating another ultraviolet catastrophe, an explosion of low-grade, useless information that is in danger of overwhelming any valuable content. Where will it all end? Digital processors are now becoming so small that over the next decade they will approach the limits imposed by the size of atoms. Operating on these scales, they will no longer behave classically and they will have to be accessed and operated in quantum ways. Our whole notion of information will have to change, and our means of creating and sharing it will become much closer to nature. And in nature, the ultraviolet catastrophe is avoided through quantum physics. As I will discuss in the last chapter, quantum computers may open entirely new avenues for us to experience and understand the universe.

EINSTEIN’S 1905 PAPER CLEARLY described the ultraviolet catastrophe in classical physics and how Planck’s quantum rule resolved it. But it went much farther, showing that the quantum nature of light could be independently seen through a phenomenon known as the “photoelectric effect.” When ultraviolet light is shone on the surface of a metal, electrons are found to be emitted. In 1902, the German physicist Philipp Lenard had studied this phenomenon and showed that the energy of the individual emitted electrons increased with the frequency of the light. Einstein showed that the data could be explained if the electrons were absorbing light in quanta, whose energy was given by Planck’s rule. In this way, Einstein found direct confirmation of the quantum hypothesis. Yet, like Planck, Einstein also saw the worrying implications of quantization for any classical view of reality. He was later quoted as saying: “It was as if the ground was pulled out from under one.”41

In 1913, the upheaval continued when Niels Bohr, working at Manchester under Ernest Rutherford, published a paper titled “On the Constitution of Atoms and Molecules.” Much as Planck had done for light, Bohr invoked quantization to explain the orbits of electrons in atoms. Just before Bohr’s work, Rutherford’s experiments had revealed the atom’s inner structure, showing it to be like a miniature solar system, with a tiny, dense nucleus at its centre and electrons whizzing around it.

Rutherford used the mysterious alpha particles, which Marie and Pierre Curie had observed to be emitted from radioactive material, as a tool to probe the structure of the atom. He employed a radioactive source to bombard a thin sheet of gold foil with alpha particles, and he detected how they scattered. He was amazed to find that most particles went straight through the metal but a few bounced back. He concluded that the inside of an atom is mostly empty space, with a tiny object — the atomic nucleus — at its centre. Legend has it that the morning after Rutherford made the discovery, he was scared to get out of bed for fear he would fall through the floor.42

Rutherford’s model of the atom consisted of a tiny, positively charged nucleus orbited by negatively charged electrons. Since unlike charges attract, the electrons are drawn into orbit around the nucleus. However, according to Maxwell’s theory of electromagnetism, as the charged electrons travelled around the nucleus, they would create changing electric and magnetic fields and so continually emit electromagnetic waves. This loss of energy would make the electrons slow down and spiral inward, causing the atom to collapse. This would be a disaster every bit as profound as the ultraviolet catastrophe: it would mean that every atom in the universe would collapse in a very short time. The whole situation was very puzzling.

Niels Bohr, working with Rutherford, was well aware of the puzzle. Just as Planck had quantized electromagnetic waves, Bohr tried to quantize the orbits of the electron in Rutherford’s model. Again, he required that a quantity with the same units as Hamilton’s action — in Bohr’s case, the momentum of the electron times the circumference of its orbit — came in whole-number multiples of Planck’s constant. A hydrogen atom is the simplest atom, consisting of just one electron in orbit around a proton, the simplest nuclear particle. One quantum gave the innermost orbit, two the next orbit, and so on. As Bohr increased the number of quanta, he found his hypothesis predicted successive orbits, each one farther from the nucleus. In each orbit, the electron has a certain amount of energy. It could “jump” from one orbit to another by absorbing or emitting electromagnetic waves with just the right amount of energy.

Experiments had shown that atoms emitted and absorbed light only at certain fixed wavelengths, corresponding through Planck’s rule to fixed packets of energy. Bohr found that with his simple quantization rule, he could accurately match the wavelengths of the light emitted and absorbed by the hydrogen atom.
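
In modern form (a textbook summary rather than Bohr’s original presentation), his rule and its consequences for hydrogen read:

$$p \times 2\pi r = nh, \qquad E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad h\nu = E_m - E_n .$$

For example, the jump from the third orbit to the second releases a quantum of about 1.9 electron volts, matching hydrogen’s famous red spectral line at roughly 656 nanometres.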

· · ·

PLANCK, EINSTEIN, AND BOHR’S breakthroughs had revealed the quantum nature of light and the structure of atoms. But the quantization rules they imposed were ad hoc and lacked any principled basis. In 1925, all that changed when Heisenberg launched a radically new view of physics with quantization built in from the start. His approach was utterly ingenious. He stepped back from the classical picture, which had so totally failed to make sense of the atom. Instead, he argued, we must build the theory around the only directly observable quantities — the energies of the light waves emitted or absorbed by the orbiting electrons. So he represented the position and momentum of the electron in terms of these emitted and absorbed energies, using a technique known as “Fourier analysis in time.”

At the heart of Fourier’s method is a strange number called i, the imaginary number, the square root of minus one. By definition, i times i is minus one. Calling i “imaginary” makes it sound made up. But within mathematics i is every bit as definite as any other number, and the introduction of i, as I shall explain, makes the numbers more complete than they would otherwise be. Before Heisenberg, physicists thought of i as merely a convenient mathematical trick. But in Heisenberg’s work, i was far more central. This was the first indication of reality’s imaginary aspect.

The imaginary number i entered mathematics in the sixteenth century, during the Italian Renaissance. The mathematicians of the time were obsessed with solving algebraic equations. Drawing on the results of Indian, Persian, and Chinese mathematicians before them, they started to find very powerful formulae. In 1545, Gerolamo Cardano summarized the state of the art in algebra, in his book Ars Magna (The Great Art). He was the first mathematician to make systematic use of negative numbers. Before then, people believed that only positive numbers made sense, since one cannot imagine a negative number of objects or a negative distance or negative time. But as we all now learn in school, it is often useful to think of numbers as lying on a number line, running from minus infinity to plus infinity from left to right, with zero in the middle. Negative numbers can be added, subtracted, multiplied, or divided just as well as positive numbers can.

Cardano and others had found general solutions to algebraic equations, but sometimes these solutions involved the square root of a negative number. At first, they discarded such solutions as meaningless. Then Scipione del Ferro invented a secret method of pretending these square roots made sense. He found that by manipulating the formulae he could sometimes get these square roots to cancel out of the final answer, allowing him to find many more solutions of equations.

There was a great deal of intrigue over this trick, because the mathematicians of the time held public contests, sponsored by wealthy patrons, in which any advantage could prove lucrative. But eventually the trick was published, first by Cardano and then more completely by Rafael Bombelli. In his 1572 book, simply titled Algebra, Bombelli systematically explained how to extend the rules of arithmetic to include i.43 You can add, subtract, multiply, or divide it with any ordinary number. When you do, you will obtain numbers like x + iy, where x and y are ordinary numbers. Numbers like this, which involve i, are called “complex numbers.” Just as we can think of the ordinary numbers as lying on a number line running from negative to positive values, we can think of the complex numbers as lying in a plane, where x and y are the horizontal and vertical coordinates. Mathematicians call this the “complex plane.” The number zero is at the origin and any complex number has a squared length, given by Pythagoras’s rule as x² + y².

Then it turns out, rather beautifully, that any complex number raised to the power of any other complex number is also a complex number. There are no more problems with square roots or cube roots or any other roots. In this sense, the complex numbers are complete: once you have added i, and any multiple of i, to the ordinary numbers, you do not need to add anything else. And later on, mathematicians proved that when you use complex numbers, every algebraic equation has a solution. This result is called the “fundamental theorem of algebra.” To put it simply, the inclusion of i makes algebra a far more beautiful subject than it would otherwise be.
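
Python’s built-in complex numbers make these closure properties easy to check for yourself; here is a toy demonstration (not a proof, of course):

```python
import cmath

i = 1j                  # Python writes the imaginary unit i as j
print(i * i)            # (-1+0j): i times i is minus one

z = 3 + 4j              # the complex number x + iy, with x = 3 and y = 4
print(abs(z) ** 2)      # 25.0: the squared length x**2 + y**2, by Pythagoras's rule

print(cmath.sqrt(-1))   # 1j: square roots of negative numbers now exist

# A complex number raised to a complex power is again a complex number:
print((1 + 2j) ** (0.5 + 0.5j))
```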

And from this idea came an equation that Richard Feynman called “the most remarkable formula in mathematics.”44 It was discovered by Leonhard Euler, one of the most prolific mathematicians of all time. Euler was the main originator and organizer of the field of analysis — the collection of mathematical techniques for dealing with infinities. One of his many innovations was his use of the number e, which takes the numerical value 2.71828 . . . and which arises in many areas of mathematics. It describes exponential growth and is used in finance for calculating compound interest or the cumulative effects of economic inflation, in biology for the multiplication of natural populations, in information science, and in every area of physics. What Euler found is that e raised to i times an angle gives the two basic trigonometric functions, the sine and cosine. His formula connects algebra and analysis to geometry. It is used in electrical engineering for the flow of AC currents and in mechanical engineering to study vibrations; it is also used in music, computer science, and even in cosmology. In Chapter Four, we shall find Euler’s formula at the heart of our unified description of all known physics.
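
Written out, Euler’s formula is:

$$e^{i\theta} = \cos\theta + i\sin\theta .$$

Setting the angle θ to π gives the celebrated special case e^{iπ} + 1 = 0, a single equation connecting e, i, π, one, and zero.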

Heisenberg used Euler’s formula (in the form of a Fourier series in time) to represent the position of an electron as a sum of terms involving the energy states of the atom. The electron’s position became an infinite array of complex numbers, with every number representing a connection coefficient between two different energy states of the atom.
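
In modern notation (a standard reconstruction, not Heisenberg’s original symbols), each entry in the array oscillates at a frequency set by the difference between two of the atom’s energy levels:

$$x_{mn}(t) = X_{mn}\, e^{\,i(E_m - E_n)t/\hbar}, \qquad \hbar = \frac{h}{2\pi} .$$

The energies of the light the atom can emit or absorb are thus built directly into the mathematical description of the electron’s position.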

The appearance of Heisenberg’s paper had a dramatic effect on the physicists of the time. Suddenly there was a mathematical formalism that explained Bohr’s rule for quantization. However, within this new picture of physics, the position or velocity of the electron was a complex matrix, without any familiar or intuitive interpretation. The classical world was fading away.

Not long after Heisenberg’s discovery, Schrödinger published his famous wave equation. Instead of trying to describe the electron as a point-like particle, Schrödinger described it as a wave smoothly spread out over space. He was familiar with the way in which a plucked guitar string or the head of a drum vibrates in certain specific wave-like patterns. Developing this analogy, Schrödinger found a wave equation whose solutions gave the quantized energies of the orbiting electron in the hydrogen atom, just as Heisenberg’s matrices had done. Heisenberg’s and Schrödinger’s pictures turned out to be mathematically equivalent, though most physicists found Schrödinger’s waves more intuitive. But, like Heisenberg’s matrices, Schrödinger’s wave was a complex number. What on earth could it represent?

Shortly before the Fifth Solvay Conference, Max Born proposed the answer: Schrödinger’s wavefunction was a “probability wave.” The probability to find the particle at any point in space is the squared length of the wavefunction in the complex plane, given by the Pythagorean theorem. In this way, geometry appeared at the heart of quantum theory, and the weird complex numbers that Heisenberg and then Schrödinger had introduced became merely mathematical tools for obtaining probabilities.
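
In symbols, Born’s rule is simply this: if the wavefunction at some point takes the value ψ = x + iy, then the probability P of finding the particle there is

$$P = |\psi|^2 = x^2 + y^2 ,$$

the squared length of ψ in the complex plane.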

This new view of physics was profoundly dissatisfying to physicists like Einstein, who wanted to visualize concretely how the world works. In the run-up to the Solvay meeting, all hope of that was dashed. Heisenberg published his famous uncertainty principle, showing that, within quantum theory, you could not specify the position and velocity of a particle at the same time. As he put it, “The more precisely the position [of an electron] is determined, the less precisely the momentum is known in this instant, and vice versa.”45 If you know exactly where a particle is now, then you cannot say anything about where it will be a moment later. The very best you can hope for is a fuzzy view of the world, one where you know the position and velocity approximately.
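
In its modern quantitative form (a refinement due to Earle Kennard, slightly sharper than Heisenberg’s original estimate), the principle states that the uncertainties in position and momentum always satisfy:

$$\Delta x \, \Delta p \geq \frac{\hbar}{2} .$$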

Heisenberg’s arguments were based on general principles, and they applied to any object, even large ones like a ball or a planet. For these large objects, the quantum uncertainty represents only a tiny ambiguity in their position or velocity. However, as a matter of principle, the uncertainty is always there. What Heisenberg’s uncertainty principle showed is that, in quantum theory, nothing is as definite as Newton, or Maxwell, or any of the pre-quantum physicists had supposed it to be. Reality is far more slippery than our classical grasp of it would suggest.

ONE OF THE MOST beautiful illustrations of the quantum nature of reality is the famous “double-slit experiment.” Imagine placing a partition with two narrow, parallel slits in it, between a source of light of one colour — like a green laser — and a screen. Only the light that falls on a slit will pass through the partition and travel on to the screen. The light from each slit spreads out through a process called “diffraction,” so that each slit casts a broad beam of light onto the screen. The two beams of light overlap on the screen.

However, the distance the light has to travel from either slit to each point on the screen is in general different, so that when the light waves from both slits arrive at the screen, they may add or they may cancel. The pattern of light formed on the screen is called an “interference pattern”: it consists of alternate bright and dark stripes at the locations where the light waves from the two slits add or cancel.46 Diffraction and interference are classic examples of wave-like behaviour, seen not only in light but in water waves, sound waves, radio waves, and so on.
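
For the record, the standard condition for the stripes, with d the separation between the slits, θ the angle from the slits to a point on the screen, and λ the wavelength, is:

$$d\sin\theta = m\lambda \quad \text{(bright)}, \qquad d\sin\theta = \Bigl(m + \tfrac{1}{2}\Bigr)\lambda \quad \text{(dark)}, \qquad m = 0, 1, 2, \ldots$$

Bright stripes appear wherever the two paths differ by a whole number of wavelengths, so that the waves arrive in step and add.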

Now comes the quantum part. If you dim the light source and replace the screen with a detector, like a digital camera sensitive enough to detect individual photons — Planck’s quanta of light — then you can watch the individual photons arrive. The light does not arrive as a continuous beam with a fixed intensity. Instead, the photons arrive as a random string of energy packets, each one announcing its arrival at the camera with a flash. The pattern of flashes still forms interference stripes, indicating that even though each photon of light arrived in only one place as an energy packet, the photons travelled through both slits and interfered as waves.

Now comes the strangest part. You can make the light source so dim that the long interval between the flashes on the screen means there is never more than one photon in the apparatus at any one time. But then, set the camera to record each separate flash and add them all up into a picture. What you find is that the picture still consists of interference stripes. Each individual photon interfered with itself, and therefore must somehow have travelled through both slits on the way to the screen.

So we conclude that photons sometimes behave like particles and sometimes behave like waves. When you detect them, they are always in a definite position, like a particle. When they travel, they do so as waves, exploring all the available routes: they spread out through space, diffract, and interfere.

In time, it was realized that quantum theory predicts that electrons, atoms, and every other kind of particle also behave in this way. When we detect an electron, it is always in a definite position, and we find all its electric charge there. But when it is in orbit around an atom, or travelling freely through space, it behaves like a wave, exhibiting the same properties of diffraction and interference.

In this way, quantum theory unified forces and particles by showing that each possessed aspects of the other. It replaced the world that Newton and Maxwell had developed, in which particles interacted through forces due to fields, with a world in which both the particles and the forces were represented by one kind of entity: quantized fields possessing both wave-like and particle-like characters.

NIELS BOHR DESCRIBED THE coexistence of the wave and particle descriptions with what he called the “principle of complementarity.” He posited that some situations were best described by one classical picture — like a particle — while other situations were better described by another — like a wave. The key point was that there was no logical contradiction between the two. The words of the celebrated American author of the time, F. Scott Fitzgerald, come to mind: “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.”47

Bohr had a background in philosophy as well as mathematics, and an exceptionally agile and open mind. His writings are a bit mystical and also somewhat impenetrable. His main role at the Solvay Conference seems to have been to calm everyone down and reassure them that despite all the craziness everything was going to work out fine. Somehow, Bohr had a very deep insight that quantum theory was consistent, even if he couldn’t prove it. Nor could he convince Einstein.

Einstein was very quiet at the Fifth Solvay meeting, and there are few comments from him in the recorded proceedings. He was deeply bothered by the random, probabilistic nature of quantum theory, as well as the abstract nature of the mathematical formalism. He famously remarked (on a number of occasions), “God does not play dice!” To which at some point Bohr apparently replied, “Einstein, stop telling God how to run the world.”48 At this and subsequent Solvay meetings, Einstein tried again and again to come up with a paradox that would expose quantum theory as inconsistent or incomplete. Each time, after a day or two’s thought, Bohr was able to resolve the paradox.

Einstein continued to be troubled by quantum theory, and in particular by the idea that a particle could be in one place when it was measured and yet spread out everywhere when it was not. In 1935, with Boris Podolsky and Nathan Rosen, he wrote a paper that was largely ignored by physicists at the time because it was considered too philosophical. Nearly three decades later, it would seed the next revolutionary insight into the nature of quantum reality.

Einstein, Podolsky, and Rosen’s argument was ingenious. They considered a situation in which an unstable particle, like a radioactive nucleus, emits two smaller, identical particles, which fly apart at exactly the same speed but in opposite directions. At any time they should both be equidistant from the point where they were both emitted. Imagine you let the two particles get very far apart before you make any measurement — for the sake of argument, make it light years. Then, at the very last minute, as it were, you decide to measure either the position or the velocity of one of the particles. If you measure its position, you can infer the position of the other without measuring it at all. If instead you measure the velocity, you will know the velocity of the other particle, again without measuring it. The point was that you could decide whether to measure the position or the velocity of one particle when the other particle was so far away that it could not possibly be influenced by your decision. Then, when you made your measurement, you could infer the second particle’s position or velocity. So, Einstein and his colleagues argued, the unmeasured particle must really have both a position and a velocity, even if quantum theory was unable to describe them both at the same time. Therefore, they concluded, quantum theory must be incomplete.

Other physicists balked at this argument. Wolfgang Pauli said, “One should no more rack one’s brain about the problem of whether something one cannot know anything about exists all the same, than one should about the ancient question of how many angels are able to sit on the point of a needle.”49 But the Einstein–Podolsky–Rosen argument would not go away, and in the end someone saw how to make use of it.

· · ·

HAVE YOU EVER WONDERED whether there is a giant conspiracy in the world and whether things really are as they appear? I’m thinking of something like The Truman Show, starring Jim Carrey as Truman, whose life appears normal and happy but is in fact a grand charade conducted for the benefit of millions of TV viewers. Eventually, Truman sees through the sham and escapes to the outside world through an exit door in the painted sky at the edge of his arcological dome.

In a sense, we all live in a giant Truman show: we conceptualize the world as if everything within it has definite properties at each point in space and at every moment of time. In 1964, the Irish physicist John Bell discovered a way to show conclusively that any such classical picture could, with some caveats, be experimentally disproved.

Quantum theory had forced physicists to abandon the idea of a deterministic universe and to accept that the best they could do, even in principle, was to predict probabilities. It remained conceivable that nature could be pictured as a machine containing some hidden mechanisms that, as Einstein put it, threw dice from time to time. One example of such a theory was invented by the physicist David Bohm. He viewed Schrödinger’s wavefunction as a “pilot wave” that guided particles forward in space and time. But the actual locations of particles in his theory are determined statistically, through a physical mechanism to which we have no direct access. Theories that employ this kind of mechanism are called “hidden variable” theories. Unfortunately, in Bohm’s theory, the particles are influenced by phenomena arbitrarily far away from them. Faraday and Maxwell had argued strongly against such theories in the nineteenth century, and since that time, physicists had adopted locality — meaning that an object is influenced directly only by its immediate physical surroundings — as a basic principle of physics. For this reason, many physicists find Bohm’s approach unappealing.

In 1964, inspired by Einstein, Podolsky, and Rosen’s argument, John Bell, working at the European Organization for Nuclear Research (CERN), proposed an experiment to rule out any local, classical picture of the world in which influences travel no faster than the speed of light. Bell’s proposal was, if you like, a way of “catching reality in the act” of behaving in a manner that would simply be impossible in any local, classical description.

The experiment Bell envisaged involved two elementary particles flying apart just as Einstein, Podolsky, and Rosen had imagined. Like them, Bell considered the two particles to be in a perfectly correlated state. However, instead of thinking of measuring their positions or velocities, Bell imagined measuring something even simpler: their spins.

Most of the elementary particles we know of have a spin — something Pauli and then Dirac had explained. You can think of particles, roughly speaking, as tiny little tops spinning at some fixed rate. The spin is quantized in units given by Planck’s constant, but the details of that will not matter here. All that concerns us in this case is that the outcome is binary. Whenever you measure a particle’s spin, there are only two possible outcomes: you will find the particle spinning about the measurement axis either anticlockwise or clockwise at a fixed rate. If the particle spins anticlockwise, we say its spin is “up,” and if it is clockwise, we say its spin is “down.”

Bell hypothesized a situation in which the two Einstein–Podolsky–Rosen particles are produced in what is known as a “spin zero state.” In such a state, if you measure both particles with respect to the same axis, then if you find one of them to be “up,” the other will be “down,” and vice versa. We say that the particles are “entangled,” meaning that measuring the state of one fixes the state of the other. According to quantum theory, the two particles can retain this connection no matter how far apart they fly. The strange aspect of it is that by measuring one, you instantly determine the state of the other, no matter how distant it is. This is an example of what Einstein called “spooky non-locality” in quantum physics.

Bell imagined an experiment in which the particles were allowed to fly far apart before their spins were measured. He discovered a subtle but crucial effect, which meant that no local, classical picture of the world could possibly explain the pattern of probabilities that quantum theory predicts.

To make things more familiar, let us pretend that instead of two particles, we have two boxes, each with a coin inside it. Instead of saying the particle’s spin is “up,” we’ll say the coin shows heads; and instead of saying the particle’s spin is “down,” we’ll say the coin shows tails.

Imagine you are given two identical boxes, each in the shape of a tetrahedron — a pyramid with four equilateral triangular sides. One side is a shiny metal base, and the other three are red, green, and blue. The coloured sides are actually small doors. Each of them can be opened to look inside the pyramid. Whenever you open a door, you see a coin lying inside on the base, showing either heads or tails.

Upon playing with the boxes, you notice that the bases are magnetic and stick together base to base. When the boxes are stuck together like this, the doors are held tightly shut, and there is a soft hum indicating the state of the boxes is being set.

Now you and a friend pull the two boxes apart. This is the analogue of the Einstein–Podolsky–Rosen experiment. You each take a box and open one of its doors. First, you both look through the red door of your box. You see heads and your friend sees tails. So you repeat the experiment. You put the boxes together, pull them apart, and each of you opens the red door. After doing this many times, you conclude that each result is entirely random — half the time your coin shows heads, and half the time it shows tails. But whatever you get, your friend gets exactly the opposite. You try taking the boxes very far apart before opening them, and the same thing happens. You cannot predict your own result, but whatever that result turns out to be, it allows you to predict your partner’s finding with certainty. Somehow, even though each box gives an apparently random result, the two boxes always give opposite results.

It’s strange, but so far there is no real contradiction with a local, classical picture of the world. You could imagine that there is a little machine that makes a random choice of how to program the two boxes when they are placed base to base. If it programs the first box to show heads when the red door is opened, it will program the second box to show tails. And vice versa. This programming trick will happily reproduce everything you have seen so far.

Now you go further with the experiment. You decide that you will both open only the green door. And you find the same thing as before — each of you gets heads or tails half the time, and each result is always the opposite of the other. The same happens with the blue door.

Still, there is no real contradiction with a classical picture of the world. All that is required to reproduce what you have seen is that when the two bases are held together, one box is programmed randomly and the other box is given exactly the opposite program. For example, if the first box is programmed to give heads/heads/tails when you open the red, green, or blue door, then the other is programmed tails/tails/heads when you open the red, green, or blue door. If the first box is programmed heads/heads/heads, the second is programmed tails/tails/tails. And so on. This arrangement would explain everything you have seen so far.

Now you try something different. Again, you put the two bases together and pull the boxes apart. But now, each of you chooses at random which door to open — either red, green, or blue — and records the result. Doing this again and again, many times, you find that half the time you agree and half the time you disagree. Initially, it seems like sanity has been restored: the boxes are each giving a random result. But wait! Comparing your results more carefully, you see that whenever you and your partner happen to open the same-coloured door, you always disagree. So there is still a strong connection between the two boxes, and their results are not really independent at all. The question is: could the boxes possibly have been programmed to always disagree when you open the same-coloured door but to disagree only half the time when you each open a door randomly?

Imagine, for example, that the boxes were programmed to give heads/heads/tails for your box and tails/tails/heads for your friend’s. You pick one of the three doors at random, and your friend does the same. So there are nine possible ways for the two of you to make your choices: red–red, red–green, red–blue, green–red, green–green, green–blue, blue–red, blue–green, and blue–blue. In five of them you will get opposite results, with one of you seeing heads and the other tails, but in four you will agree. What if the boxes were programmed heads/heads/heads and tails/tails/tails? Well, then you would always disagree. Since every other program is, by symmetry, equivalent to one of these two cases (it shows the same face behind either all three doors or exactly two of them), we can safely conclude that however the boxes are programmed, if you open the doors randomly there is always at least a five-ninths chance of your disagreeing on the result. But that isn’t what you found in the experiment: you disagreed half the time.
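If you would like to check this counting for yourself, here is a minimal sketch in Python (the door labels and heads/tails encoding are my own, invented for illustration) that enumerates every possible program and tallies how often the two boxes disagree across the nine door combinations:

```python
from itertools import product

# Each classical "program" assigns heads (H) or tails (T) to the red, green,
# and blue doors of your box; your friend's box always carries the opposite
# program. For each of the 8 possible programs, count how many of the 9
# equally likely pairs of door choices give opposite results.
DOORS = range(3)  # 0 = red, 1 = green, 2 = blue

for program in product("HT", repeat=3):
    friend = tuple("T" if face == "H" else "H" for face in program)
    disagreements = sum(
        program[mine] != friend[theirs]
        for mine, theirs in product(DOORS, DOORS)
    )
    print("".join(program), "vs", "".join(friend), "->", disagreements, "of 9")
```

Every mixed program gives five disagreements out of nine, and the two uniform programs give nine out of nine, so no classical programming scheme can disagree less than five-ninths of the time.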

As you may have already guessed, quantum theory predicts exactly what you found: you agree half the time and disagree half the time. The actual experiment is to take two widely separated Einstein–Podolsky–Rosen particles in a spin zero state and measure their spins along one of three axes, separated by 120 degrees. The axis you choose is just like the door you pick in the pyramidal box. Quantum theory predicts that when you pick the same measurement axis for the two particles, their spins always disagree, whereas if you pick different axes, they agree three-quarters of the time and disagree one-quarter of the time. And if you pick axes randomly, you agree half the time and disagree half the time. As we have just argued with the boxes, such a result is impossible in a local, classical theory.50
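The same numbers fall out of the textbook formula for a spin-zero pair, which I will simply assume here: when the two measurement axes differ by an angle θ, the outcomes disagree with probability cos²(θ/2). A short sketch, continuing in Python, with the three axes 120 degrees apart:

```python
from itertools import product
from math import cos, radians

# Textbook result (assumed, not derived): for a spin-zero pair, outcomes
# measured along axes separated by angle theta disagree with probability
# cos^2(theta / 2).
AXES = [0.0, 120.0, 240.0]  # three measurement axes, 120 degrees apart

def p_disagree(a, b):
    return cos(radians(a - b) / 2) ** 2

print(p_disagree(0, 0))    # same axis: 1.0 -- spins always disagree
print(p_disagree(0, 120))  # different axes: 0.25 -- disagree a quarter of the time
print(sum(p_disagree(a, b) for a, b in product(AXES, AXES)) / 9)  # random: 0.5
```

Averaged over random, independent choices of axis, the disagreement rate is exactly one-half, comfortably below the five-ninths floor that any classical program must respect.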

Before drawing this conclusion, you might worry that the particles somehow communicate with each other, for example by sending a signal at the speed of light. Such a signal could, in principle, let the particles coordinate their spins whenever you chose different measurement axes, so that they agreed three-quarters of the time and disagreed one-quarter of the time, just as quantum mechanics predicts. Experimentally, you can eliminate this possibility by ensuring that at the moment you choose the measurement axis, the particles are so far apart that no signal could have travelled between them, even at the speed of light, in time to influence the result.

In 1982, the French physicists Alain Aspect, Philippe Grangier, and Gérard Roger conducted experiments in which the setting for the measurement axis of Einstein–Podolsky–Rosen particles was chosen while the particles were in flight. This was done in such a way as to exclude any possible communication between the measured particles regarding this choice. Their results confirmed quantum theory’s prediction, showing that the world works in ways we cannot possibly explain using classical notions. Some physicists were moved to call this physics’ greatest-ever discovery.

Although the difference between five-ninths and one-half may sound like small change, it is a little like doing a very long sum and finding that you have proven that 1,000 equals 1,001 (I am sure this has happened to all of us many times, while doing our taxes!). Imagine you checked and checked again, and could not find any mistake. And then everyone checked, and the world’s best computers checked, and everyone agreed with the result. Well, then by subtracting 1,000, you would have proven that 0 equals 1. And with that, you can prove any equation to be right and any equation to be wrong. So all of mathematics would disappear in a puff of smoke. Bell’s argument, and its experimental verification, caused all possible local, classical descriptions of the world to vanish in just the same way.

These results were a wake-up call, emphasizing that the quantum world is qualitatively different from any classical world. They caused people to think carefully about how we might exploit these differences in the future. In Chapter Five, I will describe how the quantum world allows us to do things that would be impossible in a classical world. It is opening up a whole new world of opportunity — of quantum computers, communication, and, perhaps, perception — whose capabilities will dwarf what we have now. As those new technologies come on stream, they may enable a more advanced form of life, capable of comprehending and picturing the functioning of the universe in ways we cannot. Our quantum future is awesome, and we are fortunate to be living at the time of its inception.

· · ·

OVER THE COURSE OF the twentieth century, in spite of Einstein’s qualms, quantum theory went from one triumph to the next. Curie’s radioactivity was understood to be due to quantum tunnelling: a particle trapped inside an atomic nucleus is occasionally allowed to jump out of it, thanks to the spreading out in space of its probability wave. As the field of nuclear physics was developed, it was understood how nuclear fusion powers the sun, and nuclear energy became accessible on Earth. Particle physics and the physics of solids, liquids, and gases were all built on the back of quantum theory. Quantum physics forms the foundation of chemistry, explaining how molecules are held together. It describes how real solids and materials behave and how electricity is conducted through them. It explains superconductivity, the condensation of new states of matter, and a host of other extraordinary phenomena. It enabled the development of transistors, integrated circuits, lasers, LEDs, digital cameras, and all the modern gadgetry that surrounds us.

Quantum theory also led to rapid progress in fundamental physics. Paul Dirac combined Einstein’s theory of relativity with quantum mechanics into a relativistic equation for the electron, in the process predicting the electron’s antiparticle, the positron. Then he and others worked out how to describe electrons interacting with Maxwell’s electromagnetic fields — a framework known as quantum electrodynamics, or QED. The U.S. physicists Richard Feynman and Julian Schwinger and the Japanese physicist Sin-Itiro Tomonaga used QED to calculate the basic properties and interactions of elementary particles, making predictions whose accuracy eventually exceeded one part in a trillion.

Following a suggestion from Paul Dirac, Feynman also developed a way of describing quantum theory that connected it directly to Hamilton’s action formalism. What Feynman showed was that the evolution in time of Schrödinger’s wavefunction could be written using only Euler’s e, the imaginary number i, Planck’s constant h, and Hamilton’s action principle. According to Feynman’s formulation of quantum theory, the world follows all possible histories at once, but some are more likely than others. Feynman’s description gives a particularly nice account of the “double-slit” experiment: it says that the particle or photon follows both paths to the screen. You add up the effect of the two paths to get the Schrödinger wavefunction, and it is the interference between the two paths that creates the pattern of probability for the arrival of particles or photons at various points on the screen. Feynman’s wonderful formulation of quantum theory is the language I shall use in Chapter Four to describe the unification of all known physics.
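In symbols, which I am supplying here in standard modern notation rather than quoting from Feynman, the rule reads: each history contributes a phase given by e raised to i times Hamilton’s action, measured in units of Planck’s constant, and the wavefunction is the sum of these contributions over all histories:

```latex
% Feynman's sum over histories, in standard notation (assumed, not quoted):
% every path from x_0 at time 0 to x at time t contributes a phase fixed by
% Hamilton's action S, in units of hbar = h / (2*pi).
\[
  K(x, t;\, x_0, 0) \;=\; \sum_{\text{paths}\ x(t')} e^{\, i\, S[x(t')] / \hbar},
  \qquad
  \psi(x, t) \;=\; \int dx_0 \; K(x, t;\, x_0, 0)\, \psi(x_0, 0) .
\]
```

For the double-slit experiment, the sum contains just two classes of path, one through each slit, and the cross term between their two phases is precisely the interference pattern on the screen.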

As strange as it is, quantum theory has become the most successful, powerful, and accurately tested scientific theory of all time. Although its rules would never have been discovered without many clues from experiment, quantum theory represents a triumph of abstract, mathematical reasoning. In this chapter, we have seen the magical power of such thinking to extend our intuition well beyond anything we can picture. I emphasized the role of the imaginary number i, the square root of minus one, which revolutionized algebra, connected it to geometry, and then enabled people to construct quantum theory. To a large extent, the entry of i is emblematic of the way in which quantum theory works. Before we observe it, the world is in an abstract, nebulous, undecided state. It follows beautiful mathematical laws but cannot be described in everyday language. According to quantum theory, the very act of our observing the world forces it into terms we can relate to, describable with ordinary numbers.

In fact, the power of i runs deeper, and it is profoundly related to the notion of time. In the next chapter, I will describe how Einstein’s theory of special relativity unified time with space into a whole called “spacetime.” The German mathematician Hermann Minkowski clarified this picture, and also noticed that if he started with four dimensions of space, instead of three, and treated one of the four space coordinates as an imaginary number — an ordinary number times i — then this imaginary space dimension could be reinterpreted as time. Minkowski found that in this way, he could recover all the results of Einstein’s special relativity, but much more simply.51
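Minkowski’s observation can be written in a single line (again in standard notation, which the text above only describes in words): substituting an imaginary fourth coordinate into the ordinary four-dimensional distance yields exactly the spacetime interval of special relativity, minus sign and all:

```latex
% Minkowski's trick: treat the fourth space coordinate as imaginary time.
\[
  ds^2 \;=\; dx^2 + dy^2 + dz^2 + dx_4^2
  \quad\text{with}\quad x_4 = i\, c\, t
  \quad\Longrightarrow\quad
  ds^2 \;=\; dx^2 + dy^2 + dz^2 - c^2\, dt^2 .
\]
```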

It is a remarkable fact that this very same mathematical trick, of starting with four space dimensions and treating one of them as imaginary, not only explains all of special relativity but also, in a very precise sense, reproduces all of quantum physics! Imagine a classical world with four space dimensions and no time. Imagine that this world is in thermal equilibrium, with its temperature given by Planck’s constant. It turns out that if we calculate all the properties of this world, including how all quantities correlate with each other, and then perform Minkowski’s trick, we reproduce all of quantum theory’s predictions. This technique, of representing time as another dimension of space, is extremely useful. For example, it is the method used to calculate the masses of nuclear particles — like the proton and the neutron — on a computer, in theoretical studies of the strong nuclear force.
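In today’s terminology, this is the “Wick rotation” to Euclidean time. Sketched in standard conventions (these formulas are mine, not the book’s), rotating t into −iτ turns each quantum phase into a Boltzmann-like statistical weight, with Planck’s constant standing where temperature would normally stand:

```latex
% Wick rotation (standard conventions, assumed): quantum phases become
% statistical weights, and the sum over histories becomes the partition
% function Z of a classical system in four space dimensions, with hbar
% playing the role of the temperature.
\[
  t \;\to\; -\, i\, \tau, \qquad
  e^{\, i S / \hbar} \;\to\; e^{- S_E / \hbar}, \qquad
  Z \;=\; \int \mathcal{D}\phi \; e^{- S_E[\phi] / \hbar} .
\]
```

This is exactly the form in which the strong-force calculations mentioned above are set up on a computer: the Euclidean weight is a positive number, so it can be sampled statistically, like a thermal ensemble.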

Similar ideas, of treating time as an imaginary dimension of space, are also our best clue as to how the universe behaves in black holes or near the big bang singularity. They underlie our understanding of the quantum vacuum, and how it is filled with quantum fluctuations in every field. The vacuum energy is already taking over the cosmos and will control its far future. So, the imaginary number i lies at the centre of our current best efforts to face up to the greatest puzzles in cosmology. Perhaps, just as i played a key role in the founding of quantum physics, it may once again guide us to a new physical picture of the universe in the twenty-first century.

Mathematics is our “third eye,” allowing us to see and understand how things work in realms so remote from our experience that they cannot be visualized. Mathematicians are often viewed as unworldly, working in a dreamed-up, artificial setting. But quantum physics teaches us that, in a very real sense, we all live in an imaginary reality.