1

The Face in the Window

How, when you stand in front of a window, the most shocking discovery in the history of science – that ultimately things happen for no reason – is literally staring you in the face

‘Une difficulté est une lumière. Une difficulté insurmontable est un soleil.’ (A difficulty is a light. An insurmountable difficulty is a sun.)

Paul Valéry

   

‘No progress without paradox.’
John Wheeler, 1985

It is night-time and it is raining. You are staring dreamily out of a window at the lights of the city. You can see the cars driving past on the street and you can see the faint reflection of your face among the runnels of water streaming down the pane. Believe it or not, this simple observation is telling you something profound and shocking about fundamental reality. It is telling you that the Universe, at its deepest level, is founded on randomness and unpredictability, the capricious roll of a dice – that, ultimately, things happen for no reason at all.

   

The reason you can see the lights of the city outside and simultaneously the faint image of your face staring back at you is because about 95 per cent of the light striking the window goes straight through while about 5 per cent is reflected. This is easy to understand if light is a wave, like a ripple on water, which is the commonly held view. Imagine a speedboat streaking across a lake and creating a bow wave which runs into a piece of partially submerged driftwood. Most of the wave just keeps on going, unaffected by the obstacle, while a small portion doubles back on itself. Similarly, when a light wave encounters the obstacle of a window, most of the wave is transmitted, while a small portion is reflected.

This explanation of why you see your face in a window is straightforward. It certainly does not appear to have any profound implications for the nature of ultimate reality. However, this is an illusion. Light is not what it seems. It has a trick up its sleeve which undermines this simple picture and changes everything. In the twentieth century, a number of phenomena were discovered that revealed that light behaved not as a wave, like a ripple spreading on a pond, but as a stream of bullet-like particles. For instance, there was the Compton effect, which revealed something very peculiar about the way light bounced, or ‘scattered’, off an electron. Discovered in 1897 by Cambridge physicist ‘J. J.’ Thomson, the electron was a particle smaller than an atom. In fact, it was one of the atom’s key constituents.

In 1920, the American physicist Arthur Compton decided to investigate what happened to light when it was shone on electrons. He had a picture in his mind of light waves bouncing off an electron like water waves off a buoy. If you have seen such a thing, you will know that the size, or ‘wavelength’, of the waves remains unchanged. In other words, the distance between successive wave crests is the same for the outgoing wave as the incoming wave. But in Compton’s experiment this was not the case at all. After the light waves had bounced off electrons, their wavelength was bigger than before. And the more the direction of the light was changed in the encounter, the bigger the change in wavelength. It was as if the mere act of bouncing off an electron magically changed blue light, which is characterised by a short wavelength, into red light, which has a longer wavelength.1 A longer, more sluggish wave turns out to be less energetic than a short, frenetic wave. So what Compton’s experiments were telling him was that, when light bounced off an electron, it was somehow sapped of energy.
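The shift Compton measured turned out to obey a strikingly simple rule: the increase in wavelength depends only on the angle through which the light is deflected, Δλ = (h/mc)(1 − cos θ), where h/mc ≈ 0.0024 nanometres is the so-called Compton wavelength of the electron. A minimal sketch in Python (the 90- and 180-degree examples are purely illustrative):

```python
import math

H = 6.626e-34        # Planck's constant (J s)
M_E = 9.109e-31      # electron mass (kg)
C = 2.998e8          # speed of light (m/s)

def compton_shift_nm(angle_deg):
    """Increase in wavelength (nm) for light scattered through angle_deg."""
    compton_wavelength = H / (M_E * C)                 # ~2.43e-12 m
    shift_m = compton_wavelength * (1 - math.cos(math.radians(angle_deg)))
    return shift_m * 1e9                               # metres -> nanometres

# Light that carries straight on (0 degrees) is unchanged; light bounced
# straight back (180 degrees) suffers the maximum shift, twice the
# Compton wavelength of the electron.
print(compton_shift_nm(90))    # ~0.00243 nm
print(compton_shift_nm(180))   # ~0.00486 nm
```

Tiny as these shifts are, they were measurable, and their dependence on angle alone was exactly what a billiard-ball collision, not a wave bouncing off a buoy, would produce.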

Compton’s mental picture of what was going on was knocked for six. The light in his experiments was not behaving anything like a water wave bouncing off a buoy. In fact, the more he thought about it, the more he realised that it was behaving like a billiard ball hitting another billiard ball. When a ball is struck by the cue ball, it shoots off, carrying with it some of the energy of the cue ball. Inevitably, the cue ball loses energy. Electrons were known to be like tiny billiard balls, but light was known to ripple through space like a wave. Compton’s experiments were unequivocal, however. Despite centuries of evidence to the contrary, light must also consist of particles like tiny billiard balls. For his ground-breaking work in confirming the particle-like nature of light, Compton was awarded the 1927 Nobel Prize for Physics.

More evidence that light behaved like a stream of particles came from the photoelectric effect, familiar to everyone who sees supermarket doors part like the Red Sea when they walk towards them. What triggers the doors to swish aside is the breaking of a beam of light by an approaching leg or a foot. The beam illuminates a ‘photocell’, a device containing a metal which spits out electrons whenever light falls on it. This happens because the electrons are only loosely bound to their parent atoms, so the energy delivered by the light is sufficient to kick them free. When someone breaks the light beam, the photocell is cast into shadow and the sputtering of electrons stops. The electronics are rigged in such a way that the instant the flow of electrons chokes off the doors open.

So what has the photoelectric effect got to do with the particle nature of light? If light is a wave, it is nigh on impossible to explain how it can deliver energy efficiently to a tiny, localised electron. Being spread out, a typical light wave will interact with a large number of electrons spread over the surface of the metal. Inevitably, some will get kicked out after others – calculations show the delay can be as long as ten minutes. Imagine if the flow of electrons took ten minutes to build up in the photocell, so supermarket customers had to wait ten minutes for an automatic door to open.

Everything makes sense if the light is made of tiny particles and each interacts with a single electron in the metal. Rather than spreading its energy over large numbers of electrons, the light tied up in such ‘photons’ packs a real punch. Not only does each photon eject a single electron but it ejects it promptly, not after a ten-minute delay. Thank the particle-like nature of light for your prompt admission to a supermarket.
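The one-photon-one-electron picture can be summed up in Einstein’s photoelectric equation: the electron flies off with at most the photon’s energy, hf, minus the ‘work function’, the energy binding it to the metal. A sketch, with illustrative numbers (the 2.3 eV work function is roughly that of sodium; the wavelengths are arbitrary choices):

```python
H = 6.626e-34      # Planck's constant (J s)
C = 2.998e8        # speed of light (m/s)
EV = 1.602e-19     # joules per electronvolt

def max_kinetic_energy_ev(wavelength_nm, work_function_ev):
    """Einstein's photoelectric equation, KE_max = hf - phi, in eV.
    Returns None if the photon is too feeble to free an electron."""
    photon_energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    ke = photon_energy_ev - work_function_ev
    return ke if ke > 0 else None

# Blue light at 400 nm packs ~3.1 eV per photon - enough to kick an
# electron out of a metal with a 2.3 eV work function, instantly.
print(max_kinetic_energy_ev(400, 2.3))   # ~0.8 eV
# Red light at 700 nm carries only ~1.8 eV per photon - no electrons
# at all, no matter how bright the beam.
print(max_kinetic_energy_ev(700, 2.3))   # None
```

Notice the all-or-nothing character: below the threshold, turning up the brightness adds more photons but no photon has enough punch, which is exactly what experiments show and exactly what a wave picture cannot explain.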

It was for explaining the photoelectric effect in terms of tiny chunks, or ‘quanta’, of light that Einstein won the 1921 Nobel Prize for Physics. Many people find this surprising. They wonder why he did not win the prize for ‘relativity’, the theory for which he is most famous and which changed for ever our view of space and time. Einstein himself, however, always saw relativity as a natural and unsurprising outgrowth of nineteenth-century physics.2 He considered ‘quanta’ the only truly revolutionary idea of his life.

Einstein published his paper on the existence of quanta in the same ‘miraculous year’ as his theory of relativity. Five years earlier, in 1900, the German physicist Max Planck had found a way to explain the puzzling character of the heat coming from a furnace by suggesting that atoms can vibrate only at certain permissible energies and that those energies come in multiples of some basic chunk, or quantum, of energy. Planck believed these quanta to be no more than a mathematical sleight of hand, with no physical significance whatsoever. Einstein was the first person to view them as truly real – as flying through space as a stream of photons in a beam of light.

The Matchbox that Ate a 40-Tonne Truck

Actually, the fact that light must in some circumstances behave as tiny, localised particles is forced on us by the most familiar of everyday phenomena – the emission of light by the filament of a light bulb and the absorption of light by your eye. The reason has to do with the make-up of the filament and your retina. Like all matter, they are made of atoms.

The idea that everything is made of atoms comes from the Greek philosopher Democritus, who, around 440 BC, picked up a rock or a branch or maybe it was a piece of pottery and asked himself: ‘If I cut this object in half, then cut the halves in half, can I go on subdividing it like this for ever?’ Democritus answered his own question. It was inconceivable to him that matter could be subdivided in this way for ever. Sooner or later, he reasoned, you must come to a tiny grain of matter which could not be cut in half any more. Since the Greek for ‘uncuttable’ was a-tomos, Democritus’s ultimate grains of matter have come to be known as ‘atoms’.

Democritus actually went further and postulated that atoms come in a handful of different types, like microscopic Lego bricks, and that, by assembling them in different ways, it is possible to make a rose or a cloud or a shining star. But the key idea is that reality is ultimately grainy, composed of tiny, hard bullets of matter. It is an idea that has certainly stood the test of time.3

Atoms turn out to be very small. It takes more than a million to span a pinhead. Confirming their existence was therefore very hard. A lot of indirect evidence accumulated over the centuries. However, remarkably, no one actually ‘saw’ an atom until 1981, when two physicists at IBM built an ingenious device called the Scanning Tunnelling Microscope.

The STM earned Gerd Binnig and Heinrich Rohrer the 1986 Nobel Prize for Physics. Basically, the device drags a microscopic ‘finger’ across the surface of a material, sensing the up-and-down motion as it passes over the atoms in much the same way that a blind person senses the undulations of someone’s face with their finger. And, in the same way a blind person builds up a mental picture of the face they are feeling, the STM builds up a picture on a computer display of the atomic landscape over which it is travelling.

Using the STM, Binnig and Rohrer became the first people in history to look down, like gods, on the microscopic world of atoms. And what they saw, swimming into view on their computer screen, was exactly what Democritus had imagined 2,500 years earlier. Atoms looked like tiny tennis balls. They looked like apples stacked in boxes. Never, in the history of science, had someone made a prediction so far in advance of its experimental confirmation. If only Binnig and Rohrer had a time machine. They could have transported Democritus to their Zürich lab, stood him in front of their remarkable image and said: ‘Look. You got it right.’ Just like artists who die in obscurity, never having seen their reputations go stratospheric and their paintings sell for tens of millions of pounds, scientists may never live to see the spectacular success of their ideas.

Atoms, it turns out, are not the ultimate grains of matter. They are made of smaller things. Nevertheless, Democritus’s idea that matter is ultimately grainy, not continuous, persists, with ‘quarks’ and ‘leptons’ now wearing the mantle of nature’s uncuttable grains. But quarks, it turns out, are not important when it comes to the meeting of light and matter in your eye or in the filament of a light bulb. When light is absorbed or spat out, it is atoms that do the absorbing and spitting. And herein lies the problem.

An atom, according to our theory of matter, is a tiny, localised thing like a microscopic billiard ball. Light, on the other hand, is a spread-out thing like a ripple on a pond. Take visible light. A convenient measure of its size is its wavelength – the distance it travels during a complete up-and-down oscillation, which is simply the distance between successive wave crests. The wavelength of visible light is about 5,000 times bigger than an atom. Imagine you have a matchbox. You open it and out drives a 40-tonne truck. Or say a 40-tonne truck is driving towards you; you open your matchbox and the truck disappears inside. Ridiculous? But this is precisely the paradox that exists at the interface where light meets matter.
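The factor of 5,000 is easy to check with round figures (green light in mid-visible, and a typical atom a tenth of a nanometre across):

```python
wavelength_green_m = 5e-7    # ~500 nm, light in the middle of the visible range
atom_diameter_m = 1e-10      # ~0.1 nm, a typical atom

ratio = wavelength_green_m / atom_diameter_m
print(ratio)   # ~5,000 - the light wave dwarfs the atom that emits it
```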

How does an atom in your eye swallow something 5,000 times bigger than itself? How does an atom in the filament of a light bulb cough out something 5,000 times more spread out? The British survival expert Ray Mears said during one of his TV programmes: ‘Nothing fits inside a snake like another snake.’ Apply this logic to the interface between light and matter. If light is to fit inside an atom, which is small and localised, it too must be small and localised. The trouble is there are a thousand instances – most notably Young’s double-slit experiment – where light shows itself to be a spread-out wave.

In the first decades of the twentieth century, physicists too went round and round in circles, trying desperately to resolve paradoxes of this kind. As the German physicist Werner Heisenberg wrote: ‘I remember discussions which went through many hours until very late at night and ended almost in despair; and when at the end of the discussion I went alone for a walk in the neighbouring park I repeated to myself again and again the question: Can nature possibly be so absurd as it seemed to us in these atomic experiments?’

A paradox where one theory predicts one thing in a particular circumstance and another theory something quite different is often hugely fruitful. It tells us that one theory at least is wrong. And the bigger and better established the theories at loggerheads, the more revolutionary the consequences. In the case of light being emitted from a light bulb or being absorbed by your eye, the two theories which predict conflicting things are the wave theory of light and the atomic theory of matter. And they are two of the biggest and best-established theories of all.

So which theory is wrong? The extraordinary answer embraced by physicists is both. Or neither. Light is both a wave and a particle. Or, rather, it is something for which we have no word in our vocabulary and nothing we can compare it with in the everyday world. It is fundamentally ungraspable – like a three-dimensional object is to creatures confined to the two-dimensional world of a sheet of paper, with no concept of up above or down below. All they can ever experience are ‘shadows’ of the object, never the object in its entirety. Similarly, light is not a wave or a particle but ‘something else’ that we can never grasp completely. All we can see are its shadows – in some circumstances its wave-like face and in others its particle-like face.

Clearly, atoms do spit out light. But, just as clearly, visible light is many thousands of times bigger than an atom that spits it out. Both facts are incontrovertible. The only way to resolve the paradox, therefore, is to accept something that sounds like sheer madness – that light is both thousands of times bigger than and smaller than an atom. It is both spread-out and localised. It is both a wave and a particle. When it travels through space, light travels like a ripple on a pond. However, when it is absorbed or spat out by an atom, it behaves like a stream of tiny machine-gun bullets. Imagine you are standing by a fire hydrant in New York’s Times Square and simultaneously spread out like a fog throughout Manhattan. Ridiculous? Yes. Nevertheless, that is the way light is.

The wave picture of light was correct. So too was the particle picture. Paradoxically, light is both a wave and a particle.

A World that Defies Common Sense

Should we be surprised to find that light is fundamentally different from anything in the everyday world? Should we be surprised that it is ungraspable in its entirety, that its properties are counter-intuitive, that they defy common sense? Perhaps it helps to spell out what we mean by intuition or common sense. Really, it is just the body of information we have gathered about how the world around us works. In evolutionary terms we needed that information to survive on an African plain in the midst of a lot of creatures which were bigger, faster and fiercer than us. Survival depended on having vision that enabled us to see relatively big objects between us and the horizon, hearing that enabled us to hear relatively loud sounds, and so on. There was no survival value in developing senses that could take us beyond the world of our immediate surroundings – eyes, for instance, that could show us the microscopic realm of atoms. Consequently, we developed no intuition whatsoever about these domains. We should, therefore, not be surprised that when we began to explore the domain of the very small compared to our everyday world, we found counter-intuitive things. An atom is about 10 billion times smaller than a human being. It would be surprising if it behaved in any way like a football or a chair or a table, or anything else in the world of our senses.

The first person to realise that the fundamental reality that underpins the everyday world is totally unlike the everyday world was the Scottish physicist James Clerk Maxwell, arguably the most important physicist between the time of Newton and Einstein (tragically, he died of stomach cancer, aged only 48). His great triumph, in the 1860s, was to distil all magnetic and electrical phenomena into one neat set of formulae. ‘Maxwell’s equations’ are so super-compact you could write them on the back of a stamp (if you have small handwriting!).

Up until the time of Maxwell, physicists modelled the world in terms of things they could see around them. They talked, for instance, of a Newtonian ‘clockwork’ Universe. Maxwell was no different. Initially, when struggling to understand how a magnet reached out and tugged on a piece of metal, for instance, he imagined the space between the magnet and the metal filled with invisible toothed cogs. A cog pressed tight up against the magnet turned another cog, which turned another, and so on. In this way, the force was transmitted from the magnet to the metal. When the picture did not fit his observations of magnetism, Maxwell modified it, imagining that the cogs were made of springy material that flexed as they turned. When this did not work either, he threw up his hands in despair and dispensed with such ‘mechanical’ models. Nature, he realised, was not like anything in everyday experience.

Instead of invisible cogs turning, Maxwell imagined ghostly electric and magnetic ‘force fields’ permeating space, with no parallel in the everyday world. It was a seismic break with the past. In the long term it would liberate physics, enabling Einstein to imagine gravity as a warpage of four-dimensional space–time and present-day physicists to hypothesise that the fundamental building blocks of matter are tiny strings of mass-energy vibrating in an unimaginable space of ten dimensions.

It took a while for physicists to learn the hard lesson that, in their quest to understand fundamental reality, they would have to do without the safety net of everyday intuition. They had still not learnt the lesson, in fact, when in the first decades of the twentieth century the titanic collision between the theories of light and matter spawned the wave–particle theory of light.

God Plays Dice

If light behaves as a stream of particles – and this is the point of this discussion – it has serious implications for understanding why you can see the reflection of your face in a window. Why? Well, what is perfectly straightforward to explain if light is a wave – remember the wave from the speedboat hitting the partially submerged wood and being partially reflected – is fiendishly difficult to explain if light is instead a stream of bullet-like particles. Photons, after all, are identical. However, if they are identical, surely they should be affected identically by a pane of glass. Either they should all be transmitted or they should all be reflected. So how can 95 per cent go through and 5 per cent bounce back?

This is a classic case of a physical paradox – a situation in which one theory, the particle theory of light, predicts one thing, whereas our common-sense experience tells us something contradictory. Our experience is clearly trustworthy – we can indeed see the scene outside a window and simultaneously the faint reflection of our face. Consequently, something must be awry with our idea of photons.

There is only one logical possibility: each photon must have a 95 per cent chance of being transmitted and a 5 per cent chance of being reflected. It may seem an innocuous fact, but actually it is a bombshell dropped into the heart of physics. For if we can know only the chance, or ‘probability’, of a photon going through a window or coming back, then we have tacitly given up all hope of knowing for sure what an individual photon will actually do. As realised by Einstein – ironically, the first person to propose the existence of the photon – this was a catastrophe for physics. It was utterly incompatible with everything that had gone before. Physics was a recipe for predicting the future with total confidence. If, at midnight, the Moon is over here in the sky, using Newton’s law of gravity we can predict that at the same time tomorrow night it will be over there – with 100 per cent certainty. But take a photon impinging on a window pane. We can never predict with certainty what it will do. Whether it is transmitted or reflected is totally, utterly random, determined solely by the vagaries of chance.
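The bombshell can be made concrete with a toy simulation: treat each photon at the pane as an independent 95/5 toss of a loaded coin. Nothing about an individual photon tells you what it will do; only the tally over many photons settles down to the predicted fractions. A sketch (the photon count and the random seed are arbitrary):

```python
import random

random.seed(42)   # makes this run reproducible; real photons offer no such luxury

def photon_hits_pane(p_transmit=0.95):
    """One photon meeting the glass: pure chance decides its fate."""
    return "through" if random.random() < p_transmit else "reflected"

n = 100_000
outcomes = [photon_hits_pane() for _ in range(n)]
fraction_through = outcomes.count("through") / n

print(fraction_through)   # close to 0.95 - yet no single outcome was predictable
```

The fraction transmitted converges on 95 per cent, reproducing what the wave picture predicted all along; what is lost for ever is any account of why *this* photon went through and *that* one came back.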

This kind of chance is not the type familiar from the roll of a dice and the spin of a roulette wheel. It is far more fundamental – and sinister. If all the myriad forces acting on a dice were known, a physicist with a big enough computer and enough dogged patience could use Newton’s laws of motion to predict the outcome. The problem is there are so many factors influencing the trajectory of a dice – from the initial impetus given it by a gambler to the currents of air that buffet it to the roughness of the tabletop over which it tumbles – that it is beyond anyone’s capabilities to pin down all of them with the necessary precision to predict the outcome with certainty.

But the key thing to recognise is that our ignorance of the factors influencing the roll of a dice is merely a practical problem. It is not impossible that, in the future, someone with sufficient tenacity – not to mention time on their hands – might be able to determine to the required degree of accuracy all the forces acting on a dice. The point is, the roll of a dice is not inherently unpredictable. It is only unpredictable in practice.

Contrast this with a photon. What a photon does when it encounters a pane of glass is utterly unpredictable – not merely in practice but in principle. It is not a matter of us being ignorant of all the factors that influence what it does. There are no factors to be ignorant of. A photon goes through a window rather than bouncing back out of sheer bloody-mindedness – for no reason at all.

In the day-to-day world every event is triggered by a prior event. A cause always precedes an effect. The dice comes up the number it does because of the effect of all the forces acting on it. You trip and stumble while out walking because a paving stone is loose and catches the heel of your shoe. But what a photon does on encountering a window pane is triggered by no prior event. It is an effect without a cause. Though the probability of a dice coming up ‘six’ can be determined in principle, there is no prior event from which the probability of a photon going through a window can be determined, no hidden machinery whirring beneath the skin of reality. It is nature’s bedrock, its bottom line. There is nothing deeper. For some mysterious reason, the Universe is simply constructed this way.4

The kind of unpredictability that characterises photons at a window pane in fact characterises their behaviour in all conceivable circumstances. It is, actually, typical of the behaviour of not just photons but all denizens of the microscopic world of atoms and their constituents – the ultimate building blocks of reality. An atom of radium can disintegrate, or ‘decay’, its central ‘nucleus’ exploding violently like a tiny grenade. But there is absolutely no possibility of predicting exactly when an individual radium nucleus will self-destruct, only the probability that it will happen within a particular interval of time.
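Quantum theory can state precisely how *probable* decay is within a given interval while saying nothing about when any particular nucleus will go. For radium-226, whose half-life is about 1,600 years, the chance that a given nucleus decays within a time t is 1 − (1/2)^(t/1600). A sketch:

```python
def decay_probability(t_years, half_life_years=1600.0):
    """Chance that one nucleus decays within t_years (radium-226 by default)."""
    return 1 - 0.5 ** (t_years / half_life_years)

print(decay_probability(1600))   # 0.5  - even odds over one half-life
print(decay_probability(100))    # ~0.04 - but WHICH nuclei go is pure chance
```

Over one half-life, half of any large sample will have exploded, with clockwork statistical regularity; which half is decided by nothing at all.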

The unpredictability of the microscopic world is unlike anything human beings have ever come across before. It is something entirely new under the sun. This is why Einstein got the Nobel Prize for deducing the particle-like nature of light from the photoelectric effect, and not for the theory of relativity. He – and the Nobel committee – realised it was a truly revolutionary discovery.

The recognition that the microscopic world is ultimately controlled by irreducible, random chance is probably the single most shocking discovery in the history of science. It so appalled Einstein that he famously declared: ‘God does not play dice with the universe’, and steadfastly refused to believe that things at a fundamental level in the Universe happen for no reason at all. (The great quantum pioneer Niels Bohr retorted: ‘Stop telling God what to do with his dice.’) The bitter irony, not lost on Einstein, was that he was the one who, by postulating the existence of the photon, had inadvertently set loose the genie of randomness in the heart of physics.5

To Einstein’s dismay, other physicists in the 1920s appeared to embrace the quantum idea that things can happen for no reason at all. But Einstein’s intuition told him something important. If naked randomness was admitted into the heart of the world, it would inevitably spawn even more shocking consequences – consequences so troubling, he believed, that physicists would be forced to abandon the whole quantum idea. It took until 1935 but, eventually, Einstein found what he was looking for. Working with two other physicists – Nathan Rosen and Boris Podolsky – he discovered that if quantum theory was right, then an inescapable consequence was that two atoms could influence each other instantaneously, even on opposite sides of the Universe.

To appreciate how Einstein came to such a conclusion requires a digression. This chapter began with the assertion that the reflection of your face in a window pane is easy to understand if light is a wave like a ripple on a pond. But there was no mention of how we ever came to suspect that light is a wave. After all, it does not look like a wave.

Light Is a Wave

The man who demonstrated that light was a wave was the Englishman Thomas Young. He was a polymath who not only made the first breakthrough in deciphering the Egyptian hieroglyphs on the Rosetta Stone but realised that the eye must contain separate receptors for blue, green and red light. Arguably his greatest achievement, however, was to lay bare the wave nature of light.

Young had a strong suspicion that light was a wave rather than a stream of bullet-like ‘corpuscles’, as Newton believed. In 1678, the Dutch scientist Christiaan Huygens had found that if light were a wave rippling through space, it was possible to explain many optical phenomena, such as the reflection of light by a mirror and the bending, or ‘refraction’, of the path of light by a dense medium such as glass. Huygens’ wave theory even predicted the correct bending of light as it travelled from air into a block of glass, whereas Newton’s did not – at least not without some tinkering. Such was Newton’s God-like standing, however, that Huygens’ theory was pretty much ignored – until Young.

A central characteristic of waves of any kind is that, when they pass through each other, they alternately reinforce and cancel each other out. They reinforce, or ‘constructively interfere’, where the peaks of one wave coincide with the peaks of another; and they cancel, or ‘destructively interfere’, where the peaks of one coincide with the troughs of another. This ‘interference’ is hypnotic to watch in a puddle during a rain shower. As the concentric ripples from impacting raindrops spread through each other, they alternately boost and nullify each other.

Young knew of this effect. He knew also that if a similar effect occurred with light, the fact it was not visible to the naked eye could only mean that the crests of light waves must be separated by far less than the width of a human hair, one of the smallest things discernible to the human eye. Making the interference of such tiny waves visible was a challenge, to say the least. But Young rose to it.

The key, he realised, was to create two similar sources of concentric ripples just like those spreading from two raindrops that puncture the skin of a pond. As the ripples spread through each other, they would interfere. At the places where the ripples destructively interfered, cancelling each other out, there would be darkness; and at the places where they constructively interfered, bolstering each other, there would be enhanced brightness. The dark and light regions would alternate. To see them it would be necessary only to put some kind of white screen at a location where the concentric ripples overlapped. This would reveal the interference as a pattern of alternating light and dark zebra stripes, not unlike a supermarket bar code.

It was crucial for the success of Young’s experiment that the light be of a single colour, or as close to a single colour as was possible. Different colours of light are today known to correspond to different wave sizes, or ‘wavelengths’, with the crests of red light being roughly twice as far apart as the crests of blue. Young may have suspected this. Since demonstrating the interference of light required perfect cancellation and perfect reinforcement of the overlapping light waves, it could happen only if there was light of a single colour.

In 1801, Young created his two sources of concentric ripples by shining light on one side of an opaque screen with two closely spaced, parallel slits cut in it. On the other side of the screen, the light emerged from each slit, spreading out and passing through the light from the other slit. In the region where the ripples overlapped Young interposed his white screen. And there, triumphantly, he saw a pattern of light and dark stripes – the unmistakable signature of interference. Beyond any doubt light was a wave. The reason it was not obvious to the naked eye was because the waves were so small: only a thousandth of a millimetre from crest to crest.6
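The geometry of the experiment shows why the stripes were visible at all: the spacing between neighbouring bright bands is approximately λD/d, where λ is the wavelength, d the separation of the slits and D the distance to the viewing screen. A sketch with illustrative figures (a half-millimetre slit spacing and a screen a metre away are assumptions, not Young’s actual dimensions):

```python
def fringe_spacing_mm(wavelength_nm, slit_separation_mm, screen_distance_m):
    """Distance between adjacent bright fringes, in millimetres."""
    wavelength_m = wavelength_nm * 1e-9
    d_m = slit_separation_mm * 1e-3
    return wavelength_m * screen_distance_m / d_m * 1e3

# Orange light at 600 nm, slits 0.5 mm apart, screen 1 m away:
print(fringe_spacing_mm(600, 0.5, 1.0))   # ~1.2 mm - wide enough to see by eye
```

The trick is magnification: the screen, sitting far from the slits, stretches wavelength-scale differences into millimetre-scale stripes, which is how waves a thousandth of a millimetre across leave a pattern the naked eye can read.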

Why is it necessary to know about an experiment at the beginning of the nineteenth century that demonstrated the wave nature of light? Because this was not the end of the story for Young’s double-slit experiment. Not by a long chalk. In the twentieth century, it reappeared in a new incarnation. And, remarkably, this time it demonstrated not the wave character of light but something else – something scarcely believable. That it is possible for a single microscopic entity – a photon or an atom – to be in two places at once.

Waves Inform Particles

Recall that Young shone light of a single colour, or wavelength, onto an opaque screen into which were incised two closely spaced, parallel slits. Each slit acted as a source of secondary light waves, just as two stones dropped in a pond together act as sources of concentric ripples. And, just as the ripples from two stones pass through each other, alternately reinforcing and cancelling, so too do the light ripples from the two slits. Where they reinforce, the light is boosted in brightness; where they cancel, it is snuffed out, leaving darkness. Young interposed a second screen in the region where the waves overlap. And there for all to see were alternating bands of light and dark. Beyond a shadow of a doubt, light was a wave.

But, beyond a shadow of a doubt, it was also a stream of particles. Arthur Compton had shown it to bounce off electrons as if it was made of tiny billiard balls, and there was also the photoelectric effect, in which individual particles of light liberated individual electrons from the surface of a metal. The key question therefore was: how is it possible to reconcile this with Young’s experiment?

Think about photons of visible light. Each carries very little energy. This is why nobody noticed their existence before Einstein. If photons carried large amounts of energy, when someone used a dimmer switch to turn up a light, the brightness would jump in abrupt steps from zero to some minimum brightness, then double that brightness, triple that brightness, and so on. We never see a light source brighten like this. And the reason is that individual photons carry so little energy that the steps, though present, are simply too minuscule to be discernible with the naked eye.

The light source in Young’s experiment is also composed of trillions upon trillions of tiny photons. Although this explains why its particle nature is not obvious, it does not explain how the photons conspire to form an interference pattern of dark and light bands, the unequivocal signature of waves, not particles. One possibility is that when large numbers of photons are present, their particle-like nature is somehow washed out in favour of their wave-like nature, that they lose their individuality like a lone person in a crowd at a football match. But what if we force light to show its particle hand? This can be done by carrying out Young’s experiment with a source of light so weak that it contains not trillions upon trillions of photons but only a few. If the source is so weak that photons arrive at the slits in the screen one at a time, with long intervals between, there will be no doubt at all that we are dealing with particles.

The human eye cannot detect single photons, so the arrival of photons on the second screen will be invisible. Nevertheless, this can be overcome by covering the screen with an array of sensitive detectors capable of registering individual particles of light. Think of them as tiny buckets which collect photons, just as real buckets collect raindrops. If the photon buckets are connected to a computer, what they pick up can be displayed on a screen and so made visible to the human eye.

If we set up this high-tech version of Young’s experiment, what might we expect to see? Well, it is a fundamental feature of interference that it takes two waves to mingle, or interfere, with each other. In the case of Young’s experiment, the two sets of waves emerge like concentric ripples from the two slits in the opaque screen. However, if photons are arriving at the screen one at a time, with large gaps of time in between, then it stands to reason there will only ever be one photon at a time emerging from one slit or the other. Such a solitary photon will have no other photon to mingle with. There can be no interference. So, after the experiment has been running a long while and lots of photons have gone through the two slits and peppered the second screen, the pattern on the computer monitor should simply reveal two parallel, bright bands – the images of the two slits.

But this is not what happens.

At first, the computer screen appears to show the photons raining down all over the second screen, as if fired from some kind of scattergun. However, as the experiment continues, something remarkable happens. Slowly but surely a pattern begins to emerge, like Lawrence of Arabia appearing out of the desert dust, built up a photon at a time from the particles intercepted by the tiny light buckets. And it is not just any pattern. It is a pattern of alternating light and dark bands, precisely the parallel interference stripes seen by Young in 1801. But how can this be? Interference arises from the mingling of waves from two sources. Here, the light is so weak that it is demonstrably made of particles – the light-bucket detectors, after all, register them one click at a time – and each photon has no other with which to mingle.

Welcome to the weird world of the quantum. Photons doing things for absolutely no reason at all turns out to be merely the beginning of the madness.

It seems that photons, even when there are so few of them that they are undeniably individual particles, have some awareness of their wave nature. After all, they end up on the second screen at exactly the places that waves emerging from the two slits would reinforce each other, while studiously avoiding the places where waves from the two slits would cancel. It is as if there is a wave associated with each photon that somehow directs it where to go on the screen.

And this is pretty much the picture most physicists, rightly or wrongly, carry in their minds. There is a wave associated with a photon. It informs it where to go or what to do. There is a twist, however. The wave is not a real, physical wave that can be seen or touched like a wave on water. Instead, it is an abstract, mathematical thing. Physicists imagine this quantum wave, often called the ‘wave function’, as extending throughout space. Where the wave is big, or highly peaked, there is a high chance, or probability, of finding the photon; and where it is small, or relatively flat, there is a low probability of finding it. To be a little more specific, the chance, or probability, of finding a particle at a particular location in space is the square of the height of the quantum wave at that location. Quantum waves can mingle and interfere and, when they do, the interference pattern produced determines where the photons are most likely to be found.
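The ‘square of the height’ rule can be sketched in a few lines of code. This is a minimal, illustrative calculation of the double-slit pattern, with made-up but plausible numbers for the wavelength, slit separation and screen distance; it simply adds the two waves and squares the height of the result to get the relative chance of a photon landing at each point.

```python
import math

# A minimal sketch of two-slit interference (illustrative numbers only).
# The chance of finding a photon at a point on the screen is the square of
# the height of the combined quantum wave at that point.

wavelength = 500e-9   # 500 nm light, crest to crest (assumed)
slit_gap = 50e-6      # separation between the two slits (assumed)
distance = 1.0        # slits-to-screen distance in metres (assumed)

def brightness(x):
    """Relative probability of a photon landing at screen position x."""
    path_diff = slit_gap * x / distance               # small-angle approximation
    phase = 2 * math.pi * path_diff / wavelength      # phase lag between the waves
    height = math.cos(phase / 2)                      # height of wave 1 + wave 2
    return height ** 2                                # probability = height squared

# Crude picture of the bands: '#' bars for brightness across the screen
for mm in range(-5, 6):
    bar = "#" * round(20 * brightness(mm / 1000))
    print(f"{mm:+3d} mm  {bar}")
```

With these numbers the pattern is brightest at the centre and fades to darkness 5 millimetres either side, exactly where the two waves cancel.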

It is a hard picture to get your head around. Nevertheless, it hints at a profound duality in nature. Not only can light waves behave as particles – photons – but photons in turn can behave like waves, albeit abstract quantum waves.

As already pointed out, the consequence of light waves behaving as particles is pretty earth-shattering. The world of photons – and everything else – is ultimately orchestrated by random chance. And it turns out that the consequence of photons behaving as waves is equally earth-shattering. A single photon can be in two places at once (or do two things at once), the equivalent of you being in London and Paris at the same time. How come? Well, if photons can behave like waves, then it follows they can do all the things that waves can do. And there is one thing that waves can do which, although it has mundane consequences in the everyday world, has remarkable consequences in the microscopic world.

Two Places at Once

Imagine the sea on a stormy day. Big rolling waves driven by the wind are marching across the surface. Now imagine the sea a day later, when the storm has passed. The surface of the water is calm except for tiny ripples, ruffles caused by the light breeze. Now it is also possible to have big rolling waves with tiny wind-ruffled ripples superimposed. And this, it turns out, is a general feature of waves of all kinds. If two different waves are possible, a combination of those two waves is always possible. In the case of ocean waves, this has a consequence hardly worthy of note. But in the case of the quantum waves associated with photons, which inform them where to be and what to do, the consequences are pretty amazing.

Imagine a quantum wave which is highly peaked on one side of a window pane, so there is a high probability of finding it on that side. Now imagine another quantum wave which is highly peaked on the other side. Nothing untoward here. However, since both waves are individually possible, a wave that is a combination, or ‘superposition’, of both is also possible. In fact, it is required to exist. But this corresponds to a photon that is on both sides of the window pane at the same time. A photon that is simultaneously transmitted and reflected. Surely this is impossible?

Think back to Young’s double-slit experiment again. Recall that, to create an interference pattern, two things must mingle. One way to think about this is from the wave perspective. In this case, the quantum wave associated with each photon spreads out in concentric ripples from the slits in the opaque screen. But the other way to think of it is from the particle point of view. In this case, each photon arriving at the opaque screen is in two places at once. This enables it to go through both slits simultaneously and mingle with itself.

The ability of a photon to do two things at once is a direct consequence of the fact that if two waves are possible, a combination of those two waves is also possible. But nature does not stop at just two waves. If any number of waves are possible – three, 99 or 6 million – a combination of all of them is also possible. A photon can not only do two things at once, it can do many things at once.

It turns out there is an equation – a recipe, if you like – which predicts precisely how the quantum wave corresponding to a photon, or anything else, spreads through space. It was devised by the Austrian physicist Erwin Schrödinger, and his equation answers a quantum conundrum, namely, if the Universe is fundamentally unpredictable, at the mercy of the quantum roll of a dice, how is it that the everyday world is largely predictable? How is it that we can predict with almost complete certainty that if you get caught out in the rain, you will get wet? Or that the Sun will rise tomorrow morning?

The Schrödinger equation shows that what nature takes away with one hand it grudgingly gives back with the other. Yes, the Universe is fundamentally unpredictable. However – and this is the key thing – the unpredictability is predictable. We cannot know for certain what a photon or any other microscopic particle can do. But, with the aid of the Schrödinger equation, we can know the probability of it doing one thing, the probability of it doing another, and so on. And this, it turns out, is enough to ensure that we live in a largely predictable world.

More than that. Quantum theory is the most successful physical theory ever devised. Its predictions match what we see in experiments to an obscene number of decimal places. Quantum theory has literally made the modern world possible, not only giving us lasers and computers and iPods but also an understanding of why the Sun shines and why the ground beneath our feet is solid. It is ironic that we have this hugely successful theory which, on the one hand, is a remarkable recipe for building things and understanding our world, yet, on the other, provides a window onto an Alice-in-Wonderland world that is stranger than anything we could possibly have invented.

Instantaneous Influence

But if you think a photon doing something for absolutely no reason at all or being in two places at once is bad, there is worse to come. And this is where Einstein, Podolsky and Rosen came in. They highlighted a consequence of quantum theory they believed was so ridiculous that it must force all reasonable people to drop it. Think of the particle nature of light waves, which leads to naked unpredictability, and the wave nature of photons, which enables a photon to be in two places at once. Now imagine combining them. The result, Einstein’s team discovered, is a new, even weirder phenomenon: instantaneous communication between separated locations of space, even if those locations are on opposite sides of the Universe.

Actually, a third ingredient is required to conjure up the new phenomenon. But that ingredient is something so fundamental that it transcends quantum theory. It is a conservation law. Physicists have discovered a number of these. For instance, there is the law of conservation of energy. This states that energy can never be created or destroyed, merely changed from one form into another. In a light bulb, for example, electrical energy is converted into light energy and heat energy. In your muscles, the chemical energy derived ultimately from your food is converted into the mechanical energy of movement of your muscles.

In 1918, one of the great unsung heroines of science, the German mathematician Emmy Noether, made a surprising discovery about conservation laws in physics. She discovered that they are merely consequences of deep ‘symmetries’ of nature – things that stay the same even when there is a change in our viewpoint. For instance, the conservation of energy stems from ‘time translation symmetry’: the fact that, if we do an experiment now or translated in time – say, next week or next year – all things being equal, we will get exactly the same result. Another deep symmetry of nature is ‘rotational symmetry’. If we carry out an experiment with our equipment aligned north–south and rotate it to, say, the east–west direction, we will get the same result. The law which stems from this innocuous symmetry is the conservation of angular momentum, angular momentum being a quantity which is a measure of a rotating body’s tendency to keep turning. The Earth, spinning on its axis, has a very large angular momentum, and so is likely to stay spinning for a long time.

It turns out that microscopic particles such as photons possess a quantum property called ‘spin’. In common with irreducible randomness, it has no analogue whatsoever in the everyday world. As far as we know, photons as they fly through space are not actually spinning like the Earth spins on its axis. Their spin is ‘intrinsic’. Nevertheless, they behave as if they are spinning. Specifically, a photon has two possibilities open to it: it can behave as if it is corkscrewing in a clockwise manner about its direction of motion at a particular spin rate; or it can behave as if it is corkscrewing in an anticlockwise manner at the same rate.

The key thing is that quantum spin obeys the law of conservation of angular momentum. And the law, applied to photons, says that if two photons are created together, their total spin can never change. Say they are born together and one is spinning clockwise and the other anticlockwise. Their spins cancel each other out. In the jargon, physicists say their total spin is zero. In this case, the conservation of angular momentum requires that the total spin of the photons must remain zero for ever, or until some process destroys them.

Nothing peculiar or controversial about this.

But consider a real process that creates two oppositely spinning photons. The electron, the tiny particle that orbits inside atoms, has an ‘antiparticle’ twin called the positron. It is a characteristic of all particles and their ‘antimatter’ twins that when they meet, they destroy, or ‘annihilate’, each other. Now an electron and a positron have an intrinsic spin just like a photon. Their spin has a different magnitude from that of a photon, but that is not important here. The important thing is that just before they annihilate, the electron and positron are spinning in opposite directions, so their total spins cancel each other out. This ensures that the two photons created must also have spins that cancel out. One must be spinning clockwise and the other anticlockwise.

But here comes the quantum twist. The conservation of angular momentum requires only that the spins of the two photons that fly away from the annihilation are opposite to each other. But there are two possible ways this can happen. The first photon can be spinning clockwise and the second anticlockwise. Or the first photon can be spinning anticlockwise and the second clockwise. Remember, however, this is a quantum world. Each possibility is represented by a quantum wave. And, if two waves are possible, recall that a combination is also possible – required, in fact.

So as the newborn photons fly off – and they fly off in opposite directions – they exist in a weird quantum ‘superposition’. Just as a single photon can be on both sides of a window pane at the same time, the two photons are simultaneously spinning clockwise-anticlockwise and anticlockwise-clockwise. Maybe you do not see the bombshell lurking here. Don’t worry. Nobody did. It took Einstein to see it.
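For readers comfortable with a little notation, this superposition can be written compactly. The sketch below is the standard way physicists express it, with ↻ standing for clockwise spin and ↺ for anticlockwise (the exact sign convention between the two terms depends on details that do not matter here):

```latex
% The entangled state of the two photons: an equal superposition of
% (photon 1 clockwise, photon 2 anticlockwise) and the reverse.
\[
  |\psi\rangle \;=\; \frac{1}{\sqrt{2}}
  \Big( |\circlearrowright\rangle_1 |\circlearrowleft\rangle_2
      \;+\; |\circlearrowleft\rangle_1 |\circlearrowright\rangle_2 \Big)
\]
% Each term has height 1/sqrt(2), so measuring photon 1 gives either
% outcome with probability (1/sqrt(2))^2 = 1/2 -- the Born rule again --
% and the state of photon 2 is then fixed as the opposite.
```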

In addition to using the conservation of angular momentum, we have so far used one quantum ingredient – the quantum superposition. That leaves only the second quantum ingredient – unpredictability. Say we have arranged for there to be a detector which will intercept the first photon and determine its spin. Now it is impossible to predict for certain which way the photon will be spinning – even in principle. The quantum world is characterised by irreducible randomness. All we can know is that there is a 50 per cent chance that when we detect the photon, we will find it spinning clockwise, and a 50 per cent chance we will find it spinning anticlockwise.

Say we detect the first photon and find it is spinning clockwise. Now here comes the bombshell. Instantaneously, the second photon must start spinning anticlockwise. The photons were born spinning in opposite directions, after all, and the conservation of angular momentum requires that they must always spin in opposite directions. If, on the other hand, we detect the first photon and find that it is spinning anticlockwise, the second photon must start spinning clockwise instantaneously. What is mind-blowing about this is that there is no reference whatsoever to how far apart the photons are. If one photon is found to be spinning one way, its twin must react instantaneously so as to ensure that it is spinning in the opposite sense – even if the photons are on opposite sides of the Universe.

Quantum theory, as spectacularly shown by Einstein, Podolsky and Rosen, permits the insanity of instantaneous influence at a distance. It implies that particles born together for ever after behave as if, in some sense, they are a single joined-at-the-hip particle rather than two separate ones. They know about each other. Their properties are inextricably entwined or, in the quantum jargon, ‘entangled’. Instantaneous influence is synonymous with some kind of ghostly influence travelling between quantum particles at infinite speed. However, this flies in the face of Einstein’s special theory of relativity, which maintains that no influence can travel faster than light – 300,000 kilometres per second.

Everything can be traced back to the interaction of three things: superposition, unpredictability and the conservation of angular momentum. Because two photons are in a superposition, the state of the two particles – whether they are spinning clockwise-anticlockwise or anticlockwise-clockwise – is not determined for sure until the spin of one particle is observed. But when it is measured, the outcome is unpredictable. Yet the conservation of angular momentum somehow operates to give the second particle knowledge of its partner’s spin so that it can instantaneously adopt the opposite spin.

It is the subtle interplay of these three factors that predicts the existence of instantaneous influence, technically known as ‘non-locality’. And actually the conservation of angular momentum is not essential. There is absolutely no reason why instantaneous influence could not be demonstrated by substituting another conservation law, like the conservation of energy, for the conservation of angular momentum. It would simply require a bit of ingenuity to concoct a situation in which instantaneous influence was explicit.
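The interplay of unpredictability and conservation can be mimicked in a toy simulation. To be clear about what this is not: the code below does not explain entanglement, it merely reproduces the pattern of results – the first photon’s spin comes out completely at random, and its partner is always found spinning the opposite way, however far apart they are.

```python
import random

# Toy model of measuring the two entangled photons described above.
# Before measurement neither photon has a definite spin; measuring one
# fixes both, instantly and at random.

random.seed(7)

def measure_entangled_pair():
    # Unpredictability: a 50/50 choice, made only at the moment of measurement
    first = random.choice(["clockwise", "anticlockwise"])
    # Conservation of angular momentum: the partner must be opposite
    second = "anticlockwise" if first == "clockwise" else "clockwise"
    return first, second

for _ in range(5):
    a, b = measure_entangled_pair()
    print(f"photon 1: {a:<13}  photon 2: {b}")
```

Run it and the first column is an unpredictable jumble, while the second column is always its mirror image – which is precisely the combination that so troubled Einstein.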

Some popular books maintain that two entangled particles are like a pair of gloves. Imagine taking one glove from your drawer without looking at it, packing it in a bag and driving a long way away before opening the bag and checking it. If you find it is a left-hand glove, you will of course know immediately that the glove left behind in your drawer is a right-hand glove, and vice versa. But this is to misunderstand (and to trivialise) the magic of entanglement. Two separated quantum particles are not like two gloves. In the former case, one glove fits a left hand and one fits a right hand, and this is true for all time, or at least while the two gloves are in existence. If the glove you have carried with you turns out to be a right-hand glove, it was a right-hand glove before you opened your bag, which means the stay-at-home glove was a left-hand glove. There is no need for any signal to travel to the stay-at-home glove to tell it to be a left-hand glove. It was a left-hand glove all along.

Contrast this with two photons. If each is like a glove, it is a weird kind of glove, one that is neither left- nor right-hand, or rather a glove which has no pre-existing property of leftness or rightness. This property is determined only when you take it out of your bag and look at it, at which point it plumps, utterly randomly, for being a right-hand glove or a left-hand glove. And the left-behind glove, which also had no pre-existing property of leftness or rightness, must respond, instantaneously, by becoming the opposite of its partner. It is the fact that the glove (or photon) has no state – and then that state is determined totally randomly – that forces there to be some ghostly connection between it and its partner at the moment its state is determined.

With non-locality, Einstein was convinced he had finally come up with a ridiculous prediction, one so stark-staring bonkers that it must mean that quantum theory was not nature’s final word. The trouble is, the ridiculous phenomenon predicted by Einstein has actually been observed – by a French physicist called Alain Aspect. In 1982, a quarter of a century after Einstein’s death, Aspect showed that photons on one side of his laboratory at the University of Paris Sud responded to photons on the other side as if some ghostly influence had passed between them significantly faster than the speed of light. Einstein was wrong. Quantum theory had passed yet another stringent test. The reality it described might be ridiculous, it might be unpalatable, but that was tough. It was simply the way it was.

Being able to communicate at infinite speed, in total violation of Einstein’s cosmic speed limit set by the speed of light, would be a wonderful thing. However – and wouldn’t you just believe it – what nature gives with one hand – the tantalising possibility of instantaneous, Star Trek-like communication – it takes away with the other. It is all down to randomness again. The only information that can be sent using instantaneous influence is the spin state of a photon. But if the sender is to exploit non-locality, they must send each photon of the message in a superposition of spinning clockwise and anticlockwise. Perhaps clockwise can encode a ‘0’ and anticlockwise spin a ‘1’. But if each photon is in a superposition of states, it will only have a 50 per cent chance of being a 0 and a 50 per cent chance of being a 1. The only message that can be sent is a random sequence of 0s and 1s, as useless a message as a series of random coin tosses. Einstein’s speed limit of the speed of light is not violated because it turns out it is an upper limit on the speed of ‘information’. Nature imposes no speed limit on the transmission of unusable gibberish. And that is all non-locality, as amazing as it seems at first sight, permits.
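Why the receiver gets only gibberish is worth seeing concretely. In this hypothetical sketch, a sender hopes to transmit the bits of the message ‘HI’, but because every photon is in a superposition, each measured bit comes out as a random 0 or 1, and any agreement with the intended message is pure chance.

```python
import random

# Sketch of why entanglement cannot send a message. The sender wants to
# transmit the bits of 'HI', but each photon in a superposition yields a
# random 0 or 1 when measured, so the receiver sees only noise.

random.seed(42)

intended = "0100100001001001"   # the 16 bits of 'HI' in ASCII
received = "".join(random.choice("01") for _ in intended)

matches = sum(a == b for a, b in zip(intended, received))
print("intended:", intended)
print("received:", received)
print(f"bits that happen to agree: {matches}/{len(intended)} (chance level)")
```

However cleverly the sender prepares the photons, the received string is statistically indistinguishable from coin tosses, so no usable information crosses the gap.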

We have come a long way from the reflection of your face in the window. The image staring back at you tells you that the microscopic world of photons must be orchestrated by random chance. But in taking into consideration the wavelike behaviour of photons – which implies they can do two things at once – we arrive at non-locality. Many physicists consider this instantaneous influence the greatest mystery of quantum theory. It is fair to say that nobody knows what it means for the Universe at large. However, there is one thing we know for sure. All the Universe’s countless particles were born together 13.7 billion years ago in the fireball of the Big Bang. Consequently, the ghostly ties that bind two spinning photons must, in some sense we do not quite understand yet, bind you and me to the atoms in the most distant stars and galaxies.

Notes - CHAPTER 1

1. Actually, rather than visible light, Compton used X-rays. This ultra-high-energy light had so much oomph that it easily knocked electrons out of atoms. To all intents and purposes, they reacted like free-floating electrons rather than ones tied to an atomic nucleus.

2. Remarkably, relativity could have been a natural and unsurprising outgrowth of sixteenth-century physics. As several people have realised since Einstein, relativity is actually an unavoidable consequence of two things. One is that the laws of physics look the same whatever your state of motion, as long as that motion is at constant velocity. For instance, a ball thrown between two people follows the same shaped trajectory whether they are standing in a field or on a train travelling at 100 kilometres per hour. And the second thing is that the laws of physics look the same no matter what your orientation in 3D space. It is not necessary to assume anything about the speed of light, as Einstein did. Galileo could have discovered relativity. See ‘The Theory of Relativity – Galileo’s Child’ by Mitchell Feigenbaum (http://xxx.lanl.gov/abs/0806.1234).

3. In fact, it has more than stood the test of time since it turns out that it is not only matter that is grainy but everything. This is the meaning of the word ‘quantum’ in quantum theory. A quantum is an indivisible grain of something. Matter comes in quanta. So does energy, electric charge, time, and so on. We live in a fundamentally grainy world.

4. It is always possible there is a deeper level of reality beneath quantum theory and that the probability of things happening is determined by factors operating at this fundamental level, just as the roll of a dice is determined by environmental factors. This possibility continues to be explored by some scientists, including the English physicist Antony Valentini and the Dutch Nobel Prize-winner Gerard ’t Hooft. However, they are in a minority. The theory appears to work perfectly if the unpredictability is indeed nature’s fundamental, irreducible bedrock, so most physicists see no compelling reason to look any deeper.

5. Another irony is that, in 1900, the year Planck proposed the quantum, Lord Kelvin, one of the greatest physicists of his day, surveyed the achievements of his contemporaries and declared: ‘There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.’ How wrong he was.

6. Young’s double-slit experiment is one of the pivotal experiments in the history of science. Today, however, you can prove that light is a wave with a £1 laser pointer and a £2 metal ruler. Simply shine the laser at a very shallow angle along the metal ruler so that its narrow beam spreads out enough to illuminate several of the most closely spaced gradations on the ruler. Each of the gradations will act as a secondary source of concentric light waves which, as they spread through space, will pass through each other. Where they reinforce, they will create bright spots, and these will show up if, for instance, there is a convenient white wall in the path of the light. Strictly speaking, the spots are a result of ‘diffraction’, a phenomenon closely related to interference, but an undeniable characteristic of waves nevertheless.


Further reading:

The Magic Furnace by Marcus Chown (Vintage, 2000).