God does not play dice with the Universe.
ALBERT EINSTEIN
Stop telling God what to do with his dice.
NIELS BOHR
Quantum theory is our very best description of the microscopic world of atoms – the building blocks of ordinary matter – and their constituents. It is a fantastically successful theory. Not only has it given us lasers and computers and nuclear reactors but it has provided an explanation of how the Sun shines and why the ground beneath our feet is solid.
But, in addition to being a fantastic recipe for making things and understanding things, quantum theory provides a unique window on the weird, counter-intuitive, Alice-in-Wonderland world that underpins everyday reality. It is a place where a single atom can be in two places at once; where things happen for absolutely no reason at all; and where two atoms can influence each other instantaneously even if they are on opposite sides of the Universe.
Quantum theory was born out of a conflict between two great theories of physics – the theory of matter and the theory of light. The theory of matter holds that, ultimately, everything is made of tiny indivisible grains, or atoms.1 The theory of light says that light is a wave, spreading outwards from its source like a ripple on a pond.
Both theories are very successful. For instance, the theory of atoms explains the behaviour of gases such as steam. If a gas is squeezed into half its volume, it pushes back with twice the force, or pressure, an observation encapsulated in Boyle’s law. This can be explained if the pressure is caused by countless tiny atoms drumming on the walls of the container like rain on a tin roof. When the volume is halved, the atoms have only half as far to fly between one wall strike and the next, and so drum twice as often, doubling the pressure.
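The arithmetic of the drumming picture is simple enough to sketch in a few lines of code (a toy illustration with made-up numbers, not a real gas simulation): a molecule bouncing between two walls a distance L apart at speed v strikes a given wall every 2L/v seconds, so halving L doubles the strike rate.

```python
# Toy sketch of Boyle's law: pressure tracks how often a molecule
# drums on the container walls (illustrative numbers, not real data).
def strike_rate(speed, wall_separation):
    # Between successive strikes on the same wall, a molecule must
    # fly to the far wall and back: a round trip of 2 * L.
    return speed / (2 * wall_separation)

v = 500.0  # molecular speed, metres per second (roughly right for steam)
L = 0.1    # wall separation, metres

print(strike_rate(v, L / 2) / strike_rate(v, L))  # -> 2.0: half the volume, twice the pressure
```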
The theory of light is also very successful. However, the phenomena it explains generally involve light waves that overlap each other and reinforce or cancel each other out. And, since the distance between successive crests of a light wave is far less than the width of a human hair, such interference or diffraction phenomena are hard to spot and take scientific ingenuity to make visible to the naked eye.
The clash between the theory of light, which says light is a wave, and the theory of matter, which maintains matter is made of atoms, occurs, not surprisingly, in the place where light meets matter. Specifically, when an atom spits out light – for instance, in a light bulb – or when an atom gobbles up light – for example, in your eye.
The problem is not hard to appreciate. A light wave is fundamentally a spread-out thing whereas an atom is fundamentally a localised thing – it would take 10 million atoms laid side by side to span the full stop at the end of this sentence. In fact, a wave of visible light is about 5,000 times bigger than an atom. Imagine you have a matchbox and you open it and out drives a 40-tonne lorry. Or, alternatively, a 40-tonne lorry approaches, you open a matchbox, and the lorry slips inside. That’s the way it is when light meets matter. Somehow an atom must swallow or cough out something 5,000 times bigger than itself.
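Those size comparisons are easy to check with round numbers (a rough sketch, assuming an atom of about 10⁻¹⁰ metres, a printed full stop of about a millimetre and visible light of about 5 × 10⁻⁷ metres):

```python
atom = 1e-10        # rough diameter of an atom, metres
full_stop = 1e-3    # rough width of a printed full stop, metres
light_wave = 5e-7   # rough wavelength of visible light, metres

print(f"{full_stop / atom:.0e} atoms span a full stop")  # ~1e+07, i.e. 10 million
print(f"light wave / atom = {light_wave / atom:.0f}")    # ~5,000
```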
Logically, the only way something can be emitted and absorbed by something as small and localised as an atom is if it too is small and localised. ‘Nothing fits inside a snake like another snake,’ observed TV survival expert Ray Mears. The trouble is there are countless experiments that show unequivocally that light is indeed a spread-out wave.
Resolving the paradox was mental torture for the physicists of the 1920s. ‘I remember discussions … which went through many hours until very late at night and ended almost in despair,’ wrote the German physicist Werner Heisenberg. ‘And when, at the end of the discussion, I went alone for a walk in the neighbouring park I repeated to myself again and again the question: Can nature possibly be so absurd as it seemed to us in these atomic experiments?’2
In the end physicists were forced to accept something scarcely believable: that light is both a spread-out wave and a localised particle. Or, rather, light is neither a wave nor a particle. It is something else for which we have no word in our language and nothing with which to compare it in the everyday world. It is as fundamentally ungraspable as the colour blue is to a person blind from birth. ‘We must content ourselves with two incomplete analogies – the wave picture and the corpuscular picture,’ said Heisenberg.3
In retrospect, perhaps physicists should not have been surprised to find the submicroscopic world weird. Why should the world of the atom – which is 10 billion times smaller than a human being – contain objects that behave in any way like those in the everyday world? Why should they dance to the same tune, the same laws of physics?
Light is an ungraspable thing and all we can ever do is observe its facets. When light is absorbed or spat out by an atom, we see its particle-like face, known as a photon. When light bends around a corner, we see its wave-like face.4 ‘On Mondays, Wednesdays and Fridays, we teach the wave theory and on Tuesdays, Thursdays and Saturdays the particle theory,’ joked the English physicist William Bragg in 1921.
But, it turns out, things are much worse than this. In 1923, the French physicist Louis de Broglie, writing in his doctoral thesis, proposed that not only can light waves behave as localised particles but particles such as electrons can behave as spread-out waves. According to de Broglie, all the microscopic building blocks of matter have two faces. All share a peculiar wave–particle duality. In fact, if there is one thing you need to know in order to understand quantum theory – one thing from which pretty much everything else logically follows – it is this: Waves can behave as particles and particles can behave as waves.
Take the first half of the sentence first: waves can behave as particles. Imagine you are looking through a window at the street outside. Maybe you see a car going past, a woman walking her dog past a tree. If you look closely, however, you will also see a faint reflection of your own face staring out. This is because glass is not perfectly transmitting. Most of the light – say, 95 per cent – goes right through but the remainder – 5 per cent – is reflected back. The question is: how is this possible if light behaves as particles – a stream of identical photons like so many miniature machine-gun bullets?
Surely, if the photons are all identical, they should all be affected identically by the window pane? Either they should all be transmitted or all reflected. There appears to be no way to explain how some can be transmitted and some reflected. Unless – and here physicists were forced to accept a diminished, cut-down version of what it means to be identical in the microscopic world – photons have an identical chance of being transmitted, an identical chance of being reflected.
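A minimal Monte Carlo sketch makes this cut-down identity concrete: every photon below is given the identical 95 per cent chance of transmission, yet what any individual photon does is pure chance.

```python
import random

def photon_meets_window(p_transmit=0.95):
    # Every photon is identical: same probabilities, unpredictable outcome.
    return "through" if random.random() < p_transmit else "back"

outcomes = [photon_meets_window() for _ in range(100_000)]
print(outcomes.count("through") / len(outcomes))  # reliably close to 0.95...
print(outcomes[:10])                              # ...yet no single outcome is predictable
```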
But this, as Einstein first realised, is catastrophic for physics. Physics is a recipe for predicting the future with 100 per cent certainty. The Moon is over here today and Newton’s theory of gravity predicts where it will be tomorrow with absolute confidence. But, if photons merely have a particular chance of being transmitted, then it is impossible to predict what an individual photon will do when it strikes the window pane. Whether it goes through or bounces back is entirely down to chance.
And we are not talking about the kind of chance with which we are familiar in the everyday world. We may think a roulette ball ends up where it ends up by chance. But, actually, if we knew the initial motion of the ball, the friction between the wheel and the ball, the play of air currents in the casino, and so on, Newton’s laws would predict exactly where the ball would end up. The fact we cannot do this is merely down to not being able to measure all these things accurately enough and do the required calculation to enough decimal places. Though we could do it in principle, we could not do it in practice. However, when we come to a photon impinging on a window, what it does – whether it is transmitted or reflected – is not even predictable in principle. Quantum unpredictability is truly something new under the Sun.
And it turns out that it is not just photons that are fundamentally unpredictable. So too are all the denizens of the submicroscopic world, from neutrons to neutrinos, electrons to atoms. Einstein was so appalled by this that he famously declared: ‘God does not play dice with the Universe.’ But Einstein was wrong.5
An obvious question arises: if the Universe at its fundamental level is unpredictable, how come we know the Sun will rise tomorrow, that a ball will go roughly where we throw it? The answer is that what nature takes away with one hand it grudgingly gives back with the other. Yes, the Universe is unpredictable. But, crucially, the unpredictability is predictable. In fact, this is what quantum theory is: a recipe for predicting the unpredictable – the probability of this event, the probability of that event. And this, it turns out, is enough to create the largely predictable world we find ourselves in.
The fact that, ultimately, things happen randomly, for no reason at all – the consequence of waves behaving as particles – is arguably the most shocking discovery in the history of science. But, recall, there is a second half to that crucial sentence: particles can behave as waves. The consequence of this turns out to be equally stunning.
Clearly, if particles can behave as waves, they can do all the things waves can. And one thing in particular that waves can do is mundane in the everyday world but has truly earth-shattering consequences in the quantum world.
Imagine there is a storm out at sea and huge waves are rolling in towards a beach. Imagine that the next day the storm has passed and the surface of the sea is ruffled into small ripples by a gentle breeze. Now, anyone who has watched the sea knows that it is possible to have a wave that is both big and rolling and that also has small ripples on its surface. This is a general property of all waves. If two waves can occur individually, it is always possible to have a combination, or superposition, of the two.
Now consider a quantum wave associated with, say, an atom. Actually, this is a slightly peculiar wave because it is a mathematical thing. Nevertheless, it can be imagined extending throughout space. The important thing is that where it is big there is a high probability of finding the atom and where it is small there is a low probability.6
So far, nothing untoward.
Now imagine two quantum waves. One is a quantum wave for an oxygen atom that is highly peaked 10 metres to your left, so there is a very high probability of finding it there. And the other is a quantum wave for the same oxygen atom that is highly peaked 10 metres to your right, so there is a very high probability of finding it there. But, recall, it is a general property of waves that, if two waves are possible, so too is a superposition of the two. And, in this case, such a superposition corresponds to an oxygen atom that is simultaneously 10 metres to your left and 10 metres to your right – in other words, in two places at once. That is the equivalent of you being in London and New York at the same time.
Actually, nature is set up in such a way that it is impossible to observe something being in two places at once. That is because, if we try to locate something, we are implicitly looking for its particle-like property, which precludes seeing a wave-like property such as superposition. So who cares? Well, although it is impossible to observe something being in two places at once, it is nevertheless possible to observe the consequences of something being in two places at once. The wave phenomenon that makes this possible and spawns all kinds of quantum weirdness is called interference.
If you have ever seen raindrops falling in a pond, you will have seen concentric ripples spreading out from each impact and overlapping with each other. Where the crests of two waves coincide, they reinforce each other, making a bigger wave; where the crest of one wave coincides with the trough of another, they cancel each other, creating dead calm. This is interference. Now, imagine inserting a piece of card in the region of overlap of the waves spreading from two raindrops. There will be places on the card where big waves strike and there will be places on the card where no waves hit.
Actually, this experiment was done with light by the English physician and polymath Thomas Young in 1801. With considerable ingenuity, he managed to engineer a situation in which there was an overlap between the light spreading from two point sources of illumination. When he inserted a screen in the overlapping region, there appeared a pattern of alternating light and dark stripes, not unlike a modern-day barcode. It was undeniable proof that light exhibits the characteristic wave phenomenon of interference. Young had proved that light ripples through space like an undulation on the surface of a pond. Nobody had noticed it before because the waves of light are simply far too small to be seen with the naked eye.
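The stripes follow from simple bookkeeping on path lengths, something a short sketch can reproduce (with assumed round numbers – 500-nanometre light, sources 0.1 millimetres apart, a screen 1 metre away – rather than Young’s actual apparatus):

```python
import math

wavelength = 5e-7  # ~500 nm light, metres
gap = 1e-4         # separation of the two sources, metres
screen = 1.0       # distance to the screen, metres

def brightness(x):
    # Difference in path length from the two sources to point x on the screen.
    diff = math.hypot(screen, x + gap / 2) - math.hypot(screen, x - gap / 2)
    # Crest-on-crest (whole wavelengths) reinforces; crest-on-trough cancels.
    return math.cos(math.pi * diff / wavelength) ** 2

for i in range(-5, 6):
    x = i * 2.5e-3  # scan across the screen in 2.5 mm steps
    print(f"{x * 1000:6.1f} mm  {'#' * round(10 * brightness(x))}")
```

With these numbers, bright and dark stripes alternate every few millimetres – Young’s barcode, recovered from nothing but reinforcement and cancellation.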
Because of interference, the fact that a quantum object such as an electron can be in two places at once has consequences. Here is an example. Imagine two bowling balls that are rolled together so they collide and ricochet off each other. If this happens over and over again, the balls will be seen to scatter in a range of different directions. Imagine a giant clock face. The balls will go to every number on the clock face.
Now imagine two quantum particles – say, electrons – which collide and scatter in a similar manner. If this happens thousands upon thousands of times, the electrons will also scatter in a range of different directions. But something very odd will soon become apparent. Some directions will be favoured by the electrons. And others will be studiously avoided. In other words, there will be numbers on the clock face where the electrons never go.
The explanation is that there are directions in which the electron waves reinforce each other and directions in which they cancel each other out. The latter are the directions in which no electrons are seen.
This interference phenomenon was demonstrated in 1927 by Clinton Davisson and Lester Germer in the US and by George Thomson in Scotland. The physicists bounced electrons off the flat surface of a crystal and noticed that there were directions in which the electrons never bounced. The crystal consisted of layers of atoms like a loaf of sliced bread stood on end. Some electrons bounced off the top layer; some off the layer below; some off the layer below that, and so on. And the quantum waves of all these electrons interfered with each other. Only in the directions where all the waves reinforced did the experimenters observe electrons.
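The numbers fit together neatly (a sketch using textbook values: 54-electron-volt electrons, as Davisson and Germer used, and a 0.215-nanometre spacing between the rows of nickel atoms). De Broglie’s relation gives the electron’s wavelength, and waves bouncing off neighbouring rows reinforce only where the path difference is a whole number of wavelengths:

```python
import math

h = 6.626e-34    # Planck's constant, joule-seconds
m_e = 9.109e-31  # electron mass, kilograms
eV = 1.602e-19   # one electron volt, in joules

energy = 54 * eV  # the beam energy Davisson and Germer used
d = 2.15e-10      # spacing of atomic rows in nickel, metres

momentum = math.sqrt(2 * m_e * energy)
wavelength = h / momentum            # de Broglie: wavelength = h / momentum
print(f"{wavelength * 1e9:.3f} nm")  # ~0.167 nm

# Reinforcement requires d * sin(theta) = wavelength (first order).
theta = math.degrees(math.asin(wavelength / d))
print(f"{theta:.0f} degrees")        # ~51 degrees: the famous peak near 50
```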
By showing that electrons can interfere with each other, proving that electrons are indeed waves, Davisson and Thomson won the Nobel Prize for Physics (Germer, unluckily, missed out on a share). The irony is that, while Thomson received the Nobel for showing that the electron is not a particle, his father, ‘J. J.’ Thomson, had received the Nobel for showing that it is.7 There can be no better illustration of the paradox at the heart of quantum theory.
What all this shows is that, even though it is not possible to see a single quantum particle go in several directions at once, interference means there are consequences. The quantum waves corresponding to the electron going in all possible directions interfere with each other, reinforcing in some directions and cancelling each other out in other directions. That is why there are directions that electrons never go. That is why there is quantum weirdness.8
Currently, there is a race on around the world to exploit superpositions – the ability of atoms and the rest to do many things at once – to do many calculations at once. People are trying to build a quantum computer, which promises to massively outperform even the most powerful conventional computer at certain types of calculation. The reason for saying ‘certain types of calculation’ is that they must have a single answer. Recall that it is impossible to observe a quantum particle doing many things at once, merely the consequence of it doing many things at once. Similarly, it is impossible to access all the countless individual strands of a quantum computation, only the consequence – that is, the single answer made from all the threads woven together.
Building a quantum computer is extremely difficult because, if the quantum building blocks of such a computer interact with their surroundings in any way, the multi-tasking power of the computer is irrevocably lost. So a quantum computer must be totally isolated in a vacuum chamber so that no air atoms or photons of light strike it. And this is hard.
It is not that the ability of the quantum particles to do many things at once is fragile. It is simply that it is very difficult for a large number of atoms – such as air atoms – to maintain a superposition. If the quantum particle impresses its superposed state on a lot of atoms, the impression is quickly lost, a bit like one voice being drowned out in a crowd of chanting football supporters.
This explains why atoms display quantum weirdness but, when large numbers of atoms come together to make everyday objects, those objects do not display quantum weirdness. For instance, you never see a table in two places at once or someone walking through two doors at the same time. We never see quantum behaviour in the everyday world because we never see individual atoms or photons. We see only large numbers. You do not observe the world; you observe yourself. In other words, your brain never observes a photon; it observes the amplified effect of that photon impressed on hundreds of thousands of atoms in your retina. And that impression loses the quantumness of the original photon. This is why, bizarrely, we live in a quantum world that does not look quantum.
Much quantum weirdness is a consequence of superpositions and interference. But there are other quantum ingredients also. And, when they are combined in different permutations, they spawn all sorts of novel and surprising behaviours. ‘Magic without magic’, as it has been called. Take quantum spin. This is a property, like quantum unpredictability, that has no analogue in everyday life. Basically, a quantum particle behaves as if it is spinning like a tiny top, even though it is not. Physicists say it has intrinsic spin, or angular momentum.
An electron has the smallest possible quantity of spin, which for historical reasons is called spin ½9 rather than spin 1, which would be the sensible thing to call it.10 Now, a spinning charge acts like a tiny magnet.11 This means that, in a magnetic field, it behaves like a compass needle, aligning itself either pointing along the field (up) or against it (down). If there are two electrons, one possibility is that electron 1 is spin up and electron 2 spin down; another possibility is that 1 is down and 2 is up. Now here is the important thing. In the quantum world superpositions are possible. So the two electrons can be up–down and down–up at the same time. A bit like you being simultaneously dead and alive.
So much for ingredient 1 – superposition. Ingredient 2 is the law of conservation of angular momentum. Basically, this says that the total spin of the two electrons can never change. Since the two electrons in the above example begin pointing in opposite directions, they must always point in opposite directions. Ingredient 3 is simply quantum unpredictability. If we observe an electron, whether it turns out to be spin up or down is fundamentally unpredictable like a quantum coin toss. There is a 50 per cent chance of it being up and a 50 per cent chance of it being down.
If all this is getting complicated, bear with it: here is the situation in which the three ingredients come together to create something extraordinary. We start with the pair of electrons in a superposition of up–down and down–up, and send one a long way away. When we have done this, we look at the stay-at-home electron. Perhaps we find that its spin is up. If so, instantaneously, its partner, far away, must flip down, since the two spins must always point in opposite directions. Perhaps we find that its spin is down. Instantaneously, its partner must flip up.
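As bookkeeping, the sequence looks like this (a toy sketch only: it captures the perfect anti-correlation for a single up/down measurement, though real entanglement cannot be mimicked by any such pre-agreed mechanism once other measurement angles are allowed):

```python
import random

def measure_pair():
    # Quantum unpredictability: the stay-at-home result is a 50:50 coin toss.
    home = random.choice(["up", "down"])
    # Conservation of angular momentum: the far-away partner, however
    # distant, must point the opposite way.
    far = "down" if home == "up" else "up"
    return home, far

for _ in range(5):
    print(measure_pair())  # each side alone looks random; together, always opposite
```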
What is so surprising about this is that, even if the far-away electron were on the other side of the Universe, it would still have to react instantaneously to its partner being found to be up or down. To Einstein this ‘spooky’ action at a distance, apparently in violation of the cosmic speed limit set by light, was so ridiculous that it proved to him that quantum theory was flawed.12 But, not for the first time, Einstein was wrong. Non-locality was triumphantly demonstrated in a laboratory near Paris by the French physicist Alain Aspect in 1982.
It is worth pointing out that separating the two electrons is not like separating a pair of gloves. Clearly, if the stay-at-home glove is found to be the left one, the distant one will be a right one. That is because one glove was a left-hand one and the other a right-hand one at the outset. But the two electrons were neither up nor down at the outset. Their state was undetermined. The stay-at-home electron assumed its state only when it was observed. And that state was random. This is why non-locality does not violate Einstein’s special theory of relativity. If up and down were like the dots and dashes of Morse code, all that could ever be sent would be a random sequence of dots and dashes because the state of the stay-at-home electron and its far-away cousin would always be selected randomly. The dots and dashes could not be controlled. Special relativity, it turns out, limits only the speed of a meaningful signal. Nature does not care about unusable garbage. It is welcome to fly about the Universe at any speed it likes. Nobody knows how this happens. Non-locality is arguably the deepest mystery of quantum theory.
But quantum theory’s greatest achievement is in explaining atoms. ‘Atoms are completely impossible from the classical point of view,’ said Richard Feynman. According to Maxwell’s theory of electromagnetism, an accelerated charge – one that changes its speed or direction or both – radiates electromagnetic waves into space.13 An electron orbiting an atomic nucleus is continually changing its direction. It should therefore broadcast like a tiny TV transmitter and rapidly lose energy. Calculations in fact show it should spiral into the nucleus in less than a hundred-millionth of a second. Atoms, as Feynman observed, have no right to exist.
Quantum theory comes to the rescue because quantum theory recognises that an electron has a wave nature. And it turns out that the smaller the mass of a particle, the bigger its quantum wave.14 Because you are so big, your wavelength is ridiculously tiny. This is why you exhibit no obvious wave behaviour. This is why you do not bend around corners or pass by on both sides of a lamp post. But the electron is the lightest building block of ordinary matter. It is precisely because it has the biggest quantum wave that it exhibits so much quantum weirdness. And its wave nature explains the existence of atoms. A wave is a fundamentally spread-out thing. It simply cannot be squashed into a nucleus.15 So atoms do not shrink down to oblivion in a hundred-millionth of a second. Instead, they can exist essentially for ever.
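De Broglie’s formula – wavelength equals Planck’s constant divided by mass times speed – makes the contrast stark (round illustrative figures: an electron at a million metres per second versus a 70-kilogram person out for a stroll):

```python
h = 6.626e-34  # Planck's constant, joule-seconds

def de_broglie(mass, speed):
    # The smaller the mass, the bigger the quantum wave.
    return h / (mass * speed)

print(f"electron: {de_broglie(9.1e-31, 1e6):.1e} m")  # ~7e-10 m, atom-sized
print(f"person:   {de_broglie(70.0, 1.5):.1e} m")     # ~6e-36 m, hopelessly unobservable
```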
In fact, the electron wave needs so much room that it explains another puzzling feature of atoms: why an electron orbits so far from its nucleus. An atom is 99.9999999999999 per cent nothingness.16 You are 99.9999999999999 per cent nothingness.17 Atoms are empty – or so big compared with their nuclei – simply because an electron wave needs lots of elbow room.
But electron waves have a lot more to tell us about atoms. In fact, they explain everything about atoms.
There is not just one kind of electron wave that can exist inside an atom. There are many. A more wiggly, more violent wave has more energy than a more sluggish one. It therefore corresponds to an electron that is capable of defying the pull of the nucleus and orbiting further away. But there is a restriction on what kinds of electron wave are possible. All must fit neatly inside the atom. Think of waves with one hump inside the atom. Or two. Or three. And so on. They fit. But waves with 1½ humps or 2.687 humps do not fit. This leads to a crucial distinction between an atom and the Solar System. Although in principle a planet can orbit at any distance from the Sun, an electron in an atom is most probably found only at certain special distances from the nucleus, corresponding only to certain energies.18
Immediately, this explains why atoms give out light of only certain energies, or wavelengths (the higher the energy, the shorter the wavelength). When an electron in an atom drops from a high-energy orbit to a low-energy orbit, it sheds its excess energy as light. The energy of this photon is equal to the difference in energy of the two states.
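For hydrogen, the permitted energies follow the textbook pattern E = −13.6/n² electron volts, where n counts the humps of the fitting wave (a sketch of the standard numbers; the full quantum treatment arrives at them by a subtler route), and the colour of the emitted light follows directly from the energy gap:

```python
h = 6.626e-34   # Planck's constant, joule-seconds
c = 3.0e8       # speed of light, metres per second
eV = 1.602e-19  # one electron volt, in joules

def hydrogen_level(n):
    # Only whole-number waves fit, so only these energies are permitted.
    return -13.6 / n**2  # electron volts

# An electron dropping from the n = 3 wave to the n = 2 wave
# sheds the energy difference as a single photon.
photon_energy = (hydrogen_level(3) - hydrogen_level(2)) * eV
print(f"{h * c / photon_energy * 1e9:.0f} nm")  # ~657 nm: hydrogen's red glow
```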
There is a twist. Isn’t there always? An atom is a three-dimensional object. This means that an electron probability wave might be peaked not only at certain distances from the nucleus but also in certain orientations. Think of a globe. It takes two numbers to specify any location on the Earth. Similarly, it takes two numbers to specify an electron wave with a particular orientation in space. Add this to the number necessary to specify the distance of an electron from a nucleus and that makes a total of three quantum numbers.
The twist is therefore that an atom gives out light whenever one of its electrons drops from a higher-energy orbit to a lower-energy one. And those orbits might differ not just in the distance of an electron from the nucleus but in the orientation of its orbit. Incidentally, the orientation of the electron wave – at least that of the outermost electrons – explains chemistry. An atom can join with another atom via its outermost electrons, which can be thought of as living on its exterior surface. And the locations on this surface where an atom can stick, or bond, with another atom are simply the locations at which the electron wave is biggest – that is, where electrons are most likely to be found.
There remains a puzzle, however. The electrons in an atom can occupy any of the permitted quantum waves, known as orbitals. But, in the real world, things have a strong tendency to minimise their energy. For instance, a ball, given a chance, will roll to the foot of a hill to minimise its gravitational energy. So, in an atom with more than one electron, why do the electrons not all roll down the energy hill to the bottom? Why do they not crowd into the lowest-energy orbital, closest to the nucleus?
If this happened, atoms as we know them would not exist. For one thing, there could be no light. Photons are emitted only when an electron drops from one energy level to another, shedding its excess energy in the process. But, if all electrons were in the same state, at the same energy, there would never be any energy to be shed.
A more serious problem is that it is the outermost electrons that determine chemistry – how one type of atom joins up with other types of atoms to make molecules. For instance, some atoms have one outer electron, some two, three, and so on; and some atoms have outer electrons pointing in certain directions and other atoms in other directions, and so on. It is this that creates the huge variety of nature’s atoms, from hydrogen, the lightest, to uranium, the heaviest. But, if all the electrons in every type of atom piled into the innermost orbital, all atoms would present pretty much the same exterior to the world. Instead of 92 naturally occurring kinds of atom, there would be only one. There would be no chemistry. No complexity. No us.
Once again, quantum theory rescues the atom – and the Universe – from such a stultifyingly dull fate. Recall that quantum ingredients, when combined in different permutations, spawn all sorts of novel and surprising behaviours. For instance, the mix of superpositions, electron spin and the law of conservation of angular momentum creates the madness of non-locality, of quantum particles influencing each other instantaneously when separated by impossible distances. Here is another mix: electron spin, the wave nature of electrons, and the fact that electrons are indistinguishable. The last is yet another new-under-the-Sun quantum property. Objects in the real world are always distinguishable – two similar cars by a scratch on the paintwork, for instance, or a slight variation in tyre pressure. But electrons are utterly indistinguishable. They cannot be marked. If two electrons are switched, there is no way to know that this has happened even in principle.
This mix of quantum ingredients spawns the Pauli Exclusion Principle.19 In a nutshell, this edict says that no two electrons in an atom can share the same orbital. More specifically, no two electrons can share the same quantum numbers. A slight twist is that there is a fourth quantum number: spin. Remember, an electron in a magnetic field can have either spin up or spin down. So the Pauli Principle says that no two electrons can share the same four quantum numbers.
The Pauli Principle stops electrons all piling on top of each other in the same orbital. It is why there are 92 types of naturally occurring atoms, not one. It is why there is variety in the world and you are here to read these words.
Because of the Pauli Principle, electrons arrange themselves in shells at successively greater distances from a nucleus. The first shell can contain a maximum of 2 electrons; the next 8; the next 18, and so on. Here are some examples. Take an atom with 6 electrons – it has 2 electrons in its inner shell and 4 in its outer shell. One with 12 electrons has 2 in its inner shell, 8 in the next shell and 2 in its outer shell. Immediately, it is clear why some kinds of atom behave similarly. For instance, lithium, sodium and potassium all have 1 electron in their outermost shell and so appear much the same to the outside world.
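The bookkeeping in these examples is easy to mechanise (a sketch using the 2, 8, 18 capacities quoted above; the filling order in real, heavier atoms is more subtle than this):

```python
def fill_shells(electrons, capacities=(2, 8, 18, 32)):
    # Pauli's principle caps each shell, so electrons stack outwards.
    shells = []
    for capacity in capacities:
        if electrons <= 0:
            break
        shells.append(min(electrons, capacity))
        electrons -= shells[-1]
    return shells

print(fill_shells(6))   # [2, 4]    -> 4 outer electrons (carbon)
print(fill_shells(12))  # [2, 8, 2] -> 2 outer electrons (magnesium)
print(fill_shells(3))   # [2, 1]    -> 1 outer electron (lithium)
print(fill_shells(11))  # [2, 8, 1] -> 1 outer electron (sodium)
# Potassium also ends with a lone outer electron, though its shells
# fill in a subtler order than this simple sketch captures.
```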
So there you have the origin of the world’s stability and diversity. The wave-like nature of electrons prevents them spiralling down to the nucleus in the merest split second. And the Pauli Exclusion Principle prevents the electrons piling on top of each other so that, instead of just one kind of atom, there is a huge number of types. ‘It is the fact that electrons cannot get on top of each other that makes tables and everything else solid,’ said Feynman.
The Pauli Exclusion Principle applies to all subatomic particles with so-called half-integer spin – that is, ½, 3⁄2, 5⁄2 units, and so on (quarks, by the way, have spin ½ just like electrons). Such particles, known as fermions, are characterised by their enormous unsociability. Particles with integer spin – that is, 0, 1, 2 units, and so on – are, on the other hand, gregarious. They do not obey the Pauli Exclusion Principle. This is why photons, which are bosons, are able to flock together in untold quadrillions to create the phenomenon of laser light.
It seems that there is nothing in the world that cannot be explained by quantum theory. It is the most successful physical theory ever devised. Inventions that exploit the ideas of quantum theory are estimated to account for 30 per cent of the GDP of the United States. Each and every one of us is a product of quantum theory. We live in a quantum world. But, although the quantum world is a magical world, there is little doubt that it is a mind-stretching world. ‘If anybody says he can think about quantum physics without getting giddy’, said Niels Bohr, ‘that only shows he has not understood the first thing about it.’