2

CLASSICAL COSMOLOGY

Cosmology is the study of the Universe at large, its beginning, its evolution, and its ultimate fate. In terms of ideas, it is the biggest of big science. Yet in terms of hardware, it is less impressive. True, cosmologists do make use of information about the Universe gleaned from giant telescopes and space probes, and they do sometimes use large computers to carry out their calculations. But the essence of cosmology is still mathematics, which means that cosmological ideas can be expressed in terms of equations written down using pencil and paper. More than any other branch of science, cosmology can be studied by using the mind alone. This is just as true today as it was seventy-five years ago when Albert Einstein developed the general theory of relativity and thereby invented the science of theoretical cosmology.

When scientists refer to the “classical” ideas of physics, they are not referring back to the thoughts of the ancient Greeks. Strictly speaking, classical physics is the physics of Isaac Newton, who laid the foundations of the scientific method for investigating the world back in the seventeenth century. Newtonian physics reigned supreme until the beginning of the twentieth century, when it was overtaken by two revolutions, the first sparked by Einstein’s general theory of relativity and the second by quantum theory. The first is the best theory we have of how gravity works; the second explains how everything else in the material world works. Together, these two topics, relativity theory and quantum mechanics, formed the twin pillars of modern twentieth-century science. The Holy Grail of modern physics, sought by many, is a theory that will combine the two into one mathematical package.

But to the modern generation of Grail seekers in the twenty-first century, even these twin pillars of physics, in their original form, are old hat. There is another, more colloquial, way in which scientists use the term “classical physics”—essentially to refer to anything developed by previous generations of researchers and therefore more than about twenty-five years old. In fact, going back twenty-five years from the time the first edition of this book was written does bring us to a landmark event in science: the discovery of pulsars in 1967, the year Stephen Hawking celebrated his own twenty-fifth birthday. These objects are now known to be neutron stars, the collapsed cores of massive stars that have ended their lives in vast outbursts known as supernova explosions. It was the discovery of pulsars, collapsed objects on the verge of becoming black holes, that revived interest in the extreme implications of Einstein’s theory of gravity, and it was the study of black holes that led Hawking to achieve the first successful marriage between quantum theory and relativity.

Typically, though (as we shall see), Hawking had already been working on the theory of black holes at least two years before the discovery of pulsars, when only a few mathematicians bothered with such exotic implications of Einstein’s equations, and the term “black hole” itself had not even been used in this connection. Like all his contemporaries, Hawking was brought up, as a scientist, on the classical ideas of Newton and on relativity theory and quantum physics in their original forms. The only way we can appreciate how far the new physics has developed since then, partly with Hawking’s aid, is to take a look at those classical ideas ourselves, a gentle workout in the foothills before we head for the dizzy heights. “Classical cosmology,” in the colloquial sense, refers to what was known prior to the revolution triggered by the discovery of pulsars—exactly the stuff that students of Hawking’s generation were taught.

* * *

Isaac Newton made the Universe an ordered and logical place. He explained the behavior of the material world in terms of fundamental laws that were seen to be built into the fabric of the Universe. The most famous example is his law of gravity. The orbits of the planets around the Sun had been a deep mystery before Newton’s day, but he explained them by a law of gravity which says that a planet at a certain distance from the Sun feels a certain force, tugging on it, proportional to one over the square of the distance to the Sun—what is known as an inverse-square law. In other words, if the planet is magically moved out to twice as far from the Sun, it will feel one-quarter of the force; if it is put three times as far away, it will feel one-ninth of the force; and so on. As a planet in a stable orbit moves through space at its own speed, this inward force exactly balances the tendency of the planet to fly off into space. Moreover, Newton realized, the same inverse-square law explains the fall of an apple from a tree and the orbit of the Moon about the Earth, and even the ebb and flow of the tides. It is a universal law.
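For readers who like to see the mathematics, the law can be written in modern shorthand (notation Newton himself never used) as:

\[ F = \frac{G M m}{r^2} \]

Here F is the force, M and m are the masses of the Sun and the planet, r is the distance between them, and G is a universal constant that sets the overall strength of gravity. Double r and the force falls to a quarter; triple it and the force falls to a ninth, just as described above.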

Newton also explained the way in which objects respond to forces other than gravity. Here on Earth, when we push something it moves, but only as long as we keep pushing it. Any moving object on Earth experiences a force, called friction, which opposes its motion. Stop pushing, and friction will bring the object to a halt. Without friction, though (like the planets in space or the atoms that everyday things are composed of), according to Newton, an object will keep moving in a straight line at a steady speed until a force is applied to it. Then, as long as the force continues to operate, the object will accelerate, changing its direction, its speed, or both. The lighter the object, or the stronger the force, the greater the acceleration that results. Take away the force, however, and once again the object moves at a steady speed in a straight line but at the new velocity that has built up during the time it was accelerating.
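In the same modern shorthand, this relationship between force, mass, and acceleration is summed up in a single formula, now taught as Newton's second law:

\[ F = m a \]

For a given mass m, doubling the force F doubles the acceleration a; for a given force, halving the mass doubles the acceleration.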

When you push something, it pushes back, and the action and reaction are equal and opposite. This is how a rocket works—it throws material out from its exhaust in one direction, and the reaction pushes the rocket along in the opposite direction. This last law is familiar these days from the billiard table, where balls collide and rebound off each other in a very “Newtonian” manner. And that is very much the image of the world that comes out of Newtonian mechanics—an image of balls (or atoms) colliding and rebounding, or of stars and planets moving under the influence of gravity, in an exactly regular and predictable manner.
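The bookkeeping behind every such collision (and behind the rocket) is the conservation of momentum: the total momentum, mass times velocity added up over all the objects involved, is the same after the event as before it. For two colliding balls:

\[ m_1 u_1 + m_2 u_2 = m_1 v_1 + m_2 v_2 \]

where the u’s are the velocities before the collision and the v’s the velocities afterward.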

All these ideas were encapsulated in Newton’s masterwork, the Principia, published in 1687 (usually referred to simply by the short version of its Latin title; the full English title of Newton’s great work is Mathematical Principles of Natural Philosophy). The view Newton gave us of the world is sometimes referred to as the “clockwork universe.” If the Universe is made up of material objects interacting with each other through forces that obey truly universal laws, and if rules like that of action and reaction apply precisely throughout the Universe, then the Universe can be regarded as a gigantic machine, a kind of cosmic clockwork, which will follow an utterly predictable path forever once it has been set in motion.

This raises all kinds of puzzles, deeply worrying to philosophers and theologians alike. The heart of the problem is the question of free will. In such a clockwork universe, is everything predetermined, including all aspects of human behavior? Was it preordained, built into the laws of physics, that a collection of atoms known as Isaac Newton would write a book known as the Principia that would be published in 1687? And if the Universe can be likened to a cosmic clockwork machine, who wound up the clockwork and set it going?

Even within the established framework of religious belief in seventeenth-century Europe, these were disturbing questions, since although it might seem reasonable to say that the clockwork could have been wound up and set in motion by God, the traditional Christian view sees human beings as having free will, so that they can choose to follow the teachings of Christ or not, as they wish. The notion that sinners might actually have no freedom of choice concerning their actions, but were sinning in obedience to inflexible laws, following a path to eternal damnation actually laid out by God in the beginning, simply could not be fit into the established Christian world view.

Strangely, though, in Newton’s day, and down into the twentieth century, science did not really contemplate the notion of a beginning to the Universe at all. The Universe at large was perceived as eternal and unchanging, with “fixed” stars hanging in space. The biblical story of the Creation, still widely accepted in the seventeenth century by scientists as well as ordinary people, was thought of as applying only to our planet, Earth, or perhaps to the Sun’s family, the Solar System, but not to the whole Universe.

Newton believed (incorrectly, as it turns out) that the fixed stars could stay as they were in space forever if the Universe were infinitely big, because the force of gravity tugging on each individual star would then be the same in all directions. In fact, we now know that such a situation would be highly unstable. The slightest deviation from a perfectly uniform distribution of stars would produce an overall pull in one direction or another, making the stars start to move. As soon as a star moves toward any source of gravitational force, the distance to the source decreases, so the force gets stronger, in line with Newton’s inverse-square law. So once the stars have started to move, the force causing the nonuniformity gets bigger, and they keep on moving at an accelerating rate. A static universe would soon start to collapse under the pull of gravity. But that became clear only after Einstein had developed a new theory of gravity—a theory, moreover, that contained within itself a prediction that the Universe would certainly not be static and might actually be not collapsing, but expanding.

* * *

Like Newton, Albert Einstein made many contributions to science, and, also like Newton, he produced one towering masterwork: his theory of gravity, the general theory of relativity. It is some measure of just how important this theory is to the modern understanding of the Universe that even Einstein’s special theory of relativity, the one that leads to the famous equation E = mc², is by comparison a relatively minor piece of work. Nevertheless, the special theory, which was published in 1905, contributed a key ingredient to the new understanding of the Universe. Before we move on to the general theory, though, we should at least give a brief outline of the main features of the special theory.

Einstein developed the special theory of relativity in response to a puzzle that had emerged from nineteenth-century science. The great Scottish physicist James Clerk Maxwell had found the equations that describe the behavior of electromagnetic waves. Maxwell’s equations predicted the existence of radio waves, which were duly discovered in 1888. But Maxwell had found that the equations automatically gave him a particular speed,* which is identified as the speed at which electromagnetic waves travel. The unique speed that came out of Maxwell’s equations turned out to be exactly the speed of light, which physicists had already measured by that time. This revealed that light must be a form of electromagnetic wave, like radio waves but with shorter wavelength (that is, higher frequency). And it also meant, according to those equations, that light (as well as other forms of electromagnetic radiation, including radio waves) always travels at the same speed.
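In fact, the speed that falls out of Maxwell's equations is built from two constants, μ₀ and ε₀, which measure the strength of magnetic and electric effects in empty space and which can be determined in tabletop laboratory experiments:

\[ c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 300{,}000 \text{ kilometers per second} \]

That a speed measured with beams of light should match a number obtained from experiments with coils and capacitors was one of the great unifications of nineteenth-century physics.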

This is not what we expect from our everyday experience of how things move. If I stand still and toss a ball to you gently, it is easy for you to catch the ball. If I am driven toward you at 60 miles an hour in a car and toss the ball equally gently out the window, it hurtles toward you at 60 miles an hour plus the speed of the toss. You would, rightly, be dumbfounded if the ball tossed gently out the car window reached you traveling only at the gentle speed of the toss, without the speed of the car being added in, yet that is exactly what happens with light pulses. Equally, if one vehicle traveling at 50 miles an hour along a straight road is overtaken by another traveling at 60 miles an hour, the second vehicle is moving at 10 miles an hour relative to the first one. Speed, in other words, is relative. And yet, if you are overtaken by a light pulse, and measure its speed as it goes past, you will find it has the same speed you would measure for a light pulse going past you when you are standing still.

Nobody knew this until the end of the nineteenth century. Scientists had assumed that light behaved in the same way, as far as adding and subtracting velocities is concerned, as objects like balls being thrown from one person to another. And they explained the “constancy” of the speed of light in Maxwell’s equations by saying that the equations applied to some “absolute space,” a fundamental reference frame for the entire Universe.

According to this view, space itself defined the framework against which things should be measured—absolute space, through which the Earth, the Sun, light, and everything else moved. This absolute space was also sometimes called the “aether” (or “ether”) and was conceived of as a substance through which electromagnetic waves moved, like water waves moving over the sea. The snag was, when experimenters tried to measure changes in the velocity of light caused by the motion of the Earth through absolute space (or “relative to the aether”), none could be found.

Because the Earth moves around the Sun in a roughly circular orbit, it should be moving at different speeds relative to absolute space at different times of the year. It’s like swimming in a circle in a fast-flowing river. Sometimes the Earth will be “swimming with the aether,” sometimes across the aether, and sometimes against the flow. If light always travels at the same speed relative to absolute space, common sense tells us this ought to show up in the form of seasonal changes in the speed of light measured from the Earth. It does not.

Einstein resolved the dilemma with his special theory. This says that all frames of reference are equally valid and that there is no absolute reference frame. Anybody who moves at a constant velocity through space is entitled to regard himself or herself as stationary. They will find that moving objects in their frame of reference obey Newton’s laws, while electromagnetic radiation obeys Maxwell’s equations and the speed of light is always measured to be the value that comes out of those equations, denoted by the letter c (for “constant” or the Latin celeritas, meaning “swiftness, celerity”). Furthermore, anybody who is moving at a constant speed relative to the first person (the first observer, in physicists’ jargon) will also be entitled to say that they are at rest and will find that objects in their laboratory obey Newton’s laws, while measurements always give the speed of light as c. Even if one observer is moving toward the other observer at half the speed of light and sends a torch beam out ahead, the second observer will not measure the speed of the light from the torch as 1.5c: it will still be c!

Starting out from the observed fact that the speed of light is a constant, the same whichever way the Earth is moving through space, Einstein found a mathematical package to describe the behavior of material objects in reference frames that move with constant velocities relative to one another—so-called “inertial” frames of reference. Provided the velocities are small compared with the speed of light, these equations give exactly the same “answers” as Newtonian mechanics. But when the velocities begin to become an appreciable fraction of the speed of light, strange things happen.

Two velocities, for example, can never add up to give a relative velocity greater than c. An observer may see two other observers approaching each other on a head-on collision course, each traveling at a speed of 0.9c in the first observer’s reference frame, but measurements carried out by either of those two fast-moving observers will always show that the other one is traveling at a speed less than c but bigger (in this case) than 0.9c.
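The rule for combining velocities that produces this behavior is simple to state. If two objects approach one another at speeds u and v as measured by a third observer, then each measures the other approaching not at u + v but at:

\[ w = \frac{u + v}{1 + uv/c^2} \]

For u = v = 0.9c this gives w = 1.8c/1.81, or about 0.994c: bigger than 0.9c, but still less than c, exactly as described above. For everyday speeds the correction term uv/c² is so tiny that w is indistinguishable from u + v.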

The reason why velocities add up in this strange way has to do with the way both space and time are warped at high velocities. In order to account for the constancy of the speed of light, Einstein had to accept that moving clocks run more slowly than stationary clocks and that moving objects shrink in the direction of their motion. The equations also tell us that moving objects increase in mass the faster they go.
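All three effects are governed by a single factor, usually written γ (gamma), which depends only on the ratio of the speed v to the speed of light:

\[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \]

A moving clock runs slow by the factor γ, a moving object shrinks by the factor γ along its direction of motion, and its mass increases by the factor γ. At everyday speeds γ is indistinguishable from 1; as v approaches c, γ grows without limit.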

Strange and wonderful though all these things are, they are only peripheral to the story of modern cosmology and to the search for links between quantum physics and gravity. We stress, however, that they are not wild ideas in the sense that we sometimes dismiss crazy notions as “just a theory” in everyday language. To scientists, a theory is an idea that has been tried and tested by experiments and has passed every test. The special theory of relativity is no exception to this rule. All the strange notions implicit in the theory—the constancy of the speed of light, the stretching of time and shrinking of length for moving objects, the increase in mass of a moving object—have been measured and confirmed to great precision in very many experiments. Particle accelerators—“atom-smashing” machines like those at CERN, the European Center for Nuclear Research in Geneva—simply would not work if the theory were not a good one, since they have been designed and built around Einstein’s equations. The special theory of relativity as a description of the high-speed world is as securely founded in solid experimental facts as is Newtonian mechanics as a description of the everyday world; the only reason it conflicts with our common sense is that in everyday life we are not used to the kind of high-speed travel required for the effects to show up. After all, the speed of light, c, is about 186,000 miles a second (300,000 kilometers a second), and the relativistic effects can be safely ignored for any speeds less than about 10 percent of this—that is, for speeds less than a mere 30,000 kilometers a second.
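For anyone who wants to check that last claim, here is a minimal sketch (in Python, purely as an illustration) that works out the factor γ at a few speeds:

    import math

    def lorentz_factor(v):
        """Gamma for a speed v expressed as a fraction of the speed of light."""
        return 1.0 / math.sqrt(1.0 - v * v)

    # From Earth's orbital speed (about 30 km/s, roughly 0.0001c) up to 99% of c.
    for v in (0.0001, 0.1, 0.5, 0.9, 0.99):
        print(f"v = {v:6}c  gamma = {lorentz_factor(v):.6f}")

At a tenth of the speed of light the factor is only about 1.005, a half-percent effect, and at the Earth's orbital speed it differs from 1 by less than one part in a hundred million, which is why relativity never shows up in everyday life.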

In essence, the special theory is the result of a marriage of Newton’s equations of motion with Maxwell’s equations describing radiation. It was very much a child of its time, and if Einstein hadn’t come up with the theory in 1905, one of his contemporaries would surely have done so within the next few years. Without Einstein’s special genius, though, it might have been a generation or more before anyone realized the importance of a far deeper insight buried within the special theory.

* * *

This key ingredient, to which we have already alluded, was the fruit of another marriage—the union of space and time. In everyday life, space and time seem to be quite different things. Space extends around us in three dimensions (up and down, left and right, forward and backward). We can see where things are located in space, and travel through it, more or less at will. Time, although we all know what it is, is almost impossible to describe. In a sense, it does have a direction (from past to future), but we can look neither into the future nor into the past, and we certainly cannot move through time at will. Yet the great universal constant, c, is a speed, and speed is a measure that relates space and time. Speeds are always in the form of miles per hour, or centimeters per second, or any other unit of length per unit of time. You cannot have one without the other when you are talking about speed. So the fact that the fundamental constant is a velocity must be telling us something significant about the Universe. But what?

If you multiply a speed by a time, you get a length. And if you do this in the right way (by multiplying intervals of time by the speed of light, c), you can combine measures of length (space) with measures of time in the same set of equations. The equations that combine space and time in this way are the equations of the special theory of relativity, which describe time dilation and length contraction and lead to the prediction that a mass m is equivalent to an energy E, as described by the formula E = mc². Instead of thinking about space and time as two separate entities, as long ago as 1905 Einstein was telling physicists that they should be thinking about them as different aspects of a single, unified whole—spacetime. But this spacetime, the special theory also said, was not fixed and permanent like the absolute space or absolute time of Newtonian physics—it could be stretched or squeezed. And therein lay the clue to the next great step forward.
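One standard way to see the union at work is the spacetime “distance” between two events. If two events are separated by a time t and by distances x, y, and z along the three directions of space, the combined quantity:

\[ s^2 = c^2 t^2 - x^2 - y^2 - z^2 \]

comes out the same for every observer moving at constant velocity, even though different observers measure different values for the time and the distances individually. Notice that the speed of light, c, is exactly what converts the time t into a length so that it can be combined with x, y, and z.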

Einstein used to say that the inspiration for his general theory of relativity (which is, above all, a theory of gravity) came from the realization that a person inside a falling elevator whose cable had snapped would not feel gravity at all. We can picture exactly what he meant because we have seen film of astronauts orbiting the Earth in spacecraft. Such an orbiting spacecraft is not “outside” the influence of the Earth’s gravity; indeed, it is held in orbit by gravity. But the spacecraft and everything in it are falling around the Earth with the same acceleration, so the astronauts have no weight and float within their capsule. For them, it is as if gravity does not exist, a phenomenon known as free fall. But Einstein had never seen any of this and had to picture the situation in a freely falling elevator in his imagination. It is as if the acceleration of the falling elevator, speeding up with every second that passes, precisely cancels out the influence of gravity. For that to be possible, gravity and acceleration must be exactly equivalent to one another.

The way this led Einstein to develop a theory of gravity was through considering the implications for a beam of light, the universal measuring tool of special relativity. Imagine shining a flashlight horizontally across the elevator from one side to the other. In the freely falling elevator, objects obey Newton’s laws: they move in straight lines, from the point of view of an observer in the elevator, bounce off each other with action and reaction equal and opposite, and so on. And, crucially, from the point of view of the observer in the elevator, light travels in straight lines.

But how do things look to an observer standing on the ground watching the elevator fall? The light would appear to follow a track that always stays exactly the same distance below the roof of the elevator. But in the time it takes the light to cross the elevator, the elevator has accelerated downward, and the light in the beam must have done the same. In order for the light to stay the same distance below the roof all the way across, the light pulse must follow a curved path as seen from outside the elevator. In other words, a light beam must be bent by the effect of gravity.
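The deflection, it should be said, is tiny. As a rough figure, for an elevator about three meters wide, light crosses from one side to the other in about a hundred-millionth of a second, and in that time a freely falling elevator drops by only:

\[ d = \tfrac{1}{2} g t^2 = \tfrac{1}{2} \times 9.8 \times (10^{-8})^2 \approx 5 \times 10^{-16} \text{ meters} \]

far less than the diameter of a single atom, which is why nobody had ever noticed gravity bending light in everyday life.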

Einstein explained this in terms of bent spacetime. He suggested that the presence of matter in space distorts the spacetime around it, so that objects moving through the distorted spacetime are deflected, just as if they were being tugged in ordinary “flat” space by a force inversely proportional to the square of the distance. Having thought up the idea, Einstein then developed a set of equations to describe all this. The task took him ten years. When he had finished, Newton’s famous inverse-square law reemerged from Einstein’s new theory of gravity; but general relativity went far beyond Newton’s theory, because it also offered an all-embracing theory of the whole Universe. The general theory describes all of spacetime and therefore all of space and all of time. (There is a neat way to remember how it works. Matter tells spacetime how to bend; bends in spacetime tell matter how to move. But, the equations insisted, spacetime itself can also move, in its own fashion.)
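In compact modern notation, the content of that slogan is packed into what are now called Einstein's field equations:

\[ G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} \]

The left-hand side describes the curvature of spacetime, and the right-hand side describes the matter and energy it contains; much of the ten years of effort went into finding equations of exactly this form.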

The general theory was completed in 1915 and published in 1916. Among other things, it predicted that beams of light from distant stars, passing close by the Sun, would be bent as they moved through spacetime distorted by the Sun’s mass. This would shift the apparent positions of those stars in the sky—and the shift might actually be seen, and photographed, during a total eclipse, when the Sun’s blinding light is blotted out. Just such an eclipse took place in 1919; the photographs were taken and showed exactly the effect Einstein had predicted. Bent spacetime was real: the general theory of relativity was correct.

But the equations developed by Einstein to describe the distortion of spacetime by the presence of matter, the very equations that were so triumphantly vindicated by the eclipse observations, contained a baffling feature that even Einstein could not comprehend. The equations insisted that the spacetime in which the material Universe is embedded could not be static. It must be either expanding or contracting.

Exasperated, Einstein added another term to his equations, for the sole purpose of holding spacetime still. Even at the beginning of the 1920s, he still shared (along with all his contemporaries) the Newtonian idea of a static Universe. But within ten years, observations made by Edwin Hubble with a new and powerful telescope on a mountaintop in California had shown that the Universe is expanding.

The stars in the sky are not moving farther apart from one another. The individual stars we can see from Earth all belong to a huge system, the Milky Way Galaxy, which contains about a hundred billion stars and is like an island in space. In the 1920s, astronomers discovered with the aid of new telescopes that there are many other galaxies beyond the Milky Way, many of them containing hundreds of billions of stars like our Sun. And it is the galaxies, not individual stars, that are receding from one another, being carried farther apart as the space in which they are embedded expands.
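The pattern of this recession turned out to be beautifully simple and is now known as Hubble's law: the speed v with which a galaxy recedes is proportional to its distance d from us:

\[ v = H_0 d \]

where H₀, the Hubble constant, measures how fast the Universe is expanding today. A galaxy twice as far away recedes twice as fast, which is exactly the pattern that uniformly expanding space would produce.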

If anything, this was an even more extraordinary and impressive prediction of the general theory than the bending of light detectable during an eclipse. The equations had predicted something that even Einstein at first refused to believe but which observations later showed to be correct. The impact on scientists’ perception of the world was shattering. The Universe was not static, after all, but evolving; Einstein later described his attempt to fiddle the equations to hold the Universe still as “the greatest blunder of my life.” By the end of the 1920s, the observations and the theory agreed that the Universe is expanding. And if galaxies are getting farther apart, that means that long ago they must have been closer together. How close could they ever have been? What happened in the time when galaxies must have been touching one another and before then?

The idea that the Universe was born in a super-dense, super-hot fireball known as the Big Bang is now a cornerstone of science, but it took time—over fifty years—for the theory to be fully developed. Just at the time astronomers were finding evidence for the universal expansion, transforming the scientific image of the Universe at large, their physicist colleagues were developing the quantum theory, transforming our understanding of the very small. Attention focused chiefly on the development of the quantum theory over the next few decades, with relativity and cosmology becoming an exotic branch of science investigated by only a few specialist mathematicians. The union of large and small still lay far in the future, even at the end of the 1920s.

* * *

As the nineteenth century gave way to the twentieth, physicists were forced to revise their notions about the nature of light. This initially modest readjustment of their worldview grew, like an avalanche triggered by a snowball rolling down a hill, to become a revolution that engulfed the whole of physics—the quantum revolution.

The first step was the realization that electromagnetic energy cannot always be treated simply as a wave passing through space. In some circumstances, a beam of light, for example, will behave more like a stream of tiny particles (now called photons). One of the people instrumental in establishing this “wave-particle duality” of light was Einstein, who in 1905 showed how the way in which electrons are knocked out of the atoms in a metal surface by electromagnetic radiation (the photoelectric effect) can be explained neatly in terms of photons, not in terms of a pure wave of electromagnetic energy. (It was for this work, not his two theories of relativity, that Einstein received his Nobel Prize.)

This wave-particle duality changes our whole view of the nature of light. We are used to thinking of momentum as a property to do with the mass of a particle and its speed (or, more correctly, its velocity). If two objects are moving at the same speed, the heavier one carries more momentum and will be harder to stop. A photon does not have mass, and at first sight you might think this means it has no momentum either. But, remember, Einstein discovered that mass and energy are equivalent to one another, and light certainly does carry energy—indeed, a beam of light is a beam of pure energy. So photons do have momentum, related to their energy, even though they have no mass and cannot change their speed. A change in the momentum of a photon means that it has changed the amount of energy it carries, not its velocity; and a change in the energy of a photon means a change in its wavelength.
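The relation is as simple as it could be: the momentum p of a photon is its energy divided by the speed of light:

\[ p = \frac{E}{c} \]

So a more energetic photon, one with a shorter wavelength, carries more momentum, even though every photon travels at exactly the same speed, c.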

When Einstein put all of this together, it implied that the momentum of a photon multiplied by the wavelength of the associated wave always gives the same number, now known as Planck’s constant in honor of Max Planck, another of the quantum pioneers. Planck’s constant (usually denoted by the letter h) soon turned out to be one of the most fundamental numbers in physics, ranking alongside the speed of light, c. It cropped up, for example, in the equations developed in the early decades of the twentieth century to describe how electrons are held in orbit around the nuclei of atoms. But although the strange duality of light niggled, the cat was only really set among the pigeons in the 1920s when a French scientist, Louis de Broglie, suggested using the wave-particle equation in reverse. Instead of taking a wavelength (for light) and using this to calculate the momentum of an associated particle (the photon), why not take the momentum of a particle (such as an electron) and use it to calculate the length of an associated wave?
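In symbols, Einstein's relation and de Broglie's reversal of it are one and the same formula:

\[ p \lambda = h \qquad \text{or equivalently} \qquad \lambda = \frac{h}{p} \]

Read one way, it assigns a momentum p to a wave of wavelength λ; read the other way, it assigns a wavelength λ to any particle with momentum p.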

Fired by this suggestion, experimenters soon carried out tests that showed that, under the right circumstances, electrons do indeed behave like waves. In the quantum world (the world of the very small, on the scale of atoms and below), particles and waves are simply twin facets of all entities. Waves can behave like particles; particles can behave like waves. A term was even coined to describe these quantum entities—“wavicles.” The dual description of particles as waves and waves as particles turned out to be the key to unlocking the secrets of the quantum world, leading to the development of a satisfactory theory to account for the behavior of atoms, particles, and light. But at the core of that theory lay a deep mystery.

Because all quantum entities have a wave aspect, they cannot be pinned down precisely to a definite location in space. By their very nature, waves are spread-out things. So we cannot be certain where, precisely, an electron is—and uncertainty, it turns out, is an integral feature of the quantum world. The German physicist Werner Heisenberg established in the 1920s that all observable quantities are subject, on the quantum scale, to random variations in their size, with the magnitude of these variations determined by Planck’s constant. This is Heisenberg’s famous “uncertainty principle.” It means that we can never make a precise determination of all the properties of an object like an electron: all we can do is assign probabilities, determined in a very accurate way from the equations of quantum mechanics, to the likelihood that, for example, the electron is in a certain place at a certain time.
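In its most familiar form, the uncertainty principle relates position and momentum. If Δx is the uncertainty in where a particle is and Δp is the uncertainty in its momentum, then:

\[ \Delta x \, \Delta p \geq \frac{h}{4\pi} \]

The more precisely the position is pinned down, the more uncertain the momentum becomes, and vice versa; because h is so small, the trade-off only bites for objects as small as atoms and electrons.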

Furthermore, the uncertain, probabilistic nature of the quantum world means that if two identical wavicles are treated in an identical fashion (perhaps by undergoing a collision with another type of wavicle), they will not necessarily respond in identical fashions. That is, the outcome of experiments is also uncertain, at the quantum level, and can be predicted only in terms of probabilities. Electrons and atoms are not like tiny billiard balls bouncing around in accordance with Newton’s laws.

None of this shows up on the scale of our everyday lives, where objects such as billiard balls do bounce off each other in a predictable, deterministic fashion, in line with Newton’s laws. The reason is that Planck’s constant is incredibly small: in the standard units used by physicists, it is a mere 6 × 10⁻³⁴ joule-seconds (a decimal point followed by 33 zeros and a 6). And a joule is indeed a sensible sort of unit in everyday life—a 60-watt light bulb radiates 60 joules of energy every second. For everyday objects like billiard balls, or ourselves, the small size of Planck’s constant means that the wave associated with the object has a comparably small wavelength and can be ignored. But even a billiard ball, or you yourself, does have an associated quantum wave—although it is only for tiny objects like electrons, with tiny amounts of momentum, that the wave is big enough to interfere with the way objects interact.
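To see just how effectively the small size of h hides the quantum world, here is a minimal sketch (in Python, with masses and speeds chosen purely for illustration) comparing the quantum wavelength, h divided by momentum, of a billiard ball and of an electron:

    H = 6.626e-34  # Planck's constant, in joule-seconds

    def quantum_wavelength(mass_kg, speed_m_per_s):
        """De Broglie wavelength: Planck's constant divided by momentum."""
        return H / (mass_kg * speed_m_per_s)

    ball = quantum_wavelength(0.17, 1.0)          # a 170-gram ball rolling at 1 m/s
    electron = quantum_wavelength(9.11e-31, 1e6)  # an electron moving at 1,000 km/s

    print(f"billiard ball: {ball:.1e} meters")     # about 3.9e-33 m, utterly negligible
    print(f"electron:      {electron:.1e} meters") # about 7.3e-10 m, atom-sized

The ball's wave is unimaginably smaller than even an atomic nucleus, while the electron's wave is comparable in size to an atom, which is why wave behavior dominates the electron's world and is completely invisible in ours.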

It all sounds very obscure, something we can safely leave the physicists to worry about while we get on with our everyday lives. To a large extent, that is true, although it is worth realizing that the physics behind how computers or TV sets work depends on an understanding of the quantum behavior of electrons. Laser beams, also, can be understood only in terms of quantum physics, and every compact disc player uses a laser beam to scan the disc and “read” the music. So quantum physics actually does impinge on our everyday lives, even if we do not need to be a quantum mechanic to make a TV set or a hi-fi system work. But there is something much more important to our everyday lives inherent in quantum physics. By introducing uncertainty and probability into the equations, quantum physics does away once and for all with the predictive clockwork of Newtonian determinism. If the Universe operates, at the deepest level, in a genuinely unpredictable and indeterministic way, then we are given back our free will, and we can after all make our own decisions and our own mistakes.

* * *

At the beginning of the 1960s, the two great pillars of physics stood in splendid separation. General relativity explained the behavior of the cosmos at large and suggested that the Universe must have expanded from a super-dense state, colloquially known as the Big Bang. Quantum physics explained how atoms and molecules work and gave an insight into the nature of light and other forms of radiation. One young physicist, taking his first degree at Oxford University, would have been given a thorough grounding in both great theories. But he would hardly have suspected that over the next half century he would play a key role in bringing the two theories together, providing insight into how they might be unified into one grand theory that would explain everything, from the Big Bang to the atoms we are made of.

* Strictly speaking, it is a velocity—a quantity that specifies speed and direction. For our purposes, it is easier to refer to velocities as speeds.

In everyday language, time dilation means that a clock moving relative to an observer runs slow, and length contraction means that an object moving relative to an observer shrinks in the direction of its motion.