11

Life’s Big Bang

Before the 1980s most scientists found the suggestion that we live in a Universe fine-tuned for life outrageous and implausible. The one and only Universe is what it is and it could be no other way. How could a Universe, in any sense, be optimised for life? This attitude has gradually changed over the last 30 years until the idea that we live in a peculiarly life-friendly Universe has now almost, but not quite, become mainstream cosmology. During this time, cosmologists have developed weird and wonderful ideas in an extremely successful attempt to explain the observed large-scale structure of our Universe. These same ideas, almost incidentally, make cosmic fine-tuning for life not just plausible but almost inevitable. This chapter therefore looks at this cosmological background and how it may impact on my central topic: the life-friendliness of the Earth.

A couple of years ago, after giving a talk on earthquakes at a local school, I was asked: ‘What’s the most surprising scientific discovery in your lifetime?’ I had no doubt about the answer but had to confess that it was in cosmology rather than my own subject of geophysics. It’s been known since the late 1920s that distant galaxies are moving away from us as a result of the Big Bang that created our Universe but, 70 years after the discovery that we live in an expanding cosmos, the scientific world was shocked by the revelation that the galaxies are spreading out ever more rapidly. It’s like watching as a ball thrown into the air accelerates away into space instead of slowing and falling back to the ground.

The equations that describe the evolution of the Universe have a lot in common with those that govern the trajectory of a ball. Gravity decelerates a thrown ball as it rises until it eventually stops and then falls back, although in theory a ball could be thrown faster than Earth’s escape velocity so that it would never quite come to a stop. Even then it would still slow as it travelled upwards – just not by enough to ever become stationary. In a similar way, gravity should slow the Universe’s expansion. This deceleration could happen so rapidly that the Universe would eventually stop expanding and then contract. Alternatively, the expansion could slow more gradually so that it never quite ceased. Cosmologists in the second half of the 20th century worked hard to measure the rate at which the expansion is slowing, because they wanted to know the ultimate fate of the Universe. Would it turn around and collapse into a ‘Big Crunch’ or would it eventually disperse into nothingness? To everyone’s surprise, they discovered a Universe whose expansion is not slowing at all. A mysterious repulsive force is pushing distant galaxies so that they move away from us ever more quickly rather than having their recession slowed by the force of gravity.

The approach used to determine the cosmic deceleration was straightforward in principle. Astronomers compared expansion in the relatively nearby parts of the Universe to expansion further away. Light from more distant parts of the Universe has taken longer to get to us and so, if we look at how fast distant objects are receding from us, we are seeing expansion at an earlier time. I should warn that ‘nearby’, in this context, bears no relation at all to what most of us understand by the word. To see the expansion at all we need to look at galaxies hundreds of times further away than our nearest significant extra-galactic neighbour, the famous spiral galaxy in Andromeda. Light has taken over 2 million years to get to us from the Andromeda spiral and hundreds of millions of years to get to us from the galaxies I’m describing as ‘nearby’. Those galaxies are close, though, compared to the ones used to determine the ancient expansion rate.

Whether near or far, the speeds at which galaxies are moving away from us as the Universe expands are measured using the Doppler effect, the fact that the colour of light emitted by receding galaxies is shifted towards red as successive light waves come to us from further and further away. However, to measure expansion, we need to see how recession speed increases with distance, and we must therefore find out how far away each galaxy is. Measuring that distance is much more difficult than determining speed.
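For readers who want to see the arithmetic, here is a minimal sketch of how a redshift becomes a speed, and a speed a distance. The Hubble constant, the example redshift and the low-redshift approximation v ≈ cz are round illustrative assumptions on my part, not survey values:

```python
# Illustrative sketch: redshift -> recession speed -> distance.
# Round numbers only; real surveys are far more careful than this.

C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per megaparsec (assumed round value)

def recession_speed(z):
    """Low-redshift approximation: v is roughly c times the redshift z."""
    return C_KM_S * z

def hubble_distance_mpc(v_km_s):
    """Hubble's law: v = H0 * d, so d = v / H0 (in megaparsecs)."""
    return v_km_s / H0

z = 0.02                          # a 'nearby' galaxy in the cosmological sense
v = recession_speed(z)            # roughly 6,000 km/s
d = hubble_distance_mpc(v)        # roughly 86 Mpc, i.e. ~280 million light years
print(f"v = {v:.0f} km/s, d = {d:.0f} Mpc")
```

The sketch also shows why distance is the hard part: the speed drops straight out of the redshift, but turning it into a distance needs an independently calibrated Hubble constant.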

The most successful method for estimating the huge distances involved is to accurately measure the brightness of supernova explosions in distant galaxies. The further away these are, the fainter they look, and so measuring their brightness tells us their distance. This is much harder than it sounds. There are several different types of supernova and only one kind has reasonably consistent luminosity. Furthermore, even these gigantic stellar cataclysms are hard to spot when looking halfway, or more, across the visible Universe. A final difficulty is that these measurements must be calibrated using a supernova from a nearby galaxy whose distance is determined by some other technique. Cosmic distance scales are built up using a step-by-step approach in which we first determine the size of the solar system, then the distances to nearby stars, then distances to nearby galaxies and so on. Each of these steps is subject to uncertainties and so inter-galactic distances are hard to determine with the necessary precision. As a consequence of all these problems, the results from the supernova surveys were confusing and contradictory for several decades; but, when unambiguous measurements finally arrived, partly thanks to the Hubble Space Telescope, the results astonished nearly everyone. The Universe’s expansion had been slowing as expected during the first 9 billion years of its existence but, about 4 billion years ago, the expansion started to speed up again.
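The standard-candle idea rests on the inverse-square law: if every supernova of the right type has the same intrinsic luminosity, its observed faintness fixes its distance. A minimal sketch (the luminosity and flux values are invented round numbers, chosen only to show the scaling):

```python
# Sketch of the standard-candle principle: observed flux F = L / (4*pi*d^2),
# so a candle of known luminosity L at measured flux F sits at distance
# d = sqrt(L / (4*pi*F)).  All numbers below are illustrative, not data.
import math

def distance_from_flux(luminosity_watts, flux_w_per_m2):
    """Invert the inverse-square law to recover distance."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

L = 1.0e36                       # watts: a round stand-in for a supernova
d_near = distance_from_flux(L, 1.0e-12)
d_far = distance_from_flux(L, 1.0e-14)   # 100 times fainter

print(round(d_far / d_near, 6))  # -> 10.0: 100x fainter means 10x further
```

The calibration problem in the paragraph above enters through L: the method only yields absolute distances once the luminosity has been pinned down using a nearby supernova whose distance is known some other way.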

Although I’ve emphasised that this discovery was unexpected, the theoretical possibility of a Universe with accelerating expansion had been known for more than 80 years. Few people took the possibility seriously because it requires anti-gravity, a repulsive force large enough to overwhelm the usual gravitational attraction between massive objects. Anti-gravity is a mainstay of much science fiction, and sounds like pure fantasy, but Einstein’s very successful theory of general relativity actually allows it. In this theory, which was published in 1915, gravitational attraction is caused not only by mass (as in Newton’s earlier theory) but also by pressure. The force of gravity pulling you towards the Earth right now, for example, results almost entirely from the large mass of the Earth, but in general relativity this force is minutely supplemented by an additional attraction caused by the immense pressures inside the Earth. In normal matter the contribution from pressure is utterly swamped by the contribution from mass and can be ignored. Nevertheless, the contribution is there. Furthermore, pressure, and therefore gravity, can be negative: for example, consider stretched rubber, which pulls inwards rather than pushing outwards. So, in principle, we generate anti-gravity every time we stretch an elastic band. In practice any real material would snap long before measurable effects occurred, but it turns out that space itself exerts negative pressure.

It may seem surprising that a vacuum exerts any kind of pressure, but this results from another major 20th-century breakthrough in physics: the laws of quantum mechanics. In quantum mechanics the energy contained by an object, such as the wound-up spring of an old-fashioned clock, does not have a precise value but instead fluctuates on short timescales because of the uncertainties that are an inherent feature of quantum theory. This uncertainty in energy applies to everything, even empty space. As a consequence, I shouldn’t really use the phrase ‘empty space’ at all, because (thanks to Einstein’s equation E = mc², which says that matter and energy are really the same thing) these energy fluctuations cause space to be filled by a dynamic froth of short-lived elementary particles. These ‘virtual particles’ give the vacuum real physical properties including a minute density and a minute pressure. Moreover, since a larger piece of vacuum contains more virtual energy than a smaller chunk, you have to put energy in if you want to stretch space. This implies that stretching space is hard work (as well as hard to imagine) and so the vacuum resists being stretched. Hence, like rubber, empty space exerts a negative pressure. The anti-gravity from this turns out to be three times greater than the gravitational attraction generated by the vacuum’s tiny density and so, overall, empty space produces a minuscule amount of anti-gravity. In a nutshell, the vacuum has a repulsive nature. It’s as if space doesn’t like itself and every part of it is gently pushing away every other part. The effect is tiny and utterly unable to produce measurable consequences on human scales of space and time but you can see the resulting repulsion by looking at the cumulative effect as it adds up across 10 billion light years.

Before moving on to the cosmic consequences of this negative pressure, it’s probably worth saying a few words about my terminology here. In cosmology books you will not often see the phrase ‘anti-gravity’. Rather, you will see more scientifically acceptable synonyms such as ‘dark energy’, ‘quintessence’ or ‘the cosmological constant’. The last of these phrases has the longest history. The cosmological constant was originally introduced by Einstein himself because, when he developed his theory of general relativity, the expansion of the Universe had not yet been discovered and it was believed that the Universe must be static. Einstein introduced the cosmological constant into his equations as a way to produce a balanced Universe where a general repulsion spread through the whole of space is exactly cancelled by the gravitational attraction between massive objects. However, by 1930, it became clear that the Universe was expanding and so the fiddle-factor Einstein had included to allow a static cosmos was no longer needed. Einstein famously dismissed its invention as his greatest blunder but the cosmological constant never completely went away. Theoreticians occasionally resurrected it to try to explain various oddities such as a Universe that seemed younger than some of the stars it contained! However, the cosmological constant was rather frowned upon as not quite respectable until it came back with a vengeance following the late 20th-century discovery that cosmic expansion is speeding up. This observational confirmation of anti-gravity was almost immediately interpreted as being due to the repulsive vacuum that I described above.

Dark energy, on the other hand, is a more recent term and is more inclusive in the sense that it covers alternative explanations for accelerating cosmic expansion that are not strictly equivalent to a cosmological constant (quintessence is one of these). These alternatives have a similar effect, in that they produce a long-distance repulsive force, but they differ in their mathematical details. These differences are unimportant here, and for simplicity I will stick with the most widely discussed explanation for an accelerating Universe – that it results from a vacuum-generated cosmological constant.

The repulsive force produced by all this weird and wonderful physics is so small as to be almost immeasurable but it is competing with gravitational attraction generated by an average density of matter that is also tiny. Vast empty spaces between galaxies take up most of the Universe’s volume, and even the galaxies themselves consist mostly of near-vacuums in the voids between the stars. There is a complication here, however. Galaxies are filled and surrounded by ‘dark matter’ (not to be confused with ‘dark energy’), a currently unidentified fluid whose only physical effects are through the gravity it produces. Dark matter does not, for example, interact with light and that’s why we can’t see it. Nevertheless, it must be there since the way stars move within galaxies (and the way galaxies themselves move) can be explained only if there is far more matter present than we can see. In fact, there is six times more dark matter in the Universe than there is of the ordinary kind from which stars, planets and people are made. Despite this extra contribution to the amount of mass in galaxies and galaxy clusters, the average matter density of the Universe remains very small – equivalent to about one thousandth of a gram in a volume the size of the Earth. Furthermore, as the Universe has expanded, this low average density of matter has fallen further, and, with it, the tiny gravitational attraction holding our Universe together has fallen as well. The cosmological constant, on the other hand, really is a constant. It has not dropped as the Universe has expanded since it is a property of empty space itself. Thus, as the Universe has grown, the relative importance of the cosmological constant has grown with it.
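The tug-of-war described here comes down to two simple scaling laws: matter density falls as the cube of the expansion factor, while the vacuum density stays fixed. A sketch in units where the two densities balance at a relative size of one (the normalisation is my illustrative choice, anchored to the balance point roughly 4 billion years ago):

```python
# Why the cosmological constant eventually wins: matter dilutes with
# volume, the vacuum does not.  Densities are in illustrative units in
# which both equal 1.0 at the moment of balance.

def matter_density(a):
    """Matter dilutes with volume: rho_m falls as 1/a^3 (a = relative size)."""
    return 1.0 / a**3

def vacuum_density(a):
    """The cosmological constant is a property of space itself: constant."""
    return 1.0

for a in (0.5, 1.0, 2.0):
    print(a, matter_density(a), vacuum_density(a))
# At half the balance size, matter outweighs vacuum 8 to 1 (deceleration);
# at the balance size they are equal; at twice the balance size the
# vacuum outweighs matter 8 to 1 (acceleration).
```

Nothing about the vacuum changes over time; it simply waits for dilution to hand it the upper hand.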

This growth in anti-gravity’s importance can be seen clearly in the supernova data, which shows us that, about 4 billion years ago, the Universe expanded to the point where repulsion by the cosmological constant finally cancelled out the diminishing attraction from matter. Since that moment, the expansion has been accelerating rather than decelerating. The observation that gravity and anti-gravity were in balance 4 billion years ago also allows us to work out the size of the anti-gravity effect. The cosmological constant produces a pressure that is one hundredth of one trillionth of the atmospheric pressure at the Earth’s surface. When spread across the entire cosmos, though, the gravitational effects of this almost inconceivably small pressure are sufficient to gently tear the Universe apart.

All of which brings me to one of several deep cosmological mysteries at the core of this chapter: physics cannot explain why this vacuum pressure is so very small. A back-of-the-envelope calculation based on the expected quantum fluctuations shows that there should be a staggeringly large vacuum pressure roughly equal to the so-called ‘Planck pressure’ of ‘one followed by 108 zeros’ atmospheres. The difference between this unimaginably large theoretical pressure and the unimaginably small true value has justifiably been called the worst prediction in the history of science. The surprise is not that our Universe has anti-gravity, but rather that this repulsive force is so small. This problem was first identified in the 1960s and astronomers’ response at the time was to assume that the cosmological constant must be exactly zero. This is an entirely reasonable reaction. If you can’t see an elephant in your room it’s probably because there is no elephant there rather than because a ridiculously small elephant is hiding under a speck of dust. Then, 30 years later, came the strong evidence of an accelerating Universe and, hence, a non-zero cosmological constant. There was, after all, a microscopic pachyderm in the room and it was going to take some explaining.

At this point, the anthropic principle and the life-friendliness of our Universe finally come back into my story. Assume for now (I’ll justify this more later) that there are many universes and that the vacuum-fluctuation pressure is able to take different values in each one. Under these circumstances, the typical pressure in a typical universe might still be close to the Planck pressure but a few very rare universes might, by chance, have much smaller vacuum pressures. In high-pressure universes, anti-gravity will be very strong and they will blow themselves to pieces long before stars, galaxies and observers can evolve. Such universes will therefore be uninhabited. Life can appear only in highly unusual universes where the vacuum pressures are low enough to allow time for galaxies and their possible inhabitants to appear. In this view, our Universe is one of those infrequent but life-friendly universes.

This is the kind of problem the anthropic principle was originally devised to tackle: explanations for life-promoting coincidences that seem to be built into the entire fabric of the Universe. There are quite a few such coincidences, since the cosmological constant is not alone in having a value that seems fine-tuned to allow life. Many physical properties of our Universe would, if changed slightly, render it uninhabitable. The best known example of this is that the strengths of electrical and nuclear forces are just right to allow carbon to be among the most common elements. Carbon chemistry is basic to all life on Earth because carbon has a unique ability to form a large number of complex compounds. It seems likely therefore that carbon, along with water, will be essential to any chemically-based life and so a cosmos in which carbon was rare would probably be a cosmos without life.

Fortunately, a number of factors are fine-tuned to allow carbon to form easily. Carbon is created inside stars when they start to get old. In younger stars such as our Sun, the energy is produced by the nuclear fusion of hydrogen to make the next heaviest element, helium. However, when the hydrogen starts to run out, stars collapse and their central temperatures and pressures go up dramatically. This creates the conditions under which helium nuclei themselves can fuse and so the stars get a new, relatively short-lived, source of energy. At first two helium nuclei fuse together to form beryllium and this reaction occurs quite easily. However, beryllium is actually quite rare because it is very easy to add another helium nucleus to beryllium, creating carbon. Most beryllium that forms is rapidly turned into carbon. This reaction could, in principle, go a stage further by the addition of yet another helium nucleus to produce oxygen. However, this second reaction is much harder and so only a little carbon turns into oxygen. Thus, the main product of nuclear fusion in stars, once they have used up their hydrogen, is carbon. Eventually, some of this carbon is blown into the rest of the Universe by the strong stellar winds of ageing stars or, more dramatically, exploded into space by nova and supernova eruptions. This is the stardust from which all of us are made.

If the relative strengths of the electrical and nuclear forces were slightly different, this story would change dramatically, because it would alter the ease with which different elements can be synthesised inside stars. A cosmos in which beryllium was the stable end-product of stellar fusion reactions would contain a lot of violent and short-lived stars and very little carbon. A universe in which carbon reacted easily to produce oxygen would be rather damp, since it would be full of oxygen and hydrogen, the ingredients of water; and in this damp universe carbon would be as rare as beryllium is in our own Universe. Interestingly, this story shows how the anthropic principle can be used to make predictions. Fifty years ago little was known about the details of these nuclear reactions. However, Fred Hoyle pointed out that carbon was ‘an essential component of astronomers’ and so carbon must be a major product of stellar nucleosynthesis. Furthermore, he argued that for carbon to be so common it must be much easier to convert beryllium into carbon than it is to burn carbon to form oxygen. All of these predictions were experimentally verified within a few years.

There are many other ways in which the laws of physics seem to be fine-tuned to allow our cosmos to be particularly life-friendly. Planetary orbits, for example, are stable only in three dimensions but the latest theories in physics suggest that the cosmos actually has eleven dimensions, most of them too small to see. In our Universe, only three spatial dimensions took part in the Big Bang, but other universes may have different numbers of ‘large’ spatial dimensions. In another example of our life-friendly cosmos, molecular bonds have the right strength to allow chemistry at temperatures corresponding to typical star–planet separations. If the electromagnetic forces in our cosmos were much stronger, then planets would be too cold for chemistry; if they were weaker, the entire Universe would be too hot.

None of these properties have values set, as far as we know, by fundamental laws of nature, and quite moderate alterations produce imagined universes with substantially less complexity than our own – universes where galaxies, stars, planets, molecules, atoms or even nuclei cannot exist. Life is, if nothing else, unmatched in its complexity, and it stretches credulity to suggest that universes that can’t even make atoms could somehow generate life. It’s conceivable that the values of physical constants in our Universe are the only ones possible, but the alternative explanation, that multiple universes are actually realised in nature and that we necessarily live in one of the few where the constants are ‘just right’, is surely a fascinating idea worth pursuing.

There is much more to be said about cosmological anthropic effects than cosmologists themselves have so far discussed in detail. The life-friendliness of our planet is not peculiar at all if, instead, it’s a consequence of living in a life-friendly universe. For example, the combination of silicate rocks, carbon dioxide, and water stabilises the climate of Earth-like planets, as discussed earlier in this book, and this would not happen if the properties of any of these compounds were altered. Is this an example of a peculiarly life-friendly property of our particular universe? Could there also be fine-tuning to optimise the occurrence of habitable planets in our Universe or to make the long-term stability of climates more likely? Might our 4 billion years of good weather therefore be a simple consequence of living in a universe where that sort of thing happens?

Can we go even further? Are the laws of nature in our Universe contrived not just to allow complexity but to actually make the emergence of life inevitable on suitable worlds? Could the Gaia hypothesis even be true, despite my scepticism in the last chapter, because a life-friendly universe is a universe that generates Gaian biospheres? The final two chapters of this book will tackle the issue of whether we live in a life-friendly universe or on a life-friendly planet; for now, let me pursue the cosmological version of the anthropic principle a little further.

The idea that we live in an unusually life-friendly universe makes sense only if two things are true. Firstly, there must be many universes. Secondly, the laws of physics must be different in each of them. Given this situation, some universes will be better suited to life than others, and it is then almost inevitable that we will find ourselves living in one of the better neighbourhoods. Note, however, that for the same reason I suggested earlier that we probably live on the second-best of all possible worlds, we probably also live in a second-best universe. Universes that are marginally suitable for life will probably be far more common than universes that are perfect. Nevertheless, the idea of multiple universes, each with its own laws of nature, explains a great deal about our Universe that otherwise seems mysterious. But is the idea of a collection of universes, a multiverse, really believable?

The simple answer is ‘yes’ because, as many teenagers might argue, our part of the cosmos is simply too boring to be all that there is to existence. Let me explain this rather cryptic statement. It has been understood for many decades that the very smooth and featureless character of our Universe, on the very biggest scales, is difficult to explain. On cosmologically small scales concerned with trifling objects such as galaxy clusters there is quite a lot of interesting variation that has produced stars, planets and people, but at larger scales the Universe seems much the same whatever direction you look in. If we look as far out as telescopes can see in one direction and then compare that to what we see in any other direction, there are no significant differences at all. In all directions we see very similar numbers of very similar-looking galaxies. However, in standard Big Bang cosmology, two portions of the Universe in different directions may never have had any contact with each other. For galaxies at the limits of observability from Earth, there has by definition only just been time since the origin of the Universe for their light to get to us. Two patches of sky in opposite directions, for example, are obviously twice as far from each other as they are from us, and even signals travelling at the speed of light cannot yet have crossed that distance. As far as each of these volumes of space is concerned, the other volume may as well not exist. But, if these volumes have never interacted, how can they look so similar? It’s a bit like the completely unsubstantiated legend that there were tribes of Native Americans who spoke Welsh when European settlers first arrived. If true, the legend could only mean that there had been earlier contact between the two continents, since an independent evolution of Welsh is, to say the least, rather unlikely. 
In a similar way, completely independent evolution of identical cosmological structures is unlikely, and the implication is that there must have been earlier contact. But in standard Big Bang cosmology, this simply isn’t possible. If we go back in time a few billion years, light from those distant galaxies had not yet even had time to reach the Earth, let alone to reach galaxies on the far side of the sky. Quite simply, if these two patches of sky can’t see each other today, they could not see each other in the past – and the further back you go towards the Big Bang, the bigger this problem gets.

Recent results from a spectacularly successful space mission have thrown this issue into even sharper relief. The Wilkinson Microwave Anisotropy Probe (WMAP) was launched on 30 June 2001, and after circling the Earth three times, it was sent towards the Moon whose gravity, in turn, slung WMAP out into interplanetary space. Six months later it reached its final destination, a spot 1.5 million kilometres from the Earth in the opposite direction to the Sun. This position, known as the L2 point, is a relatively stable place to put a spacecraft since it will stay put in that vicinity with relatively little adjustment using the spacecraft’s own engines. The L2 point also has the advantage of being quite a long way from the Earth (which cuts down radio interference) while still being close enough to allow reasonably easy communication. Furthermore, the Sun, Moon and Earth all lie in the sunward direction, as seen from L2, and so it is easy for the spacecraft to avoid looking at all three of these over-bright objects. Once WMAP was on station, it began an eight-year mission to take the temperature of the Universe. Strictly speaking, WMAP actually measured differences in temperature between opposite points in the sky rather than measuring temperature directly, but that is a relatively minor technical detail. WMAP’s measurements were made using a device similar to one that may have been stuck in your ear to take your own temperature. In the ear-thermometer case, a detector measures the amount of infra-red radiation emitted by your ear-drum and uses this to determine body temperature. In WMAP’s case, its detectors picked up the microwaves produced by the much colder background temperature of the Universe and found a temperature of –270.424°C.

Some things in the Universe, stars for example, are much warmer than this temperature, but most cosmic radiation does not come from those relatively rare sources. The vast majority of photons bouncing around the Universe are remnants from the Big Bang, or more accurately, from a time 375,000 years later when the Universe first became transparent. Prior to that the Universe was so hot that electrons and atomic nuclei could not combine to form atoms and light could not propagate through this electrically charged plasma. Then, over a period of just a few thousand years as the entire cosmos cooled below about 3,000°C, atoms formed and photons were set free. This, rather than the Big Bang itself, was the true ‘let there be light’ moment. Since that time these photons have largely moved unimpeded through the Universe. Cosmic expansion has, however, cooled them so that they now indicate a temperature 1,000 times lower than that of the hot plasma from which they originated. The microwaves we see today are the red-shifted remnant of an ancient metamorphosis that changed our Universe from being about as clear as mud to being far more transparent than the purest of mountain spring water.
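The cooling itself is simple arithmetic: the photons’ apparent temperature falls in proportion to how much the Universe has stretched since they were set free. A short sketch, using round numbers consistent with the text (roughly 3,000 degrees at the moment of transparency and a stretch factor of about 1,100 since then; both are illustrative assumptions on my part):

```python
# Sketch of cosmic cooling: temperature today is the release temperature
# divided by the factor by which the Universe has stretched since release.
# Round illustrative numbers, not mission data.

T_RELEASE_K = 3000.0      # kelvin: roughly the temperature at which atoms formed
STRETCH = 1100.0          # approximate expansion factor since that moment

t_now_k = T_RELEASE_K / STRETCH
t_now_c = t_now_k - 273.15
print(f"{t_now_k:.2f} K = {t_now_c:.2f} C")
```

The answer lands within a whisker of the −270.424°C that WMAP measured, which is exactly the point: a hot plasma plus a thousandfold stretch gives today’s faint microwave glow.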

This remnant is called the cosmic microwave background and its existence, which was first demonstrated in the 1960s, is the single most important piece of evidence supporting the standard Big Bang model of the Universe, because it proves that the cosmos was once dense and hot. WMAP was measuring heat emitted more than 13 billion years ago and its mission was to measure more accurately than ever before how the intensity of this heat varies as we look in different directions. What WMAP produced was a heat-map of the sky showing fluctuations of just a few millionths of one degree. Places in the Universe that should never have been in contact with one another nevertheless somehow manage to have almost identical temperatures. If it was just a few bits of the Universe that happened to be at the same temperature, this might be a coincidence, but it’s not just a few bits. The whole of the early Universe was at more or less the same temperature and this implies thermal contact. Heat must have been able to move from hot parts to cooler parts so that temperatures equalised. Furthermore, the tiny remaining fluctuations that are present have exactly the right size to seed the later evolution of galaxies. Slightly cooler patches in the early Universe were slightly denser and therefore attracted, by gravity, material from their surroundings to make them denser still. Over hundreds of millions of years this process produced the galaxies and galaxy clusters that fill the Universe today. The large-scale uniformity of the galaxy distribution that I mentioned earlier can also be traced to the smoothness of the Universe’s temperature a few hundred thousand years after its birth.

A few weeks after I wrote the preceding paragraphs, and while I was still working on the rest of this book, a new set of microwave background measurements from the European Space Agency’s Planck mission were released. There are now three independent sets of space-based measurements of the microwave fluctuations. The first set came from the COBE mission, launched in 1989. Then came WMAP in 2001 as described above and, in 2013, we began to get the first results from Planck. With each successive mission the pictures have become a little sharper, a little more detailed and a little better at testing our understanding of the Universe. The cosmic architecture which emerges is a consistent one of a hot, early Universe whose temperature was extraordinarily uniform. There are some tiny fluctuations but these are themselves very uniform. We see similar deviations from precise uniformity whatever direction we look in and, as a result, the distribution and size of galaxies that formed later was also spectacularly uniform. The latest results from Planck do show one very minor but extraordinarily interesting feature not seen on the blurrier images from COBE and WMAP but I’ll come back to that later. Based on the available data it is clear that, on the very largest scales, our Universe is extremely uniform with even the minor departures from perfect smoothness being surprisingly consistent whichever direction we look in. Given that even the early Universe was too spread out to have gained a uniform temperature by the usual route of heat moving from hot areas to cold ones, this needs explaining!

Fortunately, COBE, WMAP and Planck were not operating in a theoretical vacuum (no pun intended). An explanation for the Universe’s smoothness and several other related mysteries had been proposed in the early 1980s by American cosmologist Alan Guth. Guth described the Big Bang as resulting from a process he called inflation, which gave rise to a young, hot and expanding Universe. According to Guth, inflation was a short period of enormous expansion in the very young Universe that took points that were microscopically close and moved them astronomically far apart in a tiny fraction of a second. As a result, points that were no longer in thermal contact after inflation had actually been close enough to equalise their temperatures when the process started. Essentially, a process of inflation stretches a young Universe so much that it is forced to be incredibly smooth. This expansion takes place at an unimaginably large rate, one that causes even nearby points to move apart much faster than the speed of light. This does not defy the normal laws of physics, however, because it is space itself that is expanding. There are no physical entities moving through space at these high velocities.

To explain the high degree of smoothness we actually observe in the Universe, inflation must have been so great that even points that are now a thousand times further away than the edge of the visible Universe must have been in thermal communication with us when inflation began. Detailed theories of inflation actually imply that this is a lower limit on the size of the region of space that has the same properties as the visible Universe. Our Universe is therefore probably much more than a billion times larger than the observable Universe. However, and this is the fascinating new result from Planck, there are tiny departures from perfect uniformity in the microwave background data that may be the first indications of structure behind the veil of the 14 billion light-year horizon imposed by the finite speed of light. It’s early days and there may yet be other, more mundane, explanations for these apparent deviations from a completely uniform Universe, but it would be hard to overstate how exciting these new results could be. We may be seeing the first signs of what the Universe looks like on a scale of trillions of light years.

But what physical process caused this massive expansion of the early Universe? We already have a mechanism for expanding a Universe: the cosmological constant produced by quantum mechanical fluctuations in the vacuum. But this is nowhere near strong enough to do the job. The accelerating expansion seen in today’s Universe is a very gentle affair compared to that needed when the Universe was young. Guth suggested that the earlier phase of extremely rapid expansion resulted from a ‘false vacuum’. I have, I hope, already convinced you that the vacuum should be thought of almost as a real physical object, an entity that has significant properties of its own. Now let’s take that a step further. It is also possible that the vacuum can exist in different states. In the same way that water can be cold ice, warm liquid or hot vapour, the vacuum may exist in different forms with different energy content. The false vacuum is a state with much higher energy than that of the vacuum we see in our Universe today and, as a consequence, it has a much higher cosmological constant. False vacuum might therefore have been the cause of inflation in the very young Universe. This idea also provides a very natural explanation for how inflation ceased, since the high-energy form of vacuum will eventually decay into the less energetic true vacuum. That process of decay would have released vast amounts of heat to give rise to the hot Big Bang itself (there is a debate among cosmologists about whether we should think of inflation as leading to the Big Bang, as I’ve described it here, or think of the start of inflation as being ‘The Big Bang’ – but this is a purely semantic point and rather unimportant in my view).

Inflationary cosmology with its false vacuum and superluminal expansion seems like very wild speculation to those of us who are not cosmologists, but these ideas violate no known laws of physics and, more importantly, they make predictions about our Universe that fit the observations extraordinarily well. Not only does inflation resolve puzzles such as the smoothness of our Universe, it accurately predicts how the minuscule departures from a completely uniform temperature should look at different scales and it predicts how these small temperature differences gave rise eventually to stars and galaxies. Inflationary cosmology is very much mainstream cosmology these days. A multiverse, on the other hand, takes things to an even higher level of speculation and needs one final ingredient. Widely separated parts of the unimaginably enormous inflationary universe need to have different laws of physics. Thanks to physicists’ almost obsessive search for simplicity, it begins to look as if such a thing is indeed possible.

I love Richard Dawkins’s description of physics as the science of objects that are so simple you can actually use mathematics to describe them. There is more than a little truth in this, and as part of their eternal search for simplicity, physicists look for ways to unite things that seem to be very different: a falling apple is drawn towards the ground by the same force that keeps the Moon in its orbit, while a tsunami is described by the same equations that govern the flow of petrol vapour into a combustion chamber. In this spirit, physics has now been on a 150-year search to formulate what some call, with a touch of hubris, a ‘theory of everything’ in which all the different forces of nature emerge as different facets of a single force. This search has not been in vain. The laws governing electricity and magnetism were united in the 19th century, and an additional force, the weak nuclear force, was successfully added in the 1970s. There has also been significant progress with bringing the so-called strong nuclear force into the same theory. The final ingredient, gravity itself, is proving very tricky to incorporate but even here there are promising avenues of research. Although this is still very much a work in progress, an important pattern has already emerged: the disparate laws governing the behaviour of objects from nuclei to molecules to galaxies start to look similar as we turn up the temperature. The expectation is that these laws had a very simple and completely unified form in the high-energy conditions that prevailed around the time of the Big Bang and that they then crystallised into their observed complexity as temperatures dropped. The key idea here is that the specific form into which the laws crystallised may not be fundamental; other laws may be equally possible. An often-used analogy is that of a pencil balanced on its tip, which, when it falls, must do so in one specific direction.
We would not expect it to fall in the same direction if the exercise were repeated. Similarly, the laws of physics may have fallen out in different ways in different parts of space. The massive expansion produced by inflation ensures that we are nowhere near different domains with different physics, but if these ideas are correct, those different domains are out there. These alien domains are so far away, and have such completely different physics, that it makes sense to think of them as completely different universes: we have no contact with them and they do things differently there.

It’s probably worth mentioning that this is just one of many theories invented by cosmologists that give rise to multiple universes. Another related scenario assumes that inflation is eternal and merely breaks down locally to give rise to pocket universes of space that are no longer inflating, separated by unimaginably vast regions of ongoing inflation. Other scenarios examine whether the interiors of black holes may constitute separate universes, each containing, in turn, its own black holes. Older theories have postulated an oscillating Universe that experiences repeated expansion, collapse and a ‘Big Bounce’. A related concept in which multiple universes are separated in time is one where our Universe eventually becomes so dilute that a new Big Bang occurs spontaneously within it. From my point of view these different proposals are just details for the cosmologists to debate. The important point is that a multiverse is not as outlandish an idea as it might first seem and is one taken seriously by many experts in the field.

The picture that emerges from modern cosmology is that we live in a vastly larger, stranger and more diverse cosmos than the one envisaged just a few decades ago. This gives a stage on which the anthropic principle can perform its magic. Whether we regard the disparate regions of the cosmos with their varied laws and properties as truly separate universes or as simply distinct domains within a single Universe is merely a semantic point. Our laws of nature may be local by-laws whose strictures apply only in our small corner of existence. Under these conditions it is all but inevitable that we occupy a favoured location – one of the rare neighbourhoods where those by-laws allow the emergence of intelligent life. We do, in that sense, live in a lucky universe.

However, that does not necessarily mean that the view propounded in the rest of this book, that we live on a lucky planet, needs to be discarded. It is quite possible that both are true and that we live on a particularly favoured world even within our favoured portion of the cosmos. That brings us back to the ‘unlucky planet’ of Nemesis from the Prologue, one example of how not to build a highly habitable world. The difference between Earth and my imagined twin world was in their moons, and a look at how the size and distance of the Moon affect our climate reveals a real surprise. It is only by great good fortune that we have avoided a catastrophe that would have rendered our world incapable of supporting the complex and beautiful biosphere we enjoy.