COINCIDENCE, PROVIDENCE – OR MULTIVERSE?
On religion I tend towards deism but consider its proof largely a problem in astrophysics. The existence of a cosmological God who created the universe (as envisaged by deism) is possible, and may eventually be settled, perhaps by forms of material evidence not yet imagined.
E. O. Wilson, Consilience
WHAT DOES THE FINE TUNING MEAN?
In our universe, intricate complexity has unfolded from simple laws. But it’s not guaranteed that simple laws permit complex consequences; indeed, we’ve seen that different choices of our six numbers would yield a boring or sterile universe. Similarly, mathematical formulae can have very rich implications, but generally they don’t. The Mandelbrot set, for instance, with its infinite depth of intricate structure, is encoded by a short algorithm (see Figure 11.1). But other algorithms, superficially similar, yield very dull patterns.
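The ‘short algorithm’ behind the Mandelbrot set really is short. A minimal sketch in Python (the iteration cap, escape radius and grid resolution are illustrative choices, not part of the mathematical definition): for each point c in the complex plane, iterate z → z² + c and ask whether the orbit stays bounded.

```python
# Membership test for the Mandelbrot set: iterate z -> z^2 + c and
# see whether |z| stays bounded (an escape radius of 2 suffices).
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit escapes to infinity
            return False
    return True

# Crude ASCII rendering of the set over a small grid of the complex plane.
for im in range(10, -11, -2):
    row = "".join(
        "#" if in_mandelbrot(complex(re / 20, im / 20)) else "."
        for re in range(-40, 11, 2)
    )
    print(row)
```

A few lines of code; an infinite depth of structure at the boundary — which is precisely the contrast with ‘superficially similar’ iterations (say, z → z² alone) that produce nothing but a featureless disc.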
There are various ways of reacting to the apparent fine tuning of our six numbers. One hard-headed response is that we couldn’t exist if these numbers weren’t adjusted in the appropriate ‘special’ way: we manifestly are here, so there’s nothing to be surprised about. Many scientists take this line, but it certainly leaves me unsatisfied. I’m impressed by a metaphor given by the Canadian philosopher John Leslie. Suppose you are facing a firing squad. Fifty marksmen take aim, but they all miss. If they hadn’t all missed, you wouldn’t have survived to ponder the matter. But you wouldn’t just leave it at that – you’d still be baffled, and would seek some further reason for your good fortune.
Others adduce the ‘tuning’ of the numbers as evidence for a beneficent Creator, who formed the universe with the specific intention of producing us (or, less anthropocentrically, of permitting intricate complexities to unfold). This is in the tradition of William Paley and other advocates of the so-called ‘argument from design’ for God’s existence. Variants of it are now espoused by eminent scientist-theologians such as John Polkinghorne; he writes that the universe is ‘not just “any old world”, but it’s special and finely tuned for life because it is the creation of a Creator who wills that it should be so’.1
If one doesn’t accept the ‘providence’ argument, there is another perspective, which – though still conjectural – I find compellingly attractive. It is that our Big Bang may not have been the only one. Separate universes may have cooled down differently, ending up governed by different laws and defined by different numbers. This may not seem an ‘economical’ hypothesis – indeed, nothing might seem more extravagant than invoking multiple universes – but it is a natural deduction from some (albeit speculative) theories, and opens up a new vision of our universe as just one ‘atom’ selected from an infinite multiverse.
THE MULTIVERSE
Some people may be inclined to dismiss such concepts as ‘metaphysics’ (a damning put-down from a physicist’s viewpoint). But I think the multiverse genuinely lies within the province of science, even though it is plainly still no more than a tentative hypothesis. This is because we can already map out what questions must be addressed in order to put it on a more credible footing; more importantly (since any good scientific theory must be vulnerable to being refuted), we can envisage some developments that might rule out the concept.
The prime stumbling-block is, of course, our perplexity about the extreme physics that applied in the initial instants after the Big Bang. There are strengthening reasons to take ‘inflation’ seriously as an explanation for our expanding universe: the theory’s firmest and most generic prediction, that the universe should be ‘flat’, is seemingly borne out by the latest data (albeit not in the simplest form: three ingredients – atoms, dark matter, and the vacuum energy λ – contribute to the ‘flatness’). The actual details of inflation depend on the physical laws that prevailed in the first 10⁻³⁵ seconds, when conditions were so extreme as to be far beyond the range of direct experiment. But there are two ways we can, realistically, hope to pin down what those conditions were. Firstly, the ultra-early universe may have left conspicuous ‘fossils’ in our present-day universe. For example, clusters and superclusters of galaxies were ‘seeded’ by microscopic fluctuations that arose during inflation, and their detailed properties, which astronomers can now study, hold clues to the exotic physics that prevailed when these structures were laid down. Secondly, a unified theory may earn credibility by offering new insight into aspects of the microworld that now seem arbitrary and mysterious – for instance, the various types of subatomic particles (quarks, gluons, and so forth) and how they behave. We would then have confidence in applying the theory to the inflationary era.
Advances along these two routes may disclose to us a convincing description of the physics of the ultra-early universe. Computer simulations of how universes emerge from something of microscopic size would then be just as believable as our current calculations of how helium and deuterium were formed in the first few minutes of the expansion (Chapter 5) and how galaxies and clusters emerged from small fluctuations (Chapter 8).
Andrei Linde and others have (as described in Chapter 9) already shown that some assumptions, consistent with everything else we know, yield many universes that sprout from separate Big Bangs into disjoint regions of space-time. These universes would never be directly observable; we couldn’t even meaningfully say whether they existed ‘before’, ‘after’ or ‘alongside’ our own. The input assumptions that predict multiple universes are still speculative. But, if these assumptions could be firmed up, and were based on a theory that convincingly explained things we could observe, then we should take the other (unobservable) universes seriously, just as we give credence to what our current theories predict about quarks inside atoms, or the regions shrouded inside black holes.
If there are indeed many universes, the next question that arises is: How much variety do they display? The answer again depends on the character of the physical laws at a deeper and more unified level than we yet understand. Perhaps some ‘final theory’ will give unique formulae for all of our six numbers. If it were to, then the other universes, even if they existed, would in essence be just replicas of ours, and the apparent ‘tuning’ would be no less a mystery than if our single universe were the whole of reality. We’d still be perplexed that a set of numbers imprinted in the extreme conditions of the Big Bang happened to lie in the narrow range that allowed such interesting consequences ten billion years later.
But there’s another possibility. The overarching laws that apply throughout the multiverse may turn out to be more permissive. The strength of the forces and the masses of elementary particles (as well as Ω, Q and λ) may not be uniquely fixed, but could take different values in each universe. What we call the ‘laws of physics’ would then, in the perspective of the multiverse, be merely ‘bylaws’, applying only within our own universe and the outcome of its early history.
There is an analogy here with a ‘phase transition’, such as the familiar phenomenon of water turning into ice. When the inflationary era of a particular universe ended, space itself (the ‘vacuum’) underwent a drastic change. The fundamental forces – gravitational, nuclear, and electromagnetic – all ‘froze out’ as the temperature dropped, fixing the values of N and ε in a manner that can be considered ‘accidental’, just like the pattern of ice crystals when water freezes. The number Q, imprinted by quantum fluctuations when a universe was of microscopic size, may also depend on how these transitions occur.
Some universes may manifest different numbers of dimensions, depending on how many of the initial nine spatial dimensions compactify rather than stretch. Even in three-dimensional spaces, there may be different microphysics, and perhaps different values of λ, depending on the type of six-dimensional space into which the other dimensions curl up. Universes could have different values of Ω (which fixes the density and how long their ‘cycle’ lasts if they recollapse), and Q (which measures how smooth a universe is, and so determines what structures emerge in it). In some, gravity could be so overwhelmed by the repulsive effect of the ‘vacuum energy’ (λ) that no galaxies or stars can form. Or the nuclear forces may be outside the narrow range of ε (close to 0.007) that allows elements like carbon and oxygen to be stable, and to be synthesized in stars: there would then be no periodic table and no chemistry. Some universes could have been short-lived, and so dense throughout their lives that everything stayed close to equilibrium, with the same temperature everywhere.
And some universes might just be too small and simple to permit any internal complexity at all. I have highlighted one basic number, N, that is exceedingly large – one followed by 36 zeros. Its size reflects the weakness of gravity: very large numbers of particles have to gather together before gravity becomes important – as it does, for instance, in stars (gravitationally bound fusion reactors). It’s a straightforward consequence of their size that stars have lifetimes that are enormously long, allowing time for photosynthetic and evolutionary processes to unfold on suitable planets in orbit around them. In Chapter 3 we imagined a universe where N wasn’t as huge as 10³⁶ but where everything else (including our other five numbers) was unchanged. Stars and planets could still exist, but they would be smaller and would evolve more quickly. They would not offer the stretches of time that evolution demands. And gravity would crush anything large enough to evolve into a complex organism.
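The size of N can be checked on the back of an envelope. A conventional way to express the weakness of gravity – and a natural reading of N – is the ratio of the electrical to the gravitational attraction between two protons; the sketch below assumes that reading, and uses standard values of the physical constants, so it shows only the order of magnitude.

```python
import math

# Physical constants (SI units, standard CODATA-style values)
e = 1.602176634e-19      # proton charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27     # proton mass, kg

# Ratio of electrostatic to gravitational attraction between two protons.
# The separation r cancels out, since both forces fall off as 1/r^2.
N = e**2 / (4 * math.pi * eps0 * G * m_p**2)

print(f"N ~ 10^{math.log10(N):.0f}")   # order of magnitude: 10^36
```

The separation between the protons drops out entirely, which is why N is a pure number – ‘one followed by 36 zeros’ – rather than a quantity with units.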
The recipe for any ‘interesting’ universe must include at least one very large number: clearly, not much could happen in a universe that was so constricted that it contained few particles. Every complicated object must contain a large number of atoms; to evolve in an elaborate way, it must also persist for a long time – many, many times longer than a single atomic event.
But an abundance of particles, and a long stretch of time, are not in themselves sufficient. Even a universe as large, long-lived and stable as ours could contain just inert particles of dark matter, either because the physics precludes ordinary atoms from ever existing or because they all annihilate with exactly equal numbers of antiatoms.
THE MYSTERY OF λ
These speculative ideas offer a new perspective on λ, the key number that measures the energy content of empty space. The energy that drove inflation is presumed to have been latent in the vacuum. This means that λ in the remote past was larger by 120 powers of ten than it could possibly be today. In this perspective, it seems surprising that λ should decay away to be so close to zero. There are three very different resolutions of this puzzle.
One is that the microstructure of space (maybe involving a foam-like assemblage of tiny interlinked black holes) somehow adjusts itself to make this so. A second idea is that the decay is gradual, and somehow ‘tracks’ the density of ordinary matter; it might then not be coincidental that the vacuum should now contribute about the same as the ordinary matter, so that Ω is around 0.3 but the vacuum still stores enough energy to provide the remaining 0.7 that’s needed to bring the overall density up to the critical value required for a flat universe.
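The bookkeeping in this paragraph is simple enough to verify directly. The critical density is ρ = 3H²/8πG; the sketch below assumes an illustrative Hubble constant of 70 km/s per megaparsec (a round value, not taken from the text) and shows how a 0.3 matter share plus a 0.7 vacuum share adds up to the critical value needed for a flat universe.

```python
import math

G = 6.67430e-11            # gravitational constant, SI units
Mpc = 3.0857e22            # one megaparsec in metres
H0 = 70e3 / Mpc            # Hubble constant in s^-1 (assumed 70 km/s/Mpc)

# Critical density for a 'flat' universe: rho_crit = 3 H^2 / (8 pi G)
rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg/m^3")   # a few protons per cubic metre

# The flatness bookkeeping from the text: if ordinary and dark matter
# supply ~0.3 of the critical density, the vacuum must supply the rest.
omega_matter = 0.3
omega_lambda = 1.0 - omega_matter
print(f"vacuum share needed for flatness: {omega_lambda:.1f}")
```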
A third possibility is that there’s no fundamental explanation for the smallness of λ in our universe, but that its ‘tuning’ (like that of our other numbers) is a prerequisite for our existence. We can think of λ as neutralizing the gravity at a particular density; this is what would happen in the static universe that Einstein had in mind when he invented the idea. So, as the universe expands, and the ordinary material gets more diffuse, the density at some stage drops below a threshold and the repulsion starts to ‘win’ over gravity. Our own universe may have passed that threshold, so that galaxies are already speeding up in their recession from us. But imagine a universe that was ‘set up’ exactly like ours except that λ was much larger. Then the repulsion would take over much earlier. If this transition had happened before galaxies had formed, then they never would – such a universe would be sterile.
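The ‘threshold’ at which repulsion starts to win can be made precise with a worked equation (a sketch in standard Friedmann-model notation, not drawn from the text). For pressureless matter of density ρ, the acceleration of the cosmic scale factor a obeys

```latex
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\,\rho \;+\; \frac{\Lambda c^{2}}{3},
```

so the repulsion ‘wins’ (the expansion accelerates, $\ddot{a} > 0$) precisely when the density falls below

```latex
\rho \;<\; \frac{\Lambda c^{2}}{4\pi G}.
```

Einstein’s static universe is the borderline case, where the density sits exactly at this value and attraction and repulsion cancel; in an expanding universe, dilution guarantees the threshold is eventually crossed.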
In the multiverse, λ could range over many possible values: these could either be a set of discrete numbers (determined by the way the extra dimensions curled up), or else a continuum of possibilities. In most universes, λ would be vastly higher than in ours. But our universe could be typical of the subset in which galaxies could have formed.
A KEPLERIAN ARGUMENT
The issue of the multiverse might seem arcane, even by cosmological standards, but it affects how we weigh the observational evidence in the current debate about Ω and λ. Some theorists have a strong prior preference for the simplest universe, with (contrary to the best present evidence) enough intergalactic dark matter to make Ω exactly unity, thus implying a degree of tuning in the early universe that was not merely remarkable but absolutely perfect. They’re uneasy with Ω being, say, 0.3, and even more uneasy about extra complications like a non-zero λ. As we’ve seen, it now looks as though a craving for such simplicity will be disappointed.
Perhaps we can draw a parallel with debates that occurred 400 years ago. Kepler discovered that planets move in ellipses not circles. Galileo was upset by this. He wrote ‘For the maintenance of perfect order among the parts of the Universe, it is necessary to say that movable bodies are movable only circularly’.2
To Galileo, circles seemed more beautiful; and they were simpler – they are specified just by one number, the radius, whereas an ellipse needs an extra number to define its shape (the ‘eccentricity’). Newton later showed, however, that all elliptical orbits could be understood by a single unified theory of gravity. Had Galileo still been alive when Principia was published, Newton’s insight would surely have joyfully reconciled him to ellipses.
The parallel is obvious. A universe with low Ω, non-zero λ and so forth may seem ugly and complicated. But maybe this is our limited vision. Our Earth traces out one ellipse among an infinity of possibilities, its orbit being constrained only by the requirement that it allows an environment conducive for evolution (not getting too close to the Sun, nor too far away). Likewise, our universe may be just one of an ensemble of all possible universes, constrained only by the requirement that it allows our emergence. So I’m inclined to go easy with Ockham’s razor3: a bias in favour of ‘simple’ cosmologies may be as short-sighted as was Galileo’s infatuation with circles.
If there were indeed an ensemble of universes, described by different ‘cosmic numbers’, then we would find ourselves in one of the small and atypical subsets where the six numbers permitted complex evolution. The seemingly ‘designed’ features of our universe shouldn’t surprise us, any more than we are surprised at our particular location within our universe. We find ourselves on a planet with an atmosphere, orbiting at a particular distance from its parent star, even though this is really a very ‘special’ and atypical place. A randomly chosen location in space would be far from any star – indeed, it would most likely be somewhere in an intergalactic void millions of light-years from the nearest galaxy.
At the time of writing, the view that our six numbers are accidents of cosmic history is no more than a ‘hunch’. But it could be firmed up by advances in our understanding of the underlying physics. More importantly for its standing as a genuinely scientific hypothesis, it is vulnerable to disproof: we would need to seek a different interpretation if the numbers turned out to be even more special than our presence requires. Suppose, for instance, that (contrary to current indications) λ contributed less than 0.001 of the critical density, and was thus thousands of times smaller than it needed to be merely to ensure that cosmic repulsion didn’t inhibit galaxy formation. This would raise suspicions that it was indeed zero for some fundamental reason. Likewise, if the Earth’s orbit had been an exact circle (even though we could exist equally comfortably in a modestly eccentric orbit), it could have favoured the kind of explanation that Kepler and Galileo would have preferred, whereby the orbits of the planets were fixed in exact mathematical ratios.
If the underlying laws determine all the key numbers uniquely, so that no other universe is mathematically consistent with those laws, then we would have to accept that the ‘tuning’ was a brute fact, or providence. On the other hand, the ultimate theory might permit a multiverse whose evolution is punctuated by repeated Big Bangs; the underlying physical laws, applying throughout the multiverse, may then permit diversity in the individual universes.
PROGRESS AND PROSPECTS: A RÉSUMÉ
Elucidating the ultra-early universe and clarifying the concept of the multiverse are challenges for the next century. These challenges look less daunting if we look back at what has been achieved during the twentieth century. A hundred years ago, it was a mystery why the stars were shining; we had no concept of anything beyond our Milky Way, which was assumed to be a static system. In contrast, our panorama now stretches out for ten billion light-years, and its history can be traced back to within a fraction of a second of the ‘beginning’.
Physical probes are, of course, still confined to our own Solar System, but improvements in telescopes and sensors allow us to study galaxies so far away that their light has been journeying towards us for ninety per cent of the time since the Big Bang. We have mapped, at least in outline, most of the volume that is in principle accessible to us, though we suspect that, beyond our horizon, our universe encompasses a vastly larger volume from which light has not yet had time to reach us (and perhaps never will).
We are learning how cosmic structure emerged, and how galaxies evolved, from detailed observations – not only of nearby galaxies but also of populations of distant galaxies that are being seen as they were up to ten billion years ago.
This progress is possible only because of the contingency – in principle, remarkable – that the basic physical laws are comprehensible and apply not just on Earth but also in the remotest galaxies, and not just now but even in the first few seconds of our universe’s expansion. Only in the first millisecond of cosmic expansion, and deep inside black holes, do we confront conditions where the basic physics remains unknown.
Cosmologists are no longer starved of data. Current progress is owed far more to observers and experimentalists than to armchair theorists. But in future there will be armchair ‘observers’. The results of galaxy surveys, detailed ‘maps’ of the sky, and the like will be available electronically to anyone who can access or download them. A far larger community will be able to participate in exploring our cosmic habitat, checking their own ‘hunches’, seeking new patterns, and so forth.
Observations are steadily improving, but our understanding is advancing in a zigzag fashion. There is a sawtooth advance as theories come and go, but the general gradient is upwards. Progress requires more powerful telescopes, and enhanced computer power that permits more realistic simulations.
There are three great frontiers in science: the very big, the very small and the very complex. Cosmology involves them all. Within a few years, the cosmic numbers, λ, Ω and Q should be as well measured as the size and shape of the Earth have been since the eighteenth century. We may by then have solved the problem of the ‘dark matter’.
But it remains a fundamental challenge to understand the very beginning – this must await a ‘final’ theory, perhaps some variant of superstrings. Such a theory would signal the end of an intellectual quest that started with Newton, and continued through Maxwell, Einstein and their successors. It would deepen our understanding of space, time, and the basic forces, as well as elucidating the ultra-early universe and the centres of black holes.
This goal may be unattainable. There could be no ‘final’ theory; or, if there is, it could be beyond our mental powers to grasp it. But even if this goal is reached, that would not be the end of challenging science. As well as being a ‘fundamental’ science, cosmology is also the grandest of the environmental sciences. It aims to understand how a simple ‘fireball’ evolved into the complex cosmic habitat we find around us – how, here on Earth, and perhaps in many biospheres elsewhere, creatures evolved that are able to reflect on how they emerged.
Richard Feynman used a nice analogy to make this point. Imagine you had never seen chess played before; by watching a few games, you could infer the rules. Physicists, likewise, learn the laws and transformations that govern the basic elements of nature. In chess, learning the moves is just a trivial preliminary to the absorbing progression from novice to grand master; by analogy, even if we knew the basic laws, exploring how their consequences have unfolded over cosmic history is an unending quest. Ignorance of quantum gravity, subnuclear physics and the like impedes our understanding of the ‘beginning’. But the difficulties of interpreting the everyday world and the phenomena that astronomers observe stem from their complexity. Everything may be the outcome of processes at the subatomic level, but even if we know the relevant equations governing the microworld, we can’t, in practice, solve them for anything more complex than a single molecule. Moreover, even if we could, the resultant ‘reductionist’ explanation would not be enlightening. To bring meaning to complex phenomena, we introduce new ‘emergent’ concepts. (For example, the turbulence and wetness of liquids, and the textures of solids, arise from the collective behaviour of atoms, and can be ‘reduced’ to atomic physics, but these are important concepts in their own right; so, even more, are ‘symbiosis’, ‘natural selection’, and other biological processes.)
The chess analogy reminds us of something else. There is no chance that our finite observable universe, even though it extends ten billion light-years around us, can ‘play out’ all its potentialities. This is because any estimate of how many different chains of events could happen quickly runs into even vaster numbers than we’ve encountered so far. The number of different chess games, even after only three moves by each player, is about 9 million. There are far more 40-move games than the 10⁷⁸ atoms within our horizon: even if all the material in the universe were constituted into chess boards, most possible games would never be played. And the range of options in a board game is obviously minuscule compared to the variety allowed in nature.
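The arithmetic behind ‘far more 40-move games than atoms’ is easy to reproduce. A sketch, assuming an average of roughly 30 legal moves per turn – a conventional rough figure for chess, not taken from the text:

```python
import math

branching = 30        # assumed average number of legal moves per ply
plies = 80            # a 40-move game = 80 plies (half-moves)
atoms = 10**78        # rough number of atoms within our cosmic horizon

games = branching ** plies   # crude estimate of distinct 40-move games
print(f"games ~ 10^{math.log10(games):.0f}")   # ~ 10^118
print("more games than atoms within the horizon:", games > atoms)
```

Even this deliberately conservative branching factor yields around 10¹¹⁸ games – forty powers of ten beyond the atom count – which is why turning all the matter in the universe into chess boards still wouldn’t come close to exhausting the possibilities.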
Even simple inanimate systems are generally too ‘chaotic’ to be predictable: Newton was actually lucky to find, in planetary orbits, one of the few aspects of nature that are highly predictable! Any biological process involves tremendously more variety – more branch points at every stage as the complexity unfolds – than a game of chess. If there were millions of Earth-like planets in each galaxy that all harboured life, each one would be distinctive. (Far beyond our horizon, however, there could be a literally infinite expanse, where every possible combination of circumstances could occur – and could indeed be replicated infinitely often4.) This perspective should caution us against scientific triumphalism – against exaggerating how much we’ll ever really understand of nature’s intricacies.
A theme of this book has been the intimate links between the microworld and the cosmos, symbolized by the ouroboros (Figure 1.1). Our everyday world, plainly moulded by subatomic forces, also owes its existence to our universe’s well tuned expansion rate, the processes of galaxy formation, the forging of carbon and oxygen in ancient stars, and so forth. A few basic physical laws set the ‘rules’; our emergence from a simple Big Bang was sensitive to six ‘cosmic numbers’. Had these numbers not been ‘well tuned’, the gradual unfolding of layer upon layer of complexity would have been quenched. Are there an infinity of other universes that are ‘badly tuned’, and therefore sterile? Is our entire universe an ‘oasis’ in a multiverse? Or should we seek other reasons for the providential values of our six numbers?