THE FINE-TUNED EXPANSION: DARK MATTER AND Ω
Eternity is very long, especially towards the end.
Woody Allen
THE CRITICAL DENSITY
In about five billion years the Sun will die; and the Earth with it. At about the same time (give or take a billion years) the Andromeda galaxy, our nearest big galactic neighbour, which belongs to the same cluster as our galaxy and which is actually falling towards us, will crash into the Milky Way.
These gross long-range forecasts are reliable because they depend on assuming that basic physics within the Sun, and the force of gravity in stars and galaxies, operate during the next five billion years as they have for the last five to ten billion. Not much of the (more interesting) detail is predictable, however. We can’t be sure that the Earth will still be the third-closest planet to the Sun throughout the next five billion years: even planetary orbits can behave ‘chaotically’ over that expanse of time. And of course the changes on the Earth’s surface, particularly the ever-more-rapid alterations in its biosphere being wrought by our own species, can’t be confidently predicted even for a millionth of that timespan.
The Sun hasn’t even burnt up half its fuel yet. More time lies ahead of it than has elapsed in the entire course of biological evolution. And the galaxy will far outlast the Sun. Even if life were now unique to Earth, there would be abundant time for it to spread through the galaxy and beyond. Manifestations of life and intelligence could eventually affect stars or even galaxies. I forbear to speculate further, not because this line of thought is intrinsically absurd but because it opens up such a variety of conceivable scenarios – many familiar from science fiction – that we can predict nothing. In contrast, long-range forecasts for our entire universe are on surer ground.
Our galaxy will surely end five or six billion years hence in a great crash. But will our universe go on expanding for ever? Will the distant galaxies move ever further away from us? Or could these motions eventually reverse, so that the entire firmament eventually recollapses to a ‘Big Crunch’?
The answer depends on the ‘competition’ between gravity and the expansion energy. Imagine that a large asteroid or a planet were to be shattered into fragments. If the fragments dispersed rapidly enough, they would fly apart for ever. But if the disruption were less violent, gravity might reverse the motions, so that the pieces fell back together again. It’s similar for any large domain within our universe: we know the expansion speed now, but will gravity bring it to a halt? The answer depends on how much stuff is exerting a gravitational pull. The universe will recollapse – gravity eventually defeating the expansion, unless some other force intervenes – if the density exceeds a definite critical value.
We can readily calculate what this critical density is. It amounts to about five atoms in each cubic metre. That doesn’t seem much; indeed, it is far closer to a perfect vacuum than experimenters on Earth could ever achieve. But the universe actually seems to be emptier still.1
Suppose our star, the Sun, were modelled by an orange. The Earth would then be a millimetre-sized grain twenty metres away, orbiting around it. Depicted to the same scale, the nearest stars would be 10,000 kilometres away: that’s how thinly spread the matter is in a galaxy like ours. But galaxies are, of course, especially high concentrations of stars. If all the stars from all the galaxies were dispersed through intergalactic space, then each star would be several hundred times further from its nearest neighbour than it actually is within a typical galaxy – in our scale model, each orange would then be millions of kilometres from its nearest neighbours.
If all the stars were dismantled and their atoms spread uniformly through our universe, we’d end up with just one atom in every ten cubic metres. There is about as much again (but seemingly no more) in the form of diffuse gas between the galaxies. That’s a total of 0.2 atoms per cubic metre, twenty-five times less than the critical density of five atoms per cubic metre that would be needed for gravity to bring cosmic expansion to a halt.
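The critical density quoted above follows from a simple balance of expansion energy against gravity (the Friedmann equation): ρ_crit = 3H²/8πG. A rough check in Python, assuming a Hubble constant of about 65 km per second per megaparsec (a representative value; the text itself quotes no figure):

```python
import math

G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
m_H = 1.67e-27     # mass of a hydrogen atom, kg
Mpc = 3.086e22     # one megaparsec in metres

H0 = 65e3 / Mpc    # assumed Hubble constant (~65 km/s per Mpc), in s^-1

# Critical density: rho_crit = 3 H0^2 / (8 pi G)
rho_crit = 3 * H0**2 / (8 * math.pi * G)
atoms_per_m3 = rho_crit / m_H    # expressed as hydrogen atoms per cubic metre

# Fraction of the critical density supplied by the observed 0.2 atoms per cubic metre
fraction_of_critical = 0.2 / atoms_per_m3
```

With these inputs the critical density comes out close to five hydrogen atoms per cubic metre, and the observed 0.2 atoms per cubic metre supplies about a twenty-fifth of it, as the text says.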
HOW MUCH DARK MATTER?
The ratio of the actual density to the critical density is a crucial number. Cosmologists denote it by the Greek letter Ω (omega). The fate of the universe depends on whether or not Ω exceeds one. At first sight our estimate of the actual average concentration of atoms in space seems to imply that Ω is only 1/25 (or 0.04), portending perpetual expansion, by a wide margin. But we should not jump too soon to that conclusion. We’ve come to realize in the last twenty years that there’s a lot more in the universe than we actually see, such unseen material consisting mainly of ‘dark stuff’ of unknown nature. The things that shine – galaxies, stars and glowing gas clouds – are a small and atypical fraction of what is actually there, rather as the most conspicuous things in our terrestrial sky are cloud patterns, which are actually insubstantial vapours floating in the much denser clear air. Most of the material in the universe, and the main contributor to Ω, emits no light, nor infrared heat, nor radio waves, nor any other kind of radiation, and is consequently hard to detect.
The cumulative evidence for dark matter is now almost uncontestable. The way stars and galaxies are moving suggests that something invisible must be exerting a gravitational pull on them. This is the same line of argument by which we infer the existence of a black hole when a star is seen to be orbiting around an invisible companion; it’s also the reasoning used in the nineteenth century when the planet Neptune was inferred to exist because the orbit of Uranus was deviated by the pull of a more distant unseen object.
In our Solar System, there is a balance between the tendency of gravity to make the planets fall towards the Sun, and the centrifugal effect of the orbital motions. Likewise, on the far bigger scale of an entire galaxy, there is a balance between gravity, which tends to pull everything together into the centre, and the disruptive effects of motion, which, if gravity didn’t act, would make its constituent stars disperse. Dark matter is inferred to exist because the observed motions are surprisingly fast – too fast to be balanced just by the gravity of the stars and gas that we see.
We know how fast our Sun is circling around the central ‘hub’ of our galaxy; and we can measure the speeds of stars and gas clouds in other galaxies. These speeds, especially those of ‘outliers’ orbiting beyond most of the stars, are puzzlingly high. If the outermost gas and stars were feeling just the gravitational pull of what we can see, they should be escaping, just as Neptune and Pluto would escape from the Sun’s influence if they were moving as fast as the Earth does. These high observed speeds tell us that a heavy invisible halo surrounds big galaxies – just as, if Pluto were moving as fast as the Earth (but were still in orbit rather than escaping), we would have to infer a heavy invisible shell outside the Earth’s orbit but inside Pluto’s.
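The balance described above can be made quantitative. For a circular orbit, equating the centrifugal acceleration v²/r to the gravitational pull GM/r² gives the mass enclosed within the orbit: M = v²r/G. A sketch with round numbers for the Sun's own orbit (the figures below are conventional estimates, not taken from the text):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_sun = 2.0e30    # mass of the Sun, kg
kpc = 3.086e19    # one kiloparsec in metres

v = 220e3         # Sun's orbital speed around the galactic centre, m/s (assumed)
r = 8.5 * kpc     # Sun's distance from the galactic centre (assumed)

# Mass enclosed within the Sun's orbit, from v^2 / r = G M / r^2
M_enclosed = v**2 * r / G
M_in_suns = M_enclosed / M_sun   # roughly 10^11 solar masses
```

Because the measured orbital speeds in other galaxies stay roughly this high far beyond the visible stars, the enclosed mass must keep growing with radius – which is just the invisible halo the text infers.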
If there weren’t a lot of dark stuff, galaxies would not be stable but would fly apart. The beautiful pictures of discs or spirals portray what is essentially just ‘luminous sediment’ held in the gravitational clutch of vast swarms of invisible objects of quite unknown nature. Galaxies are ten times bigger and heavier than we used to think. The same argument applies, on a larger scale, to entire clusters of galaxies, each millions of light-years across. To hold them together requires the gravitational pull of about ten times more material than we actually see.
There is, of course, one assumption underlying these inferences of ‘dark matter’, namely that we know the force of gravity exerted by the objects we see. The internal motions within galaxies and clusters are slow compared with the speed of light, and so there are no ‘relativistic’ complications; we therefore just use Newton’s inverse-square law, which tells us that if you move twice as far away from any mass then the force gets four times weaker. Some sceptics remind us that this law has only really been tested within our Solar System; it is plainly a leap of faith to apply it on scales a hundred million times larger. Indeed, we’ve now got tantalizing clues (see Chapter 10) that, on the scale of the entire universe, gravity is perhaps overwhelmed by another force that causes repulsion rather than attraction.
We should keep our minds open (or at least ajar) to the possibility that our ideas on gravity need reappraisal. If the force exerted at large distances were stronger than we would infer by extrapolating the inverse-square law – if it weren’t four times weaker at twice the range – then clearly the case for dark matter would need rethinking. But we shouldn’t abandon our theory of gravity without a struggle. We might be tempted to do so if there were no conceivable candidates for dark matter. However, there seem to be many options; only if these can all be ruled out should we, in my opinion, be prepared to jettison Newton and Einstein.2
There are other tell-tale signs of abundant ‘dark matter’. All gravitating material, whether luminous or ‘dark’, deflects light rays, and so clusters can be ‘weighed’ by detecting how strongly they deviate the paths of light rays passing through them. Indeed, the deflection of starlight by the Sun’s gravity, observed by Eddington and others during the 1919 total eclipse, famously offered an early test of relativity that propelled Einstein to world-wide celebrity. The Hubble Space Telescope has taken spectacular pictures of some clusters of galaxies lying about a billion light-years away. The pictures reveal a lot of faint streaks and arcs: each is a remote galaxy, several times further away than the cluster itself, whose image is, as it were, viewed through a distorting lens. Just as a regular pattern on background wallpaper looks streaky and distorted when viewed through a curved sheet of glass, the cluster acts like a ‘lens’ that focuses light passing through it. The visible galaxies in the cluster, all added together, aren’t heavy enough to produce so much distortion. To bend the light so much, and cause such conspicuous distortion in the images of background galaxies, the cluster must contain ten times more mass than we see. These huge natural lenses offer a bonus to astronomers interested in how galaxies evolve, because they bring into view very remote galaxies that would otherwise be too faint to be seen.
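The cluster 'weighing' rests on Einstein's light-deflection formula. For a lens of mass M the characteristic angular scale of the distortion is the Einstein radius, θ_E = √(4GM·D_ls/(c²·D_l·D_s)). Here is a rough sketch with invented but representative numbers – a cluster of 10¹⁴ solar masses a billion light-years away, lensing a galaxy twice as distant. (The simple distance subtraction below ignores cosmological subtleties, but is adequate for an order-of-magnitude estimate.)

```python
import math

G = 6.674e-11
c = 3.0e8
M_sun = 2.0e30
Gly = 9.46e24            # one billion light-years in metres

M = 1e14 * M_sun         # cluster mass (illustrative assumption)
D_l = 1.0 * Gly          # distance to the lensing cluster (assumed)
D_s = 2.0 * Gly          # distance to the background galaxy (assumed)
D_ls = D_s - D_l         # lens-to-source distance (rough approximation)

# Einstein radius of the cluster acting as a lens
theta_E = math.sqrt(4 * G * M * D_ls / (c**2 * D_l * D_s))
arcsec = theta_E * (180 / math.pi) * 3600   # convert radians to arcseconds
```

This gives an angle of a few tens of arcseconds – the scale at which the streaks and arcs are indeed observed. With only the visible mass, ten times less, the angle would shrink by √10 and the distortion would be far less conspicuous.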
We shouldn’t really have been surprised to discover that dark matter, amounting to about ten times what we see, is the dominant gravitational influence on the cosmos. There’s nothing implausible about dark matter per se: why should everything in the universe be shining? The challenge is to narrow down the range of candidates.
WHAT CAN THE DARK MATTER BE?
The inferred dark matter emits no light – indeed no radiation of any kind that we can detect. Nor does it absorb or scatter light. This means that it cannot be made of dust. We know that there is some dust in our galaxy, because starlight is scattered and attenuated by intervening clouds that are pervaded by tiny grains, rather like those that produce the haze from tobacco smoke. But if the grains cumulatively weighed enough to make up all the dark matter, they would black out our view of any distant stars.
Small faint stars are obvious suspects for the dark matter. Stars below eight per cent of the Sun’s mass are called ‘brown dwarfs’. They wouldn’t be squeezed hot enough to ignite the nuclear fuel that keeps ordinary stars shining. Brown dwarfs definitely exist: some have been found as a by-product of searches for planets in orbit around brighter stars; others, especially nearby, have been detected by their very faint emission of red light. How many brown dwarfs might we expect altogether? Theory offers little guidance. The proportions of big and small stars are determined by very complicated processes that aren’t yet understood. Not even the most powerful computers can tell us what happens when an interstellar cloud condenses into a population of stars; the processes are currently intractable, for the same reasons that weather prediction is so very difficult.
Individual brown dwarfs can be revealed by gravitational lensing. If one of them were to pass in front of a bright star, then the brown dwarf’s gravity would focus the light, causing the bright star to appear magnified. As a consequence, a star would brighten up and fade in a distinctive way if a brown dwarf passed in front of it. This requires very precise alignment, and such events would consequently be very rare, even if there were enough brown dwarfs to make up all the dark matter in our galaxy. However, astronomers have carried out ambitious searches for these ‘microlensing’ events (called ‘micro’ to distinguish the phenomenon from the lensing by entire clusters of galaxies, as already mentioned). Millions of stars are monitored repeatedly in order to pick out those whose brightness changes from night to night. Many stars vary for all kinds of intrinsic reasons: some pulsate, some undergo flares, and some are orbiting around binary companions. The searches have found many thousands of these (which are interesting to some astronomers, though a tiresome complication for the microlensing searches). Occasionally, stars have been found to display the distinctive rise and fall in brightness that would be expected if an unseen mass had crossed in front of them and focused their light. It still isn’t clear whether there are enough of these events to implicate a new ‘brown dwarf’ population, or whether ordinary faint stars, passing in front of brighter ones, are common enough to account for the events recorded.
There are several other candidates for dark matter. Cold ‘planets’ moving through interstellar space, unattached to any star, could exist in vast numbers without being detected; so could comet-like lumps of frozen hydrogen; so could black holes.
THE CASE FOR EXOTIC PARTICLES
Brown dwarfs or comets (or even black holes, if they are the remnants of dead stars) are, however, suspected to be only a minor constituent of the dark matter. This is because there are strong reasons for suspecting that dark matter isn’t made of ordinary atoms at all. This argument is based on deuterium (heavy hydrogen).
As mentioned in the last chapter, any deuterium that we observe must have been made in the Big Bang, not in stars. The actual amount in our universe was, until recently, uncertain. But astronomers have detected the spectral imprint of deuterium, distinguishing it from ordinary hydrogen, in the light received from very distant galaxies. This measurement has needed the light-collecting power of new telescopes with ten-metre-diameter mirrors. The observed abundance is just a trace – only one atom in 50,000 is a deuterium atom. The proportion that should emerge from the Big Bang depends on how dense the universe is, and observations agree with theory if there are 0.2 hydrogen atoms in each cubic metre. This accords quite well with the actual number of atoms in objects that shine – half in galaxies, and the other half in intergalactic gas – but nothing much is then left over for the dark matter.
If there were enough atoms to make up all the dark matter – which would imply at least five (and perhaps ten) times more than we actually see – the concordance with theory would be shattered. The Big Bang calculations would then predict even less deuterium, and somewhat more helium, than we actually observe: the origin of the deuterium in the universe would then become a mystery. This tells us something very important: the atoms in the universe, with a density of 0.2 per cubic metre, contribute only four per cent of the critical density, and the dominant dark matter is made of something that is inert as far as nuclear reactions are concerned. Exotic particles – not anything made of ordinary atoms at all – make the main contribution to Ω.3
The elusive particles called neutrinos are one option. They have no electric charge, and hardly interact at all with ordinary atoms: almost all neutrinos that hit the Earth go straight through. During the very first second after the Big Bang, when the temperature exceeded ten billion degrees, everything was so compressed that the reactions converting photons (quanta of radiation) into neutrinos would have been fast enough to come into balance. In consequence, the number of neutrinos left over from the ‘cosmic fireball’ should be linked to the number of photons. One can calculate, using physics that is quite standard and uncontroversial, that there should be 3/11 as many neutrinos of each species as there are photons. There are now 412 million photons per cubic metre in the radiation left over from the Big Bang; with three different species of neutrinos, there would be about 113 million of each species in every cubic metre – in other words, hundreds of millions of neutrinos for every atom in the universe. It is, of course, the heaviest of the three species that is important in the dark matter context.
Because neutrinos so greatly outnumber atoms, they could be the dominant dark matter even if each weighed only a hundred-millionth as much as an atom. Before the 1980s, almost everyone believed neutrinos were ‘zero rest-mass’ particles; they would then carry energy and move at the speed of light, but their gravitational effects would be unimportant. (Likewise, the photons left over from the early universe, now detected as the microwave background radiation, don’t now exert any significant gravitational effects.) But it now seems that neutrinos may weigh something, even though it is a very tiny amount indeed.
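Both numerical claims here are easy to verify: 3/11 of the photon density gives the abundance of each neutrino species, and dividing the critical density (taken below as about 8 × 10⁻²⁷ kg per cubic metre, i.e. the five hydrogen atoms quoted earlier) by the total neutrino count gives the mass each neutrino would need in order to dominate Ω:

```python
n_photons = 412e6                     # relic photons per cubic metre
n_per_species = (3 / 11) * n_photons  # relic neutrinos of each species per m^3
n_total = 3 * n_per_species           # all three species together

rho_crit = 8e-27    # critical density in kg per m^3 (assumed, ~5 hydrogen atoms)
m_H = 1.67e-27      # mass of a hydrogen atom, kg

# Mass each neutrino would need for neutrinos alone to reach the critical density
m_needed = rho_crit / n_total
fraction_of_atom = m_needed / m_H
```

The per-species density comes out at about 112 million per cubic metre, and the required mass at roughly a hundred-millionth of a hydrogen atom – both as stated in the text.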
The best evidence for neutrino masses comes from the Super-Kamiokande experiment in Japan, using a huge tank in a former zinc mine. The experimenters studied neutrinos that come from the Sun (where they are a by-product of the nuclear reactions in the central core), as well as others that are produced by very fast particles (‘cosmic rays’) impacting on the Earth’s upper atmosphere. The experiments imply a non-zero mass, but one that is probably too small to render neutrinos important for the dark matter.4 This is, nonetheless, a pivotal discovery about neutrinos themselves. At first sight it makes the microworld seem more complicated, but the masses may offer extra clues to the relation between neutrinos and other particles.
At least we know that neutrinos exist, although we don’t yet know their exact masses. But there is a long list of hypothetical particles that might exist, and (if so) could have survived from the Big Bang in sufficient numbers to provide the dominant contribution to Ω. There are no very convincing arguments about how heavy each particle might be: best guesses suggest a hundred times as much as a hydrogen atom. If there were enough such particles to make up all the dark matter in our galaxy, there would be several thousand per cubic metre in the neighbourhood of the Sun; they would be moving at about the same speed as the average star in our galaxy – maybe 300 kilometres per second.
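The 'several thousand per cubic metre' follows from dividing an estimate of the local dark-matter density by the guessed particle mass. Neither input comes from the text's own data; the halo density used below (about 0.3 GeV per cubic centimetre) is a conventional figure from galactic dynamics:

```python
GeV_in_kg = 1.78e-27   # 1 GeV/c^2 expressed in kilograms
m_H_GeV = 0.94         # hydrogen-atom mass in GeV

# Assumed local dark-matter density, ~0.3 GeV per cm^3, converted to kg per m^3
rho_local = 0.3 * GeV_in_kg / 1e-6

# A particle about a hundred times heavier than a hydrogen atom (best guess)
m_particle = 100 * m_H_GeV * GeV_in_kg

n_local = rho_local / m_particle   # particles per cubic metre near the Sun
```

This yields a few thousand particles in every cubic metre of our neighbourhood, consistent with the figure quoted.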
These particles, heavy but electrically neutral, would generally, like neutrinos, go straight through the Earth. However, a tiny proportion are likely to interact with an atom in the material they pass through. There would be only a few collisions per day within each of us (even though our bodies each contain nearly 10²⁹ atoms). We ourselves clearly feel nothing. However, very sensitive experiments can detect the minuscule ‘kick’, or recoil, when such an impact happens in a lump of silicon or similar material. The detectors must be cooled to a very low temperature and placed deep underground (for instance, they are set up in a mine in Yorkshire, and in a tunnel under an Italian mountain) so as to reduce the confusion from other kinds of event that could drown out any genuine signal from dark-matter impacts.
Several groups of physicists have taken up the challenge of this ‘underground astronomy’. It’s delicate and tedious work, but if they succeed, they will not only find out what our universe is mainly made of but as a bonus they may discover an important new kind of particle. Only an extreme optimist would bet more than evens on success. This is because, at the moment, we have no theory that tells us what the dark-matter particles are and it’s therefore hard to focus the search optimally.5
Many other candidates for dark matter are currently being considered. Some theorists favour a type of even lighter particle called an axion. Others suspect that the particles could be a billion times heavier than those currently being searched for (in which case there would be a billion times fewer, making detection even harder). Or they could be more exotic still – for instance, atom-sized black holes made in the ultra-high pressures of the early universe.
NARROWING DOWN THE OPTIONS
Some options for the dark matter can be ruled out; serious searches for other candidates, by a variety of techniques, are under way. Gravitational microlensing may detect enough faint stars or black holes. Experimenters at the bottom of mineshafts may detect some new kind of particle that pervades our galactic halo. Even negative results can sometimes be interesting because they exclude some tenable options.
There may well be several different kinds of dark matter. It would, for instance, be surprising if there weren’t some brown dwarfs and black holes. However, exotic particles seem far more likely, because of the evidence from deuterium that most dark matter isn’t made up of ordinary atoms.
It’s embarrassing that more than ninety per cent of the universe remains unaccounted for – even worse when we realize that the dark matter could be made up of entities with masses ranging from 10⁻³³ grams (neutrinos) up to 10³⁹ grams (heavy black holes), an uncertainty of more than seventy powers of ten. This key issue may yield to a three-pronged attack:
1. The entities making up the dark matter may be directly detectable. Brown dwarfs may cause gravitational lensing of stars. If the dark matter in our galaxy is a swarm of particles, some of these might be detected by intrepid experimenters deep underground. I’m optimistic that if I were writing in five years’ time, I would be able to report what the dark matter is.
2. Experimenters and theorists are already telling us more about neutrinos. It’s possible (though it now seems unlikely) that neutrinos have enough mass to be an important dark-matter constituent. When the physics of extreme energies and densities is better understood, we should know what other kinds of particles might once have existed, and be able to calculate how these particles would have survived from the first millisecond of the universe just as confidently as we can now predict the amount of helium and deuterium surviving from the first three minutes.
3. Dark matter dominates galaxies. When and how galaxies formed, and the way that they are clustered, plainly depends on what their gravitationally dominant constituent is and how it behaves as the universe expands. We can make different guesses about the dark matter, calculate the outcome of each, and see which outcome most resembles what we actually observe. Such calculations (described in Chapter 8) can offer indirect clues to what the dark matter is.
WHY MATTER AND NOT ANTIMATTER?
We don’t know yet what types of particle might have existed in the ultra-early phases of the universe, nor how many survive. If, as I believe, the main contribution to Ω comes from new kinds of particle, our cosmic modesty may have to go a stage further. We are used to the post-Copernican idea that we don’t occupy a special central place in the cosmos, but we must now abandon ‘particle chauvinism’ as well. The atoms that comprise our bodies and that make up all visible stars and galaxies are mere trace-constituents of a universe whose large-scale structure is controlled by some quite different (and invisible) substance. We see, as it were, just the white foam on the wave-crests, not the massive waves themselves. We must envisage our cosmic habitat as a dark place, made mainly of quite unknown material.
Ordinary atoms seem to be a ‘minority’ constituent of the universe, swamped by quite different kinds of particles surviving from the initial instants of the Big Bang. But it is actually more of a puzzle to understand why there are any atoms – why our universe isn’t solely composed of dark matter.
To every kind of particle there is a corresponding antiparticle. There are protons (made up of three so-called ‘quarks’) and antiprotons (made up of three antiquarks); the ‘anti’ of an electron is a positron. Antiparticles annihilate when they encounter ordinary particles, converting their energy (mc²) into radiation. No antimatter exists in bulk anywhere in or on the Earth. Tiny amounts can be made in accelerators, where particles are crashed together with sufficient energy to make extra particle-antiparticle pairs. Antimatter would be the ideal rocket fuel. When it annihilates, its entire rest-mass energy is released, compared with the fraction ε = 0.007 for rockets powered by nuclear fusion. Antimatter can survive only if ‘quarantined’ from ordinary matter; otherwise it betrays itself by generating intense gamma rays when it annihilates. We can be sure that our entire galaxy – all its constituent stars and gas – is matter rather than antimatter: its content is constantly being churned up and recycled by stellar births and deaths, and had it started off half matter and half antimatter there would by now be nothing left. But on much larger scales the mixing would be less efficient: we can’t, for instance, refute the conjecture that ‘superclusters’ of galaxies consist alternately of matter and antimatter. So why is there a seeming bias in favour of one kind of matter?
There are 10⁷⁸ atoms within our observable universe (mainly hydrogen atoms, each composed of a proton and an electron), but there do not seem to be so many antiatoms. The simplest universe, one might imagine, would have started off with particles and antiparticles mixed up in equal numbers. Our universe luckily wasn’t like that. If it had been, then all protons would have annihilated with antiprotons during the dense early stages; it would have ended up full of radiation and dark matter but containing no atoms, no stars and no galaxies.
Why this asymmetry? The full 10⁷⁸ excess could have been there right from the beginning, but this seems an unnaturally large number to accept as simply a part of the ‘initial conditions’. The Russian physicist Andrei Sakharov is most widely famed for his role in developing the H-bomb, and later as a leading dissident in the final years of the Soviet Union; but he also contributed prescient ideas to cosmology. In 1967 he explored whether, during the cooling immediately after the Big Bang, a small asymmetry might favour particles over their antiparticles. This imbalance could create a slight excess of quarks over antiquarks (which would later translate into an excess of protons over antiprotons).
Sakharov’s idea obviously requires some departure from perfect symmetry between the behaviour of matter and antimatter. Evidence for such an effect – a big surprise at the time – came in 1964 from two American physicists, James Cronin and Val Fitch, who were studying the decays of an unstable particle called a K°. They found that this particle and its antiparticle weren’t perfect mirror images of each other, but decayed at slightly different rates; some slight asymmetry was built in to the laws governing the decays. (This means, incidentally, that if we achieved contact with an ‘alien’ physicist who could report experiments done on another galaxy, we could tell whether that physicist was made of matter or antimatter – something that it would be prudent to check before planning a rendezvous!) The K° decay involves only the so-called ‘weak’ force (which governs radioactivity and neutrinos) and not the strong nuclear force. In a unified theory of the forces, however, this type of asymmetry would ‘carry over’ from one force to the other, offering a basis for Sakharov’s idea.
Suppose that, for every 10⁹ quark-antiquark pairs, such an asymmetry had led to one extra quark. As the universe cooled, antiquarks would all annihilate with quarks, eventually giving quanta of radiation. This radiation, now cooled to very low energies, constitutes the 2.7-degree background heat pervading intergalactic space. But for every billion quarks that were annihilated with antiquarks, one would survive because it couldn’t find a partner to annihilate with. There are indeed more than a billion times more radiation quanta (photons) in the universe than there are protons (412 million photons in each cubic metre, compared with about 0.2 protons). So all the atoms in the universe could result from a tiny bias in favour of matter over antimatter. We, and the visible universe around us, may exist only because of a difference in the ninth decimal place between the numbers of quarks and of antiquarks.
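The photon-to-proton ratio underlying the 'ninth decimal place' can be read straight off the two densities just quoted:

```python
n_photons = 412e6   # photons per cubic metre in the microwave background
n_protons = 0.2     # protons per cubic metre (atoms in stars plus intergalactic gas)

# Surviving protons are outnumbered by relic photons roughly two-billion-fold,
# matching one extra quark per ~10^9 quark-antiquark pairs
photons_per_proton = n_photons / n_protons
```

So each surviving proton is accompanied by about two billion relic photons – a survival rate of order one part in a billion, just as the asymmetry argument requires.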
Our universe contains atoms and not antiatoms because of a slight ‘favouritism’ that prevailed at some very early stage. This implies, of course, that a proton (or its constituent quarks) can sometimes appear or disappear without the same thing happening to an antiproton. There is a contrast here with net electrical charge: this is exactly conserved, so that if our universe started off uncharged, there would always be an exact cancellation between positive and negative charges.
Atoms don’t live for ever, although the decay rate appears to be incredibly low: a best guess for an atom’s lifetime might be about 10³⁵ years. This would mean that, on average, only about one atom would decay every couple of centuries within a tank containing a thousand tons of water. Experiments in the same large underground tanks that are used to detect neutrinos cannot quite reach this sensitivity, but already tell us that the lifetime is at least 10³³ years.
In the remote future, all stars will turn into cold white dwarfs, neutron stars or black holes. But the white dwarfs and neutron stars will themselves erode away as their constituent atoms decay. If this erosion took 10³⁵ years, the heat generated by the prolonged decay would make each star radiate about as much as a household electric heater. These feeble emitters would be the prime source of warmth (except for occasional flashes following stellar collisions) in the remote future, when all stars had exhausted their nuclear energy.
THE TUNING OF THE INITIAL EXPANSION
Ω may not be exactly one, but it is now at least 0.3. At first sight, this may not seem to indicate fine tuning. However, it implies that Ω was very close indeed to unity in early eras. This is because, unless expansion energy and gravitational energy are in exact balance (in which case Ω is, and remains, exactly equal to unity), the gap between those two energies widens: if Ω were to start off slightly less than unity in the early universe, eventually the kinetic energy would completely dominate (so that Ω becomes very small indeed); on the other hand, if Ω were substantially more than unity, then gravity would soon get the upper hand and bring the expansion to a halt.
The range of ‘trajectories’ for our actual universe, consistent with what the dark-matter evidence tells us about the present value of Ω, is shown in Figure 6.1. The figure also depicts some universes in which life as we know it couldn’t have emerged. It highlights a basic mystery: Why is our universe still, after ten billion years, expanding with a value of Ω not too different from unity?
There are, as we’ve seen in the last chapter, good grounds for extrapolating back to when the universe was one second old and at a temperature of ten billion degrees. Suppose that you were ‘setting up’ a universe then. The trajectory it would follow would depend on the impetus it was given. If it were started too fast, then the expansion energy would, early on, have become so dominant (in other words, Ω would have become so small) that galaxies and stars would never have been able to pull themselves together via gravity and condense out; the universe would expand for ever, but there would be no chance of life. On the other hand, the expansion must not have been too slow: otherwise the universe would have recollapsed too quickly to a Big Crunch.
Figure 6.1 This diagram indicates various trajectories for possible universes. Despite the uncertainty in the present value of Ω, the initial conditions must have been tuned with remarkable precision in order for our universe to end up in the permitted range. Without this tuning, the expansion would either have been so fast that no galaxies could form, or so slow that the universe recollapsed before there was time for any interesting evolution. Explanations for this tuning are discussed in Chapter 9.
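Trajectories of the kind shown in Figure 6.1 can be generated with a few lines of numerical integration. The toy model below is a sketch under simplifying assumptions of my own (a Newtonian, matter-only universe in units where the gravitating mass term is fixed; the function names, step sizes, and energy values are illustrative): a universe given too little impetus reaches a maximum size and recollapses, while one given enough expands for ever.

```python
# Toy trajectories for a Newtonian, matter-only universe:
#   (da/dt)^2 = 1/a + E,  hence  d^2a/dt^2 = -1/(2 a^2),
# where E is the conserved excess of expansion energy over
# gravitational energy: E < 0 means eventual recollapse,
# E >= 0 means perpetual expansion. All names and numbers
# here are illustrative assumptions, not from the text.

def evolve(E, dt=1e-3, t_max=20.0):
    """Integrate a(t) from a = 1 with velocity-Verlet steps.
    Returns (largest size reached, final size, recollapsed?)."""
    a = 1.0
    v = (1.0 / a + E) ** 0.5          # initial expansion speed
    acc = lambda x: -0.5 / x**2       # gravitational deceleration
    t, a_max = 0.0, a
    while t < t_max:
        v_half = v + 0.5 * dt * acc(a)
        a += dt * v_half
        if a < 0.1:                   # crunched back down
            return a_max, a, True
        v = v_half + 0.5 * dt * acc(a)
        a_max = max(a_max, a)
        t += dt
    return a_max, a, False

print(evolve(-0.5))   # bound: grows to about 2, then recollapses
print(evolve(+0.5))   # unbound: still expanding when time runs out
```

The bound case is the stone that falls back down the well; the unbound case is the stone thrown hard enough to escape. Only a narrow band of impetus in between produces a universe that, like ours, is still expanding with Ω near unity after a long time.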
Any emergent complexity must feed on non-uniformities in density and in temperature (our own biosphere, for example, energizes itself by absorbing the Sun’s ‘hot’ radiation and re-emitting it into cold interstellar space). Without being to the slightest degree anthropocentric in our concept of life, we can therefore conclude that a universe has to expand out of its ‘fireball’ state, and at least cool down below 3000 degrees, before any life can begin. If the initial expansion were too slow to permit this, there would be no chance for life.
In this perspective, it looks surprising that our universe was initiated with a very finely tuned impetus, almost exactly enough to balance the decelerating tendency of gravity. It’s like sitting at the bottom of a well and throwing a stone up so that it just comes to a halt exactly at the top – the required precision is astonishing: at one second after the Big Bang, Ω cannot have differed from unity by more than one part in a million billion (one in 10¹⁵) in order that the universe should now, after ten billion years, be still expanding and with a value of Ω that has certainly not departed wildly from unity.
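The one-in-10¹⁵ figure can be roughly recovered with back-of-envelope scaling. In the radiation era the deviation (1/Ω − 1) grows as the square of the scale factor, and in the later matter era in direct proportion to it, with the scale factor inversely proportional to temperature. The temperatures below (10¹⁰ degrees at one second, a rough 10⁴ degrees when matter takes over, about 3 degrees now) are standard round numbers I am assuming for the sketch, not all taken from this passage.

```python
# Back-of-envelope check of the required tuning at t = 1 second.
# Scaling assumptions (mine, not the author's): in the radiation
# era (1/Omega - 1) grows like a^2, in the matter era like a, and
# the scale factor a is inversely proportional to temperature.

T_ONE_SECOND = 1e10   # degrees at one second, from the text
T_EQUALITY   = 1e4    # rough temperature where matter takes over (assumed)
T_TODAY      = 3.0    # present background temperature (assumed)

growth_radiation = (T_ONE_SECOND / T_EQUALITY) ** 2   # a^2 scaling
growth_matter    = T_EQUALITY / T_TODAY               # a scaling
total_growth     = growth_radiation * growth_matter

# If Omega today is at least 0.3, the deviation (1/Omega - 1) is
# at most about 2.3 now, so at one second it had to be tiny:
max_deviation_then = (1 / 0.3 - 1) / total_growth
print(f"total growth factor: {total_growth:.1e}")
print(f"|Omega - 1| at one second: at most ~{max_deviation_then:.1e}")
```

The deviation comes out at a few parts in 10¹⁶ – that is, of order one part in 10¹⁵, consistent with the precision quoted above.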
We have already noted that any complex cosmos must incorporate a ‘large number’ N reflecting the weakness of gravity, and must also have a value of ε that allows nuclear and chemical processes to take place. But these conditions, though necessary, are not sufficient. Only a universe with a ‘finely tuned’ expansion rate can provide the arena for these processes to unfold. So Ω must be added to our list of crucial numbers. It had to be tuned amazingly close to unity in the early universe. If the expansion were too fast, gravity could never pull regions together to make stars or galaxies; if the initial impetus were insufficient, a premature Big Crunch would quench evolution when it had barely begun.
Cosmologists react to this ‘tuning’ in different ways. The most common reaction seems, at first sight, perverse. This is to argue that because our early universe was set up with Ω very close to unity, there must be some deep reason why it is exactly one; in other words, because the ‘tuning’ is very precise, it must be absolutely perfect. This odd-looking style of reasoning has actually served well in other contexts; for instance, we know that in a hydrogen atom, the positive electric charge on the proton is cancelled by the negative charge on the orbiting electron, to immense precision – better than one part in 10²¹. No measurement can, however, tell us that the net charge on an atom is exactly zero: there is always some margin of error. So-called ‘grand unified theories’, which interrelate electrical forces with nuclear forces, have, within the last twenty years, suggested a deep reason why the cancellation is exact. However, most physicists even fifty years ago would have guessed that the cancellation was exact, even though there weren’t then any convincing arguments.
Another surprise is that the expansion rate (the Hubble constant) is the same in all directions: it can be described by a single ‘scale factor’, depicting the lengthening of the rods in Escher’s lattice – see Figure 5.1. We could easily imagine a universe where the stretching was faster in some directions than in others. A less uniform universe would seem to have more options open to it. Why, when we observe remote regions in opposite directions, do they look so similar and synchronized? Or why is the temperature of the background radiation, which has not been scattered since the temperature was 3000 degrees, almost the same all over the sky? As we shall see in Chapter 9, there is an attractive explanation – invoking a so-called ‘inflationary phase’ – for these features of our universe, and for the fine tuning of Ω in the early universe.