CHAPTER 7

THE NUMBER λ: IS COSMIC EXPANSION SLOWING OR SPEEDING?

                The universe may

                Be as large as they say.

                But it wouldn’t be missed

                If it didn’t exist.

Piet Hein

SEEING BACK INTO THE PAST

Our universe contains more mass in dark matter than in ordinary atoms. But is there enough to provide the full ‘critical density’ – to make Ω exactly equal to unity? The inferred amount within galaxies and clusters of galaxies falls short of this. However, dark matter uniformly spread through the universe would not influence the internal motions within clusters, nor the light-bending due to clusters, which magnifies and distorts the images of very distant galaxies. It would therefore be even more elusive. The extra material would only betray its presence by affecting the overall cosmic expansion. Can we, therefore, discover how the expansion rate is changing?

This is certainly possible in principle. The redshift of a distant object tells us how it was moving when its light set out, as opposed to how it is moving now. By observing the redshifts and distances of a remote population of galaxies (or any other type of object) we can therefore infer the expansion rate at an earlier era. Comparison with the present rate then tells us how much (if at all) the expansion rate has been changing.
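
Expressed a little more formally (in notation that goes beyond anything else in this chapter, and only as a rough sketch valid for modest redshifts), the comparison is often phrased in terms of a ‘deceleration parameter’, written q₀, which measures how quickly the expansion is slowing today. A galaxy’s distance d and redshift z are then related roughly as

    d \simeq \frac{c}{H_0}\Bigl[\,z + \tfrac{1}{2}\,(1 - q_0)\,z^{2} + \dots\Bigr]

where H₀ is the present expansion rate. Measuring distances and redshifts accurately enough to pin down the z² term is, in effect, measuring how much the expansion rate has changed.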

Any change in the expansion rate would be so gradual that it would only show up over a ‘baseline’ of several billion years, so there is no hope of detecting it unless we can observe objects several billion light-years away. This isn’t in itself an impediment, because superbly instrumented telescopes with ten-metre mirrors are now probing back to when the universe was no more than a tenth of its present age. More serious is the problem of finding distant objects that are sufficiently standardized, and allowing for the possibility that they look intrinsically different from their nearby counterparts because they are being observed at an earlier stage in their evolution.

The easiest objects to detect at high redshifts are ‘quasars’, the hyperactive centres of galaxies. They are very far from being ‘standard candles’: quasars with similar redshifts (in other words, at similar distances) display a wide range of apparent brightness. Even worse, they are so poorly understood that we do not know how their intrinsic properties might change as the universe gets older.

Galaxies themselves are somewhat better understood than quasars (though not as luminous), and we can now see them out to equally large redshifts, but here too there are problems. There is a whole zoo of different types, which are hard to classify. And they evolve as they age. They do this for several reasons: the existing stars evolve and die; new stars form from gas; or stars are added to the galaxy because it captures smaller neighbours (this is called ‘galactic cannibalism’).

Galaxies are too complicated, too varied and still too poorly understood to serve as ‘standard candles’. They are far less well understood than individual stars. Single stars are far too faint to be detected at cosmological distances: our telescopes detect a whole galaxy by picking up the total light from its billions of constituent stars. But some stars, in their death-throes, explode as supernovae, and for a few days blaze nearly as brightly as a whole galaxy containing many billions of ordinary stars.

HUNTING DISTANT SUPERNOVAE

A distinctive type of supernova, technically known as a ‘Type Ia’, signals a sudden nuclear explosion in the centre of a dying star, when its burnt-out core gets above a particular threshold of mass and becomes unstable. It is, in effect, a nuclear bomb with a standard yield. The physics is fairly well understood, and the details need not concern us. What is important is that Type Ia supernovae can be regarded as ‘standard candles’, bright enough to be detected at great distances. From how bright they appear, it should be possible to infer reliable distances, and thereby (by measuring the redshift as well) to relate the expansion speed and distance at a past epoch. Cosmologists hoped that such measurements would distinguish between a small slowdown rate (expected if the dark matter has all been accounted for) and the larger rate expected if – as many theorists suspected – there was enough extra dark matter to make up the full ‘critical density’ so that the universe resembled the simplest theoretical model.
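
To make the ‘standard candle’ logic concrete, here is a minimal sketch in Python. It assumes a fixed peak absolute magnitude for every Type Ia supernova (the value below is a commonly quoted round number, used purely for illustration) and simply inverts the astronomers’ distance modulus:

    # A minimal sketch of the 'standard candle' logic (illustrative only).
    M_PEAK = -19.3   # assumed peak absolute magnitude of a Type Ia supernova

    def luminosity_distance_pc(apparent_mag, absolute_mag=M_PEAK):
        """Invert the distance modulus  m - M = 5 log10(d / 10 pc)."""
        return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

    # Example: a supernova whose light peaks at apparent magnitude 23.
    d_pc = luminosity_distance_pc(23.0)
    print(f"{d_pc / 1e9:.1f} billion parsecs "
          f"(roughly {d_pc * 3.26 / 1e9:.0f} billion light-years)")

Combining a distance inferred in this way with the redshift of the supernova’s host galaxy gives one point on the relation between expansion speed and distance at a past epoch; the real analyses refine the ‘standard yield’ assumption by using the shape of each supernova’s light curve.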

These supernovae, incidentally, display another trend that relates directly to their redshift: the remotest and most redshifted ones appear to flare up and fade more slowly than closer ones of the same type. This is exactly what we would expect: a clock on a receding object should run slow. If it sends out periodic ‘beeps’, the later ones have further to travel, and so the intervals between their arrival are lengthened.¹

The brightening and fading of a supernova is itself like a clock, so a slowdown in the ‘light curves’, proportional to the redshift, is just what we would expect if they are receding. It would have no natural explanation in a static universe. This is the best counter to any suspicion that the redshift is due to some kind of ‘tired light’ effect.
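
In symbols, if a supernova at redshift z brightens and fades over a time interval Δt as measured at the supernova itself, the light curve we record is stretched to

    \Delta t_{\rm observed} = (1 + z)\,\Delta t_{\rm emitted}

so a supernova at z = 1, for instance, should appear to evolve twice as slowly as a nearby one – which is just the trend the surveys find.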

Astronomy is, in sociological terms, a ‘big science’: it requires large and expensive equipment. But the research programmes themselves generally don’t require industrial-style teamwork of a kind that is obligatory in, for instance, the laboratories that use big accelerators to study subnuclear particles. Astronomers can still be individualists, pursuing solo projects by competing for a few nights’ observing time on big telescopes (or, of course, by doing something innovative with a small telescope, like the astronomers who first discovered planets around other stars). But the enterprise of using supernovae for cosmology requires prolonged effort by many collaborators, using several telescopes. The first challenge is to ‘catch’ some photons – faint traces of light – from a stellar explosion that occurred billions of years ago. Distant supernovae are picked out by surveying the same patches of sky repeatedly, looking for occasional transient points of light in remote galaxies. The searches are done with moderately sized telescopes because the biggest instruments are in such demand that not enough time can be allocated to any single programme, even one as important as this. Each supernova must then be observed repeatedly, so as to plot out its ‘light curve’ and measure the apparent brightness as accurately as possible. This preferably requires a ten-metre telescope on the ground, or the Hubble Space Telescope. Analysing all the data, and assessing its reliability, is itself an elaborate task.

There is a natural tendency to suspend judgement on any novel scientific claim, especially when it is unexpected, until it has been corroborated by independent evidence. There is sometimes a frustrating delay before this happens. It was therefore fortunate that two separate teams dedicated themselves to the ‘supernova cosmology project’. The first serious entrant into the field was Saul Perlmutter, a physicist based at the Lawrence Berkeley Laboratory in California. Perhaps because he didn’t then have much background in astronomy, he wasn’t deterred by the difficulties and began his involvement around 1990. He gradually attracted and inspired a group of collaborators, from the UK as well as the US. A second group, also international, assembled later; this latter group contained several researchers who had introduced new techniques (which were then also adopted by Perlmutter’s group) to classify the supernovae into subclasses that were even more standardized.

By 1998, each team had discovered about a dozen distant supernovae and mustered enough confidence to announce provisional results. There was less deceleration than would be expected if Ω were equal to one. This in itself wasn’t surprising – there was no evidence for enough dark matter to raise Ω above around 0.3 – though it went against a strong theoretical prejudice that the cosmos would be ‘simpler’ if Ω were exactly unity. But what was a surprise was that there seemed no deceleration at all – indeed, the expansion seemed to be speeding up. The US-based magazine Science rated this as the number-one scientific discovery of 1998 in any field of research.

These observations are right at the limits of what is possible with existing telescopes. Remote supernovae are so faint that it’s hard to measure them accurately. Furthermore, some astronomers worry that an intervening ‘fog’ of dust could attenuate the light, making the supernovae seem further away than they actually are. Also, the ‘bomb’ may not be quite standardized: for instance, its yield may depend on the amount of carbon and other heavy elements in the precursor star, which would be systematically lower in objects that formed when the universe was younger (in other words, those that we observe with the highest redshifts). But cross-checks are being made, and every month more supernovae are added to the sample.

AN ACCELERATING UNIVERSE?

An acceleration in the cosmic expansion implies something remarkable and unexpected about space itself: there must be an extra force that causes a ‘cosmic repulsion’ even in a vacuum. This force would be indiscernible in the Solar System, and it would have no effect within our galaxy, but it could overwhelm gravity in the still more rarefied environment of intergalactic space. Despite the gravitational pull of the dark matter (which, acting alone, would cause a gradual deceleration), the expansion could then actually be speeding up. And we have to add another crucial number to our list to describe the strength of this ‘antigravity’.

We normally think of the vacuum as ‘nothing’. But if one were to remove from a region of interstellar space the few particles that it contains, and even shield it from the radiation passing through it, and cool it to the absolute zero of temperature, the emptiness that’s left may still exert some residual force. Einstein himself conjectured this. As early as 1917, soon after he had developed his theory of general relativity, he began to think how that theory might apply to the universe. At that time, astronomers only really knew about our own galaxy, and the natural presumption was that the universe was static – neither expanding nor contracting. Einstein found that a universe that was set up in a static state would immediately start to contract because everything in it attracts everything else. A universe couldn’t persist in a static state unless an extra force counteracted gravity. So he added to his theory a new number, which he called the ‘cosmological constant’, and denoted by the Greek letter λ (lambda). Einstein’s equations then allowed a static universe where, for a suitable value of λ, a cosmic repulsion exactly balanced gravity. This universe was finite but unbounded: any light beam that you transmitted would eventually return and hit the back of your head.
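
For readers who like to see the balance written down, here is the gist in modern notation (one common convention, treating the matter as pressure-free). Einstein’s equations give, for the scale factor a of the universe,

    \frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\,\rho \;+\; \frac{\Lambda c^{2}}{3}

so a static universe, with the acceleration ä equal to zero, requires Λ = 4πGρ/c²: the repulsion exactly cancels the attraction of matter of density ρ. The companion equation then fixes the curvature, yielding the closed, finite-but-unbounded space described above.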

This so-called ‘Einstein universe’ became no more than a curiosity after 1929. Astronomers had by then realized that our galaxy was just one of many, and that distant galaxies were receding from us: the universe wasn’t static, but was expanding. Einstein thereafter lost interest in λ. Indeed, George Gamow’s autobiography My World Line recalls a conversation in which Einstein, three years before his death, rated λ as his ‘biggest blunder’, because if he hadn’t introduced it, his equations would have implied that our universe must be either expanding or contracting. He might then have predicted the expansion before Edwin Hubble discovered it.

Einstein’s reason for inventing λ has been obsolete for seventy years. But that doesn’t discredit the concept itself. On the contrary, λ now seems less contrived and ad hoc than Einstein thought it was. Empty space, we now realize, is anything but simple. All kinds of particles are latent in it. Any particle, together with its antiparticle, can be created by a suitable concentration of energy. On an even tinier scale, empty space may be a seething tangle of strings, manifesting structures in extra dimensions. From our modern perspective the puzzle is: Why is λ so small? Why don’t all the complicated processes that are going on, even in empty space, have a net effect that is much larger? Why isn’t space as dense as an atomic nucleus or a neutron star (in which case it would close up on itself within ten or twenty kilometres)? Or even, perhaps, why isn’t space as dense as the universe was at 10⁻³⁵ seconds – an era whose significance for unified theories is discussed in later chapters? In fact, it is lower than that ultra-early density by a factor of 10¹²⁰ – perhaps the worst failure of an order-of-magnitude guess in the whole of science. The value of λ may not be exactly zero, but it is certainly so weak that it can only compete with the very dilute gravity of intergalactic space.

Some theorists have suggested that space has a complicated microstructure of tiny black holes that adjusts itself to compensate for any other energy in the vacuum, and leads to λ being exactly zero. If our universe is indeed accelerating, and λ is not zero, this would scupper such arguments and also caution us against the line of thought that ‘because something is remarkably small, there must be some deep reason why it is exactly zero’.

THE CASE FOR A NON-ZERO λ

The case for a non-zero λ at the time of writing (Spring 2000) is strong but not overwhelming. There could be unsuspected trends or errors in the supernova observations that haven’t been properly allowed for. But other evidence, albeit of a slightly technical and indirect kind, bolsters the case for an accelerating universe. The background radiation – the ‘afterglow’ surviving from the Big Bang – is not completely uniform across the sky; there is a slight patchiness in the temperature, caused by the non-uniformities that evolve into galaxies and clusters. The expected size of the most prominent patches can be calculated. How large they appear in the sky – whether, for instance, they are one degree across or two degrees across – depends on the amount of focusing by the gravity of everything along the line of sight. Measurements of this kind weren’t achieved until the late 1990s (they are made from high, dry mountain sites, from Antarctica, or from long-duration balloon flights) and they tell against a straightforward low-density universe. If Ω were really 0.3, and λ were exactly zero, the seeds of clusters would appear smaller than they actually do. However, any energy latent in the vacuum contributes to the focusing. If λ is around 0.7, we get a pleasant consistency with these results, as well as with the supernova evidence for accelerating expansion.
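
The geometry can be sketched numerically. The short Python fragment below is only an illustration: it assumes a standard matter-plus-λ model (radiation neglected), and the Hubble constant and the physical size of a ‘patch’ at last scattering are round numbers chosen for the example rather than measured quantities. It compares how large such a patch would look in an open universe with Ω = 0.3 and λ = 0 against a flat universe with Ω = 0.3 and λ = 0.7:

    import numpy as np
    from scipy.integrate import quad

    C_KM_S = 299792.458   # speed of light, km/s
    H0 = 65.0             # assumed Hubble constant, km/s per Mpc (illustrative)
    Z_LS = 1100.0         # approximate redshift of the last-scattering epoch
    SCALE_MPC = 0.25      # assumed physical size of a 'patch' then, in Mpc

    def angular_size_deg(omega_matter, omega_lambda):
        """Angle subtended today by SCALE_MPC at the last-scattering epoch."""
        omega_k = 1.0 - omega_matter - omega_lambda       # curvature term
        E = lambda z: np.sqrt(omega_matter * (1 + z) ** 3
                              + omega_k * (1 + z) ** 2
                              + omega_lambda)
        d_hubble = C_KM_S / H0                            # Hubble distance, Mpc
        d_comoving = d_hubble * quad(lambda z: 1.0 / E(z), 0.0, Z_LS)[0]
        if omega_k > 1e-8:                                # open geometry
            d_transverse = (d_hubble / np.sqrt(omega_k)
                            * np.sinh(np.sqrt(omega_k) * d_comoving / d_hubble))
        else:                                             # flat geometry
            d_transverse = d_comoving
        d_angular = d_transverse / (1.0 + Z_LS)           # angular-diameter distance
        return np.degrees(SCALE_MPC / d_angular)

    print(f"open, lambda = 0.0: {angular_size_deg(0.3, 0.0):.2f} degrees")
    print(f"flat, lambda = 0.7: {angular_size_deg(0.3, 0.7):.2f} degrees")

Run with these numbers, the open model gives roughly half the angular size of the flat model with λ, which is the sense in which the measured patch sizes tell against a low-density universe in which λ is exactly zero.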

Gravity is the dominant force in planets, stars and galaxies. But on the still-larger scale of the universe itself, the average density is so low that a different force may take over. The cosmic number λ – describing the weakest force in nature, as well as the most mysterious – seems to control the universe’s expansion and its eventual fate. Einstein’s ‘blunder’ may prove a triumphant insight after all. If it does, it will not be the only instance in which his work has had an impact that he himself failed to foresee. The most remarkable implication of general relativity is that it predicted black holes; but his attitude was summarized thus by Freeman Dyson:²

Einstein was not only sceptical, he was actively hostile, to the idea of black holes. He thought the black hole solution was a blemish to be removed from the theory by a better mathematical formulation, not a consequence to be tested by observation. He never expressed the slightest enthusiasm for black holes, either as a concept or a physical possibility.

If λ isn’t zero, we are confronted with the problem of why it has the value we observe – one smaller, by very many powers of ten, than what seems its ‘natural’ value. Our present cosmic environment would be very little different if it were even smaller (though the long-range forecast, discussed below, would be somewhat altered). However, a much higher value of λ would have had catastrophic consequences: instead of becoming competitive with gravity only after galaxies have formed, a higher-valued λ would have overwhelmed gravity earlier on, during the higher-density stages. If λ started to dominate before galaxies had condensed out from the expanding universe, or if it provided a repulsion strong enough to disrupt them, then there would be no galaxies. Our existence requires that λ should not have been too large.

THE LONG-RANGE FUTURE

Geologists infer the Earth’s history from strata in the rocks; climatologists can infer changes in temperature over the last million years by drilling through successive layers of Antarctic ice. Likewise, astronomers can study cosmic history by taking ‘snapshots’ of the galaxies at different distances: those more remote from us (with larger redshifts) are being viewed at earlier stages in their evolution. The challenge for theorists (see Chapter 8) is to understand galaxies and how they evolve, and to produce computer simulations that faithfully match the reality.

Most galaxies have now settled down into a sedate maturity, an equilibrium where their ‘metabolism’ has slowed. Fewer new stars are forming, and few bright blue stars are shining. But what about the long-range future? What would happen if we came back when the universe was ten times older – a hundred billion rather than ten billion years old? My favoured guess (before there was much relevant evidence) used to be that the expansion would by then have halted and been succeeded by recollapse to a Big Crunch in which everything experienced the same fate as an astronaut who falls inside a black hole. Our universe would then have a finite timespan for its continued existence, as well as being bounded in space. But this scenario requires Ω to exceed unity, contrary to the evidence that has mounted up in recent years. Dark matter assuredly exists, but there does not seem to be enough to yield the full ‘critical density’: Ω seems to be less than unity. Furthermore, an extra cosmic repulsion, described by λ, may actually be speeding up the expansion of our universe.

It seems likely that expansion will continue indefinitely. We can’t predict what role life will have carved out for itself ten billion (or more) years hence: it could be extinct; on the other hand, it could have evolved to a state where it can influence the entire cosmos, perhaps even invalidating this forecast. But we can compute the eventual fate of the inanimate universe: even the slowest-burning stars would die, and all the galaxies in our Local Group – our Milky Way, Andromeda, and dozens of smaller galaxies – would merge into a single system. Most of the original gas would by then be tied up in the dead remnants of stars: some black holes, others very cold neutron stars or white dwarfs.

Looking still further ahead, processes far too slow to be discernible today could come into their own. Collisions between stars within a typical galaxy are immensely infrequent (fortunately for our Sun), but their number would mount up. The drawn-out terminal phases of our galaxy would be sporadically lit up by intense flares, each signalling a collision between two dead stars. The loss of energy via gravitational radiation (an effect predicted by Einstein’s theory of general relativity) – imperceptibly slow today, except in a few binary stars where the orbits are specially close and fast – would, given enough time, grind down all stellar and planetary orbits. Even atoms may not live for ever. In consequence, white dwarfs and neutron stars will erode away because their constituent particles decay. Eventually, black holes will also decay. The surface of a hole is made slightly fuzzy by quantum effects, and it consequently radiates. In our present universe, this effect is too slow to be interesting unless mini-holes the size of atoms actually exist. The timescale is 10⁶⁶ years for the total decay of a stellar-mass hole; and a hole weighing as much as a billion suns would erode away in 10⁹³ years.
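
The two figures just quoted hang together because the evaporation time of a black hole grows as the cube of its mass. Taking the stellar-mass figure at face value as a normalization,

    t_{\rm evaporation} \;\sim\; 10^{66}\,\Bigl(\frac{M}{M_{\odot}}\Bigr)^{3}\ \text{years}

so a hole weighing a billion (10⁹) solar masses takes about 10⁶⁶ × (10⁹)³ = 10⁹³ years to erode away, as stated above.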

Eventually, after 10¹⁰⁰ years have passed, the only surviving vestige of our Local Group of galaxies would be just a swarm of dark matter and a few electrons and positrons. All galaxies beyond our Local Group would undergo the same internal decay, and would move further from us. But the speed with which they disperse depends crucially on the value of λ. If λ were zero, the pull of ordinary gravity would slow down the recession: although galaxies would move inexorably further away, their speed (and redshift) would gradually diminish but never quite drop to zero. If our remote descendants had powerful enough telescopes to detect highly redshifted galaxies, despite their intrinsic fading and ever-increasing remoteness, they would actually be able to detect more than are visible in our present sky. After (say) 100 billion years, we would be able to see out as far as 100 billion light-years; objects that are now far beyond our present horizon, because their light hasn’t yet had time to reach us, would come into view.

But if λ isn’t zero, the cosmic repulsion will push galaxies away from each other at an accelerating rate. They will fade from view even faster because their redshifts increase rather than diminish. Our range of vision will be bounded by a horizon that is rather like an inside-out version of the horizon around a black hole. When things fall into a black hole, they accelerate, getting more and more redshifted and fading from view as they approach the hole’s ‘surface’. A galaxy in a λ-dominated universe would accelerate away from us, moving ever closer to the speed of light as it approaches the horizon. At late times, we will not see any further than we do now. All galaxies (except Andromeda and the other small galaxies gravitationally bound into our own Local Group) would be fated to disappear from view. Their distant future lies beyond our horizon, as inaccessible to us as the events inside a black hole. Extragalactic space will become exponentially emptier as the aeons advance.