10
DARK SECRETS

Astronomy’s big questions

During the 1980s, a most unusual astronomy book was published. Written by an astronomer working with a designer of folded-paper models, it was an ambitious and elaborate ‘pop-up’ book of the Universe. The book made a heroic effort to represent the wonders of the Cosmos in three dimensions, and I think it had its moment in the bookshops. But, with all due respect to the authors (one of whom I count among my friends), it has to be said that the wonders of the Cosmos really don’t lend themselves to folded-paper models—certainly not as successfully as the usual stock-in-trade of the pop-up genre: Jemima Puddleduck, Postman Pat and Thomas the Tank Engine. When it comes to planets, stars and galaxies, the medium becomes a little inadequate. To be honest, a folded-paper planet looks more like a hibernating armadillo. A star in the process of formation resembles something you’d hang on a Christmas tree. And a galaxy looks strikingly like the aftermath of some bizarre culinary experiment involving a soufflé and a stick of dynamite. Of all the pop-ups in the book, though, the least awe-inspiring is the pop-up Big Bang. The cataclysmic explosion that gave birth to the entire Universe is reduced to a series of creaks and shuffles as you open the book. No bang. Not even a pop. Just a garish paper splodge lurching unsteadily into existence before your eyes.

‘Well, what do you expect?’ I can almost hear the indignant author saying over my shoulder. And she would have a point. What do we imagine the Big Bang to have been like? Nothing, absolutely nothing in our experience allows us to envisage even remotely the power of this most significant event in the Universe’s history. We can imagine explosions, of course. Even nuclear ones. But in our mind’s eye we always see them from the outside. And the trouble with the Big Bang is that when it went off, 13.7 billion years ago, it not only produced everything now contained in the Universe but created space and probably time as well. There was no ‘outside’ for the infant Universe to explode into—all of space was contained within its violently expanding boundaries. If, indeed, there were any boundaries. And there was no ‘before’, if time itself started with the instant of creation. These are truly weird concepts. Perhaps, after all, the pop-up book did make a pretty good attempt at depicting the undepictable.

PUTTING THE BANG INTO THE UNIVERSE

The supremely understated name we give to this extreme event came from someone who was actually sceptical about it. In 1949, on BBC radio, Fred Hoyle (who was then merely a professor rather than a sir) made a disparaging remark about the theory, using the term Big Bang in an attempt to highlight how ridiculous it was. Once his radio lectures were enshrined in print in a little book called The Nature of the Universe, the name stuck, and we’ve been stuck with it ever since. You’d have thought that cosmologists could have come up with something far more elegant to describe the single most important event in the history of the Universe, but I’m afraid they haven’t.

In fact, the theory itself has a much more imposing pedigree than its name suggests. It has its origins in the aftermath of Albert Einstein’s General Theory of Relativity, which, I’m sure you’ll remember, was published early in 1916. Little more than a year later, however, Einstein thought his new theory was in big trouble. If his mathematical equations were applied to the Universe as a whole, they became unstable, producing a Universe that would be changing in size. To the best of Einstein’s knowledge, the Universe was static. So he did something clever: he introduced a mathematical entity that he called the ‘cosmological constant’. It could have a positive or negative effect and would represent an inbuilt attractive or repulsive force in the fabric of space that would balance any tendency for it to expand or contract. As far as Einstein was concerned, that solved the problem. His equations then represented a well-behaved static Universe—and he could sleep easy at night. Phew.

But in 1912 evidence had already begun to emerge that the Universe might not be static at all. In that year, Vesto Melvin Slipher of the Lowell Observatory (from where Pluto was later discovered) had embarked on the first systematic measurement of the radial velocities of the mysterious objects then known as ‘spiral nebulae’. Perhaps you’ll recall from Chapter 6 that such an undertaking required the use of a spectrograph and today is called a ‘galaxy redshift survey’. Slipher’s work, trivial though it may seem to us now, was a triumph of observational astronomy. Each of his objects required between 20 and 40 hours of photographic exposure time on the Lowell 0.6-metre refracting telescope, gathered over several nights, producing spectra whose barcode features were even then barely distinguishable. This was no mean feat, for, only a few years earlier, the great 1.5-metre reflecting telescope at Mount Wilson Observatory in California had needed no fewer than 80 hours to obtain a single spectrum of one of these objects. Slipher’s results, published in 1917, showed that the 25 objects he had managed to observe were predominantly receding rather than approaching. The spiral nebulae—whatever they were—were racing away from us.
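
For readers who like to see the arithmetic, here is a minimal sketch of how a radial velocity falls out of a spectrum of the kind Slipher was measuring. The ‘observed’ wavelength is an invented figure for illustration, not one of his measurements:

```python
# A minimal sketch of how a radial velocity falls out of a spectrum: compare
# the observed wavelength of a known spectral line with its laboratory value.
# The 'observed' figure below is hypothetical, not one of Slipher's.

C_KM_S = 299_792.458          # speed of light, km/s

lab_wavelength_nm = 393.37    # calcium K line, rest wavelength in nanometres
observed_nm = 394.68          # hypothetical measured wavelength, nanometres

redshift = (observed_nm - lab_wavelength_nm) / lab_wavelength_nm
velocity_km_s = redshift * C_KM_S    # simple approximation, fine for small shifts

print(f"Redshift z = {redshift:.5f}")
print(f"Recession velocity of roughly {velocity_km_s:.0f} km/s")
```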

When, eight years later, Edwin Hubble showed conclusively that the nebulae were galaxies—huge, remote objects rather than nearby, small ones—Slipher’s results took on new significance. They were the first hint of a relationship between the velocity of a galaxy and its distance from our own Milky Way. The further away a galaxy was, the faster it was racing away from us. And it was Hubble again, working at Mount Wilson with the new 2.5-metre telescope (then the biggest in the world), who spectacularly confirmed the relationship, in 1929. Which, of course, is why it’s now called Hubble’s Law.

Meanwhile, two other theorists, Willem de Sitter (the Dutchman we met in Chapter 8) and a Russian scientist called Alexander Friedmann, had produced new solutions to Einstein’s equations that rashly allowed for an expanding Universe. By 1927, a Belgian priest called Georges Lemaître had taken their work further by proposing a Universe in which distant galaxies would appear to recede from the observer at greater speeds than those nearby, thus foreshadowing Hubble’s Law itself. Following Hubble’s confirmation of the relationship, its implication became clear—that the Universe was expanding everywhere, with each galaxy’s speed in proportion to its distance. Lemaître then took the next big step, reasoning that if the Universe had been expanding at much the same rate all along, there must have been a time in the distant past when everything was in the same place. Thus, in 1931, Lemaître produced his theory of a primordial atom from which everything had expanded—the forerunner of today’s Big Bang model.
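
To see how Lemaître’s reasoning turns into a number, here is a back-of-the-envelope sketch. The value of the Hubble constant is a modern round figure, assumed purely for illustration:

```python
KM_PER_MPC = 3.086e19         # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 70.0                     # Hubble constant, km/s per megaparsec (assumed)

# Hubble's Law says v = H0 x d. If that speed has never changed, every galaxy
# was on top of us a time d / v = 1 / H0 ago. Converting the units:
hubble_time_seconds = KM_PER_MPC / H0
hubble_time_gyr = hubble_time_seconds / SECONDS_PER_YEAR / 1.0e9

print(f"1/H0 is roughly {hubble_time_gyr:.1f} billion years")
```

The answer comes out at roughly fourteen billion years, pleasingly close to the 13.7 billion mentioned earlier.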

Much of the refinement leading to the present version of the model was introduced in the 1940s by a Ukrainian-born US physicist called George Gamow. It included the idea that the early Universe had an extremely high density and temperature, and suggested that this would have led to the formation of hydrogen and helium at levels matching those that are observed today. The prediction was made in a paper jointly authored with Ralph Asher Alpher, the doctoral student who carried out much of the detailed calculation, and with Hans Albrecht Bethe, whose name Gamow added chiefly to complete the celebrated pun on the first three letters of the Greek alphabet. The ‘Alpher, Bethe, Gamow’ paper was published on 1 April 1948 and, despite the date, contained ground-breaking research.

Gamow also developed the extraordinary idea that if we could see far enough back in time, by looking deep into space, we would be able to see the flash of the Big Bang itself. We would be able to see back to the moment when the infant Universe first became transparent, releasing the brightly glowing fog of radiation that had filled it until then. Were it not for the expansion of the Universe since the light was emitted—a consequence of the Big Bang itself—we would expect to see this as visible light. The entire sky would shine brightly, rendering the stars invisible, and we would know little about our place in space. But as the Universe has expanded, the light travelling through it has been stretched into faint whispers of microwave radiation.
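
To put rough numbers on that stretching, here is a small sketch using Wien’s law, which links the temperature of a glowing body to the wavelength at which it shines most brightly. The emission temperature and stretch factor are standard textbook figures that I am assuming here, rather than values quoted above:

```python
WIEN_B = 2.898e-3      # Wien's displacement constant, metre-kelvins
T_EMITTED = 3000.0     # rough temperature when the Universe turned transparent, K
STRETCH = 1100.0       # factor by which wavelengths have been stretched since then

T_today = T_EMITTED / STRETCH          # temperature of the radiation now
peak_then = WIEN_B / T_EMITTED         # wavelength at which it shone brightest then
peak_now = WIEN_B / T_today            # and where it peaks today

print(f"Temperature today: about {T_today:.1f} degrees above absolute zero")
print(f"Peak wavelength then: about {peak_then * 1e6:.1f} micrometres (near-visible light)")
print(f"Peak wavelength now:  about {peak_now * 1e3:.1f} millimetres (microwaves)")
```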

Eventually, in 1964, that radiation was famously discovered—completely by accident—when a new radio astronomy antenna was being tested in Holmdel, New Jersey. Scientists Arno Allan Penzias and Robert Wilson tried everything they could to eliminate a mysterious background signal that seemed to emanate from the entire sky. They even cleared pigeon droppings from the antenna. (Doesn’t that conjure up the most bizarre conversation? ‘Oh, it’s the flash of the Big Bang. We thought it was just pigeon poo . . .’) The supposedly faulty signal is now known as the ‘cosmic microwave background radiation’ (CMBR), and it is the most ancient thing we can ever see. Dating from a time only about 380 000 years after the Big Bang itself, this fossil radiation has been travelling through space for almost the complete lifetime of the Universe.

The CMBR was the clinching proof of the Big Bang model. It was also the final nail in the coffin for its main rival, the Steady State Theory of the Universe, which had been championed by Fred Hoyle, among others. Happily, both Georges Lemaître and George Gamow were still alive to see the discovery of the CMBR. And happily for Penzias and Wilson, in 1978, they jointly received the Nobel Prize for Physics for their work.

But what of Einstein and the cosmological constant he’d introduced to make the Universe nicely static? Once the expansion had been confirmed by Hubble in 1929, Einstein quickly withdrew his idea in embarrassment. And, years afterwards, George Gamow disclosed that, ‘When I was discussing cosmological problems with Einstein, he remarked that the introduction of the cosmological term was the biggest blunder he ever made in his life.’ However, the latest findings in cosmology suggest that, far from blundering, Einstein had shown great insight in introducing the cosmological constant. Remarkably, as we will see later in this chapter, an inbuilt repulsive force in the fabric of space truly seems to be there. Perhaps less remarkably, Einstein himself was an even bigger genius than he realised.


The fascination of the public at large with these discoveries about the wider Universe has, over the years, fed a growing industry in popular cosmology. The hapless pop-up book is but one recent example, but the genre goes back a long way. Almost before the ink was dry on Einstein’s general relativity paper, he had produced a popular-level book on the subject, Relativity: The Special and the General Theory, which was first published in 1916. Many more scientists, from Eddington to Hoyle and beyond, produced accessible accounts, and who could forget Gamow’s fictional hero, Mr Tompkins, a humble bank clerk who married the daughter of a physics professor? Dreaming his way through the byways of modern physics, Mr Tompkins brought science to the people during the dark days of the Second World War.

If Mr Tompkins was an unlikely exponent of the mysteries of the Universe, an even more unlikely one turned up in the 1990s, in the shape of a British stand-up comic. In a three-part TV series made for the United Kingdom’s Channel 4, the late Ken Campbell took the role of an ordinary bloke driven by curiosity to probe the secrets of the Universe on both cosmic and quantum scales—the very large and the very small. In Reality on the Rocks, Campbell’s visit to the 4.2-metre William Herschel Telescope, on the island of La Palma in the Canary Islands, vied with his exploration of the giant particle accelerator at the European Centre for Nuclear Research, in Geneva, for spectacular footage. But Reality on the Rocks was no run-of-the-mill science doco. The whizz-bang stuff was cut with candid scenes from Campbell’s one-man theatre show, producing a delicious mix of enlightenment, hard-hitting comedy and a quest for meaning. Forget the movie star voice-overs and deep, meaningful music that accompany most astronomy documentaries—this was science in the raw.

I mention Reality on the Rocks not just because I thought it did a brilliant job in popularising physics, or because I happened to stumble into it as a minor player (courtesy of an observing stint on the William Herschel Telescope), but also because it endowed me with a lasting sense of optimism about our growing understanding of the Universe. Campbell’s dogged determination to comprehend what we know about the Universe and place it into the perspective of everyday life was inspiring in itself—especially when seen at close quarters. However, it was a humble Spanish café owner in the small coastal town of Tazacorte, on La Palma, who unwittingly hit the nail on the head. I can’t be sure, but I think his name was José. We were shooting a final scene around the tables outside this gentleman’s restaurant on the fringe of Tazacorte’s black-sand beach, chatting about the Universe and enjoying the seafood delicacies that he insisted on giving us. With the setting Sun blazing over the Atlantic Ocean in front of us, it was the classic TV atmosphere shot. Eventually, someone asked José what he thought about the Universe. ‘Ah, yes,’ he replied, sagely. ‘Yes. Things have been better since the Universe was here. Much better.’

I think it’s the single most profound thing I’ve ever heard. OK, his command of English didn’t allow him to differentiate between the words ‘universe’ and ‘observatory’, but what a way to sum up all the motivation and aspiration of astronomers and cosmologists. I doubt his remark made the final cut—I’m afraid I can’t remember—but that thoughtful man’s words have stuck with me ever since.

GROPING IN THE DARK

When Reality on the Rocks went to air in 1995, science’s view of the Universe as a whole was troubled with inconsistencies. A couple of them had been fixed a decade or so earlier with the introduction of the concept of inflation—the idea that the Universe expanded by a colossal amount during the first gazillionth of a second of its existence, ballooning from the size of a subatomic particle to the size of a galaxy within 10⁻³³ of a second of the Big Bang. (That’s a 1 with a decimal point and 32 zeroes in front of it.) Then, for some reason, inflation stopped. While financial inflation is usually a bad thing, cosmic inflation solved problems of geometry (the absence of large-scale curvature of space-time) and uniformity (the smoothness of the CMBR) in our understanding of the early Universe. That modification of the theory still holds good today, and most scientists accept the inflationary model of the Big Bang.

But another geometry problem niggled at the fertile minds of cosmologists. When you added up all the matter in the Cosmos, it looked as if there wasn’t enough to give the Universe its observed geometry. The conventional wisdom was that the mutual gravitational pull of all the matter in the Universe—galaxies, stars, planets and all the rest—should conspire to slow down its expansion over time. In other words, the Universe should be decelerating, and this slow-down should be detectable by looking far enough into space to find a departure from Hubble’s Law. It should produce a particular geometric signature for space on the largest scale. But that didn’t seem to be there, and one or two scientists wondered whether something was badly wrong with the theory.

Those near-heretics were vindicated dramatically in 1998, when a group of astronomers led by Brian P. Schmidt of the Australian National University produced hard evidence that, far from slowing down as expected, the Universe is expanding more rapidly today than it was seven or eight billion years ago. Bizarrely, the expansion of the Universe is accelerating. Confirmation came from a rival group within the year, and the discovery was hailed by Science magazine as the ‘breakthrough of the year’ for 1998. It’s a great pleasure to report that the accolades have not subsided, as, a few weeks before these words were written, Schmidt and his colleague, Adam G. Riess, together with Saul Perlmutter—leader of the rival team in the United States—were awarded the 2011 Nobel Prize for Physics. The partying throughout Australian astronomy seems set to go on for quite a while yet, and Schmidt even got a mention by the Queen during her visit shortly afterwards. Wow.

The evidence for accelerating expansion collected by these two groups came in the form of observations of a particular kind of supernova—the Type Ia—at very great distances. Supernovae of this type are caused by white dwarfs, the collapsed remnants of old stars, exploding violently when matter deposited on them from a nearby companion star pushes them over a critical mass. Because they all detonate at much the same mass, they provide extremely bright standard candles, easily outshining their host galaxies. What caused all the excitement was that these remote supernovae were dimmer than they ought to have been for their measured redshifts—and hence look-back times—placing them further away than a steadily slowing Universe would allow. This suggests that, yes, the expansion of the Universe is accelerating, and that has now been confirmed by a number of different methods.
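
Here is a simplified sketch of why ‘dimmer than expected’ points to acceleration. It compares how faint a supernova at redshift 0.5 should look in a matter-only Universe and in one with 72 per cent dark energy; the Hubble constant and density values are illustrative round numbers, and the sums are far cruder than the teams’ real analysis:

```python
import numpy as np

C_KM_S = 299_792.458        # speed of light, km/s
H0 = 70.0                   # Hubble constant, km/s per megaparsec (assumed)

def luminosity_distance_mpc(z, omega_matter, omega_lambda):
    """Luminosity distance in a flat universe, by simple numerical integration."""
    zs = np.linspace(0.0, z, 2001)
    e = np.sqrt(omega_matter * (1 + zs) ** 3 + omega_lambda)
    integral = np.sum((1 / e[:-1] + 1 / e[1:]) / 2) * (zs[1] - zs[0])   # trapezoid rule
    comoving_mpc = (C_KM_S / H0) * integral
    return (1 + z) * comoving_mpc

def distance_modulus(d_mpc):
    """Apparent minus absolute magnitude for a source at distance d (in megaparsecs)."""
    return 5.0 * np.log10(d_mpc * 1.0e6 / 10.0)

z = 0.5
mu_matter_only = distance_modulus(luminosity_distance_mpc(z, 1.00, 0.00))
mu_dark_energy = distance_modulus(luminosity_distance_mpc(z, 0.28, 0.72))

print(f"Matter-only Universe : distance modulus {mu_matter_only:.2f}")
print(f"With dark energy     : distance modulus {mu_dark_energy:.2f}")
print(f"Difference: about {mu_dark_energy - mu_matter_only:.1f} magnitudes fainter with dark energy")
```

A few tenths of a magnitude doesn’t sound like much, but it was enough to give the game away.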

The effect is attributed to an inherent springiness of space—or dark energy—that is overcoming the tendency of the Universe to decelerate because of the mutual gravitational attraction of everything within it. It’s described as a ‘negative pressure’ to distinguish it from the positive pressure experienced at the centre of a cloud of gas collapsing under its own gravity, for example. In other words, it’s a tension. Moreover, we now know that this dark energy is the largest single component of the Universe, amounting to 72 per cent of its total mass/energy budget. (I’m sure you remember that mass and energy are equivalent, by courtesy of Einstein’s special relativity.)

So what exactly is dark energy? When consideration was given to this question, an intriguing possibility emerged. After Einstein’s embarrassing withdrawal of the concept of a cosmological constant, most scientists had simply assumed that the constant was zero and that space had no inbuilt force field. But could the newly discovered dark energy be something to do with this long-neglected orphan of general relativity? If it had the form of a constant negative pressure, and the pressure was so weak that it only began to overcome gravity when the characteristic distances separating galaxies had become very large—a long time after the Big Bang—then it might just fit the bill.

There are other theoretical possibilities, too, but they require the introduction of various flavours of ‘new physics’—theoretical frameworks, such as quantum gravity and string theory, that go beyond anything relativity can describe. These are well beyond the current limits of certainty, and are active areas of research. Often in such investigations, scientists end up following blind alleys, and the certainties only emerge over time. One such possibility is a new fundamental force—perhaps the whimsically named ‘quintessence’. Its relationship to the four fundamental forces we already know about echoes the fifth element of ancient Greek philosophy, the heavenly substance of the stars. Like the effect of the cosmological constant, this would have to be a dark energy with negative pressure, but one key difference is that quintessence would change with time, leaving its imprint on the Universe only in the relatively recent past.

Another possibility that must be explored is to abandon the hallowed cosmological principle, which says the Universe looks much the same everywhere and in every direction. Abandoning it would permit a Universe with significant differences between one place and another, perhaps again eliminating the need for an overall repulsive pressure. This possibility may have been ruled out by recent work at Johns Hopkins University, in Baltimore, which has examined the idea of a local void in the Universe giving the illusion of accelerated expansion. It appears that such a hypothetical void cannot be consistent with the most recent measurements of the expansion velocity.

Most astronomers today accept the reality of dark energy—and its consequences for the long-term future of the Universe. When, during the 1970s and 1980s, we believed that the Universe’s expansion was slowing down, many astronomers thought the expansion would eventually reverse, turning into a collapse that would culminate in what was usually called the ‘Big Crunch’. It would be a rewind of the Big Bang, in a sense, which is why Brian Schmidt always calls it the ‘gnaB giB’. Yes, quite so. But with the discovery of the accelerating expansion, that scenario changed, and we now expect the Universe to continue expanding forever.

Sadly, the Universe seems destined to become an incredibly boring place as a result. Its reserves of hydrogen will be consumed in stars, which are themselves doomed to die when their nuclear fuel runs out. Moreover, the accelerating expansion will eventually carry most galaxies beyond the horizon of visibility, because they will be receding faster than the speed of light. Any intelligent beings will have no idea that there are other galaxies, or that the Universe started in a Big Bang. It will make the science of cosmology very difficult indeed.

Some of today’s scientists—including Schmidt—see an even more startling future. If the expansion continues to accelerate, it’s possible that space itself may be torn apart in a scenario that has been dubbed the ‘Big Rip’. It’s very hard for us to imagine this, and even experts in the field disagree about the possibility because, as yet, we have no accepted theory of exactly how empty space is constructed. Still, it makes for good headlines.


The issue with dark energy is that no one really knows what causes it. The quantum physicists think it might be the result of a seething foam of virtual particles popping in and out of existence and imbuing the fabric of space with a negative pressure. It sounds plausible, except that even the best of these theories predicts a repulsive force that is 10¹²⁰ times bigger than what we observe (yes, that’s a 1 followed by 120 zeros). A repulsive force as intense as that would already have made its presence felt by tearing everything apart—including atoms—in a baby-Universe version of the Big Rip. No wonder this estimate is considered by most physicists to be the worst theoretical prediction in the whole of science.

What can we do to improve our flawed understanding of the problem? A good start would be to identify which model of dark energy best fits the astronomical observations—cosmological constant, quintessence or something else? Not surprisingly, the required observations are hard to carry out, but they are underway. A number of groups throughout the world are now actively engaged in tackling the dark energy problem, typically by extending the standard candle supernova observations to greater distances and greater numbers of objects.

There’s another possibility, however, based on the fact that dark energy has a significant influence on the large-scale geometry of the Universe. If that geometry can be probed at different periods in the Universe’s history, perhaps by using some kind of standard ruler seen at varying distances (for example, the typical separation of pairs of galaxies, which seems to be the same throughout the Universe), there is a real chance that the correct model of dark energy will be identified.

We have a good starting point in this quest, through our knowledge of the large-scale geometry at two key times in cosmic history. One is the very early Universe. That microwave background radiation we mentioned a couple of pages ago, the CMBR, is a kind of cosmic wallpaper pasted across the entire sky and is behind (or, more accurately, before) everything else in the visible Universe. Early studies showed it had a temperature of 2.7 degrees Kelvin (i.e. degrees above absolute zero), which is about what you would expect from 13.7 billion years of cooling off. But more detailed investigations revealed minute temperature variations in the radiation from one part of the sky to another, at the minuscule level of 0.00001 degrees Kelvin. These had their origin in acoustic oscillations—sound waves—in the primordial fireball. They represent the bang of the Big Bang frozen in time, and they give us an accurate picture of what the Universe was like in its infancy. And then, as we saw in Chapter 6, today’s Universe has also been thoroughly explored with large-scale surveys of the three-dimensional distribution of galaxies, such as the 2dF Galaxy Redshift Survey, made with the 3.9-metre AAT.

Both these snapshots of the Universe reveal structure that is of great interest to cosmologists. Those minute variations in the CMBR have been explored in ever-finer detail by a succession of spacecraft, beginning with the Cosmic Background Explorer, in the early 1990s, followed by the Wilkinson Microwave Anisotropy Probe, in the early 2000s, and culminating in today’s Planck spacecraft, whose findings will be announced as this book goes to press. And the redshift surveys tell us that the distribution of galaxies in today’s Universe is spidery, resembling a honeycomb or foam of galaxies. In a remarkable vindication of our understanding of how the Universe works, this honeycomb structure is exactly what you would expect to see if you could fast-forward the patterns in the CMBR to the present time.

Comparison of the Universe’s geometry at these two periods has already allowed a wealth of information on its evolution to be deduced. But the missing ingredient has been a similar three-dimensional survey of galaxies at great enough distances that they correspond to a look-back time of about half the age of the Universe—a time seven or eight billion years ago when dark energy first began to make its presence felt. Investigating the standard rulers in such a survey would require observations of hundreds of thousands of faint galaxies, a hugely ambitious program. That has now been at least partially provided by a number of galaxy surveys, including one named WiggleZ, which was carried out on the AAT. The Z in WiggleZ is the scientific symbol for redshift, but the Wiggle part is all about the acoustic oscillations and the structure they imprinted as the Universe evolved. Completed in January 2011, WiggleZ measured the redshifts of 200 000 galaxies, mapping the cosmic structure across look-back times of up to eight billion years.

Further down the track, studies such as this could be carried out with even greater precision if bigger telescopes were used, allowing fainter galaxies to be observed. That will be the province of the new generation of 20- to 30-metre telescopes—the extremely large telescopes. These represent the latest step in the evolution of the telescope and are no more than a decade away—at least in terms of their construction. There are several projects under consideration, including E-ELT (the 39-metre European Extremely Large Telescope), the US Thirty Meter Telescope and the international Giant Magellan Telescope, in which Australia is already a partner. The last of these is of particular interest for studies of dark energy, as it will have a sufficiently wide field of view to make it a formidable survey telescope rather than simply an instrument that studies single objects in minute detail.

So, what results are we acquiring from studies such as WiggleZ and distant supernova measurements? Both these techniques have returned information on the nature of dark energy that has a decided preference for Einstein’s old nemesis, the cosmological constant. Supernova studies indicate that the negative pressure of dark energy seems to have changed by less than 20 per cent since the Universe was about half its current size. Likewise, the independent WiggleZ measurements are best fit by a model with a constant negative pressure. These are truly remarkable findings, not least because they strongly hint that even when Einstein thought he was blundering, he was actually right. You really do have to take your hat off to him.

WONDERING WHAT’S THE MATTER?

As you might expect, astronomers are pretty frustrated that they don’t know what 72 per cent of the Universe is made of. It keeps some of them awake at night. But they do know what the other 28 per cent is, right?

Wrong, I’m afraid. Just when you thought you were getting your head around the mass/energy budget of the Universe, along comes something else to thwart you. In fact, the other big unknown—dark matter—has a much longer history in scientific research than dark energy. We’ve been mystified for longer, but, on the positive side, we could be closer to finding out what it is.

How do we know dark matter is present in the Universe? It reveals itself only by one thing—the effect of its gravitational attraction on matter that we can see. Other than that, we have no way of detecting it, at least for the time being. We can, however, sense this gravitational smoking gun by a number of means—described below—and they all give the same answer: that the ordinary matter we see by the radiation it emits (for example, in stars and glowing gas clouds) or absorbs (for example, in dust clouds silhouetted against a bright background) amounts to only one-sixth of what gravity tells us is there. Embarrassingly, dark matter outweighs visible matter by five to one.

In fact, the very first estimate of how much dark matter there is gave an even higher imbalance, because at the time of its discovery astronomers didn’t know about the copious quantities of perfectly normal matter in the form of hydrogen that surrounds most galaxies. The man who first noticed that something didn’t add up—and thereby stumbled across dark matter—was one of the great characters of twentieth-century science. He was a Swiss-US astronomer by the name of Fritz Zwicky, and he was interested in clusters of galaxies, the largest concentrations of matter in the Universe. Like many of his contemporaries, Zwicky didn’t suffer fools gladly, and once famously described some of his colleagues not merely as ‘bastards’ but as ‘spherical bastards’. Why? Because, according to Zwicky, they were bastards whichever way you looked at them—and the only thing that looks the same from all directions is a sphere. He did have a way with words.

In 1933, the 35-year-old Zwicky was studying a cluster of galaxies in the constellation of Coma Berenices (Berenice’s Hair) in the northern-hemisphere sky. With the tried and tested method of spectroscopy, he used the Doppler effect to measure the radial velocities of several members of the cluster. He was astonished to find that these galaxies seemed to be moving too quickly, relative to the cluster, for its gravity to hold on to them. Given their velocities, the galaxies he was observing should have escaped long ago, because the gravitational attraction of all the visible matter was simply not enough to stop the cluster disintegrating. Zwicky calculated that it would require 400 times more mass than he could account for to keep the cluster intact—an over-estimate, as I have mentioned, but an understandable one. However, he was spot on in his inference that something invisible was present, a component that neither emitted light nor absorbed it from the radiation of background objects.
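
To see how startling that arithmetic is, here is a rough modern version of Zwicky’s estimate, using round illustrative numbers rather than his 1933 figures:

```python
G = 6.674e-11          # gravitational constant, SI units
MPC = 3.086e22         # one megaparsec in metres
M_SUN = 1.989e30       # mass of the Sun in kilograms

sigma = 1.0e6          # typical random speed of a cluster galaxy, m/s (about 1000 km/s)
radius = 3.0 * MPC     # assumed size of the cluster, a few megaparsecs

# For the cluster to hang together, its gravity must be able to rein in
# galaxies moving this fast, which requires very roughly sigma^2 * radius / G
# of mass within it.
binding_mass = sigma ** 2 * radius / G

print(f"Mass needed to hold the cluster: about {binding_mass / M_SUN:.0e} solar masses")
# The starlight of the member galaxies accounts for only a small fraction of
# this figure; the shortfall is what led Zwicky to infer unseen matter.
```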

Surprisingly, not a lot happened as a result of Zwicky’s observations, since astronomers didn’t really understand them. It was not until 1970 that a lone voice was raised in concern about the behaviour of galaxies themselves. That voice belonged to a 30-year-old Australian researcher by the name of Kenneth C. Freeman—who had set about investigating the way galaxies rotate, and had discovered new evidence for the existence of dark matter. Once again, the underlying principle was to measure the motion of objects using the Doppler effect, but this time the objects were not galaxies in clusters, but clouds of gas in individual galaxies.

The trick here is to measure the characteristic speeds of objects within a galaxy—stars or gas clouds, it doesn’t really matter—and look at the way those speeds change between objects moving around the centre and those moving around the edge. The result is a graph called a ‘rotation curve’, and it is easiest to measure in spiral galaxies that are almost edge on to our line of sight. Basic orbital dynamics tells you that if you assume the mass of your galaxy is concentrated where it is brightest, then objects closest to this point will whizz around much more rapidly than those further out. Thus, you would expect the rotation curve to be highest near the centre and steadily decline towards the edge—exactly as you would find if you made the measurements with the planets of our Solar System.

However, Freeman was surprised to find that this was not what his results showed. Yes, the objects nearest to a galaxy’s centre did whizz around rapidly, but, far from declining with increasing distance from the centre, the rotation curves stayed almost level, right out to the extremities of the disc. The only way such curves could be explained was if there was a lot of invisible material in the outer parts of each galaxy. There it was again—Zwicky’s mysterious dark matter.
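
Here is the logic in miniature, with an invented but typical flat rotation speed. The point is simply that a flat curve forces the enclosed mass to keep growing with radius:

```python
G = 6.674e-11          # gravitational constant, SI
KPC = 3.086e19         # one kiloparsec in metres
M_SUN = 1.989e30       # solar mass in kilograms

# For a roughly circular orbit, the mass enclosed within radius r must be
# about v^2 * r / G. If the measured speed stays flat at, say, 220 km/s
# (an assumed, Milky-Way-like figure), the enclosed mass keeps climbing in
# step with radius, even where the starlight has all but faded away.
v_flat = 2.2e5         # flat rotation speed, m/s

for r_kpc in (5, 10, 20, 40):
    enclosed_mass = v_flat ** 2 * (r_kpc * KPC) / G
    print(f"r = {r_kpc:2d} kpc : enclosed mass of about {enclosed_mass / M_SUN:.1e} solar masses")

# Had the visible, centrally concentrated matter been the whole story, the
# speed would instead have fallen away roughly as 1/sqrt(r), like the planets.
```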

At the time, this conclusion had a mixed reception in the scientific world. Some astronomers took the dark matter issue seriously, while others thought there was no problem. However, Freeman’s eventual vindication was celebrated in October 2012 when, to the delight of all Australian astronomers, he received the nation’s highest scientific honour, the Prime Minister’s Prize for Science. Like Nobel Laureate Brian Schmidt, Ken Freeman is an astronomer at the Australian National University’s Mount Stromlo Observatory. There must be something in the water there.

It was not until 1978, four years after Zwicky’s death, that the dust began to settle over the issue. By then, a US astronomer called Vera Rubin had extended Freeman’s observations, and realised that each galaxy must be embedded in a giant spherical halo of something invisible that was providing additional gravitational attraction on the stars or gas clouds. Since Rubin’s work, the study of dark matter has become a veritable crusade among astronomers. It’s hardly surprising, given that we’d like to think we know what the Universe is made of. But this has led to a remarkably detailed view of the characteristics of dark matter—even though its exact nature has remained elusive.

At first, in the wake of these discoveries, there were two competing theories as to what constitutes dark matter. Uncharacteristically for astronomers, the candidates were given rather clever names, being described as either WIMPs or MACHOs. (The former stands for ‘weakly interacting massive particles’, while the latter means ‘massive compact halo objects’. You’d probably guessed that already.) Taking the MACHOs first, the idea was that galaxies might be accompanied by an unexpectedly high number of dark objects like massive planets, dim brown dwarf stars and perhaps even black holes. They would occupy the galaxies’ spherical halos, which were already known to be populated by faint stars. Such MACHOs would be very difficult to observe but, if they were present in sufficiently high numbers, might account for the dark matter. The alternative theory was that dark matter exists at the subatomic level and that what we are seeing is the effect of vast swathes of subatomic particles that do have mass but don’t interact significantly with the various particles that constitute normal matter. WIMPs, in other words.

It didn’t take long to establish that MACHOs couldn’t be the primary component. An experiment performed during the 1990s with a 1.25-metre telescope at the Mount Stromlo Observatory found that there weren’t enough MACHOs to account for the observed levels of dark matter. The observations were made using a clever technique called ‘gravitational microlensing’. You take tens of thousands of repeated images of a single region of sky containing a huge number of distant stars—for example, the central region of our Milky Way Galaxy, or our neighbour galaxy, the Large Magellanic Cloud. If the intervening space were populated by large numbers of MACHOs, you would expect to see a characteristic rise and fall in the brightness of some of the background stars over the period during which the images were made. That would signify that MACHOs had been passing in front of them.

Why would intervening objects brighten the background stars rather than dim them? Because, as we discovered in Chapter 8, the mass of an object distorts the space around it, making it bend. Whereas the bending produced by a nearby object such as the eclipsed Sun simply distorts the apparent positions of stars, more distant objects bend light in the same way that a glass lens would. It’s an unusually shaped lens, admittedly, since it’s more like the bottom of a wine glass than a normal lens. That means the focusing effect is nowhere near as crisp as in a real lens and results in arc-like images of the distant objects. (Check them out next time you’re enjoying a glass of wine—but do make sure the glass is empty first.) Nevertheless, the principle is the same. The distorted space around the MACHO would act like a kind of magnifying glass, enhancing the brightness of any star it passed in front of.
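
The rise and fall that the Mount Stromlo team hunted for can be sketched with the standard point-lens brightening formula. The event’s timescale and alignment below are invented for illustration:

```python
import math

# If u is the separation between the MACHO and the background star, measured
# in units of the lens's 'Einstein radius', the star is brightened by a
# factor (u^2 + 2) / (u * sqrt(u^2 + 4)). Sampling that factor as the lens
# drifts past gives the tell-tale symmetrical rise and fall.

def magnification(u):
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

u_min = 0.3            # closest approach, in Einstein radii (assumed)
t_cross = 40.0         # time to drift one Einstein radius, in days (assumed)

for day in range(-60, 61, 20):
    u = math.sqrt(u_min ** 2 + (day / t_cross) ** 2)
    print(f"day {day:+4d}: star brightened by a factor of {magnification(u):.2f}")
```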

In failing to find an adequate number of MACHOs, the Mount Stromlo experiment put the spotlight firmly on WIMPs as the origin of dark matter, and there it remains today. However, even though we don’t yet know what these subatomic particles are, our understanding of their properties has blossomed over the last decade.

The galaxy redshift surveys I mentioned in connection with dark energy also have a major role to play in studies of dark matter, because dark matter exhibits gravity, and we know that the effect of that is to distort the geometry of space. Likewise, visible matter has a distorting effect, but we can allow for this because we can see where it is. This tells us, for example, that the ratio of dark matter to visible matter in the Universe is about five to one, and that dark matter and visible matter actually concentrate together. ‘Beacons of light on hills of dark matter’ is one eloquent description of clusters of galaxies.


Dark energy and dark matter have quite different properties, and this allows us to disentangle their effects on the geometry of the Universe. For a start, dark energy is a relatively weak repulsive force, while dark matter exhibits a pretty robust level of gravitational attraction. Moreover, dark energy is everywhere—a property of space itself—whereas dark matter occurs in blobs in the vicinity of galaxies. When the large-scale geometry of the Universe is investigated with all this taken into account, we arrive at the 72, 23 and 5 per cent mix of dark energy, dark matter and normal matter mentioned earlier. And it’s worth noting that most of that normal matter is actually hydrogen and helium left over from the Big Bang still permeating the Cosmos. The stars in the Universe—all of them—constitute a mere 0.5 per cent of the total, while the stuff from which our world and we ourselves are made—carbon, oxygen, nitrogen, silicon, iron and so on—amounts to a humble 0.03 per cent. If that doesn’t put things into perspective, nothing will.

A number of new galaxy surveys are continuing the exploration of space’s geometrical distortions in great detail. One, called Galaxy and Mass Assembly, involves observations of 375 000 galaxies with a wide range of different telescopes (including the AAT). Its aim is to produce the most comprehensive database to date of galaxies and their properties as they have evolved over the past one-third of the age of the Universe. Of course, the dark matter halos of galaxies are part of the survey’s stock-in-trade.

Knowing that dark matter concentrates wherever visible matter is found suggests another approach to the investigation. Our own Milky Way Galaxy is a giant spiral system of 400 billion stars with associated gas and dust—and dark matter. So dark matter is all around us. How is it distributed? Does it occur in clumps, and, if so, how big are they? And what might we learn from their size? The fact that we are surrounded by stars whose histories reflect the history of our Galaxy as a whole suggests that much can be learned about our Galaxy’s past by studying them. As we saw in Chapter 7, this is the basis of the science of Galactic archaeology and has prompted large-scale star surveys such as RAVE and the GALAH experiment. One possible spin-off from these surveys is a more detailed look at how dark matter is structured. Stars move under the influence of gravity, and if we sample large populations of stars the underlying gravity field can be mapped and the dark matter component identified. A likely outcome of this is an estimate of the minimum size of a clump of dark matter. Such estimates have already been made from more limited star velocity surveys, and suggest it may be around 1000 light-years across. This has implications for the temperature—the energy of motion—of the dark matter particles, hinting that they may be warmer than expected, at a few degrees above absolute zero rather than the few tenths of a degree that is usually assumed. As the various surveys evolve, we will see better measurements being made, and it is possible that the temperature of dark matter will eventually be one of the better-determined outcomes.

In the future, this technique of large-scale velocity mapping will be extended to sizeable samples of individual stars in our neighbouring galaxies—stars not yet accessible with existing telescopes. But, with the coming generation of extremely large telescopes mentioned earlier, such observations will be fairly straightforward.


Another powerful way of probing the structure of dark matter in the Universe goes back to the idea of matter distorting the space around it to produce a lens-like effect on background objects. The most dramatic distortions of space come from the most massive objects, and they are not just single stars—or even single galaxies—but clusters of galaxies. Whereas single stars exhibit gravitational microlensing, there’s nothing micro about the gravitational effect of a galaxy cluster. Imagine a cluster of galaxies sitting in front of a group of much more distant galaxies in the background. The mass of the foreground cluster distorts the space around it, generating a gravitational lens of the kind we met a couple of pages ago. We’ve seen that this has the potential to magnify the light of anything in the background, acting as a kind of gigantic natural telescope, so that we can often detect distant galaxies that would otherwise be invisible. But the distant galaxies are also turned into distorted arcs of light by this process, exactly in accordance with the wine glass model described earlier. By analysing these arcs in a statistical manner, it’s possible to reconstruct the distortion of space around the foreground galaxy cluster, making a detailed map of the distorted geometry. Then, from the map, the actual distribution of matter in the foreground cluster can be charted accurately.
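
To get a feel for the scales involved, here is a rough estimate of the angular size of such a lens, treating the whole cluster as a single point mass. That is a crude simplification, and all the numbers are assumptions chosen for illustration:

```python
import math

G = 6.674e-11           # gravitational constant, SI
C = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg
GPC = 3.086e25          # one gigaparsec in metres

cluster_mass = 5.0e14 * M_SUN   # assumed mass of the foreground cluster
d_lens = 1.0 * GPC              # distance to the cluster (assumed)
d_source = 2.0 * GPC            # distance to the background galaxies (assumed)
d_lens_source = 1.0 * GPC       # lens-to-source distance, taken here as the simple
                                # difference, a shortcut for the true cosmological value

# The 'Einstein radius' sets the angular scale of the ring or arcs into which
# a well-aligned background galaxy is smeared.
theta = math.sqrt(4 * G * cluster_mass * d_lens_source / (C ** 2 * d_lens * d_source))

print(f"Einstein radius of about {math.degrees(theta) * 3600:.0f} arcseconds")
```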

But here’s the clever bit. Gravitational lensing is the result of both visible and dark matter, so the fact that we know where the visible matter is allows us to deduce exactly where the dark matter is hiding. This is an extremely effective technique and confirms that galaxies are embedded in large volumes of dark matter, just as suggested by the other methods. Because of the extreme faintness of the background objects, the work was the province of the Hubble Space Telescope during the 1990s, but today’s generation of 8- and 10-metre-class telescopes is capable not only of providing images of the faint background objects but also of obtaining spectra to establish their distances. In the future, it will be extended to very great distances indeed—at which almost all objects are distorted by the lensing effect of intervening matter—using the new extremely large telescopes.

Meanwhile, the Hubble Space Telescope continues to provide new insights into the behaviour of dark matter using the lensing technique. Several recent sets of observations have allowed us to explore the behaviour of dark matter when galaxy clusters collide. While the galaxies and the rarefied gas clouds accompanying them grind to a halt in the pile-up, their respective dark matter halos carry on oblivious to the chaos, a result of the feeble interaction between dark and normal matter. Other experiments have probed the distribution of dark matter over large distance ranges. One compared galaxies with a look-back time (equivalent to distance) of about 3.5 billion years, and others have look-back times almost twice as long. Crucially, differences in the characteristic size of the dark matter cocoons of these galaxies suggest that typical clumps of dark matter have become more fragmented as time has gone on. Such observations involving a wide range of look-back times enhance our understanding of the vital role played by dark matter in the evolution of today’s galaxies. The assumption is that dark matter clumped together in the early Universe and its enhanced gravitational field then attracted concentrations of hydrogen from the Big Bang, which eventually collapsed into stars and galaxies. A consequence of this is that dark matter at greater look-back times should be less clumped than it is in the more recent past—which is exactly what has been found.

SCIENCE IN A SPIN

By combining all of these various studies of dark matter, astronomers hope to learn enough about its behaviour for a clear leader to emerge from the various competing models. These have been built primarily by the theoretical physicists who study subatomic particles. But it is in the experimental research accompanying this that the quest for dark matter is taking perhaps its most exciting turn—if you’ll pardon the pun—bringing hopes of a real breakthrough. It also leads us straight back to the underlying theme of this book, because those experiments are taking place at one of the most inspiring scientific institutions in the world, and it is an institution that makes visitors welcome.

I’m sure you’re aware of the Large Hadron Collider (LHC), the giant atom-smasher near Geneva, which is operated by the European Centre for Nuclear Research. This machine is the successor to the facility visited by our old friend the comedian Ken Campbell earlier in this chapter. That was the Large Electron-Positron Collider, and it occupied a 27-kilometre-long circular tunnel, which was excavated for it in the 1980s. When plans were made for a new machine, the same tunnel was to be used, so there was a lengthy period when physics took second place to engineering on the site. The LHC was first fired up in September 2008 and, apart from one early mishap that kept it out of action for over a year, it has performed as brilliantly as expected since colliding its first beams in earnest at the end of 2009.

The LHC’s role is to accelerate two streams of subatomic particles such as protons around circular paths in opposite directions, achieving speeds close to the speed of light—and then collide them together. What could be simpler than that? But the technology required to achieve this is little short of astonishing. As one of the largest scientific experiments in the world, the LHC bursts with engineering superlatives. Unfortunately for spectators—or perhaps fortunately—most of the high-energy action takes place deep underground, where the old Large Electron-Positron Collider tunnel now houses twin vacuum tubes containing the particle beams. The plumbing alone is staggering. For example, rope-like skeins of microscopic copper tubing run for kilometres carrying supercooled liquid helium. The pressure inside the beam pipes is ten times lower than that at the surface of the Moon, meaning that there are fewer residual air molecules for the particles to bump into. And those particles are kept on track by superconducting magnets that are colder than space itself. Some folk have commented unfavourably on the centre’s €6 billion price tag for the LHC, but my personal view is that it’s amazing they have been able to build it so cheaply.
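
Just how close to the speed of light do the protons get? A quick calculation tells the tale. I have assumed a beam energy of 3.5 trillion electron volts per proton, typical of the machine’s early running:

```python
import math

C_KM_S = 299_792.458              # speed of light, km/s
PROTON_REST_ENERGY_GEV = 0.938    # proton rest-mass energy, GeV

beam_energy_gev = 3500.0          # 3.5 TeV per proton (assumed, early LHC running)

gamma = beam_energy_gev / PROTON_REST_ENERGY_GEV        # relativistic factor
speed_km_s = C_KM_S * math.sqrt(1.0 - 1.0 / gamma ** 2)

print(f"Lorentz factor: about {gamma:.0f}")
print(f"Proton speed:   {speed_km_s:.3f} km/s")
print(f"Shortfall:      only {(C_KM_S - speed_km_s) * 1000:.0f} metres per second slower than light")
```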

As you can probably guess, I’m a bit of a fan of the LHC, and to date I’ve had the opportunity to make two visits there. The first was with the Stargazer II tour, in 2010, when we were hosted by scientists Klaus Bätzner and Quentin King, who graciously fielded all our questions about the machine and its various experiments. A highlight of the tour was lunch in one of the centre’s cafeterias, where thousands of enthusiastic scientists and engineers spend their lunchtimes eating, drinking and talking physics. The excitement there was palpable, and it remained with us as we boarded the high-speed train to Paris later that afternoon. We touched almost 300 kilometres per hour on that journey, a minuscule fraction of the almost 300 000 kilometres per second reached by the protons circulating in the collider—but, unlike the protons, we didn’t collide with anything.

My second visit was probably a consequence of the first, because by then everyone thought I was an expert on the LHC. (As if . . . ) In some ways it was even more exciting, because it sought to tread in Ken Campbell’s footsteps by making a down-to-earth TV documentary about this most esoteric of human endeavours. That led our TV crew to parts of the facility that are normally out of bounds to visitors, such as the inside of some of the experimental control rooms. And, once again, it led us to the cafeteria, where the buzz was captured on camera with the help of a handful of Australian and Kiwi scientists who were happy to give us a lunchtime taste of their work.

In an echo of the high-speed train ride, that trip also led us to a ridge in the Jura Mountains behind Geneva, to a place called Col de la Faucille, which boasts a gut-wrenching rollercoaster ride intended to simulate an Olympic luge track. The idea was to give viewers a hint of what it might feel like to be a subatomic particle circulating in the LHC, by mounting a camera on one of the cars while we presenters took turns expounding the details of impossible sideways accelerations around the ride’s hairpin bends. This time, we reached speeds of only around 40 kilometres per hour—but, when your bottom is just a few centimetres above the track, I can tell you it feels a lot like the speed of light.


All the marvellous engineering notwithstanding, it is the LHC’s potential for scientific discovery that excites physicists and astronomers. By smashing together particles such as protons, the LHC is effectively acting as a gigantic super-microscope, probing matter on the smallest scale by examining the cascade of subatomic debris released in the collisions. It is exactly analogous to breaking things into successively smaller pieces until they can’t be broken any more. Then you know you are dealing with the fundamental building blocks of matter. The collider’s first job was to test something called the ‘standard model’, which is a complex hierarchy of those fundamental building blocks. It consists of twelve indivisible particles of matter that range from the familiar (like electrons) to the decidedly peculiar (like top quarks).

Then there are four force particles carrying three of the four fundamental forces of nature. Particles carrying forces? Welcome to the strange world of subatomic physics. And why four particles to carry three forces? Because one of them, the weak nuclear force, is greedy, and needs two. And what happened to the other fundamental force? That is gravity, and it is actually beyond this standard model at present, because we don’t yet have a satisfactory theory of quantum gravity describing the way it acts on very small scales.
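
For the record, here is that head-count made concrete. It is nothing more than a tally, and the gluon is counted once even though it comes in several varieties:

```python
# Twelve matter particles and four force carriers for three forces, as counted
# above; gravity sits outside the model for now.

MATTER_PARTICLES = {
    "quarks":  ["up", "down", "charm", "strange", "top", "bottom"],
    "leptons": ["electron", "electron neutrino", "muon", "muon neutrino",
                "tau", "tau neutrino"],
}

FORCE_CARRIERS = {
    "electromagnetic": ["photon"],
    "strong nuclear":  ["gluon"],
    "weak nuclear":    ["W boson", "Z boson"],   # the 'greedy' force with two carriers
}

matter_count = sum(len(family) for family in MATTER_PARTICLES.values())
carrier_count = sum(len(carriers) for carriers in FORCE_CARRIERS.values())

print(f"{matter_count} matter particles, {carrier_count} force carriers, "
      f"{len(FORCE_CARRIERS)} forces")
```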

The standard model does, however, suggest the existence of a particle that had not yet been found at the time of my visits—the Higgs boson, postulated in the 1960s by the Edinburgh University physicist Peter Ware Higgs and, independently, by a handful of theorists elsewhere. Boson is the generic name given to the force carriers mentioned above, and the Higgs boson is thought to endow all of the other standard model particles with the property of mass, so it’s fundamental to the model. And one of the first tasks of the LHC was to discover where it could be hiding. Not in terms of its position in space but where it lies in the range of energies that characterise particles in the subatomic world—in a similar manner to the way spectrum lines characterise chemical elements that are emitting light. By the end of 2011, the first hint of the Higgs had been announced. The evidence was spotted not just in one but in two of the ongoing experiments being carried out at the LHC, an essential requirement for the finding to be considered valid. But confirmation of the Higgs’ existence and a measurement of its mass took much more work, and it wasn’t until July 2012 that a formal announcement was made. Even then, the statement was couched in the most cautious of terms, as befits a discovery in a science that depends so heavily on probabilities. The likelihood is, though, that the Higgs boson has now been found, and physicists the world over can get on with more detailed investigations of the standard model with renewed confidence that it correctly describes what nature is up to.

It is another area of physics—a postulated extension to the standard model called ‘supersymmetry’—that carries the hopes of astronomers regarding the nature of dark matter. The idea behind supersymmetry is that each particle in the standard model has a massive ‘shadow particle’ that has not yet been detected. So there might be an entire suite of undiscovered particles that together constitute a supersymmetric version of the standard model. The expected characteristics of at least one of these shadow particles would exactly fit the bill for dark matter. It would be massive but would not interact with normal matter, except through gravity. Finding supersymmetry at the LHC is turning out to be just as difficult as the hunt for the Higgs, with most of the simpler models having already been ruled out. But astronomers and physicists remain hopeful of a breakthrough in this area. And the odds are that when the first announcement of the true nature of dark matter is made, it will come not from a telescope but from a particle collider.


Meanwhile, a recent incident at the same Geneva laboratory provides a tantalising glimpse of the reception that awaits the discoverers of any hint of new physics. In September 2011, the extraordinary announcement was made that exotic subatomic particles called ‘neutrinos’, fired from Geneva to an underground detector in Italy, had been clocked travelling ever so slightly faster than the speed of light. Although it’s an idea beloved of science-fiction writers, most folk know that faster-than-light travel is firmly prohibited by Einstein’s Special Theory of Relativity, because accelerating an object to the speed of light requires infinite energy. This tried and tested rule is at the heart of our understanding of the Universe, but it is just possible that the physical world actually consists of more than the three dimensions of space and one of time that we perceive—and might have hidden higher dimensions. If that is the case, then objects like neutrinos may be able to take short cuts through these higher dimensions, arriving fractionally before they otherwise would, while still obeying the laws of special relativity. Thus, the result of the neutrino experiment was viewed as a tantalising hint of the existence of such new physics and attracted intense media interest.

The overwhelming view of the world’s scientists, however, was that the faster-than-light effect would be shown to have an explanation entirely within the realm of normal physics. And, little more than six months after the original announcement, that’s exactly what happened. A tiny measurement error apparently caused by a loose connection had resulted in an incorrect calculation of the neutrinos’ flight times. Red-faced, the head of the experiment resigned his position, despite having been ultra-cautious when he’d first presented his astonishing result.

So, RIP new physics? Not necessarily. It remains possible that, some day, we may find similar evidence that does stand up to scrutiny, with implications for science that are truly staggering. The discovery of hidden dimensions would allow new thinking on a wide range of topics, including the origin of that other big cosmic mystery, dark energy. And, as we shall see in the final chapter, such a discovery could feed directly into our understanding of reality at the most profound level.

For those of us cheering them on from the sidelines, it is a most exciting time to be watching the work of scientists and engineers at the LHC. There is unprecedented interest among ordinary people in what is happening there, and, as a result, the world of physics may even be starting to lose its image as the exclusive province of nerds. Books, articles and even TV segments featuring insane rollercoaster rides are all helping to satisfy the public’s hunger for information. And here’s a suggestion for one more educational item in this field. You can probably guess what it is. Forget the pop-up Big Bang. What we need now is a pop-up book of new physics, complete with faster-than-light particles. With all those hidden dimensions, the folded-paper work should be easy.