The recipe for making a black hole is theoretically very simple, yet in practice rather difficult. Essentially, throw enough matter into a small enough space, crush it down and voila! A black hole will result. Now I can’t speak for everyone, but my puny noodle arms definitely aren’t strong enough to crush matter down in this way, and I imagine neither are yours. I’m sure even veterans of the recipe game like Mary Berry would struggle to follow that one.
Luckily for us,37 there are processes in the Universe which can follow this recipe with relative ease, thanks to gravity. Annoying as gravity is, keeping us hostage here on Earth, we also have it to thank for our very existence. In essence, gravity likes to clump things together, whether that’s two tiny fundamental particles or two rather large lumps of rock. The force that ruled the early Universe and gave us the first structures out of just tiny atoms of hydrogen is the same force that turned a random clump of gas on the outskirts of the Milky Way into the Solar System: Earth and all.
At the beginning of the Universe, space, time and the basic building blocks of matter were formed: protons, neutrons and electrons. Eventually, when the Universe had cooled enough from its hot dense state, those building blocks came together to make atoms, the majority of which were hydrogen atoms. That’s pretty much all there was in those days – it’s why the early Universe is described as a ‘soup of hydrogen’, because nothing better describes the boring uniformity of it all than soup. But here’s the kicker: technically it wasn’t quite uniform. In the first fractions of a second of the Universe’s life, tiny random quantum flutters made some bits of the Universe slightly denser and some a bit emptier. As the Universe expanded, these tiny quantum flutters grew like ripples on a pond, with more hydrogen forming in some places than others.
Those areas that were already slightly denser, with just a bit more hydrogen, slowly started to clump together and attract yet more hydrogen. And slowly, over a few hundred million years, enough hydrogen clumped together to become hot and dense enough for hydrogen atoms to fuse together to make a helium atom, and the first stars were born. If this was a recipe, the Universe got all the ingredients out of the cupboard, quantum flutters and gravity took care of the mixing, and finally the first stars started the cooking. When the first stars ran out of fuel, supernovae then littered space with the heavier elements – things like carbon, nitrogen, oxygen and iron – polluting pristine hydrogen gas with what astronomers refer to as dust. That dusty gas got recycled by gravity to form the next generation of stars, in a cycle of clumping under gravity, fusion and yet more supernova pollution.
Eventually, after there’d been a few generations of stars in one region of the Universe, there was enough dust for gravity to start clumping it together to give solid objects that we might recognise as lumpy asteroids around newly formed stars. If gravity continued to get its way, those lumpy bits of rock kept on clumping together to form planets, moons and entire star systems like our own Solar System. Unfortunately for us, our own Solar System is not destined to become a black hole: the Sun simply isn’t massive enough to crush its own core that far. Instead, the Sun will leave behind a core that’s a messy mix of helium, carbon and oxygen that will glow like the dying embers of a fire – something we call a white dwarf.
But what’s stopping a white dwarf from collapsing into a black hole? In fact, what’s to stop any star as it goes supernova from collapsing into a black hole? With no fusion, surely there’s nothing left to stop the endless crush of gravity inwards that has shaped the rest of the Universe around us? To find out why not all stars become black holes, we must once again understand the world of the very small: of atoms, themselves made up of protons, neutrons and electrons.
What the building blocks of all things in the Universe are is a question humans have asked since we learnt to ask questions. The basic idea that everything could be made up of tiny particles that are indivisible is a very old one, found in many ancient cultures from India to Greece. These particles were dubbed ‘atoms’, from the Greek atomos meaning ‘uncuttable’, i.e. these are the basic building blocks of all matter and they are indivisible. There is nothing below an atom.
That idea, that the atom could not be split, pervaded both religious and scientific minds until the late nineteenth century, when a discovery sent people reeling. In 1897, British physicist Joseph John ‘J. J.’ Thomson was experimenting with something called cathode rays at the Cavendish Laboratory at the University of Cambridge. Cathode rays are generated when two metal rods, one positively and one negatively charged, are placed in a vacuum (a space where all the air molecules have been sucked out). Usually these are in glass tubes, and if you leave a tiny bit of air in there, you can see a slight glow caused by the cathode rays travelling from the negative to the positive rod. These cathode ray tubes look a bit like a modern neon light sign, and were used throughout the twentieth century in the backs of old-style television sets.
Thomson was trying to figure out what cathode rays were made of. The slight glow must be caused when something collides with molecules in the glass, causing them to give off light. But what was that something? Thomson decided to try and measure the mass of whatever the cathode ray was made of and was shocked to find that the individual particles were over 1,000 times lighter than a hydrogen atom, the lightest ‘indivisible’ atom known. What’s more, he found that no matter what type of metal rod he used to produce cathode rays, the mass of the particles making them up never changed. The mass was the same no matter the type of atom they came from. He concluded that the only explanation was that the cathode rays were made of very small negatively charged particles (since they travelled from the negative to the positively charged rod), which were a universal building block of all atoms. They were subatomic particles. The atom had been split.
What Thomson had discovered was the electron (although he originally dubbed them corpuscles – there’s a name I’m glad didn’t stick), and with it he redefined how we think of atoms.38 No longer were they indivisible; they were made up of yet smaller particles, like electrons – but what else? Atoms were known to be neutral, so Thomson reasoned that there must also be something positively charged inside them. In 1904, he proposed what has become known as the ‘plum pudding model’ of an atom: a sphere of positively charged matter within which the electrons were embedded, like the fruit in a plum pudding.
The plum pudding model, although delicious-sounding with custard, did not stand the test of time, lasting less than a decade before another model usurped it. It was one of Thomson’s own protégés, New Zealand physicist Ernest Rutherford,39 who would find the evidence against the plum pudding model. Rutherford had been working with Thomson in the Cavendish Laboratory in 1897 when Thomson discovered the electron, but Rutherford was distracted by Henri Becquerel’s recent (1896) discovery of the strange properties of uranium, and, like Marie Curie, set out to investigate further. It was Rutherford who coined the term ‘half-life’ for radioactive elements, realising that the time it took for half of a sample of radioactive material to decay was always the same, thereby giving geologists the information they needed to figure out how old the Earth was.
In 1907, he moved to the University of Manchester, where he continued to study what was emitted by radioactive elements when they decayed. He had already identified three different types of radiation, which he dubbed alpha, beta and gamma (this is where gamma rays of light get their name from), and showed that when the decay happens an atom spontaneously transforms into another type of atom (another element). It was for this that he won the Nobel Prize in Chemistry in 1908. Not one to slow down after winning the highest honour there is, Rutherford did his most famous work in the years following his Nobel Prize victory, on the nature of alpha radiation.
Working with German physicist Hans Geiger (of Geiger counter fame – the device for counting radioactive particles), he showed that alpha radiation was made of particles with a charge twice that of a hydrogen ion. Then, working with British physicist Thomas Royds (a local boy at Manchester University, having been born in Oldham), he managed to show that you could make helium using alpha particles; we now know alpha particles are helium atoms with their electrons removed, which is why they are positively charged. To understand this properly, Rutherford wanted to measure the ratio between the charge and mass of alpha particles (this was also how Thomson had managed to discover the nature of the electron). To do this, you set alpha particles moving through a magnetic field and measure how much they are deflected (the greater the charge the greater the deflection, but the heavier the mass the more it will resist deflection). The problem was that the particles kept pinging off molecules of air that got in the way, like a pool break shot scattering balls everywhere, making the measurement unreliable.
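To give a rough sense of how such a charge-to-mass measurement works (a simplified textbook sketch, not the exact set-up Rutherford used): a particle of charge q and mass m moving at speed v at right angles to a magnetic field of strength B gets bent into a circle of radius

\[
r = \frac{mv}{qB}, \qquad \text{so} \qquad \frac{q}{m} = \frac{v}{Br}.
\]

Measure how tightly the path curves, with the field strength and speed known, and the charge-to-mass ratio falls straight out – which is exactly why stray air molecules knocking the particles off course were such a nuisance.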
Thomson had had the exact same problem when he was measuring the charge-to-mass ratio of an electron, and he solved the problem by doing the entire experiment in a perfect vacuum (i.e. removing all the pesky air in the way). Rutherford didn’t think he’d have to do the same thing, because alpha particles were much heavier than electrons (roughly 7,300 times heavier), and in Thomson’s plum pudding model of the atom the sphere of positive charge wasn’t concentrated enough to be able to deflect a particle that heavy.
Rutherford decided to investigate this scattering very carefully, with the help once again of Hans Geiger and British-New Zealand physicist Ernest Marsden.40 Together, they fired alpha particles at thin sheets of gold foil in a vacuum and recorded where the alpha particles ended up. The overwhelming majority went straight through the foil unhindered, but a small fraction were deflected. Most of those alpha particles were deflected by small angles, but again a small fraction of those were deflected so much that they made complete U-turns, coming back towards where they were fired from.
With this new information, in 1911 Rutherford concluded that the only way to explain what they’d found was if the positive charge in an atom was concentrated in a tiny section right in the centre, orbited by electrons of much lower mass. In his model, more than 99 per cent of the atom was empty space, which allowed the majority of the alpha particles to sail straight through the foil made of gold atoms. Rutherford continued his experiments with atoms and by 1920 had figured out that the hydrogen atom, as the lightest possible atom there was, must have a nucleus made of another basic subatomic particle, which he dubbed the proton.
This paradigm shift in the structure of the atom – from indivisible to made up of yet more particles, arranged almost like the Solar System itself – set in motion one of the largest knowledge jumps humanity has experienced: from an understanding of the periodic table and the chemistry underlying everyday reactions, to the creation of the entire field of quantum mechanics.
It was in trying to understand the structure of the periodic table that Danish physicist Niels Bohr (another Nobel Prize winner) came up with his model of the atom, where electrons were allowed to orbit in ‘shells’ around the centre, which were stable when filled with a certain number of electrons (sometimes two, sometimes eight, depending on the position of the shell). This model was uncovered by chemistry experiments, rather than through theoretical means, as it was found that elements with an even number of electrons were more stable than those with an odd number.
Explaining this theoretically was what Austrian physicist Wolfgang Pauli set out to do: what was so special about two or eight electrons in the same orbit? Pauli was one of the pioneers of quantum physics. His dad was a chemist, his sister a writer and actress, and his godfather was the one and only Ernst Mach (as in supersonic speeds measured in units of Mach). Surrounded by such overachievers, I can only imagine the pressure Pauli put on himself to succeed. But succeed he did; if you have no idea who I’m talking about, let me say this: Einstein nominated Pauli for a Nobel Prize, which he won.41
In 1925, Pauli delved into how quantum mechanics describes electrons and realised that the elements of the periodic table could all be explained with just four quantum properties of electrons to describe their ‘state’: energy, angular momentum, magnetic moment and spin. The rule is that no two electrons around an atom can have the same values for those four properties. This is what’s known as the Pauli exclusion principle; essentially it says that no two electrons can be in the same quantum state, i.e. have the same values for their four quantum properties. This is why each element in the periodic table is unique: because the electrons in its atoms have specific configurations defined by quantum mechanics that are replicated by no other element. Pauli figured out this one simple rule that explained the structure of all atoms and why some were more stable than others. This is why physicists like to joke that the entirety of chemistry can be explained in one page of quantum mechanics, to the intense frustration of chemists everywhere.
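As a rough illustration of where those ‘magic’ numbers of two and eight electrons come from (a standard bit of quantum bookkeeping rather than anything spelled out above), counting all the allowed combinations of the four quantum properties for a shell labelled by the whole number n gives

\[
N_{\text{max}} = 2n^{2}
\]

electrons per shell: two for the first shell (n = 1) and eight for the second (n = 2), exactly the numbers Bohr’s chemistry-inspired model needed.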
What the Pauli exclusion principle means for astrophysics is that if you squash a load of electrons under gravity, they’ll resist being squished as there is no lower quantum state for them to go to; other electrons have already filled those states. This resistance is known as electron degeneracy pressure, and in 1926 the British astronomer Ralph Fowler42 applied this new quantum mechanics discovery to the decades-old problem of the densities of white dwarf stars. He realised that the huge densities of white dwarfs, at around a billion kg/m³ (for context, water has a density of 1,000 kg/m³), could be explained if gravity had crushed down the matter in stars so much that the electrons started to push back against gravity. Like many problems in science, though, solving this one led to a whole host of other questions, including whether there was a point at which electron degeneracy pressure was no longer able to resist that crush of gravity inwards. More simply, what was the maximum mass of a white dwarf?
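To put that density into everyday terms (a quick back-of-the-envelope illustration using the round figure of a billion kg/m³ quoted above):

\[
10^{9}\ \mathrm{kg/m^{3}} \times 10^{-6}\ \mathrm{m^{3}} = 1{,}000\ \mathrm{kg},
\]

meaning a sugar-cube-sized lump of white dwarf material – about a cubic centimetre – would weigh in at roughly a tonne. Matter that extreme is exactly where the question of a maximum mass starts to bite.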
It was Indian astrophysicist Subrahmanyan Chandrasekhar who cracked this one. Another overachiever, Chandrasekhar wrote his first scientific research paper aged nineteen during his undergraduate degree at the University of Madras. He sent this paper to Ralph Fowler at Trinity College, Cambridge, who promptly invited him to come and do a PhD at the university (Chandrasekhar was thankfully awarded a scholarship by the Indian government to pursue his graduate studies). Fowler had already attempted to determine what the limit to a white dwarf’s mass might be, but Chandrasekhar, on his travels from India to the UK, realised Fowler’s work needed some corrections using Einstein’s theory of special relativity; the electrons had so much energy that their masses started increasing. I can only imagine Fowler’s reaction when his new PhD student arrived with the news that he’d already cracked the problem Fowler had been working on for years. Over the course of his PhD, Chandrasekhar diligently revised his theory to give us what we now know as the Chandrasekhar limit for white dwarfs: 1.44 times the mass of the Sun.43
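For the mathematically curious, the limit Chandrasekhar arrived at is usually quoted in a form like this (the exact prefactor comes from the detailed calculation, so treat the numbers here as approximate):

\[
M_{\mathrm{Ch}} \approx \frac{5.8}{\mu_e^{2}}\, M_{\odot} \approx 1.4\, M_{\odot} \quad \text{for } \mu_e \approx 2,
\]

where μ_e is the number of nucleons per electron in the star (about two for the helium–carbon–oxygen mix of a white dwarf) and M_⊙ is the mass of the Sun. Everything else in the full expression is built from fundamental constants, which is why the limit applies to every white dwarf in the Universe.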
However, the idea of the Chandrasekhar limit was not well received by the astronomy community at the time, due to what it implied. Arthur Eddington was particularly vocal about it (the Big Name in Physics who had reasoned that stars could only be powered by nuclear fusion before there was any evidence for it). Eddington was also at Cambridge when Chandrasekhar completed his PhD; Chandrasekhar was elected a new fellow of Trinity College in 1933 at the age of just twenty-three, while Eddington was fifty-one and an eminent professor with international prestige, who used that influence to convince his colleagues that the idea of a limit to a white dwarf’s mass was absurd. He went as far as presenting immediately after Chandrasekhar at a meeting of the Royal Astronomical Society in 1935, claiming that Chandrasekhar’s theory was incomplete since it used two separate branches of physics: relativity and quantum mechanics (an argument that Pauli himself dismissed).44 Eddington claimed that if we had a quantum relativity theory then the maths would come out to support his theory that white dwarfs were the last stage in the evolution of stars. He famously stated at that meeting: ‘I think there should be a law of nature to prevent a star from behaving in this absurd way!’
Eddington, being the more senior academic, was taken more seriously than Chandrasekhar, who then had to fight for a good two decades before his theory became accepted, eventually winning the Nobel Prize in 1983, shared with the American physicist William Fowler – a different Fowler entirely (I do love a happy ending). Apart from his own ideas on stellar collapse being proven wrong, what was Eddington so worried about? Eddington thought it was absurd that there was a limit beyond which matter in a white dwarf star could not resist the crush of gravity, because what in the Universe could possibly happen next?
Eddington’s fears were allayed for a few years by the discovery of the neutron in 1932 by James Chadwick (again in Cambridge at the Cavendish Laboratory45), completing the trifecta of the basic building blocks of all matter: electrons, protons and neutrons. This discovery led Walter Baade and Fritz Zwicky (two giants of astronomy from Germany and Switzerland respectively) to propose the existence of stars made entirely of neutrons just one year later in 1933. Here was an explanation for the next stage in the evolution of white dwarf stars after they get too massive and collapse under gravity.
Baade and Zwicky were working on a different problem, though: explaining what’s left behind in a supernova. White dwarfs are formed when stars fizzle out, but explosive supernovae needed another explanation. Their proposed explanation was neutron stars. These neutron stars would be supported by neutron degeneracy pressure – just as electron degeneracy pressure holds up white dwarf stars, neutron stars would be held up by the inability of two neutrons to occupy the same quantum state, again according to the Pauli exclusion principle.
But just like with white dwarfs, the inevitable question of whether there was a limit to the mass of a neutron star reared its head: a mass so great that neutron degeneracy pressure could not resist the crush of gravity inwards (the concept that Eddington found so absurd). This was tackled at the University of California, Berkeley, by American physicist Robert Oppenheimer46 and his PhD student at the time, Russian-Canadian physicist George Volkoff, using previous work by Richard Tolman. In 1939 they derived the first estimate for what is now known as the Tolman–Oppenheimer–Volkoff limit for the maximum mass of a neutron star (the sibling of the Chandrasekhar limit), beyond which they claimed there was no known law of physics that would prevent the collapse of a star down to an infinitesimally small point with infinite density.
Eddington and many, many others were still not convinced, believing the notion of a gravitationally completely collapsed star (i.e. a black hole) to be utterly unphysical. First, because no neutron stars had yet been discovered, and second, because the idea of a black hole – of mass condensed into an infinitely small point – was just a theoretical curiosity for the mathematically minded to ponder over. We can speculate whether, if Eddington had instead embraced Chandrasekhar’s ideas and the application of the Pauli exclusion principle, he might have had a different role in this chapter, perhaps becoming the first physicist to predict the existence of a black hole, in the same way he predicted that nuclear fusion must be powering the Sun. Instead, the astronomical community came to begrudgingly accept the existence of black holes in Eddington’s absence after a number of discoveries and observations later in the twentieth century.
First, in 1967 a PhD student at the Mullard Radio Astronomy Observatory at the University of Cambridge, Jocelyn Bell,47 working with Antony Hewish, discovered an unexplained radio signal which pulsed every 1⅓ seconds.48 The following year then saw the discovery of the same repeating radio pulses from the centre of our old friend the Crab Nebula, the remnants of the AD 1054 supernova recorded by Chinese astronomers. By 1970, fifty pulsating radio sources had been found, and the most favoured explanation was spinning neutron stars. These ‘pulsars’49 were the missing piece of the puzzle of understanding how stars end their life. Unfortunately, Eddington didn’t live to see the discovery of neutron stars (having died from cancer at the age of sixty-one in 194450), but the rest of the astronomical community realised what this discovery meant: if neutron stars were real objects, then perhaps black holes were not as unnatural as first thought. Coinciding with Bell Burnell and Hewish’s pulsar discovery, in 1969 British physicists Roger Penrose and Stephen Hawking published a very mathematics-heavy paper showing how this gravitational collapse down to an infinitely dense, infinitesimally small point was actually inevitable in nature.
This all culminated in the release of a paper in 1972 by Australian astronomer Louise Webster and British astronomer Paul Murdin, who worked together at the Royal Observatory Greenwich to observe the mysterious X-ray and radio source Cygnus X-1. They observed a normal star that was found in the same part of the sky as Cygnus X-1 and noticed that the light from the star was Doppler-shifted. We all encounter the Doppler shift in our day-to-day lives. As ambulance sirens race towards and away from us we hear the change in pitch of the soundwaves as they are squashed to a smaller wavelength (or higher frequency) moving towards us, and stretched out to a larger wavelength moving away from us. You can hear this at racetracks as cars zoom past and on motorway bridges as cars thunder underneath. This happens because sound is a wave. Just like sound, light is also a wave, and so the same process of squishing and stretching can also happen to light. As light’s wavelength is stretched it becomes redder (redshift), and as it’s squashed to shorter wavelengths it becomes bluer (blueshift).
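For a star moving much more slowly than light – true of essentially every orbiting star – the size of that shift is, to a good approximation,

\[
\frac{\Delta\lambda}{\lambda} \approx \frac{v}{c},
\]

so a star moving at, say, 75 km/s (an illustrative number, not the measured value for this system) shifts its spectral lines by only about 0.025 per cent – tiny, but easily measurable with a good spectrograph.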
The star that Webster and Murdin observed was redshifted and blueshifted periodically every 5.6 days. This happens when a star has a companion, so that the two orbit a shared centre of mass in empty space somewhere between them. From how much the light is shifted, you can tell how fast the star is orbiting its companion and hence how heavy the companion to the star is, whether planet-sized (this is how we find a lot of Jupiter-sized planets) or much heavier. They calculated that this star’s companion (which couldn’t be seen) had a mass greater than the theoretical Tolman–Oppenheimer–Volkoff limit, and that’s when alarm bells started ringing. The paper they published with these measurements ends with the wonderful line: ‘it is inevitable that we should also speculate that it might be a black hole.’
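The quantity astronomers actually extract from the orbital period P and the to-and-fro velocity K of those shifting spectral lines is called the mass function – a standard consequence of Kepler’s laws, quoted here rather than derived:

\[
f(M) = \frac{P K^{3}}{2\pi G} = \frac{M_{2}^{3}\sin^{3} i}{\left(M_{1}+M_{2}\right)^{2}},
\]

where M₁ is the visible star, M₂ its unseen companion and i the tilt of the orbit to our line of sight. Because the right-hand side can never exceed M₂, the mass function sets a rock-bottom minimum mass for the companion; combined with an estimate of the visible star’s own mass, this is what let Webster and Murdin argue that whatever was lurking beside that star was too heavy to be a neutron star.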
And so, by the 1970s, the trifecta of the graveyard of stars was complete: white dwarf, neutron star, black hole. Once a massive star, around ten times the mass of the Sun or larger, runs out of fuel, there’s no process to stop the inevitable pull of gravity inwards on its core during the supernova, and the only eventuality is that the core is crushed down into a black hole: a dark star. Today, we even think that some incredibly massive stars have directly collapsed into black holes and skipped the supernova entirely, just – poof! – there one day and gone the next.
With the Chandrasekhar limit we also know that, in very special cases, white dwarfs could one day collapse into neutron stars if enough extra mass is somehow added to them (eventually the electrons are forced to combine with the protons to make the neutrons that make up neutron stars – the reaction is sketched below). Similarly, with the Tolman–Oppenheimer–Volkoff limit we know that neutron stars could also one day become black holes if given enough mass to grow. This can actually occur if either the white dwarf or the neutron star is in a binary system with another star, from which it can steal enough mass to reach those limits. It’s for this reason that I like to think of neutron stars as the prior evolutionary stage to a black hole: a Pikachu to a black hole’s Raichu.
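The nuclear reaction hiding behind that conversion of white dwarf matter into neutrons (standard ‘inverse beta decay’, included here for the curious) is simply

\[
p + e^{-} \longrightarrow n + \nu_{e},
\]

a proton and an electron squeezed together so hard that they merge into a neutron, with a neutrino carrying away the leftover energy.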
So, if we are willing to wait long enough, and there’s a bumper supply of extra matter hanging around the Solar System neighbourhood, then theoretically, the Sun could one day become a white dwarf, grow into a neutron star and eventually a black hole. But that’s true for just about any patch of gas in the Universe if you’re patient enough to follow the recipe: