“Yes, but I grow at a reasonable pace,” said the Dormouse. “Not in that ridiculous fashion.”
—LEWIS CARROLL, ALICE’S ADVENTURES IN WONDERLAND
Astronomer Nick Suntzeff’s life was certainly being shaken up. Admittedly, his career revolved around the deaths of stars. This, however, was not what he had signed up for.
He sat in a corner of a small emergency room in a small hospital in the city of La Serena, Chile, desperately trying to phone the wife of the man writhing in agony in the opposite corner. Prison guards then brought in a blood-soaked man on a stretcher who, once inside, tumbled off the stretcher and immediately bolted out the double doors and into a waiting taxi. A gunfight broke out. Amid the screams and gunshots, Suntzeff succeeded in contacting Arlene and explained the dire situation to her. Her husband needed emergency surgery. No, there was no time to fly him to a larger, more modern hospital. Thankfully, one of the world’s top physicians in the field had set up shop in La Serena, and he could do the operation.
“He looks just like the actor Alan Alda,” the doctor said.
“He is Alan Alda,” Suntzeff replied.
Two hours later, surgeon Nelson Zepeda came out with the good news. Alda was going to be fine. In a most unlikely twist of fate, Zepeda had operated on the man whose television portrayal of a surgeon had inspired him to become a physician.
Alda had been in Chile to interview astronomers for his science program, Scientific American Frontiers, when his medical disaster struck.* Suntzeff, then at the Cerro Tololo Inter-American Observatory, and his team were busy discovering distant supernovae, not so much to learn more about the supernovae themselves, but to use them as tools to understand what the universe was doing. As Alda and the world had found out a few years before, what the universe was doing made no sense, and he was hoping that conversations with the astronomers involved would help him understand.
Surprisingly, the supernovae that Suntzeff and his team had been hunting were not the final acts of massive stars, single or otherwise. Instead, they were just the cores of ordinary stars that had reached their limit.
After all is said and done, the only thing a low-mass star has to show for its life’s hard labor is a white dwarf. Not nearly as dense as its neutron star cousin, a typical white dwarf is about the size of Earth—a few thousand kilometers in radius—but 200,000 times as massive. Half a Sun, give or take, is squeezed into a space the size of a puny planet. If you were somehow able to survive the 100,000-degree heat, the 200,000 g’s of gravity would get you.
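Those surface figures are easy to sanity-check with Newtonian gravity. A minimal sketch, using rough stand-ins for "half a Sun in an Earth-sized ball" (the specific mass and radius below are illustrative, not from the text):

```python
# Rough check of a white dwarf's surface gravity (illustrative values).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
g_earth = 9.81       # Earth's surface gravity, m/s^2

mass = 0.5 * M_sun   # "half a Sun, give or take"
radius = 6.0e6       # ~6,000 km, roughly Earth-sized

g_surface = G * mass / radius**2
print(g_surface / g_earth)   # on the order of 200,000 g's
```

The result lands right around the 200,000 g's quoted above, which is why visiting is not recommended.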
Unlike neutron stars, white dwarfs announced their presence and their peculiarities long before astronomers had any inkling that they could or should exist. The bright dot known as Sirius is, it turns out, two dots, a fact that remained hidden until the mid-1840s. It took another 70 years for astronomers to obtain the faint companion’s spectrum, and in keeping with pretty much all of astronomical history, they found it to be unlike anything they’d seen before.
It was clearly Sirius’s binary partner, so it had to be the same distance away as Sirius. But what became known as Sirius B was mystifyingly hot. At 25,000 degrees Celsius, the smaller dot was 2.5 times the temperature of its much more obvious sibling, Sirius A. This made no sense because hotter stars were understood to be, as a rule, much, much brighter. The relationship between temperature and luminosity had been quite well established for most stars, and the only way for Sirius B to be that hot, and yet that faint, was for it to be that small.
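The "that hot, yet that faint, means that small" argument follows from the Stefan-Boltzmann law, L = 4πR²σT⁴: at a fixed luminosity, a hotter surface must be a smaller surface. A rough sketch with assumed numbers (the luminosity fraction below is an illustrative guess, not a measured value for Sirius B):

```python
import math

# Stefan-Boltzmann: L = 4*pi*R^2 * sigma * T^4. For a fixed luminosity,
# hotter means smaller. Illustrative numbers, not a precise model of Sirius B.
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun = 3.828e26          # solar luminosity, W
R_sun = 6.957e8           # solar radius, m

T = 25_000.0              # the hot companion's surface temperature, kelvin
L = 0.05 * L_sun          # a faint star: a few percent of the Sun's output (assumed)

R = math.sqrt(L / (4 * math.pi * sigma * T**4))
print(R / R_sun)          # a tiny fraction of the Sun's radius
```

With those inputs the radius comes out at roughly 8,000 kilometers, comparable to Earth, which is exactly the kind of answer that left astronomers dumbfounded.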
Being small wasn’t the problem. Lots of things are small. Earth. The Moon. Kittens. No, the problem was that the orbits of these two stars about their center of mass revealed that the tiny object must be as massive as the Sun.
The astronomical community was dumbfounded.
By 1931, though, the confusion seemed little more than a faint memory. “The possibility of the existence of matter in this dense state offers no difficulty,” wrote astronomer E. A. Milne. “As pointed out by [Arthur] Eddington, we simply have to suppose the atoms ionized down to free electrons and bare nuclei.”
“No difficulty” was quite a departure from the earlier admonitions of astronomers to white dwarfs, specifically, “Shut up. Don’t talk nonsense!” Somewhere between those two sentiments arose the bizarre and unintuitive field of quantum mechanics, which revealed what subatomic particles can and cannot tolerate. Being crammed into a ball weighing a ton per teaspoon is doable as long as the electrons and atomic nuclei are properly arranged.
But it was also determined that there is only so much straw you can put on this quantum camel’s back before the camel explodes.
A white dwarf is a densely packed inferno made of the last things its star was able to cook up, which, for something with the Sun’s mass, would be carbon and oxygen. Like a neutron star, a white dwarf has layers, but a white dwarf’s layers are made of things that seem at first glance to be familiar. Oxygen, the hardest nucleus for such a star to fuse, tends to occupy the innermost core, which is the only place where conditions extreme enough to forge oxygen exist. Meanwhile, carbon fills most of the rest of the volume. A crust of helium resides in the top couple of hundred kilometers, and occasionally some residual hydrogen might hang out above that.
Unlike a typical star, a white dwarf’s size is not governed by the balancing act between the pull of gravity and the push of hot gases and photons jostling to break free. Instead, its size is dictated by the dances of the electrons, which is the same issue that the helium core experienced just before its energetic flash. This arrangement makes for some seemingly backward behavior. Add more mass, and a white dwarf paradoxically becomes smaller. Take mass off, and it grows. Add heat, though, and it doesn’t change its size at all. It simply heats up.
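The backward add-mass-get-smaller behavior can be sketched with the scaling for non-relativistic degenerate matter, in which radius shrinks roughly as mass to the minus one-third power. The normalization here is illustrative only:

```python
# Degenerate-matter scaling sketch: for a (non-relativistic) white dwarf,
# radius shrinks as mass grows, roughly R proportional to M^(-1/3).
# The reference point (~0.6 solar masses at ~6,000 km) is illustrative.
def wd_radius_km(mass_solar, r0_km=6000.0, m0_solar=0.6):
    """Approximate white dwarf radius in km for a mass in solar masses."""
    return r0_km * (m0_solar / mass_solar) ** (1.0 / 3.0)

print(wd_radius_km(0.6))   # the reference point
print(wd_radius_km(1.2))   # double the mass: a noticeably smaller star
```

Doubling the mass shaves off about a fifth of the radius in this sketch, a preview of the trouble a white dwarf courts by accepting gifts from a partner.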
And that’s where the problems arise. When a star hits a certain stage in its life, it feels a tremendous, growing pressure in its heating helium core, a condition that sparks a brief episode of runaway fusion. Releasing the equivalent of millions of years of solar energy in a minute, the helium flash foreshadows what could happen . . . “if,” the star says menacingly, “certain conditions are met.”
A white dwarf is a powder keg waiting for that spark. Alone, it will be fine (for the extreme foreseeable future). But a partner has the potential to set it off. The easiest way to do so is for the partner star to age, swell past its Roche lobe, and drop some of its hydrogen envelope through the narrow gravitational doorway between the two objects. The hydrogen races toward its dead partner’s heart, heating as it goes, and collects on the surface of the white dwarf. Feeling additional pressure, the white dwarf becomes smaller and hotter.
This process can go on for quite some time—tens of thousands of years, even—but it can’t go on indefinitely. Once the surface hits a suitably scorching temperature of 20 million degrees Celsius or so, the hydrogen does the only thing it can. It fuses into helium, releasing a flood of energy that makes the densely packed hydrogen fuse even more efficiently, releasing a bigger flood of energy that makes the hydrogen fuse even more efficiently, and so on. Within seconds, this runaway thermonuclear process has turned the entire atmosphere into an enormous hydrogen bomb. The temperature skyrockets to over 300 million degrees, and the surface that the white dwarf has been so patiently amassing is blasted out into space at breakneck speeds exceeding 3 million kilometers per hour. At this rate, it would take under seven seconds to get from London to New York, but the flight would be uncomfortably hot.
Here on Earth, we see an obscure dot brighten over the course of a few hours, and then slowly fade away over the next several days as the superhot, superfast shell cools and dissipates.
A nova (not a supernova) is born.
At the end of this surface explosion, this scrappy little dead core of a low-mass star will have spat out about a millionth the energy of a typical supernova, or a micro-foe, along with a couple of teeth. Not bad for something that doesn’t even leave a scar. Depending on the dynamics of the binary system, the two characters might reprise the play some day in the not-so-near future.
While 10,000 years is a long time to wait for the next outburst in a system like this, astronomers have options if they want to observe a nova. Given that (1) a galaxy like ours contains hundreds of billions of stars; (2) low-mass stars that end their lives as white dwarfs are extremely common; and (3) binary systems are the norm, novae are popping off all over the universe. Indeed, it’s estimated that somewhere in a galaxy like ours there is about one nova per week, most of which go unnoticed because the dark dust of the Emu and other sooty clouds obscures them from our sight.
Deep in the heart of a white dwarf, the universe walks a fine line between stability and destruction, a fine line occupied by the supporting cast of electrons. In the early twentieth century, these minuscule electrons weighed heavily on the minds of astronomers and physicists, since so much in the physical world seemed dependent on their whims.
In 1930, Subrahmanyan Chandrasekhar embarked on his professional scientific career by literally embarking on a voyage from his home in India to England. During the journey, he refined the calculations of his soon-to-be graduate advisor, Ralph Fowler. Fowler had been determining the internal conditions of white dwarf stars, which are incredibly dense. The pressure to keep them from succumbing to gravity’s crush comes from the fact that no two electrons can have exactly the same energetic qualities. It’s an esoteric topic, but in his calculations Fowler failed to account for an even more esoteric topic: relativity, or the bizarre behavior of things that move incredibly fast or are subjected to extremely strong gravity. Within a white dwarf, electrons are such things, and rather than simply apply the usual laws of physics to the situation, Chandrasekhar realized that relativistic effects needed to be considered.
While most passengers basked in the light streaming off the surface of our nearest star, Chandrasekhar concerned himself with the fate of its deepest interior, a task that would earn him a Nobel Prize half a century later. His calculations revealed that a white dwarf’s electrons could take only so much. As the mass of the white dwarf increases, the electrons in its heart move at a decent percentage of the speed of light. When the pressure becomes too high and the electrons become relativistic, the electrons do the only thing that’s left for them to do: join with the protons to make neutrons.
Chandrasekhar discovered in his calculations that, given a certain fraction of electrons in the inconceivably dense interior, the maximum mass for a white dwarf is 1.44 times the mass of the Sun, a figure now known as the Chandrasekhar limit.* Beyond this, the electrons flee into the atomic nuclei. Once that happens, it’s curtains for the white dwarf. It collapses into a neutron star, spitting out a flood of neutrinos and gamma rays in the process. The outermost layers of the white dwarf might also get blasted off.
This scenario is most assuredly not on the poster in every Astronomy 101 classroom, where making a neutron star is taught as an activity strictly reserved for massive stars.
The problem with the Chandrasekhar limit is not that it’s incorrect. It’s that a typical white dwarf winds up tearing itself apart before it can even get to that point. Just before the electrons feel compelled to dive into the protons, conditions finally become perfect for fusing carbon into heavier and heavier elements. This kicks off not because the temperature is so great—although it is plenty hot in there—but because the pressure is so high.
Once that process begins, it’s also curtains for the white dwarf, but with a different result. The center of the dead star becomes a fusion bomb that can’t punch its way out of the dense ball that contains it. All that energy ignites more fusion bombs, which ignite more fusion bombs. And yet again, the extreme material in the white dwarf won’t budge until the runaway fusion process literally blasts the white dwarf into oblivion.
There is no neutron star left behind, or anything else for that matter. Racing out to meet the rest of the universe is a Sun’s mass of new fusion products—radioactive nickel, for one—cooked up in the last milliseconds of the white dwarf’s existence. After a few days, the radioactive nickel nuclei shake themselves off, flick a spare positron from one of the protons, and become radioactive cobalt. After a few months, the cobalt nuclei do the same thing to become the more stable and more common iron. All those antimatter positrons meet up with electrons and create gamma rays, adding even more energy to the expanding gases, which glow for many months before finally tapering off. When the show is over, energy equivalent to the Sun’s 10-billion-year lifetime will have been unleashed. A foe of energy, emerging from something smaller than our planet.
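The days-then-months timing above tracks the half-lives involved: nickel-56 halves roughly every 6 days, and cobalt-56 roughly every 77 days. A minimal sketch of that decay arithmetic (the function name is ours):

```python
# Radioactive decay powering the afterglow: nickel-56 decays to cobalt-56
# (half-life ~6 days), which decays to iron-56 (half-life ~77 days).
def remaining_fraction(t_days, half_life_days):
    """Fraction of a radioactive species left after t_days."""
    return 0.5 ** (t_days / half_life_days)

print(remaining_fraction(6.1, 6.1))    # half the nickel is gone in ~6 days
print(remaining_fraction(77.0, 77.0))  # the cobalt halves on a ~77-day clock
```

The slow cobalt step is what keeps the expanding debris glowing for months rather than days.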
One of the most striking features of this type of supernova explosion is that because it starts out as a colossally dense orb of carbon and oxygen, its spectral fingerprint shows no evidence of hydrogen. When a massive star blows up, though, the outer layers are typically still relatively undisturbed hydrogen, which doesn’t know what all the fuss is inside. The spectra of those sorts of supernovae reveal the skirmishes between light and hydrogen.
As early as 1941, Minkowski had noticed that the spectra of some supernovae displayed wavelengths associated with hydrogen while others didn’t, but it wasn’t clear then why that should be the case, so he did what every good observer does. He classified them as Type I (no signs of hydrogen) and Type II (signs of hydrogen). SN 1987A would be one of the latter, albeit with the “pec” suffix. Beyond that, among supernovae that didn’t show any hint of hydrogen, there were variations in both the details of the spectra and how the supernovae dimmed over time.
As for Fritz “Supernova” Zwicky, the pioneering supernova hunter, it turns out that his first 35 supernova discoveries were considered Type I. Although there are plenty of Type I supernovae that do involve core collapse, it’s possible—but not likely—that Zwicky didn’t see one of these play out until his 36th catch. What is likely is that many of the supernovae he caught were white dwarfs that had finally caved in to pressure from their binary partners, a scenario he never envisioned. Nobody knows for certain the breakdown of what Zwicky saw. Astronomers wouldn’t start peeling apart different classifications of Type I supernovae until the 1980s based on the exact pattern of brightening and dimming. It was then that Type Ia supernovae, the sort created when a white dwarf turns into a fiery fusion bomb, became a class by themselves.
All this sounds well and good, but it leaves plenty of open questions. First, if we’re going to blame a binary companion for the explosive demise of a white dwarf, wouldn’t it help to actually have evidence of a binary companion?
In 2012 astronomers finally got confirmation that, at the very least, the progenitors of some Type Ia supernovae had a surviving partner. The evidence came from SN 2012cg, an event in a galaxy 50 million light-years away. At this distance, it’s impossible to make out the partner visually, but there is a chance of seeing what the explosion does to the partner. In this case, having a close friend explode as a stellar-mass fusion bomb heats up one side of the companion just as sitting next to the fireplace heats up one side of a lazy dog. This heating makes the surviving partner glow with shorter, more energetic wavelengths. Through months-long, exquisitely detailed analysis of the light from this event by teams of astronomers using several observatories dotting the globe, an ultraviolet observatory orbiting the globe, and computer modeling of the scenario, astronomers could say with some measure of confidence that, yes, we might have seen the warm side of a dog.
It’s possible, though, that a Type Ia supernova isn’t always just a stellar vampire drawing material from a large dying star. After all, so much has to go right (or wrong, if you want to look at it that way) for this to happen. The most obvious issue is that somehow the white dwarf has to pull matter off its partner slowly and gently enough that there aren’t any nova outbursts, events that effectively reset the white dwarf’s mass. It’s like trying to save up for that dream vacation when an unexpected bill empties the piggy bank. The second problem is that the matter being pulled off is largely hydrogen. If the white dwarf explodes while bathing in a sea of hydrogen, that element is going to leave behind clues that it was there, and then our supernova will be a Type II event.
Consequently, astronomers have modeled other channels to create these events. If the universe can somehow persuade two white dwarfs to merge and if their combined mass is sufficiently close to the Chandrasekhar limit, then there will be spectacular mutually assured destruction. What’s more, this mechanism all but guarantees that all Type Ia supernovae will have approximately the same energy output, unlike core-collapse supernovae, which arise from stars with an enormous variety of masses. In a universe of messy interactions, extreme conditions, and epic explosions, reliability is a very useful feature indeed.
Reliability allows astronomers to use Leavitt’s law to gauge cosmic distances, divining the energy output from timing a Cepheid variable star’s ups and downs. Even as early as the 1940s, Baade and others had seen hints that the exact progressions of brightening and dimming of some supernovae had very similar properties. In the late 1960s, after dozens more supernovae had been analyzed, astronomer Charles Kowal declared, “It is obvious, therefore, that these supernovae could be exceedingly useful indicators of distance. It should be possible to obtain . . . accuracies of 5% to 10% in the distances. The main problem now is one of calibration.”
For astronomers, the implications of this paper were enormous. A supernova is as bright as the entirety of its host galaxy, which can harbor hundreds of billions of stars. Essentially, if you can see the galaxy, you can see a supernova in that galaxy, assuming one pops off. If the supernova is of the right parentage, something that can be determined by looking at its spectral fingerprint and light curve, then you stand a chance of knowing exactly how much energy is being emitted. And, as with so many of astronomy’s standard candles, knowing an object’s energy output and measuring how bright it appears from Earth leads to an object’s distance.
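The standard-candle step is the inverse-square law: measured brightness falls off as the square of the distance, so a known energy output plus a measured flux pins down how far away the supernova is. A sketch with illustrative numbers, not a real event:

```python
import math

# Standard-candle distance: if a supernova's true luminosity L is known
# and its apparent flux F is measured, the inverse-square law
# F = L / (4*pi*d^2) gives the distance d. Illustrative values only.
L_peak = 1.0e36          # W, a rough Type Ia peak luminosity (assumed)
F_observed = 3.5e-15     # W/m^2, flux measured at Earth (assumed)

d_meters = math.sqrt(L_peak / (4 * math.pi * F_observed))
d_lightyears = d_meters / 9.461e15   # meters per light-year
print(d_lightyears)
```

These made-up inputs put the event around half a billion light-years away, comfortably in the cosmological territory astronomers were after.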
Since the 1920s, astronomers had known that the observable universe is expanding in all directions. It is as though galaxies are riding along on a stretching elastic medium, each galaxy seemingly rushing away from the others at breakneck speeds. Closer galaxies recede more slowly (that is, they have smaller redshifts) than more distant ones, just as the separation between two dots on an inflating balloon increases more if the dots start out farther apart. Decades of observing the redshifts of galaxies and applying Leavitt’s law, as well as using a variety of less reliable distance indicators for the truly distant galaxies, had supported this universal expansion.
But what was sorely lacking were precise, independent distance measurements, particularly for the most distant galaxies. Was the expansion rate of the universe so great that if you wound time backward, it was a mere 10 billion years old? Or was the expansion rate small enough that the universe could have taken as much as 20 billion years to get to its present state? Moreover, was the amount of gravitating matter in the universe enough to halt the expansion and bring the universe back together at some point in the extremely distant future? Or had that initial bump in the universe known as the big bang been so energetic that the universe was destined to keep flying away from itself forever?
Complicating matters were the pesky observations and models of stellar evolution that seemed to insist that the oldest stars in the Milky Way Galaxy were older than the actual universe. Clearly the children can’t be older than their parents, but no scientists could see where they’d miscalculated. Between different subfields of astronomy there was a marked tension. Astronomers fell into philosophical camps, arguing that the other side had failed to account for X, Y, or Z.
Someone was definitely wrong about something—possibly a whole lot of somethings—and a bright, reliable standard promised some kind of resolution. By the early 1970s, astronomers were discovering only about two supernovae each month, a rate that was convenient only for those who were naming the events. Supernova designations reveal the year (e.g., 1987) and the order of discovery (e.g., A for the first one of the year), so with fewer than 27 annual supernova observations, at least astronomers didn’t run out of letters.
But not all those supernovae were Type I, the sort that might possibly be useful distance indicators. Indeed, fewer than half of them were ever assigned a type, so inconclusive were the messages in their rainbows and changes in brightness. Worse, few had ever been spotted at what could be considered cosmological distances beyond hundreds of millions of light-years.
This was hardly progress.
Interest in hunting exploding stars waned as other shiny topics drew astronomers’ attention. It was a thankless process after all. Photographing distant galaxies with the hope that one would show a new dot seemed to be a poor use of not just telescope time, but human time. It took considerable effort to expose photographic plates, develop plates, and then visually compare plates. As much as the field of astronomy had grown up since the days of the Harvard computers, some things were hopelessly stuck in the past.
Then the digital revolution hit. Images could be taken using the new charge-coupled devices (CCDs), and computers could do the heavy lifting of scanning for new dots. The Berkeley Automated Supernova Search was kicked off in 1981 with a state-of-the-art 500 pixels × 312 pixels sensor, 512 kilobytes of memory (more than enough, the proposal assured the reader, as each image would only be 320 kilobytes in size), and two—count ’em—two 10-megabyte disks. To put this in perspective, today a single phone camera image of a mouthwatering pasta dish would practically fill the entirety of the automated search’s data storage.
The method for the search was incredibly basic. Take a galaxy image. Read the image into the capacious 512-kilobyte memory. Move the telescope to another galaxy. Meanwhile, the custom software analyzed the first image, looking to find any hint of a supernova. If it found nothing, then the telescope simply moved to yet another galaxy, and the process was repeated. Rather than poring over everything visually, astronomers could let the computers and digital detectors do the work, accomplishing in minutes what used to eat up hours or days.
It seemed like a slam dunk, a way to catch exploding stars and better understand them and possibly even better understand the entire universe. Paradoxically, the astronomical community regarded the project with apparent indifference. The shiny object of the day was the Big Picture. Despite the advertised promise of supernovae, they didn’t seem to be all that helpful in answering the big questions.
“This is what was drilled into my mind,” Nick Suntzeff said. “My mentor Allan Sandage would say, ‘There are only two numbers in astronomy: H naught and q naught. And that’s what you should be working on, Nick.’ ”*
H naught, written H0, is the Hubble constant, and it’s the relationship between a galaxy’s distance and its redshift. In 1929 Edwin Hubble found that the redshifts of galaxies become greater as we observe more distant galaxies. One of the first hints that our universe is expanding and dragging the galaxies along for the ride, this clear correlation can be seen in a graph of the recessional velocity (measured in kilometers per second) and distance (measured in megaparsecs, where 1 megaparsec = 3.26 million light-years). The slope of that graph became known as Hubble’s constant, and it’s reported in kilometers per second per megaparsec.
This is anything but an intuitive unit, and its value is anything but settled. Hubble originally estimated the slope of the graph to be a whopping 500 kilometers per second per megaparsec, indicating that two galaxies that are one megaparsec away from each other would be separating at 500 kilometers per second; a pair that began two megaparsecs apart would separate at 1,000 kilometers per second. Since Hubble’s time, better observations have gradually pushed that number down to about 70, give or take a few kilometers per second per megaparsec.
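The arithmetic in that unit is a one-liner. Comparing Hubble's original slope with the modern rough value:

```python
# Hubble's law: recessional velocity (km/s) = H0 * distance (Mpc).
def recession_velocity(h0, distance_mpc):
    """Recession speed in km/s for a slope h0 in km/s/Mpc."""
    return h0 * distance_mpc

print(recession_velocity(500.0, 1.0))   # Hubble's 1929 slope: 500 km/s at 1 Mpc
print(recession_velocity(500.0, 2.0))   # twice the separation, twice the speed
print(recession_velocity(70.0, 1.0))    # today's rough value: ~70 km/s at 1 Mpc
```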
It’s the “give or take” that has caused considerable consternation in the astronomical world. Astronomers have not been able to definitively zero in on the Hubble constant’s value, and by extension we are missing something fundamental about the growth history of the universe. You see, the Hubble constant isn’t simply the slope of some curve on a graph. If you know the Hubble constant, you can determine the age of the universe by essentially running everything backward until you hit the x-axis. Unfortunately, the Hubble constant is not, as its name would suggest, constant. The value of 70 give-or-take kilometers per second per megaparsec applies only to the here and now. In the era of the first galaxies, the figure was much higher. Because of its relationship to the overall size of the universe, the Hubble constant (more accurately, the Hubble parameter) has been decreasing throughout the history of the universe, as astronomers expected. Precisely how it’s been decreasing, though, is hard to nail down, and that’s where q naught comes in.
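Running everything backward until you hit the x-axis amounts to computing the Hubble time, 1/H0, after converting the units. A sketch of that conversion, which also brackets the 10-to-20-billion-year range astronomers were arguing over:

```python
# Hubble time: a crude age estimate for the universe is 1/H0,
# after converting H0 from km/s/Mpc into 1/seconds.
KM_PER_MPC = 3.086e19           # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16      # ~3.156e7 seconds/year * 1e9 years

def hubble_time_gyr(h0):
    """Crude universe age in billions of years for H0 in km/s/Mpc."""
    return KM_PER_MPC / h0 / SECONDS_PER_GYR

for h0 in (50.0, 70.0, 100.0):
    print(h0, round(hubble_time_gyr(h0), 1))   # bigger H0, younger universe
```

An H0 near 100 implies a universe under 10 billion years old; near 50 implies almost 20 billion. Hence the fights.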
Q naught, indicated by q0, is the deceleration parameter, an equally important value. It tells astronomers how much the universe’s expansion is changing. A positive value, and the expansion rate is slowing. A negative value would mean that the expansion rate is increasing. While astronomers worldwide battled over the value of H0, the value for q0, everyone agreed, was definitely positive. Absolutely, positively, undeniably positive. It couldn’t be otherwise.
To understand why, just imagine throwing a ball straight up. As it gets higher, the ball gradually slows. If you clock its speed along the path, you find a very specific relationship between the height of the ball and its speed. Even if you threw the ball so quickly that it launched off the face of Earth, it would still lose speed as Earth’s gravity sapped its energy. Galaxies should be doing the same thing. Sure, the space between them might be both incomprehensibly vast and rapidly growing, and sure, the universe does have a nasty habit of throwing weird and unexpected things our way, but this one seemed nonnegotiable. The enormous amount of stuff in the universe—billions of trillions of Suns’ worth of stuff—simply had to be putting the gravitational brakes on itself. But the only way to know for sure how strong those brakes were was to clock the motions of different galaxies and get precise distances.
Plenty of people had tried throughout the decades, but the uncertainties were always too great to put much stock in any of the values. And then SN 1987A happened, and a series of lightbulbs went off over the heads of astronomers like Suntzeff.
The blast of SN 1987A launched a thousand new research projects and about as many careers. One of those belonged to astronomer Mario Hamuy, whose job at the Cerro Tololo Inter-American Observatory in Chile began just two days after the Large Magellanic Cloud sported a bright new dot. Eager to get in on the supernova wave, he teamed up with Nick Suntzeff, Mark Phillips, and José Maza to launch a new supernova survey.
“We were just a bunch of astronomers hot on the track of supernovae,” Suntzeff recalled, and they had one goal: figuring out the elusive deceleration parameter, q0.
For this task, Type Ia supernovae came to the rescue. Astronomers still weren’t sure whether these resulted from two merging white dwarfs or a white dwarf siphoning material off a companion, but in either case, they seemed to be the right tools for the job. Unfortunately, they turned out not to be the uniform standard candles originally advertised. That would have been too convenient, and the universe seems to think that astronomers appreciate a good challenge.
Over the next few years, the members of the Calán/Tololo Supernova Survey discovered dozens of supernovae, more than 30 of which were Type Ia. This was a big enough sample to reveal the more nuanced behavior of these objects.
“It’s a very long, overly complicated explanation, but there was no ‘aha!’ moment in any of this,” Suntzeff said. He and his group gradually realized that by closely monitoring the exact pattern of brightening and dimming, they could precisely determine a Type Ia supernova’s peak energy. From there, Suntzeff explained, “it was just unraveling the observational project and then applying that to farther and farther objects.”
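What the group converged on is now called the width-luminosity relation: Type Ia supernovae that fade faster are intrinsically dimmer, so measuring the fade rate standardizes the peak brightness. A minimal sketch with illustrative coefficients, not a published calibration (dm15 is the dimming, in magnitudes, over the 15 days after peak):

```python
# Width-luminosity sketch: faster-fading Type Ia supernovae are dimmer.
# dm15 = magnitudes of dimming in the 15 days after peak brightness.
# The coefficients a and b are illustrative, not a published fit.
def standardized_peak_magnitude(dm15, a=-21.7, b=2.7):
    """Estimated absolute peak magnitude (a larger number means dimmer)."""
    return a + b * dm15

print(standardized_peak_magnitude(1.1))   # a slower, brighter event
print(standardized_peak_magnitude(1.7))   # a faster, dimmer event
```

With the true peak pinned down this way, each supernova becomes a calibrated candle rather than a merely similar one.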
Once they had that puzzle piece sorted out, the rest fell into place. Now they could both clock the speed of the ball and measure its height. By the early 1990s, Suntzeff had built a CCD camera for the telescope and fine-tuned the process, determined to apply the group’s work to objects even deeper into the universe. The answers they sought were still beyond their grasp, but the team knew that with just a few more years of observing supernovae, they would finally do something that nobody else had managed to do.
But they needed a name. The team pondered the possibilities. “Well, what do we do?” Suntzeff recalled of the conversation. “We look for high redshift supernovae. So let’s call ourselves that.” Z is the astronomical designation for redshift, and so they became the High-Z Supernova team. The newly designated High-Z Supernova team took on new members, and Australian astronomer Brian Schmidt became its leader. It also took on new rivals, many of whom would use the same equipment that Suntzeff had built for his team’s project.
“I would be up there with the support scientists finding supernovae for all the competing projects,” Suntzeff mused. “One night would be mine. The next night, theirs. I’d be up there the whole time, helping my competitors try to disprove or beat us. It was competition, but it was a fun competition.”
Finally, in 1998, the team was ready to share the results with the world. The title of the paper—“The High-Z Supernova Search: Measuring Cosmic Deceleration and Global Curvature of the Universe Using Type Ia Supernovae”—didn’t hint at anything unusual. The implications of their findings, though, were mind-boggling.
Suntzeff chose his words carefully. “What we found was that distant supernovae are fainter than they should be. They’re farther away, which means that over cosmic time, something has pushed them farther away than we thought they were going to be.”
After much calibrating and considering all the possible uncertainties, the High-Z Supernova team reported a value of q0 that was negative. The expansion of the universe wasn’t decelerating, at least not over the past few billion years. It was accelerating. This was as puzzling as finding out that the ball you threw was getting faster and faster as it went up. And that meant that something—they were careful not to say what—had to be pushing the universe apart. Moreover, it wasn’t just the High-Z Supernova team that got this inexplicable result. A competing team, led by Saul Perlmutter, reported the same behavior.
It was these cosmic shenanigans that had intrigued Alan Alda enough to prompt his visits to astronomers in Chile and other locales for Scientific American Frontiers. Suntzeff’s team even invited Alda to take over an observation, confident that he’d find at least one supernova.
Then disaster struck.
“He never did find a supernova,” Suntzeff recalled somewhat regretfully, although Alda did get a consolation prize. “We inducted him into our supernova team, and we printed up something official-looking for him.”
Alda’s fascination with a universe that is doing the complete opposite of what astronomers set out to measure mirrored that of the entire scientific community. In 2011, the Nobel Prize Committee recognized the revolutionary discovery by awarding the Nobel Prize in Physics to the two competing teams. Unfortunately for Suntzeff and many others in those groups, the rules of the Nobel Prize dictate that the maximum number of recipients for any given award is three, a rule that seems starkly out of touch with the modern era of vast scientific collaborations that would fail without the creativity and expertise of many contributors.
Meanwhile, the scrappy little remnants of low-mass stars could stand proud, having been the focus of two Nobel Prizes: one for determining their breaking point and the other for exploiting that breaking point. The second prize even sports a poetic twist. Given that at least some fraction of Type Ia supernovae are the result of colliding white dwarfs, a small-scale coming together revealed a colossal pushing apart. But this wasn’t the first time that small-scale mergers let astronomers in on an enormous universal secret.