Chapter 2

The Heating Element: Planck’s Desperate Trick

Down in the kitchen, I put water on for tea—checking for the glow of the heating element to make sure I haven’t groggily put the kettle on the wrong burner again …

The red glow of a hot object is one of the simplest and most universal phenomena in physics. If you get a chunk of material—any material—hot enough, it will start to glow, first red, then yellow, then white. The color of the light emitted depends only on the temperature of the object. The material used doesn’t matter—a rod of clear glass and one of black iron, heated to the same temperature, glow with exactly the same color. The method of heating doesn’t matter, either—whether you’re running an electric current through a coil of metal, as in my electric stove, or forging that coil in a fiery furnace, the color of the hot metal will be the same for that particular temperature.

This sort of simple and universal behavior is like catnip for physicists because it suggests that there should be some simple and universal underlying principle at work. In the late 1500s, Galileo Galilei and Simon Stevin demonstrated empirically that objects of different materials and weights all fall at the same rate—Stevin by dropping two lead balls, one ten times heavier than the other, from a church tower.1 This observation led Isaac Newton to develop his law of universal gravitation in the 1600s, and a few hundred years after that a different perspective on the same simple, universal behavior inspired Albert Einstein’s general theory of relativity, which remains our best theory of gravity. Einstein recalled the key moment in the development of his theory as an afternoon in 1907 at the patent office in Bern, when he was struck by the realization that a person falling off a roof would feel weightless during the fall, an insight that provided the link between acceleration and gravity that is the foundation of general relativity. Einstein referred to this as “the happiest thought of my life.” Working out the consequences of that happy thought mathematically took the better part of eight years, but culminated in one of the greatest and most successful theories in modern physics.

The universal behavior of thermal radiation, then, seemed like a similarly promising source of insight, a phenomenon against which to test ideas about the distribution of energy in hot objects and the ways light and matter interact. Unfortunately, the best efforts of physicists in the late 1800s to predict the color of light emitted by hot objects at different temperatures failed spectacularly.

In the end, a full explanation of thermal radiation required a radical break with existing physics. The starting point for the whole of quantum theory, whose implications physicists are still debating more than a century later, is found in the red glow of the heating elements we use to cook breakfast.

In a very real sense, then, all of the bizarre phenomena associated with quantum physics—particle-wave duality, Schrödinger’s cat, “spooky action at a distance”—can be traced back to your kitchen.

Light Waves and Colors

As is often the case, the easiest way to explain the need for a radical new theory is to first illustrate the failure of the old one. Before we can understand how the quantum model solved the problem of thermal radiation, we need to see why classical physics couldn’t. That, of course, requires a bit of background in what classical physics had to say about light, heat, and matter.

The first essential concept underlying the experiments that led to the breakdown of classical physics is the idea of light as a wave. The wave nature of light was known for a half century before Maxwell’s equations, thanks in large part to experiments carried out around 1800 by the English polymath Thomas Young. Physicists had been arguing about whether light was best thought of as a stream of particles or a wave through some medium since the days of Newton, but Young convincingly demonstrated light’s wave nature with his ingeniously simple “double-slit” experiment.

As the name suggests, the double-slit experiment involves light passing through two narrow openings cut in a card. Young found that shining light through two closely spaced slits onto a screen on the other side does not produce the two bright stripes you might expect, one for light passing through each individual slit. Instead, what appears on the screen is a series of bright and dark spots.2

These spots arise from a process known as “interference,” which occurs whenever waves from two different sources combine. If the two waves reaching a given point arrive “in phase,” so that the peaks of one wave coincide with the peaks of the other, the waves combine to form a wave with a higher peak than either had alone. On the other hand, if the waves arrive “out of phase,” with one at a peak when the other is in a valley, they cancel out: the peaks of one fill in the valleys of the other, and the end result is no wave at all. This works with any source of waves—it’s responsible for the complex patterns of waves seen in wave pools at amusement parks, and the destructive interference of sound waves is the basis for “noise canceling” headphones.

The interference in Young’s double-slit experiment comes about because light waves from each slit take different amounts of time to travel to a particular point on the screen. At the point on the screen directly opposite the midpoint between the two slits, both waves travel the same distance, and thus arrive in phase, giving a bright spot. At a point a bit to the left of center, the waves from the left-hand slit travel a shorter path to the screen than the waves from the right-hand slit. This extra distance means the waves from the right slit have had a bit more time to oscillate, and if the distance is just right, the peaks of the right-slit waves fill in the valleys of the left-slit waves, making a dark spot. A bit farther out, though, the extra distance allows for an extra full oscillation, putting the right-slit peaks on top of the left-slit peaks, and making another bright spot.

Interference of light waves in a double-slit experiment. Midway between the slits, the waves arrive in phase and combine to create a bright spot. A bit above the center, waves from the bottom slit travel a longer distance, and thus undergo an extra half oscillation (dashed line), so that the peaks from the bottom wave fill in the valleys from the top, producing a dark spot. Some distance farther out, waves from the bottom slit complete a full additional oscillation (dashed line), and the waves are once again in phase, producing another bright spot.

This pattern repeats many times, leading to the array of bright and dark spots. The spacing between bright spots depends in a simple way on the wavelength, providing a convenient way to measure the wavelength of visible light—in modern units, this ranges from about 400 nanometers for violet light to about 700 nanometers for deep red.3 Adding more slits makes the bright spots narrower and more distinct, and by the 1820s Joseph von Fraunhofer was using “diffraction gratings” based on the interference of light to make the first reasonably precise measurements of the wavelengths of light emitted by the sun and other stars.
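To put rough numbers on this, here is a minimal sketch in Python of the standard small-angle result: for slits a distance d apart and a screen a distance D away, neighboring bright spots land about one wavelength times D/d apart, so measuring the fringe spacing gives the wavelength. The slit separation and screen distance below are assumed values, chosen only for illustration.

```python
# A minimal sketch (an illustration, not from the text): in the small-angle
# limit, neighboring bright fringes from a double slit are separated on the
# screen by delta_y = wavelength * D / d.
d = 0.1e-3  # assumed slit separation: 0.1 mm
D = 1.0     # assumed distance from the slits to the screen: 1 m

def wavelength_from_fringes(fringe_spacing_m):
    """Invert delta_y = wavelength * D / d to recover the wavelength in meters."""
    return fringe_spacing_m * d / D

# Bright spots 6.5 mm apart would imply a wavelength of 650 nm: deep red light.
print(wavelength_from_fringes(6.5e-3) * 1e9, "nm")  # -> ~650 nm
```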

Young’s experiment, published in 1807, caused some sensation in physics circles, but many scientists remained reluctant to discard the particle theory of light. When the French physicist Augustin-Jean Fresnel submitted a paper on wave theory to a physics competition, one of the holdouts, Siméon Denis Poisson, pointed out that the wave interference used to explain Young’s experiment would predict that there should be a bright spot at the center of the shadow of a round object. This bright spot in a shadow seemed plainly absurd, and Poisson thus rejected the wave model of light.

François Arago, one of the judges for the competition, was intrigued by Poisson’s idea and began a careful experimental search for bright spots at the center of shadows. Observing the spot takes exceptional care, but Arago was up to the task, and he definitively demonstrated that light passing around a circular obstacle really can interfere to produce a bright spot at the center of the shadow. This “spot of Arago” or “Fresnel spot” was the final bit of evidence needed to convince most physicists that light was indeed a wave.

Arago’s experiment secured the success of the wave model, but exactly what was waving remained a mystery into the 1860s, when Maxwell’s equations explained light as an electromagnetic wave. In the closing decades of the 1800s, then, the wave theory was firmly established, and physicists were seeking to explain all interactions between light and matter in terms of electromagnetic waves.

The spectrum of thermal radiation for several different temperatures. The vertical lines indicate the limits of the visible spectrum, showing how the peak moves from the infrared into the visible with increasing temperature.

When studying waves, there are two properties that we can readily measure: the wavelength and the frequency. Wavelength is the distance between peaks in the wave, seen in a snapshot of the whole pattern over some region. Frequency is the number of peaks per second passing a single point as the wave goes by. Because light travels at a fixed speed, frequency and wavelength are closely related: the wave moves forward one wavelength for each oscillation, so a shorter wavelength means more oscillations each second, and thus a higher frequency. Physicists switch back and forth between talking about light in terms of frequency and in terms of wavelength depending on which is most convenient for the particular problem at hand—we’ll make this switch a couple of times in the rest of this chapter.
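As a quick sketch of that relationship, here is the conversion in Python, using the fact that the frequency is just the speed of light divided by the wavelength:

```python
# Converting between wavelength and frequency for light: c = frequency * wavelength.
C = 2.998e8  # speed of light in m/s

def frequency_thz(wavelength_nm):
    """Frequency in terahertz for a wavelength given in nanometers."""
    return C / (wavelength_nm * 1e-9) / 1e12

print(frequency_thz(700))  # deep red light: ~428 THz
print(frequency_thz(400))  # violet light: ~750 THz
```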

Determining the “color” of light emitted by a hot object is a matter of measuring its spectrum: the intensity of light emitted at each frequency over a wide range. When we measure this spectrum for an object at a particular temperature, we find a simple characteristic shape: a distribution with small amounts of light at the lowest frequencies, increasing to a peak, and then dropping off rapidly at the high-frequency end. The “color” of the light is determined by the position of this peak—the exact frequency at which the emitted intensity is greatest—and depends only on the temperature, in a very simple way. As the temperature increases, the frequency at which the amount of light emitted reaches a maximum gets higher: at room temperature, the peak intensity is in the far infrared region of the spectrum, moving into the red end of the visible spectrum as the temperature increases to “red-hot,” and toward the blue end as the temperature increases further. A “white-hot” object has the peak of its spectrum in a region that would correspond to green light,4 but it emits significant amounts of light across the entire visible range of the spectrum and thus looks white. If you double the temperature (measured on the kelvin scale, which begins at absolute zero), the peak frequency also doubles.

The spectrum of light from the sun closely resembles this universal spectrum for light from a hot object, corresponding to a temperature of about 5600K, peaked at a frequency of around 600 THz—in fact, this is how we measure the temperature of the sun and other stars. At the other extreme of temperature is the cosmic microwave background, relic radiation left from shortly after the Big Bang that permeates the universe with a spectrum corresponding to that of an object at 2.7K, peaked at around 290 GHz.
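Those numbers come from Wien’s displacement law. A quick sketch using the wavelength form of the law (peak wavelength equals a constant b divided by the temperature), with the peak then expressed as a frequency, lands close to the quoted values; exactly where the “peak” sits depends on whether you plot the spectrum against wavelength or frequency, so the match is approximate:

```python
# A minimal sketch using Wien's displacement law in its wavelength form,
# lambda_peak = b / T, then expressing that peak wavelength as a frequency.
C = 2.998e8        # speed of light, m/s
B_WIEN = 2.898e-3  # Wien's displacement constant, m*K

def peak_frequency_hz(temperature_k):
    """Frequency corresponding to the peak wavelength of a black body at T."""
    return C / (B_WIEN / temperature_k)

print(peak_frequency_hz(5600) / 1e12, "THz")  # ~579 THz for the sun
print(peak_frequency_hz(2.7) / 1e9, "GHz")    # ~279 GHz for the microwave background
```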

Heat and Energy

Throughout the nineteenth century, in parallel with the developments in theories of electromagnetism and the wave model of light, there were great advances in the physics of thermodynamics. Just as the century opened with debate over two models of light—wave and particle—the early decades of the 1800s also saw debate over two competing models of heat. One school of thought viewed heat as a physical thing unto itself—a “subtle fluid” called “caloric” that flowed from one object to another. The competing model, “kinetic theory,” envisioned heat as arising from the random motion of the microscopic components making up macroscopic matter.

Over a period of several decades, experiments by Benjamin Thompson (also known as Count Rumford) and James Joule demonstrated a connection between mechanical work and the generation of heat that was difficult to reconcile with the caloric theory. Thompson showed that the friction involved in boring out a cannon could provide a seemingly inexhaustible source of heat, which should not have been possible if “caloric” were a real fluid. Joule strengthened this relationship by determining a precise value for the “mechanical equivalent of heat”—that is, how much work was needed to raise the temperature of a fixed amount of water one degree by stirring it.

On the more theoretical side, work by Rudolf Clausius and James Clerk Maxwell5 established the mathematics linking the flow of heat between objects to the kinetic energy of the atoms and molecules making them up. The Austrian physicist Ludwig Boltzmann built on Maxwell’s work, developing much of the statistical model of heat energy that we use today.

Individual atoms and molecules in a gas or solid rattle around at different velocities, but given a large enough number of them, we can use statistical methods to precisely predict the probability of finding atoms with a certain kinetic energy in a substance at a specified temperature. (The resulting formula is known as the “Maxwell-Boltzmann distribution” in honor of their pioneering work.) A crucial piece of this kinetic model is the notion of “equipartition,” introduced by Maxwell and refined by Boltzmann, which holds that energy is distributed equally among all types of motion available to a particle. A gas of single atoms has all its kinetic energy contained in the linear motion of its atoms, while a gas of simple molecules will have its energy split equally between linear motion of the molecules as a whole, vibration of the atoms within the molecules, and rotation of each molecule about its center of mass. Kinetic theory and this statistical approach successfully explained the thermal properties of many materials,6 and by the end of the 1800s, caloric theory had fallen by the wayside.
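To see equipartition in action numerically (a small sketch of the idea, not anything Maxwell or Boltzmann wrote down), we can sample one velocity component from the Maxwell-Boltzmann distribution, which is Gaussian for each component, and check that each independent direction of motion carries an average energy of one half of Boltzmann’s constant times the temperature:

```python
import math, random

# A minimal sketch of equipartition: sample one velocity component from the
# Maxwell-Boltzmann distribution and check its average kinetic energy.
K_B = 1.381e-23  # Boltzmann's constant, J/K
T = 300.0        # room temperature, K
M = 6.6e-27      # mass of a helium atom, kg

sigma = math.sqrt(K_B * T / M)  # each velocity component is Gaussian with this spread
samples = [random.gauss(0.0, sigma) for _ in range(200_000)]
mean_ke = sum(0.5 * M * v * v for v in samples) / len(samples)

print(mean_ke / (0.5 * K_B * T))  # ~1.0: each degree of freedom gets (1/2) k_B T
```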

Since the emission of light requires heat energy, and light plays a significant role in transmitting heat—this is why cooks cover some dishes with foil, to block light and reduce burning—physicists naturally began to investigate the connection between electromagnetic waves and thermal energy. This project required empirical data, so in the late 1800s, spectroscopists in Germany conducted experiments to measure the spectrum of light emitted by hot objects over a wide range of temperatures and wavelengths. The experimental results were of high quality, but an explanation of those results in terms of the kinetic model of thermal physics remained elusive.

In the 1890s, two competing models, by Wilhelm Wien in Germany and Lord Rayleigh in Britain, made empirical predictions of the amount of light emitted at a given wavelength for a given temperature—formulae based on general principles and experimental data from one range of wavelengths that they hoped to extend to other ranges. Wien’s predictions matched the data at high frequencies but failed at lower ones, while Rayleigh’s worked only at low frequencies. In 1900, Max Planck found a mathematical function that combined the two and at last lined up with the observed data. Planck derived this function after a party he hosted where spectroscopist Heinrich Rubens told him about Rayleigh’s predictions and the latest experimental results. When the guests left, Planck retreated to his study, and some time later emerged with the correct formula, which he sent to Rubens on a postcard the same evening. But while Planck’s formula was a great empirical success, nobody could explain why it worked, at least not using what, at the time, were the accepted fundamental principles of physics.

The Ultraviolet Catastrophe

So, what should a model based on those principles look like? The general approach is most clearly illustrated by the method attempted by British physicists Lord Rayleigh and James Jeans (which actually slightly postdates Planck’s successful quantum model). The Rayleigh-Jeans model fails, but in a way that makes the origin of the failure clear, and the eventual solution can be explained using the same basic language.

The idea behind the Rayleigh-Jeans approach to the problem of thermal radiation is very simple, and relies on the notion of equipartition used by Maxwell and Boltzmann in describing the thermal properties of gases: you simply take the energy available from heat and divide it evenly among the possible frequencies of light. “Divide it evenly” demands a countable set of possible frequencies, though, which means physicists would need a simplified theoretical model to break down the continuous spectrum of light.

The trick to making the frequencies countable follows directly from the universality of the radiation observed: remember, the spectrum of light from a hot object doesn’t depend on any of the material properties of that object. The theoretical model would need to reflect this, which led physicists to consider the light emitted by an idealized “black body,” an object that absorbs any and all light that falls on it, reflecting nothing.7 This doesn’t mean that the object is dark, emitting no light—if that were the case, it would rapidly heat up and disintegrate—only that, as with the glow of a heating element, the light it emits does not depend in any way on the light it absorbs.

It turns out that there’s a nice, practical way to make such a black body in the lab: a box with a small hole in it. As long as the hole is small compared to the size of the box, any light entering is extremely unlikely to come right back out; instead, it has to bounce around many times before it can escape (if it isn’t absorbed first). This approximates the essential “blackness” of the black body: light falling on it is absorbed and not reflected, regardless of frequency. The physicists making measurements of thermal radiation8 used exactly this technique to make the sources for their experiments.

The box-with-a-small-hole model is also a great boon for theoretical physicists because the waves inside the box will be restricted to a limited set of frequencies. Waves that fit nicely within the boundaries of the box endure, while waves at the “wrong” frequencies will interfere with each other and get wiped out. Whatever light leaks out of the hole, then, will reflect the limited set of frequencies that exist inside, and have nothing to do with whatever’s going on outside the box.9

Once physicists hit upon the trick for determining a limited set of allowed frequencies, the hope was that when they tallied up the allowed frequencies inside the box, and divided the available energy among them, the resulting spectrum would resemble that observed in experiments and described by Planck’s formula. Unfortunately, this simple and straightforward approach failed spectacularly. We can see the problem just by going through the process of counting up the allowed frequencies.

The allowed frequencies inside the box are called “standing wave modes,” and these are determined by the size of the box and the constraint that none of the waves are allowed to leave (as long as the hole in the box is small enough, the fraction of light that escapes is so tiny it can safely be ignored). For the sake of illustrating the origin and characteristics of these standing-wave modes, we can simplify things still further, imagining a “box” that has just one dimension: waves can travel only left and right, no other directions. This has a simple and familiar everyday analogue: the string of a musical instrument.

A guitar player makes sound by plucking a string, displacing a small part of the string and creating a disturbance that travels outward in the form of waves shaking the string up and down. The two ends of the string are fixed, so when a wave traveling up the neck reaches the player’s finger pressing the string against the fret, it bounces back, reversing direction to travel back down the neck again. It doesn’t take long before waves traveling in opposite directions find themselves occupying the same stretch of string, at which point they interfere with each other like the light from the two slits in Young’s famous experiment.

When you add together all these waves bouncing back and forth, you find that for most wavelengths the end result is complete destructive interference. For every wave trying to make the string rise to a peak, there’s another trying to push it down to a valley, and they cancel each other out. For a very particular set of wavelengths, though, you get constructive interference: all the various reflected waves rise to a peak at exactly the same place. These wavelengths give rise to stable patterns of waves along the string, where some parts of the string move quite a bit, while others remain fixed in place.

The simplest of these patterns is a “fundamental mode” with a single oscillating lump between the fixed ends. We typically draw this as an upward-going bump, but really it varies in time: the bit of string in the middle is pulled upwards, then it drops back to the flat position, then it moves down to a negative peak, then back to zero, then back to the upward peak, and so on. The time required to complete an oscillation is determined by the frequency associated with the wavelength of the mode in question.

The wavelength of a wave is defined as the distance to go up to a peak, then down to a valley, and back to the start. A single up-and-back-to-zero motion is half a wave, so the wavelength associated with the fundamental mode is twice the length of the string. The next simplest pattern fits a full wave between the fixed ends, going up (or down) and then back down (or up), with a fixed “node” in the center where the string does not move; the wavelength of this second “harmonic” is exactly equal to the length of the string. The next harmonic has one and a half waves (three oscillating lumps and two nodes) for a wavelength of two-thirds the length of the string; the next has two waves with a wavelength of half the length of the string, and so on.

If we look closely at these allowed modes, we find a simple pattern: in each of the allowed standing-wave modes, a whole number of half wavelengths fits across the length of the string. There is a discrete set of these allowed modes, and we can assign each of them a number—the number of oscillating lumps in the pattern.

Some of the standing-wave modes in a one-dimensional “box” of length L with the wavelength λ for each mode.
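In symbols, this pattern says the allowed wavelengths are 2L/n for a string of length L and mode number n. A few lines of Python list the first few; the string length here is an assumed, guitar-ish value:

```python
# Allowed standing-wave wavelengths on a string of length L with fixed ends:
# a whole number n of half wavelengths fits, so wavelength_n = 2 * L / n.
L = 0.65  # assumed string length in meters, roughly a guitar's scale length

for n in range(1, 5):
    print(f"mode {n}: wavelength = {2 * L / n:.3f} m")
# mode 1 (the fundamental): 1.300 m; mode 2: 0.650 m;
# mode 3: 0.433 m; mode 4: 0.325 m
```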

The sound that we hear from a guitar makes a nice analogy with the spectrum we see from a black body in this model. The initial plucking of the string will excite waves at a huge number of different frequencies, like the light that enters the “box” for our black body. After a very short time, though, destructive interference between the many reflections off the ends of the string or the walls of the box wipes out most of these wavelengths, leaving only those that correspond to standing-wave modes.

In the case of the guitar string, most of the energy of the wave ends up in the fundamental mode, which as the name suggests is the primary determinant of the sound that we hear. The higher-frequency harmonics get a smaller share of the energy but are still present, and they are responsible for the rich sound of a real instrument compared to, say, a computer generating a single pure tone. The many different tunings and effects used by guitarists amplify some of these harmonics and damp others, producing the distinctive mix that distinguishes the sound of, say, Jerry Garcia’s guitar from that of Jimi Hendrix.

For light waves in our black-body box, the distribution of energy is determined not by the aesthetic tastes of a particular player, but by a simple rule from thermal physics: equipartition. The process of identifying the standing-wave modes is a little more complicated for light in three dimensions than sound in one dimension, but leads to the same result: a discrete set of numbered modes that can be counted. Once we know these modes, equipartition tells us to allot each mode an equal share of the total energy available from the thermal motion of the particles making up the walls of the box (which, remember, are standing in for the particles making up a hot object).10

The problem is that as we move to shorter wavelengths, the allowed modes get closer and closer together. If we tally up the number of modes within some given range of wavelengths, we find that it increases without limit at short wavelengths (which, remember, correspond to high frequencies). If we imagine a string one meter long, with a fundamental wavelength of two meters, there are two allowed modes with wavelengths in the five millimeters between 0.1 m and 0.095 m—that is, two wavelengths that can fit an integer number of their half wavelengths across the string. In the five-millimeter wavelength range between 0.02 m and 0.015 m, there are thirty-four modes. Between 0.01 m and 0.005 m, there are over two hundred modes.
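Counting the modes directly makes this runaway growth explicit. Here is a short sketch tallying the allowed wavelengths in each of the five-millimeter windows just mentioned:

```python
import math

# For a string of length L with fixed ends, allowed wavelengths are
# lambda_n = 2L/n; count the integers n whose wavelength lands in a window.
L = 1.0  # string length in meters; fundamental wavelength 2L = 2 m

def modes_between(lam_min, lam_max):
    """Number of mode numbers n with lam_min <= 2L/n <= lam_max."""
    n_low = math.ceil(2 * L / lam_max)   # longer wavelength -> smaller n
    n_high = math.floor(2 * L / lam_min)
    return n_high - n_low + 1

print(modes_between(0.095, 0.1))   # 2
print(modes_between(0.015, 0.02))  # 34
print(modes_between(0.005, 0.01))  # 201 -- "over two hundred"
```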

In terms of a spectrum, this model doesn’t reproduce the nice, simple peak at an intermediate wavelength found in experiments. On the contrary, it says that any object, regardless of temperature, ought to spray out an infinite amount of short-wavelength (high-frequency) radiation. This is not what you want in a toaster.

The spectrum of thermal radiation at different temperatures, plus the prediction of the Rayleigh-Jeans model, i.e., the “ultraviolet catastrophe.”

This failure of the straightforward mode-counting approach was so bad that it picked up the name “ultraviolet catastrophe.”11 Explaining the peak seen in the real black-body spectrum, and successfully described by Planck’s formula from 1900, required a fundamental shift in our understanding of the way energy is distributed.

The Quantum Hypothesis

Fittingly, it was the same Max Planck who’d found the mathematical function accurately describing the shape of the spectrum of emitted light who eventually found a way to explain the origin of that spectrum. In the terms of the model described above, Planck associated each of the standing-wave light modes with an “oscillator” inside the material, with each oscillator emitting only a single frequency of light. He assigned each of these oscillators a characteristic energy, equal to the frequency of that oscillator multiplied by a small constant, and then required the energy emitted by a given oscillator to be an integer multiple of this characteristic energy, which he called a “quantum” after the Latin word for “how much”—so an oscillator could emit one quantum of energy, or two, or three, but never half a quantum, or π quanta.

This “quantum hypothesis” does the necessary trick of cutting off the amount of light at high frequencies—exactly where the ultraviolet catastrophe happens. When we allocate each “oscillator” an equal share of the available heat energy, a low-frequency oscillator’s share amounts to many times its characteristic energy, and thus it emits many quanta of light. As the frequency increases, the amount of light emitted by each individual oscillator goes down, because each oscillator’s share of the heat energy amounts to a smaller multiple of its characteristic energy. And when the frequency gets high enough that the characteristic energy is bigger than the oscillator’s share of the heat energy, it can’t emit any light at all.
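We can turn that cutoff into a cartoon (a deliberately crude sketch, not Planck’s actual derivation, which uses proper statistical averages rather than a hard cutoff): give each oscillator a thermal share of roughly Boltzmann’s constant times the temperature, and ask how many whole quanta of its characteristic energy fit inside that share:

```python
# A toy illustration of the quantum cutoff (not Planck's real derivation):
# each oscillator's thermal share is roughly k_B*T, and it can only emit
# whole quanta of size h*f, so high-frequency oscillators emit nothing.
H = 6.626e-34    # Planck's constant, J*s
K_B = 1.381e-23  # Boltzmann's constant, J/K
T = 3000.0       # a red-hot object, K

for f in [1e12, 5e12, 1e13, 1e14]:  # far infrared up toward the visible
    whole_quanta = int(K_B * T / (H * f))
    print(f"{f:.0e} Hz: {whole_quanta} whole quanta fit in the thermal share")
# 1e12 Hz: 62; 5e12 Hz: 12; 1e13 Hz: 6; 1e14 Hz: 0 -- emission shuts off
```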

At low frequency, then, there are relatively few oscillators, because there are few possible standing waves at relatively long wavelengths, but each emits many “quanta” worth of light. At high frequency, there are many oscillators (because there are many allowed modes at shorter wavelengths), but each emits little or no light. The competition between the increasing number and decreasing emission gives exactly the kind of peaked spectrum observed in black-body radiation: starting at long wavelengths and moving down, the increase in the number of oscillators is initially faster than the decrease in light emitted per oscillator, so the total amount of light increases to a peak, then decreases as the emission cuts off completely. And it explains the shifting peak of the spectrum, as well: as the temperature increases, the amount of heat energy increases, increasing the share allotted to each mode, and pushing up the frequency where the quantum hypothesis cuts off the light emission.

Planck initially introduced the quantum hypothesis thinking it was a “desperate mathematical trick.” And in fact it was a bookkeeping trick of a type often employed in calculus. Mathematical physicists regularly describe smooth, continuous phenomena in terms of discrete steps when setting up a problem, then use well-honed mathematical techniques to make the “steps” infinitesimally small and restore the original smoothness. Planck knew that giving each oscillator a characteristic energy that increased with frequency would give the resulting spectrum the cutoff he needed, but he also thought he would be able to use calculus to reduce the constant multiplying the frequency to zero, restoring the smoothness and doing away with the steplike quanta of energy. Instead, he found that the constant needed to take a very small but stubbornly nonzero value: these days, it’s called “Planck’s constant” in his honor, and goes by the variable h, with a value of 0.0000000000000000000000000000000006626 joule-seconds—a very small number indeed.12 With the quantum hypothesis in place—namely that energy comes in discrete, irreducible “packets”—and h taking that tiny but nonzero value, the process of dividing the available energy among all the possible frequencies leads to exactly the formula that Planck had found to describe the black-body spectrum.
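Putting the pieces together in code (a minimal sketch in the standard modern form, not Planck’s own notation): the formula replaces the classical energy per mode with a factor that matches it at low frequencies and dies off exponentially at high ones, taming the ultraviolet catastrophe. The temperature and frequencies below are chosen only for illustration.

```python
import math

# A minimal sketch of Planck's formula for the spectral energy density,
# compared with the classical (Rayleigh-Jeans) prediction that diverges.
H = 6.626e-34    # Planck's constant, J*s
K_B = 1.381e-23  # Boltzmann's constant, J/K
C = 2.998e8      # speed of light, m/s

def planck(f, T):
    """Planck spectral energy density u(f, T), in J/m^3 per Hz."""
    return (8 * math.pi * H * f**3 / C**3) / math.expm1(H * f / (K_B * T))

def rayleigh_jeans(f, T):
    """Classical result: k_B*T per mode, growing without bound at high f."""
    return (8 * math.pi * f**2 / C**3) * K_B * T

T = 3000.0
for f in [1e13, 1e14, 1e15]:  # infrared, near-visible, ultraviolet
    print(f"{f:.0e} Hz: Planck {planck(f, T):.2e}, classical {rayleigh_jeans(f, T):.2e}")
# The two agree at low frequency; at 1e15 Hz the classical value keeps
# growing while Planck's is vanishingly small.
```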

Planck’s formula is a spectacular success, and has become an invaluable tool for many areas of physics. Astronomers use it to determine the temperature of distant stars and gas clouds by measuring the spectrum of the light they emit. The spectrum of light from a typical star—our sun included—closely resembles a black-body spectrum, and by comparing the light we see to the prediction from Planck’s formula, we can deduce the temperature on the surface of stars many light-years away.

Probably the most perfect black-body spectrum ever measured is the “cosmic microwave background” mentioned earlier, a weak radiation field in the radio-frequency part of the spectrum that permeates the entire universe. This background radiation is one of the best pieces of evidence for Big Bang cosmology: the microwaves we see today were created about 300,000 years after the Big Bang, when the universe was still extremely hot and dense, but had cooled just enough to allow photons to escape. Over the intervening billions of years, the universe has expanded and cooled, so what were once high-energy, visible-light photons reflecting a temperature of thousands of kelvin have been stretched out to microwave wavelengths. The spectrum has been measured many times, and matches a black body at about 2.7K to phenomenal accuracy. In fact, tiny variations in the temperature of that background radiation from different points of the sky—shifts of millionths of a kelvin—provide the best information we have about the conditions of the very early universe, and the origins of galaxies, stars, and planets.

On a more down-to-earth level, the Planck formula informs the way we talk about light and heat every day. Photographers and designers talk about the “color temperature” of various kinds of light, which is a number in kelvin that corresponds to the temperature of the black body whose visible spectrum most closely matches the light in question.13 The different styles of lightbulbs available at your favorite home-improvement store—“soft white,” “natural light,” and so on—use a variety of techniques to produce light with a spectrum that resembles black-body radiation from objects of different temperatures.

In the context of breakfast, black-body radiation can be used to determine the temperature of hot objects—if your kitchen contains one of those infrared thermometers that you point at a pan to see whether it’s hot enough, you’re making use of Planck’s formula. A sensor in the thermometer detects the total amount of invisible infrared radiation coming from whatever it’s pointed at, and uses that to deduce the temperature of a black body that would emit that much infrared light.
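Here is a toy version of that inversion, assuming a perfect black body (real infrared thermometers also correct for the surface’s emissivity and the sensor’s field of view). It uses the Stefan-Boltzmann law, which says the total radiated power per unit area grows as the fourth power of the temperature:

```python
# A toy version of an infrared thermometer's math, assuming ideal black-body
# behavior: invert the Stefan-Boltzmann law P/A = sigma * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def temperature_k(power_per_area_w_m2):
    """Estimate temperature in kelvin from radiated power per unit area."""
    return (power_per_area_w_m2 / SIGMA) ** 0.25

print(temperature_k(459))    # ~300 K: room temperature
print(temperature_k(10000))  # ~648 K: a very hot, empty pan
```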

Despite the many successes of his formula and the personal fame it brought him, Max Planck himself was never particularly satisfied with his quantum theory. He regarded the quantum hypothesis as an ugly ad hoc trick, and he hoped that someone would find a way to get from basic physical principles to his formula for the spectrum without resorting to that quantum business. Once the idea was out there, though, other physicists picked it up and ran with it—most notably a certain patent clerk in Switzerland—leading to a complete and radical transformation of all of physics.

Notes

1 This works provided both objects are dense enough for air resistance to be negligible—if you were to drop a paper clip and a feather, the paper clip would drop rapidly while the feather fluttered slowly to the ground. The force of gravity acting upon them, though, is the same—in a vacuum, they would reach the ground together, as demonstrated dramatically (with a hammer and a feather) by Commander Dave Scott during the Apollo 15 mission to the moon.

2 If you’d like to see for yourself, you can make two fine slits in a piece of aluminum foil and illuminate them with light from a laser pointer. Another closely related phenomenon is even easier to see: if you put a strand of hair in the beam of a laser pointer, the light waves passing around the different sides of the hair will interfere and make a pattern of multiple spots.

3 One nanometer is 10⁻⁹ m, or 0.000000001 m.

4 Relating the wavelength or frequency of light to the color perceived by humans is a tricky business, particularly when it comes to dealing with light at multiple frequencies. The color addition that kids learn in elementary school is an example of this—a mix of red light (around 650 nm wavelength) and blue light (around 490 nm) will create the same impression in your eyes and brain as violet light (around 405 nm) even though there is no violet light present.

5 Yes, the same Maxwell who worked on electromagnetism. Physics in Europe in the 1800s was a smallish community, and Maxwell was a really smart guy.

6 At high temperatures, anyway; at very low temperatures, and for some very hard materials, the Maxwell-Boltzmann kinetic theory fails. These anomalies were another hint of the need for new physics and would play a role in the rise of quantum mechanics in the early 1900s.

7 In the immortal words of Nigel Tufnel in This Is Spinal Tap, “How much more black could this be? And the answer is: None. None more black.”

8 Notably the German experimentalists Otto Lummer and Ferdinand Kurlbaum.

9 This might seem like it’s reintroducing properties specific to a particular “box,” but as long as the box is very large compared to the wavelength of the waves inside, there are well-established mathematical techniques for smoothing this out to get an answer that doesn’t involve the size of the specific box.

10 Admittedly, there are an infinite number of these modes, but dealing with these kinds of infinities is exactly the reason physicists invented calculus.

11 This was coined by Paul Ehrenfest in 1911, in reference to the Rayleigh-Jeans model from 1905, and would be a great name for a band.

12 It is more commonly written as h = 6.626070040 × 10⁻³⁴ kg·m²/s.

13 Human perception makes the language around color and temperature confusing: reddish light is traditionally called “warm,” even though it corresponds to a lower temperature source, while bluish light is called “cool.”