5.1 The Strange Tale of the King of Neutrinia
Some physicists are much taken with parallel universes and are happy to postulate their existence, expound on properties they might have, and rejoice in the fact that although there is little or no possibility of verifying the existence of these worlds, at least there is no inconsistency with physical theories that actually precludes their being.
In one of these multitudinous parallel worlds, there existed a tiny kingdom, called “Neutrinia”, with a king, Neutrino the First, who was very young and inexperienced, with many in his kingdom claiming that he lacked sufficient gravitas for his regal role. Now King Neutrino was madly in love with Princess Juliet, the daughter of the king of Protonia, a neighbouring country. Juliet also loved Neutrino dearly, and the only requirement for their lasting happiness was the blessing of Juliet’s father, the curmudgeonly King Proton the 100th.
King Proton was very proud of his long dynasty and of his kingdom’s wealth. When King Neutrino asked for Juliet’s hand, he replied that he would only concede his daughter to a King able to donate to her as a Wedding present the famous pink diamond, Darya-ye Noor. Unfortunately, Neutrinia was small and even if Neutrino commandeered its entire wealth, it would not suffice to purchase such an illustrious jewel. The distraught king approached one of his friends, Paolo Diracus, who happened to be a very famous physicist, for advice.
The answer he received was indeed very helpful. Diracus replied: “As you probably know, in this world, Heisenberg’s Uncertainty Principle is valid, not just in the realm of physics, but also in the field of economics.” King Neutrino nodded sagely, although he understood little. Technical issues tended to put him in a spin. The physicist continued: “And my experimentalist friends tell me that lately some of the fundamental physical constants have begun to fluctuate wildly in value. Indeed, h, a constant named after my colleague, Maximus Planck, has recently increased greatly. So perhaps you can borrow, free of charge, the amount of money you need to buy the diamond, provided that you only need to show it to Juliet’s father for a very short time.” Pulling a pencil and an old bus ticket out of his pocket, Diracus exclaimed: “Let’s do the calculations!” and began to scribble down some equations and numbers.
To King Neutrino’s delight, everything worked out exactly as Diracus had foretold. Neutrino held possession of the diamond just long enough to show it to Proton, and Neutrino and the beautiful Juliet were married and lived happily ever after.
***
If King Neutrino’s world seems strange to you, dear reader, then welcome to the crazy domain of Quantum Mechanics. In this chapter, ideas that seem bizarre based on our everyday experience are explored, and shown to be not only possible, but essential for the explanation of experimental observations. “Common Sense”, which was never common1 and seldom sense, is no longer useful as a guide to understanding Quantum Mechanics. Instead reliance must be placed on consistency, meticulous experimentation and observation to lead us in our quest.
5.2 Et Lux Fuit (and There Was Light)
Let us continue our story in our own universe at the end of the Nineteenth Century, when the general consensus among scientists was that all relevant Physics had by then been discovered, with only a few details remaining to be clarified. The grounds for such optimism lay in the recently proposed theory of electromagnetic fields and waves by James Clerk Maxwell. In his seminal papers and book [1], Maxwell provided a convincing explanation of all known electric and magnetic phenomena. His theory, which also gathered up earlier work by Faraday and Gauss, predicted many as yet unknown effects,2 which were subsequently discovered. In addition, Maxwell succeeded in unifying two forces (electric and magnetic), which up to then had always been considered distinct and independent. Maxwell’s achievement has been called “the second grand unification”, the first being Newton’s unification of terrestrial and celestial mechanics.
But perhaps the greatest merit of Maxwell’s work was to show that light (or, more precisely, the propagation of optical waves) was nothing but the solution of his equations in the most elementary case of a completely empty medium (vacuum). This was remarkable, since all other cases of propagating waves require a medium of some substance to support them, e.g. air for acoustic waves (sound) and water for sea waves. Luckily, light can also propagate, although at the cost of some absorption, in tenuous media such as air. This applies not only to visible light (i.e. radiation in the range of frequencies which can be captured by the human eye). It also applies to x-rays, gamma rays, ultraviolet and infrared light, as well as to the waves used for the transmission of radio and television signals. All these forms of radiation propagate in exactly the same manner, as predicted by Maxwell’s equations, and are distinguished only by their different frequencies.
Frequency, wavelength, and velocity are the three quantities that characterise all forms of wave motion. When we throw a pebble into a pond, we can observe these properties readily. By concentrating our attention on one point in the pond, we can see the water surface rising and falling as the wave moves by. The number of times the surface rises and falls per second at this point is the frequency of the wave. On the other hand, if we concentrate on the wavefront itself, and measure how far the wave progresses in one second, we have the velocity of the wave. The third quantity, the wavelength, is the distance between successive wavefronts. A little thought will convince the reader that these three quantities are not independent; in fact, the velocity is equal to the product of the other two.
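For readers who enjoy a little programming, the relation above can be illustrated with a short Python sketch (the numerical values are merely representative examples, not taken from the text):

```python
# The basic relation for all wave motion:
#   velocity = frequency * wavelength
# and hence wavelength = velocity / frequency.

def wavelength(velocity, frequency):
    """Return the wavelength of a wave, given its velocity and frequency."""
    return velocity / frequency

# Sound travels in air at roughly 343 m/s; a 440 Hz tone (concert A)
# therefore has a wavelength of about 0.78 m.
print(wavelength(343.0, 440.0))
```

The same one-line relation holds for ripples on a pond, sound, and light alike; only the numbers differ.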
For some types of waves, the velocity is different for different frequencies. An example is ocean swell. These long wavelength waves are generated by distant storms, far from the beach where they are observed, and can sometimes travel half-way around the globe. The longest wavelength waves travel faster and arrive first, followed by those with progressively shorter wavelengths. The shortest wavelength waves, or chop, are soon absorbed and do not travel over inter-continental distances. A measurement of the wavelength gives an indication of how far away the storm that generated the waves was located. This characteristic—the variation of the velocity of waves with frequency—is called dispersion.
Electromagnetic waves travelling in a vacuum are non-dispersive, i.e. all frequencies travel with the same speed. This speed, called the speed of light, is one of the fundamental physical constants and is given the symbol c. However, when travelling through a medium, the waves are slowed down, and the magnitude of this effect does depend on the frequency.
Fig. 5.1 The visible region, shown in colour, is only a small part of the overall electromagnetic spectrum, which ranges from cosmic rays to long-wavelength radio waves
From the above discussion, it is clear that light is a form of wave propagation. No room is left for any doubt. As if these theoretical considerations were not convincing enough, there exists a rich phenomenology demonstrating this particular characteristic: diffraction, interference, and polarisation are all examples of light’s wavy personality. These effects are discussed in more detail later in this Chapter. They can produce flashy displays of natural splendour, such as the rainbows that accompany a sun shower.

Fig. 5.2 A soap bubble, displaying the superimposed effects of reflection and refraction. Image courtesy of Jooinn (https://jooinn.com/colorful-soap-bubbles.html (accessed 2020/5/20))
Despite the evidence outlined above for the wave nature of light, we shall see in the next section that there are other observations, from which it is equally clear that light is composed of beams of particles. Such is the weird and almost contradictory nature of Quantum Mechanics.
5.3 One, None, and a Hundred Thousand
In one of his best-known novels [2], Uno, Nessuno, Centomila, the Italian writer Luigi Pirandello argued that, although we may believe in our own enduring identity, we are in a constant state of change, not only because of the unavoidable effects of ageing, but also because we are different when we are with different people. We might change consciously: in fact, it may be advantageous for a young person to show off in the presence of a desirable potential partner, and then try to look miserable when asking for financial support from parents. But we also change unconsciously, for when we are in different company, we instinctively tend to distort our personality.
This behaviour is, of course, only human. Or is it? If we look at the nature of light, we might get the impression that light is no less Pirandellian than we are. In fact, one of the few details left to be clarified at the end of the 19th Century was precisely that: the nature of light. As we have seen in the previous Section, the wave-like character of light was very well established by Maxwell’s theory.
On the other hand, since antiquity poets and scientists alike have spoken of solar and lunar rays. Newton’s corpuscular theory of light asserts that light is made up of small discrete particles (corpuscles), which travel in straight lines (rays). The idea of tiny elementary particles comes from the first Greek philosophers who, even though fascinated by the immensity of the heavens, also turned their attentions to the very small.
Objects around us are usually complex, being constructed from various materials that are obviously different from each other and from the composite that they form. However, suppose we isolate one of these constituents, for example, a cube of sugar. What happens if we break it into two? Will these two distinct parts have similar properties to each other, and to the original cube? Suppose we now break these two halves further, and continue to do so, over and over again. Will these minute specks of sugar also continue to taste sweet and exhibit the other properties of the original cube? Can this decomposition proceed forever, or will we reach a limit that cannot be overcome? If we reach this limit, will the final minute particles still be sugar with all its properties, or will they be small specks of something else entirely?
What we have described above is a Gedankenexperiment, or thought experiment. These intellectual exercises are used frequently in philosophy, Quantum Mechanics, and other branches of modern physics, to explore hypotheses in a logical fashion. Leucippus and Democritus, two philosophers from the 5th Century BCE, were probably the first to suggest that if matter is broken down as outlined above, eventually one will arrive at small particles that are indivisible [3]. The name “atoms” was coined from the Greek “atomos”, meaning indivisible.
Democritus held the belief that atoms moved freely in a void, colliding with each other, and sometimes sticking together. Considering its antiquity, this was a remarkably accurate anticipation of the modern viewpoint. However, the atomic theory was rejected by Aristotle, certainly the most influential philosopher of Ancient Greece (whom we met in Chap. 1), and so was disregarded for two millennia, up until the nineteenth century. Paradoxically, science is sometimes best served by a healthy disrespect for authority.
Further development of the concept of atoms had to await the work of John Dalton (1766–1844), an English chemist with a Quaker background. His atomic theory, which underpins modern chemistry, held that all matter is composed of atoms, which are indivisible and indestructible. The simplest materials, known as elements, are comprised of identical atoms. More complex materials, or compounds, are made up from combinations of different types of atoms. Chemical reactions proceed by rearranging the atoms of different compounds: each new arrangement corresponds to a new compound with chemical properties differing from those of its antecedents.
Dalton’s theory was successful at explaining much of chemistry, in particular the fact that elements combine in particular ratios to form compounds. Ludwig Boltzmann in the late 1800s expanded Dalton’s ideas to explain the temperature and pressure of a gas as a consequence of the motion of the constituent molecules. However, in the eyes of many 19th Century physicists, the atomic theory suffered from one overriding disadvantage: the atoms are so small that surely they must be completely unobservable. As we remarked in Chap. 3, a physical assumption must be falsifiable: if a quantity cannot be observed, it is outside the scope of physics.
This fundamentally philosophical disagreement on the nature of matter was resolved by Albert Einstein in the first of his ground-breaking papers of 1905 [4]. In 1827, a botanist, Robert Brown, had observed that tiny particles in a fluid, such as air or water, exhibited a jerky, random motion. This Brownian motion, named after its discoverer, was suspected of being the result of collisions of the fluid’s molecules against the particles. If a coloured dye were added to water, the molecular collisions produced a slow diffusion of the colour throughout the liquid. Einstein studied this problem mathematically, and found that the diffusion rate could be related to the size of the molecules. The precision of his results convinced the doubters, and atoms became accepted not just as a mathematical artifice to explain chemical reactions, but as real objects. The unobservable had become observed, albeit indirectly.
But let us return now to the debate, which lasted several centuries, about the nature of light. As we have seen, Maxwell’s theory seemed to have put an end to the dispute, but there remained an unexplained but important phenomenon called black body radiation (see Appendix 5.2). Suppose that we carry out an experiment by placing a small piece of iron in a blacksmith’s forge and observe the change as we slowly turn up the heat. At first the iron starts to glow a dull cherry red, but as it gets hotter, its red colour becomes lighter and more yellow. If we can make it hot enough, the iron will glow with a white light, and if it could be made as hot as the inside of some stars, it would look blue.
This is a commonly observed effect that everybody who has lived in the era of tungsten filament lamps has witnessed, but for decades physicists strove in vain to explain it quantitatively. All their calculations predicted that most of the radiation would be emitted at ultraviolet frequencies, and they dubbed the phenomenon the Ultraviolet Catastrophe.
It was not until nearly four decades after Maxwell’s work that Max Planck in 1900 produced an explanation of black body radiation. He made the radical and, at the time, arbitrary assumption that electromagnetic radiation was not emitted continuously, but instead came out in discrete packets, or quanta. There was no justification for this assumption, which was completely contrary to the tenets of classical physics (see Appendix 5.3), other than that it worked! Planck proposed that the energy of these packets is directly proportional to the frequency of the electromagnetic radiation. The constant of proportionality was given the symbol h, which is now known as Planck’s constant.
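Planck’s proposal can be made concrete with a rough numerical sketch in Python (the frequency chosen for green light is approximate):

```python
# Planck's proposal: radiation of frequency f is emitted in discrete
# packets (quanta), each carrying energy E = h * f.
PLANCK_H = 6.626e-34  # Planck's constant, in joule-seconds (approximate)

def quantum_energy(frequency_hz):
    """Energy, in joules, of a single quantum at the given frequency."""
    return PLANCK_H * frequency_hz

# Green light has a frequency of roughly 5.6e14 Hz, so each quantum
# carries only about 3.7e-19 joules -- far too small to notice in
# everyday life, which is why light appears continuous to us.
print(quantum_energy(5.6e14))
```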
Another challenge for Maxwell’s theory arose in 1887 when Heinrich Hertz investigated the photoelectric effect, which consists of the emission of electrons when light is allowed to fall on a metal surface. Subsequent experiments showed that the energy of the emitted electrons depends on the frequency of the light, and not, as classical wave theory would suggest, on the intensity of the beam. In 1905, Einstein explained the anomaly by proposing that the incident light beam is not a continuous wave, but rather is composed of discrete wave packets (now called photons).4 The energy of each photon is proportional to its frequency, and the photons can be identified with the quanta of Planck.
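Einstein’s picture can be sketched in a few lines of Python (the work-function value used in the test cases below is purely illustrative, not a measured property of any particular metal):

```python
# Einstein's account of the photoelectric effect: an electron absorbs
# ONE photon of energy h*f, and escapes the metal with kinetic energy
#   E_k = h*f - W,
# where W (the "work function") is the energy needed to escape the surface.
PLANCK_H = 6.626e-34  # Planck's constant, in joule-seconds (approximate)

def ejected_electron_energy(frequency_hz, work_function_j):
    """Kinetic energy (joules) of the ejected electron, or None when the
    photon energy is below threshold (no emission, however intense the beam)."""
    surplus = PLANCK_H * frequency_hz - work_function_j
    return surplus if surplus > 0 else None
```

Raising the beam intensity only increases the number of photons, not the energy of each one; below the threshold frequency no electrons emerge at all, exactly as observed.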
Readers will by now be querying why it is so difficult for physicists to distinguish between waves and particles. Surely the two phenomena are nothing like each other. We’ve all seen ripples on a pond, we’ve all held a pebble in our hand. So, what’s the problem? Is light a form of electromagnetic waves, or is it made up of photons shot out like bullets?
The answer, however, is not as simple as the question: sometimes light is one thing, sometimes the other. Being a truly Pirandellian creature, it has a third, more complex nature, in which both the wave and particle personalities coexist. We will try to explain this duality in the following Sections.
5.4 The QM Revolution
Il Buonsenso, che già fu caposcuola,
ora in parecchie scuole è morto affatto;
la Scienza sua figliuola
l’uccise, per veder com’era fatto.

(Common Sense, who used to be school master, is now in many schools dead and buried; the reason is that Science, who is his daughter, killed him to see how he was made.) — Giuseppe Giusti
Now that we are exploring Quantum Mechanics, we have to do precisely the opposite to what Giusti was advocating, i.e. we must learn to abandon common sense in favour of experimental evidence. Otherwise we will never understand how an object such as a photon, can be simultaneously both a particle and a wave. As we learned in Chap. 2, to understand sometimes means simply to accept a premise and explore its consequences. This was the approach adopted by Max Planck with his “explanation” of black body radiation.
This trait appears to be much easier for a young, uncluttered mind. So it happened that a Ph.D. student, Louis-Victor Pierre Raymond (Louis) de Broglie, with a thesis of just 70 pages [5], obtained both his Ph.D. and the Nobel prize (in 1929) by postulating a dual nature, not only for photons, but also for matter. (Before this, he had already obtained a degree in history and one in science.) De Broglie’s bold assumption meant that electrons, the so-called atoms of electricity, should also display wave-like characteristics, such as diffraction and interference, which we discussed in Chap. 1 for light. This prediction was confirmed experimentally (1923–1927) by Davisson and Germer, who diffracted a beam of electrons from a nickel crystal, and by G. P. Thomson, who passed electrons through thin metal foils.
But how can we depict a particle with a dual nature? Classically we describe a particle’s location by specifying its coordinates in four dimensions (4D)—the three spatial dimensions and time. In the case of wave motion, we normally specify the amplitude of the wave in terms of these 4D coordinates. The physical interpretation of amplitude depends on the type of wave we are considering and of the medium supporting it. For instance, in the case of a sound wave, amplitude represents the displacement of the medium through which the wave is propagating (usually air). In the case of light and other forms of electromagnetic radiation propagating through a vacuum, there is no physical supporting medium, and the amplitude represents the value of the electric (or magnetic) field at these 4D coordinate points, bearing in mind that the amplitude may be either positive or negative. The intensity of the light is given by the square of the amplitude, ignoring the sign.
Now we see the conundrum that presented itself to physicists in the early twentieth century. Common Sense will get us nowhere. Instead we must abandon ourselves to mathematics, and see where it leads. The modern interpretation is that associated intimately with every particle is a wave function. This wave function is a mathematical abstraction, and as far as we know has no physical reality. Indeed, it is a complex quantity, which in mathematics means that it has both real and imaginary parts. (As we have seen in Sect. 4, an imaginary number comes from taking the square root of a negative number.) If the reader is getting the sinking feeling that we are about to lose ourselves in an unintelligible quagmire, please take heart, for we shall go no further in this direction.
Clearly the wave function must have some connection with reality, or it would remain an irrelevant mathematical oddity. The connection comes via its modulus squared.5 In analogy with the intensity of light described above, the modulus squared of the wave function at a particular point in space and time gives the probability of finding the particle located at that point at that time. This last sentence contains the very core of what is now called Quantum Mechanics. Not just electrons, but all matter, has wave-like properties; however, the interference effects are easier to observe with less massive particles.
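For the mathematically inclined, this rule (often called the Born rule) can be written compactly, with Ψ denoting the wave function and P the probability density of finding the particle at position x at time t:

```latex
P(x,t) \;=\; \left| \Psi(x,t) \right|^{2}
```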
What we have said above is truly revolutionary: we can no longer say that a particle is here or there, but only specify the probability of finding it here or there. Not only that: if the particle is there, then in order to look at it, or to detect it in any other way, we must use a small amount of energy (light waves carry energy), and this energy moves the particle somewhere else. In other words, the process of observation changes, if ever so slightly, what we have observed.
Gone is the determinism of classical physics, where it was believed that if one could specify the positions and velocities of all the particles in the universe, one could predict the future for all eternity,6 and as an extra bonus also reconstruct the past. What takes its place is a world where one can predict nothing with complete certainty, but only estimate the probability that some event will take place. With macroscopic objects, the uncertainties are negligibly small: if one leaps off a tall building there is very little chance, a second or two later, of finding oneself reposing in an armchair reading Newton’s Principia. In the domain of atoms, however, analogous phenomena are entirely possible.
Having now grasped some idea of the relevance of the wave function, the question arises: how does one calculate it? The answer was provided by Erwin Schrödinger in 1926, who proposed an equation that bears his name. The solution of Schrödinger’s Equation for a particle in a particular environment, e.g. an electron in the hydrogen atom, provides the wave function for that particle. This wave function can be used to calculate essentially everything associated with the particle, i.e. its position, velocity, orbital angular momentum, etc. However, these quantities are obtained and expressed only as probabilities.
In our everyday world, we are used to being able to predict events with near certainty. Astronomers can tell you when solar eclipses will occur hundreds of years in the future. So what use is it if the best we can get from QM is a probability? It is like being told that there is a 1 in 36 chance of rolling two sixes on a pair of dice. It doesn’t help decide whether to bet your hard-earned cash on the next roll or the one after. (In fact, one shouldn’t waste any time on this decision. If the dice are “fair”, the chance of rolling two sixes is the same for every throw, irrespective of past history.)
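The claim that every throw is independent of past history is easy to check numerically. Here is a small Python simulation (purely illustrative):

```python
import random

# Simulate many throws of a pair of fair dice and return the fraction
# that came up double-six. The probability is 1/36 on every throw,
# irrespective of what happened before.
def fraction_double_sixes(num_throws, seed=1):
    rng = random.Random(seed)
    hits = sum((rng.randint(1, 6), rng.randint(1, 6)) == (6, 6)
               for _ in range(num_throws))
    return hits / num_throws

# With a large number of throws, the fraction settles close to
# 1/36, i.e. about 0.0278.
print(fraction_double_sixes(1_000_000))
```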
There are, however, many instances where probabilistic predictions are rather more useful. Weather forecasts are not exact, but most people consult them when planning a picnic. Casinos know very accurately the likelihoods of a gambler winning on their various poker machines. They plan their budgets based on this knowledge. When a large number of gamblers use the machines, the reality reflects these predictions accurately. Actuaries in insurance companies examine past records to determine the risk associated with the various behaviours and lifestyles of their clients, and adjust the premiums they must pay accordingly. Some of the classical laws of physics, e.g. those dealing with the properties of gases, are statistically based. They achieve their accuracy because of the enormous numbers of molecules involved, which make the statistical predictions very precise indeed.
However, when dealing with individual atoms, QM does not allow an accurate prediction of events. In a sample of radium, one cannot tell in advance which will be the next atom to undergo radioactive decay, only how many are expected to decay in a given time.
5.5 Heisenberg’s Uncertainty Principle
As we have just seen, if we wish to know the location of a particle, we need to obtain its wave function by solving Schrödinger’s equation. We will not obtain a precise location, but only a probability distribution for the particle. If the location is known fairly accurately, this distribution will be narrow; conversely, if the distribution is broad, our uncertainty in the particle’s location is great.
We could apply the same methodology to determining the particle’s velocity (or more strictly, its momentum, i.e. its mass multiplied by its velocity). Again, we would obtain a probability distribution, and a narrow one would mean we have a fairly precise knowledge of the momentum, and a broad one the opposite.
Heisenberg in 1927 realised that these two probability distributions, one for the particle’s location, and the other for its momentum, are not independent of each other, but closely related. If we can locate a particle accurately, we can measure its momentum only roughly, and vice versa. In the extreme case, where we are able to locate the particle exactly, we can have no knowledge at all of its momentum.
Of course, in physics we do not deal with “hand-waving” statements, such as “accurately” or “roughly”. Heisenberg presented his principle in a mathematical form, stating that the product of the uncertainty in the particle’s location and the uncertainty in its momentum cannot be less than ℏ/2, where ℏ is the ubiquitous Planck’s constant h divided by 2π. Here by “uncertainty” we mean the standard deviation,7 as it is called in statistics.
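In symbols, with Δx the uncertainty (standard deviation) in position and Δp the uncertainty in momentum:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad \hbar \equiv \frac{h}{2\pi}
```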
Heisenberg’s Uncertainty Principle is often explained by saying that any attempt to measure the position of a particle will perturb the particle’s momentum. The more accurately we try to locate the particle, the greater is this effect. Conversely, any attempt to measure its momentum dislodges the particle from its original position by an unknown amount.
The pair of quantities, “position” and “momentum”, which satisfy Heisenberg’s Uncertainty Principle in this manner, are called canonical conjugates. They are not the only such pairs that satisfy this relationship. Energy and Time are another example, as are Angular Position and Angular Momentum.8 The latter are important in the physics of electrons bound within the confines of an atom.
In the everyday world where we live, the idea that one cannot know one’s velocity and position at the same time seems strange indeed. Certainly, the traffic police have no such qualms when they send you an infringement notice for violation of the speed limit at a particular place and time. They are only able to justify this action because of the extremely small value of Planck’s constant.9
5.6 Collapse of the Wave Function
We are now at a stage where we can explore some of the consequences of the probabilistic nature of QM, and provide an insight into the “incredible” in the title of this Chapter. It is time to put away the last remnants of our common sense, and follow our fancy and mathematics, wherever they may lead us.
Let us begin with a simple thought experiment, where we imagine shooting a beam of particles, e.g. electrons, through a small slit. There are various ways to construct such a slit, which need not concern us here. The important thing is that suitable slits can be constructed without much difficulty in the laboratory. The wave-like nature of electrons means that we expect diffraction to occur as the waves pass through the slit.

Fig. 5.3 The diffraction pattern from waves passing through a single slit as a function of the angle of diffraction θ
In the case of the diffraction of light, if we placed a photographic film at a distance behind the slit, we would see a dark splodge where the main beam lands on the film, and lighter splodges from the diffracted beams on either side. If we narrow the slit, the diffracted beams would extend further out on either side. Those with an interest in photography will recall that decreasing the aperture of a camera lens (i.e. increasing the ƒ-number) results in sharper images at first due to less sensitivity to lens aberrations, but then the fuzziness increases when diffraction effects start to become significant.
Returning now to the case of electron diffraction, if we place a number of electron-detectors across the region behind the slit, we can count the number of electrons arriving per second in the detectors at the various angles. This number will be greatest in the detector directly behind the slit, but the smaller diffraction peaks will also be readily discernible by the clicks from the detectors at angles corresponding to these peaks.
At the moment when one of our detectors emits a click, we know immediately where that particular electron is located: it is located inside that detector, and there is zero probability that it is located elsewhere. Our act of measurement has resulted in the wave function of the electron shrinking to the size of the detector. In QM this is known as the collapse of the wave function.
To reiterate, up until the moment of detection we had no idea exactly where the electron was located; instead our knowledge was confined to a probability distribution, obtained from the electron’s wave function, showing the statistical likelihood of it arriving in each of the various detectors. After the moment of detection, we are certain of where the electron is, and also, simultaneously, of where it is not, even if the other detectors are far away from the first one. In the next Chapter, however, we will learn that such a simultaneity is in conflict with Einstein’s Theory of Relativity, a conflict that remains one of the outstanding problems of modern physics; we will discuss it in more detail in Chap. 6.
The issue of the collapse of the wave function caused by observation is itself a controversial topic, and has been debated at length by philosophers and physicists alike. The impression that the observer triggers the collapse by the act of measurement attaches an anthropomorphic element to QM. An extreme of this approach is the Many Worlds Interpretation of Hugh Everett, who proposed in a Ph.D. thesis at Princeton in 1957 that the universe splits into parallel, divergent universes whenever a measurement is made, one universe for each possible outcome (e.g. for each detector that the arriving particle might have chosen). This proliferation of universes continues as time progresses, until essentially, somewhere at some time, there exists a universe where anything we can imagine is possible. (Where is Occam’s Razor when you most need it?)
Part of the difficulty may lie with the tendency to interpret the wave function as possessing a physical reality. Rather, it should be viewed as a mathematical artifice that enables the calculation of the probable location of the electron. If the electron is suddenly localised because of an experiment providing extra information (e.g. that the electron has been found in detector No. 1), then clearly the probability distribution of the electron’s location needs to be updated (as there is now no probability of the electron being located in other detectors). This is what we mean when we say that the electron’s wave function has collapsed.
An intriguing puzzle that has been known to fool even professional mathematicians may cast a little light on the above situation. Known as the Monty Hall problem after an American TV game show host, it runs like this: contestants stand before three closed doors. Behind one of the doors is a car, while the other two conceal booby prizes. The contestants choose one of the doors, hoping to win themselves the car. The show host, who knows where the car is, does not open the contestant’s choice, but opens instead one of the two other doors, revealing a booby prize. He then offers the contestants a choice of staying with their original selection, or changing their mind and selecting the other remaining closed door. What should they do?
The intuitive, or common sense, answer is that it doesn’t matter, as there are now only two closed doors, and surely there is an equal likelihood of finding the car lying behind each of them. This answer is wrong. Contestants will have twice the chance of winning the car if they switch from their original choice to the other door.
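For readers who prefer to check such claims directly, the game is easy to simulate. The short sketch below (the function names are our own) plays the game many times with each strategy:

```python
import random

def monty_hall_trial(switch):
    """Play one round of the game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000):
    """Fraction of trials won with the given strategy."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

# Staying wins about 1/3 of the time; switching wins about 2/3.
print(win_rate(switch=False), win_rate(switch=True))
```

Running this confirms the counter-intuitive result: switching doubles the chance of winning.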
The solution of the Monty Hall problem is discussed in Appendix 5.5. However, the point we wish to make here is that as soon as the host opens the door revealing the booby prize, the probability distribution associated with the car’s location is changed. Originally there was an equal probability of the car being located behind each of the three doors. Now we know for certain that it is not behind the door that was opened. We have received extra information that changes the probability distribution of the car at this moment. Information is an important concept in QM, but one which we cannot pursue further here.
5.7 Wheeler’s Delayed Choice Experiment

Fig. 5.4 Light waves passing through two slits and impinging on an optical screen, or a photographic plate

Fig. 5.5 Interference pattern from coherent light passing through two slits and falling on a photographic plate. Image public domain, courtesy of Pieter Kuiper; adapted from https://commons.wikimedia.org/wiki/File:SodiumD_four_double_slits.jpg (accessed 2020/05/23)
Naïvely, we may tell ourselves that one photon has passed through one slit, a second through the other slit, and their wave functions have interfered with each other to produce the above interference pattern. However, it is not quite so simple, and QM has a few more tricks to play.
Let us continue our thought experiment by reducing the intensity of the incident beam so that only one photon (or electron, if we wish) is in flight at any time. In other words, we wait until one photon has collided with our photographic plate, before releasing the next one on its journey. When the photon that is underway reaches the slits, it should pass through one or the other, be diffracted in the manner we discussed in the last Section, and then impinge on the photographic plate. We would expect over time to see a build-up of the diffraction pattern from one slit (Fig. 5.3) superimposed on that of the other, but not the fine interference structure shown in Fig. 5.5. However, the observed interference pattern remains exactly the same as in Fig. 5.5. It just takes longer to build up because of the weak beam intensity.
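This one-photon-at-a-time build-up can be mimicked with a toy calculation. We treat the two-slit intensity pattern, proportional to cos²(πdx/λL) for slit separation d, wavelength λ and slit-to-plate distance L, as a probability distribution for where each photon lands. The numerical values below are illustrative only, not taken from any particular experiment:

```python
import math
import random

def sample_arrival(d=1e-3, wavelength=600e-9, L=1.0, width=5e-3):
    """Draw one photon arrival position x (metres) on the plate, by
    rejection sampling from the two-slit intensity cos^2(pi*d*x/(lambda*L))."""
    while True:
        x = random.uniform(-width, width)
        if random.random() < math.cos(math.pi * d * x / (wavelength * L)) ** 2:
            return x

# Each arrival is individually random; collectively the interference
# fringes emerge, just as in the weak-beam experiment described above.
hits = [sample_arrival() for _ in range(20_000)]
```

Each simulated photon lands at an unpredictable spot, yet a histogram of the accumulated hits reproduces the bright and dark fringes of Fig. 5.5.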
It is as though the photon is not really specified in location until it has impinged on the photographic plate, and its wave function has collapsed. Before then it could be anywhere, and it could be passing through either slit, or even through both slits simultaneously, such is its ghostly nature.
Okay, you say, then let us see if we can find a way to detect which slit it is passing through. Suppose we remove the photographic plate, and replace it with two telescopes, one directed at the first slit and the other at the second. If a photon passes through the first slit, a flash of light will be detected by the first telescope; conversely, a photon passing through the second slit will be detected by the second telescope. We keep the beam intensity low enough to ensure that only one photon is in flight at any particular moment. With this setup, we are successful. Half of the photons are observed to pass through the first slit and half through the second, exactly as if we were looking at a stream of bullets.
In other words, whether the light beam behaves as a wave or a stream of particles seems to depend on what apparatus we place in its path to detect it. Any attempt to localise which slit the photon passes through destroys the interference pattern.
There is one further twist we can add to our thought experiment. Let us set up an experiment similar to that of Fig. 5.4, but construct it so that we can randomly choose whether to use a photographic plate (i.e. a wave detector) or telescopes (photon detectors), and are able to replace one choice by the other very quickly. We also lengthen the time of flight between the slits and the detectors so that there is time for us to make our choice after the photon has already passed through the slit(s) and is on its way to the detectors.
This experiment, which is known as the Wheeler Delayed Choice Experiment, has been performed for a variety of quantum particles (some as large as whole atoms [6]) with results that destroy the last vestiges of Common Sense. If we observe the slits with telescopes, we can detect which slit the particle passed through; if we put the photographic plate in place, an interference pattern builds up, indicating that the particle passed through both slits. However, the particle had already passed through the slit(s) before we decided which detector to use. How did the wave/particle know which choice we were going to make before we knew it ourselves?
O, that way madness lies; let me shun that; No more of that [7].
We shall discuss the Wheeler Delayed Choice Experiment further in Chap. 12 (Part 3).
5.8 Quantum Entanglement
If we are still reeling from the implications of Wheeler’s Delayed Choice Experiment, then “a cup of tea, a Bex, and a good lie down”10 are surely necessary before tackling the mysteries of Quantum Entanglement.
From our examples above, one may obtain the impression that everything in QM is of a ghostly probabilistic nature, and that nothing can be known for certain. As we have seen, at the microscopic level this is largely true, but there are some quantities that cannot change, and these are expressed by conservation laws. These laws are the quantum analogues of corresponding laws in classical physics. Indeed, they are more than analogues: they are really the same laws, with the quantum form transitioning into the classical form as the objects we are considering increase in complexity and size. Examples are the Laws of Conservation of Momentum, Angular Momentum and Energy. We shall discuss conservation laws in more detail in Chap. 8.
Let us consider another thought experiment where two electrons are produced together such that their total combined angular momentum is zero. Such a process is possible if the electrons are produced jointly in a physical process where the total angular momentum that the pair can carry away is limited to zero by the requirement of the Law of Conservation of Angular Momentum. Then if one electron has a positive angular momentum, the other electron must carry an equal negative angular momentum. In other words, the two electrons are spinning at the same rate in opposite directions. This is a classical picture—electrons are not simply spinning balls, as we shall see in Chaps. 8 and 9. However, this model will suffice for our purposes here. Two particles constrained in this manner by a conservation law are said to be entangled.
It is worth reiterating that at this point we have not yet conducted any measurements on the electrons, so we cannot say what the angular momentum of each electron is, only that they spin in opposite directions, and that the sum of their individual angular momenta must be zero. Indeed, QM goes further and states that until a measurement is made, neither electron has a well-defined spin, but is a mixture, or superposition, of possible spin states.
Let us now conduct our measurement. We select one electron and measure its angular momentum, while we allow the second one to continue on its journey. Our act of measurement collapses the wave function of the first electron, resulting in its angular momentum becoming precisely defined (i.e., no longer described by a probability distribution). However, because the two particles are entangled, the wave function of the second electron must also simultaneously collapse, and its angular momentum becomes precisely specified, no matter how far away the second electron has moved from the first.
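The bookkeeping imposed by the conservation law can be sketched in a few lines of code. A heavy caveat: this classical sketch captures only the perfect anticorrelation of measurements along a single, fixed axis, not the genuinely quantum behaviour, and Bell's theorem (discussed next) shows that no such local recipe can reproduce all of QM's predictions:

```python
import random

def measure_entangled_pair():
    """Toy model: the pair's total angular momentum is zero, so whatever
    value the first measurement yields, the second must be its negative."""
    s1 = random.choice([+1, -1])  # outcome of measuring electron 1
    s2 = -s1                      # electron 2 is then fixed, however far away it is
    return s1, s2

# Each individual outcome is random, but the two always sum to zero.
pairs = [measure_entangled_pair() for _ in range(1000)]
```

Every simulated pair sums to zero, even though neither outcome is predictable in advance.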
This aspect of QM was not accepted by Einstein, who maintained that this “spooky action at a distance” violated the Theory of Relativity, which indeed it does (as we shall see in Chap. 6). He claimed that QM must therefore be incomplete, and that the electron spins were indeed well defined, although hidden from experimenters until they had carried out their experiments. This interpretation was known as the “hidden variables” theory. A major contribution by the Irish physicist, John Stewart Bell, was to show that no local hidden variables theory can ever reproduce all the predictions of QM.
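Bell's insight can be made quantitative through the CHSH combination of correlations (a later refinement of Bell's original inequality). For a pair of entangled spins measured along directions at angles a and b, QM predicts a correlation E(a, b) = −cos(a − b); any local hidden-variables theory must satisfy S ≤ 2, whereas QM can reach 2√2 ≈ 2.83. The few lines below, using the standard angle choices, evaluate the quantum prediction:

```python
import math

def E(a, b):
    """QM prediction for the correlation of spin measurements on an
    entangled (singlet) pair, along directions at angles a and b (radians)."""
    return -math.cos(a - b)

# The standard CHSH measurement angles.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# Local hidden-variables theories require S <= 2; QM gives 2*sqrt(2)
print(S)
```

The computed value exceeds the local-hidden-variables bound of 2, and it is precisely this excess that experiments have repeatedly observed.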
The first experimental test of the Einstein versus Bell disagreement was performed in 1972, with the evidence strongly in favour of Bell [8]. Since then, many tests have been made to close possible loopholes in the methodology. As time has passed, the experimental evidence in favour of entanglement has accumulated, and now few physicists doubt its existence. Many research projects are underway to harness its capabilities in such diverse fields as Quantum Computing, cryptography, ultra-precise clocks and biology.
5.9 Schrödinger, and His Long-Suffering Cat
No Chapter on QM would be complete without a mention of Schrödinger’s cat. The story has been told so often that it has become hackneyed, so we apologise in advance if the reader is already familiar with it. However, it is a thought experiment that raises important questions about the boundaries between quantum and classical physics, and is worth presenting for that reason. There are several variants to this experiment, but the one below is a typical example.
Imagine that we have a box containing a cat and a weakly radioactive source that has a 50% probability of emitting one radioactive particle per hour. Also in the box we have a Geiger counter to detect the decay product from the source. The Geiger counter is wired so that when it registers a detection, it releases a hammer which swings down and smashes a phial of cyanide gas, thereby releasing poisonous fumes into the box and killing the cat. Remember, this is only a thought experiment, and not intended to be carried out!11 After an hour has elapsed, one of two situations will hold:
- (1) The radioactive source has not emitted a particle, the cyanide has not been released, and the cat is alive and well;
- (2) The radioactive source has emitted a particle, the cyanide has been released, and the cat is dead.
From a quantum perspective, one might say that it is the observation process of opening the box that collapses the wave function and reveals which of the two possibilities is real. Before the box is opened, both possibilities exist and the cat is in a mixture, or superposition, of two states, one in which it is alive and one in which it is dead.
From a classical physics viewpoint, one would say that the cat is alive or dead, but until we open the box and take a look, we do not know which. This is akin to Einstein’s Hidden Variables theory that we discussed in the previous Section. So under what conditions are the classical theories of physics accurate, and when do they fail, so that we must turn to QM? This is the dilemma that Schrödinger’s Cat has highlighted for us.
Macroscopic objects, such as pebbles, balls, cats, and even us, contain countless billions of molecules.12 The wave function associated with these large objects is a linear superposition of the wave functions associated with the atoms of which they are constructed. For two waves to interfere, there must be a stable phase relationship between them, i.e. the peaks and valleys of the waves must not be randomly distributed. In this case, the waves are said to be coherent. In a large body, the wave functions associated with the billions of molecules are jumbled, and so do not add up coherently. The superposition then produces physical effects that are just averages of the individual effects, which is akin to the classically predicted behaviour.
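The difference between coherent and incoherent addition can be seen in a toy sum of many unit-amplitude waves, each represented as a phasor (a complex amplitude). With a common phase, the amplitudes add and the intensity grows as n²; with random phases, the resultant performs a random walk and the intensity averages only n:

```python
import math
import random

def resultant_intensity(n, coherent):
    """Sum n unit-amplitude waves and return the intensity
    (the squared magnitude of the resultant amplitude)."""
    if coherent:
        phases = [0.0] * n  # all waves in step
    else:
        phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return re ** 2 + im ** 2

n = 1000
# Coherent: intensity = n^2 = 1,000,000. Incoherent: ~n = 1,000 on average.
print(resultant_intensity(n, coherent=True), resultant_intensity(n, coherent=False))
```

For a thousand waves, the coherent sum is a million times the single-wave intensity, while the incoherent sum hovers around only a thousand times it: the interference has been washed out, as it is in a large, warm body.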
As an example, the light emitted by a tungsten filament lamp is incoherent, and so will not normally produce interference effects.13 On the other hand, light from a laser is produced by stimulating the atoms in the laser to all emit simultaneously. This light is coherent, and therefore readily produces interference effects.
In the above experiment, the cat is clearly not a quantum particle, and the radioactive particle that triggered the Geiger counter just as clearly is. The boundary between the classical and quantum domains is continually being pushed towards larger objects. In 2010, Andrew Cleland and his team at the University of California, Santa Barbara, succeeded in placing a “paddle” 30 microns long containing trillions of atoms into a quantum state [9]. By cooling the paddle below 0.1 K, they placed it into its “ground” state, where the available thermal energy is too small to allow any excitations to higher energy states to occur. They then added the smallest possible unit of vibrational energy. This created a situation where the paddle was in a superposition of two states, one with zero energy and the other with one unit of vibrational energy. “This is analogous to Schrödinger's cat being dead and alive at the same time,” said Cleland. When the experimenters measured the energy, the wave function collapsed, and the paddle had to “choose” whether to remain in the excited state, or pass its energy to the measuring device. The paddle was large enough to be visible.
There are a number of interpretations of the Schrödinger Cat experiment, some of which involve live and dead cats in various universes. The philosophical implications of the thought experiment are profound, involving as they do a live animal, and notions of life and death. Let us consider for a moment replacing Schrödinger’s cat with a virus. Depending on one’s definition of life, a virus is considered by many to be a living creature. Viruses range in size from 0.004 to 0.1 microns, and are thus much smaller than Cleland’s paddle. Quantum effects should surely apply to them. Bacterial cells range from about 1 to 10 microns in length and from 0.2 to 1 micron in width. Bacteria are most certainly alive, and are also smaller than the paddle, so we can expect quantum effects to be relevant to them. We begin to see some of the challenges thrown to philosophers by Quantum Mechanics.
In the next Chapter, we shall present the next great challenge to Common Sense, and the other half of the revolution in physics that took place in the twentieth Century, namely, the Theory of Relativity.