Experiments contradict local realism, but it’s easier to reject falsehood than to establish the truth. The truth is elusive. We cannot observe photons prior to observation—so we do not have direct evidence that the observation of one photon affects another. Debate over these points has resulted in a host of interpretations of quantum mechanics.
Let’s review, once more, the creation of two entangled photons that are just as likely to be horizontally polarized as vertically polarized. Let’s place a horizontal polarizer in front of each photon. Half the time, both photons pass through the polarizers. Half the time, neither photon passes through.
Let’s name the photons A and B. Suppose Photon A reaches its polarizer before Photon B. If Photon A passes through the polarizer, then we know that Photon B is certain to pass through its horizontal polarizer when it gets there. But what really changed when Photon A passed through its polarizer? Did Photon A change? Did both photons change? Did neither change? Or do the changes take place only after the photons reach detectors, or after the detectors communicate the result to a circuit board?
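The perfect correlation described above can be sketched in a few lines of code. This is a minimal simulation of the quantum prediction, not a local-realist model: the shared outcome is drawn once for the pair, reflecting the claim that the two photons share a single state. The function name and trial count are my own choices for illustration.

```python
import random

def measure_entangled_pair():
    """Simulate one entangled pair at two horizontal polarizers.

    Quantum mechanics predicts that both photons pass (probability 1/2)
    or both are blocked (probability 1/2). Each individual outcome is
    random; the correlation between the two is perfect.
    """
    both_pass = random.random() < 0.5
    return both_pass, both_pass  # (photon A passes?, photon B passes?)

trials = [measure_entangled_pair() for _ in range(10_000)]
assert all(a == b for a, b in trials)  # A and B always agree
fraction_passing = sum(a for a, _ in trials) / len(trials)
print(f"both photons passed in {fraction_passing:.0%} of trials")  # about 50%
```

Nothing in the code says *how* the agreement is arranged; it simply encodes the correlation that experiments report, which is exactly the question the text goes on to probe.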
If I claim that the size of my right foot changes when I measure my left foot, we would expect to observe this directly: when I hold a ruler up to my left foot, we should be able to watch my right foot shrink or expand, or perhaps transform from fuzziness to solidity. Similarly, we want to observe Photon B, both before and after Photon A is measured, to see if anything changes. But then, the first observation of Photon B would be a measurement, which may affect the state of Photon A!
My claim is that both photons are transformed by the first observation of either photon. Thus this transformation can never be observed; we can’t perform any observation prior to the first observation. So we can never watch one particle change in response to the measurement of its twin. The innermost workings of nature remain forever out of reach. The quest for complete understanding is always an unscratchable itch. The only fact that’s (almost) certain is that local realism cannot account for measured results.
Local realism is defeated by violations of Bell inequalities; in this sense, local realism is the negative space of quantum physics, the excluded explanation. If we reject local realism, what’s left? Are the only remaining views of reality mystical? Does quantum mechanics, after all, say something mystical about the universe? We can no longer argue that physics is merely a set of formulas for predicting experimental outcomes, disjoint from philosophical considerations: Bell inequalities show that experiment has overruled a plausible philosophical assumption. There are many alternative assumptions, but none are especially plausible, and all have their partisans.
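The violation itself can be made concrete with a short calculation. The sketch below uses the CHSH form of the Bell inequality and the quantum correlation E(α, β) = cos 2(α − β) for polarization-entangled photons, with polarizer angles chosen to maximize the violation; these particular angles are an illustrative choice, not necessarily the ones used elsewhere in this book. Any local-realist theory keeps the quantity S at or below 2, while quantum mechanics reaches 2√2.

```python
import math

def E(alpha, beta):
    """Quantum correlation between outcomes at polarizer angles
    alpha and beta (radians): E = cos(2(alpha - beta))."""
    return math.cos(2 * (alpha - beta))

# Polarizer angles (converted from degrees) that maximize the violation.
a, a_prime = math.radians(0), math.radians(45)
b, b_prime = math.radians(22.5), math.radians(67.5)

# CHSH combination: local realism requires |S| <= 2.
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"S = {S:.3f}")  # S = 2.828, i.e. 2*sqrt(2): the quantum maximum
assert S > 2  # the local-realist bound is exceeded
```

The gap between 2 and 2√2 is the whole story in miniature: no assignment of preexisting, locally determined outcomes can reproduce the quantum correlations.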
Indeed, there are many philosophical interpretations of quantum mechanics. I will not try to compile a complete list, give equal attention to leading viewpoints, or even classify the viewpoints in a standard way. But I will consider four categories of responses to Bell inequalities:
If we hope to cling to local realism, like a life preserver in a stormy sea, we need to identify another assumption that may be false. We then blame this other assumption for the incompatibility between experiment and the mathematical constraints that we derived. If this other assumption is to blame, then local realism may be innocent.
First, let’s think about an assumption that seems identical to realism: the unstated assumption of counterfactual definiteness.1 This is the assumption that even though each photon goes through a polarizer set to a single angle, we can specify what the photon would do if it went through a polarizer at a different angle. The assumption of realism—that the photon has properties that predetermine its response to any chosen polarizer angle—seems to require counterfactual definiteness. In a moment, we’ll distinguish realism from counterfactual definiteness.
In normal life, counterfactual definiteness seems reasonable. For example, I’m not jumping right now, so it’s counterfactual to discuss what would definitely happen if I were to jump. Yet I can say with confidence that if I jump, I will come back down. And if I drop my pen, it will fall. And if I clap, I will hear the sound. It seems obvious that all these statements are true—and that’s because counterfactual definiteness is so innocuous, we assume it all the time.
We already know that quantum particles defy our expectations in many ways. We might as well ask whether there’s something fundamentally forbidden about specifying what a particle would do in any situation other than the one it actually experiences. If we reject counterfactual definiteness, can we save realism?
The distinction between realism and counterfactual definiteness becomes clearer if we consider the viewpoint of superdeterminism.2 According to superdeterminism, there’s no free will. The entire universe is a Rube Goldberg device evolving inexorably along its predetermined course. Every future occurrence, down to the minutest detail, was predetermined at the moment of the Big Bang. Free will is an illusion, and if we believe in this illusion, it’s only because we were predestined to do so.
All of the Bell inequalities are derived under the assumption that the experimenters can freely choose polarizer angles, such as 0°, 30°, or 60°. Since the photons don’t “know” the angles that they’ll encounter, they have to be prepared (they have to have hidden properties) for all possible angles. In a superdetermined universe, the photons have a lot less preparation to do. Each photon needs a property only for the single polarizer angle that it’s certain to encounter. The assumption of realism is thus valid; the photon has its single property (causing it to be transmitted or blocked) all along, even before we measure it. Although this explanation preserves realism, it’s still very strange. Somehow each photon “knows” exactly which polarizer angle it will encounter. The two photons no longer have to collude with each other, but instead each has to collude with its own polarizer before it even gets there. We can preserve locality by arguing that the collusion took place during the Big Bang, when everything was scrunched into one locale.
In any case, counterfactual definiteness isn’t valid in a superdetermined world. We can’t specify what a photon would do in any measurement except the one it actually experiences because there’s never any possibility of anything else.
Another viewpoint that undermines counterfactual definiteness is the many-worlds interpretation of quantum mechanics.3 In this view, all possible outcomes of a measurement are real—in parallel universes! When the measurement is performed, the world splits—the photons are vertically polarized in one world, and horizontally polarized in the other. (I believe adherents of this interpretation prefer different terminology: The only reality is the sum of all possible outcomes. So reality itself isn’t splitting; there are just new branches within the single reality, and we’re conscious of only one of the branches.)
As unrealistic as it seems, the many-worlds interpretation is based on realism. Quantum mechanics represents the state of our entangled photons as a sum of two mutually exclusive outcomes, horizontal polarization and vertical polarization. This sum is considered the ultimate reality in the many-worlds interpretation. The measurement causes the terms in the sum to split off into separate worlds. The sum of all the worlds remains the single deep reality, but we perceive only the one world we inhabit. In a sense, when I measure a photon’s polarization, I’m not changing the photon, which always existed as a sum of vertical and horizontal polarization; I’m changing myself, splitting into someone who observes vertical polarization, and someone who observes horizontal polarization.
Now we want to explore whether counterfactual definiteness makes sense in the many-worlds interpretation. In a particular world, can we specify what a photon would do if the polarizer were set differently from how it’s actually set? Well, what would make the experimenter decide to set it differently? Is it random? Let’s imagine that a random quantum event determines the direction of the polarizer. For example, we might set up a light source to emit a single photon. Let’s call this photon the Decider. We arrange an experiment so that the Decider has a 50 percent chance of being vertically polarized, and a 50 percent chance of being horizontally polarized. Suppose the polarization of this photon sets the angle of one of the polarizers in our entanglement experiment.
But wait—the world split when the Decider photon was measured! We can’t talk about what the entangled photon would do if the polarizer were set differently because that occurs only in a different universe!
When describing the many-worlds interpretation, authors like to give the disclaimer, “This sounds like science fiction.” And yet a surprising number of physicists actually believe in it. It’s amazing that the same profession that gives us airplanes and computer chips also tells us that our entire universe may be a vanishingly tiny speck in an exploding infinity of parallel worlds.
Indeed, the many-worlds interpretation has its partisans because of how it resolves the measurement problem in quantum mechanics.4 The measurement problem is not specific to entangled particles. Even a single particle is in a fundamentally undecided, unknowable state prior to measurement. The measurement forces the particle to settle into a state with a more exact value of one property (its position, for example), while another property (its speed) unavoidably becomes more uncertain. Similarly, when measuring a photon’s polarization in a horizontal or vertical direction, we learn whether it’s horizontally or vertically polarized, but we lose any information we may have had about whether it was polarized in the 45° or −45° direction. The loss of information about one property, when measuring a different property, is Heisenberg’s famous uncertainty principle.
But at exactly what point does the measurement take place? This is a key question, because the measurement transforms the particle so fundamentally: the particle seemingly transmutes into something it wasn’t just before. Does the transmutation occur when the photon passes through the polarizer? Or when it reaches the detector? Or when the detector sends an electronic signal to a circuit board? Or when the circuit board transmits the message to a computer? Or when the computer displays 0 or 1? Or when a conscious observer sees the result on the computer screen? Some physicists have actually proposed that consciousness creates objectively real states. Before registering in someone’s consciousness, the photon is in a fundamentally undetermined and unknowable state—and so is everything it encounters on its way, in an avalanche of indeterminacy! In this view, the computer screen is in some unimaginable combination of showing both mutually exclusive outcomes before the conscious observer comes along.
Measurement is a problem because a measurement disrupts the smooth evolution of a quantum state. The fundamental equation of quantum physics does one thing very well: it specifies the probabilities of measurable outcomes, and how these probabilities change over time. As soon as a measurement is made, all the outcomes that weren’t measured get thrown out of the equation. This “throwing out” process is external to the equation and no one fully understands it—and it doesn’t happen at all in the many-worlds interpretation.5 In the many-worlds interpretation, no outcomes get thrown out because all possible outcomes coexist in parallel worlds.
Now we’ll think about a few more assumptions underlying the derivations of the Bell inequalities. First, there’s the assumption of fair sampling. No detector is 100 percent efficient; detectors miss a large fraction of the photons that reach them. For example, suppose a detector responds to 20 percent of the photons that reach it. Our unstated assumption is that each photon arriving at the detector has the same 20 percent chance of detection; the system is not somehow rigged. The possibility that fair sampling fails is called the detection loophole. If we reject the fair-sampling assumption, we could make the (outrageous?) claim that the detector somehow favors photons that violate Bell inequalities, just to fool us; only the detected photons violate Bell inequalities, and if we replaced our detectors with ideal (100 percent efficient) detectors, the Bell inequalities would in fact be satisfied.
We’ve also assumed that the photons don’t “know” in advance the angle of the polarizer they’ll encounter; this is why realism requires the photons to have preset outcomes for all possible polarizer angles. What if the photons can somehow sense in advance the angles of the polarizers? Then we can’t derive any Bell inequalities because the photons need to have just a single preset outcome. (We followed this line of reasoning in the discussion of superdeterminism.) We’re now imagining some form of signal from the polarizers to the photons, so that the photons are alerted to exactly which polarizer angles they’ll encounter.
To me, it’s much less spooky to think that the measurement of one photon affects the other, than to think that the photons somehow sense the angles of polarizers they haven’t arrived at yet. If we imagine that the photons may somehow receive information from the polarizers before they get there, we have what’s called the locality loophole. The idea is that if information travels (no faster than light) from the polarizer to the photons, then this information is available locally, at the photons’ original position. This is contrasted with the idea that the measurement of one photon instantly (faster than the speed of light) affects the other (nonlocal) photon.
From 1982 until 2015, various experiments closed either the detection loophole or the locality loophole. For example, in 1982, Alain Aspect led an experiment that effectively rotated the polarizers as the photons were traveling to them, so the photons couldn’t have any advance notice as to which polarizer angle they’d encounter.6 Other experiments used detectors with high efficiency to close the detection loophole. But prior to 2015, a true zealot could still insist that local realism was not to blame for the experimental violation of mathematical constraints. Finally, in 2015, both loopholes were closed simultaneously in a single experiment.7
To close the locality loophole, the polarizer angles have to be chosen unpredictably so that the photons can’t have any advance notice of what they’ll encounter. We can place a random number generator at each polarizer to choose the angle. But what if some unknown, common cause affects both the random number generators and the photons? Then, the photons could have predetermined properties all along, while still violating Bell inequalities, due to the unknown influence tampering with the random number generators. This loophole is called the freedom-of-choice loophole. It challenges our assumption that the choice of polarizer angles can be made freely, independent of the properties of incoming photons. Theoretical work has shown that local realism can be preserved if the tampering influence is minimal; we don’t need to go to the extreme of superdeterminism.8
How can we close the freedom-of-choice loophole? We need to rule out a tampering influence, which travels no faster than the speed of light. Some physicists have used light from distant stars to set the angle of polarizers, and the results were the same as always: Bell inequalities were violated.9 The starlight was emitted hundreds of years ago, and we assume that the stellar photons were unaltered during their long journey to Earth. If a tampering influence exists, it must have planned ahead by hundreds of years, before the starlight was emitted, just to produce a Bell inequality violation. This hypothetical, tampering influence is like a patient villain with an extremely perplexing goal.
In another experiment to close the freedom-of-choice loophole, about 100,000 people from around the world generated random numbers.10 The random numbers were used to set the polarizer angles (or equivalent analyzer settings) in tests of Bell inequalities. Participants generated random numbers by playing a video game online.11 The Bell inequalities were violated, as usual. We conclude that local realism was defeated: the entangled particles did not have definite properties prior to measurement, or if they did, the measurement of one particle affected the other. Alternatively, a superdeterministic power governed the seemingly random choices of 100,000 people so that their choices corresponded with properties that the entangled particles had prior to measurement. In either case, common sense cannot account for the results.
Let’s consider a final assumption that we’ve made all along, which also seems like common sense: the two entangled particles, when separated by an arbitrarily large distance, are in two different places, not a single place. How could this possibly be untrue? Well, what if the two entangled particles are connected by a wormhole, which is a shortcut through space and time (like Madeleine L’Engle’s “wrinkle in time”)? Since both ends of a wormhole are actually the same point, then no matter how far apart the entangled particles are, they occupy the same position! Leonard Susskind and Juan Maldacena advanced this idea in 2013.12 The succinct nickname for this conjecture is ER = EPR.
“EPR” refers to the paper Einstein coauthored with Boris Podolsky and Nathan Rosen in 1935, arguing that entanglement reveals a flaw in quantum mechanics: a more complete theory is needed to specify the exact outcome of any possible measurement. Less than two months after the EPR paper was published, Einstein and Rosen (ER) published a paper about (what we now call) wormholes.13 If ER = EPR, then entangled particles are even stranger than we thought, connected via invisible tunnels through space and time!
In fact, a favored view among physicists is that reality is a higher-dimensional space.14 Our ordinary ideas of space and time are inadequate to understand entanglement. To recognize our cognitive limitations, we can imagine a world with fewer dimensions than ours: imagine a society constrained to exist in a flat, geometric plane.15 The two-dimensional people in this world have no concept of three-dimensional space because they have never experienced it.
Now imagine that a three-dimensional titan starts poking the tips of a fork through the two-dimensional world. The fork is poked at random moments through random locations. The two-dimensional people (quivering in terror) perceive the tines of the fork as four isolated, round blobs. They see no possible physical connections among the four blobs; they can completely encircle each blob with a string to prove that it’s isolated from the others. The four blobs always appear at almost the same time, however, and they disappear at almost the same time. Although the two-dimensional scientists can’t predict where or when the blobs will appear, the distance between adjacent blobs is always the same. (Perhaps the blobs expand slightly after they appear, and they shrink before they vanish, but the distance between the centers of the blobs is always the same.)
The two-dimensional scientists wonder if the appearance of one blob causes the three other blobs to appear, some distance away. Is this spooky action at a distance? The two-dimensional scientists are scratching their two-dimensional heads. Eventually, an idea forms in their two-dimensional brains. Perhaps the isolation of the blobs is an illusion; perhaps, in an unimaginable higher-dimensional space, the four blobs are part of a unified whole. The properties of one blob don’t influence the properties of any other blob. Instead, the relationships among the blobs exist all along in a higher-dimensional space, which only occasionally intersects the familiar, two-dimensional reality.
This is how some physicists explain entanglement: We live in a cross-section of a higher-dimensional reality. Much like the two-dimensional scientists, we cannot intuitively understand causality in the higher dimension. Nicolas Gisin writes, “In a certain sense then, reality is something that happens in another space than our own, and what we perceive of it are just shadows, rather as in Plato’s cave analogy used centuries ago to explain the difficulty in knowing the ‘true reality.’”16 This is an extraordinary statement. Scientists are stereotyped to equate reality with empirical data, but evidently some scientists equate reality with an invisible higher realm.
Let’s review, once more, the assumptions of locality and realism. We’ve seen that the assumption of local realism imposes constraints on measurable quantities, and measurement violates those constraints. Thus, unless we embrace an exotic viewpoint like superdeterminism, we must abandon locality, realism, or both. As we’ve seen, locality and realism are both very reasonable: locality says that no influence can travel faster than light, and realism says that particles have definite properties that predetermine the outcomes of any measurements we might perform.
If physics forces us to abandon at least one of these common-sense notions, what’s left, other than a higher-dimensional reality? Here are some possibilities.
As I mentioned earlier, I find it expedient and straightforward to suppose that measurement creates objectively real states. If a photon passes through one vertical polarizer, it will pass through any number of vertical polarizers lined up in a row. The vertical polarization of the photon seems to be an objective fact, once it’s measured in the first place.
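The claim that a photon, once it passes one vertical polarizer, passes any number of further vertical polarizers can be sketched with Malus’s law for single photons: a photon polarized at angle p passes a polarizer at angle q with probability cos²(p − q), and if it passes, its polarization becomes q. The helper names below are my own.

```python
import math
import random

def passes(photon_angle, polarizer_angle):
    """Malus's law for a single photon: the photon passes with
    probability cos^2 of the angle between its polarization and
    the polarizer axis (angles in degrees)."""
    delta = math.radians(photon_angle - polarizer_angle)
    return random.random() < math.cos(delta) ** 2

def run_through(photon_angle, polarizer_angles):
    """Send one photon through a sequence of polarizers."""
    for angle in polarizer_angles:
        if not passes(photon_angle, angle):
            return False  # blocked
        photon_angle = angle  # measurement creates a definite state
    return True

# A vertically polarized photon passes any number of vertical polarizers.
assert all(run_through(90, [90] * 10) for _ in range(1000))

# A photon polarized at 45 degrees has a 50 percent chance at the first
# vertical polarizer -- but every survivor then passes all the rest.
survivors = sum(run_through(45, [90] * 5) for _ in range(10_000))
```

The line `photon_angle = angle` is where the chapter’s claim lives: the first measurement leaves behind an objectively definite polarization that every later identical measurement confirms.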
A pair of entangled photons shares a single state, and the initial state is noncommittal. The photons do not have properties that predetermine their behavior at polarizers; they’re not predestined to pass through polarizers or get blocked by polarizers. Let’s suppose both polarizers are set to the same angle. If we first measure one photon in the pair, the measurement instantly creates a definite state for both photons. If we believe that the state is objectively real, then the measurement of one photon physically alters the distant photon. This is the spooky action at a distance that Einstein decried (and that many physicists continue to reject). We can’t prove that the measurement of one photon changes the other because we can’t watch the change take place; we can’t make an observation prior to the first observation on either photon. But for exactly this reason, we can’t prove that the measurement of one photon doesn’t affect the other. For all practical purposes—predictions of outcomes—the measurement of one photon indeed affects both. Both photons transform from a noncommittal state to a known state. As Tim Maudlin writes, “Going from not having a physical state to having a physical state is some sort of change, call it what you will!”17
Even if spooky action at a distance is real, is it any spookier than other influences, like gravity? If you lived a thousand years ago and someone told you, “The only thing stopping Earth from floating off into the endless midnight of black space is a wrenching attractive force from the distant sun,” wouldn’t you think that was spooky? The only reason gravity doesn’t seem spooky is that we’re so familiar with it; we’ve absorbed it into our intuition about how the world works.
To me, spooky action at a distance is disappointingly unalarming. The real mystery resides in the measurement problem: Exactly what were the photons like before they were measured? How does a measurement transform their polarization from something essentially undefinable to something definite? And when does this transformation take place: What physical process functions as a measurement? And why don’t we observe quantum effects on a large scale? Why can’t we be in two places at once, or dead and alive at the same time?
Some physicists believe these questions have been partially answered by quantum decoherence: when objects are jostled by the surrounding air molecules and photons, they lose the ability to be in two mutually exclusive states at the same time. One state survives, and the other state dissipates. This process is explained through quantum equations. What’s not explained is how the surviving state is chosen: the selection of the state remains a purely random process.
There have been attempts to formulate nonlocal, realistic theories. David Bohm’s theory is the most famous example. If we accept nonlocality, we have a chance of preserving realism. We’ve assumed all along that each photon is unaffected by the other photon’s polarizer. Why, indeed, would a photon be affected by a distant sheet of plastic that it never even approaches? If your polarizer affects my photon, it must be indirect, through your polarizer’s effect on your photon (which is entangled with mine).
In 2003, Anthony Leggett derived a generalized Bell inequality. We recall that ordinary Bell inequalities are based on two assumptions: realism and locality. Leggett retained the assumption of realism, but permitted a restricted form of nonlocality.18 Quantum mechanics and measurements violate Leggett’s inequality, but physicists dispute the significance of this result.19
We need to identify just a single false assumption to explain why Bell’s constraints do not apply to real particles. Could the false assumption be realism, such that locality may be valid? If we reject realism, then particles are not predestined to behave any particular way when they are eventually measured. Thus one photon in an entangled pair is not predestined to be vertically polarized, even if that’s what the measurement ultimately shows. But when both polarizers are vertical, the two photons always do the same thing: they both pass through, or they’re both blocked. If the photons are in a fundamentally undecided state before measurement, how can they possibly arrange to always behave identically when the polarizer angles are identical? Nonlocality is much more obviously necessary when realism is rejected. Indeed, Einstein’s objection to quantum mechanics was that its lack of realism necessitates nonlocality.
But we can insist (boldly? shrilly? petulantly?) that direct observation is the only scientific reality. This assertion has been made in a variety of forms, starting with Niels Bohr’s Copenhagen interpretation. An extreme version of this idea is called genuine fortuitousness, which denies the existence of microscopic particles! In this view, there are probabilities of responses from our detectors, but we shouldn’t say that the detectors are actually detecting anything.20
“Direct observation is the only scientific reality” takes a less extreme, though still brazen, form, in a recent interpretation of quantum mechanics called QBism (pronounced “cubism” to deliberately create a sense of radical departure from established norms).21 QBism is the abbreviation of “quantum Bayesianism.” In Bayesian statistics, probabilities are updated as new information comes in.
For example, one time I was visiting Chicago. Coming out of a train, I noticed that my wallet was gone. The departing train receded before my saddened eyes. I asked the transit staff if there was a lost and found. They gave me the phone number, but they told me that there was no point because I had been pickpocketed. I called the lost and found, but it was closed for the day. I spent the whole harrowing night believing that Chicago was a city of villains. Even if I got a new wallet, I would surely be robbed again.
The next morning, I called the lost and found, and learned that someone had turned in my wallet, with all $136 in it! I immediately reversed my judgment of Chicago. Chicago was a city of good Samaritans, and I could expect nothing but kindness from strangers.
The daily risk of crime in Chicago never actually changed over that 12-hour period. My subjective judgment of the risk, however, underwent two drastic updates.
In QBism, quantum mechanical probabilities are subjective judgments. There’s no such thing as an absolutely accurate, objective probability “out there.” Quantum mechanics is a tool for making our subjective judgments as accurate as possible. Different people will assign different probabilities to the same event if they have different information about it. Before a photon’s polarization is measured, you and I may agree that the probability of vertical polarization is 50 percent. If you do the measurement and the photon is found to be vertically polarized, you update the probability to 100 percent. If I’m out of the room, I still think it’s 50 percent until you give me the news. Until then, 50 percent and 100 percent are equally legitimate probabilities in the sense that they’re both based on the best information available to the person.
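The point about equally legitimate, information-dependent probabilities can be put in code. This is a toy sketch with hypothetical names, showing two agents assigning different probabilities to the same photon because they hold different information; in QBism, both assignments are correct.

```python
def probability_vertical(heard_result, result=None):
    """Subjective probability that the photon is vertically polarized,
    given the best information available to a particular agent."""
    if heard_result:
        return 1.0 if result == "V" else 0.0  # updated on the news
    return 0.5  # no news yet, so the prior 50/50 stands

measured = "V"  # you measure the photon and find vertical polarization

your_probability = probability_vertical(True, measured)  # 1.0
my_probability = probability_vertical(False)             # 0.5, until you tell me
```

Each agent is applying quantum mechanics as accurately as possible with the information at hand; nothing in the world needs to change for the two numbers to differ.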
Thus, according to QBists, there is absolutely no action at a distance. If I measure the polarization of one entangled photon and find it to be horizontally polarized, I immediately believe with 100 percent certainty that the other photon will also be horizontally polarized. If the other photon is traveling to you, a full light-year away, I have no way of communicating my knowledge to you before the other photon arrives; your photon had too much of a head start, even if I send you the news at the speed of light. For the whole year, you’ll continue to believe that the probability of horizontal polarization is 50 percent, while I know it’s really 100 percent. According to QBism, we’re both right! We’re both applying quantum mechanics as accurately as possible with the information we have.
QBists refuse (humbly? peevishly?) to assign a cause to the observed correlations between entangled photons. The correlations are a fact of nature, and quantum mechanics gives us the math to accurately predict them. Any speculation as to how the correlations come about is outside the scope of physical science. (This approach is sometimes called “shut up and calculate.”) Since QBist physicists don’t speculate about underlying causality, the speculation and discussion must therefore come from ... philosophers ... or from theologians, poets, or science-fiction writers?
I don’t permanently encamp with the QBists. But on occasion, QBism feels like an invigorating breeze that clears away a cloying miasma of confusion. QBism fends off the questions of what a particle’s like before measurement, what constitutes a measurement, and what is the underlying deep reality. QBism ejects these questions from the realm of science because they all inquire about something that can never be scientifically determined: the state of an object before it’s observed. It’s not wrong to speculate about what a particle’s like before it’s measured, or to wonder what invisible mechanism enables one photon to always behave like its twin; it’s just that we step outside of QBist science when we speculate about things that can never be directly observed.
What happens to objects that no one’s looking at? Does the seemingly solid world dissolve into the phantasms and mirages of our own assumptions and mental images? The visible universe does not completely blink out of QBist existence when we close our eyes; the lapse in observation is filled in by the subjective judgment that the world is still there. QBism preserves our common sense. Quantum mechanics is classified as a prediction tool, not a gateway to ultimate reality.
QBism sweeps the cobwebby spookiness out of quantum physics (and into someone else’s discipline). There’s no action at a distance, and there’s no speculation (within physics) about what particles are doing when we’re not looking at them. But we can push this idea in a direction unintended by QBism’s inventors. If we really believe that direct observation is the only reality, then looking at the night sky is a single truth; observer and observed cannot be logically separated. And the quest to preserve locality leads to unification with everything we see.