Where do we go from here?
Despite the implications of the Copenhagen interpretation, relational quantum mechanics, and information-theoretic interpretations, we might still have no wish to suggest that the quantum formalism is in any way incomplete. Is there nevertheless some solace to be gained by seeking to reinterpret what the theory says? This offers the advantage that we avoid messing about too much with the equations, as we know that these work perfectly well. Instead, we look hard at what some of the symbols in these equations might actually mean. This doesn’t necessarily lead us to adopt a more realist position, but it might help us to say something more meaningful about the underlying physics that the symbols are supposed to represent.
As a starting point, let’s just acknowledge that quantum mechanics appears to be an inherently probabilistic theory, and that it is founded on a set of axioms. Instead of wrestling endlessly with the interpretation of the theory, is it possible to reconstruct it completely using a different set of axioms, in a way that allows us to attach greater meaning to its concepts?
The theorist Lucien Hardy certainly thought so. In 2001, he posted a paper on the arXiv preprint archive in which he set out what he argued were ‘five reasonable axioms’ from which all of quantum mechanics can be deduced.1 These look nothing like the axioms I presented towards the end of Chapter 4. Gone is the completeness or ‘nothing to see here’ axiom. Gone are the ‘right set of keys’ and the ‘open the box’ axioms. There is no assumption of the Born rule, as such.
Hardy argued that the singular feature which distinguishes quantum mechanics from any theory of physics that has gone before is indeed its probabilistic nature. So, why not forget all about wave–particle duality, wavefunctions, operators, and observables and reconstruct it as a generalized form of probability theory? In fact, the first four of Hardy’s reasonable axioms serve to define the structure of classical probability, of the kind we would use quite happily to describe the outcomes we would anticipate from tossing a coin. It is the fifth axiom, which assumes that transformations between quantum states are continuous and reversible, that extends the foundations to include the possibility of quantum probability.* The rest of quantum mechanics then flows from these, including the Born rule.
At first sight, Hardy’s fifth axiom appears rather counterintuitive. In a theory that is characterized by discontinuities, it seems odd to assume that transformations between quantum states happen in a smooth, continuously incremental fashion. But this is necessary to set up the kind of scenario which just can’t happen in classical physics. ‘Heads’ can’t continuously and reversibly transform into ‘tails’. But the quantum states ↑ and ↓ can. Hardy’s fifth axiom allows for the possibility of quantum superposition, entanglement, and all the fun that follows. Quantum discontinuity is then interpreted straightforwardly as the transformation of our knowledge, from two possibilities (either ↑ or ↓) to one actuality, in just the same way we see that the coin has landed with ‘heads’ facing up.
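A minimal sketch of what the fifth axiom permits (the notation here is mine, not Hardy’s): a quantum state can be steered smoothly and reversibly from ↑ to ↓ by passing through a continuum of intermediate superpositions,

\[
|\psi(\theta)\rangle \;=\; \cos\theta\,|{\uparrow}\rangle \;+\; \sin\theta\,|{\downarrow}\rangle , \qquad \theta: 0 \rightarrow \tfrac{\pi}{2} .
\]

At θ = 0 we have pure ↑, at θ = π/2 pure ↓, and every value of θ in between corresponds to a perfectly legitimate quantum state. Nothing comparable exists for ‘heads’ and ‘tails’.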
Hardy’s paper follows something of a tradition in attempts to reconstruct quantum mechanics, and its publication sparked renewed interest in this general approach. Note, however, that any reconstruction of quantum mechanics as a general theory of probability might allow us to say some meaningful things about what goes into it and what comes out, but it tells us nothing whatsoever about what happens in between. ‘What the physical system is is not specified and plays no role in the results,’ explains Giulio Chiribella. Such probability theories ‘are the syntax of physical theories, once we strip them of the semantics’.2
There’s still nothing to see here.
We might be tempted to conclude that rushing to embrace an entirely probabilistic structure risks throwing the baby out with the bathwater, losing sight of whatever physics is contained within the conventional theory. Instead of discarding all the conventional axioms, is it possible just to be a little more selective?
Look back at the axioms detailed at the end of Chapter 4, and listed in the Appendix. If we’re accepting of the assumption that the wavefunction provides a complete description (Axiom #1), then we need to discover how we feel about the others. There doesn’t seem much to be gained by questioning the ‘right set of keys’, the ‘open the box’, or the ‘how it gets from here to there’ axioms, as these are most certainly necessary if we are to retain some predictability and extract the right kind of information from the wavefunction. Inevitably, our attention turns to Axiom #4, the Born rule or ‘What might we get?’ axiom, as this is where we sense some vulnerability.
In quantum mechanics, we tend to interpret the Born rule in terms of quantum probabilities that are established at the moment of measurement. The reason for this is quite simple and straightforward. Whether we interpret the wavefunction realistically or not, when we apply Schrödinger’s wave equation we get a description of the motion that is smooth and continuous, according to Axiom #5. The form of the wavefunction at some specific time can be used to predict the form of the wavefunction at some later time. In this sense, the Schrödinger equation works in much the same way as the classical equations of motion. It is only when we introduce an interaction or a transition of some kind that changes the state of a quantum system that we’re confronted with discontinuity—an electron ‘jumps’ to some higher-energy orbit, or the wavefunction collapses to one measurement outcome or the other, as God once more rolls the dice. This discontinuity does not—it simply cannot—appear anywhere in the Schrödinger equation.
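For reference, this is the equation in question, the time-dependent Schrödinger equation, in which the Hamiltonian operator Ĥ drives a smooth, continuous, and entirely deterministic evolution of the wavefunction Ψ in time:

\[
i\hbar\,\frac{\partial \Psi}{\partial t} \;=\; \hat{H}\,\Psi .
\]

There is no term in it that could produce a jump.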
This is the reason the Born rule is introduced as an axiom. There is nothing in the formalism itself that tells us unambiguously that this is how nature works. We apply the Born rule because this is the way we try to make sense of the inherent unpredictability of quantum physics. A quantum system has two possible measurement outcomes, but we can’t predict with certainty which outcome we will get in each individual measurement. We use the Born rule in an attempt to disguise our ignorance and to pretend that we really do know what’s going on. The only way we can do this is to assume it’s true.
In conventional quantum mechanics, we assume that quantum probability arises as a direct consequence of the measurement process. But what if we don’t do this? What if we reject the conventional interpretation of the Born rule, or find another, deeper, explanation for the seemingly unavoidable and inherent randomness of the quantum world?
Let’s be clear. Calculating the probability of getting a particular measurement outcome from the square of the total wavefunction is deeply ingrained in the way physicists use quantum mechanics, and nobody is suggesting that this should stop. What we’re suggesting instead is that the Born rule should be seen not simply as a calculating device that has to be assumed because of the way quantum systems interact with our classical apparatus, but rather as an inevitable consequence of the underlying quantum physics, or of the way that we as human beings perceive this physics. Either way, this means changing the way we think about quantum probability.
We’ll begin by taking a look at the first alternative.
As we’ve seen, the philosopher Karl Popper shared some of the realist leanings of Einstein and Schrödinger, and it is clear from his writings on quantum mechanics that he stood in direct opposition to the Copenhagen interpretation, and in particular to Heisenberg’s positivism. As far as Popper was concerned, all this fuss about quantum paradoxes was the result of misconceiving the nature and role of probability.
To explain what he meant, Popper made extensive use of an analogy. Figure 11 shows an array of metal pins embedded in a wooden board. This is enclosed in a box with a transparent side, so that we can watch what happens when a small marble, selected so that it just fits between any two adjacent pins, is dropped into the grid from the top, as shown. On striking a pin, the marble may jump either to the left or to the right. The path followed by the marble is then determined by the sequence of random left/right jumps as it hits successive pins. We measure the position at the bottom of the grid at which the marble comes to rest.
Figure 11 Popper’s pin board.
Repeated measurements made with one marble (or with a ‘beam’ of identical marbles) allow us to determine the frequencies with which the individual marbles come to rest in specific channels at the bottom. As we make more and more measurements, these frequencies converge to a fixed pattern which we can interpret in terms of statistical probabilities. If successive marbles always enter the grid at precisely the same point and if the pins are identical, then we would expect a symmetrical, bell-shaped distribution of probabilities, with a maximum around the centre, thinning out towards the extreme left and right. The shape of this distribution simply reflects the fact that the probability of a sequence in which there are about as many left jumps as there are right is greater than the probability of obtaining a sequence in which the marble jumps predominantly to the left or to the right.
From this, we deduce that the probability for a single marble to appear in any of the channels at the bottom (E, say) will depend on the probabilities for each left-or-right jump in the sequence. Figure 11 shows the sequence left–left–right–left–right–right, which puts the marble in the E channel. The probability for a particular measurement outcome is therefore determined by the chain of probabilities in each and every step in the sequence of events that gives rise to it. If we call such a sequence of events a ‘history’, then we note that there’s more than one history in which the marble lands in the E channel. The sequences right–left–left–right–left–right and right–right–right–left–left–left will do just as well.
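To make this concrete, here is a back-of-the-envelope calculation, assuming (as in the sequences just described) that the marble makes six left-or-right jumps, each with equal probability. Every individual history then has probability (1/2)^6 = 1/64, and there are ‘six choose three’ = 20 distinct histories with three left and three right jumps, so

\[
P(E) \;=\; \binom{6}{3}\left(\tfrac{1}{2}\right)^{6} \;=\; \frac{20}{64} \;\approx\; 0.31 .
\]

A channel that can be reached only by a lopsided sequence (say, six jumps all to the right) is fed by just one history, with probability 1/64, which is why the distribution peaks in the middle and thins out towards the edges.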
Popper argued that we change the propensity for the system to produce a particular distribution of probabilities by simply tilting the board at an angle or by removing one of the pins. He wrote:3
[Removing one pin] will alter the probability for every single experiment with every single ball, whether or not the ball actually comes near the place from which we removed the pin.
So, here’s a thought. In conventional quantum mechanics, we introduce quantum randomness at the moment of interaction, or measurement. What if, instead, the quantum world is inherently probabilistic at all moments? What if, just like Popper’s pin board example, the probability for a specific measurement outcome reflects the chain of probabilities for events in each history that gives rise to it?
It’s a little easier to think about this in the context of a more obvious quantum example. So let’s return once again to our favourite quantum system (particle A), prepared in a superposition of ↑ and ↓ states. We connect our measuring device to a gauge with a pointer which moves to the left when A is measured to be in an ↑ state (A↑), and moves to the right when A is measured to be in a ↓ state (A↓).
In what follows, we will focus not on the wavefunction per se, but on a construction based on the wavefunction which—in terms of the pictograms we’ve considered so far—can be (crudely) represented like this:
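(Here, as a rough stand-in for the pictograms, |Ψ⟩ denotes the total wavefunction and |A↑⟩, |A↓⟩ the wavefunctions for the A↑ and A↓ states.)

\[
|\Psi\rangle \;=\; |A_{\uparrow}\rangle\langle A_{\uparrow}|\Psi\rangle \;+\; |A_{\downarrow}\rangle\langle A_{\downarrow}|\Psi\rangle .
\]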
In this expression, I’ve applied the so-called projection operators derived from the wavefunctions for A↑ and A↓ to the total wavefunction. Think of these operators as mathematical devices that allow us to ‘map’ the total wavefunction onto the ‘space’ defined by the basis functions A↑ and A↓.* If it helps, you can compare this process to that of projecting the features of the surface of the (near-spherical) Earth onto a flat, rectangular chart. The Mercator projection is the most familiar, but it trades accuracy for the convenience of two dimensions, with distortion growing as we approach the poles, such that Greenland and Antarctica appear larger than they really are.
What happens in this quantum-mechanical projection is that the ‘front end’ of each projection operator combines with the total wavefunction to yield a number, which is simply related to the proportion of that wavefunction in the total. The ‘back end’ of each projection operator is the wavefunction itself. So, what we end up with is a simple sum. The total wavefunction is given by the proportion of A↑ multiplied by the wavefunction for A↑, plus the proportion of A↓ multiplied by the wavefunction for A↓. So far in our discussions we have assumed that these proportions are equal, and we will continue to assume this.
I know that this looks like an unnecessary complication, but the projection operators take us a step closer to the actual properties (↑ and ↓) of the quantum system, and it can be argued that these are more meaningful than the wavefunctions themselves.
How should we think about the properties of the system (represented by its projection operators) as it evolves in time through a measurement process? To simplify this some more, we’ll consider just three key moments: the total (quantum plus measurement) system at some initial time shortly after preparation (we’ll denote this as time t0), the system at some later time just before the measurement takes place (t1), and the system after measurement (t2). What we get then is a measurement outcome that results from the sequence or ‘history’ of the quantum events.
Here’s the interesting thing. The history we associate with conventional quantum mechanics is not the only history compatible with what we see in the laboratory, just as there are different histories that will leave the marble in the E channel of Popper’s pin board.
In the consistent histories interpretation, first developed by physicist Robert Griffiths in 1984, these histories are organized into ‘families’ or what Griffiths prefers to call ‘frameworks’. For the measurement process we’re considering here we can devise at least three different frameworks.

In Framework #1, we begin at time t0 with an initial quantum superposition of the A↑ and A↓ states, with the measuring device (which we continue to depict as a gauge of some kind) in its ‘neutral’ or pre-measurement state. We suppose that as a result of some spontaneous process, by t1 the system has evolved into either A↑ or A↓, each entangled with the gauge in its neutral state. The measurement then happens at t2, when the gauge pointer moves either to the left or to the right, depending on which state is already present.
Framework #2 is closest to how the conventional quantum formalism encourages us to think about this process. In this family of histories, the initial superposition entangles with the gauge, only separating into distinct A↑ (pointer left) or A↓ (pointer right) states at t2, which is where we imagine or assume the ‘collapse’ to occur. There is no such collapse in Framework #3, in which the A↑ and A↓ states remain entangled with the gauge at t2, producing a macroscopic quantum superposition (also known, for obvious reasons, as a Schrödinger cat state).
These different frameworks are internally consistent but mutually exclusive. We can assign probabilities to different histories within each framework using the Born rule, and this is what makes them consistent. But, as Griffiths explains: ‘In quantum mechanics it is often the case that various incompatible frameworks exist that might be employed to discuss a particular situation, and the physicist can use any one of them, or contemplate several of them.’4 Each provides a valid description of events, but they are distinct and they cannot be combined.
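Schematically (this is a sketch in the language of projection operators, not a full statement of Griffiths’ formalism, and the symbols are mine), the probability of a particular history α is obtained by applying the chain of projectors for that history to the initial wavefunction and squaring, and a family of histories counts as consistent only if the cross terms between distinct histories in the family vanish:

\[
\Pr(\alpha) \;=\; \big\|\, \hat{P}_{\alpha_2}(t_2)\,\hat{P}_{\alpha_1}(t_1)\,|\Psi(t_0)\rangle \,\big\|^{2},
\qquad
\langle \Psi(t_0)|\,\hat{C}_{\alpha'}^{\dagger}\,\hat{C}_{\alpha}\,|\Psi(t_0)\rangle \;\approx\; 0 \quad (\alpha \neq \alpha'),
\]

where \( \hat{C}_{\alpha} = \hat{P}_{\alpha_2}(t_2)\,\hat{P}_{\alpha_1}(t_1) \) and each \( \hat{P}_{\alpha_i}(t_i) \) stands for the projector corresponding to the property asserted at time \( t_i \).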
At a stroke, this interpretation renders any debate about the boundary between the quantum and classical worlds—Bell’s ‘shifty split’—completely irrelevant. All frameworks are equally valid, and physicists can pick and choose the framework most appropriate to the problem they’re interested in. Of course, it’s difficult for us to resist the temptation to ask: But what is the ‘right’ framework? In the consistent histories interpretation, there isn’t one, just as there is no such thing as the ‘right’ wavefunction and no ‘preferred’ basis.
But doesn’t the change in physical state suggested by the events happening between t0 and t1 in Framework #1, and between t1 and t2 in Framework #2, still imply a collapse of some kind? No, it doesn’t:5
Another way to avoid these difficulties is to think of wave function collapse not as a physical effect produced by the measuring apparatus, but as a mathematical procedure for calculating statistical correlations…. That is, ‘collapse’ is something which takes place in the theorist’s notebook, rather than the experimentalist’s laboratory.
We know by now what this implies from our earlier discussion of the relational and information-theoretic interpretations. The wavefunctions (and hence the projection operators derived from them) in the consistent histories interpretation are not real. Griffiths treats the wavefunction as a purely mathematical construct, a pre-probability, which enables the calculation of quantum probabilities within each framework. From this we can conclude that the consistent histories interpretation is anti-realist. It involves a rejection of Proposition #3.
The consistent histories interpretation is most powerful when we consider different kinds of questions. Think back to the two-slit interference experiment with electrons. Now suppose that we use a weak source of low-energy photons in an attempt to discover which slit each electron passes through. The photons don’t throw the electron off course, but if they are scattered from one slit or the other, this signals which way the electron went. We allow the experiment to run, and as the bright spots on the phosphorescent screen accumulate, we anticipate the build-up of an interference pattern (Figure 4). In this way we reveal both particle-like, ‘Which way did it go?’, and wave-like interference behaviour at the same time.
Not so fast. In the consistent histories interpretation, it is straightforward to show that which way and interference behaviours belong to different incompatible frameworks. If we think of these alternatives as involving ‘particle histories’ (with ‘which way’ trajectories) or ‘wave histories’ (with interference effects), then the consistent histories interpretation is essentially a restatement of Bohr’s principle of complementarity couched in the language of probability. There simply is no framework in which both particle-like and wave-like properties can appear simultaneously. In this sense, consistent histories is not intended as an alternative, ‘but as a fully consistent and clear statement of basic quantum mechanics, “Copenhagen done right” ’.6
But there’s a problem. If we rely on the Born rule to determine the probabilities for different histories within each framework, then we must acknowledge an inescapable truth of the resulting algebra. Until it interacts with a measuring device, the square of the total wavefunction may contain ‘cross terms’ or ‘interference terms’:
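(Sketching this in the same stand-in notation, and assuming equal proportions of A↑ and A↓, as before:)

\[
|\Psi|^{2} \;=\; \tfrac{1}{2}\,|A_{\uparrow}|^{2} \;+\; \tfrac{1}{2}\,|A_{\downarrow}|^{2} \;+\; \underbrace{\tfrac{1}{2}\left(A_{\uparrow}^{*}A_{\downarrow} \,+\, A_{\downarrow}^{*}A_{\uparrow}\right)}_{\text{interference terms}} .
\]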
As the name implies, the interference terms are responsible for interference, of precisely the sort that gives rise to alternating bright and dark fringes in the two-slit experiment. In conventional quantum mechanics, the collapse of the wavefunction implies not only a random choice between the outcomes A↑ and A↓, but also the disappearance of the interference terms.
We can observe interference effects using light, or electrons, or (as we’ll see in Chapter 8) large molecules or small superconducting rings. But it goes without saying that we don’t observe interference of any kind in large, laboratory-sized objects, such as gauge pointers or cats. So we need to find a mechanism to account for this.
Bohr assumed the existence of a boundary between the quantum and the classical worlds, without ever being explicit about where this might be or how it might work. But we know that any classical measuring device must be composed of quantum entities, such as atoms and molecules. We therefore expect that the first stages of an interaction between a quantum system and a classical detector are likely to be quantum in nature. We can further expect that the sheer number of quantum states involved quickly mushrooms as the initial interaction is amplified and converted into a signal that a human experimenter can perceive—perhaps as a bright spot on a phosphorescent screen, or the changing direction of a gauge pointer.
In the example we considered earlier, the presence in the detector of particle A in an ↑ state triggers a cascade of ever more complex interactions, with each step in the sequence governed by a probability. Although each interaction taken individually is in principle reversible, the process is quickly overwhelmed by the ‘noise’ and the complexity in the environment and so appears irreversible. It is rather like a smashed cocktail glass on the floor, which doesn’t spontaneously reassemble no matter how long we wait, even though there’s nothing in the classical theory of statistical mechanics that says this can’t happen.
This ‘washing out’ of quantum interference effects as the measurement process grows in scale and complexity is called decoherence. In 1991, Murray Gell-Mann and James Hartle extended the consistency conditions of the consistent histories interpretation specifically to account for the suppression of interference terms through decoherence. The resulting interpretation is now more frequently referred to as decoherent histories.
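A rough sketch of how this works, with |E↑⟩ and |E↓⟩ standing (my notation) for the states of the environment that become correlated with A↑ and A↓ during the cascade: the amplified, entangled state looks something like

\[
|\Psi\rangle \;\approx\; \tfrac{1}{\sqrt{2}}\Big( |A_{\uparrow}\rangle|E_{\uparrow}\rangle \;+\; |A_{\downarrow}\rangle|E_{\downarrow}\rangle \Big),
\]

and the interference terms now carry a factor of the overlap between the two environment states, which for a large, complex, noisy environment rapidly becomes vanishingly small:

\[
\text{interference terms} \;\propto\; \langle E_{\uparrow}|E_{\downarrow}\rangle \;\longrightarrow\; 0 .
\]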
We will meet decoherence again. But I’d like to note in passing that this is a mechanism for translating phenomena at the microscopic quantum scale to things we observe at our macroscopic classical scale, designed to eliminate all the strange quantum quirkiness along the way. Decoherence is deployed in a number of different interpretations, as we’ll see. In this particular instance, it serves as a rather general, and somewhat abstract, mathematical technique used to ‘cleanse’ the probabilities of the contributions arising from the interference terms. This is entirely consistent with the view that the wavefunction is a pre-probability, and so not physically real. Other interpretations which take a more realist view of the wavefunction make use of decoherence as a real physical process.
One last point. Decoherence rids us of the interference terms. But it does not force the choice of measurement outcome (either A↑ or A↓)—which is still left to random chance. Einstein would not have been satisfied.
Gell-Mann and Hartle were motivated to search for an alternative to the Copenhagen interpretation because it appears to attach a special significance to the process of measurement. At the time this sat rather uncomfortably with emerging theories of quantum cosmology—in which quantum mechanics is applied to the entire Universe—because, by definition, there can be nothing ‘outside’ the Universe to make measurements on it. The decoherent histories interpretation resolves this problem by making measurement no more significant than any other kind of quantum event.
Interest in the interpretation grew, promoted by a small but influential international group of physicists that included Griffiths, Roland Omnès, Gell-Mann, and Hartle.
But the concerns grew, too.
In 1996, theorists Fay Dowker and Adrian Kent showed that serious problems arise when the frameworks are carried through to classical scales. Whilst the history of the world with which we are familiar may indeed be a consistent history, it is not the only one admitted by the interpretation.7 There is an infinite number of other histories, too. Because all the events within each history are probabilistic in nature, some of these histories include a familiar sequence of events but then abruptly change to an utterly unfamiliar sequence. There are histories that are classical now but which in the past were superpositions of other classical histories, suggesting that we have no basis on which to conclude that the discovery of dinosaur fossils today means that dinosaurs roamed the Earth a hundred million years ago.
Because there is no ‘right’ framework that emerges uniquely as a result of the exercise of some law of nature, the interpretation regards all possible frameworks as equally valid, and the choice then depends on the kinds of questions we ask. This appears to leave us with a significant context dependence, in which our ability to make sense of the physics seems to depend on our ability to ask the ‘right’ questions. Rather like the vast computer Deep Thought, built to answer the ultimate question of ‘life, the universe and everything’ in Douglas Adams’s Hitch-hiker’s Guide to the Galaxy, we are furnished with the answer,* but we can only hope to make sense of this if we can be more specific about the question.
Griffiths acknowledges that Dowker and Kent’s concerns are valid, but concludes that this is the price that must be paid. The decoherent histories interpretation is8
contrary to a deeply rooted faith or intuition, shared by philosophers, physicists, and the proverbial man in the street, that at any point in time there is one and only one state of the universe which is ‘true’, and with which every true statement about the world must be consistent. [This intuition] must be abandoned if the histories interpretation of quantum theory is on the right track.
Sometimes, in situations like this, I find it helpful to step back. Maybe I’ll make another cup of tea, stop thinking about quantum mechanics for a while, and hope that my nagging headache will go away.
In these moments of quiet reflection, my mind wanders (as it often does) to the state of my personal finances. Now, I regard myself as a rational person. When faced with a choice between two actions, I will tend to choose the action that maximizes some expected utility, such as my personal wealth. This seems straightforward, but the world is a complex and often unpredictable place, especially in the age of Trump and Brexit. I understood quite some time ago that buying weekly tickets for the national lottery does not make for a robust pension plan. But beyond such obvious realizations, how do any of us know which actions to take? Should I keep my money in my bank account or invest in government bonds or the stock market?
Our rationalist tendency is to put a probability on each action and choose the action with the highest probability of delivering the expected utility. We don’t necessarily calculate these probabilities: we might look at bank interest rates, study the stock market, and try to form a rational, though qualitative, view. Or we might run with some largely subjective opinions about these different choices taking into account our perceptions and appetite for financial risk. Of course, we might shift the burden of responsibility for these choices to a financial adviser, if we can afford one, but we’re still going to want to hear the rationale behind them before we commit.
This notion of probability as a measure of our subjective degree of belief or uncertainty is credited to the eighteenth-century statistician, philosopher, and Presbyterian minister Thomas Bayes. Bayes’ approach was given its modern mathematical formulation by Pierre Simon Laplace in 1812.
Suppose you form a hypothesis that some statement might be true or valid. In Bayesian probability theory, you assign this hypothesis a probability of being valid, or as a measure of the extent of your belief in it. This is called a prior probability. Now look at it again in the light of some factual evidence. The probability that your hypothesis is valid in the light of the evidence is then called a posterior probability.
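In its modern form, Bayes’ rule fits on a single line. If P(H) is the prior probability you assign to a hypothesis H, and P(E | H) is the probability of obtaining the evidence E assuming H is valid, then the posterior probability is

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} ,
\]

where P(E) is the overall probability of the evidence. The evidence supports the hypothesis when this posterior comes out larger than the prior.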
You’re now faced with a simple question. Is the posterior probability larger or smaller than the prior probability? In other words, does the evidence confirm or at least support your hypothesis, or does it disconfirm or serve to undermine it? Or is it neutral? Bayesian probability theory is used extensively in science, and especially in the philosophy of science as a way of thinking about how we use empirical evidence to confirm or disconfirm scientific theories.
But if we think about this for a while, we will conclude that these probabilities are really all rather subjective. I might come to believe one thing, but you might look at the same evidence and come to believe something completely different. Who is to say which of us is right? We are able to get away with this kind of subjectivity in daily life, but surely this has no place in theories of physics based on objective facts about an objective reality (Proposition #1).*
Perhaps this is a good point to provide a more extended version of this quote from Heisenberg:9
Our actual situation in research work in atomic physics is usually this: we wish to understand a certain phenomenon, we wish to recognise how this phenomenon follows from the general laws of nature. Therefore, that part of matter or radiation which takes part in the phenomenon is the natural ‘object’ in the theoretical treatment and should be separated in this respect from the tools used to study the phenomenon. This again emphasises a subjective element in the description of atomic events, since the measuring device has been constructed by the observer, and we have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.
When scientists go about their business—observing, experimenting, theorizing, predicting, testing, and so on—they tend to do so with a certain fixed attitude or mindset. Scientists tend to assume that there is, in fact, nothing particularly special about ‘us’. We are not uniquely privileged observers of the Universe we inhabit. We are not at the centre of everything. This is the ‘Copernican Principle’: science strives for a description in which our existence is a natural consequence of reality rather than the reason for it.
Remember that one consequence of Einstein’s theories of relativity is that the observer is put back into the reality that is being observed. So, shouldn’t we at least accept the need to put the experimenter back into the quantum reality that is being experimented on? We don’t have to go so far as to suggest a causal connection—we don’t need to reject Proposition #1 and argue that the Moon ceases to exist when nobody looks at it or thinks about it. Perhaps we just need to accept that our scientific description isn’t really complete unless we place ourselves firmly in the thick of it.
In 2002, Carlton Caves, Christopher Fuchs, and Rüdiger Schack proposed to do just this. Instead of denying the subjective element in quantum mechanics they embraced it. They argued that quantum probabilities computed using the Born rule are not objective probabilities related in a mechanical way to the underlying quantum physics. They are Bayesian probabilities reflecting the personal, subjective degree of belief of the individual experimenter, related only to the experimenter’s experience of the physics.
Your first instinct might be to reject this idea out of hand. Surely, there’s a world of difference between my subjective beliefs about the stock market and the unassailably objective facts of physics? But think about it. My ability to predict movements in the stock market is limited by my lack of experience and knowledge. If I took the time to expand my experience, build my knowledge, and codify this in a couple of useful algorithms, there’s a good chance I’d be able to make more realistic predictions (just ask Warren Buffett).
How is quantum physics different? For the past hundred years or so, physicists have taken the time to expand their experience, building a body of knowledge about quantum systems which is codified in the set of equations we call quantum mechanics. Why believe in the Born rule? Because this is what any rational physicist with access to the experience, knowledge, and algorithms of quantum mechanics will choose to do. ‘The physical law that prescribes quantum probabilities is indeed fundamental, but the reason is that it is a fundamental rule of inference—a law of thought—for Bayesian probabilities.’10
This is an interpretation known as Quantum Bayesianism, abbreviated as QBism (pronounced ‘cubism’). It is entirely subjective. QBists view quantum mechanics as ‘an intellectual tool for helping its users interact with the world to predict, control and understand their experiences of it’.11 Despite the exhortations of his professors, Mermin converted to QBism following six weeks in the company of Fuchs and Schack at the Stellenbosch Institute for Advanced Study in South Africa in 2012, where he ‘finally began to understand what they had been trying to tell me for the past ten years’.12
The approach reaches beyond the Born rule to the quantum states themselves, and the wavefunctions we use to represent them. The Schrödinger equation simply constrains the way any rational physicist will choose to describe their experience, until such time as they become aware of the outcome of a measurement. And this is unproblematic, for the same reason that Rovelli argued that his knowledge about China changes instantaneously whenever he chooses to read an article about China in the newspaper.
As the gauge pointer moves to the left, the rational Alice chooses to describe this experience of the outcome of a measurement in terms of the quantum state A↑. She expresses her degree of belief in this outcome by entering an ‘↑’ in her laboratory notebook. The rational Bob, stuck outside in the corridor with his research supervisor, chooses to describe his experiences in terms of a macroscopic quantum superposition involving the quantum system, measuring device, gauge, Alice, and her notebook. When Bob finally enters the laboratory, Alice shows him her notebook and Bob’s experience and beliefs change. This is Bob’s ‘measurement’. It doesn’t involve any quantum systems, detection devices, or gauges. Bob makes his measurement just by looking at Alice’s notebook, or simply by asking her a question. Of course, Bob wasn’t present when Alice made her measurement, but he trusts Alice implicitly and his degree of belief in the outcome is unshaken.
By making this all about subjective experiences, once again all the problems associated with a realist interpretation of the wavefunction evaporate. Quantum probability is a personal judgement about the physics; it says nothing about the physics itself.13 There is no collapse of the wavefunction, for the simple reason that there are no outcomes before the act of measurement (however this is defined): experiences can’t exist before they are experienced. There is no such thing as non-locality, and no spooky action at a distance: ‘QBist quantum mechanics is local because its entire purpose is to enable any single agent to organize her own degrees of belief about the contents of her own personal experience. No agent can move faster than light.’14
This is a ‘single-user’ interpretation. The experiences and degrees of belief are unique to the individual—the Bayesian probabilities make no sense when applied to many individuals at once. We have to face up to the fact that the subjective nature of our individual experiences means that we all necessarily carry different versions of reality around with us in our own minds. If this is really the case, how is science of any kind even possible?
Calm down. The versions of reality that we all carry in our minds are still shaped by our experiences of a single, external, Empirical Reality. As a result of all our experiences, learning, and communicating with our fellow humans we develop what the philosopher John Searle calls the background, which I mentioned briefly in Chapter 2. This is an enormously wide and varied backdrop against which we interact with external reality. It is everything we learn from experience and come to take for granted, social and physical, as we live out our daily lives. The background is where we find all the regularities and the continuity, the expectation that the Sun will rise tomorrow, that things will be found where we left them, that cars won’t turn into trees, that this $20 bill really is worth $20, and that when you turn the next page it will be covered by profoundly interesting text, and not pictures of sausages.
We each form the background by accumulating a set of mental impressions. But these have great similarity, derived from a broad set of common experiences (including experiences of quantum physics), a common body of knowledge, commonly accessible forms of communication, and human empathy. It is the close similarity of these individual backgrounds that makes human interaction possible.
Similar, but not the same. Within my mind is the reality with which I have learned to interact. You have no access to this reality, because you have no access to my mind. Within your mind is the reality with which you have learned to interact. I have no access to this reality, because I have no access to your mind. My reality is not your reality. But these individual realities possess many common features, such as the recognition that a $20 bill is money, or that if I perform this experiment I’ll get the result A↑ 50% of the time. Through the extraordinary complexity of our everyday interactions, we perceive these separate realities as one.
Clearly, QBism rejects Proposition #3 and in this regard is unashamedly anti-realist at the level of representation. It has nothing meaningful to say about the physics underlying the experiences. Once more, there’s nothing to see here.
The Copenhagen interpretation seeks to place the blame for the inaccessibility of the quantum world on our classical language and apparatus. Rovelli’s relational interpretation shifts the blame to the need to establish relationships with quantum states if they are to acquire any physical significance. Interpretations based on information do much the same. In the consistent or decoherent histories interpretation, the blame resides in the fundamentally probabilistic nature of all quantum events, and the lack of a rule to determine the ‘right’ framework.
In QBism, all of physics beyond our experience is in principle inaccessible. This kind of subjectivism applies equally well to classical mechanics, in which we codify our experience in equations that represent the behaviour of classical objects in terms of things such as mass, velocity, momentum, and acceleration.15 Arguably, we are forced to acknowledge this subjectivism only in quantum mechanics, when we’re finally confronted with the bizarre consequences of adopting a realist perspective.
But Fuchs argues that QBism is not instrumentalist. Inspired by many of John Wheeler’s arguments, he prefers to think of the interpretation as involving a kind of ‘participatory realism’ (more on this to follow). This is participatory not in the sense of human perception and experience being necessary to conjure something from nothing and ‘make it real’, which would involve rejecting Propositions #1 and #2. Instead QBism simply argues that, in quantum mechanics, we can no longer ignore the fact that we are very much part of the reality we’re trying so desperately hard to describe:16
QBism breaks into a territory the vast majority of those declaring they have a scientific worldview would be loath to enter. And that is that the agents (observers) matter as much as electrons and atoms in the construction of the actual world—the agents using quantum theory are not incidental to it.
Hardy’s axiomatic reconstruction, consistent histories, and QBism all require some substantial trade-offs. Yes, all the problems go away and we can forget about them. But we are left to contemplate a reality made of probabilities, and nothing more, or the abandonment of a single version of the historical truth, or a reality that is inherently subjective and participatory. It’s clear that none of these attempts can provide us with any new insights or understanding of the underlying physics. In the context of Proposition #4 they are passive, not active, reconstructions or interpretations.
It seems that even if we’re no longer trapped by Scylla, mercilessly exposed to her brutally monstrous charms, we haven’t managed to sail the ship very far.