9

Quantum Mechanics is Incomplete
Because We Need to Include My Mind (or Should That be Your Mind?)

Von Neumann’s Ego, Wigner’s Friend, the Participatory Universe, and the Quantum Ghost in the Machine

Of course, the problem of the collapse of the wavefunction didn’t originate in 1932 with the publication of von Neumann’s Mathematical Foundations of Quantum Mechanics. But it’s fair to say that his approach to quantum measurement really dragged the problem into the open, from where it has proceeded to torture the intellects of quantum physicists and philosophers for the past 90 years or so.

As we’ve seen, the anti-realists dismiss the collapse as a non-problem, no more difficult to understand than the abrupt change in our knowledge when we gain some new information. A few theorists more inclined to realist preconceptions have sought to identify physical mechanisms that act on physically real wavefunctions, in attempts to explain how ‘and’ becomes ‘or’.

But what did von Neumann himself think was going on?

In his classic text, von Neumann clearly distinguished between two fundamentally different types of quantum process. The first, which he referred to as process 1, is the discontinuous, irreversible transformation of a pure quantum state into a mixture, involving the ‘projection’ of some initial wavefunction into one of a set of possible measurement outcomes, with an accompanying increase in entropy. We now call this the collapse of the wavefunction, although von Neumann himself didn’t use this terminology.* Process 2 is the continuous, deterministic, and completely reversible evolution of a wavefunction, governed by the Schrödinger equation according to Axiom #5 (see Appendix). These two processes are quite distinct: neither can be derived from, or reduced to, the other.
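The distinction can be sketched as follows (a schematic summary in modern ket notation, not von Neumann’s own presentation):

Process 2 (continuous, deterministic, reversible): $i\hbar\,\dfrac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle$

Process 1 (discontinuous, irreversible): $|\psi\rangle \;\longrightarrow\; \dfrac{\hat{P}_k\,|\psi\rangle}{\big\|\hat{P}_k\,|\psi\rangle\big\|}$, with probability $\langle\psi|\hat{P}_k|\psi\rangle$

where $\hat{H}$ is the Hamiltonian (energy) operator and $\hat{P}_k$ projects onto the $k$th measurement outcome.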

He then looked at quantum measurement from the perspective of three fundamental components, which he labelled I, II, and III:

I is the quantum system under investigation;
II is the physical measurement; and
III is the ‘observer’.

He proceeded to demonstrate that if a quantum system I is present in a superposition of the measurement outcomes (for example, particle A is in a superposition of ↑ and ↓ states), then this will evolve smoothly and continuously according to process 2. On encountering the measuring device, the wavefunction of the system becomes entangled with that of the device, but von Neumann saw no reason to suppose that quantum mechanics ceases to apply at this classical scale. The entangled wavefunction must then continue to evolve smoothly according to the Schrödinger equation. Process 2 still applies. If we further entangle the device with a gauge, we know well by now that this gives rise to another superposition: one component in which A is ↑ and the gauge reads ↑, and another in which A is ↓ and the gauge reads ↓.
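In the same schematic notation (again, not von Neumann’s own), the chain looks like this, with the device and the gauge beginning in ‘ready’ states $|D_0\rangle$ and $|G_0\rangle$ and becoming entangled with the particle:

$\big(a\,|{\uparrow}\rangle + b\,|{\downarrow}\rangle\big)\,|D_0\rangle\,|G_0\rangle \;\xrightarrow{\ \text{process 2}\ }\; a\,|{\uparrow}\rangle\,|D_{\uparrow}\rangle\,|G_{\uparrow}\rangle \;+\; b\,|{\downarrow}\rangle\,|D_{\downarrow}\rangle\,|G_{\downarrow}\rangle$

Nothing on the right-hand side picks out a single outcome: the Schrödinger equation simply carries the superposition up the chain.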

Von Neumann could find no reason, based purely on the mathematics, to suppose that process 1 would have any role to play in the composite system I plus II. Process 2 applies equally to classical measuring devices and gauges as it does to quantum systems.

Schrödinger wouldn’t publish the paper containing the reference to his famous cat for another three years, yet von Neumann was already aware of the threat of an infinite regress. But his resolution of the problem was quite straightforward. If the quantum mechanics described by process 2 applies equally well to classical measuring devices, then there is again no good reason to suppose that it ceases to apply when considering the function of human sense organs, their connections to the brain, and the brain itself. Suppose that the laboratory has an overhead light which illuminates the screen of the gauge and some of the reflected light is gathered and focused at the observer’s retinas. This triggers electrical signals in the observer’s optic nerves, which travel to the visual cortex located at the back of the observer’s brain.

We can choose simply to expand the definition of the ‘quantum system’ in I to include the particle A, the classical measuring device, the gauge, and the reflected light. Component II—the ‘physical measurement’—then includes the observer’s sensory apparatus and brain. The result is yet another superposition:

[particle A is ↑, the gauge reads ↑, and the observer’s brain registers ↑] + [particle A is ↓, the gauge reads ↓, and the observer’s brain registers ↓]

This implies that the observer enters into a superposition of ‘brain states’, one registering the ↑ outcome and the other registering the ↓ outcome.

Von Neumann wrote:1

Now quantum mechanics describes the events which occur in the observed portions of the world, so long as they do not interact with the observing portion, with the aid of process 2…, but as soon as such an interaction occurs, i.e. a measurement, it requires the application of process 1.

So what, then, did he have in mind with regard to the ‘observing portion’ of the world? Based on conversations he had had with his Hungarian compatriot Leo Szilard, von Neumann suggested that component III consists of the observer’s ‘abstract ego’. In other words, process 1—the collapse of the wavefunction—only occurs when the measurement outcome is registered in the observer’s conscious mind.

The logic is pretty unassailable. No observer has ever reported experiencing a superposition of brain states (or, at least, anyone declaring that they have directly experienced such a superposition wouldn’t be taken very seriously). Components I and II are entirely ‘mechanical’ in nature—they involve physics and biochemistry. We’re left to conclude that because III is not mechanical, this must be the place where the continuous evolution of the wavefunction—process 2—breaks down, to be replaced by process 1.

This conclusion is nevertheless quite extraordinary considering von Neumann’s mission in the Mathematical Foundations, which was to provide a much more secure mathematical basis for quantum mechanics using Hilbert’s axiomatic approach. As I’ve already mentioned, these axioms (especially Axiom #1) served to entrench the prevailing Copenhagen interpretation directly in the formalism itself. Although Bohr was more ambiguous about what this actually meant, Heisenberg’s perspective was firmly anti-realist. Yet in his theory of quantum measurement, von Neumann went substantially beyond the Copenhagen interpretation, ignoring Bohr’s insistence on an arbitrary boundary between the quantum and classical worlds.

I would argue that the introduction of a role for consciousness in the measurement process represents an addition to the conventional quantum formalism, seemingly at odds with the ‘nothing to see here’ axiom. I guess von Neumann would have responded that Axiom #1 refers only to the mathematical structure, and the proposed addition of component III is decidedly non-mathematical. He wrote that: ‘III remains outside of the calculation’.2

As it stands, von Neumann’s theory can still be interpreted in two fundamentally different ways. An anti-realist would agree with the description of components I and II and interpret III not as a physical collapse, but as the registering of the measurement outcome and the updating of the observer’s state of knowledge about it. This most definitely involves the observer’s conscious mind, but only in a passive sense, and takes us back to relational quantum mechanics, or information-theoretic interpretations, or QBism (take your pick).

But it seems that von Neumann held a different view. Component III is intended as the place where process 1 occurs, considered as a real physical collapse. His long conversations with Szilard concerned the latter’s work on entropy reduction in thermodynamic systems through interference by intelligent beings, a variation on Maxwell’s Demon.3 The philosopher Max Jammer notes that Szilard’s paper ‘marked the beginning of certain thought-provoking speculations about the effect of a physical intervention of mind on matter’.4

A real physical collapse implies a real wavefunction, and therefore a much more active role for the observer’s conscious mind. It is for this reason that I include consciousness-causes-collapse theories in my collection of realist interpretations of quantum mechanics. That von Neumann wasn’t specific on how he thought the wavefunction itself should be interpreted just adds to the confusion (but is pretty much par for the course in this business).

But now we need to ask ourselves: Just who is the observer? Let’s return to the scenario we’ve considered a few times already, in which Alice makes a measurement in the laboratory but Bob is delayed in the corridor. I’m going to make one small adjustment. I’m going to replace Bob with renowned theorist Eugene Wigner. Alice and Wigner are close friends.5

We recall that Alice performs a measurement on a quantum system consisting of an ensemble of A particles prepared in a superposition of ↑ and ↓ states. Instead of a gauge, the measuring device is now connected to a simple light switch. If the device records an ↑ result, the switch is not thrown and the light doesn’t flash. If the device records a ↓ result, the switch is thrown and the light flashes. She runs the experiment once, and observes the light flash.

Wigner is still in the corridor. As far as he is concerned, the total wavefunction that Alice just experimented on has the form of another superposition involving the two possible measurement outcomes for A (↑ and ↓), the two possible states of the light (‘no flash’ and ‘flash’), and Alice’s possible brain states (‘saw no flash’ and ‘saw the flash’):

[A is ↑, the light doesn’t flash, Alice sees no flash] + [A is ↓, the light flashes, Alice sees the flash]

Wigner now enters the laboratory. The following conversation ensues.

‘Did you see the light flash?’ asks Wigner.

‘Yes,’ replies Alice.

As far as Wigner is concerned, the measurement outcome has just registered in his conscious mind and the wavefunction collapses into the second of these states: A is ↓, the light has flashed, and Alice has seen the flash.

But, after some reflection, he decides to probe his friend a little further.

‘What did you feel about the flash before I asked you?’

Understandably, Alice is starting to get a little irritated. ‘I told you already, I did see a flash,’ she replies, testily.

Not wishing to put any further strain on his relationship with Alice, he decides to accept what she’s telling him. He concludes that the wavefunction must have already collapsed into the state in which A is ↓, the light has flashed, and Alice has seen the flash, before he entered the laboratory and asked the question, and the above superposition that he took to be the correct description is, in fact, wrong. This superposition ‘appears absurd because it implies that my friend was in a state of suspended animation before [she] answered my question’.6 He wrote:

It follows that the being with a consciousness must have a different role in quantum mechanics than the inanimate measuring device…. It is not necessary to see a contradiction here from the point of view of orthodox quantum mechanics, and there is none if we believe that the alternative is meaningless, whether my friend’s consciousness contains either the impression of having seen a flash or of not having seen a flash. However, to deny the existence of the consciousness of a friend to this extent is surely an unnatural attitude, approaching solipsism, and few people, in their hearts, will go along with it.

This is the paradox of Wigner’s friend. To resolve it we must presume that the irreversible collapse of the wavefunction is triggered by the first conscious mind it encounters.

There’s more. Nowhere in the physical world is it possible physically to act on an object without some kind of reaction. This is Newton’s third law of motion. Should consciousness be any different? Although small, the action of a conscious mind in collapsing the wavefunction produces an immediate reaction—knowledge of the state of a system is irreversibly (and indelibly) generated in the mind of the observer. This reaction may lead to other physical effects, such as writing the result in a laboratory notebook, or the publication of a research paper, or the winning of a Nobel Prize. In this hypothesis, the influence of matter over mind is balanced by an influence of mind over matter.

If we introduce a role for consciousness in our representation of quantum mechanics, then we must acknowledge the truth of one of Wheeler’s favourite phrases. He argued, paraphrasing Bohr, that ‘No elementary phenomenon is a phenomenon until it is a registered (observed) phenomenon.’7 Rather than interpret this registration process simply as an irreversible change in our knowledge of the system, Wheeler explored a more realistic interpretation in which the process is actually an irreversible act of creation. ‘We are inescapably involved in bringing about that which appears to be happening.’8

Wheeler took some pains to separate the notion of a ‘quantum phenomenon’ from consciousness, arguing that it is the physical, irreversible act of amplification that brings the phenomenon about. But his phrase includes the word ‘observed’, and if we accept a role for conscious observation in quantum physics, then we arrive at more or less the same conclusions. If consciousness is required to collapse the wavefunction and ‘make it real’, then arguably the quantum state that it represents does not exist until it becomes part of the observer’s conscious experience. It’s a relatively small step from this to the rejection of Proposition #1. Nothing exists unless and until it is consciously experienced.

And, indeed, in 1977 Wheeler himself elaborated what was to become known as the ‘participatory anthropic principle’:9

Nothing speaks more strongly for this thesis than…. the anthropic principle of [Brandon] Carter and [Robert] Dicke and…. the indispensable place of the participating observer—as evidenced in quantum mechanics—in defining any useful concept of reality. No way is evident to bring these considerations together into a larger unity except through the thesis of ‘genesis through observership’.

Here anthropic means ‘pertaining to mankind or humans’. Although Wheeler would subsequently declare that the ‘eye’ of the observer ‘could as well be a piece of mica’,10 it’s virtually impossible to read this 1977 essay without concluding that this is about ‘us’, participants of a universe that we create by observing it. In their comprehensive review of anthropic reasoning, called The Anthropic Cosmological Principle, John Barrow and Frank Tipler accepted Wheeler’s sentiments to be consistent with a version of what they call the strong anthropic principle: ‘Observers are necessary to bring the universe into being.’11

Okay. So on this particular visit to the shores of Metaphysical Reality, we’ve settled ourselves comfortably in a deckchair on the beach, with the Sun shining, shades on, sipping a margarita. We’re here to stay a while. Whilst the logic is clear, what we’re trying to do here is resolve two very deeply rooted philosophical conundrums—the collapse of the wavefunction and the nature of consciousness—simply by bringing them together. I have to say that this has never struck me as a particularly productive way to go.

By introducing consciousness into the mix, we invite an awful lot of further difficult questions. What is consciousness and how does it work? What does it mean when we say that consciousness is something other than ‘mechanistic’ and what evidence do we have for this? How is the collapse of the wavefunction supposed to be triggered by consciousness? Is the collapse of the wavefunction actually responsible for consciousness? Is the mind a quantum computer?

These questions are no doubt thought-provoking, but don’t expect to find too many ready answers. The study of consciousness is the only discipline I’ve come across that is structured principally in terms of its problems. We have the ‘hard problem’ of consciousness, the ‘mind–body’ problem, the problem of ‘other minds’, and many more. These problems have sponsored much philosophical reflection and many words but—at present—there appears to be no consensus on the solutions.

We find ourselves in quite a curious situation. Consciousness is very personal. You know what it feels like to have conscious experiences of the external world and you have what I might call an inner mental life. You have thoughts, and you think about these thoughts. You know what your own consciousness is or at least what it feels like. So what’s the problem?

To answer this question it’s helpful to trace the physical processes involved in the conscious perception of a red rose. Now, roses are red because their petals contain a subtle mixture of chemicals called anthocyanins, their redness enhanced if grown in soil of modest acidity. Anthocyanins in the rose petals interact with sunlight, absorbing certain wavelengths and reflecting predominantly red light, electromagnetic radiation with wavelengths between about 620 and 750 billionths of a metre, sitting at the long-wavelength end of the visible spectrum, sandwiched between invisible infrared and orange. Of course, light consists of photons but, no matter how hard we look, we will not find an inherent property of ‘redness’ in photons with this range of wavelengths. Aside from differences in wavelength (and hence energy, according to the Planck–Einstein relation), there is nothing in the physical properties of photons to distinguish red from green or any other colour.
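To put some rough numbers on this (my own arithmetic, using the Planck–Einstein relation $E = h\nu = hc/\lambda$, with $hc \approx 1240$ electron volt nanometres):

$E_{620\,\text{nm}} \approx \dfrac{1240}{620} \approx 2.0\ \text{eV} \qquad E_{750\,\text{nm}} \approx \dfrac{1240}{750} \approx 1.7\ \text{eV}$

A ‘green’ photon at 530 nanometres carries about 2.3 eV. That is the entire physical difference: a little more or a little less energy per photon, and nothing that could be called redness or greenness.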

We keep going. We can trace the chemical and physical changes that result from the interactions of photons with cone cells in your retina all the way to the stimulation of your visual cortex at the back of your brain. Look all you like, but you will not find the experience of the colour red in any of this chemistry and physics. It is only when this information is somehow synthesized by your visual cortex that you have a conscious experience of a beautiful red rose.

Just how is this supposed to work? This is the hard problem, as philosopher and cognitive scientist David Chalmers explained:12

The really hard problem of consciousness is the problem of experience…. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

The problem is hard because we not only lack a physical explanation for how this is supposed to happen, we don’t even really know how to state the problem properly.

Okay. If the ‘how’ problem is too hard, can we at least generate some clues by pondering on where these experiences might be happening?

The French philosopher René Descartes is rightly regarded as the father of modern philosophy. In his Discourse on Method, first published in 1637, he set out to build a whole new philosophical tradition in which there could be no doubt about the absolute truth of its conclusions. From absolute truth, he argued, we obtain certain knowledge. However, to get at absolute truth, he felt he had no choice but to reject as being absolutely false everything in which he could have the slightest reason for doubt. This meant rejecting all the information about the world that he received through his senses.

Why? Well, first, he could not completely rule out the possibility that his senses would deceive him from time to time as, for example, through optical illusions or the sleight of hand and mental manipulations involved in magic tricks.13 Second, he could not be certain that his perceptions and experiences were not part of some elaborate dream. Finally, he could not be certain that he was not the victim of a wicked demon or evil genius with the ability to manipulate his sensory inputs to create an entirely false impression of the world around him (just like the machines in The Matrix).

But he felt that there was at least one thing of which he could be certain. He could be certain that he was a being with a conscious mind that has thoughts. He argued that it would seem contradictory to hold the view that, as a thinking being, he does not exist. Therefore, his own existence was also something about which he could be certain. Cogito ergo sum, he concluded. I think therefore I am.

The external physical world is vague and uncertain, and may not appear as it really is. But the conscious mind seems very different. Descartes went on to reason that this must mean that the conscious mind is separate and distinct from the physical world and everything in it, including the unthinking machinery of his body, and his brain. Consciousness must be something ‘other’, something unphysical.

This mind–body dualism (sometimes called Cartesian dualism) is entirely consistent with belief in the soul or spirit. The body is merely a shell, or host, or mechanical device used for giving outward expression and extension to the unphysical thinking substance. It seems reasonably clear that this kind of dualism is what both von Neumann and Wigner had in mind when they identified consciousness as the place (component III) where physical mechanism is no longer applicable, something outside the calculation and therefore the ideal place for the collapse of the wavefunction.

But to conclude from this that the conscious mind must therefore be unphysical involves a rather bold leap of logic, one that many contemporary philosophers and neuroscientists believe is indefensible. The trouble is that by disconnecting the mind from the brain and making it unphysical we push it beyond the reach of science and make it completely inaccessible. Science simply can’t deal with it. In The Concept of Mind, first published in 1949, the philosopher Gilbert Ryle wrote disparagingly of Cartesian-style mind–body dualism, referring to it as the ‘ghost in the machine’.14 In his 1991 book Consciousness Explained, the philosopher Daniel Dennett argued that ‘accepting dualism is giving up’.15

Faced with this impasse, the only way to progress is to make some assumptions. We assume that, however it works, consciousness arises as a direct result of the neural chemical and physical processes that take place in the brain. Our experience of a red rose has a neural correlate—it corresponds to the creation of a specific set of chemical and physical states involving a discrete set of neurons located in various parts of the brain. In philosophical terms, this is known as ‘materialism’.

Neuroscientists have access to a battery of technologies, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), which can probe the workings of the brain in exquisite detail in non-invasive ways. Experiencing something or thinking about something stimulates one or more parts of the brain. As these parts get to work, they draw glucose and oxygen from the bloodstream. An fMRI scan shows where the oxygen is being concentrated, and so which parts of the brain are ‘lighting up’ as a result of some sensory stimulus, thought process, emotional response, or memory. A PET scan makes use of a radioactive marker in the bloodstream but otherwise does much the same thing, though with lower resolution.

Neuroscience in its modern form was established only in the second half of the past century, and our understanding has come an awfully long way in that relatively short time. But we must once again acknowledge that whilst studying the brain has revealed more and more of the materialist mechanism, it hasn’t yet solved the ‘hard problem’.

Some neuroscientists are nevertheless convinced that consciousness is to be found in chemical and neurophysiological events, that consciousness is not a ‘thing’ but rather an emergent consequence of a complex set of processes occurring in a developed brain.16

Grounding consciousness in neuronal activity implies that it is not the exclusive preserve of human beings. In July 2012, a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists, and computational neuroscientists met together at the University of Cambridge in England. After some deliberations, they agreed the Cambridge Declaration on Consciousness, which states:17

the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.

Are cats conscious? The Cambridge Declaration would suggest that they are. Schrödinger’s cat might again be spared the discomfort of being both alive and dead, its fate already decided (by its own consciousness) before you lift the lid of the box, and look. To some extent, this answers Bell’s challenge. You don’t need a PhD to collapse the wavefunction. But you do need to be awake.

According to the current broad consensus, the human mind is the result of evolutionary selection pressures, driven in Homo sapiens by feedback loops established between an expanding neural capacity, genetic adaptations, and anatomical changes promoting the development of language capability, and the construction of societies. This is the social brain hypothesis. Paleoanthropologists date the specifically human ‘light bulb moment’ to between 40,000 and 50,000 years ago. This is the moment of the Great Leap Forward, or the ‘human revolution’, a flowering of human innovation and creativity involving the transition to what is known as behavioural modernity.

In this ‘standard model’, consciousness is a natural consequence of the physical, chemical, and biological processes involved in evolution. Now, the bulk properties of water (freezes at 0°C, boils at 100°C) are consequences of the physical properties of up and down quarks, gluons, and electrons, though we would be very hard pressed to predict the former based on what we currently know about the latter. So consciousness is a (so far unpredicted and unpredictable) consequence of the conventional material content of the Universe which emerges when we connect billions of neurons together to form an extended network in the brain, and then run billions of complex computations on this. Consciousness didn’t somehow ‘pre-exist’.

But what if the Universe has always contained physical events that are, in some sense, ‘atoms’ of consciousness that existed long before biology? What if one consequence of evolution has then been to assemble these ‘atomic’ events, orchestrate them, and couple them to activity occurring within the neurons in the brain, resulting in what we identify as consciousness? What if the events in question are associated with the distinctly non-computable collapse of the wavefunction? Then we have what Penrose and Stuart Hameroff, professor of anesthesiology at the University of Arizona, have called orchestrated objective reduction, or Orch-OR, a proposal for a quantum basis for consciousness.

This idea dates back to the early 1990s, and was initially developed separately by Penrose and Hameroff before they chose to combine their efforts in a collaboration. Not surprisingly, Penrose approached the problem from the perspective of mathematics. In his book The Emperor’s New Mind, he argued in favour of a fundamental role for consciousness in the human comprehension of mathematical truth, one that goes beyond computation. ‘We must “see” the truth of a mathematical argument to be convinced of its validity,’ he wrote. ‘This “seeing” is the very essence of consciousness.’18 This is entirely consistent with von Neumann’s assertion that consciousness ‘remains outside of the calculation’.

We’ve already seen in the previous chapter that, in this same book, Penrose put forward arguments in favour of a role for local mass–energy density and the curvature of spacetime in collapsing the wavefunction, in what was later to become known as Diósi–Penrose theory. However, having convinced himself that consciousness is at heart the result of some kind of non-computable process, and having also proposed a mechanism for the collapse of the wavefunction, he wasn’t yet able to make the connection. What he lacked was a physical mechanism that would allow quantum events somehow to govern or determine brain activity, and thence consciousness:19

One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity.

Hameroff knew where to look. With Richard Watt from the University of Arizona’s Department of Electrical Engineering, in 1982 he had hypothesized a role for certain protein polymers called microtubules in processing information in the brain. These structures sit inside all complex cell systems, including neurons.* The polymers self-assemble, allowing the formation of synaptic connections between neurons, and helping to maintain and regulate the strengths of these connections to support cognitive functions. Hameroff and Watt theorized that the polymer subunits (globular proteins called tubulins) undergo coherent excitations, forming patterns which support the processing of information much like transistors in a computer.

The conventional wisdom is that information processing in the brain is based on switching between synapses. There are, on average, about 1000 synapses per neuron, each capable of 1000 switching operations per second. The average human brain has about one hundred billion neurons, and hence a capacity of about 10¹⁷ computational operations per second. But there are 10 million tubulin subunits in every cell, capable of switching a million times faster, producing 10¹⁶ operations per second per neuron. If the information processing really does occur here, then this suggests an enhancement in the number of operations per second to 10²⁷, an increase of ten orders of magnitude.
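Spelling out the arithmetic behind these estimates, using the round numbers quoted above:

$10^{11}\ \text{neurons} \times 10^{3}\ \text{synapses per neuron} \times 10^{3}\ \text{operations per second} \approx 10^{17}\ \text{operations per second}$

$10^{7}\ \text{tubulins per neuron} \times 10^{9}\ \text{operations per second} \approx 10^{16}\ \text{operations per second per neuron}$

$10^{16} \times 10^{11}\ \text{neurons} \approx 10^{27}\ \text{operations per second}$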

The tubulin subunits possess two distinct lobes, each consisting of about 450 amino acids. Each subunit can adopt at least two different ‘conformations’—two different arrangements of its atoms in space—with slightly different distributions of electron density which generate weak, long-range, so-called ‘van der Waals’ forces between neighbouring units. These forces are thought to be important in facilitating the switching between conformations, which is the basis of the computational operation.

From the beginning, Hameroff was convinced of the relationship between microtubules and consciousness, not least because of the efficacy of a wide range of very different anaesthetics in temporarily suspending it. Together with Watt, in 1983 he proposed that the chemical anaesthetic seeps into the neuron, disrupting the van der Waals forces between the tubulin subunits, shutting down the computational operations and hence the consciousness of the patient.

On reading The Emperor’s New Mind, Hameroff approached Penrose and they agreed to collaborate. By the time of publication of Penrose’s sequel, Shadows of the Mind, in 1994, the Penrose–Hameroff Orch-OR theory was firmly established.

Each tubulin subunit measures about 8 × 4 × 4 billionths of a metre. These link together in polymeric chains which form columns, and 13 columns wrap around to form a hollow tube—the microtubule. These in turn combine with a network of interlinking filaments to make up the neuron’s physical support structure, called the cytoskeleton.

The Orch-OR mechanism involves the formation of quantum superpositions of the different tubulin conformations, as depicted in Figure 16a. The subunits interact with their neighbours in a cooperative fashion, enabling the development of extended, coherent superpositions across the microtubule. This is shown as steps 1–6 in Figure 16b, where for clarity the microtubule has been unrolled and flattened out. The individual subunit superpositions are shown as the grey elements in this picture. As the extended superposition builds, it passes a threshold determined by the local mass density (and hence local spacetime curvature) according to the Diósi–Penrose theory. The extended wavefunction collapses and the tubulin subunits revert to their classical states. This is the transition between steps 6 and 7 and, according to the Orch-OR theory, this is where consciousness happens (hence the light bulb). The process then begins all over again.
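The threshold itself is usually stated as a collapse time (this is the standard Diósi–Penrose criterion, sketched here in its simplest form):

$\tau \approx \dfrac{\hbar}{E_G}$

where $E_G$ is the gravitational self-energy of the difference between the superposed mass distributions. As the coherent superposition spreads across more and more tubulin subunits, $E_G$ grows and $\tau$ shrinks, until collapse occurs on timescales that Penrose and Hameroff argue are relevant to neural processing.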


Figure 16 The Penrose–Hameroff Orch-OR theory is based on the idea that tubulin subunits in the polymer chains that make up microtubules inside neurons can enter a superposition of different conformational states, (a). The subunits interact with their neighbours, and a coherent superposition develops across the microtubule, shown in (b) as steps 1–6. When the superposition reaches a critical mass density, the wavefunction collapses (steps 6 to 7), contributing to a conscious experience.

This much simplified description of the mechanism doesn’t really do it justice. But I think you should get the sense that this is all very speculative. It is based on the fusion of ideas from the fringes of quantum physics and neuroscience (and, for that matter, from philosophy). And, unsurprisingly, it has been strongly criticized by both physicists and neuroscientists.

Perhaps the most obvious issue concerns the sustainability of coherent quantum superpositions over what are very large biomolecular structures. As we saw in Chapter 8, superpositions involving structures intermediate between quantum and classical have been created successfully in the laboratory, including organic molecules containing up to 430 atoms. Whilst it’s fair to say that we do not yet know what the upper limit might be, the larger the system, the more difficult it is to protect it from the effects of environmental decoherence. This is why MAQRO is a space mission.

But each tubulin subunit is a protein structure containing over ten thousand atoms.20 Microtubules vary in length from about 200 up to 25,000 billionths of a metre. The shorter length implies a polymer column of just 25 subunits, and 13 columns implies a microtubule consisting of 325 subunits in total. The Orch-OR mechanism then calls for a coherent quantum superposition spanning a structure containing on the order of three million atoms, and which must be sustained for millisecond timescales before collapsing. Contrast this with the macroscopic objects suggested for the MAQRO mission, which are small ‘nanospheres’ with diameters of about 100 billionths of a metre.
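The arithmetic here runs as follows (rough numbers, based on the figures quoted above):

$\dfrac{200\ \text{nm}}{8\ \text{nm per subunit}} = 25\ \text{subunits per column} \qquad 25 \times 13\ \text{columns} = 325\ \text{subunits}$

$325\ \text{subunits} \times \sim\!10^{4}\ \text{atoms per subunit} \approx 3 \times 10^{6}\ \text{atoms}$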

It seems extremely unlikely that a coherent quantum superposition can be sustained in the kind of ‘warm, wet, and noisy’ environment likely to be typical of neurons in a working brain. In 2000, theorist Max Tegmark argued that decoherence timescales on the order of a tenth of a trillionth (10⁻¹³) to a hundredth of a millionth of a trillionth (10⁻²⁰) of a second are more likely in this kind of environment.21

But, once again, we must never underestimate the power of a realistic interpretation to inspire and motivate interest, consistent with Proposition #4. Despite its very speculative nature, the Penrose–Hameroff Orch-OR theory has many components that are potentially accessible to experiment and arguably makes many testable predictions. In a 2014 update of the theory and review of its status, Hameroff and Penrose examined how 20 predictions they had offered in 1998 had fared in the interim. They drew much comfort from the discovery, by the research group led by Anirban Bandyopadhyay at the National Institute for Materials Science in Japan, of memory-switching in a single brain microtubule.22 They concluded that the theory had actually fared rather well, giving a ‘viable scientific proposal aimed at providing an understanding of the phenomenon of consciousness’.23

Needless to say, the one component of the theory for which there is as yet no empirical evidence is the Diósi–Penrose OR mechanism. For now, any potential role for the non-computable collapse of the wavefunction in facilitating consciousness remains stranded on the beaches of Metaphysical Reality.

Even if evidence for a connection between quantum mechanics and consciousness can one day be found, Chalmers argues that this will still not solve the hard problem: ‘when it comes to the explanation of experience, quantum processes are in the same boat as any other. The question of why these processes should give rise to experience is entirely unanswered.’24

* He tended to avoid the word ‘wavefunction’, presumably as this is closely associated with Schrödinger’s wave mechanics. He preferred to think of the description of quantum systems in terms of rather more abstract ‘state functions’ in a mathematical ‘Hilbert space’. And, with some modifications, this is the description commonly taught to students today.
* They have also been found in some simple prokaryotic cells.