THE CONCEPT OF COMPLEMENTARITY: THE GIFT FROM PHYSICS
Those who are not shocked when they first come across quantum theory cannot possibly have understood it.
—Niels Bohr
IN 1868, THE physicist, mountaineer, educator, and professor at the Royal Institution John Tyndall gave a talk to the mathematical and physical section of the British Association for the Advancement of Science. In it, he laid out the following dilemma:
The passage from the physics of the brain to the corresponding facts of consciousness is unthinkable. Granted that a definite thought, and a definite molecular action in the brain occur simultaneously; we do not possess the intellectual organ, nor apparently any rudiment of the organ, which would enable us to pass, by a process of reasoning, from the one to the other.… “How are these physical processes connected with the facts of consciousness?” The chasm between the two classes of phenomena would still remain intellectually impassable.1
Here we are, 150 years later, and we are not quite as far as we would like to be. We understand, to some extent, the electric discharges, the groupings and flow of molecules, and sometimes even corresponding brain states, especially in the study of vision. Unlike Tyndall, however, I think we do have an organ that is up to the task. What is needed is to apply the right kinds of ideas to the problem of determining how mind comes from brain. How do we think about that pesky gap between our biology and our mind?
It is commonly recognized that the chasm or gap is a problem. It was only twenty-five years ago that the philosopher Joseph Levine officially dubbed it the explanatory gap, which he later described in his book Purple Haze:
We have no idea, I contend, how a physical object could constitute a subject of experience, enjoying, not merely instantiating, states with all sorts of qualitative character. As I now look at my red diskette case, I’m having a visual experience that is reddish in character. Light of a particular composition is bouncing off the diskette case and stimulating my retina in a particular way. That retinal stimulation now causes further impulses down the optic nerve, eventually causing various neural events in the visual cortex. Where in all of this can we see the events that explain my having a reddish experience? There seems to be no discernible connection between the physical description and the mental one, and thus no explanation of the latter in terms of the former.2
Levine leaves us with an unbridged chasm between the physical level of interacting neurons and the seemingly nebulous level of conscious experience. We may explain, for example, that pain is caused by the nervous system’s firing of C fibers, and why there is a delay between withdrawal and the feeling of pain, but explaining the causal relationship tells us nothing about the feeling of pain itself, that subjective experience.
The current state of the mind/body problem rests on two plausible yet seemingly incompatible propositions: (1) Some form of materialism or physicalism is true. (2) Physicalism cannot explain phenomenal consciousness, raw feel, or qualia. Pick (1) and you are a materialist; pick (2) and you are a dualist. Levine, however, throws caution to the wind and picks both. He is a materialist and believes that phenomenal facts can never be derived from physical facts. Can he have his cake and eat it, too? Most philosophers and neuroscientists would say he cannot, so how does he do it?
Levine junks the intuition that mental events (those much-heralded qualitative experiences) seem different from physical events. For example, the delay in feeling pain is a phenomenal fact explained by a physical fact. But that is not Levine’s issue. While he accepts that firings of neurons cause phenomenal experience, and that consciousness really must be a physical phenomenon, he states, “There are two, interrelated features of conscious experience that both resist explanatory reduction to the physical: subjectivity and qualitative character.” If we can’t bridge that gap by explaining how the firing of neurons = the experience of pain, then Levine suggests that “it must be that the terms flanking the identity sign themselves represent distinct things.”3 Surprisingly, that sounds as though Levine has resorted to a form of dualism.
Several years later, however, Levine made clear that he did not believe in an actual gap, a nothingness between neurons and subjective experience. He was simply pointing out that we have no knowledge of how that gap might be closed. Of course, when you think about it, gaps are all over the place in the history of science, but they are usually framed in terms of gaps in knowledge. In the end, Levine thought that this is also the case for the mind/brain gap. In fancy philosophical terms, he believed it is a question of epistemology versus metaphysics. He viewed it as a gap in our current understanding of how such things are to be explained. Of course, when put that way, he is completely correct.
An even stronger view comes from the Australian philosopher David Chalmers, who also agrees there is an “explanatory gap.” He is steadfastly committed to the perspective of proposition (2): Physicalism cannot explain phenomenal consciousness, raw feel, or qualia. This makes him a dualist, though Chalmers would specify that he is a naturalistic dualist. He agrees that mental states are caused by the physical systems of the brain (that’s the naturalist part), but he believes that mental states are fundamentally distinct from and not reducible to physical systems.4 This is an extraordinary position for a modern philosopher, but not for non-philosophers. Most people on earth today are dualists!
Yet Tyndall, in 1879, slightly rewording his inaugural address as new president of the British Association given in 1874, and foreshadowing our discussion later in this chapter concerning the origins of life, wrote: “Believing as I do in the continuity of nature, I cannot stop abruptly where our microscopes cease to be of use. Here the vision of the mind authoritatively supplements the vision of the eye. By an intellectual necessity I cross the boundary of the experimental evidence, and discern in that ‘matter’ … the promise and potency of all terrestrial life.”5 William James was of the same opinion; he stated:
The demand for continuity has, over large tracts of science, proved itself to possess true prophetic power. We ought therefore ourselves sincerely to try every possible mode of conceiving the dawn of consciousness so that it may not appear equivalent to the irruption into the universe of a new nature, non-existent until then.6
He also said:
The point which as evolutionists we are bound to hold fast to is that all the new forms of being that make their appearance are really nothing more than results of the redistribution of the original and unchanging materials. The self-same atoms which, chaotically dispersed, made the nebula, now, jammed and temporarily caught in peculiar positions, form our brains; and the “evolution” of the brains, if understood, would be simply the account of how the atoms came to be so caught and jammed. In this story no new natures, no factors not present at the beginning, are introduced at any later stage.7
It seems that over the past few decades most of us have forgotten that human consciousness has gradually evolved from precursors; it did not spring fully formed into the brain of the first Homo whateverensis. James goes on to comment, “If evolution is to work smoothly, consciousness in some shape must have been present at the very origin of things.”8 So, yes, that far down. If we want to get at an understanding of the chasm between mind and brain, we have to delve deeply into other big questions, such as how it is that life comes out of non-living matter.
This chapter’s journey will find us discovering that in order to understand what the difference is between living and non-living matter, it is necessary to grasp the inherent duality of all evolvable entities—the fact that, indeed, all living matter can be in two different states at the same time. As you will see, physics and biosemiotics can show us how to resolve the inherent gaps between living and non-living systems without resorting to spooks in the system. The insights of these disciplines suggest how to think about the problem of such gaps in general and how to think about closing this one in particular, and offer a road map for how neuroscientists might succeed with a mind/brain gap nested in a layered architecture, with protocols that describe the interfaces between those layers. But first, the physics.
The Beginnings of Physics and the Commitment to Determinism
The story starts with Isaac Newton and the spectacular beginnings of classical physics in the seventeenth century. This is the kind of physics most of us struggled to learn in school. It turns out that the apple story is the real deal. Newton himself related it to his biographer, William Stukeley, reminiscing about a day in 1666 when, sitting under an apple tree, he wondered,
Why should that apple always descend perpendicularly to the ground.… Why should it not go sideways, or upwards? but constantly to the earths center? assuredly, the reason is, that the earth draws it. there must be a drawing power in matter. & the sum of the drawing power in the matter of the earth must be in the earths center, not in any side of the earth. therefore dos this apple fall perpendicularly, or toward the center. if matter thus draws matter; it must be in proportion of its quantity. therefore the apple draws the earth, as well as the earth draws the apple.9
Newton’s niece’s husband, John Conduitt, related how Newton went on to wonder if this power might extend beyond the Earth: “Why not as high as the Moon said he to himself & if so, that must influence her motion & perhaps retain her in her orbit, whereupon he fell a calculating what would be the effect of that supposition.”10 Calculate he did. Newton converted the results of Galileo’s “terrestrial” motion experiments into algebraic equations, now known as the laws of motion. Galileo had shown that objects retain their velocity and trajectories unless a force, such as friction, acts upon them, and that objects have a natural resistance to changes in motion, known as inertia. To these Newton added a third law: to every action there is always an equal and opposite reaction. His apple musings and his various calculations led him to the universal law of gravitation, and to the realization that the “terrestrial” laws of motion he had put into algebra also described the observations Johannes Kepler had made about the motions of the planets. That is not a bad day’s work.
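In modern notation (Newton himself worked geometrically rather than with this algebra), the second law of motion and the universal law of gravitation can be written as

\[
F = ma, \qquad F = G\,\frac{m_1 m_2}{r^2}.
\]

The same attraction that pulls the apple straight down also holds the Moon in its orbit, and, applied to the planets, these relationships reproduce the elliptical orbits Kepler had described.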
Then came Newton’s big revelation. He had just come up with a set of fixed, knowable mathematical relationships that described, well, the workings of all the physical matter in the universe, from bocce balls to planets. These laws are universal, inexorable. They are separate from him, Newton the observer, Kepler, and everyone else. The universe and all the systems it contains just hum along following these laws concerned with space, time, matter, and energy, with or without observers. When the tree falls in the forest with no one to observe it, it still makes sound waves. Whether they are heard or not is another matter, and we shall soon see that the distinction illustrates the crux of our problem about the origins of life.
Newton stirred up more than scientific interest. It was thought that if his laws are universal, then, theoretically, if the initial conditions were known, every action in the physical universe is predictable. That means that all actions are determined, even your actions, for you are just another physical thing in the universe. Plug the right initial conditions into the equation, and out will come the answer as to what happens next—even what you are going to do after work next Thursday. But this line of thinking overlooks a crucial point. What we are going to find out in a bit is that plugging in a value for those initial conditions is a subjective choice made by the experimenter, and that subjective choice is a wolf of a problem dressed up in sheep’s clothing. It is not so simple.
Newton’s laws seem to undermine free will and, thus, responsibility for one’s actions. Determinism first captured the imagination of the physicists, and soon many others got caught in its sway. Still, even though Newton’s view of things took some getting used to, his laws seemed to describe most observations of the physical world well, and they became entrenched over the next two hundred years. But soon there was a new challenge to Newtonian physics that had to do with a new invention: the steam engine. The first commercial one was patented by Thomas Savery, a military engineer, in 1698 to pump water out of flooded coal mines. Even as the engines’ design improved, one problem continued to plague them: the amount of work they produced was minuscule compared to the amount of wood that had to be burned to produce it.
The early engines were all super inefficient because way too much energy was dissipated or lost. In the wholly determined world that Newton envisioned, this didn’t make much sense, so the theoretical physicists were forced to confront the puzzle of the seemingly lost energy. Soon a new field of study emerged, thermodynamics, and with it a change in theory about the nature of the world. It all has to do with the relations of heat and temperature to energy and work. When it was all thought through, the field of physics was changed forever, and Newton’s determined world looked a bit different.
The Rise of Quantum Mechanics and a Statistical View of Causation
It wasn’t long before the steam engine problem gave rise to the first two laws of thermodynamics. The first states: The internal energy of an isolated system is constant. In essence this is reiterating the law of conservation of energy, which states that while energy can be transformed from one form to another, it cannot be created or destroyed. This is entirely consistent with the deterministic world of Newton, but it was also a very limited claim, since it was true only for isolated and contained systems.
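Stated as an equation (a standard modern formulation rather than the nineteenth-century wording), the first law says that the change in a system’s internal energy equals the heat added to it minus the work it does on its surroundings:

\[
\Delta U = Q - W.
\]

For an isolated system nothing flows in or out, so \(Q = 0\), \(W = 0\), and \(\Delta U = 0\): the internal energy stays constant.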
The second law is where things get interesting and challenging, and it involves something called entropy. The second law reveals that such things as heat cannot spontaneously flow from a colder to a hotter location. I can remember the moment when I struggled to grasp this concept. It was a cold winter’s day at Dartmouth and I was welcoming a physicist to a meeting in my office. He had just walked across the campus Green, a wide-open and cold walk that had nearly frozen his parka itself. I happily remarked that every time someone walked into my office their clothes brought in the cold, and I always felt chilled. He looked at me and said, “Let’s get the physics of this straight. Cold is not transferring to you. Your body heat is transferring to me, and because that heat is leaving your body, you feel colder.” He reminded me that the second law of thermodynamics can come in very handy even in understanding everyday life, and then added that we needed to hire another theoretical physicist.
“Entropy” was a term originally coined by the nineteenth-century German physicist Rudolf Clausius to describe “waste heat.” It is a measure of the amount of thermal energy that cannot be used for work. The physicist’s cold parka had increased my state of entropy, and with that, there was less energy available to keep me warm.11 The second law is where things started getting fuzzy. In short, with parkas and steam engines the exchange of heat is not reversible. That came as startling news to those Newtonian-minded physicists who believed in a determined world. Suddenly, time was no longer reversible: the arrow of time flowed only one way. This put thermodynamics at odds with Newton’s universal laws, which claimed that everything was reversible in principle. It was this earth-shattering realization that slowly worked its way into other thinking—even, as we shall see, thinking about layered architecture and how to frame the mind/brain gap problem.
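In Clausius’s terms (again in modern notation), the entropy change of a system is the heat it absorbs reversibly divided by the temperature at which the exchange occurs, and for any real, irreversible process the total entropy of system plus surroundings can only grow:

\[
\Delta S = \int \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta S_{\mathrm{total}} \ge 0.
\]

That inequality is the one-way street: the heat that left my body for the cold parka was never coming back, and the arrow of time points in only one direction.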
Oddly, by the mid-nineteenth century, atomic theory, the theory that matter is made up of atoms, had been accepted by chemists and put to use, but still wasn’t the consensus among physicists. One physicist puzzling over it all was the Austrian Ludwig Boltzmann. He is best known for kinetic theory, in which he describes a gas as made up of a large number of atoms or molecules constantly moving, hitting and bouncing off both one another and the walls of their container, producing random chaotic motion. He turned Gassendi’s seventeenth-century ideas into the hard science of what is now called statistical mechanics. If the types of molecules and their positions are taken into account, kinetic theory explained the observable macroscopic properties of gases: pressure, temperature, volume, viscosity, and thermal conductivity.
Overall, Boltzmann’s huge insight was to further define the disorder of a system (entropy) as the collective result of all the molecular motion. He maintained that with all those atoms bouncing around willy-nilly, the second law was valid only in a statistical sense, not in a flat-footed, deterministic sense. That is, whether any particular molecule would transfer its energy was unknown. With that parka next to me, my overall system was becoming more disordered. As Michael Corleone said, “It’s not personal, it’s strictly business.”
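Boltzmann’s statistical reading of entropy is captured in the formula engraved on his tombstone (written here in modern notation, with \(k_B\) for Boltzmann’s constant):

\[
S = k_B \ln W,
\]

where \(W\) is the number of microscopic arrangements of the molecules that are consistent with the same macroscopic state. Entropy increases not because any law forces each molecule’s hand, but because there are overwhelmingly more disordered arrangements than ordered ones.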
Boltzmann caused a major ruckus among physicists who still viewed the universe as completely deterministic and ruled by Newton’s laws. They firmly believed it was not a statistical universe where mere predictions were as good as it gets. As a consequence, Boltzmann’s theory was repeatedly attacked. Sadly, he became so frustrated and depressed by this that he committed suicide in 1906 while on vacation with his family near Trieste, just before his theory was proved unequivocally true.
Physicists to this day are flummoxed by statistical laws. For one thing, Newton’s laws are symmetrical with respect to time and are therefore reversible. Clearly, in the determined world defined by Newton, what goes forward can also go backward. Obviously not so with statistical laws. How can something that happens only with a probability, not a certainty, be reversible? It can’t, and, on the face of it, these two ways of describing reality are at odds. New thinking was needed to handle this duality. Physicists had been slow on the uptake with atomic physics, but once they accepted it, they ran with it and stretched their minds around what this new world showed them. Right off the bat, in 1897, the English physicist Joseph John Thomson discovered and identified the first subatomic particle: the electron. Thomson was both a great physicist and a great teacher. Not only was he knighted and Nobeled for his work, but eight of his research assistants won Nobel Prizes of their own, as did his son. Included in the group was Niels Bohr, who ultimately presented the idea of complementarity. But I get ahead of myself. Accepting this new world took a bit of convincing.
Max Planck, a German theoretical physicist, was obsessed with the idea of entropy and the second law of thermodynamics. He first believed in its absolute validity, not the wishy-washy statistical version that Boltzmann was championing. As a champion of Newtonian mechanics, Planck nonetheless realized that entropy presented a problem, since lurking within the concept of increasing entropy was that thorny reality of irreversibility. Planck actually accepted the idea of irreversibility, but he longed to present a rigorous derivation of the entropy law that could justify its irreversibility using classical laws. Like most physicists, he badly wanted a single physical description that explained everything. And old ideas fall hard.
An opportunity presented itself in 1894, when he was commissioned for a special task—to optimize lightbulbs, maximizing the light produced while minimizing the energy used. In order to do this, he had to tackle the problem of what is called black-body radiation. We can grasp what this is by going out to the campfire. If you stick a metal shish kebab skewer into the fire, its tip will eventually become red-hot. If it gets even hotter, the color will go from red to yellow to white, then blue. As the interior of the skewer heats up, the surface starts emitting electromagnetic radiation in the form of light, called thermal radiation. The hotter the interior (the higher the energy), the shorter the wavelength (and the higher the frequency) of the light that is emitted—thus the color change. Physicists soon posited an idealized object, a “perfect” emitter and absorber that would look black when it is cold, because all light that falls on it would be completely absorbed.
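The color change has a compact quantitative expression in Wien’s displacement law (stated here in its modern form): the wavelength at which the thermal radiation peaks is inversely proportional to the temperature,

\[
\lambda_{\max}\, T = b \approx 2.9 \times 10^{-3}\ \mathrm{m \cdot K},
\]

so doubling the absolute temperature halves the peak wavelength, pushing the glow from red toward white and blue.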
This perfect object is known as a black body, and the electromagnetic radiation it emits is black-body radiation. No one had been able to predict accurately the amount of radiation and at what frequencies such a black body would emit using the classical laws of physics. Newtonian laws worked well when lower frequencies of light (red) were emitted, but as the frequencies increased, the predictions were about as wrong as they could be. After several attempts and failures using purely classical physics, Planck reluctantly turned to the statistical notion of entropy. Once he introduced the idea of “energy elements” and considered that the energy was “a discrete quantity composed of an integral number of finite equal parts,”12 he was able to come up with an equation that predicted black-body radiation well.
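Planck’s “energy elements” amount to the assumption that an oscillator of frequency \(\nu\) can hold energy only in whole multiples of a tiny unit, \(E = nh\nu\), where \(h\) is the constant that now bears his name. Fed through Boltzmann’s statistical entropy, that assumption yields the radiation law, given here in its modern spectral form:

\[
B_\nu(T) = \frac{2h\nu^{3}}{c^{2}}\,\frac{1}{e^{h\nu/k_B T} - 1},
\]

which matches the measured black-body spectrum at low and high frequencies alike, exactly where the classical predictions failed.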
Neither he nor anyone else at the time realized that his radiation law was based on a conceptual novelty, a fundamental change in how to view the world. It was the first foray into the quantum world. Planck’s discovery would also suggest that there was neither one fundamental set of laws nor one model of the universe. Some would say it was a nail in the coffin of the Newtonian belief that the world was a determined place.
Interestingly, Planck himself was just tickled at the accuracy of his radiation law and considered that specifying energy quanta was, he said, “a purely formal assumption and I really did not give it much thought.”13 What Planck had stumbled upon, the conceptual novelty that he didn’t fully grasp but used as a mathematical trick, is that microscopic objects behave differently than macroscopic objects. Whoa! Planck had unwittingly pulled out the brick that held together the foundations of his pet dream: that a single explanation could describe everything. Not only did finding the quantum world change human understanding of the universe, but it was going to become apparent that there are two different layers of reality, and each possesses a different vocabulary and way of doing things. Just as in any complex system, each layer has its own protocol: at the atomic level things worked statistically, but large objects worked just the way Newton said. Lacking the concept of a layered architecture, the epistemologists went nuts. The “How is it we know stuff?” crowd was in disarray, big-time. A single explanation for everything wasn’t working. There seemed to be two kinds of explanations for the behavior of matter—a complementarity.
The physicists stumbled on the new idea as they came to understand that light could behave as particles or as waves. There is a complementarity to stuff, a duality. They fought the idea off for decades, but finally they accepted it as truth. Recently, researchers captured an unbelievable picture of a small group of photons as waves and another group behaving as particles at the same time.14 Although the idea of complementarity is now established in physics, it is not widely seen as a possible foundational idea for thinking about the mind/brain explanatory gap. I think it should be, and first want to look at how physics came to accept its seemingly puzzling reality. Following its acceptance in physics, the idea of complementarity may prove itself to be key to thinking about biology, and about the mind/brain gap in particular.
The Idea of Complementarity
After earning his diploma as a teacher of physics and mathematics, in 1901 the twenty-two-year-old Albert Einstein became a Swiss citizen and struggled to find a job. No educational institution would hire him. He finally snagged a position working in the Bern patent office as a “technical expert, third class,” and tutored on the side. In his downtime, he bounced ideas off a couple of pals in a discussion club that they had formed and called the Olympia Academy.
Over the course of 1905, which came to be known as his annus mirabilis, Einstein, now twenty-six, brought physics into a different universe by proposing four huge ideas. He created the quantum theory of light, which stipulates that the energy in a beam of light is, really and truly, in little packets (later called photons), and that energy could be exchanged in only tiny, discrete amounts. The “energy packet” wasn’t, after all, some mathematical trick that Planck had ginned up that just produced a good equation. Up until that time it had been debated whether light is a wave or a set of tiny particles. Considering light as a wave explained all sorts of observations, such as light refraction and diffraction, interference, and polarization. It did not, however, explain the photoelectric effect: when light hits a metallic surface, electrons (called photoelectrons in this case) may be jettisoned from the metal’s surface.
Initially, physicists didn’t see this as any big deal. Assuming the wave theory of light, they figured that the more intense the light (i.e., the higher the amplitude of the wave), the greater would be the energy with which the electrons would be chucked from the metal. But it turns out that this is not what actually happens. The energies of the emitted electrons are independent of the intensity of the light: bright and dim lights eject electrons from the metal surface with the same energy when the wave frequency is held constant. The unexpected finding was that increasing the wave frequency was what increased the energy with which the electrons were chucked from the surface. This doesn’t make sense if light is a wave. It would be like saying that if a huge ocean wave and a little ripple hit a beach ball, it would fly off with the same energy in both cases. Einstein realized that the observed effects could be explained only if light were made up of particles that interacted with the electrons in the metal. In his model, light consisted of individual quanta, each carrying its own energy. Increasing the intensity of the light increased the number of photons arriving per unit of time, but the amount of energy per photon was the same. Then, a few months later, Einstein added to his bonanza year when he figured out that light could also be viewed as a wave. Light indeed existed in two realities.
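Einstein’s account of the photoelectric effect can be summarized in one relation (modern notation): each photon delivers energy \(h\nu\), part of which pays the “entrance fee” \(\phi\) needed to free an electron from the metal (the work function), and the rest shows up as the electron’s maximum kinetic energy:

\[
E_{\text{photon}} = h\nu, \qquad K_{\max} = h\nu - \phi.
\]

Raising the intensity sends more photons, and so ejects more electrons, but only raising the frequency \(\nu\) raises the energy of each ejected electron.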
Einstein was unstoppable. He also presented empirical evidence validating the reality of the atom, settling the debate over its existence, and gave the thumbs-up to the use of statistical physics. Putting frosting on the cake, he added the special theory of relativity and came up with the famous E = mc² equation. It took a while for the physics world to cotton to all these ideas, and Einstein didn’t immediately gain much recognition. The immediate result of his efforts was solely a promotion in the patent office to “technical expert, class II.”
Once physicists got their heads around atomic theory and caught up with the chemists, however, they soon realized that subatomic particles, atoms, and molecules, those submicroscopic universal building blocks of everything, do not follow Newton’s laws—they flout them. The smoking gun was that when orbiting electrons lose energy, they don’t crash into the nucleus as Newton’s laws would predict; they remain in orbit. How could that be?
From 1925 to 1926, quantum theory was further developed by a group of physicists that included Werner Heisenberg at the University of Göttingen, with frequent trips back and forth to Niels Bohr’s institute in Copenhagen, to explain the three big puzzles: the phenomena of black-body radiation, the photoelectric effect, and the stability of orbiting electrons. Physicists, whether they liked it or not (and many, including Planck and Einstein, did not), were jolted out of Newton’s deterministic world, the physical “layer” that we inhabit and can see and touch, the one-explanation-fits-all world, to a lower layer, the hidden, nonintuitive, statistical, indeterminate world of quantum mechanics. They were jolted from the world of black-and-white answers to the world of gray answers: a layer with a different protocol that exists simultaneously.
Consider, for example, light reflection. When light photons hit glass, 4 percent are reflected, while the rest pass through. What determines which are reflected? After years of research, using multiple techniques, the answer appears to be: chance. It is chance whether a particular photon will be reflected or transmitted. Richard Feynman asked, “Are we therefore reduced to this horror that physics is not reduced to wonderful predictions but to probabilities? Yes we have, that’s the situation today.… In spite of the fact that philosophers have said, ‘It is a necessary requirement for science that setting up an experiment that is exactly similar will produce results exactly the same the second time.’ Not at all. One out of 25 it goes up and sometimes it goes down … unpredictable, completely by chance … that is the way it is.”15 The world of uncertainty. Physicists at the time despised it. Even Einstein, who had opened the door to this uncertain world, wanted to slam it shut. He was having grave doubts about what it implied about a supposedly determined universe and causality, prompting his famous quote “God does not play dice with the universe.” Yet, if they were to be good scientists, physicists had to discard their preconceived notions and follow wherever their findings took them.
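To get a feel for what “completely by chance” means in practice, here is a toy Monte Carlo sketch in Python. It is purely illustrative (the function name and numbers are mine, and it says nothing about how quantum electrodynamics actually computes the 4 percent): each photon is treated as an independent coin flip that comes up “reflected” with probability 0.04.

import random

def fire_photons_at_glass(n_photons=100_000, p_reflect=0.04, seed=1):
    """Toy model: each photon is independently 'reflected' with probability
    p_reflect and otherwise passes through. Nothing about an individual
    photon tells us in advance which outcome will occur."""
    rng = random.Random(seed)
    reflected = sum(rng.random() < p_reflect for _ in range(n_photons))
    return reflected, n_photons - reflected

reflected, transmitted = fire_photons_at_glass()
total = reflected + transmitted
print(f"reflected:   {reflected} ({reflected / total:.1%})")
print(f"transmitted: {transmitted} ({transmitted / total:.1%})")

The aggregate fraction settles reliably near 4 percent, but the fate of any single photon remains, as Feynman says, unpredictable.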
When considering the wacky quantum world, remember that we inhabit the macro world of Newtonian physics. Common sense, that is, our folk physics based in the macro world, is not going to help us in the quantum world. It is like nothing we have ever experienced. Leave your intuitions at home. They will not be needed and will only be a burden. Feynman entertainingly prepared a physics class for a lecture on quantum behavior with the following disclaimer:
Your experience with things you have seen before is inadequate. Is incomplete. The behavior of things on a very tiny scale is simply different. They do not behave just like particles. They do not behave just like waves.… [Electrons] behave as nothing you have seen before. There is one simplification at least, electrons behave exactly the same in this respect as photons. That is, they are both screwy but in exactly the same way. How they behave, therefore, takes a great deal of imagination to appreciate because we are going to describe something which is different from anything that you know about.… It is abstract in the sense that it is not close to experience.16
He goes on to say that if you want to learn about the character of physical law, it is essential to talk about this particular aspect “because this thing is completely characteristic of all the particles of nature.”
The submicroscopic quantum world is hidden from our view. That means in order to learn anything about it, we have to have some type of measurement interaction. This involves some instrument from our macro world, which in turn is made up of atoms, which themselves can react with and disrupt the particles we set out to measure that had innocently been doing their own thing. That disruption is going to set the dynamics of the system off in a direction other than what had been going on before the measurement was made. In short, it was also beginning to look like there was an unavoidable measurement problem. Snooping in on the quantum world was going to be tough and would require some new thinking.
So here we go: it turns out that, just as Einstein discovered, light behaves as both a wave and a particle. A few years later it was also found that the same thing is true about matter: electrons have particle- and wavelike properties, too. Physicists soon accepted the idea that what we perceive in the macro world as continuous (rather than billions of individual atoms), say a dining room table, is merely a simulated averaging process in what the applied mathematician, physicist, and all-around polymath John von Neumann later called “a world which in truth is discontinuous by its very nature.” He went on to say, “This simulation is such that man generally perceives the sum of many billions of elementary processes simultaneously, so that the leveling law of large numbers completely obscures the real nature of the individual processes.”17 What “the leveling law of large numbers” means is that the motions of all those particles together cancel one another out, and thus the table stays in one place and isn’t shimmying across the floor. When we see a solid table, however, it is an illusion, a symbolic representation, created by our brain, to denote what is really there. It is a very good illusion that delivers good information, which allows us to function effectively in the world.
The Austrian physicist of “cat in a box” fame, Erwin Schrödinger, was also eager to shore up the deterministic world of causality. He developed what came to be known as the Schrödinger equation, a “law” that describes the behavior of a quantum mechanical wave and how it changes dynamically over time. While the “law” is reversible and deterministic, it doesn’t take into account the particle nature of an electron, which Schrödinger tried to avoid, and it can’t determine where exactly an electron will actually be in its orbit at any given time. Given the electron’s so-called quantum state, only a prediction, based on probability, can be made about its exact position at any given moment.
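In its general modern form, the time-dependent Schrödinger equation reads

\[
i\hbar\,\frac{\partial}{\partial t}\,\Psi(x,t) = \hat{H}\,\Psi(x,t),
\]

where \(\Psi\) is the wave function and \(\hat{H}\) the Hamiltonian (energy) operator. The equation evolves \(\Psi\) smoothly and deterministically, but what \(\Psi\) delivers about position is only a probability: by the Born rule, \(|\Psi(x,t)|^2\) gives the probability density of finding the electron near \(x\) if a measurement is made.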
In order to know the actual location of the electron, a measurement must be made, and here is where the troubles begin for the die-hard determinists. Once a measurement is made, the quantum state is said to collapse, meaning that all the other possible states the electron could have been in (known as superpositions) have collapsed into one. All the other possibilities have been eliminated. The measurement, of course, was irreversible and had constrained the system by causing the collapse. Over the next couple of years physicists realized that neither the classical concept of “particle” nor that of “wave” could fully describe the behavior of quantum-scale objects at any one point in time. As Feynman quipped, “They don’t behave like a wave or like a particle, they behave quantum mechanically.”18
This is where Niels Bohr, the Danish electron expert and Nobel laureate, came in to help out. After spending a couple of weeks skiing alone in Norway, pondering the dual nature of electrons and photons, he returned having framed the principle of complementarity, which this wave-particle duality exemplifies. The principle maintains that quantum objects have complementary properties that cannot both be measured, and thus known, at the same point in time. As Jim Baggott describes in his book The Quantum Story, Bohr had
realized that the position-momentum and energy-time uncertainty relations actually manifest the complementarity between the classical wave and particle concepts. Wave behavior and particle behavior are inherent in all quantum systems exposed to experiment and by choosing an experiment—choosing the wave mirror or the particle mirror—we introduce an inevitable uncertainty in the properties to be measured. This is not an uncertainty introduced through the “clumsiness” of our measurements, as Heisenberg had argued, but arises because our choice of apparatus forces the quantum system to reveal one kind of behavior over another.19
Again, at any one point in time, either the position or the momentum of an electron can be measured and known, but not both—just like its wave-like properties or its particle-like properties. When you make an instantaneous measurement of its position at a single point in time, it actually is in a single location and not moving; thus, its dual nature of also having momentum is compromised. At that point in time, measuring the momentum is not possible. One can only assign the unmeasured quantity a probability, not a certainty. Complementarity emerges in a system when an attempt to measure one of the paired properties is made. The single system has two simultaneous modes of description, one not reducible to the other.
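The position-momentum and energy-time uncertainty relations that Baggott mentions are usually written (in modern notation; the energy-time version is heuristic, since time is not an operator in the usual sense) as

\[
\Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad \Delta E\,\Delta t \gtrsim \frac{\hbar}{2}.
\]

Sharpening one member of each pair necessarily blurs the other; the product of the two uncertainties can never be squeezed below a fixed quantum of action.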
Bohr worked on this theory for six months and first outlined it in his 1927 Como Lecture, presented to an illustrious group of physicists at a congress commemorating the one-hundredth anniversary of the death of Alessandro Volta. Einstein wasn’t there and didn’t hear it until the next month, when Bohr presented it again in Brussels. Einstein was unhappy with the idea of a dual description and uncertainty. He and Bohr began a lengthy exchange that lasted for years. Einstein would come up with a scenario in an attempt to defeat quantum theory, only to have Bohr present an argument consistent with quantum theory that overcame it. Since then, many proposals have been made and experiments done attempting to bolster Einstein’s side of the debate;20 all have been unsuccessful. While unpopular among those with a determinist bias, a version of Bohr’s complementarity remains undefeated.
At the root of their argument is what objectivity means and what physics is about. Robert Rosen has explained just what is at stake:
Physics strives, at least, to restrict itself to “objectivities.” It thus presumes a rigid separation between what is objective, and thus falls directly within its precincts, and what is not. Its opinion about whatever is outside these precincts is divided. Some believe that whatever is outside is so because of removable and impermanent technical issues of formulation; i.e., whatever is outside can be “reduced” to what is already inside. Others believe the separation is absolute and irrevocable.21
Bohr belonged to the latter group and argued that whether we see light as a particle or a wave is not inherent in light but depends on how we measure and observe it. Both the light and the measurement apparatus are part of the system. For Bohr, the classical world is just too small to describe all material reality. For him, the universe and all it contains is way more complex and requires more than a single layer with a protocol made up of the laws of classical physics. Rosen notes that Bohr changed the concept of “objectivity” itself, from pertaining only to what is inherent entirely in a material system to what is inherent in a system-observer pair. Einstein couldn’t swallow this and threw his lot in with classical physics, which ignores the measurement procedure and looks at its outcome as inherent in light. For Einstein, something is objective only if it is independent of how it is measured or observed. Rosen concludes, “Einstein believed that there was such knowledge, immanent alone in a thing, and independent of how that knowledge was elicited. Bohr regarded that view as ‘classical,’ incompatible with quantum views of reality, which always required specification of a context, and always containing unfractionable information about that context.”22
Bohr’s principle of complementarity is not just some interesting bit of scientific bickering with Einstein. We are going to see that it is fundamental to understanding the mind/brain gap.