7

A New Era of Hyperthought — From Precognitive Bacteria to Our Tesseract Brain

Time is the substance I am made of. Time is a river which sweeps me along, but I am the river; it is a tiger which destroys me, but I am the tiger; it is a fire which consumes me, but I am the fire.

— Jorge Luis Borges, “A New Refutation of Time” (1946)

In Madeleine L’Engle’s 1962 novel A Wrinkle in Time, 13-year-old Meg Murry, along with her telepathic little brother and a classmate, travels to a series of distant planets via a fold in the fabric of the universe called a tesseract. They are trying to find and rescue their missing scientist father, who happened to be researching precisely such higher-dimensional possibilities when he vanished months earlier. In a memorable scene, “Mrs. Who,” one of three librarian-like spinsters who understand higher-dimensional cosmology and use tesseracts to get around, demonstrates the principle to the schoolkids by holding her robe out flat and drawing it together, showing how such a “wrinkle” can bring distant points together and make a hypothetical insect’s traverse of her robe much shorter. 1 The concept paved the way for newer ideas like wormholes as shortcuts through space and time. But such higher-dimensional doorways (or at least the idea of them) have a surprisingly long history.

The fourth dimension, as we saw earlier, was catnip to thinking Victorians. 2 Tesseracts were originally the brainchild of a British mathematician named Charles Howard Hinton. In his 1888 book A New Era of Thought, Hinton coined that term to refer to a four-dimensional version of a cube (also called a “hypercube”). Hinton’s fourth dimension (like that of Edwin A. Abbott in Flatland) remained an added spatial dimension, not the dimension of time—again, it was H. G. Wells who made that further leap—but Hinton saw that our human ability to conceive of higher dimensions and manipulate them in our imagination reflected the likelihood that our brains somehow partook of this higher dimensionality. “We must be really four-dimensional creatures,” he wrote, “or we could not think about four dimensions.” 3

As is irresistible and probably unavoidable when you’re trying to blow your readers’ minds with a speculative new idea and also justify it scientifically, Hinton engaged in a little bit of hand-waving to show how this might be possible. He proposed that it was the brain’s computational units, what he called “brain molecules,” that behaved in this four-D way and enabled our contemplation of higher dimensions:

It may be that these brain molecules have the power of four-dimensional movement, and that they can go through four-dimensional movements and form four-dimensional structures. …

And these movements and structures would be apprehended by the consciousness along with the other movements and structures, and would seem as real as the others—but would have no correspondence in the external world.

They would be thoughts and imaginations, not observations of external facts. 4

Sometimes, hand-waving that looks like drowning is really saying hello to a future the rest of us cannot yet see. Hinton’s “brain molecules” able to perform four-dimensional gymnastics sound whimsical, but a growing number of researchers think the brain could have real quantum computational properties. And if the retrocausal hypothesis gaining ground in physics is right, those properties could even be time-defying. Although it remains speculative at this point—and thus, yes, hand-wavy—it is not unthinkable that the brain may turn out to be something like a squishy, pinkish-gray tesseract—a roughly six-inch-in-diameter information tunnel through time, corresponding more or less to what J. W. Dunne called a person’s “brain line.” 5 Moreover, its super-dimensional abilities, if real, will likely turn out to be based, as Hinton presciently intuited back in 1888, on molecular structures that the brain’s cells share with distant bacterial ancestors of all complex organisms on our planet.

Scientists and philosophers have long sought to understand how order, and specifically life, could ever have emerged within a universe governed by the physical laws formulated during the Enlightenment. Classical physics, with its totally determinative, forward-in-time, billiard-ball causation, not only required sweeping anomalies like prophecy under the rug, it also replaced the order and beauty of God’s creation with a bleak mechanistic universe forever slouching toward cold chaos. The second law of thermodynamics insists that everything is, on the whole, cooling off and descending into disorder. This produced a seeming paradox: How could a natural world governed by entropy produce systems that bind energy, replicate themselves, and create ever more complex forms? What principle in the ever-more-disordered universe allows things like seashells, eyes, brains, or Beethoven’s Fifth Symphony?

Beginning in the middle of the last century, scientists like Austrian biologist Ludwig von Bertalanffy, Russian-born Belgian chemist Ilya Prigogine, and American mathematician-meteorologist Edward Lorenz applied new scientific and mathematical tools to model the emergence of complex systems within the traditional regime of thermodynamics. According to Prigogine’s idea of “dissipative structures” (for which he won the Nobel Prize in 1977), systems become orderly by exporting (dissipating) entropy itself. In this way, they generate complex emergent forms, including the complex forms of animal and plant life. 6 In the 1970s, Austrian astrophysicist Erich Jantsch argued that these same basic principles underlie the regularities of social existence too, up to and including the cultural symbol systems used by humans to encode meaningful information and guide our behavior. 7 Today, quantum information theory, discussed in the last chapter, is also being applied to study the emergence of complex systems. 8

Yet, voices on the margins of mainstream science have again and again felt that, to really explain the “balance within imbalance” of the cosmos, the miraculous rise of life and mind, there must be some as-yet-undiscovered anti-entropic force or principle to supplement the mechanistic laws formulated in the Enlightenment. Vitalism was a popular idea in the Victorian era, for instance. Around the turn of the century, Henri Bergson proposed the existence of an élan vital, a life force, as the missing X-factor. 9 A couple of decades later, the Lamarckian biologist Paul Kammerer argued for “seriality” as a kind of convergence on meaningful order. 10 In the 1980s, the maverick biologist Rupert Sheldrake proposed that complexity and extraordinary convergences in nature (including psi phenomena) could be explained by a “resonance” among forms. Morphic resonance, he argued, is a kind of active non-material memory and causative principle all of its own. 11 Today, some researchers are proposing that consciousness is a fundamental organizing principle in nature, perhaps even driving life and complexity. 12

The problem with these anti-materialist alternatives, of course, is that (almost by definition) they cannot supply any underlying physical mechanism, and thus most mainstream scientists will call them hand-wavy (if not worse epithets). Sheldrake’s formative causation, for example, cannot explain how or why forms impose themselves on matter at a distance. While he has accused neuroscience of “promissory materialism” in its assurances that consciousness will eventually have a materialist, brain-based explanation, his theory of morphic resonance is also promissory in that it lacks any existing basis in physics as we know it—it too rests on future discoveries. 13 A more basic problem may be the question of how forms could be defined objectively, independent of some “comparing” God-like observer that decides what counts as a form in the first place. In other words, what is it that causes two spatial (or temporal) arrangements of things, be it seashells, strands of DNA, or patterns of brain activity dictating the behavior of a mouse, to count as formally similar?

Rather than imagine hard-to-define consciousness or invisible morphic fields driving the emergence of life, a simpler answer is liable to come from retrocausation, the ability of future states of systems to influence prior states. This possibility was already floated in the middle of the 20th century, in fact, by a mathematician named Luigi Fantappiè. Fantappiè proposed a retrocausal principle drawing systems toward complexity, coherence, and order, which he named syntropy. Two Italian psychologists, Ulisse Di Corpo and Antonella Vannini, have lately resurrected this idea, drawing on research in physics and parapsychology to support Fantappiè’s theory. 14 They propose that future nodes of convergence and harmony, or “attractors”—a concept borrowed from Edward Lorenz’s work in chaos theory—exert a pull on the past, and they describe some physical mechanisms that may facilitate this. Water itself, they suggest, may provide a physical basis for the pull toward order. On the molecular level, the unique properties of hydrogen bonds (the “hydrogen bridge” described by chemist Linus Pauling) make water especially suitable to serve as the basis for the emergence of complex, self-organizing biological systems out of the entropic, prebiological matrix. In animals and humans, Di Corpo and Vannini argue, syntropy expresses itself as precognition and presentiment; the emotion of love, they argue, is basically a syntropic signal drawing individuals toward meaningful convergences in their future.

The idea that some kind of retrocausation may explain life, order, and complexity is no longer only being discussed on the scientific fringes. Huw Price has tentatively speculated that some form of primitive precognition may have been a force in evolution. 15 And Arizona State University physicist Paul Davies has suggested that it might turn out to be post-selection, applied to the vast quantum computer that is the universe, that will explain the rise of life from lifeless matter:

Perhaps living systems have the property of being post-selective, and thus greatly enhance the probability of the system ‘discovering’ the living state? Indeed, this might even form the basis of a definition of the living state, and would inject an element of ‘teleology without teleology’ into the description of living systems. 16

A Time Eye

Erwin Schrödinger speculated about quantum mechanics’ possible role in biology in a 1944 book, What Is Life?, which inspired James D. Watson and Francis Crick in their hunt for a molecular basis of the genetic code. But while Schrödinger’s book gave a foretaste, the quantum biology revolution really didn’t begin until the first decade of this century, when a process called quantum tunneling was found to be essential to photosynthesis. 17 When photons strike magnesium atoms in chlorophyll molecules, they release energy in the form of free electrons. These liberated electrons find the shortest route to the reaction center of a plant cell by tunneling—also known as taking a “quantum walk”—through the cytoplasm. A quantum walk is usually explained using the language of superposition: By remaining in a wavelike unmeasured state, particles can pass through solid physical barriers or expeditiously find their way to distant points in space by taking multiple paths simultaneously. An alternative, retrocausal way of looking at it is that the particle’s path may be partly determined by the interaction at its destination—that same business of a particle “knowing” where it is going in advance. Chemist Graham Fleming, in announcing the dependence of photosynthesis on quantum mechanics in a 2007 Nature article, suggested that plants are quantum computers because of this phenomenon. (Remember that, at the quantum level, any manipulation to produce an effect—that is, cause something—can also be described as performing a computation.)
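What makes a quantum walk so much faster than ordinary diffusion can be seen numerically. The sketch below is a minimal discrete-time quantum walk on a line with a Hadamard “coin”—a textbook toy model, not a simulation of any real photosynthetic complex—and it exhibits the signature property: the walker’s position spreads ballistically, in proportion to the number of steps, while a classical random walker spreads only as the square root of the number of steps.

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line with a Hadamard coin."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)           # amplitudes [position, coin]
    psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric starting coin state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                             # coin toss at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]                # coin-0 amplitude moves left
        shifted[1:, 1] = psi[:-1, 1]                # coin-1 amplitude moves right
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)           # position probability distribution

steps = 100
p = hadamard_walk(steps)
x = np.arange(-steps, steps + 1)
quantum_spread = np.sqrt((p * x**2).sum() - (p * x).sum() ** 2)
classical_spread = np.sqrt(steps)                   # unbiased classical random walk
print(quantum_spread, classical_spread)             # ballistic vs. diffusive spread
```

After 100 steps the quantum walker’s standard deviation is several times the classical one, which is the sense in which superposed paths “expeditiously” explore the line.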

Since then, quantum processes have been confirmed in other biological systems. Tunneling has been discovered to be essential to the catalytic action of enzymes, for example; and entanglement appears to be the answer to the longstanding mystery of bird navigation. A pair of entangled electrons in the retinas of migratory birds enables them to “see” the angle of a magnetic field in relation to the Earth’s surface, making them sensitive to latitude. 18 Some researchers have proposed that quantum processes may be involved in the sense of smell, which depends on an acute ability to detect the difference between structurally identical molecules whose only difference is their quantum vibration. 19 These functions undoubtedly represent just the tip of the iceberg of quantum behavior in living systems, and as we will see, there is currently a kind of gold rush to discover quantum processes in the brain that may help explain consciousness.

According to some retrocausal interpretations of quantum mechanics, we are awash in information from the future—every physical interaction, including in our bodies, is conditioned or inflected by what will happen to every interacting particle next. The reason we are mostly unaware of this fact is that we lack the context for interpreting that inflection—a cipher turning that information into something meaningful. Information at the quantum level remains noise—seemingly probabilistic or random—unless there is a suitable apparatus for measuring and comparing different groups of particles whose next interactions can be predicted with some degree of reliability. Since it is now possible to design an experimental apparatus to measure retrocausal effects via post-selection, there seemingly are exceptions to the rule that information from the future is always or only noise. In the previous chapter, I described some possible avenues for building a “future detector”—one that uses a technique called weak measurement, others that use serial entanglement of particles or perhaps a matrix of entangled qubits (as would be found in an artificial quantum computer) to decipher information in the past correlated with a future state of the system. These methods would use post-selection to impose a constraint on the outcome, such that a prior “readout” of information correlated to a future “input” can give meaningful insight about that future state (and at the same time, prevent paradox).
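A toy statistical analogue can make the “noise unless post-selected” point concrete. The sketch below is an ordinary classical simulation, with the “late” outcome standing in for a post-selected measurement and all numbers chosen purely for illustration: across all trials the early weak readouts average to nothing, but sorting them by the later outcome reveals the buried correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
# Hidden per-trial value (think: a property fixed by the particle's whole
# history, including its eventual final measurement).
hidden = rng.choice([-1.0, 1.0], size=N)
# Earlier "weak" readout: a tiny signal buried in large noise.
early = 0.05 * hidden + rng.normal(0.0, 1.0, size=N)
# Later "strong" measurement agrees with the hidden value.
late = hidden

print(abs(early.mean()))           # near zero: unsorted, it looks like pure noise
print(early[late == 1.0].mean())   # near 0.05: post-selection reveals the signal
```

Without the later sorting criterion, the early data are statistically indistinguishable from randomness—which is the cipher-less condition described above.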

As the “chaotician” Ian Malcolm (Jeff Goldblum) famously puts it in Jurassic Park, “life finds a way.” It stands to reason that if it is possible to detect the future in laboratories, then life too would have found a way to use post-selection to distinguish particles that will receive a predicted later measurement from those that won’t—in other words, to evolve a quantum-biological future detector. 20

To see how such a thing might work, we can use the analogy of a simple eye. Some single-celled organisms like the euglena possess photoreceptors that can distinguish light from dark, but the simplest eyes in more complex animals like planarian worms consist of a patch of photoreceptive cells at the base of a shallow pit. For reasons that will be clear later, let’s picture a simple eye as a pit on the upper surface of the animal, with light flowing down from above. There is very little that a simple photoreceptor array can determine about the environment overhead. It can tell the organism about the presence of light and its intensity or frequency and perhaps roughly its direction, and it can tell when it is in shadow, but it cannot image the environment or tell the animal exactly what is casting that shadow or how far away it is. (This is analogous in spatial terms to how the back-flowing influence of future interactions is interpreted by us in the present as randomness or chance—it appears as a kind of noise that, at most, can be quantified in the form of probability.)

But what happens when you set those photoreceptors inside a deeper recess that is mostly enclosed except for a small pupil-like opening? Evolution did this multiple times—it’s the intermediate stage on the way to a proper eye. Even in the absence of a lens to enhance the photon-gathering capacity, a narrow aperture acts as a pinhole camera to project an image onto the photoreceptive cells. All of a sudden, you have the ability to capture a picture, a re-presentation, of what is outside in the environment, such as a predator circling a few inches above. In other words, when you constrain the in-falling light, you actually gain much more usable or meaningful information about the outside world even though you have eliminated most of that light in the process. Photographers understand this as the inverse relationship between aperture (f-stop) and depth of field: The sharpness of the image, and the amount of the scene that can be in focus, increases as the aperture narrows. This is exactly a spatial analogue of post-selection. The pupil, the aperture in an optical eye, acts as a selector of light rays in space; by admitting only a small bundle of rays, it generates much more coherent information about energetic events unfolding beyond it (you might call the pupil a “far-selector”).
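The pinhole geometry is easy to quantify with similar triangles. The sketch below uses pure ray optics and made-up dimensions (ignoring diffraction, which eventually punishes very small pinholes): a point source at some distance casts a blur circle on the receptor layer, and narrowing the aperture shrinks that blur—sharpening the image at the cost of light.

```python
def blur_diameter(aperture_mm, object_mm, depth_mm):
    # Blur circle that a point source casts on the receptor layer behind a
    # lensless pinhole: rays fanning through an aperture of the given width
    # keep diverging for the pit depth behind it (similar triangles).
    return aperture_mm * (object_mm + depth_mm) / object_mm

wide, narrow = 2.0, 0.2   # illustrative pinhole diameters, in mm
for d in (wide, narrow):
    # Point source 100 mm away; receptors 5 mm behind the aperture.
    print(d, blur_diameter(d, object_mm=100.0, depth_mm=5.0))
```

Shrinking the aperture tenfold shrinks the blur circle tenfold: fewer rays admitted, but each receptor now answers for a much narrower slice of the scene, which is what turns a bare light sensor into an imaging device.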

In contrast, the basic pre-sense enabled by intracellular quantum computing would be a temporal sense, a time eye, which amounts to an ability to gather information about outcomes ahead of the organism in its future rather than objects at some distance away in space. To create a time eye, evolution would have needed to create a system that is able to tell the difference between two or more groups of otherwise identical particles that will receive different measurements later. This requires the system to have a potential “measuring presence” at two points in time, not just one, the same way a primitive eye requires bodily tissue at two different distances from the external “seen” object (i.e., the retina and the aperture or pupil). That is no problem: In the block universe of Minkowski spacetime, organisms are continuous in time, wormlike beings that have beginnings, middles, and ends, like stories, just as they have extension in space. Thus, we are now talking about a kind of “sensory apparatus” that “points” along the direction of the organism’s world line through the glass block—that is, along the time axis instead of along a spatial axis like an eye.

By convention, most graphs put time along the horizontal x-axis, and intuitively in our culture we often think of time as “running” from left to right. So to visualize such a system, mentally rotate your simple optical eye away from the vertical y-axis of space so it is directed instead horizontally, facing to the right, along the x-axis. Instead of an in-falling rain of light being constrained by a narrow aperture to form a coherent image on a recessed surface, the noisy back-flow of influence from the future needs to be constrained, or post-selected, later in time, on the right, to form a coherent “image” at an earlier time point, on the left, when it is (er, was—we have to fudge our tenses here) measured initially, perhaps via some form of weak measurement. What would be the post-selection parameter acting as the temporal “aperture” in this time eye? Most simply, it could be existence at that later time point: survival, in other words … if not the survival of the organism as a whole, at least the persistence of the molecular apparatus doing the measurement.

Weak measurement is a sophisticated tool used in experimental situations to (among other things) detect possible retrocausal influences. But with billions of years of trial and error in the primordial soup, it may not have been necessary to engage in something as subtle as that. All a quantum presponsive circuit, a molecular future detector, needs to have done is learn to somewhat reliably detect a difference between groups of identical quantum-entangled objects (particles, atoms, even molecules) whose only difference is what happens to them next (e.g., in a few milliseconds). An array of entangled qubits in a molecular quantum computer, representing multiple options in a decision space (such as moving to the left versus moving to the right), could serve as a precognitive guidance system, orienting the organism generally toward positive outcomes ahead in its timeline. 21

My hand-waving makes it sound nice and easy—but do we actually know of any molecular structures or systems in nature that could be capable of detecting optimal outcomes (most basically, survival) in their future? Maybe. It just so happens that the ongoing search for the roots of consciousness in quantum biology has turned up an excellent candidate—if not for consciousness, then at least for cellular quantum computation, and with it perhaps the temporal shenanigans that would make a time eye possible.

The cytoskeleton or internal structure of all complex cells is formed from tiny tubular polymers called microtubules. Originally thought to be merely the bones of cells, giving them their shape, these highly dynamic structures are now known to drive cellular movement and shape-changing and to control cell division. They seem to act as the “brains” of cells. Thus, their information storage and computation abilities have attracted a great deal of attention in recent years. 22 Interest in these structures as computing devices is often associated with the work of Stuart Hameroff. As an anesthesiologist at the University of Arizona in the 1980s, Hameroff noticed that anesthetics seem to cause unconsciousness via their actions in microtubules, which are particularly numerous and complexly arrayed in neurons. Thus, he hypothesized that these structures may be the quantum computational basis for consciousness. 23

A quantum computer, remember, is a matrix or lattice of entangled atoms or other particles that are kept isolated from their surroundings to preserve their entanglement and can act as quantum bits or qubits. Whatever you do to one of the particles affects all the others simultaneously—although again, entanglement may really reflect a zig-zagging connection across time. Conveniently, microtubules are perfect tubular lattices built from individual cup-shaped proteins called tubulin. Proteins like tubulin have multiple ways they can fold, called conformational states. After several researchers had suggested that quantum mechanics may play a role in determining how proteins select one versus another state, Hameroff suggested that tubulin molecules could play the role of qubits through their variable conformational states. 24 The lattice structure of microtubules places the individual tubulin molecules, and electrons in bonds within them, at distances that would enable entanglement to occur. Hameroff and his colleagues subsequently confirmed that microtubules transmit electricity according to quantum principles: Like superconductors, they offer no resistance. This makes it increasingly promising that his hypothesis—at least about their quantum computational abilities, if not their role in consciousness—could be right. 25

The possible role of microtubules as the central information processors of complex cells also interested the pathbreaking evolutionary biologist Lynn Margulis. 26 According to Margulis, complex nucleated cells (eukaryotes) formed originally over two billion years ago from the endosymbiosis or merger of bacteria having different lifestyles and able to make different contributions to the collective. For instance, the engulfment of specialized bacteria gave eukaryotes their oxygen-burning mitochondria and the chloroplasts that enable plant cells to photosynthesize. Both of these structures still retain their own DNA separate from that found in the cell’s nucleus, proving they were once free-living organisms. 27 Establishing the origins of microtubules has been more difficult, but Margulis argued that those structures, as well as the cilia and flagella that facilitate motion in many eukaryotic cells, are the inheritance of an early engulfment of the distant ancestors of today’s spirochetes (a group that includes the pathogens that cause modern syphilis and Lyme disease). These bacteria were little corkscrews that distinguished themselves from other early bacteria by speedily moving from place to place. The undulatory movement of spirochetes, as well as the motors that drive the movement of cilia and flagella in more complex cells, is made possible by the dynamism of microtubules.

The link between microtubules and movement could be consistent with the hypothesis that microtubules were the first cellular guidance systems, even the first future detectors. You do not need to “decide” much if you are a relatively stationary bacterium floating in muck or clinging to some surface. And most types of bacteria do not have microtubules. (Today, bacteria are thought to navigate mainly by orienting toward or away from chemicals in their environment, called chemotaxis.) But if you are a speedy mover, a way of making informed choices about whether to move to the right or move to the left (or up or down) could come in very handy. If microtubules are quantum computers presponsive to post-selected outcomes (i.e., survival) in addition to perhaps encoding a record of past successes, then there you have it: Engulfed spirochetes may have endowed eukaryotes not only with motility but also with quantum pre-sense, and perhaps simple learning ability, via their microtubules. If such a molecular quantum computer could detect the relative favorability of multiple decisional options, even a few milliseconds into the future, it would obviously have conferred a valuable selective advantage on any cell equipped with it. It would have given that cell the ability to bind time.

Time binding was a term originally coined by the philosopher Alfred Korzybski to denote our species’ unique ability to transmit information to later generations and thereby pursue goals that transcend the span of an individual’s lifetime. For Korzybski, who influenced many 20th-century science fiction writers like Philip K. Dick, Robert A. Heinlein, and Frank Herbert, time binding was implicitly higher than both space binding, the activity of animals who live in an eternal present and are dominated by the imperative to forage and hunt for food in their environment, and energy binding, the activity of plants that, via their chloroplasts, convert energy from the sun. 28 Korzybski wrote before the era of quantum biology, and he was not thinking in terms of any ability to use information from an organism’s future as well as its past. But something like time binding, mediated perhaps by a molecular future detector along the lines I have proposed, may be as old as those energy- and space-binding functions of life, truly a “first sight.” Earth’s primordial soup may have been a precognitive soup.

The Big (6-Inch-in-Diameter) Picture

Starting right around the time Charles Howard Hinton wrote his A New Era of Thought, the Spanish pathologist Santiago Ramón y Cajal was using paint and ink to depict the animal neurons he saw under his microscope in great detail, revealing that these cells were like trees with often hundreds of branches and countless bud-like projections, each making a connection to another neuron. At that time, it was not yet widely believed that neurons were the building blocks of thought, but their importance rapidly became apparent to Ramón y Cajal. Around the turn of the century, he drew and painted even more mindbogglingly complex human neurons, vast and sublimely intricate despite being so tiny. He called neurons “mysterious butterflies of the soul whose beating of wings may one day reveal to us the secrets of the mind.” 29 Ramón y Cajal’s work paved the way for 20th- and 21st-century neuroscience.

With recent visualization technologies, even a one cubic millimeter speck of mouse cortex looks like a dense Amazonian rainforest—a vast jungle of trees with roots and tendrils making millions of connections. The human brain, an object a little bigger than a grapefruit, contains about 86 billion of these cells, each one making about 1,000 synaptic connections with neighboring cells, amounting to about 86 trillion connections across the brain. It is sometimes said that there are more possible paths that a signal can take through this structure than there are atoms in the entire universe. Neuroscientist Christof Koch famously declared that the brain is “the most complex object in the known universe.” 30 This is true even just at the “macro” scale neuroscientists can easily study with present-day imaging technology, and trends in several research fields promise to exponentially increase our knowledge of the brain’s complexity in coming decades.
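The connection count follows directly from the two round estimates just cited; a quick back-of-the-envelope check shows where the tens-of-trillions scale comes from:

```python
neurons = 86_000_000_000           # ~86 billion neurons in a human brain
synapses_per_neuron = 1_000        # ~1,000 connections each (a round estimate)
total_connections = neurons * synapses_per_neuron
print(f"{total_connections:.1e}")  # on the order of 10**13: tens of trillions
```

Both inputs are order-of-magnitude figures—estimates of synapses per neuron range into the thousands—so the product should be read as a scale, not a census.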

The most publicized controversy in neuroscience and philosophy today concerns the brain’s role in relation to consciousness—whether experience and awareness arise solely from brain processes, whether consciousness is an “emergent property” that rests on those processes yet cannot be predicted by them, or, again, whether it may be somehow more basic and universal in nature. Some, like Koch, suggest that consciousness is intrinsic to complexity itself, and that while the brain may be super-conscious, even a rock may possess a tiny bit of that ineffable quality. Critics of Enlightenment materialism (including many parapsychologists) are particularly keen to reject neuroscientific assumptions that consciousness is a product only of brain processes or even that it is some kind of higher-order emergent property. 31 (Again, Rupert Sheldrake has called this assumption “promissory materialism.” 32 ) Alternative theories have long been proposed, such as that the brain merely acts as a kind of prism or radio receiver for consciousness—a view argued by Frederic Myers, Henri Bergson, and the psychologist William James, for example. 33 (Although today’s scientific psychologists often consider James the father of their science, they choose to overlook his interest in psychical phenomena and his anti-materialistic views on consciousness.) The novelist Aldous Huxley later used the metaphor of the brain as a “reducing valve” for consciousness or what he called “Mind at Large.” 34 The position that consciousness is actually fundamental and irreducible in nature is not unrelated to this idea. Panpsychism, for instance, is the position that matter is just a manifestation of mind—an idea with deep roots in Eastern philosophy and advanced in different ways by prominent 20th-century thinkers like Bergson, philosopher Alfred North Whitehead, and Carl Jung. 35 A good argument can also be made that consciousness is really an ill-defined term that marks the breakdown of language and symbolization at certain boundaries and margins of knowledge, rather than a well-formed problem that either the sciences or philosophy could ever hope to solve definitively on their own. 36

Whatever the case—whether the material brain produces consciousness or merely receives or filters it—there is much less debate over the importance of the brain in shaping experience, controlling the body, and even in encoding an individual’s personality and memory, all the forms of information that we find meaningful in defining our selves and in helping us act successfully in the world. Damage to specific brain areas produces very predictable and often catastrophic deficits in functioning and impairments in meaning-making, as Oliver Sacks showed in his prolific writings. The brain as described by contemporary neuroscience is a jaw-droppingly complex mechanism of interacting parts and functions—a vast, hyperfast system coordinating sensory inputs, motor responses, and involuntary processes throughout the body from heartbeat to breathing to digestion as well as preserving or “storing” a record of experiences (although the old computer metaphor for the brain, with its storage and retrieval operations, has largely given way to new metaphors drawn from weather and other chaotic systems). As discussed previously, it is also an imaging system with unbelievable resolution, able to generate realistic pictures and sounds and words in the inner workbench of thought—images based on real-life experiences as well as ones that are wholly new and original. It makes sense that if a full-on quantum computer exists in nature, the brain (or components of it) would be the most exciting and promising place to look.

Cognition is increasingly recognized to be “quantum-like” in numerous ways. For instance, when viewing an ambiguous image like a Necker cube or a duck/rabbit, the viewer only sees one aspect at a time, not both—an either/or that oscillates from moment to moment. 37 Words or other items learned in memory experiments have multiple potential links to other cues—akin to superposition—until a test is administered, which effectively forecloses nonrelevant associative links. 38 Various fallacies and heuristics in probability judgment also seem best modeled quantumly. For instance, the order in which information is presented to a test subject constrains the outcome of that person’s decision. 39 Thus the latest thing in cognitive psychology is the framework of “quantum cognition,” describing the processes of perception, memory, or judgment in quantum computer terms, along with a bold disclaimer that “quantum” is just a convenient and suggestive metaphor. But plenty of researchers have been keen to prove that the metaphor is more than a metaphor.

In the 1980s, physicist Roger Penrose argued that the brain must literally be a quantum computer after pondering mathematician Kurt Gödel’s incompleteness theorems: Any formal mathematical system will contain statements that are unprovable within the system, and no such system can prove its own consistency internally. Penrose reasoned that only a quantum computer could arrive at the idea that computation can never be complete, and thus the brain—or at least Kurt Gödel’s brain—must be such a device. 40 His reasoning was analogous to Hinton’s argument that only a brain with “molecules” that reached across the fourth dimension could think in four-dimensional terms. Like Hinton, Penrose was clearly ahead of his time. Since Stuart Hameroff had already identified a likely quantum culprit with his microtubules, Penrose collaborated with the anesthesiologist to formulate what they call the “orchestrated objective-reduction” (Orch-OR) hypothesis, in which neuronal microtubules create consciousness through brain-wide entanglement. 41

The narrow channels in neuronal walls that control the movement of ions into and out of the cell—and thus the cell’s action potential (the electrical charge that passes down it when it fires)—have also attracted attention, as these are potentially sites where particles are protected from environmental interference long enough that they could become entangled. 42 Already in the 1970s, physicist Evan Harris Walker had proposed that consciousness depended on quantum tunneling at the synaptic cleft, the narrow gap where molecules carry signals from neuron to neuron. 43 There is no reason why more than one mechanism could not play a role, or why there might not be others that no one has even imagined yet. The brain’s quantum computation could take multiple forms. The problem with existing theories, however, is that there is no known way that quantum entanglement could be preserved throughout or across the whole brain, which intuitively might seem to be a requirement for “quantum consciousness.” 44

Again, though, it may be that the search for the physical roots of consciousness per se, even in quantum biology, is a hopeless task, simply because of the rift I mentioned earlier between the necessarily reductive methods of objective science and subjective descriptions of experience. No matter how fine-grained neuroscientists’ understanding of the brain becomes, it may never map convincingly onto qualia, or “what it feels like” to be a conscious entity. This is what philosopher David Chalmers called the “hard problem” of consciousness. 45 Nevertheless, with the hard problem acting as a MacGuffin spurring competing efforts by researchers in different fields, a byproduct of the search for the roots of consciousness in the brain may be the discovery of quantum processes in or between neurons that might enable the brain to process information across its timeline.

In the spirit of Charles Howard Hinton, here’s a sketch of what a brain-based account of precognition might look like in the coming era of quantum hyperthought.

We can start with what is already generally agreed upon about how memory works in the brain. “Memories” do not exist whole and discrete, like fossils of past experience tucked in the brain’s folds. Different experiences activate, and thus can share, many of the same neurons and connections, across many areas of the brain. Although it is a simplistic (even simple-minded) metaphor, individual neurons could be thought of as “pixels” in our mental life and experience; just as a TV screen reuses the same pixels in different ways from instant to instant to create sequential still images in a changing picture, the brain reuses the same cells and circuits in different configurations from moment to moment to generate the flow of our thoughts and experiences. What seems to distinguish one memory from another, or one experience from another, is a unique spatial and temporal pattern of neuronal activation across the brain. That pattern is determined by the variable strengths of synaptic connections among all those linked neurons.

Memory is made possible by the brain’s plasticity, its ability to change from day to day, minute to minute, even moment to moment. The strengths of those trillions of synaptic connections are continually being updated based on our experience. Although different types of learning have been identified in different brain circuits, the most basic principle operative throughout the brain is what is known as Hebbian learning, captured by the phrase “neurons that fire together, wire together.” When a neuron sends a chemical signal to another neuron, the synapse where they link up is strengthened, so that future signals across that synapse pass more readily—a process called long-term potentiation. Through this process, our experiences are self-reinforcing, like a trickle of water wearing a deeper and deeper rivulet in the soil to become a stream. By the same token, connections that are not reinforced weaken over time—a process called long-term depression.
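The Hebbian principle can be captured in a few lines of code. The toy model below is my own illustration, not anything from the neuroscience literature (real synaptic plasticity involves many interacting mechanisms): a connection weight grows whenever the two simulated neurons are active together, and decays when the connection sits unused.

```python
# Toy Hebbian learning: the weight strengthens when pre- and postsynaptic
# neurons are active together (long-term potentiation) and slowly decays
# otherwise (long-term depression). Purely illustrative; the learning and
# decay rates are arbitrary assumptions.

def hebbian_update(weight, pre_active, post_active,
                   learn_rate=0.1, decay_rate=0.01):
    if pre_active and post_active:
        return weight + learn_rate * (1.0 - weight)  # potentiate, capped at 1.0
    return weight * (1.0 - decay_rate)               # unused synapse weakens

w = 0.2
for _ in range(20):                  # "neurons that fire together, wire together"
    w = hebbian_update(w, True, True)
print(round(w, 3))                   # weight has grown toward 1.0

for _ in range(20):                  # the connection now sits idle and decays
    w = hebbian_update(w, False, False)
print(round(w, 3))                   # weight has eroded back down
```

The self-reinforcing character the author describes falls out of the math: each co-activation moves the weight a fraction of the remaining distance toward its ceiling, so well-worn connections get progressively harder to dislodge, like the deepening rivulet in the metaphor.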

If there is a brain-based theory of precognition forthcoming in science’s future, it will likely involve these same processes of memory and learning, specifically the ability of synapses to update their facility of signaling. Lo and behold, the cytoskeleton of neurons—including the aforementioned microtubules—controls this process. 46

When the axon of an “upstream” neuron sends neurotransmitters to the dendritic spine of a “downstream” neuron, that dendritic spine enlarges and in other ways makes itself more receptive to future signals, and it also sends retrograde messengers to the upstream neuron that initiate similar changes in the axon terminal. These structural changes that enhance the ability to send and receive signals at a synapse are controlled by the shape-changing of microtubules—a process governed by a kind of chemical dance of proteins that continually disassemble and reassemble these structures, shortening and lengthening them at either end. 47 (Among the microtubule-associated proteins governing this shape-changing is tau; dysregulation of tau proteins is associated with the devastating impairments of Alzheimer’s disease.) Microtubules also transmit electrical signals through the cell and serve as tracks for the transport of cellular raw materials. In reshaping the synapse and controlling synaptic efficiency, microtubules act in concert with other cytoskeletal structures called actin filaments, which are also currently being studied as biological computational devices. 48

So, one hypothesis would be that these structures of the neuronal cytoskeleton may be behaving like tiny molecular versions of the apparatus in the Rochester experiment described in the last chapter: devices that somehow “weakly measure” the behavior of entangled qubits within them at time point A and then, after some regular, predictable length of time (probably rhythmically), perform a subsequent measurement—post-selection, in other words. Post-selection might even be a function of the ever-shifting length of a microtubule, as its ends disassemble and reassemble. Via arrays of microtubular time eyes controlling the shape of axon terminals and dendrites, signaling at a synapse may be potentiated or enhanced if that synapse is going to be signaling in the future (and depressed if it won’t be).

Remember that, according to the Dunnean view, precognition would not be a preview of future events out in the world, as is often assumed (negatively assumed, for those who reject precognition on principle); it is instead a presponsivity of the brain to its own future states and behavior (thoughts, emotions, perceptions). If quantum computation in the cytoskeleton enables synaptic connections to be conditioned by their future signaling, this would scale up in complex networks of interlinked neurons, enabling whole “pre-presentations” of thoughts and emotions to be projected into the past, albeit imperfectly and imprecisely, in roughly the same way that salient experiences “project” into the future as memories. There is no question in this model of the brain somehow “receiving” information directly from future events; it is simply communicating with itself across time. 49

This is by no means the first brain-based model for precognition to be proposed. I already mentioned Gerald Feinberg, who suggested in the mid-1970s that precognition is just memory in reverse. Speculating on a possible mechanism, he proposed that brain oscillatory patterns thought to play a role in short-term memory might have both an “advanced” and “retarded” component to them, in the manner of time-symmetric quantum-mechanical models. 50 More recently, Jon Taylor has suggested that similar patterns of neural activation at different points in time may resonate with each other across the brain’s timeline. 51 His proposal resembles Rupert Sheldrake’s argument that memory may be a function of informational patterns resonating across spacetime. But again, formative causation arguments (like Platonic models more generally) seem to put the cart of meaning (as “form”) before the horse of causation. Current retrocausal paradigms in quantum physics offer an interesting alternative way of thinking about informational reflux from the Not Yet, since they apply the same principle to information that Darwin, Wallace, and their contemporaries did for natural forms: selection . Post-selection is really just causal Darwinism.

At its most basic level, a “signal sent back from the future” via post-selection would be one that necessarily indicates a course of action that survived long enough to send that message back—like a little breadcrumb trail from the organism’s future self, or a note at a crossroads weirdly in its own handwriting, saying “come this way.” Part of what post-selection entails is predictability, and thus the recognizability of that handwriting. The more the mechanism can anticipate its state in a few milliseconds or seconds or longer, the more information from its future can have coherence and context, making it meaningful or useful in guiding behavior. Here, in the organism’s relationship to itself across time, is where a kind of “resonance” may come into play, although it must be understood metaphorically. When scaled up in a complex animal nervous system like that of a human being, it may be something like habit or conditioning—a kind of self-trust that the state of the individual performing a measurement now will be more or less the same as the state of the individual in a millisecond, a second, or a minute (or a decade)—that acts as a kind of post-selection, providing the cipher key for back-flowing information, enabling it to usefully guide behavior (i.e., be meaningful) rather than remain noise. 52

Other possibilities should also be kept on the table. For example, could there even be neurons in the body that fire in advance of incoming signals, kind of like Asimov’s thiotimoline molecule? 53 If a single neuron could get even a one-millisecond head start on firing, a chain of hundreds of such neurons (like a chain of Asimov’s endochronometers) could amplify that head start enough to explain the findings of Dean Radin, Daryl Bem, and other presentiment researchers. It is already known that quantum processes accelerate the transmission of electricity within neurons, 54 but for a downstream neuron to actually fire in advance of signals from the upstream neuron would seemingly require some kind of entanglement between molecules in separate neurons, across the synapse. Again, this kind of wider entanglement in the brain remains the holy grail for those trying to solve the problem of quantum consciousness, and it also remains the big stumbling block to those efforts, given the problem that entanglement tends to be lost in warm, wet environments. But researchers are rapidly learning about more and more ways quantum coherence can be sustained in biological systems over distances and across time spans that would have been thought impossible even a decade ago. 55 One puzzling phenomenon that is at least suggestive in this context is spontaneous neurotransmission—neurons firing without being triggered by any input from neighboring cells. Initially thought to be just “noise” in the brain (sound familiar?), it is now thought to play a role in reshaping synapses during learning. 56 Could it be evidence of thiotimoline-like neuronal behavior?
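The amplification logic in this thought experiment is simple arithmetic: if each neuron in a chain fired slightly before its input arrived, the head starts would add up linearly along the chain. The one-millisecond figure and the chain lengths below are purely illustrative assumptions, not measured values.

```python
# Back-of-envelope arithmetic for the hypothetical "thiotimoline neuron"
# chain. The per-neuron head start and chain lengths are illustrative
# assumptions only.

def total_head_start_ms(head_start_per_neuron_ms, chain_length):
    # Head starts accumulate linearly along a chain of such neurons.
    return head_start_per_neuron_ms * chain_length

# A chain of a few hundred neurons, each 1 ms "early," would respond
# half a second ahead of its stimulus; a few thousand would respond
# seconds ahead -- the scale of reported presentiment effects.
print(total_head_start_ms(1.0, 500) / 1000.0)   # 0.5 seconds
print(total_head_start_ms(1.0, 3000) / 1000.0)  # 3.0 seconds
```

The point of the sketch is just that an individually minuscule, undetectable temporal anomaly could in principle compound into a behaviorally meaningful one, the same way Asimov’s chained endochronometers did.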

There is much we still don’t know, obviously. But the bottom line is that if synaptic plasticity or other aspects of neurons’ behavior or signaling are controlled or influenced by molecular computers capable of harnessing time-defying quantum principles, then it is likely here that an answer to “how can this be?”—that is, how can the brain get information about its future responses to the world?—will be found. The biological basis of precognition would be learning processes in which the brain’s connectivity and signaling in the present are influenced not only by the individual’s past experience but also, to some as-yet-uncertain degree, by that individual’s future experience. It would enable that individual to “post-select” on rewards ahead and to be influenced by, if not intentionally access, information the individual will conventionally acquire down the road. 57

Such a proposal is, of course, still speculative, only slightly less hand-wavy than the various alternatives. It remains a hypothesis to be tested. But it does not fly in the face of what we are learning about quantum computational processes in biological systems and what some physicists are arguing (and discovering in the laboratory) about retrocausation. Thus, it should not be unpalatable even to materialists, at least in principle.

Libet’s Golem

It sometimes happens in science that new discoveries are made based on old data that were misinterpreted at the time they were collected because existing theories made no place for them. New species are often discovered in old museum collections, for example, when specimens are found to have been misidentified or just ignored, awaiting some shift in taxonomic or evolutionary paradigms. It may be that direct evidence of the nervous system’s time-defying behavior has been staring researchers in the face for nearly four decades. Famous perplexities having to do with the synchronization of sensation, action, and decisions would make more sense in a nervous system capable of computing four-dimensionally across its timeline than in any purely Newtonian information processor.

Older readers will remember drive-in movies—often badly projected, with the picture and sound frequently a few frames out of sync. If the nervous system is a purely classical, mechanical information processor, our everyday experience ought to be a little bit like an out-of-sync drive-in movie … but it isn’t, and why it’s not is a bit of a mystery. If you step on a sharp tack with your bare foot, the pain signal takes roughly half a second to travel the 1.5 to 2 meters from your foot to your brain and register in your awareness. Each nerve cell in a long chain has to receive a chemical signal, fire, release its own neurotransmitters, and so on. Other signals, however, such as the sight of your foot hitting the floor, have a much shorter “flight time,” since light travels from your foot to your eye far faster than that chemical-electrical pain signal can travel up your whole body. But if you watch your right foot as you are walking, you feel the sensation of your big toe touching the floor at exactly the same time as you see it happen. Why?
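The mismatch in the tack example can be put in rough numbers. Conduction velocities vary enormously by nerve fiber type; the figures below are standard textbook ballparks chosen by me for illustration, not measurements from the source, but they convey the scale of the problem.

```python
# Rough "flight times" for different signals from foot to brain.
# Velocities are ballpark textbook figures for illustration only;
# conscious registration adds further processing time on top of
# raw conduction, hence the roughly half-second figure in the text.

DISTANCE_M = 1.75  # foot-to-brain path, roughly 1.5-2 m in an adult

signals = {
    "light from foot to eye": 3.0e8,                   # m/s, effectively instant
    "touch (large myelinated A-beta fibers)": 60.0,    # m/s
    "sharp pain (thin myelinated A-delta fibers)": 15.0,
    "burning pain (unmyelinated C fibers)": 1.0,
}

for label, speed_m_per_s in signals.items():
    flight_time_ms = DISTANCE_M / speed_m_per_s * 1000.0
    print(f"{label}: {flight_time_ms:.2f} ms")
```

Even before any central processing, the different channels arrive spread across tens to hundreds of milliseconds (and, for the slowest pain fibers, over a second), which is exactly the drive-in-movie desynchronization the brain somehow papers over.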

Neuroscientist Benjamin Libet discovered this contradiction between sensory out-of-sync-ness and the subjective experience of our harmoniously orchestrated bodily movements and decisions in a series of landmark experiments in the late 1970s and 1980s. To explain how we don’t experience life like a drive-in movie, he argued that some process in the brain “antedates” our conscious experience relative to the stimulus so that multiple sensations can match up and be felt as synchronous—sort of like taking separate video and audio tracks in a video editing program and sliding one to the side so the visual component syncs up with the audio. But since, it was assumed, you can’t slide the slower of those components (e.g., the pain sensation) to the right in your timeline, syncing things up meant sliding all the faster signals to the left or holding them in some sort of buffer while the slower ones caught up. Libet concluded that the coherence of our experience, the fact that it is all synchronized, reflects the remarkable fact that we are really living always about a half second in the past. 58

It gets even stranger, though, when you include our feeling of conscious will in this fictitious synchronization. In 1983, Libet conducted a now-famous experiment that compared the subjective timing of participants’ decisions to move a finger with their motor neurons’ preparation to fire. He found that neurons begin building up a charge—what is called the readiness potential—well before the decision to move is consciously made; the conscious decision itself precedes the movement by only about a fifth of a second (200 milliseconds). This discrepancy between the readiness potential and the subjective sense of conscious will flies in the face of our ordinary experience of deciding to act and then acting, the intuitive sense that our will causes our actions and is not merely a spectator. 59 For a certain faction in psychology and neuroscience, Libet’s work was the last nail in Descartes’ coffin, the final death blow to the idea that consciousness is something over and above (and prior to) the mechanical operation of the brain’s circuits. Psychologist Daniel Wegner, for instance, expanded on Libet’s research and built an interesting (and troubling) case that our subjective experience of being the masters in our own house is altogether an illusion. 60 Libet himself did not go so far; he felt his discovery did not eliminate conscious will but altered its essential character. Instead of exerting free will, he said, we exert a veto power over pre-initiated actions. V. S. Ramachandran has called this “free won’t.” 61 Our conscious will can intercede within that 200-millisecond window to say “no” to an impulsive action initiated by the brain.

Much research in psychology and cognitive science over the past few decades has identified two systems that guide our behavior in parallel: a fast, largely unconscious, emotional system (sometimes called “System 1”), and a slower, more deliberate, more reasoned system that ideally hovers over and says “no” (“System 2”). It makes sense from an adaptive standpoint that, when making quick responses to real threats—such as swerving to avoid an oncoming car or changing your foot position to avoid a tack—we wouldn’t want our slower deliberative conscious will to get in the way and delay a response that the unconscious mind and body can handle more efficiently and swiftly. Yet even in the absence of such an impediment, how does a large animal like a human manage to survive when sensory signals take a measurable fraction of a second to reach the brain, where they must then be processed and interpreted, before a set of responses can then be sent back down parallel nerve channels to make a motion? It ought to make us very clumsy and easily defeated by sudden threats and by smaller animals with simpler nervous systems. And just imagine if you were an elephant, or a swift 10-ton T-Rex whose sensory and motor signals needed to travel many times that distance.

An emerging paradigm in psychology and neuroscience emphasizes the brain’s role as a predictive processor , meeting these challenges by generating constant simulations or forecasts that are able to guide the body toward anticipated outcomes. 62 A baseball player is able to swing at a point in space where he perceives the ball will be, rather than base his actions on constantly updated, but necessarily slow, moment-to-moment input from his eyes. It happens outside of conscious awareness—and it goes along with many “superpowers” that neuroscientists have attributed to our fast, unconscious processing, in a largely unacknowledged debt to Sigmund Freud and his generation of Victorian psychiatrists who were perplexed at humans’ seeming supernormal abilities (more on which later). But another possibility is that some of what gets labeled unconscious or implicit processing may really be the time-defying possibilities that have been revealed in the experiments of Bem, Radin, and their colleagues. Physicist Fred Alan Wolf proposed in the late 1980s that John Cramer’s transactional interpretation of quantum mechanics could explain Libet’s findings and may also help explain consciousness. 63 Quantum-biologically mediated presentiment, in other words, could be the real reason, or part of the reason, why our lives don’t feel like a drive-in movie.

If the brain really is a quantum future detector—or perhaps, trillions of quantum future detectors networked classically—then effective motor action might be initiated partly from a position displaced slightly ahead in an organism’s timeline, when the success of the action is already confirmed. Such a model would offer another way of thinking about skillful performance in sports or martial arts, for example, not to mention intuition, creative insight, and inspiration. I like to think of this possibility as “Libet’s golem,” an ironic, science-fictional perversion of the lumbering clay robot of Jewish folklore. It is conceivable that we are not after all mere automatons, spectators of our bodies, as Wegner argued, but could be pulling our meat puppets’ strings from a position offset from the “now” of sensation, or perhaps even from multiple temporal vantage points distributed across time. This would be especially the case when engaged in a skilled activity, and it might account for the dissociated feeling that accompanies states of peak engagement and creativity. Could temporally displaced action-initiation even account for the “mental replay” that often follows a successful high-stakes action? In other words, when we mentally relive a successful action in its immediate aftermath, might we in fact be initiating that action, from that action’s future?

It is in this respect that gaining greater understanding of what happens in quantum computers, including biological quantum computers and the role they play in the nervous system, really could provide an important missing piece of the consciousness puzzle. Rather than being simply tantamount to coherence or entanglement across spatially separate parts of the brain, consciousness could instead (or also) have something to do with cognition being distributed across time. With its trillions of classical connections (i.e., chemical signals) mediating the actions of many more trillions of molecular quantum computers, the brain might turn out to be an exquisitely tuned device extracting and synthesizing relevant information from across some indefinite time window and bringing it to bear on an immediate situation or problem. Although our awareness remains tied to a single synchronized (yet in fact, fictitious 64 ) instant of stimuli coordinated among our five senses, we might in fact be “thinking with” a wider swath of our future as well as past history. In which case, the brain really would be a kind of informational tesseract, a 4-D meaning-machine.

This would not make the mind “infinite,” the implicit and sometimes even explicit promise of those who claim psi phenomena must rule out a reductive, materialistic explanation for consciousness. 65 But even within a materialist framework, the mind would be vastly bigger, vastly longer , than we ordinarily suppose—and thus, indeed “transcendent.” 66

What are the possibilities? How “long” could our minds really be? Might we even sometimes draw on the entirety of our brain’s computing power across our lifespan? We veer very far into speculation here, obviously, but some capacity to compute across long swathes of time would make sense of baffling experiences like dreams and artworks that seem prophetic of events years or decades in the individual’s future. It would also help clear up yet another mystery in the cognitive sciences: the oft-noted correlation between intelligence and longevity. There are some commonsensical explanations why smarter individuals tend to live longer, such as being better able to avoid dangers, as well as possible confounding factors like education and affluence. Yet these confounds have never been able to fully account for the correlation, and cognitive epidemiologists have argued that there must therefore be a strong genetic underpinning to this association. 67 But as yet, no specific gene variants have been identified that produce both smarts and long life. Obviously, as they say, “more research is needed.” But the idea that the brain could be a quantum computer drawing on its computing power over its whole history, or at least over significant swathes of that history, raises the intriguing possibility that longer lifespan might to some extent cause higher intelligence, by increasing the four-dimensional computing resources of the individual’s brain.

It would be easy to test this hypothesis, in principle. One would simply test the intelligence or problem-solving ability of a group of identical animals (cloned mice reared in the same conditions, say) who are of the same age, and then subsequently “sacrifice” (as they euphemistically say in laboratories) a randomly selected half of the animals, allowing the remainder to live a full life—a kind of post-selection, in other words. If their brains are making computations drawing on the computing power of a whole mouse lifetime, the long-lived mice would be expected to have performed better on the problem-solving task than the short-lived ones. I only offer this as a Gedanken -animal-experiment, and assuredly no animals have been harmed in the creation of this book. But it is possible to go partway toward such an experiment simply by comparing the performance of mice who engage in learning or practice after they perform the assessed task with mice who don’t … but in that case, why use mice? The retroactive-facilitation-of-recall experiments conducted by Daryl Bem are precisely such a study in humans, and the results point to a real effect of subsequent learning on prior performance. 68
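For what it is worth, the comparison this Gedanken experiment calls for is easy to sketch in code. The simulation below is purely illustrative and proves nothing: it builds the hypothesized lifespan effect into the data by fiat (the effect size, cohort size, and noise level are all arbitrary assumptions of mine), simply to show what the predicted long-lived-versus-short-lived contrast would look like in the analysis.

```python
# Toy sketch of the Gedanken mouse experiment's design and analysis.
# NOT evidence: the hypothesized effect (task score rises with eventual
# lifespan) is written into the data generator on purpose, just to show
# the comparison the design would test. All parameters are arbitrary.

import random

random.seed(1)

def simulate_cohort(n=100, effect=0.5):
    """Each mouse gets a random future fate and a noisy task score."""
    mice = []
    for _ in range(n):
        long_lived = random.random() < 0.5          # random "post-selection"
        noise = random.gauss(0.0, 1.0)              # ordinary variation
        score = noise + (effect if long_lived else 0.0)  # effect assumed by fiat
        mice.append((long_lived, score))
    return mice

def mean(xs):
    return sum(xs) / len(xs)

mice = simulate_cohort()
long_scores = [s for lived, s in mice if lived]
short_scores = [s for lived, s in mice if not lived]

# The predicted signature: to-be-long-lived mice outperform, on average,
# mice destined for early sacrifice -- positive if the built-in effect
# shows through the noise in this sample.
print(round(mean(long_scores) - mean(short_scores), 2))
```

Note that the interesting part of the real design is entirely absent from the simulation: in the actual experiment, any score difference would have to arise before the random selection is made, which is precisely what would make a positive result so strange.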

In a 1974 meeting of physicists and parapsychologists in Geneva, Switzerland, the French physicist Olivier Costa de Beauregard—the one who first proposed that quantum entanglement might be explained retrocausally—described the train of thought that led to his own ultimate acceptance of the probable existence of something like precognition:

My starting point … occurred in 1951 when I suddenly said to myself: If you truly believe in Minkowski’s space-time—and you know you have to—then you must think of the relationship between mind and matter not at one universal or Newtonian instant but in space-time . If, by the very necessity of relativistic covariance, matter is time extended as it is space extended, then, again by necessity, awareness in a broad sense must also be time extended. 69

It is very much like an updated version of Charles Howard Hinton’s reasoning about the brain’s capacity for four-dimensional thought: Our brains are four-dimensional, so our awareness must be as well.

If our awareness is four-dimensional, the mystery becomes: Why do we experience our experience as confined to that narrow cursor in our life’s timeline—the “single Newtonian instant,” as Costa de Beauregard puts it? It may have a lot to do with the mystery raised earlier: Why is it that efficient causes (the ones that “push” from the past) are so much more obvious and intuitively understood than final (teleological) ones? Does it boil down to some biologically determined preferential weighting of past experiences over future ones in updating synaptic connections? Could it be partly a function of our cultural beliefs and expectations about causality and free will, acting as a kind of “restraining bolt” on our natural precognitive abilities? Could it even have to do with something as simple as the way sentences in our language unfold in a single direction (the premise of the movie Arrival—more on which later)? We’ll explore these and other possibilities, although I make no promises we’ll get to the bottom of it—plenty of greater minds than mine have tried and failed to crack the nut of time and its relation to consciousness. 70 But the tesseract brain offers, I think, an exciting new way of thinking about (or perhaps, hyperthinking about) these problems, as well as helping explain why there are so many situations in which, despite our tendency to consciously inhabit that Newtonian instant, our unconscious seems to “know” more than it ought to be able to in a purely Newtonian, mechanistic world. Experiences like precognitive dreams point to a whole unknown part of our lives—our whole future—that we are interacting with, subtly and obliquely, and that is exerting an influence over our thoughts and behavior now, here in our future’s past.

Some of the prophylaxis against knowing more about that future (instead of just feeling it) may have to do with the difficulties of source monitoring mentioned earlier. We might picture the 4-D tesseract brain as a long hall in a hotel, with information passing up and down it in both directions like guests—some are familiar faces from our past, whom we will greet and engage with, but there are also lots of unfamiliar faces, total strangers. We’ll tend to ignore, avoid, or even be suspicious of the latter, not having any reason to suspect that they may be from our own future. We may even make up untrue stories about their origins. Meanwhile, along the whole length of that hall, there is just a single window that opens onto the world beyond the body. This narrow temporal aperture of coordinated sensory experience serves as the singular focus of our engagement with external reality, and it gradually moves from one end of the hall to the other, as we move from childhood to old age. Given that even mainstream cognitive science agrees that this “now” is a fiction, then no matter how compellingly it arrests our attention, we should pay greater heed to those strangers coming down the hall—that is, pay more attention to the unfamiliar parts of what we may think of as our “inner” experience. Perhaps that way we can learn something about our 4-D nature, the shape of our cosmic wormlike life as it wends and twists its way through the glass block of Minkowski spacetime. It could really be that the now of our conscious experience bears a similar relationship to the entirety of our thought across a lifetime that a point of light from a magnifying glass bears to the sun projecting that light.

As we bring this obligatory “nuts and bolts” section of the book to a close and leave behind all the physical and biological hand-waving, let me reiterate and underscore just the following idea, the controlling theme for the second half of this book: Whatever physical (or even nonphysical) mechanisms will eventually be found to explain our ability to access and be influenced by our future, much of what has been called “the unconscious” may instead be consciousness displaced in (or distributed across) time . Without at all realizing it, Freudian psychoanalysis may have always been a science of the truly weird, time-looping effects precognition produces in our lives.