8.

NON-LIVING TO LIVING AND NEURONS TO MIND

In the beginning of heaven and earth there were no symbols. Symbols came out of the womb of matter.

—Lao-tzu

An important stage of human thought will have been reached when the physiological and the psychological, the objective and the subjective, are actually united.

—Ivan Pavlov

IN THE SEARCH to understand matter more precisely, physicists stumbled upon complementarity, the concept that matter has paired, mutually exclusive properties, such as wave and particle behavior, that cannot be observed at the same time. Accepting this duality did more than simply push the boundaries of physics. It demanded new thinking, beyond what we could imagine from our own experience of natural phenomena, to understand the natural world. Today, new thinking and imagination stretching are needed by those studying mind/brain duality. We need someone who will think outside of the twenty-five-hundred-year-old box stuffed full of human intuitions and conventional wisdoms. We need someone who has struggled with the ebb and flow of modern physics, who recognizes the importance of complementarity. We need someone who thinks the last few millennia have been frittered away by philosophers looking for answers in the wrong place: the highly evolved human brain. We need someone like Howard Pattee, a Stanford-educated physicist who waded into theoretical biology during his stunning career at SUNY Binghamton. Pattee, who has been a keen observer of human thought, feels that philosophers have approached the mind/brain divide from the wrong end of evolution.1 Over the course of his life, Pattee has come to a startling conclusion: duality is a necessary and inherent property of any entity capable of evolving.

Howard Pattee doesn’t concern himself with the gap between the material brain and the immaterial mind. He delves much deeper. The original gap, the true source of the problem, was there long before the brain. The mother of all gaps is that which lies between lifeless and living matter. The fundamental problem started at the origin of life on Earth. We should not focus exclusively on the gap between the physical brain and the ethereal mind. We must understand the difference between the conglomerations of matter that produce a material object that is lifeless and the conglomerations of matter that produce a thing full of life. The gap between living and non-living is at the root of the gap between the mind and the brain and offers a framework for addressing the mind/brain problem.

Pattee’s idea takes a little time to get used to and comes with an important warning: if we want to understand the idea of consciousness, something fully formed in evolved living systems, we must first understand what makes a living system alive and evolvable in the first place. What happened that split things up into two domains, one living, one not? Most of us have thought about the “life springing from matter” problem for a few seconds, then dismissed it as too difficult and gone about our business poking and measuring what is in front of us. Not Pattee. He was captivated by questions about the beginnings of life when he was an adolescent, first dipping his toes into scientific waters at boarding school in the late 1930s. His science teacher and headmaster, Dr. Paul Luther Karl Gross, gave him a book for summer reading. It was not the usual light fare for summer. It was The Grammar of Science (first published in 1892) by Karl Pearson, a brilliant mathematician and statistician.

At the time, Pattee wondered why he was being given a seemingly out-of-date science book written before the days of quantum theory. However, in the chapter “The Relation of Biology to Physics,” Pattee found a question that would motivate his thinking for decades: “How … is it possible for us to distinguish the living from the lifeless if we can describe both conceptually by the motion of inorganic corpuscles?”2 Pattee saw the logic of the question, but he also understood that invoking the same laws to describe both animate and inanimate matter was not a good enough explanation. In fact, it was no explanation at all. There had to be more to the story.

The Headache of Quantum Mechanics

Pattee was very lucky to have such an exceptional headmaster. Dr. Gross stimulated his students’ thinking by taking them to cutting-edge scientific events, including the Nobel laureate Linus Pauling’s evening lectures at Caltech. One evening, Pattee heard Pauling describe Schrödinger’s famous cat paradox, the notion that a cat could be both dead and alive at the same time. It went like this: A cat is penned up in a steel chamber, along with a very tiny bit of radioactive substance and a Geiger counter to measure the radiation emanating from the substance. Due to the radioactive decay rate of the substance, there is a fifty-fifty chance that, within an hour, none of the atoms will decay. However, there is, of course, a fifty-fifty chance that an atom will decay, and if it does the Geiger counter tube will discharge. In this contraption, the Geiger counter discharge will release a hammer that shatters a small flask of hydrocyanic acid, killing the cat. This elaborate setup yields a scenario in which there is a 50 percent chance that, by the end of the hour, the cat is alive, and a 50 percent chance that the cat will be dead. Odd, but fair enough. However, in quantum mechanics, this phenomenon would not be expressed as the probabilities of two outcomes. It would be expressed as what is called a psi function, a description of the entire quantum state of the system. The psi function of Schrödinger’s poor cat would have the living and the dead cat smeared out in equal parts! Pattee sat there puzzled. How could quantum mechanics, a theory that so powerfully explained all of chemistry and most of physics, produce such complete nonsense as the Schrödinger cat problem? It launched him into a lifelong journey seeking resolution to this dilemma.
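The fifty-fifty odds in the setup are just radioactive decay statistics. The following sketch shows how a “very tiny bit” of substance can be tuned so that the chance of zero decays in one hour is exactly one half; the atom count and decay constant are illustrative assumptions, not figures from the thought experiment itself.

```python
import math

# Probability that NO atom decays within t hours, for n_atoms atoms
# each with decay constant lam (per hour): P(none) = exp(-n_atoms * lam * t).
def p_no_decay(n_atoms, lam, t_hours):
    return math.exp(-n_atoms * lam * t_hours)

# Tune the "very tiny bit" of substance so that n_atoms * lam = ln(2)
# per hour, which makes the one-hour survival odds exactly fifty-fifty.
n_atoms = 1000                 # assumed, for illustration
lam = math.log(2) / n_atoms    # per atom, per hour

p_alive = p_no_decay(n_atoms, lam, 1.0)  # the cat survives iff no atom decays
p_dead = 1.0 - p_alive
```

Classically, these are just two probabilities; the quantum description instead smears both outcomes into one psi function until a measurement is made.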

The puzzle that perplexed the young Pattee is referred to as the measurement problem. We discussed in chapter 7 that a quantum system has paired complementary properties that cannot be measured simultaneously. Measurements present additional challenges at the quantum level for three reasons. First, measurement requires an observer, a subject or agent who is separate from the object being measured. Second, the measurement process (being irreversible) is not governed by the classical laws of physics. Third, measurement has arbitrary aspects: the observer chooses when, where, and what to measure, as well as the symbols (themselves arbitrary) used to express the measurement. Measurement is a selective process in which most aspects of the thing being measured are actually ignored. Let’s say I want to describe you. What one measurement should I choose to capture you? I decide to measure your mass. I will take one weight measurement and use this to describe you over your lifetime. When should I take it? When you are an infant? An adult of twenty, thirty-five, sixty? The day before or after Thanksgiving? Which is most representative? Is weight alone a good measure of you? What about height and weight together? The measurement itself may be precise and objective, but the process of measurement is subjective.

The process of measuring is arbitrary, which means it can’t be described by objective laws, whether they be quantum or classical. This presents a problem for all of physics, not just the quantum variety. To make predictions about the future state of a system, a physicist must know the initial conditions of a system. How? Through measurement of the initial conditions of a system. Yet this measurement is arbitrary and, in making it, the physicist interferes with the initial conditions. The subjectivity of the initial measurement is commonly ignored by determinists when they assume that the world is completely predictable. But there is no escaping it. No matter how hard you try to be an objective observer, by the mere fact of measuring you are introducing subjectivity into the system. The “measurement problem” deals a huge blow to physics, but it might be just what neuroscience needs.

The Schnitt and the Origins of Life

Physicists refer to the inescapable separation of a subject (the measurer) from an object (the measured) as der Schnitt. (What a great word!) Pattee calls “this unavoidable conceptual separation of the knower and the known, or the symbolic record of an event and the event itself, the epistemic cut.”3 There is a world of actions that exists on the side of the observer with the observer’s record of an event. There is also a separate world of actions on the side of the event itself. This sounds confusing, but think of the explanatory gap between your subjective experience of an event (I had so much fun body-surfing) and the event itself (A person went swimming in the ocean). Alternatively, you can think of the explanatory gap between the same subjective experience (This is fun) and the goings-on within the brain (Some neurons fired while a person was swimming in the ocean). These are all just versions of the subject/object complementarity seen in physics. Here is the really wild part: Who’s measuring the events? To examine the difference between a person’s subjective experience and objective reality, do we need a scientist? Who’s measuring the scientist?

Pattee points out that neither classical nor quantum theory formally defines the subject, that is, the agent or observer that determines what is measured. Physics, therefore, does not say where to make the epistemic cut.4 Quantum measurement does not need a physicist-observer, however. Pattee argues that other things can perform quantum measurements. For example, enzymes (such as DNA polymerases) can act as measurement agents, performing quantum measurement during a cell’s replication process.5 No human observer is needed.

The nonsense of Schrödinger’s cat, which even the adolescent Pattee wanted no part of, arose not from the cat or the Geiger counter, but from the human performing the bizarre experiment. In Schrödinger’s thought experiment, the cat is described as a psi function, dead and alive at the same time. This situation persists until we open the box and make a measurement; that is, we observe whether the cat is alive or dead, but no longer both. The result of the measurement intervention (the human opening the box) appears to be instantaneous and irreversible, and the physical representation of the result (live cat or dead cat) is arbitrary. Yet how can this be when at the same time, all microscopic events are assumed to obey reversible quantum dynamical laws (e.g., Schrödinger’s equation)? Pattee notes that it was this inadequate model of measurement that prevented the state of Schrödinger’s cat from being known before it was observed. He states, “It was the belief that human consciousness ultimately collapsed the wave function that produced the problem of Schrödinger’s cat.”6 In fact, Schrödinger wrote his cat scenario specifically to illustrate that this notion was ridiculous. He hoped to illustrate that quantum superposition could not work with large objects, such as cats (or dogs or you, for that matter).

For Schrödinger, the joke was on us. He was trying to point out that there is something missing in our understanding. Pattee got it (in high school) and buckled down to attack the problem. Where should we put the cut, the gap, der Schnitt? With his consuming interest in the origins of life, he came to realize that human consciousness was way too high a layer in the architecture of all living organisms to put the epistemic cut between the observer and the observed, between the subjective experience and the event itself. There are umpteen layers between subatomic particles and human brains. There are plenty of layers between subatomic particles and brains in general (cat or mouse or fly or worm). Putting the major epistemic cut that high led to the nonsense of Schrödinger’s cat existing as a quantum system. There was no pussyfooting around for Pattee: “I have taken the point of view that the question of what constitutes an observation in quantum mechanics must arise long before we reach the complexity of the brain. In fact, I propose … that the gap between quantum and classical behavior is inherent in the distinction between inanimate and living matter.”7

There you have it. Pattee proposes that the gap resulted from a process equivalent to quantum measurement that began with self-replication at the origin of life, with the cell as the simplest agent.8 The epistemic cut, the subject/object cut, the mind/matter cut, all are rooted in that original cut at the origin of life. The gap between subjective feeling and objective neural firings didn’t come about with the appearance of brains. It was already there when the first cell started living. Two complementary modes of behavior, two levels of description, are inherent in life itself: they were present at the origin of life, have been conserved by evolution, and continue to be necessary for differentiating subjective experience from the event itself. That is a mind-boggling idea.

A Life in Symbols: Von Neumann Shows the Way

Living matter seems to be playing an entirely different game than non-living matter, even though they are both made from the same stuff. Why is living matter different from non-living matter? Is it simply cheating, somehow violating the physical laws that we’ve come to understand govern non-living matter? Pattee argues that living matter is distinguished from non-living matter by its ability to replicate and to evolve over the course of time. So what does it take to replicate and evolve?

John von Neumann was a Hungarian-born mathematical genius and an electrifying bon vivant whose intellectual contributions were as vast as his appetite for life. Born into Jewish aristocracy in Budapest, he died receiving last rites from a Catholic priest; he quipped that he had adopted Pascal’s wager!* In the intervening years, he moved to Princeton’s Institute for Advanced Study, where he reportedly drove Einstein crazy playing German marching music on his gramophone at full volume.

The intellectual atmosphere of the times was vibrant. Schrödinger had delivered his history-making “What Is Life?” lectures in Dublin in 1943, in which he made the suggestion that a “code script” was somehow captured in the molecular mechanisms of a cell. By the late 1940s, von Neumann had assigned himself the question of life as a thought experiment as well. What is life? Well, what do living things do? One answer is, they reproduce. Life makes more life. However, logic told him that “what goes on is actually one degree better than self-reproduction, for organisms appear to have gotten more elaborate in the course of time.”9

Life did not just make more life. Life could increase in complexity; it could evolve. Von Neumann became increasingly interested in what an evolvable, autonomous, self-replicating machine (“an automaton”) would logically require when placed in an environment with which it could interact. His string of logic led him to the conclusion that the automaton needed a description of how to copy itself and a description of how to copy that description so it could hand it off to the next, freshly minted automaton. The original automaton also needed a mechanism to do the actual construction and copy job. It needed information and construction. However, this would cover only replication. Von Neumann reasoned that he had to add something in order for the automaton to be able to evolve, to increase in complexity. He concluded that it needed a symbolic self-description, a genotype, a physical structure independent of the structure it was describing, the phenotype. Linking the symbolic description with what it refers to would require a code, and now his automatons would be able to evolve. We will see why in a bit.
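Von Neumann’s requirements (a symbolic self-description, a constructor that reads it, and a copier that duplicates it for the offspring) can be caricatured in a few lines of code. This is a toy sketch, not von Neumann’s actual automaton; the two-symbol alphabet, the part names, and the error rate are assumptions, chosen to show where evolvability enters: imperfect copying of the description.

```python
import random

SYMBOLS = "AB"  # assumed two-symbol hereditary alphabet

def construct(description):
    """The construction job: build a phenotype by interpreting each symbol."""
    return [{"A": "part-a", "B": "part-b"}[s] for s in description]

def copy(description, error_rate=0.01):
    """The copy job: duplicate the description, with rare errors (mutations)."""
    out = []
    for s in description:
        out.append(random.choice(SYMBOLS) if random.random() < error_rate else s)
    return "".join(out)

def reproduce(description):
    """One generation: build the offspring AND hand it the copied description."""
    offspring_phenotype = construct(description)
    offspring_description = copy(description)
    return offspring_description, offspring_phenotype
```

Note that the description (genotype) and the thing built from it (phenotype) are kept strictly separate, and that only the description is copied forward; the occasional copying error is what lets later generations differ from, and potentially exceed, their ancestors.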

It turned out that von Neumann was right on the money. He correctly predicted how cells actually replicate before Watson and Crick discovered the structure of DNA. From the get-go, at the origin of life, at the single-molecule level, when DNA was still a twinkle in Mother Nature’s eye, evolvable self-replication depended on two things: (1) the writing and reading of hereditary records, which were in some type of symbolic form, and (2) the sharp distinction between the description and the construction processes. After this little thought experiment, von Neumann was off to other endeavors and puzzles. However, von Neumann left his job half done: He did not address the physical requirements for implementing his logic. Rubbing his hands together, Pattee took up the challenge.

The Physics of Symbols: Pattee Presses Onward

We tend to think of symbols as being abstract, not something that is subject to physics. However, as scientists, we are physical beings looking for physical evidence that abides by physical rules and laws. There must be a physical manifestation of von Neumann’s symbols. What Pattee calls the physics of symbols produces certain problems. The first problem lies in the writing and reading of hereditary records, the informational description. A description involves a recording process, and, as we learned in the preceding section, a record is an irreversible measurement that requires some measurer. Pattee realized that the informational description at the origin of life comes face-to-face with the measurement problem in quantum mechanics. Measurements are subjective, meaning they can’t be described by objective laws, whether they be quantum or classical. Any living thing that “records” information is introducing a form of subjectivity into the system.

The second problem is the relationship between the genotype and the phenotype. For example, when we consider DNA, the genotype is the DNA sequence that contains instructions for the living organism. The phenotype is the observable characteristics of an organism, such as its anatomy, biochemistry, physiology, and behavior. The genotype interacts with the environment to produce the phenotype. To put this in an everyday situation, consider the blueprint as a house’s genotype and the actual house its phenotype. The phenotypic construction process is the building of the house using the blueprint as information about what and how to do it. The phenotype is related to the genotype that describes it, but there is a world of physical difference between the genotype and the phenotype and even the phenotypic construction process. For one, the genotype is non-dynamic; it is a quiescent, one-dimensional sequence of symbols (DNA’s symbols are nucleotides) that has no energy or time constraints. Like a blueprint, it can sit around for years, as you have probably learned from watching CSI. The genotype dictates what should be constructed (perhaps a really cute dog), but the DNA itself does not look or act anything like a cute dog. On the other hand, the phenotype (the cute dog) is dynamic and uses energy, especially if it is a border collie.

The phenotypic construction process is also related to the genotype. Just as a blueprint constrains the builder from adding turrets to a house, the genotype constrains how many tails that cute dog is going to have. How do these relate? What is the relationship between a blueprint, a house, and the intervening pouring of concrete and pounding of nails? The hereditary records contain not only information that specifies what to build but also information that specifies how to build it. Somehow, the information about what and how has been “recorded” in some type of symbolic form. There is a gap between the subjectively recorded symbol (genotype) and the phenotypic construction process and the phenotype. The symbols have to be translated into their meaning for construction to begin. If we think of a layered architecture, this would be the protocol between two layers. Pattee proposes that it was from the control interface between these two layers that the epistemic cut arose. In the case of DNA, the bridge between the genotype and phenotype is the genetic code. In the case of the blueprint, the bridge is the building contractor translating the blueprint for the construction workers.

Pattee extended von Neumann’s logic by contending that the symbols themselves, which make up the instructions (the hereditary record), must have a material structure and that the material structure, during the phenotypic construction process (the building of the new automaton), constrains the process in a way that follows Newton’s laws. There are no magic tricks here. The symbols are actual physical structures, a chain of nucleotides that obey the classical laws of physics.

Here’s the kicker: a symbol, whether it is a sequence of nucleotides in DNA, a sequence of Morse code, or a sequence of mental simulations, is arbitrary. The arbitrariness of symbols can be easily grasped when looking at the ever-changing world of slang. For example, “Benjamins,” “simoleons,” and “dough” have all been popular, though arbitrary, symbols for money. And each language has its own set of symbols, as the comedian Steve Martin once warned anyone traveling to Paris, “Chapeau means hat, oeuf means egg. It’s like those French have a different word for everything.”10 The problem is, Newton doesn’t do arbitrary. If Newton’s inflexible laws ruled symbols, then every person, all of the world, would use the exact same word to represent the concept of money, every time, for all of eternity. Sadly for Newton, there are numerous possible symbols that can be selected to convey information. Each would have different properties, different pros and cons, but since the symbols are distinct from the thing itself, there is not a one-to-one mapping.

You may object that DNA is not arbitrary in the same way that language is, arguing that physicochemical constraints determine its structure. But the selection of a symbol is not governed by physical laws but by a rule: select the symbol that carries the most useful and reliable (stable) information for the system. We will see in a bit that the components of DNA itself were selected from a range of competitors for doing a better job constraining the function of the system they belong to. And if a symbol is stable, it can be transmitted. The current components of DNA are what Pattee refers to as frozen accidents. The current symbol embodies the history of its successful versions over multiple timescales (time independent), not that of its current action. So, back to money: if only a couple of people refer to money as “Bettys,” it is not a reliable symbol and won’t be selected or transmitted.

This is a bit confusing because in our social world, we often use “rules” and “laws” interchangeably. We call such things as the rules for driving “laws.” Pattee explains that there is a basic and extremely important distinction between laws and rules in nature.11 Laws are inexorable, meaning they are unchangeable, inescapable, and inevitable. We can never alter or evade laws of nature. The laws of nature dictate that a car will stay in motion until an opposing force stops it or it runs out of energy. That is not something we can change. Laws are incorporeal, meaning they do not need embodiments or structures to execute them: there is not a physics policeman enforcing the car’s halt when it runs out of energy. Laws are also universal: they hold at all times in all places. The laws of motion apply whether you are in Scotland or in Spain.

On the other hand, rules are arbitrary and can be changed. In the British Isles, the driving rule is to drive on the left side of the road. Continental Europe’s driving rule is to drive on the right side of the road. Rules are dependent on some sort of structure or constraint to execute them. In this case that structure is a police force that fines those who break the rules by driving on the wrong side. Rules are local, meaning that they can exist only when and where there are physical structures to enforce them. If you live out in the middle of the Australian outback, you are in charge. Drive on either side. There is no structure in place to restrain you! Rules are local and changeable and breakable. A rule-governed symbol is selected from a range of competitors for doing a better job constraining the function of the system it belongs to, leading to the production of a more successful phenotype. Selection is flexible; Newton’s laws are not. In their informational role, symbols aren’t dependent on the physical laws that govern energy, time, and rates of change. They follow none of Newton’s laws. They are lawless rule-followers! What this is telling us is that symbols are not locked to their meanings.

Symbols lead a double life, with two different complementary modes of description depending on the job they are doing. In one life, symbols are made of physical material (DNA is made of hydrogen, oxygen, carbon, nitrogen, and phosphorus) that follows Newton’s laws and constrains the building process by its physical structure. However, in the other life, as repositories of information, the symbols ignore these laws. The double life of symbols has largely been ignored. Those interested in information processing ignore the objective material side, the physical manifestation of the symbol. Molecular biologists and determinists, interested in only the material side, ignore the subjective symbolic side. By claiming just one aspect, neither studies their full, complementary character. That is not only a shame but a scientific travesty, because, as we discussed earlier, for a self-reproducing and evolvable form of life to exist, physical symbols must perform both roles. Pattee argues that either one alone is insufficient. Ignoring either side of the link means missing the link altogether. He boldly states, “It is precisely this natural symbol-matter articulation that makes life distinct from non-living physical systems.”12

The Genetic Code Is a Real Code

To better understand this symbol-matter articulation and what its implications are for our quest, let’s look closely at DNA, which best exemplifies the symbol-matter structure in a living system. First, however, we need a quick primer in biosemiotics to understand symbols in living systems. Our guide is Marcello Barbieri, a theoretical biologist from the University of Ferrara.

Semiotics is the study of signs (a.k.a. symbols) and their meanings. Basic to the field is that, by definition, a sign is always linked to a meaning. As we already grasped from Steve Martin and his problems in Paris, there is no deterministic relationship between a sign and its meaning. An egg is an egg, whether it resides in the United States or in France, but we can call it different things. The object is distinct from its symbolic representation (the sound “egg” or “oeuf”) and our understanding of the symbol. Barbieri notes that the relationship between a sign and its meaning is established by a code, a conventional set of rules that establish the correspondence between signs and their meanings. The code is produced by some agent, the codemaker. A semiotic system originates with the codemaker making the code. Thus, Barbieri notes, “a semiotic system is a triad of signs, meanings, and code that are all produced by the same agent, i.e., by the same codemaker.”13

Biosemiotics is the study of signs and codes in living systems. Foundational to the field is the notion that “the existence of the genetic code implies that every cell is a semiotic system.”14 Barbieri states that modern biology has not accepted this fundamental premise of biosemiotics, because there are three concepts at the heart of modern biology that are not compatible with it. First is the description of the cell as a computer. In this metaphor, genes (biological information) are seen as the software and the proteins the hardware. Computers have codes, but they are not semiotic systems, because the codes come from outside the system via a codemaker, and, as we learned above, a semiotic system includes the codemaker. The cell-as-computer concept also contends that the genetic code arose from a codemaker outside the system—natural selection. Under this description, living things are not semiotic systems and “genetic code” is simply a metaphor.

The second concept at issue between modern biology and biosemiotics is physicalism, the notion that everything is reducible to physical quantities. Biologists demand that things (DNA, molecules, cells, organisms) abide by laws that determine their behavior. A semiotic code has the nondeterminist, wishy-washy aspect of rules, not deterministic physical laws linking the symbols inexorably to their meaning. The third source of discord is the conviction (or lack thereof) that all biological innovation is the result of natural selection.

Barbieri argues that biologists are overlooking something basic when making those fundamental assumptions: they are ignoring the origin of life. Evolution by natural selection requires the copying of genetic records and the construction of proteins, but these processes themselves had to originate somehow. Barbieri points out that genes and proteins in living systems are fundamentally different from all other molecules, primarily because they are produced in a totally different way.

The structure of molecules in the inorganic world, the world of objects such as computers and rocks, is determined by the bonds that form spontaneously between their atoms. The bonds themselves are determined by internal factors, the chemical and physical characteristics inherent to the atoms. Happily deterministic.

Not so in living systems. Genes are elaborate strings of nucleotides, and proteins are elaborate strings of amino acids. These strings do not come together spontaneously in a cell. It is not love at first sight drawing them together by an irresistible chemistry. Instead, they are cobbled together by the actions of an entire class of molecules, a whole system of ribonucleic acid (RNA) and protein matchmakers that help them. Barbieri points out that this is highly significant for its implications concerning the origin of life.

Primitive “bondmaker” molecules, early precursors of the RNA system, which bind nucleotides together, came into existence way before the first cells. So, too, did bondmaker molecules that developed the ability to join nucleotides together following a template—“copymakers.” These bondmakers and copymakers came into existence through random molecular re-sorting. It was the existence of copymaker molecules that set the process of evolution into motion. Natural selection chiseled living things into existence, but the requisite molecules for evolution—the bondmakers and copymakers—existed before life itself.

Barbieri chides that “natural selection is the long-term result of molecular copying and would be the sole mechanism of evolution if copying were the sole basic mechanism of life.”15 But it isn’t. While genes can be their own template and copy themselves, proteins cannot. Proteins cannot be made by copying other proteins. The tricky thing is that only molecules that can copy can be inherited, so the information about how to make the proteins had to come from the genes. Barbieri notes that the outstanding feature of the very early protein makers “was the ability to ensure a specific correspondence between genes and proteins, because without it there would be no biological specificity, and without specificity there would be no heredity and no reproduction. Life, as we know it, simply would not exist without a specific correspondence between genes and proteins.”16 The specific correspondence he is talking about is a code. The code had to be there first, before natural selection was set in motion.

Here’s the interesting thing for us: if that correspondence were not a code, but determined by stereochemistry, which is what was first assumed, it would be automatic—and thus deterministic. But that is not the mechanism, which was a surprise to biologists. The bridge between the genes and the amino acid sequences they code for, which make up proteins, is provided by transfer RNA molecules. These molecules have two separate recognition sites: one is for a codon (a group of three nucleotides), and another for an amino acid, thus binding the two. This, too, could be the setup for an automatic correspondence between a codon and a specific amino acid if one recognition site physically determined what bound to the other, but it doesn’t. The two sites are independent of each other and physically separated. Barbieri notes, “There simply is no necessary link between codons and amino acids, and a specific correspondence between them can only be the result of conventional rules. Only a real code, in short, could guarantee biological specificity, and this means that in no way the genetic code can be dismissed as a linguistic metaphor.” So he leaves us with this conclusion: “The cell is a true semiotic system because it contains all the essential features of such systems, i.e., signs, meanings and code all produced by the same codemaker.”17
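Barbieri’s point, that the genetic code is a lookup convention rather than a chemical necessity, can be made concrete. In the sketch below, a tRNA is modeled as a pair of independent recognition sites, one for a codon and one for an amino acid; the four pairings are real entries from the standard genetic code, but nothing in the model forces those particular pairings, and any other consistent table would function as a code just as well.

```python
# Each tuple is a toy tRNA: (codon recognition site, amino acid recognition site).
# The two sites are independent; the pairing itself is the convention.
# These four assignments are real entries from the standard genetic code.
TRNA_POOL = [
    ("AUG", "Met"),  # methionine (also the start codon)
    ("UUU", "Phe"),  # phenylalanine
    ("GGC", "Gly"),  # glycine
    ("UGG", "Trp"),  # tryptophan
]

# The code is nothing more than the set of pairings the tRNA pool embodies.
GENETIC_CODE = dict(TRNA_POOL)

def translate(mrna):
    """Read the message three nucleotides at a time and look each codon up."""
    return [GENETIC_CODE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]
```

The translation step is pure table lookup: swap in a different tRNA pool and the same DNA sequence would yield a different protein, which is exactly what makes the correspondence a code rather than a law.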

Fresh evidence for such a biosemiotic system, evidence that contradicts some of modern biology’s foundational concepts, is hot off the press. Scientists have recently found that cephalopods (the class that includes the octopus) can recode their RNA. RNA molecules have the privilege of establishing codes with DNA (in the part of the RNA that recognizes the three-nucleotide DNA codon sequence) and also with proteins (in the separate part of the RNA that recognizes the amino acid). Recoding the RNA means that new proteins can be constructed while the DNA sequence of symbols stays the same. The collective result is the destruction of the one-to-one gene-to-protein correspondence. Recoding allows a single octopus gene to produce many different types of proteins from the same DNA sequence.18 This is a big deal. It is evidence against the three concepts in biology that dismiss semiotic systems in living organisms. The system can change its code. The system has an internal codemaker that can produce biological innovations—new proteins—but not via natural selection. It illustrates the arbitrariness of the connection of a symbol with its meaning in a living system.
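The mechanics of recoding can be sketched in a few lines. The sequences and the edit site below are hypothetical, chosen only to illustrate the principle: an edit of the kind cephalopods perform (adenosine is chemically converted so that it is read as guanosine) changes one codon in the messenger RNA, so a single, unchanged DNA sequence can yield more than one protein.

```python
# Illustrative sketch, not real octopus data: one fixed DNA sequence
# yields two different proteins once the RNA is recoded.

CODE = {  # a few entries of the standard genetic code
    "AUG": "Met", "CAA": "Gln", "CGA": "Arg", "GGC": "Gly", "UGG": "Trp",
}

def transcribe(dna):
    """DNA coding strand -> messenger RNA (T becomes U)."""
    return dna.replace("T", "U")

def edit_a_to_g(rna, position):
    """Model one recoding event: an A at this site is read as G."""
    assert rna[position] == "A"
    return rna[:position] + "G" + rna[position + 1:]

def translate(rna):
    """Read the RNA three letters at a time into amino acids."""
    return [CODE[rna[i:i + 3]] for i in range(0, len(rna), 3)]

gene = "ATGCAAGGCTGG"                  # the DNA never changes
rna = transcribe(gene)                  # "AUGCAAGGCUGG"
print(translate(rna))                   # ['Met', 'Gln', 'Gly', 'Trp']
print(translate(edit_a_to_g(rna, 4)))   # ['Met', 'Arg', 'Gly', 'Trp']
```

The point of the sketch is the last two lines: same gene, two proteins, because the code was applied to an edited symbol string rather than to a fixed one.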

If symbols within living systems are arbitrary and RNA is the codemaker, why the preoccupation with DNA? Why has DNA held a monopoly on molecular symbolism for hundreds of millions of years? In its physical manifestation, DNA is extremely stable structurally, unlike RNA. This has helped DNA remain the symbolic structure of choice throughout evolution. However, while the DNA in our cells and in the cells of other living organisms is now very stable, the structure of DNA did not start out that way at the very origin of life. Random shuffling and re-sorting of molecules, through the irreversible and probabilistic process of natural selection, generated molecules resembling nucleotide bases. Through subsequent shuffling, successful DNA components and sequences survived and replicated.

However, what do we mean by “successful” when we talk about DNA? DNA is made up of four different nucleotides. Genes are strings of particular nucleotide combinations that act as the symbolic description, the recipe, for making proteins. What would make a DNA sequence successful? Do we mean successful in remaining physically stable over the lifetime of an organism? Or do we mean successful in reliably encoding information for the replication of the organism? We mean both. In its material (objective) mode, DNA, the hereditary memory structure, abides by Newton’s laws and remains thermodynamically stable in the aqueous environment of the cell, thanks to the properties of its nucleotide bases. However, in its informational (subjective) mode, DNA follows rules, not the laws of physics. The sequences of bases that survive have been selected by evolution according to a rule: pick the most reliable and useful information for the organism’s survival and reproduction. The nucleotides that make up DNA and carry information in symbolic form were selected and, although they are arbitrary, have been conserved in a stable form through the process of evolution for doing and continuing to do a good job, unlike professors who can coast after getting tenure.

During protein synthesis, those nucleotides are read and translated into linear strings of amino acids (which make up enzymes and other proteins) by a rule-governed process. The set of rules is called the genetic code. The DNA contains the sequence, but the code is implemented by RNA molecules. Certain DNA sequences, called codons, each made up of three nucleotides, symbolize particular amino acids. There is no ambiguity, but there is also not just one codon for each amino acid. For example, six different codons symbolize arginine, but only one codon symbolizes tryptophan. Yet the components of the DNA sequence (the symbol) do not resemble the components of the amino acid sequence (its meaning), just as the words that symbolize the components of a recipe do not resemble the components themselves.
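A toy lookup table makes the point concrete. The entries below are a genuine fragment of the standard genetic code, and the table behaves like a rule book, not a law of chemistry: the relation between symbol and meaning is many-to-one (six codons for arginine, one for tryptophan) and unambiguous only in the codon-to-amino-acid direction.

```python
# A fragment of the standard genetic code, written as what it is:
# a rule table mapping codons (symbols) to amino acids (meanings).
GENETIC_CODE = {
    "CGU": "Arg", "CGC": "Arg", "CGA": "Arg",  # six codons all
    "CGG": "Arg", "AGA": "Arg", "AGG": "Arg",  # symbolize arginine...
    "UGG": "Trp",                              # ...tryptophan has one
    "AUG": "Met",                              # methionine (also "start")
}

def translate(rna):
    """Read an mRNA string codon by codon, three letters at a time."""
    return [GENETIC_CODE[rna[i:i + 3]] for i in range(0, len(rna), 3)]

def codons_for(amino_acid):
    """Invert the table: which symbols stand for this meaning?"""
    return sorted(c for c, aa in GENETIC_CODE.items() if aa == amino_acid)

print(translate("AUGCGUAGGUGG"))  # ['Met', 'Arg', 'Arg', 'Trp']
print(len(codons_for("Arg")))     # 6 -> redundant, yet unambiguous
print(codons_for("Trp"))          # ['UGG']
```

Nothing in the chemistry of the letters C, G, and U resembles arginine; the mapping holds only because the cell’s machinery enforces it, which is exactly what makes it a code rather than a physical law.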

When a sequence of DNA has been translated into a chain of amino acids, that is the (temporary) end of DNA’s instructional activity. It is not the end, however, of the constraints this chain of symbols has placed on the material structure of those amino acids. After the amino acid chain is constructed (remember, the amino acids do not bond to one another spontaneously), it folds itself, forming weak chemical bonds between its amino acids that act almost like weak magnets. Which bonds form and how the chain folds depend on which amino acid is where, all dictated by the symbolic description. This is the tricky part. Once those amino acids are placed, the bonds that form are determined by physical laws. Certain amino acids hide from water, while others love it; certain amino acids stick to one another, sometimes quite ardently. The interaction of the amino acid chain with its environment folds the chain into a three-dimensional structure, a protein.19 Folding is what transforms the rule-following, one-dimensional sequence of amino acids into a law-abiding three-dimensional, dynamic, and functional control structure (a protein).

Proteins of course obey the causal laws of physics and chemistry. Yet arbitrary, symbolic information in the DNA sequences is what determines both the material composition and the biochemical function of proteins. Pretty impressive. DNA is the primeval example of symbolic information (the nucleotide sequences) controlling material function (the action of enzymes), linked by a rule-governed code, just as von Neumann had predicted must exist for evolving, self-reproducing automatons. But wait: What made the protein? DNA had the information that was decoded to make the protein, but what kicked off the process? Answer: another protein. The DNA strands had to be ripped apart by an enzyme (a protein) to get the whole replication process going in the first place. It was a newly minted enzyme that pried those DNA strands apart. It’s the old chicken-and-egg problem: without catalytic enzymes to break apart the DNA strands, DNA is simply an inert message that cannot be replicated, transcribed, or translated, but without DNA there would be no catalytic enzymes. Bohr’s complementarity—two complementary parts, two modes of description, making up a single system.

Von Neumann, in his thought experiment about self-replication, had written that he had avoided the “most intriguing, exciting, and important question of why the molecules or aggregates that in nature really occur … are the sorts of thing they are, why they are essentially very large molecules in some cases but large aggregations in other cases.”20 Pattee suggested that it is the very size of the molecules that ties the quantum and classical worlds together: “Enzymes are small enough to take advantage of quantum coherence to attain the enormous catalytic power on which life depends, but large enough to attain high specificity and arbitrariness in producing effectively decoherent products that can function as classical structures.”21 Quantum coherence basically means that subatomic particles sync together to “cooperate” in producing decoherent products, that is, structures that no longer display quantum behavior. Pattee notes that there is now research supporting his proposal that enzymes require quantum effects22 and that life would be impossible in a strictly quantum world.23 Both layers are needed: a quantum layer and a classical physical layer.

The Snake Eating Its Tail: Semiotic Closure

Von Neumann made it clear that his automaton would need to replicate. In order to self-replicate, the boundaries of the self must be specified. To make a “self,” you need parts that implement description, translation, and construction. To make another self, you need to describe, translate, and construct the parts that describe, translate, and construct. This self-referential loop is not just a headache. It amounts to a logical closure that, in fact, defines a “self.”

Pattee calls the physical conditions that are required for this exceptional interdependence of symbol-matter-function semiotic closure. He emphasizes that to physically execute this closure, the symbolic instructions must have a material structure. There can be no ghost in the system, and the physical structure must constrain all the lawful dynamic processes of construction following Newton’s laws. The closing of the semiotic loop, the physical bonding of the molecules, is what defines the limits of the “self,” the subject, in “self-replication.” No random structures floating around are being incorporated; the limits have been set. This does not imply that the cell is somehow self-aware. However, there can be no self-awareness without a self. The first steps must be toward a delimited self. The subsequent destinations of self-awareness, self-control, self-experience, self-consciousness, and self-absorption are all farther down the road.

Semiotic closure must be present in all cells that self-replicate. Sure, “the self” became more elaborate through evolutionary processes, but even a cell follows Dirty Harry’s advice and “knows its limitations.” Whatever complex physical processes close the symbol-matter loop, they are the bridge that spans the physicist’s Schnitt, the explanatory gap, the chasm between the subject and object. They are the protocol between the quantum layer and the Newtonian layer. The processes that close the symbol-matter loop unite the two modes of description, spanning the gap that originated at the origin of life. The implication is that the gap between subjective conscious experience and the objective neural firings of our physical brains, those two modes of description, may be bridged by a similar set of processes, and it could even be possible that they are occurring inside cells.

The Surrender and the Truce

In the early days of quantum physics, Niels Bohr presented the principle of complementarity as a white flag, an attempt to explain the dual nature of light (wave-particle duality). The complementarity principle accepts both objective causal laws and subjective measurement rules as fundamental to the explanation of the phenomena. Bohr emphasized that while two modes of description were necessary, this did not correspond to a duality of the system under observation. The system itself was unified. It was both at the same time. Two sides of one coin.

This is what makes it tricky for us to understand, if we even do. Indeed, Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” Bohr made an analogy with the distinction between subject and object that extended all the way to mind and matter in his 1927 Como Lecture: “I hope … that the idea of complementarity is suited to characterize the situation, which bears a deep-going analogy to the general difficulty in the formation of human ideas, inherent in the distinction between subject and object.”24

Pattee, however, is bolder. He sees more than an analogy. He sees complementarity as an epistemological necessity that began at the origin of life and extends to all evolved levels. Its essence is not merely the recognition of the subject/object split, but “the apparently paradoxical articulation of the two modes of knowing.”25 That paradox has had philosophers and scientists in a dualist hullabaloo for more than a couple of thousand years. If they keep going at it as they are currently, they’ll fight for a couple of thousand more. The two modes of investigation, the two phenomena they unearthed, are not described by the same set of physical laws. Pattee chuckles that the objective mode has led “reductionists to claim that life is nothing but ordinary physics, which indeed it is as long as one is not willing to consider the subjective problems of measurements and descriptions.… What the principle of complementarity says is that using only this one objective mode of description not even physics is reducible to this mode!”26

Just as scientists and philosophers had to accept the fact that the world wasn’t flat, we are going to have to deal with the principle of complementarity as it applies to the mind and brain. The principle of complementarity is still controversial because it butts heads with the belief that the best explanation of something is a single explanation. Yet the single-explanation fallacy fizzled a hundred years ago in physics with the discovery of the quantum world. The micro world follows different laws than the macro world. They inhabit different layers of description, and one is not reducible to the other.

Those who subscribe to the single-explanation gold standard simply are ignoring the realities of physics. Pattee laments that complementarity’s “acceptance in quantum mechanics only came about because of the failure of every other interpretation.”27 This echoes Sherlock Holmes’s famous saying “When you have eliminated the impossible, whatever remains, no matter how improbable, must be the truth.” Pattee wonders if complementarity’s acceptance into biological and social theories awaits the same agonizing fate. As Richard Feynman once quipped, “You don’t like it then go somewhere else.… Go to another universe where the rules are simpler, philosophy more pleasing, more psychologically easy.”28 Just because you don’t like the idea doesn’t mean it isn’t the way things are.

Summing Up

Living matter is distinct from inanimate matter because it has taken an entirely different course. Inanimate matter abides by physical laws. Life from the get-go has thrown its lot in with rules, codes, and the arbitrariness of symbolic information. The distinction between, and the interdependence of, symbolic information and matter has made open-ended evolution possible, resulting in life as we know it. Information of past successful events was cached in records made up of symbols. These records are themselves measurements that are inherently probabilistic in nature. Nonetheless, life is dependent on these arbitrary, probabilistic symbols for its own material construction in the physical world. The inherent arbitrariness of symbols and measurements provides some spice, that is, some element of unpredictability, and is combined with the predictable menu of physical laws, resulting in life becoming both increasingly ordered and increasingly complex over time.

This distinction between subject and object is not just an interesting oddity. It begins at the level of physics, in the distinction between the probability inherent in symbolic measurements and the certainty of material laws. The distinction is later exemplified in the difference between a genotype, the sequence of nucleotide symbols that makes up an organism’s DNA, and its phenotype, the actual physical structure that those symbols prescribe. It travels with us up the evolutionary layers to the distinction between the mind and the brain.

For the past twenty-five hundred years, the discussions about thought and consciousness have focused on the human and, more recently, on the fully evolved human brain. This has gotten us no further across the explanatory gap. It is time that we start exploring Howard Pattee’s gap between living and non-living matter. If we determine how it was bridged, how life achieved semiotic closure, perhaps we can understand how to bridge the explanatory gap between the mind and brain. We even have support for this idea from William James, who went so far as to consider what he called the theory of polyzoism: “Every brain-cell has its own individual consciousness, which no other cell knows anything about, all individual consciousnesses being ‘ejective’ to each other.”29 The individual cell had some very rudimentary process that connected a subjective “self” with the objective mechanics. Semiotic closure, the link that spans the gap between living and non-living matter, is present within all cells. By realizing this and attempting to understand the processes involved, we may begin to seek an understanding of consciousness from a different perspective and look for it in different places. I am not suggesting that single cells are conscious. I am suggesting that they may have some type of processing that is necessary for, or similar to, the processing that results in conscious experience.

The explanatory gap has stumped us because the subjective experiences of the mind have resisted being reduced to neural firings of brain matter. They appear to be two irreducible complementary properties of a single system. We know that no matter how much is learned by objective external observers about the brain’s structure, function, activities, and neural firings, the subject’s experience of these firings is quite different from any observation of them. The details of the neurons’ firing, or even that there are neurons firing, are not part of the subject’s experience or intuitions. The objective workings of perception, thinking, and so forth are not available to the person doing the perceiving and thinking. As we discussed in the chapter on layering, those details are not necessary for the person and are hidden, abstracted from view. Furthermore, the function of the neurons can’t be derived from their structure alone, nor can their structure be derived from their function. Knowing all about one is not going to tell you anything about the other. They are two separate, irreducible layers with different protocols. Pattee believes that this is part and parcel of the complementarity principle, and that a single model cannot explain both objective structure and subjective function. The epistemic cut, the subject/object cut, is alive and well at the level of the human brain. Pattee states that “our models of living organisms will never eliminate the distinction between the self and the universe, because life began with this separation and evolution requires it.”30

It should therefore not surprise us that two complementary modes of behavior, two levels of description, keep appearing in our thinking. The subject/object cut is present in all the great philosophical debates: random/predictable, experience/observation, individual/group, nurture/nature, and mind/brain. Pattee regards the two complementary modes as inescapable and necessary for any explanation that links the subjective and objective models of experience. The two models are inherent in life itself, were present from the beginning, and have been conserved by evolution. Pattee writes, “This is a universal and irreducible complementarity. Neither model can derive the other or be reduced to the other. By the same logic that a detailed objective model of a measuring device cannot produce a subject’s measurement, so a detailed objective model of a material brain cannot produce a subject’s thought.”31

Ignoring one side of the gap will result in missing the link between the two sides. Linking the two requires acknowledging the dual and complementary nature of symbols. The link will consist of mechanisms that are describable by physics, yet the explanation may not prove warm and cuddly, not psychologically satisfying to anyone, neither to determinists nor to believers in spirits. It may be, just like quantum mechanics, something that nobody quite understands, way beyond our intuitions and imaginations. Feynman chided, “We are not to tell nature what she is going to be. That’s what we found out. Every time we take a guess at how she’s got to be and go and measure, she’s clever. She always has a better imagination than we have and she finds a cleverer way to do it that we haven’t thought of.”32