10.

CONSCIOUSNESS IS AN INSTINCT

One man’s “magic” is another man’s engineering.

—Robert A. Heinlein

BACK IN MY YOUTH, right when I was beginning my graduate studies at Caltech, I became friends with the political philosopher Willmoore Kendall. He was the original disruptive personality, so exasperating that the Yale administration had paid him a large sum of money to resign. Kendall rocked all of their assumptions on just about everything and then headed out west. The West was not unfamiliar to him: he had been born in Konawa, Oklahoma, the son of a blind minister. Ultimately, after Yale, he settled down at a small Jesuit school in Dallas. His appetite for life had no bounds, and it came from a man with a sure-footed ego. On the day JFK was shot in Dallas, I got a call from him. Kendall declared, “I never before have been at a lunch where the president of the United States spoke. I should have known something would happen.”

Kendall kept nudging my thinking, as he saw me as a victim of modern reductionist thought gone wild. I was, and to this day remain, committed to the idea that physical mechanism can and will explain almost everything. By and large, when philosophers start parsing foundational thinking, laboratory scientists’ eyes glaze over. Kendall was battling someone who wasn’t up to the deep fight and indeed was barely aware that there were issues. Since I kept lending him my apartment during his day visits to Pasadena, he decided he should reciprocate and provide me with a larger education. He directed me to read Michael Polanyi’s classic book Personal Knowledge, which grew out of Polanyi’s 1951 Gifford Lectures. I did. Ever since, it has hovered in the recesses of my mind and in the recesses of my bookshelf, dragged back and forth across the country umpteen times.

Polanyi, Kendall used to tell me, was a true polymath. During sick leave while serving as a physician on the Serbian front in 1916, he wrote his chemistry PhD thesis. Although he held a chair in physical chemistry at the University of Manchester, his wide-ranging interests in economics, politics, and philosophy resulted in the university creating a chair for him in social science. “You know,” Kendall commented, “he answers his correspondence each day in twelve different languages.” A giant in his time, Polanyi, though based in England, was a regular visiting lecturer at the University of Chicago. It was reading his book that, for me, raised the thorny problem that knowing the parts of something doesn’t always tell you what the whole might be. There is something else going on, something missing, and I think this framing gave rise to what I will call the Chicago school of thought, that is, a particular perspective on brain processes.

That something was missing was the result of the machine metaphor, which had been around since Descartes and had been swallowed hook, line, and sinker by biologists. The Chicago scientists had realized that the traditional deterministic classical machine analogy for life is exactly backward. Brains aren’t like machines; machines are like brains with something missing. Polanyi pointed out that humans evolved through natural selection, whereas machines are made by humans. They exist only as the product of highly evolved living matter, and are the end product of evolution, not the beginning.

The Chicago perspective also launched the idea that the origin of life depended on two complementary modes of description, not just the description provided by classical physics that works so well for machines. Pattee summed it up thus: “Life itself could not exist if it depended on such classical descriptions or on performing its own internal recording processes in this classical way.”1 This followed up on Rosen, who had really shaken things up earlier by asking, “Why could it not be that the ‘universals’ of physics are only so on a small and special (if inordinately prominent) class of material systems, a class to which organisms are too general to belong? What if physics is the particular, and biology the general, instead of the other way around?”2

The battle against pure reductionism started with Polanyi and the University of Chicago professor Nicolas Rashevsky, the father of mathematical biophysics and theoretical biology, who was an unlikely soldier to battle against reductionism. He had initially taken up the problem of establishing the material basis of basic biological phenomena in general, after being outraged at a party when a biologist told him that nobody knew how cells divided, and it was something no one could know because it was biology, outside the pale of physics. After a stunning amount of work on the problem in the 1930s and ’40s, he was growing uneasy. As Rosen, his student, describes, “He had asked himself the basic question ‘What is life?’ and approached it from a viewpoint tacitly as reductionistic as any of today’s molecular biologists. The trouble was that, by dealing with individual functions of organisms, and capturing these aspects in separate models and formalisms, he had somehow lost the organisms themselves and could not get them back.”3 He came to the realization that “no collection of separate descriptions (i.e., models) of organisms, however comprehensive, could be pasted together to capture the organism itself.… Some new principle was needed if this purpose was to be accomplished.”4 Rashevsky dubbed that pursuit of the new principle relational biology. In many ways my mentor Roger Sperry, who was trained at the University of Chicago, took up the search as well. These ideas, as we have seen, also deeply influenced Howard Pattee, who receives the credit for bringing this line of reasoning into the present.

The gnawing message from the early Chicago story was: there is something else that needs to be accounted for when considering an organism. Mechanistic thinking is fine and teaches us all about the parts, the layers of automatic processes that are tirelessly at work in any organism in order to allow it to exist. But there is something else, another factor that needs to be understood, and that something is not a spook in the system. It is the system, the organism itself, that can modulate the lower layers that produce it. It is what answers the question “What is life?”

As Rosen argues, science always inserts a surrogate (a model) for the actual thing it is trying to study. With the surrogate, scientists can use all the methods of reductionist science and figure out how the parts work. The assumption is that the surrogate can substitute for the real thing. But when they go back to the real thing after working on the surrogate and try to plug in their findings, they usually fall short. For example, studying the pancreas alone in a dish underneath a microscope or in a test tube is one thing. It can teach us how the organ functions locally. But unless you study it all connected up with the body, you are not going to understand its real function, or how it works in concert with and is modulated by a distant system, in this case a piece of the intestine. The fact that the functions of the pancreas and the intestine are entwined was not stumbled on until surgeons started doing gastric-bypass surgeries for obesity and observed diabetes disappearing overnight. For neuroscience, the surrogate that had been substituted for the brain was “a machine.” And by thinking of it as a machine, neuroscientists were bound to miss the whole idea of complementarity and what that buys us for understanding how the brain does its tricks.

Sperry put it differently. When he suggested our mental capacities were real entities and part of the causal chain of events that lead to behavior, the reductionists went crazy, as we learned in chapter 3. Yet he didn’t see mental events, such as thoughts, as nonphysical events or spooks in the system, either. He saw mental events as the product of the configurational properties of the underlying neural circuitry. That underlying circuitry has both a physical and a symbolic structure. It controls what it is constructing, a mental event—Pattee’s physical symbols controlling construction. In short, he had the organism itself playing a role in its own destiny. From this perspective, even knowing every possible thing about the current state of your brain—its initial conditions—would not allow you to predict how future mental states may have a top-down effect on your bottom-up processing. Those initial conditions are not going to tell us what, where, and with whom you are going to eat dinner a year from Thursday. Knowing everything about the state of a newborn’s brain is not going to allow you to know what that child will be doing on a Tuesday afternoon forty-five years later, as the most determined of the determinists believe. In fact, that extreme determinism is almost as silly as the belief revealed by the Schrödinger cat problem.

Looking Forward

In our sketch of the history of human thinking and research on the problem of consciousness, we have seen a lot of equivocations. It was only after Descartes and the birth of the idea that “the brain is a machine that can be understood by taking it apart” (the sine qua non of the scientific approach to anything) that the ironclad devotion to reductionism firmly took hold, and it remains the dominant idea in neuroscience today. Again, the Chicago school, as I have come to call it, puts the brakes on that and points the way to another formulation, one that takes into account the evolutionary nature of the organism and the fact that machines are by-products of human brains; brains are not by-products of machines. There is something different about living matter. Put bluntly, it is the fact that it is not solely at the beck and call of classical physical interactions, but has an innate arbitrariness conferred on it by physical, yet arbitrary, symbolic information residing on the sunny side of the Schnitt.

When the early results of split-brain research became known and established, the persistent question became: So what does it teach us about consciousness? As the famous experimental psychologist William Estes quipped to me after I was introduced to him as the man who discovered the split-brain phenomenon, “Great, now we have two things we don’t understand.” Yet that very puzzle has stayed with me, just like Polanyi’s point that the parts list doesn’t tell you how something works. Both realities, the parts list and the way those parts work together to produce a function, demand a more complex explanation of how they illuminate the problem of consciousness.

Over the past thirty years, billions of dollars have been invested in the study of the role of various brain regions and how they are connected. Yet localization will not yield a comprehensive explanation of consciousness, even though modern brain studies tell us that specific anatomical areas are related to various mental capacities. Although these studies add to the plethora of facts known about the brain, they do not and will not provide explanations of the processes the brain performs, which result in, among other things, consciousness. While the structure-function approach provides insightful knowledge about how the brain compartmentalizes its many specializations, it fails to adequately explain how electrochemical reactions are transformed into life experiences. We have seen that structure and function are complementary properties: one tells you nothing about the other. If you have no idea what the function of a neuron is, you are not going to figure it out by looking at one. The reverse is also true. If you know what the function of a neuron is, you still would have no idea of its appearance. Without any prior knowledge, the function of the neurons can’t be derived from their structure, nor can their structure be derived from their function. They are two separate, irreducible layers with different protocols.

The enterprise of learning more about the underlying parts of the brain needs to expand its agenda and also focus on neural design. Simply trying to locate the structure that produces consciousness, as Descartes and many of his successors have attempted, will not unveil the Holy Grail, because consciousness is inherent throughout the brain. Cutting huge chunks from the cortex does not disrupt consciousness, but only changes its contents. Consciousness is not compartmentalized in the brain like many other mental capacities, such as speech production or visual processing, but is a crucial element of all these various capacities. Again, as I have discussed, the most compelling evidence for piecemeal consciousness is revealed through the minds of split-brain patients: when transmission between the hemispheres is severed, each hemisphere continues to have its own conscious experience.

While it is not intuitive to think that our consciousness emanates from several independent sources, this appears to be the brain’s design. Once this concept is fully grasped, the true challenge will be to understand how the design principles of the brain allow for consciousness to emerge in this manner. This is the future challenge for brain science.

A Final Word

When I started this book, I didn’t think I would wind up with some of the thoughts I have now outlined. The lurking question was always: Is consciousness really an instinct?

In his now classic book The Language Instinct, Steven Pinker provides a necessary wake-up call for the scientific community: How can minds and brains be both delivered biologically and also modified by experience? The book provided a needed framework for thinking about the limits of learning and the realities of mind parts derived through natural selection. Pinker also brilliantly observed that conceptualizing higher-order human traits (such as language) as instincts is downright jarring.

Plopping the phenomenon of consciousness onto the instinct list—right in there with anger, shyness, affection, jealousy, envy, rivalry, sociability, and so on—is equally disorienting. Instincts, as we all know, evolve gradually, making us more fit for our environment. Adding consciousness to the instinct list suggests that this precious human property, which we all hold dear, is not a miraculously endowed part of our species’ special hardware. If we allow consciousness to be an instinct, we toss it into the vast biological world with all of its history, richness, variation, and continuum. Where did it come from? How did it evolve? What other species share features of it?

Let’s pause to ask the fundamental question: What is an instinct, anyway? The term is thrown around like confetti at a parade. Each year, the list of instincts grows and grows. You would almost think that if you popped off the skull, you would see a bunch of labeled lines, each representing one of the much heralded instincts. Indeed, the human brain ought to be a rat’s nest of wires connected up to do their job. Yet if you ask a neuroscientist to show you the network for a particular instinct, such as rivalry or sociability, no such knowledge exists—at least, not yet. So how does it help to call stuff instincts?

When feeling at sea about definitions and meanings in the mind/brain business, it is always rewarding to dial up William James once again. More than 125 years ago, James wrote a landmark article simply titled “What Is an Instinct?” He wastes no time in defining the concept:

Instinct is usually defined as the faculty of acting in such a way as to produce certain ends, without foresight of the ends, and without previous education in the performance.… [Instincts] are the functional correlatives of structure. With the presence of a certain organ goes, one may say, almost always a native aptitude for its use. “Has the bird a gland for the secretion of oil? She knows instinctively how to press the oil from the gland, and apply it to the feather.”5

The definition seems straightforward, and yet it is cleverly dualistic. An instinct calls upon a physical structure to function. Yet using the structure calls upon an “aptitude,” which apparently comes along for free. Finding an instinct’s physical apparatus is doable, but how do we learn how it comes to be used? Does it just happen? Not a very scientific answer. Does the bird start out with a reflex to press the gland and, over time, learn that, as a consequence, everything works better? Clearly, if there were no oil gland, there would be no oil and no opportunity to learn to use it to fly better. One can see the blind loop of natural selection and experience working together to form what we would call an instinct.

Bird behavior is one thing, but does this really apply to human cognition and consciousness? James offers a rationale for how it might all work:

A single complex instinctive action may involve successively the awakening of impulses.… Thus a hungry lion starts to seek prey by the awakening in him of imagination coupled with desire; he begins to stalk it when, on eye, ear, or nostril, he gets an impression of its presence at a certain distance; he springs upon it, either when the booty takes alarm and flees, or when the distance is sufficiently reduced; he proceeds to tear and devour it the moment he gets a sensation of its contact with his claws and fangs. Seeking, stalking, springing, and devouring are just so many different kinds of muscular contraction, and neither kind is called forth by the stimulus appropriate to the other.6

As I look at James’s work now, I recognize a schema that fits the module/layering ideas. James appears to suggest that the structural aspects of instincts are inbuilt modules embedded in a layered architecture. Each instinct can function independently for simple behaviors, but they also work as a confederation. Individual instincts can be sequenced in a coordinated fashion for more complex actions that make them look an awful lot like higher-order instincts. The avalanche of sequences is what we call consciousness. James argues that the competitive dynamics that go into the sequencing of basic instincts can produce what appears to be a more complex behavior manifested from a complex internal state. He even adds a description of the animal’s experience of obeying an instinct: “Every impulse and every step of every instinct shines with its own sufficient light, and seems at the moment the only eternally right and proper thing to do. It is done for its own sake exclusively.” It sounds like a lot of bubbles are conjoined by the arrow of time and produce something like what we call conscious experience.
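
To make the module-and-sequencing picture concrete, here is a minimal toy sketch in Python. Everything in it, the Instinct class, the arbitrate function, and the cartoon lion scenario, is an invented illustration of the idea of independent modules whose winner-take-all sequencing looks like one coordinated, higher-order behavior; it is not drawn from James or from any neural data.

```python
# Toy illustration only: independent "instinct" modules, each with its own
# trigger and simple action, sequenced by a crude winner-take-all competition.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

State = Dict[str, float]

@dataclass
class Instinct:
    name: str
    trigger: Callable[[State], bool]   # fires only on its own appropriate stimulus
    urgency: float                     # stand-in for competitive strength
    act: Callable[[State], None]       # the module's simple, self-contained behavior

def arbitrate(instincts: List[Instinct], state: State) -> Optional[Instinct]:
    """Pick the most urgent module whose trigger matches the current state."""
    candidates = [i for i in instincts if i.trigger(state)]
    return max(candidates, key=lambda i: i.urgency) if candidates else None

# James's hungry lion as a cartoon: seek, stalk, spring, devour.
INF = float("inf")
lion = [
    Instinct("seek",   lambda s: s["prey_distance"] == INF, 1.0,
             lambda s: s.update(prey_distance=40.0)),
    Instinct("stalk",  lambda s: 10.0 < s["prey_distance"] < INF, 2.0,
             lambda s: s.update(prey_distance=s["prey_distance"] - 15.0)),
    Instinct("spring", lambda s: 0.0 < s["prey_distance"] <= 10.0, 3.0,
             lambda s: s.update(prey_distance=0.0, contact=1.0)),
    Instinct("devour", lambda s: s["contact"] == 1.0, 4.0,
             lambda s: s.update(hunger=0.0)),
]

state: State = {"hunger": 1.0, "prey_distance": INF, "contact": 0.0}
while state["hunger"] > 0:
    winner = arbitrate(lion, state)
    if winner is None:
        break
    print(winner.name)          # prints: seek, stalk, stalk, spring, devour
    winner.act(state)
```

The point of the sketch is only that no single module “knows” about the whole hunt; the appearance of a unified, purposeful sequence falls out of simple modules competing over a shared state, which is the flavor of the modular, layered story described above.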

The dynamics of which bubble pops up when is no doubt influenced by experience and learning. However, experience, learning, and consciousness must all be isomorphic—operational within the same system. Once the phenomenon is thought of in this way, we see conscious experience for what it is: Mother Nature’s trick. Thinking of consciousness as an evolved instinct (or a whole sequence of them) shows us where to look for how it emerged from the cold inanimate world. It opens our eyes to the realization that each aspect of a conscious experience is the unfolding of other instincts that humans possess, and that, by their very nature, the mechanisms and capacities they harbor produce the felt state of conscious experience. Remarkably, in the past few years biologists of all stripes have been able to come together in a breathtaking way to identify twenty-nine specific networks in the brain of a fly, each controlling a specific behavior. These individual behaviors can be flexibly combined and recombined into more complex patterns. Yes, it is in the fruit fly where we may learn the lessons of consciousness! The hunt for understanding the physical dimension of instincts is on.7

However, many abhor the use of concepts such as instinct to describe phenomenal conscious experience. To them, this definition also robs humans of their unique status in the animal kingdom, namely, that we alone are morally responsible for our actions. Humans can choose what to do, and we can therefore choose to “do the right thing.” If consciousness is an instinct, they argue, then humans must be automatons, or witless zombies. Yet, putting aside for the moment the physical realities of quantum mechanics and Schnitts with their liberating symbolic functioning, we can argue that accepting the idea that a complex entity like the brain/body/mind has a knowable mechanism does not doom one to such deterministic and despairing views. James himself addressed this overarching concern:

Here we immediately reap the good fruits of our simple physiological conception of what an instinct is. If it be a mere excito-motor impulse, due to the pre-existence of a certain “reflex-arc” in the nerve-centres of the creature, of course it must follow the law of all such reflex-arcs. One liability of such arcs is to have their activity “inhibited” by other processes going on at the same time. It makes no difference whether the arc be organized at birth, or ripen spontaneously later, or be due to acquired habit, it must take its chances with all the other arcs, and sometimes succeed, and sometimes fail.… The mystical view of an instinct would make it invariable. The physiological view would require it to show occasional irregularities in any animal in whom the number of separate instincts, and the possible entrance of the same stimulus into several of them, were great. And such irregularities are what every superior animal’s instincts do show in abundance.8

James provides much more, and it does take time to absorb the idea of instincts. I urge you to read his original paper to see his clear thinking, clear writing, and unshakable pragmatism on these difficult issues. James points the way forward, refusing to accept the despairing caricature of humankind as a robot at the beck and call of reflex responses. To him, a complex behavioral state can be produced by varying the combinations of simple, independent modules, just as a combination of multiple small movements makes up the complex behavior of a pole vaulter as he sails upward over the bar. When acting together in a coordinated way, even simple systems can make observers believe other forces exist. James’s stance is clearly stated: “My first act of free will shall be to believe in free will.” This proclamation is consistent with the idea that beliefs, ideas, and thoughts can be part of the mental system. The symbolic representations within this system, with all their flexibility and arbitrariness, are very much tied to the physical mechanisms of the brain. Ideas do have consequences, even in the physically constrained brain. No despair is called for: mental states can influence physical action in a top-down way!

The flexibility of my own symbolic representations has been a source of joy and surprise, not despair, over the course of this project. Perhaps the most surprising discovery for me is that I now think we humans will never build a machine that mimics our personal consciousness. Inanimate silicon-based machines work one way, and living carbon-based systems work another. One works with a deterministic set of instructions, and the other through symbols that inherently carry some degree of uncertainty.

This perspective leads to the view that the human attempt to mimic intelligence and consciousness in machines, a continuing goal in the field of AI, is doomed. If living systems work on the principle of complementarity—the idea that the physical side is mirrored with an arbitrary symbolic side, with symbols that are the result of natural selection—then purely deterministic models of what makes life will always fall short. In an AI model, the memory for an event is in one place and can be deleted with one keystroke. In a living, layered symbolic system, however, each aspect of a mechanism can be switched out for another symbol, so long as each plays its proper role. It is this way because it is what life itself allows, indeed demands: complementarity.

Who is going to put science to all of these ideas? What will the neuroscience of tomorrow look like? In my opinion, the hunt for enduring answers will have to include neuroengineers, with their ability to tease out the deep principles of the design of things. Such a revolution is in its early days, but the perspective it offers is clear. A layered architecture, which allows the option of adding supplemental layers, offers a framework to explain how brains became increasingly complex through the process of natural selection while conserving successful basic features. One challenge is to identify what the various processing layers do, and the bigger challenge is to crack the protocols that allow one layer to interpret the processing results of its neighbor layers. That will involve crossing the Schnitt, that epistemic gap that links subjective experience with objective processing, which has been around since the first living cell. Capturing how the physical side of the gap, the neurons, works with the symbolic side, the mental dimensions, will be achieved through the language of complementarity.
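
One way to picture what “cracking the protocols” between layers might involve is the small sketch below. It is purely illustrative: the layer names, the message format, and the LayerProtocol interface are my own stand-ins, not claims about real neural organization. It shows only the design property the paragraph describes, namely that when neighbors interact solely through a fixed protocol, a layer's internals can be replaced and supplemental layers can be stacked on top without disturbing the rest.

```python
# Illustrative-only sketch of a layered architecture joined by a fixed protocol.
from typing import Protocol, List, Dict, Any

Message = Dict[str, Any]

class LayerProtocol(Protocol):
    def process(self, signal: Message) -> Message:
        """Every layer reads and writes the same message format, so a layer
        needs to know its neighbor's protocol, never its internals."""
        ...

class SensoryLayer:
    def process(self, signal: Message) -> Message:
        signal["features"] = [x * 0.5 for x in signal.get("raw", [])]
        return signal

class IntegrationLayer:
    def process(self, signal: Message) -> Message:
        signal["summary"] = sum(signal.get("features", []))
        return signal

class NarrativeLayer:
    def process(self, signal: Message) -> Message:
        signal["report"] = f"felt intensity {signal.get('summary', 0.0):.1f}"
        return signal

def run_stack(layers: List[LayerProtocol], signal: Message) -> Message:
    # Layers are conserved, and new ones can simply be appended,
    # because each honors the same protocol.
    for layer in layers:
        signal = layer.process(signal)
    return signal

stack: List[LayerProtocol] = [SensoryLayer(), IntegrationLayer(), NarrativeLayer()]
print(run_stack(stack, {"raw": [1.0, 2.0, 3.0]})["report"])   # felt intensity 3.0
```

The engineering lesson carried by the sketch is modest: in a protocol-governed stack, what a layer does can be studied, or even swapped out, independently of how its neighbors are built, which is the sense in which identifying the layers and cracking their protocols are two separable challenges.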

In the end, we must realize that consciousness is an instinct. Consciousness is part of organismic life. We never have to learn how to produce it or how to utilize it. On a recent trip to Charleston, my wife and I were out in the countryside looking for some good ole fried chicken and cornbread. We finally found a small roadside diner and ordered. As the waitress was walking away, I said, “Oh yes, and add some grits to that order.” She turned back to me, smiled, and said, “Honey, grits come.” Grits come with the order, and so does what we call consciousness. We are lucky for both.