TWENTIETH-CENTURY STRIDES AND OPENINGS TO MODERN THOUGHT
There are some people, nevertheless—and I am one of them—who think that the most practical and important thing about a man is still his view of the universe.… We think the question is not whether the theory of the cosmos affects matters, but whether in the long run, anything else affects them.
—G. K. Chesterton
AT THE START of the twentieth century, the philosophy of mind and brain was still divided into two battling camps: the rationalists and the empiricists. At the end of the century, as we shall see, things were not a whole lot better. It is almost as if our human brains have a limited set of ideas, and whatever the scientific data or intellectual mood of the day might be, one of these two views is trotted out. But back to the beginning of the century. It was time for the upstart Americans to brashly chime in, and William James was the first to give the issue of consciousness a good hard look. In 1907, he gave a series of lectures at Harvard and began with the above quote from G. K. Chesterton, which neatly summarizes the great mind/brain question of philosophy: Can a mind state—an immaterial belief, an idea—affect matter, that is, a brain state?
James agreed with Chesterton that this was the important question. The topic of his lectures was a new philosophical method: pragmatism, the brainchild of James’s friend Charles Peirce, which grew out of discussions they had had with other philosophers and lawyers at the Metaphysical Club, a short-lived but influential intellectual salon they cofounded in Cambridge, Massachusetts, in the 1870s. Pragmatism didn’t get much attention until James further developed and promoted it twenty years later. In the first lecture, James pointed out what was hidden in plain sight, that philosophers and their philosophical stances are biased by their temperaments:
The history of philosophy is to a great extent that of a certain clash of human temperaments.… Of whatever temperament a professional philosopher is, he tries when philosophizing to sink the fact of his temperament. Temperament is no conventionally recognized reason, so he urges impersonal reasons only for his conclusions. Yet his temperament really gives him a stronger bias than any of his more strictly objective premises. It loads the evidence for him one way or the other, making for a more sentimental or a more hard-hearted view of the universe, just as this fact or that principle would. He trusts his temperament. Wanting a universe that suits it, he believes in any representation of the universe that does suit it.1
And here is the really great, brashly American part. James divides American philosophers into two groups, according to their temperaments: “tender-foot Bostonians” and “Rocky Mountain toughs.” He sees this temperamental dichotomy not only in philosophy but also in literature, art, government, and manners. And, of course, the two have low opinions of each other: “Their mutual reaction is very much like that that takes place when Bostonian tourists mingle with a population like that of Cripple Creek. Each type believes the other to be inferior to itself; but disdain in the one case is mingled with amusement, in the other it has a dash of fear.” He further sketches out the groups: the tender-minded, tender-footed Bostonians are rationalistic (devotees to abstract and eternal principles), intellectualistic, idealistic (in the sense that they believe all things proceed from the mind), optimistic, religious, free-willist, monistic (that is, rationalism starts from wholes and universals, and makes much of the unity of things), and dogmatic. Descartes, deep down, is a tender-foot!
The tough-minded Rocky Mountaineers are quite the opposite: empiricist (lovers of facts in all their crude variety), sensationalistic, materialistic (all things are material, no immaterial mind), pessimistic, irreligious, fatalistic, pluralistic (meaning empiricists start from the parts and they make the whole a collection of those parts), and skeptical (that is, open to discussion). Hume, a tough!
But James realized that most of us are not purely one or the other:
Most of us have a hankering for the good things on both sides of the line. Facts are good, of course—give us lots of facts. Principles are good—give us plenty of principles. The world is indubitably one if you look at it in one way, but as indubitably is it many, if you look at it in another. It is both one and many—let us adopt a sort of pluralistic monism. Everything of course is necessarily determined, and yet of course our wills are free: a sort of free-will determinism is the true philosophy. The evil of the parts is undeniable; but the whole can’t be evil: so practical pessimism may be combined with metaphysical optimism. And so forth—your ordinary philosophic layman never being a radical, never straightening out his system, but living vaguely in one plausible compartment of it or another to suit the temptations of successive hours.2
Yet the more philosophically minded “are vexed by too much inconsistency and vacillation in our creed. We cannot preserve a good intellectual conscience so long as we keep mixing incompatibles from opposite sides of the line.”
So James describes the common layman as wanting facts, science, and religion. But what philosophy was giving him was “an empirical philosophy that is not religious enough, and a religious philosophy that is not empirical enough.”3 Practical help, not highly abstracted absolutist philosophy, was needed to navigate a world whose denizens were interested in the science with which they were being bombarded but also found comfort in religion or romanticism. James thought that the pragmatic method would supply that help. Its foundation is the idea that our beliefs are our rules for action—that when we form a belief, we acquire a disposition to act in some distinctive way. In order to understand the significance of a belief, you simply need to determine what action that belief would produce. If two different beliefs produce the same action, then let it rest:
The pragmatic method is primarily a method of settling metaphysical disputes that otherwise might be interminable. Is the world one or many?—fated or free?—material or spiritual?—here are notions either of which may or may not hold good of the world; and disputes over such notions are unending. The pragmatic method in such cases is to try to interpret each notion by tracing its respective practical consequences. What difference would it practically make to any one if this notion rather than that notion were true? If no practical difference whatever can be traced, then the alternatives mean practically the same thing, and all dispute is idle. Whenever a dispute is serious, we ought to be able to show some practical difference that must follow from one side or the other’s being right.4
Although pragmatism is based on the idea that a mental state could be the cause for action, it is a method only and does not advocate particular results. It is open to various types of methods that are employed in different sciences. It is a method, however, that rejects a priori metaphysics and endless intellectualist accounts of thought. It was appealing to stimulus-response psychologists, followers of Hume’s theories of association, who dominated the new field of experimental psychology launched by Wilhelm Wundt, then developed further and brought to New York by his student, the charismatic psychologist Edward Titchener. Another particularly influential character was Edward Thorndike. In his 1898 monograph, Animal Intelligence: An Experimental Study of the Associative Processes in Animals, he formulated the first general statement about the nature of associations: the law of effect. He had noticed that a response that was followed by a reward would be stamped into an organism as a habitual response and that the response would disappear if no reward was given. This stimulus-response mechanism could potentially establish increasingly adaptive responses.
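Read as a rule, Thorndike’s law of effect says that responses followed by reward are strengthened and unrewarded responses fade. The short Python sketch below is purely illustrative; the class, the learning rate, and the decay factor are my own assumptions, not anything Thorndike formalized, but it captures the stamping-in and dying-out he described.

```python
import random

class LawOfEffectAgent:
    """Toy illustration of Thorndike's law of effect (not a historical model)."""

    def __init__(self, responses, strengthen=0.2, weaken=0.9):
        # Every candidate response starts with the same small habit strength.
        self.strength = {r: 1.0 for r in responses}
        self.strengthen = strengthen   # how much a rewarded response is "stamped in"
        self.weaken = weaken           # unrewarded responses gradually fade

    def choose(self):
        # Responses are emitted with probability proportional to habit strength.
        total = sum(self.strength.values())
        weights = [s / total for s in self.strength.values()]
        return random.choices(list(self.strength), weights=weights)[0]

    def learn(self, response, rewarded):
        if rewarded:
            self.strength[response] += self.strengthen   # stamped in
        else:
            self.strength[response] *= self.weaken       # dies out without reward

# A cat in a puzzle box: only "press_lever" opens the door and earns the reward.
cat = LawOfEffectAgent(["scratch", "meow", "press_lever"])
for _ in range(500):
    r = cat.choose()
    cat.learn(r, rewarded=(r == "press_lever"))
print(cat.strength)   # the rewarded response ends up with by far the largest strength
```

Run long enough, the rewarded response comes to dominate the habit strengths, which is the “stamping in” of the law of effect in miniature.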
Stimulus-response psychology, also known as behaviorism, quickly came to dominate the studies of associative processes in America. Behaviorists approached psychology from the viewpoint that its appropriate subject matter was behavior rather than mental and subjective experience; it should be studied using methods appropriate to the natural sciences, not introspection. They thought that given a particular environmental stimulus, the behavior of any animal, including humans, could be explained by a law-like tendency to react in a certain way.
Dominating the field was the dynamic figure of John B. Watson. Watson’s stance was that psychology could be objective only if it was based on observable behavior, and he rejected all talk of mental processes that could not be publicly observed: looking inside the black box of the brain was verboten. Ignoring Darwin’s theory of innate mental processes, Watson became committed to the idea that everybody has the same exact neural equipment; the mind is a blank slate, and any child can be trained to do anything by learning through stimulus-response and reward. This idea appealed to the American sense of equality. Soon, most of the directors of American psychology departments held these views, ignoring Darwinian theory’s assertion that complexity is built into the human organism through the process of natural selection and evolution. Behaviorism reigned in the United States for the next five decades, presided over for many years by its major spokesman, the Harvard psychology professor B. F. Skinner.
Of course, even in times when major themes dominate the academic world, there are contrarian stirrings. New methods for studying “mental” processes were being developed that not only made their way steadily into experimental psychology but also became the dominant tools of exploration in modern times.5 Still, talk about mental states and consciousness was largely out of the question in the United States until the cognitive revolution led by George A. Miller at Harvard and the mentalism championed by Roger W. Sperry at Caltech reared up at midcentury.
The Canadians’ Resistance and the Rise of Modern Neuroscience
Thankfully, researchers in Canada did not jump on the behaviorist bandwagon. In fact, Montreal’s first neurosurgeon, Wilder Penfield, was making amazing discoveries on patients who had seizures that could only be controlled by removing the portion of their cerebral cortex that was instigating them. In order to locate the region of the seizure foci, Penfield stimulated parts of the cerebral cortex with electrical probes and observed the patient’s responses. During this surgery the patients were awake, under local anesthesia only, so they could comment on what they did or did not feel. Penfield traced out, in the sensory and motor cortices, maps that corresponded to the body’s parts, that is, the physical representation of the human body, located within the brain.* The body that was represented, however, was not normally proportioned. Instead, it was proportional to the degree that the particular body part was innervated: more innervation, more brain area devoted to it. Penfield, along with his close associate Herbert Jasper, a physiologist, got the ball rolling when it came to understanding the localization of brain function. Penfield wrote, “Consciousness continues, regardless of what area of cerebral cortex is removed. On the other hand consciousness is inevitably lost when the function of the higher brain stem (diencephalon*) is interrupted by injury, pressure, disease, or local epileptic discharge.” Yet he is quick to qualify that “to suggest that such a block of brain exists where consciousness is located, would be to call back Descartes and to offer him a substitute for the pineal gland as a seat for the soul.”6
Penfield goes on to describe that while sensory information is processed through the diencephalon, that is, through subcortical regions, information travels back and forth between the subcortex and different areas of the cortex: “Thus, the differing processes of the mind are made possible through combined functional activity in diencephalon and cerebral cortex, not within diencephalon alone.”7 He also states that the final process that is necessary for a conscious experience is to have attention focused on the mental state that is producing it. He predicts that this process is part of the diencephalon’s function. We can discern in these writings that Penfield is using the word “consciousness” to mean two different things. In the first instance, he is talking about the mental state of being alert and aware, that is, not in a coma. In the second two, he is referring to Descartes’s consciousness, meaning a thought or a thought about a thought, and he adds that focusing attention is a necessary component.
Penfield added to his group a psychologist, Donald Hebb, to study the effects of brain injury and of surgery on his patients’ brain function. Like most people who study patients with brain injuries, Hebb came away convinced that the workings of the brain explain behavior. While this may seem elementary to us today, as it did to Galen so many centuries ago, mind/body dualism still held many enthralled in 1949, when Hebb published his book The Organization of Behavior: A Neuropsychological Theory. At a time when psychology was still in the tight clutches of behaviorism, Hebb took the psychological world by storm by boldly stepping into the black box of the brain, thumbing his nose at the “off-limits” constraints imposed both by the empiricist Hume and by behaviorists. He postulated that many neurons can combine into a coalition, becoming a single processing unit. The connection patterns of these units, which can change, make up the algorithms (which can also change with the changing connection patterns) that determine the brain’s response to a stimulus. From this idea came the mantra “Cells that fire together wire together.” According to this theory, learning has a biological basis in the “wiring” patterns of neurons. Hebb noted that the brain is active all the time, not just when stimulated; inputs from the outside can only modify that ongoing activity. Hebb’s proposal made sense to those designing artificial neural networks, and it was put to use in computer programs, as sketched below. By opening the black box of the brain and looking inside, Hebb had also fired the first volley in the revolution against behaviorism.
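Hebb’s proposal translates naturally into an update rule on connection weights: when a presynaptic and a postsynaptic unit are active together, the connection between them is strengthened. The NumPy sketch below is a minimal modern reading of that idea, not Hebb’s own formalism (his 1949 proposal was qualitative); the learning rate and the normalization step, added here only so the weights do not grow without bound, are my assumptions.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One Hebbian step: strengthen weights[i, j] when presynaptic unit j and
    postsynaptic unit i fire together ("cells that fire together wire together")."""
    weights += lr * np.outer(post, pre)   # co-activity drives the weight change
    # Crude per-row normalization (my assumption, not Hebb's) to keep weights bounded.
    weights /= np.linalg.norm(weights, axis=1, keepdims=True) + 1e-12
    return weights

# A tiny 3-input, 2-output network repeatedly exposed to two input patterns.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))
patterns = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
for _ in range(200):
    x = patterns[rng.integers(len(patterns))]
    y = W @ x                              # postsynaptic activity evoked by the input
    W = hebbian_update(W, pre=x, post=y)
print(W)   # rows come to reflect the correlational structure of the inputs
```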
The Cognitive Revolution in America
Behaviorism’s grip on American psychology began to loosen in the 1950s, especially when a bevy of young, brilliant mind scientists, such as Allen Newell, Herbert Simon, Noam Chomsky, and George Miller, together founded cognitive psychology. Miller, for example, did what a scientist is supposed to do when presented with compelling new evidence: he changed his mind. Miller was researching speech and hearing at Harvard when he wrote his first book, Language and Communication. William James would have been impressed by the full-disclosure preface, in which Miller made no bones about his partiality: “The bias is behavioristic.” In the section on psychology, which was about the differences in how people use language, his probabilistic model of word choice was based on a behaviorist pattern of learning-by-association. The title of a textbook he wrote eleven years later, Psychology: The Science of Mental Life, announced a complete dismissal of his previous stance that psychology should study only behavior. What prompted Miller’s change of mind was the rise of information theory; the introduction of Information Processing Language I, a computer language that implemented several early artificial intelligence programs; and computer genius John von Neumann’s ideas on neural organization, which proposed that the brain may run in a manner similar to a massively parallel computer. Parallel computing means that several programs can run at the same time, as opposed to serial computing, in which only one program can run at a time.
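The serial/parallel contrast is easy to see in a toy sketch. The Python below is only an illustration of the general idea; nothing in it is specific to von Neumann’s proposal or to Information Processing Language I. The same four tasks are run one after another and then overlapped, and the overlapped run finishes in roughly the time of a single task.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(name, seconds=1):
    time.sleep(seconds)      # stand-in for any unit of work
    return name

names = ("a", "b", "c", "d")

# Serial: one task at a time, so total time is roughly the sum of the parts (~4 s).
start = time.time()
serial_results = [task(n) for n in names]
print(f"serial:   {time.time() - start:.1f} s")

# Parallel: the four tasks overlap in time, so the total is close to one task (~1 s).
start = time.time()
with ThreadPoolExecutor(max_workers=len(names)) as pool:
    parallel_results = list(pool.map(task, names))
print(f"parallel: {time.time() - start:.1f} s")
```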
Perhaps for Miller the final nail in the coffin of behaviorism was meeting the brilliant linguist Noam Chomsky. Chomsky was shaking the psychological world to its very roots by showing that the sequential predictability of speech follows grammatical rules, not probabilistic rules. And these grammatical rules were shocking: innate and universal—that is, everybody has them, and they are already wired into the brain at birth. Just like that, the notion of a tabula rasa had to be tossed out kicking and screaming, though some of those screams can still be heard.
In September 1956, Chomsky published “Three Models for the Description of Language,” his preliminary version of these ideas on syntactic theories. Taking the linguistic world by storm, Chomsky transformed the study of language in one fell swoop. Miller’s takeaway from that paper was that associationism, the pet of behaviorists and of radical behaviorist B. F. Skinner in particular, could not account for how language is learned. While the behaviorists had elucidated some aspects of behavior, there was something more going on in that black box that behaviorists could not explain and never would. It was about time they started trying to figure it out.
Miller began to explore the psychological implications of Chomsky’s theories with the ultimate goal of understanding how the brain and mind work as an integrated whole. At the time, however, Miller was leery of one aspect of mental life. He wrote in Psychology: The Science of Mental Life that for the time being, the study of consciousness needed to be put on the shelf: “Consciousness is a word worn smooth by a million tongues. Depending upon the figure of speech chosen it is a state of being, a substance, a process, a place, an epiphenomenon, an emergent aspect of matter or the only true reality. Maybe we should ban the word for a decade or two until we can develop more precise terms for the several uses which ‘consciousness’ now obscures.”8
The word “consciousness,” which Descartes used to mean either a thought or a thought about a thought, had blossomed over the years and taken on all sorts of additional meanings. In addition to what Miller wrote, it had also become entwined with awareness, self-awareness, self-knowledge, access to information, and subjective experience. While most researchers followed Miller’s advice and set the study of consciousness on the shelf, one intrepid group did not. Instead, they did an inventory of what science could say about consciousness up to that point.
Seeking Clarity at the Vatican
While Miller was locking the word “consciousness” away, the Pontificia Academia Scientiarum (Pontifical Academy of Sciences) was bringing it front and center for a study week in 1964. The Academy traces its roots back to the Accademia dei Lincei (Academy of Lynxes), founded in 1603 by an eighteen-year-old Roman prince and naturalist, Federico Cesi, whose uncle was a well-connected cardinal. Cesi founded the Academy to understand the natural sciences through observation, experiment, and inductive reasoning. To symbolize those goals, he chose the sharp-eyed lynx as the academy’s emblem. In 1611, Galileo was inducted as a member.
Those were rocky times for such an endeavor, and the Academy did not survive Cesi’s early death at age forty-five. It was resurrected in 1847 by Pope Pius IX as the Accademia Pontificia dei Nuovi Lincei (Pontifical Academy of the New Lynxes). Later, after the unification of Italy and its separation from the Vatican in 1870, the Academy of the New Lynxes split into two: the Royal National Lincean Academy under the Italian flag and another destined to become the Pontifical Academy of Sciences, re-founded in 1936 by Pope Pius XI and headquartered in Vatican City. The academy, though founded by the Pope and located within the walls of the Vatican Gardens, has no restrictions on its research. It is made up of scientists from many countries and disciplines and is charged with the goal of “the promotion of the progress of the mathematical, physical, and natural sciences, and the study of related epistemological questions and issues.” In September 1964, the Pontifical Academy called for a study week on the topic of “Brain and Conscious Experience,” to be headed up by the renowned physician and physiologist Sir John Eccles.
Eccles was an Australian. While in medical school, he was not only an avid student but a pole vaulter as well. Reading On the Origin of Species for his zoology course spurred him on to read philosophical writings, both classical and contemporary, on the mind/brain problem.9 Medical school, however, did not provide the answers to his questions regarding the interaction of mind and body, and he resolved to become a neuroscientist.10 He also resolved to win a Rhodes Scholarship to Oxford and work with the famed neurophysiologist Charles Sherrington. This he did. He set off in 1925 for England, half a world away.
Eccles went on to study the mechanism of neural transmission at the synapse. Initially, he was convinced that transmission was electrical. During this period, he met the philosopher Karl Popper, who encouraged him to test his hypothesis rigorously and who emphasized that the strength of a hypothesis depended on the failure of a thorough investigation to falsify it, not on evidence that apparently supported it. Through dogged hypothesis testing, Eccles changed his mind and concluded that synaptic transmission was chemical. Such an about-face prompted a long-standing friend, Sir Henry Dale, to write, “A remarkable conversion indeed! One is reminded, almost inevitably, of Saul on his way to Damascus, when the sudden light shone and the scales fell from his eyes.”11 Over the next decade Eccles went on to elucidate the mechanisms involved in the firing and inhibition of motor neuron synapses in the spinal cord and then turned his sights on the thalamus, hippocampus, and cerebellum. The year before the pontifical conference, Eccles was awarded the Nobel Prize in Physiology or Medicine. A few years earlier, he had been honored with knighthood for the same research. He was a legend in his own time, and for those of us who knew him, he was crackling smart, endlessly energetic, and a great scientist. He also had been raised Catholic and was a declared dualist. The pragmatist William James would not have been surprised that his belief was a rule for action: Eccles spent a lifetime searching for mechanisms by which the mind controls the body.
In a 1951 essay in Nature titled “Hypotheses Relating to the Brain-Mind Problem,” Eccles stated that “many men of science find in dualism and interaction the most acceptable initial postulates in a scientific approach to the problem of mind and brain. In such an approach the question arises: What scientific hypotheses may be formulated that bear in any way on the hitherto refractory problem of brain-mind liaison?”12 He went on to propose such a hypothesis. Although he thought that every perceptual experience is the result of a specific pattern of neuronal activation, and that memory is caused by an increase in synaptic efficacy, for some reason he thought experience and memory are “unassimilable into the matter-energy system.” He proposed instead that the activated cortex has “a sensitivity of a different kind from any physical instrument” and that “mind achieves liaison with the brain by exerting spatio-temporal fields of influence that become effective through this unique … function of the active cerebral cortex.” Wow! That is basically voodoo with fancy language. He had replaced Descartes’s pineal gland with the mysteriously sensitive activated cerebral cortex. Indeed, three hundred years after Descartes, Eccles continued the Cartesian tradition of dualism even though he spent sixty hours a week working on and recording from neurons and had otherwise totally adopted the determinist agenda. It is mind-boggling.
Part of Eccles’s job description for heading up the study week was to pick the other attendees and to publish the discussions, which resulted in the landmark book Brain and Conscious Experience. The only bias that Eccles could be accused of was that the conference was a bit heavy in physiologists, but all tended to wear more than one hat. He did succeed in getting the top scientists in their fields, ranging across neurophysiology, neuroanatomy, psychology, pharmacology, pathology, biopsychology, neurosurgery, chemistry, communications, cybernetics, biophysics, and animal behavior. The academy, in its aim to study physical, mathematical, and natural sciences, had declared a single restriction: no philosophers. Eccles was not happy with this, but among the group were what one reviewer described as “amateur philosophers of no mean order.” The reviewer went on to conclude that “as a single volume dealing with recent progress in our understanding of the cortex, [Brain and Conscious Experience] is probably unequalled.”13
Prior to the meeting, the participants were given a brief by the Academy in which consciousness was described as “the psychophysiological concept of perceptual capacity, of awareness of perception, and the ability to act and react accordingly.” As Roger Sperry, my mentor and future Nobel laureate, put it to me when he returned to Caltech, “The Pope said, ‘The brain is yours but the mind is ours.’” The talks were loosely divided among these three aspects of consciousness: perception, action, and volition.
The zoologist of the group, William Thorpe, expanded on this:
The term consciousness, although having innumerable overtones of meaning, involves, I think, three basic components. First, an inward awareness of sensibility—what might be called “having internal perception.” Second, an awareness of self, of one’s own existence. Third, the idea of consciousness includes that of unity; that is to say, it implies in some rather vague sense the fusion of the totality of the impressions, thoughts, and feelings which make up a person’s conscious being into a single whole.14
In discussing cerebral events as they relate to conscious experience, Eccles asked the question “How can some specific spatiotemporal pattern of neuronal activity in the cerebral cortex evoke a particular sensory experience?”15 That question was left unanswered, and remains so.
In reading through the volume, I have to chuckle at the impact of Roger Sperry’s talk about our split-brain research, which was then in its infancy. In his written summary, Sperry had stated: “Everything we have seen so far indicates that the surgery has left these people with two separate minds, that is, two separate spheres of consciousness.”16 The animated discussion afterward indicates how fascinating our findings were and what a showstopper his talk was. He was telling the Vatican and his colleagues that the mind could be divided into two with the slice of a surgeon’s knife.
At the time, Sperry was in the midst of changing his own mind, in part because of split-brain research, and readjusting his basic stance on brain function. He was turning his back on materialism and reductionism as they were then defined and calling himself a “mentalist.” Earlier that year, while working on a nontechnical lecture on brain evolution, he was shocked when he found himself concluding “that emergent mental powers must logically exert downward causal control over electrophysiological events in brain activity.”17 Back then, the notion that a mental state could affect a brain state was complete heresy in the world of neuroscience—and to a great extent it still is. I came to similar conclusions and reintroduced the idea of mental processes having a downward causative effect in my 2009 Gifford Lectures in Edinburgh, where I rediscovered that determinists of all stripes are not very receptive to the idea. The central premise of both behaviorism and materialism is that the objective physical brain process is a causally complete stimulus-response network within itself: it gets no input from conscious or mental forces, nor does it need any. In many ways, the book in your hands is a fresh attempt to wrestle with this problem.
At the Vatican conference, Sperry soft-pedaled his growing mentalist stance by merely saying at the close that “consciousness may have real operational value, that it is more than merely an overtone, a by-product, epiphenomenon, or a metaphysical parallel of the objective process.”18 At another point he paraphrased this as “a view that holds that consciousness may have some operational and causal use.”19
Eccles, wearing his materialist hat, admitted, “I am prepared to say that as neurophysiologists we simply have no use for consciousness in our attempts to explain how the nervous system works.”20 He also admitted, “I don’t believe this story, of course; but at the same time, I do not know the logical answer to it.”21 He kept his dualist position.
At the end of the week, the summation of the conference was handed over to the MIT psychologist Hans-Lukas Teuber, one of the founding fathers of neuropsychology. He was famous for his brilliant “wrap-ups” of conferences and scientific meetings, which he spiced with elaborate eyebrow wiggling.22 He charted where the participants agreed and disagreed, and noted where there were gaps in knowledge, providing a concise summary of the field at the time as only he could do. The others agreed that they understood quite a bit about cortical processing of sensation and vision, and that if they understood an equal amount about motor action, memory retention, and awareness—which they didn’t—then they would be very much further along in understanding conscious experience. Teuber lamented: “Every conceivable divergence of opinion seemed to arise when we tried to delineate the systems or mechanisms that might be necessary for consciousness. We were not even quite sure … as to how we might decide what consciousness is for.”23
Teuber was an intense man who helped mentor me in the early days. I can remember a visit he made to Santa Barbara in the late sixties. My wife and I were holding a reception for him in our home in Mission Canyon when he winked at me and said he wanted a word with me alone. We stepped into the bedroom, whereupon he took a recent manuscript I had submitted to the journal Neuropsychologia out of his briefcase and began going through it with a red pencil. I was stunned but grateful for the attention. When we were finished, he jumped up and said, “Let’s go rejoin the party!” I must have said something coherent, because he then invited me to join the newly founded International Neuropsychological Symposium, a wonderful organization that meets yearly in different cities of the world, an event I relished for twenty years.
The Vatican conference, of course, did not solve the mind/body problem. Even so, the varied opinions that were expressed launched a set of arguments and debates within biology and philosophy that continue to this day. The same list of possible solutions was on the table, with Eccles holding tight to Descartes’s two-substance view that mind and body were two separate entities, though he never could find empirical evidence for it. Most leaned toward the materialist view that the mind—consciousness—was produced by matter, but how that happened was as puzzling as ever.
The Vatican conference was a turning point for Sperry. The possibility that mental states could causally affect brain states became his scientific passion, with all of its implications. The psychiatrist of the group, Hans Schaefer of the University of Heidelberg, subscribed to that theory, based on his belief that psychoanalysis worked. Evolutionary theories allowed materialist theories of consciousness to come in two flavors: emergentism and panpsychism. The former proposes that consciousness emerges from unconscious matter once that matter achieves a certain level of complexity or organization. Sperry was leaning heavily in this direction. The latter, panpsychism, tosses the whole problem out by suggesting that all matter has subjective consciousness, albeit in a wide range of types. The idea here is that there is no need for the idea of emergence and complexity to explain consciousness. Consciousness is a primordial feature of all things, from rocks to ants to us.
Returning from the conference, Sperry continued to refine his views. He came out of the mentalist closet the following year at a lecture at his alma mater, the University of Chicago. “I am going to align myself in a counterstand, along with that approximately 0.1 per cent mentalist minority, in support of a hypothetical brain model in which consciousness and mental forces generally are given their due representation as important features in the chain of control.”24 He explained his reasoning: “First, we contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action—a point accepted by many, including some of the more tough-minded brain researchers. Second, the argument goes a critical step further, and insists that these emergent pattern properties in the brain have causal control potency—just as they do elsewhere in the universe. And there we have the answer to the age-old enigma of consciousness.” Sperry had taken conscious experience to be a nonreductive (it can’t be broken down into its parts), dynamic (it changes in response to neural activity), and emergent (it is more than the sum of the processes that produce it) property of brain activity, and said that it could not exist apart from the brain. Denying any type of dualism, he emphasized, “The term [‘mental forces’] fits the phenomena of subjective experience but does not imply here any disembodied supernatural forces independent of the brain mechanism. The mental forces as here conceived are inescapably tied to the cerebral structure and its functional organization.”25 There are no spooks in the system.
By the early 1970s, this notion was gaining some limited acceptance, and it contributed to the growing anti-behaviorism sentiment. Mental images, ideas, and inner feelings were back on the table. They even could have a causal role in explanations. The “cognitive revolution” was on, and it continues to the present.
Modern Philosophers Take a Stab
Meanwhile, philosophers were wrangling about theories that were incorporating the materialist view of the brain. After retiring in 1975, Eccles left the lab behind and joined forces with the eminent philosopher Karl Popper. They agreed with Descartes that the brain must be open to nonphysical influences if mental activity is to be effective,26 that is, if a thought can affect a brain state. Eccles tried to come up with testable hypotheses, but he did not succeed and finally settled for a model of mind/brain interaction without any experimental evidence and without a testable hypothesis. While his form of dualism didn’t have much of a fan club, a different type of dualism flared up again, fanned by the wings of bats.
New York University’s well-known philosopher Thomas Nagel published a head-turning article in 1974 titled “What Is It Like to Be a Bat?” and with it ushered in the whole “But how can we explain the experience of redness?” line of argument. Arguing that consciousness has an essential subjective character (just as Franz Brentano had argued), Nagel states that “an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.” “Like” does not mean “resemble,” such as in the question “What is ice skating like? Is it like roller skating?” Instead, it concerns the subjective qualitative feel of the experience, that is, what it feels like for the subject: “What is ice skating like for you?” (For instance, is it exhilarating?) Nagel called this the “subjective character of experience.” It has also been called “phenomenal consciousness,” and, although he doesn’t say it, it is also referred to as qualia.
For Nagel, there is something that it feels like for the subject of an experience to have that experience, and there is something that it feels like for a creature to be the species it is and no other; and the subjective character of a mental state can be apprehended only by that particular subject. This idea was like (in the sense of “resemble”) serving a plate of carbonara to a hungry footballer: it was gobbled up by philosophers, who had been pining, according to the philosopher Peter Hacker, for salvation from “reductive physicalism or soulless functionalism.”27 For some, the escape hatch became: Science is objective, consciousness is subjective; never the twain shall meet, or if they do, they will meet by some new, as yet undescribed physics or fundamental laws (Nagel’s current stance).28
The philosopher Daniel Dennett, however, has become notorious for challenging Nagel’s question. He says that Nagel doesn’t want to know what it would be like for him to be a bat. He wants to know objectively what it is subjectively like: “It wouldn’t be enough for him to have had the experience of donning a ‘batter’s helmet’—a helmet with electrodes that would stimulate his brain into bat-like experiences—and to have thereby experienced ‘battitude.’ This would, after all, merely be what it would be like for Nagel to be a bat. What, then, would satisfy him? He’s not sure that anything would, and that’s what worries him. He fears that this notion of ‘having experience’ is beyond the realm of the objective.”29
Beyond the realm of science. This is what many consider the unbridgeable gap between subjective and objective realms. The new dualism.
Dennett handles this problem by denying it. He laments that one of the problems with explaining consciousness is that we all think we are consciousness experts, and have very strong beliefs about it, just because we have experienced it. He complains that this doesn’t happen to vision researchers. Even though most of us can see, we don’t think we are vision experts. Dennett claims that consciousness is the result of a bag of tricks: our subjective experience is an illusion, a very believable one, one that we fall for every time, even when it has been explained to us how it comes about physically, just like some optical illusions that still fool us even though we know how they work.
The philosopher Owen Flanagan also disagrees that there is an unbridgeable gap, writing, “It is easy to explain why certain brain events are uniquely experienced by you subjectively: Only you are properly hooked up to your own nervous system to have your own experiences.”30 This seems reasonable. So what’s the big deal? While most philosophers today can accept that every mental event and experience is some physical event, many nonetheless resist the conclusion that the essence of a mental event or experience is completely captured by a description at the neural level. Flanagan simply takes the stance that there is nothing mysterious about the fact that conscious mental states possess a phenomenal side. It is all part of the coding.
So, as we glide into the modern era, nothing is resolved. While neuroscience had figured out how reflexes work, how neurons communicate with each other, how traits and much more are inherited, the field remained clueless as to how the brain creates what we have come to call our phenomenal conscious experience. Nothing like an Einsteinian moment had occurred for the mind/brain sciences, and while the contents of the black box could be explored by cognitive psychologists, young scientists were advised to leave the topic of consciousness alone.
Francis Crick to Modern Science: It’s Okay to Study Consciousness
Two decades after George Miller had set consciousness aside for a decade or two, none other than the ever-intrepid, supremely intelligent, creative, and curiosity-driven Francis Crick stepped in and pulled it off the shelf. Yes, that Francis Crick. From an early age, Crick had been interested in two unknowns: the origin of life and the puzzle of consciousness. After spending thirty years on the first unknown, he was itching to tackle the second. And so in 1976, at the tender age of sixty, when most people are looking forward to retirement, he packed his bags, left Cambridge, and headed off to the Salk Institute in San Diego to begin a second scientific career in neuroscience.
I happened to be visiting the Salk soon after his arrival and was ushered into his spectacular office overlooking the sea. He was just beginning to immerse himself in neuroscience and was surrounded by other talented researchers. I hadn’t a clue how to add to the conversation, so I asked him, “How does one think about the timescales that are prescribed by molecular processes, and how do they relate to the different timescales operative in neural activity? Each level has its story, how do they relate?” He seemed to like that, and a few months later, emboldened by this encounter, I invited him to a small meeting I was organizing on memory in Moorea. He instantly accepted. Crick was always annoyed and impatient when hearing about the status quo of any topic. He liked a good experiment, but he always sought to know what a particular observation meant in a larger context. At the meeting, he was no different. He always shook things up. He seemed to be just the person needed to push consciousness studies forward beyond the well-honed classic positions.
Crick began by teaching himself neuroanatomy and reading extensively on neurophysiology and psychophysics. In 1979, a couple of years into this endeavor, he was asked to write an article for an issue of Scientific American dedicated to the latest in brain research. His assignment: “to make some general comments on how the subject strikes a relative outsider.” Crick noted that he was not happy with behaviorists and functionalists treating the brain as a black box. After all, it was the very workings inside the black box that were in question. “The difficulty with the black-box approach is that unless the box is inherently very simple a stage is soon reached where several rival theories all explain the observed results equally well.”31 And nobody thought that the black box was simple.
Crick also noted that brain scientists were too isolated in their particular subdisciplines. They needed to be less scientifically provincial and more scientifically cosmopolitan by engaging in more interdisciplinary cross talk. Psychologists needed to understand the structure and function of the brain, just as anatomists needed to know about psychology and physiology. Of course, his broad assessments were at the expense of dozens if not hundreds of cognitive scientists32 and budding young cognitive neuroscientists like yours truly,33 who had already begun to toil with the problem of consciousness. Yet it was Crick, with his unique and special status, who yanked the field into realizing that studying the physical basis of consciousness was a crucial task.
Everyone needed a dash of neuropsychology, a bit of physics and chemistry, and Crick thought the new field of communication theory held promise as a theoretical tool, so get a handle on that, too. An overarching theory was only going to happen when all aspects and levels of the brain’s churnings and of human behavior were accounted for. If you were only familiar with one aspect, you didn’t have a chance at any sort of all-encompassing explanation.
One of Crick’s suggestions was particularly difficult. He proposed that we needed to change people’s mind-set about the accuracy of their own introspection, because “we are deceived at every level by our introspection.”34 One example he used of that deception is the blind spot that we have in each eye. Crick also chastised current philosophers—though we can assume Dennett is not on the list—for ignoring such phenomena:
Not everyone realizes he has a blind spot, although it is easy to demonstrate. What is remarkable is that we do not see a hole in our visual field. The reason is partly that we have no means of detecting the edges of the hole and partly that our brain fills in the hole with visual information borrowed from the immediate neighborhood. Our capacity for deceiving ourselves about the operation of our brain is almost limitless, mainly because what we can report is only a minute fraction of what goes on in our head. This is why much of philosophy has been barren for more than 2,000 years and is likely to remain so until philosophers learn to understand the language of information processing.
This is not to say, however, that the study of our mental processes by introspection should be totally abandoned, as the behaviorists have tried to do. To do so would be to discard one of the most significant attributes of what we are trying to study. The fact remains that the evidence of introspection should never be accepted at face value. It should be explained in terms other than just its own.35
Crick concluded:
The higher nervous system appears to be an exceedingly cunning combination of precision wiring and associative nets.… The net is broken down into many small subnets, some in parallel, others arranged more serially. Moreover, the parcellation into subnets reflects both the structure of the world, external and internal, and our relation to it.36
Crick was a theorist at heart, with a particular genius for assimilating ideas and experimental results from a wide range of disciplines, churning them up together, and then formulating new theories and new experiments. He clearly stated the deep problems involved in trying to understand conscious experience. He had the invaluable skill of doing what William James suggested: “The art of being wise is the art of knowing what to overlook.” And wise he was.
Crick soon teamed up with the clever, smart, and endlessly energetic computational neuroscientist Christof Koch at Caltech. They decided to tackle consciousness by setting to work on the visual system of mammals—a topic that had already generated a plethora of experimental data. Their aim was to learn as much as possible about the first processing steps performed on visual information that is received by the visual cortex. Their ultimate goal was to discover the neural correlates of consciousness (NCC): the minimal set of neuronal events and mechanisms jointly sufficient for a specific conscious percept.37 Koch explains: “There must be an explicit correspondence between any mental event and its neuronal correlates. Another way of stating this is that any change in a subjective state must be associated with a change in a neuronal state. Note that the converse need not necessarily be true; two different neuronal states of the brain may be mentally indistinguishable.”38 This all sounds eminently reasonable and straightforward, a quality in short supply in research on consciousness.
To begin their quest, Crick and Koch made two assumptions about consciousness. One was that at any one moment some active neuronal processes correlate with consciousness, while others do not. They ask: What are the differences between them? The second assumption they called “tentative”: that “all the different aspects of consciousness (smell, pain, vision, self-consciousness … and so on) employ one or perhaps a few common mechanisms.”39 If they understood one aspect, they would be on the road to understanding them all. They decided to shelve some discussions in order to avoid wasting time quibbling over them. Skirting the mind/body stalemate, they contended that in order to examine consciousness scientifically, since everyone had a rough idea of what was meant by consciousness, they didn’t need to define it, and thus would avoid the dangers of a premature definition.
Since they were being vague about its definition, Crick and Koch decided to be consistently vague about its function and set aside the question of what consciousness is for. They also chose to assume that some species of higher mammals possess some features of consciousness but not necessarily all. Thus, one may have key features of consciousness without having language. And while lower animals may have some degree of consciousness, they didn’t want to deal with that question at the moment. They assumed that self-consciousness was a self-referential aspect of consciousness and put that aside, too. They also put volition and intentionality aside for the moment, along with hypnotic states and dreaming. Finally, they put qualia aside—the subjective character of experience, the feeling of “red”—believing that once they figured out how one saw red, then perhaps a plausible case could be made that our reds are indeed all the same.
Crick and Koch both acknowledged that the NCC would not solve the mystery of consciousness. What identifying the neural correlates of conscious versus nonconscious processing would do for the empirical study of consciousness would be to supply constraints on the specifications of neurobiologically plausible models. The hope is that elucidating the NCC might provide a breakthrough for the theory of consciousness similar to what the structure of DNA did for genetic transmission. Understanding the architecture of the DNA molecule and making a 3-D model provided clues to how the molecule broke apart and replicated itself, which correlated very well with Mendelian inheritance. The first definitive NCCs discovered will be early steps toward a theory of consciousness, but they in themselves will not provide explanations of the links between neural activity and consciousness. That is what models do—and soon, a fresh crop began to appear.
Crick opened the floodgates: it was okay to study consciousness again. In those previous two decades, a foundation had been laid, built of stacks of empirical data about brain mechanisms. An empirical attack was launched, aided by an ever-increasing arsenal of new methods, which now range from recording, and even controlling, the firing of single neurons (a goal Crick longed for that has since been achieved through optogenetics), to various types of brain imaging, to all the data crunching that computers can provide. Those who heeded Crick’s warning that “introspection should never be accepted at face value. It should be explained in terms other than just its own” found an embarrassment of riches in the brain’s nonconscious processing. Neurobiological models that attempted to explain the links between neural activity and consciousness, using computational, informational, and neurodynamic elements, began to pop up like mischievous ideas in a rascally child. The models vary according to the level of abstraction they address—something we’ll tackle in chapter 5—and while some have shared features, none explains all aspects of consciousness, and none has yet won general approval.
In the upcoming chapters, I plan to lay out a new idea and framework for thinking about the problem of consciousness. I do it humbly and nervously; trying to add to the story previously generated by this pantheon of thinkers and scientists is daunting, to say the least. Still, today we have at our fingertips a vast amount of rapidly accruing new information, and with a little luck, it affords a new perspective on how the brain does its magic. The ideas of Descartes and other past thinkers that the mind is somehow floating atop the brain, and the ideas of the new mechanists that consciousness is a monolithic thing generated by a single mechanism or network, are simply wrong. I will argue that consciousness is not a thing. “Consciousness” is the word we use to describe the subjective feeling of a number of instincts and/or memories playing out in time in an organism. That is why “consciousness” is a proxy word for how a complex living organism operates. And to understand how complex organisms work, we need to know how the brain’s parts are organized to deliver conscious experience as we know it. That is up next.