2.

THE DAWN OF EMPIRICAL THINKING IN PHILOSOPHY

“I don’t think—”

“Then you shouldn’t talk,” said the Hatter.

—Lewis Carroll, Alice’s Adventures in Wonderland

ACROSS THE ENGLISH CHANNEL from Descartes and his fellow Parisians, the British were also puzzling over the meaning of life, soul, and mind. The word “consciousness” caught on with British philosophers, and fifty years after Descartes coined it in his Meditations, John Locke expanded on it, as did the Scotsman David Hume. The philosophers were not alone in this endeavor. The medical world, with its interest in the body and anatomy, also began to explore the issues of mind and brain. Thomas Willis and William Petty were hard at work in Oxford, and their findings were about to influence the simmering debate on mind/brain. To some extent it was the same old story. The scientists were children of religion. Their new scientific knowledge of the world conflicted with their heartfelt childhood religious beliefs. They were experiencing what is now known as cognitive dissonance, the mental discomfort one feels when simultaneously holding two or more contradictory beliefs, ideas, or values. As a result, to reduce this discomfort, people try to explain or justify the conflict, or instead they actually change their beliefs. At the time in question, almost everyone had an overwhelming desire that the belief in God not be a casualty of the discoveries of their young science. Thus, in order to reconcile their thoughts concerning the mind, about which they knew little, and their thoughts about the body, about which they were learning more and more, these scientists began to make rather preposterous suggestions regarding how the two were intertwined. In fact, in the beginning, the neuroscientists of the era were as befuddled as the philosophers by their own felt sense of consciousness and their new commitment to objective thought.

Adding to the fount of ideas springing up inside France and England was the avalanche of work by the Germans. From Leibniz to Kant, the continent was abuzz with ideas about the nature of mind. Watching the ideas form, morph, and change is a wonder in itself. Descartes, with all of his brilliance and confidence, had thrown down the gauntlet by proposing that the mind is not made of the same stuff as the brain. This intellectual act proved challenging to the sophisticated and probing minds of the next two hundred years. In many ways the long discussion was a free-for-all, and dazzling in its importance.

The Blank Slate, Human Experience, and the Beginnings of Neuroscience

The mid-seventeenth century found England embroiled in a vicious civil war over religion and the power of the monarch. Thomas Hobbes, a royalist, and a polymath if there ever was one, had returned to Paris from London, where he had been beleaguered by the reception of a short book he had written about the politics of the time. In Paris, he found a job as tutor to the exiled Prince Charles (the future Charles II) and quickly enough became one of the guests in Mersenne’s salon. From the beginning Hobbes, who was trained in physics, didn’t seem concerned about the immaterial soul. He flat-out rejected Descartes’s notion of the soul, which he thought was a delusion. Reason, Hobbes thought, is not enabled by some mysterious non-substance. It is merely the body’s ability to keep order in the brain. Hobbes thought like an engineer: build it, make it work, and that’s it—no ghosts in the system.

Hobbes had his hands full tutoring the prince and writing two books: one on vision and another about the body and its machinery. He needed an assistant and took on the clever young English medical student William Petty. For some reason, Hobbes had the preconceived notion that the senses produce a pressure that causes the motion of the beating heart. The young Petty helped him study Vesalius’s books, and Hobbes found no evidence there for his theory. Nonetheless, and in keeping with the basic nature of many scientists, Hobbes plodded on.

Hobbes attended dissections with Petty, expecting to see nerves sprouting from the heart like spines from a sea urchin and spreading in all directions. They weren’t there. When the penny finally dropped, Hobbes actually abandoned his hypothesis. In science, as in life, our social environment provides the opportunity for ideas to be shared back and forth. Hobbes’s reversal of thought so impressed Petty that he took up this method of extensively exploring a question and being flexible: if his suppositions didn’t match up with observations, he would change his mind. When he returned to England, Petty carried a material gift from Hobbes under his arm—a microscope—and a conceptual bequest in his head: the conviction that the body was an assembly of parts that ran like a machine. Still, Hobbes’s greatest gift to Petty was to urge him to understand the value of answering a question through observation and experimentation, rather than twisting observations to fit one’s own theories. This is easier said than done, believe me! No one likes to admit they have been wrong.

Petty became an exceptional anatomist. Not long after he returned to England, he was ensconced at Oxford. Like Vesalius before him, he had a steady stream of cadavers from the gallows. He was joined by another young physician, Thomas Willis, a royalist and a staunch Anglican. Because of this stance, which was not locally popular, Willis’s training had been haphazard. Petty corrected that with a vengeance, and over the next five years he turned Willis into another extraordinary anatomist who likewise favored learning from observation and experimentation. The very young field of neuroscience was just getting started in Britain when these two took the reins. Soon, no one could ignore the centrality of the brain when it came to thinking about mental states, consciousness, and still, for some, the soul.

One Small Step for Science

Lots of things go into establishing a great scientific reputation, especially when a field is young and untested. Good luck befell Petty and Willis when, about a year after Petty started his job, a coffin arrived in his office holding the latest victim of the gallows, Anne Greene. She had been raped, and later condemned to death for murdering her newborn infant. She had hung by her neck for a full half hour, and as was common at the time, her friends had clung to her body as she swung from the rope, using their weight to hasten her death. At the autopsy the next day, Petty’s office soon filled with a crowd.

With Petty at the helm, dissecting had become a spectator sport of sorts. But before he entered the room someone raised the lid of the coffin, and a gurgle was heard from inside, à la Edgar Allan Poe. A spectator was stomping on Greene’s chest as Petty and Willis walked in the door. They worked frantically to revive her by various means, and succeeded to the point that she was asking for beer the next morning. The court justices wanted her to hang again, but the two doctors convinced them that she had had a miscarriage (she had been only four months pregnant) and the baby had actually been dead when it was born. She was acquitted, and later went on to have three children. This rather spectacular event brought fame and fortune to Petty and Willis. It set them up for an enviable research life with no need to go begging for financial backing.

Petty and Willis worked together for another four years. Under Petty’s tutelage, Willis began to autopsy his patients when they died, to better understand the body and how it was affected by different diseases, and perhaps to find out what caused them. Later, Petty left for greener pastures, traveling as a physician with Oliver Cromwell’s army in Ireland (and still later becoming a well-known economist, a member of Parliament, and a charter member of the Royal Society). Willis took over and became particularly interested in the brain, developing dissection techniques that allowed him to see its anatomy more clearly than had his predecessors. With Christopher Wren, who, among other achievements (he was an astronomer, an anatomist, and an architect), pioneered the art of injecting dyes into veins, Willis outlined the vascular system of the brain by injecting ink and saffron into the carotid artery of a dog. He was the first to understand the function of the vascular structure at the base of the brain, which, in his honor, is called the Circle of Willis.

Together, Willis and Wren produced the most accurate drawings of the human brain to date and published them in a book, The Anatomy of the Brain and Nerves. It sold out, and went through four printings the first year. Its anatomical drawings remained unsurpassed for more than two hundred years.

Even with all his anatomical knowledge, Willis clung to the idea that there were vital and sensitive spirits keeping the body alive, an idea from the past that seemingly wouldn’t die. But Petty had trained him well. Willis eventually changed his mind when his students convinced him, through numerous clever experiments, that spirits were not involved. The blood was picking up something from the air and delivering it to the muscles, and that was the body’s driving force. They didn’t quite come up with the chemical element oxygen, but they were nearly there.

After carrying out numerous animal dissections, Willis saw close similarities between human brains and animal ones. From his observations, he concluded that human souls and animal souls were much the same, and differed in ways that he could observe only in their bodies. For example, animals with a bigger olfactory bulb were better at smelling. Willis saw that humans had a much bigger cerebral cortex than other animals and concluded that it was the location of memory, because humans could remember so much more. While this might seem like crude and simplistic thinking, Willis’s conclusion is not a whole lot different from some of the most promising ideas floating around modern neuroscience. Indeed, the 2016 Kavli Prize in Neuroscience was shared by Michael Merzenich, a scientist who demonstrated how brain areas associated with particular activities enlarge with use.

Still, Willis’s animal dissections had presented him with a big problem. Although humans are able to think in vastly different ways than other animals, their brains appeared to be very similar in organization. Since he could find no material brain substance that could account for this difference, his logical conclusion was that something else must be giving humans this ability: a rational soul. So here we go again. Since he couldn’t physically locate rational thought in the body, he agreed with Gassendi’s view that it was immaterial, yet located in the brain, just like Descartes claimed. Willis thought that nerves pick up sensations from the outside world and animal spirits carry them back to the brain. The spirits follow pathways deep into the brain to a central meeting place: the huge nerve bundle that connects the two half brains, the corpus callosum. Thus, once again a brilliant mind got the key issue wrong. It is as if a modern scientist looked inside a computer, didn’t see anything special, and concluded there must be an immaterial spirit hovering over the motherboard that makes the computer work.

Soul as King, Not CEO

Willis, the royalist, saw this rational soul as the “king” of the body. Like the head of any big organization, the king only has information that is brought to him and does not have direct knowledge of the outside world. As with any such arrangement, this information could be flawed or could become unavailable. Because the brain itself is a physical organ, it or its parts could become ill and not provide proper intelligence, thus affecting the supply line of information to the rational soul. When the brain is ill, there is a chance that the rational soul could be affected, sometimes permanently. This was and is a powerful insight. As you might expect, Willis described various mental illnesses that he had come across in his patients to back up his theories.

Willis is important in our consciousness quest because he was one of the first to link specific brain damage to specific behavioral deficits, and because he recognized that specific parts of the brain accomplish different tasks. In his book that presented these ideas, Two Discourses Concerning the Soul of Brutes, he described a brain autonomously performing various tasks, not in a single location, but distributed across its terrain; he described communication channels, though in those days, before electricity had been discovered, he envisioned spirits flowing within them rather than an electrical signal. He set the wheels rolling toward what has become today the field of cognitive neuroscience, the modern science that has taken up the charge of trying to understand human conscious experience.

The Blossoming of John Locke

Both Willis’s empirical work on anatomy and his theorizing are thought to have had a big impact on the budding philosopher John Locke. He, too, started out as a physician, training at Oxford with Willis. One of the few lecture series Locke felt was worth attending was Willis’s on anatomy. Locke subsequently became friends with another physician, Thomas Sydenham, who had also been a schoolmate of Willis’s. Sydenham had gained much of his medical knowledge through real experience, the sort of method Locke came to believe was fundamental.

As he churned through hundreds of patients, Sydenham noticed that particular diseases had the same cluster of symptoms no matter who the patient was, whether a blacksmith from Sussex or the Duke of York himself. He came to believe that diseases could be differentiated from each other by their characteristic lists of symptoms, and began to classify diseases as if they were plants. This was revolutionary because up until that time, Galen’s diagnoses and treatments, which were more subjective, were still popular. For Galen, each person’s disease was caused by a unique imbalance of humors requiring tailor-made treatments. Sydenham, on the other hand, began what now sounds like the early rumblings of evidence-based medicine. He tried different treatments on a group of patients for a disease and evaluated and adjusted the medicines according to their effectiveness. Ironically, Galen’s view is more consistent with contemporary medicine’s new enthusiasm for personalized treatments, while Sydenham’s view is consistent with algorithmic medicine, which is dictated by adopting standard practices and procedures for all patients with the same particular syndromes. We are going to find out in chapters 7 and 8 that it is common in science to come across differing approaches, such as these, that result in major battles over “either/or” responses when, in fact, there is at least one additional option. These are known as “false dilemmas,” a type of informal fallacy. Perhaps unexpectedly, we are going to find out that it is physicists who have put their foot down and shown that neither answer alone is usually sufficient. This insight will come in handy as we consider how neurons make minds.

As you would expect from a future philosopher, Locke grilled Sydenham mercilessly about his methods. Did you really need to know the primary cause of a disease to treat it? Was it even possible to know it? While Willis thought causes were knowable and pursued this aim through his dissections and experiments, Locke and Sydenham did not. They came to believe that the causes of disease were beyond human understanding, and Locke later became convinced that the workings of the mind and the essences of things were equally unfathomable. He would approach philosophy in the same manner that Sydenham approached disease: he limited himself to talking about ideas based on everyday experiences. It is no wonder, given this stance, that Locke came up with the idea of the blank slate, the famous tabula rasa: a mind whose contents are formed only from experience and self-reflection. This formulation serves as the basis for social science’s current standard theory of man: nurture is in charge.

While both Locke and Descartes wound up being dualists, they differed on many details. Approaching the question of the soul from a psychological perspective, Locke wrote, “Consciousness is the perception of what passes in a Man’s own mind.”1 This perception or awareness of a perception is accomplished, according to Locke, by an “internal sense” that he called “REFLECTION, the ideas it affords being such only as the mind gets by reflecting on its own operations within itself.” Locke even goes so far as to say that the existence of unconscious mental states is impossible: “It [is] impossible for any one to perceive, without perceiving, that he does perceive.”

Contrary to Descartes, Locke severed the connection between the soul and the mind (the thing that thinks). Remember that for Descartes, the mind and the soul are one: Thought is the main attribute of minds, and a substance cannot be without its main attribute. Thus, minds think all the time, even while sleeping, though those thoughts are immediately forgotten. Locke agreed that the waking mind is never without thoughts, but, drawing from his experience, he rejected the notion that the dreamless sleeper has thoughts. The sleeper, however, must still have his soul; otherwise, what would happen if he died while sleeping? So for Locke, the mind (with its property of consciousness) and the soul must be separate! Simple!

Descartes also limited the contents of consciousness to the present operations of the mind. Locke put on no such limits. For him, one can be conscious of past mental operations and actions. Locke saw consciousness as the glue that binds one’s story together into one’s sense of self, one’s personal identity. He believed that consciousness allows us to recognize our past experiences as belonging to us. While he agreed with Descartes that humans have free will, he skirted the issue about how matter could produce it by adding an omnipotent God to the equation and saying God made it so.

Here were some of the smartest men in the world modeling how body, mind, and soul must work. They had to fit what seemed to them to be indisputable realities into a model that differentiated humans from animals and still made room for gods, minds, and consciousness. This is what modelers do, even today. They set up a model and they keep tweaking it, with those tweaks supposedly based on new information, until the model appears to explain a problem. In this case, the model was a mess.

At the end of the seventeenth century, ideas about what consciousness is were plentiful but confusing. Facts were accumulating that someday would provide future theorists with frameworks, but a basic, comprehensive conceptual structure explaining consciousness was still missing. In short, philosophers were at odds over the whole idea, and some thought the philosophical conception of consciousness was incoherent. It took a precocious Scottish philosopher, David Hume, who, at the age of eighteen, was already impatient with philosophy’s “endless disputes,” to set the idea of consciousness on a straight course toward the future. He found the ancients’ moral and natural philosophy “entirely hypothetical, & depending more upon Invention than Experience.”2

Getting Ready for Modern Conceptions

Hume seemed to burst onto the scene as a prefabricated iconoclast, ready to cut through all the talk about idealism. He thought the idea that the mind was something supernatural, standing apart from the body, was a delusion, and downright silly. He wanted to put that to rest and to structure a science of the real nature of life. Hume did just that, thereby redirecting human thinking about the nature of mind for centuries to come.

Hume soon realized that the same fallacies that were rampant in the ancient world—such as relying on hypotheses based on speculation and invention, rather than experience and observation—were also to be found among his contemporaries. Hume believed that our knowledge of reality is based on our experience and, for better or worse, on the axioms we choose. Axioms are statements that seem so evident or well-established that they are accepted without controversy or question and are simply asserted without proof. To put it simply, an axiom is a fundamentally unprovable assumption or an opinion. The problem with basing knowledge on an axiom is that then what one concludes about reality is dependent on the axioms one chooses. As the Duke physicist Robert Brown warns, “There is nothing more dangerous or powerful in the philosophical process than selecting one’s axioms.… There is nothing more useless than engaging in philosophical, religious, or social debate with another person whose axioms differ significantly from one’s own.”3

Indeed, Hume concluded that many of the questions that philosophers asked were pseudo questions—that is, questions that cannot be answered with the likes of logic, mathematics, and pure reason because their answers will always be founded at some level on an unprovable belief, on an axiom. He thought that philosophers should stop wasting everyone’s time writing copiously about pseudo questions, dump their a priori assumptions, and rein in their speculations, as the scientists had. They had to reject everything that was not founded on fact or observation, and that included eliminating any appeal to the supernatural. Hume was pointing at Descartes and others who believed they had conclusively demonstrated the dualist philosophy through reason, math, and logic. Today, Hume’s stance is relatively common, in part because contemporary academic philosophers are employed by modern research universities and are surrounded by scientific experiments. Even though Cartesian ideas are still around, they are not taken seriously by most philosophers or scientists. But in the early eighteenth century, Hume’s attack on Descartes was bold and groundbreaking.

Hume’s grand plan was to come up with a “science of man”: that is, to figure out the fundamental laws guiding the mind’s machinations consistent with what was known about the Newtonian world, using Newton’s scientific method. He felt that understanding human nature, including its capabilities and frailties, would allow us to better comprehend human activities in general. It would also allow us to appreciate the possibilities and pitfalls of our intellectual pursuits, including what aspects of our thinking might constrain our attempts to understand ourselves. In fact, he thought his science of man should take top billing, above Newtonian sciences, writing, “Even Mathematics, Natural Philosophy, and Natural Religion, are in some measure dependent on the science of Man; since they lie under the cognizance of men, and are judged of by their powers and faculties. ’Tis impossible to tell what changes and improvements we might make in these sciences were we thoroughly acquainted with the extent and force of human understanding.”4 As they say, Hume was on it, and he was bringing to the mind/brain muddle some hard-nosed clarity. Some consider him to be the father of cognitive science.

It was 1734, and, at the age of twenty-three, Hume attended Descartes’s alma mater, the Jesuit Collège de La Flèche in Anjou, France. In his spare time, he wrote the classic A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects, which he recast, buffed up, corrected, and clarified in the 1748 publication of An Enquiry Concerning Human Understanding. To get going, he divided all mental perceptions into two categories: impressions, which are either external sensations or internal reflections, such as desires, passions, and emotions; and ideas, which are from memory or imagination. He argued that ideas, in the end, are copied from impressions, and he used the word “consciousness” to mean thought. Known as the copy principle, this is Hume’s first principle in the science of human nature: “That all our simple ideas in their first appearance are deriv’d from simple impressions, which are correspondent to them, and which they exactly represent.”5

Yet, Hume notes, our ideas do not occur randomly. If they did, we wouldn’t be able to think coherently. Thus he proposed the principle of association: “[T]here is a secret tie or union among particular ideas, which causes the mind to conjoin them more frequently together, and makes the one, upon its appearance, introduce the other.”6 Further, these associations follow three principles: resemblance, contiguity in time and place, and causation. Of these, Hume recognizes that causation takes us beyond our senses: it links present experiences to past experiences and infers predictions of the future. Hume concludes that “all reasonings concerning matters of fact seem to be founded on the relation of Cause and Effect.”7 This conclusion foretold the downfall of Descartes’s dualism.

For Hume, causality is made up of three fundamental components: priority in time, proximity in space, and necessary connection. Hume argues that the idea of priority in time comes from observation and experience. Thus, if we say that event A caused event B, then that means that A came before B. The idea of proximity in space also comes from observation, for when we observe event B and say it is caused by event A, it is in proximity to it. When I sauté garlic in olive oil, it is my house that immediately fills with a mouthwatering fragrance—it does not happen two hours later, nor at my neighbors’. The sautéing garlic and the fragrance have to be in proximity to each other in both time and space for me to infer cause and effect.

The problem for Hume arises from the third component: the necessary connection. Hume’s view was that the necessary connection between cause and effect was not something that was derived from mere thought, or what he called the relation of ideas—that is, something demonstrably certain yet discoverable independent of experience, such as 4 × 3 = 12. No, necessary connection required experience. For example, perhaps while I am sautéing that garlic, the phone rings. Do I conclude that sautéing garlic causes the phone to ring? No. My mind infers no necessary connection, unless it happens every single time I sauté garlic.

Consider being handed a previously unknown white powder. You would have no notion of what the effect of swallowing some might be. You might be a rocket scientist, but you would only be able to describe the powder’s color, texture, odor, and—if you were a rather reckless rocket scientist, which seems an oxymoron—taste. But without actual observation or previous experience of the powder’s effects, you won’t know them or be able to predict them. Hume sees the idea of a necessary connection between cause and effect as an idea that forms in the mind from experience. It is not an actual feature of the external world that we derive from the senses. Hume argued that when people regard any set of events as causally connected, they are merely observing that the two events always go together: events like A are always followed by events like B, but, as they say, correlation is not causation. He reasons that through association, the impression of one event brings with it the impression of the other, and if they continue to show up together, eventually the association becomes habitual. So when we see event A, through habit we expect event B. We expect the fragrance when we sauté garlic because one always follows the other, but we don’t expect the phone to ring. Here Hume anticipates Pavlov and his conditioning experiments. Hume concludes that since this habitual linking of the two events cannot be observed via the senses, the only source we can identify for the idea of causality is the compulsive linking that goes on in our brain, which produces a feeling of expectation; that feeling is the source of the idea of causality.

So, based on repeated experience, we infer or presume by force of habit that B will, in the future, always follow A. Here we run into a logic snafu. Consider this: You may have eaten shrimp for years, and then one night, driving alone with your two-year-old to meet your father at his fishing cabin, you stop at a roadside diner for a snack. You hungrily sink your teeth into a succulent shrimp, and within seconds, your throat closes up. You gasp for breath and realize you are having an allergic reaction. Instead of postprandial bliss, you are in postprandial hell. How do we get from “I have no problem eating shrimp” to “I will not have a problem eating shrimp tonight,” which usually works, but in rare cases (what essayist Nassim Taleb would call a Black Swan event) doesn’t? The problem is that to make any causal inference about the future, we have to assume something: the future effects will be like the past ones. We make this assumption multiple times each day. How do we arrive at this assumption? Hume shows that we do not make it through logic or reason, because, logically, it is easy to conceive that future effects may be different. Logically, you know that you could go out to start your car and, unlike yesterday and the previous five years of yesterdays, find that turning the key causes nothing to happen. Hume also discounts our use of probable (empirical) reasoning because it is circular: it supposes that which one is trying to prove (future effects will be like the past). He concludes that our belief that nature is uniform and future effects will be like the past must be derived not rationally but psychologically, from a habit based on association.

This is really walking on the wild side, and Hume knew it. He is questioning the idea of causation and making it clear that it is an axiom, an assumption that is without the hope of proof. Hume is not actually questioning that effects have causes; he is questioning where our certitude that they do comes from. In a letter to John Stewart, a professor of natural philosophy at Edinburgh, Hume wrote in 1754: “But allow me to tell you that I never asserted so absurd a Proposition, as that any thing might arise without a cause: I only maintain’d, that our Certainty of the Falsehood of that Proposition proceeded neither from Intuition nor Demonstration; but from another Source.”8

Hume showed that although navigating the world would be extremely tricky without it, the axiom of cause and effect cannot be proved through mathematics, logic, or reason. Hume admired Newton, but here he called into question the philosophical basis of Newton’s science and, with it, the mechanistic certainty of both the world and the mind/brain. We will see that questioning mechanistic certainty is going to lead us into some interesting territory in the upcoming chapters.

Me, Myself, and Who?

Hume also had a few things to say about the self, that is, our personal identity. These he garnered from introspection, and he concluded that the self consists of a bundle of perceptions, but that there is no subject in which those perceptions appear: “For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception.”9 He does admit, however, that he has to account for the fact that he does have some idea of personal identity. This he attributes to associations of perceptions. For Hume, the self is nothing but a bundle of experiences linked by the relations of causation and resemblance, made from our never-ending chain of perceptions. These our imagination bundles together to give rise to an idea of an identity. Memory then extends this idea beyond immediate perceptions, linking it to those in the past.

In other words, Hume thought that we get the idea of a permanent self-identity by our way of thinking, which confounds a succession of related objects, in this case our perceptions, with an uninterrupted and invariable object with a permanent identity, such as a chair. Hume asserted that to justify “this absurdity, we often feign some new and unintelligible principle, that connects the objects [perceptions] together, and prevents their interruption or variation. Thus we … run into the notion of a soul, and self, and substance, to disguise the variation. But we may farther observe, that where we do not give rise to such a fiction, our propension to confound identity with relation is so great, that we are apt to imagine something unknown and mysterious, connecting the parts, beside their relation.”10 Hume could be chastising from the grave the many who now think of consciousness in such terms.

But he also was shaking his finger at Descartes’s res cogitans, or “the thing which thinks.” Hume objected to the idea that the mind is a thinking thing. He saw it more as a stage with the brain providing the entertainment: “The mind is a kind of theatre, where several perceptions successively make their appearance; pass, re-pass, glide away, and mingle in an infinite variety of postures and situations.”11 Hume did not assume that a diversity of experience translates into a single unity, a subject. He reasoned that through introspection, all that we can capture about our self is a bunch of perceptions and ideas. We never catch the mind that supposedly dreams them up. For him, the self is a bundle of perceptions with no bundler, no substantial, persisting, and unchanging essence. If there is no bedrock “I,” then substance dualism is false.

We leave the eighteenth century with perhaps its greatest philosopher looking askance at both Descartes and Locke. Yet Hume, too, was missing something big: the many silent minds within us that influence our behavior. Our subconscious mind, which monitors each of us, is like a hidden spy in the house. This may seem like a supernatural force, but in reality it is neural processing going on below the level of conscious awareness.

Germany and the Birth of the Unconscious Mind

It was the Germans’ turn to contribute to the conversation, and they stirred up talk about unconscious mental processes. Arthur Schopenhauer, for example, wrote: “For the Zeitgeist of every age is like a sharp east wind which blows through everything. You can find traces of it in all that is done, thought and written, in music and painting, in the flourishing of this or that art: it leaves its mark on everything and everyone.”12

Early in the nineteenth century, for the purposes of our quest, that sharp east wind was blowing out of Germany, and more specifically out of the mouth of Arthur Schopenhauer. He took the wind out of the sails of Descartes, who thought that the mind is fully accessible and that nothing is hidden from conscious reflection. Schopenhauer was a philosopher who was focused on the motivations of the individual. He concluded that they aren’t too pretty: humans are motivated by their will and not by their intellect, though they may firmly deny it. In his 1818 publication, The World as Will and Representation, he came to the conclusion that “man can indeed do what he wants, but he cannot will what he wants.” In essence, not only is the will (i.e., our subconscious motivations) in charge, but the conscious intellect does not realize it. Schopenhauer made this clear when describing the will as blind and strong and the intellect as sighted but lame: “The most striking figure for the relation of the two is that of the strong blind man carrying the sighted lame man on his shoulders.”13

Schopenhauer’s framing kicked the problem of consciousness onto a much larger playing field. The mind, with all of its rational processes, is all very well but the “will,” the thing that gives us our “oomph,” is the key: “The will … again fills the consciousness through wishes, emotions, passions, and cares.”14 Today, the subconscious rumblings of the “will” are still unplumbed; only a few inroads have been made. As I write these words, enthusiasts for the artificial intelligence (AI) agenda, the goal of programming machines to think like humans, have completely avoided and ignored this aspect of mental life. That is why Yale’s David Gelernter, one of the leading computer scientists in the world, says the AI agenda will always fall short, explaining, “As it now exists, the field of AI doesn’t have anything that speaks to emotions and the physical body, so they just refuse to talk about it.” He asserts that the human mind includes feelings, along with data and thoughts, and each particular mind is a product of a particular person’s experiences, emotions, and memories hashed and rehashed over a lifetime: “The mind is in a particular body, and consciousness is the work of the whole body.” Putting it in computer lingo, he declares, “I can run an app on any device, but can I run someone else’s mind on your brain? Obviously not.”15 Picture Shaquille O’Neal and Danny DeVito trading brains. Danny would be ducking under doorframes and Shaq would be missing the basket by miles.

The will, according to Schopenhauer, is the will to live, a drive that wheedles humans and all other animals into reproducing. For him, the most important purpose of human life is the ultimate end product of a love affair, offspring, because it determines who makes up the next generation. Schopenhauer puts the intellect in the backseat. It isn’t the driver of behavior and also isn’t privy to the will’s decisions; it’s just an after-hours spokesperson, making up stories as it goes along to explain ex post facto what the will has wrought.

Schopenhauer, in deposing conscious intellect, also opened up the Pandora’s box of the unconscious. He described our distinctly conscious ideas as merely like the surface of a pool of water, while the depths are made up of indistinct feelings, perceptions, intuitions, and experiences mingled with our personal will: “Consciousness is the mere surface of our mind, and of this, as of the globe, we do not know the interior, but only the crust.”16 He said that our real thinking seldom takes place on the surface, and thus can rarely be described as a sequence of “clearly conceived judgments.”

Schopenhauer ushered in the world of unconscious mental processes several decades before Freud took over the limelight, but it was by no means a new idea. Recall that Galen had recognized that many of the body’s processes are carried out without cognition—in particular, those that keep the body alive, such as breathing, and also one’s natural urges. In the nineteenth century, however, the idea picked up momentum. In 1867, after many years studying the physiology of the eye, the German materialist physician, physicist, and philosopher of science Hermann von Helmholtz proposed that unconscious inference, that is, an involuntary, pre-rational, and reflex-like mechanism, is at work in visual perception: the visual system in the brain takes the incoming raw visual data and stitches it together into the most coherent picture.17 This was a different type of processing than Hume had proposed in his copy principle, but it was not a new idea, either. It had been suggested in the eleventh century by the Arab scientist Alhazen.

Helmholtz was mentor to fellow materialist physician and physicist Ernst Brücke. Both were dedicated to the idea that the elements that make up the mind are physical, and that all the causal relations between those elements are governed by the same mechanical principles that govern physics and chemistry. No vital spirits, no mysticism, no ghosts. The mind and the body are one. Brücke went on to become a professor of physiology at the University of Vienna, where he would have a great deal of influence on one of his students: Sigmund Freud. Can you imagine the intense excitement of the intellectual and scientific atmosphere? No more spooks in the system. It was just the brain, made up of parts, many of which worked outside conscious awareness, all driven by chemistry and physics.

In 1868, a Dutch ophthalmologist, Franciscus Donders, came up with an idea that was going to give those interested in studying the mind’s functioning a new tool. Donders realized that by measuring differences in reaction times, one can infer differences in cognitive processing. He suggested that the amount of time it takes to identify a color is the difference between the time it takes to react to a particular color and the time needed to react to light. With this idea, psychologists recognized that they could study the mind by measuring behavior, and the field of experimental psychology was born. Indeed, this very method of Donders, as well as his groundbreaking insights into cerebral oxygen consumption, led to the dramatic breakthroughs in understanding cognitive processes using brain imaging as first carried out by Marcus Raichle, Michael Posner, and their colleagues at Washington University in Saint Louis more than a hundred years later.
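Donders’s subtractive logic comes down to simple arithmetic. The sketch below is only an illustration, with invented reaction times and the simplifying assumption that the color-identification task contains every processing stage of the simple light-detection task plus one extra stage:

```python
# A minimal sketch of Donders's subtractive method, using invented reaction
# times in milliseconds (not real data). The working assumption is that the
# color-identification task includes every stage of the simple detection task
# plus one additional stage, so the difference between the mean reaction
# times estimates how long that extra stage takes.

from statistics import mean

simple_rt = [212, 198, 205, 220, 201]     # hypothetical: respond to any light
identify_rt = [285, 297, 278, 305, 290]   # hypothetical: respond only to a color

extra_stage_ms = mean(identify_rt) - mean(simple_rt)
print(f"Estimated time needed to identify the color: {extra_stage_ms:.0f} ms")
```

The same subtractive logic, applied to images rather than stopwatch readings, underlies the cognitive-subtraction designs used in the brain-imaging work mentioned above.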

Rumblings about the deep unconscious mind were also being heard in England, and were already accepted by 1867, as evidenced in the writings of the British psychiatrist Henry Maudsley: “The preconscious action of the mind, as certain metaphysical psychologists in Germany have called it, and the unconscious action of the mind, which is now established beyond all rational doubt, are assuredly facts of which the most ardent introspective psychologist must admit that self-consciousness can give us no account.”18 Maudsley goes on to state that “the most important part of mental action, the essential process on which thinking depends, is unconscious mental activity.”19

Soon after, in 1878, the British journal Brain began publication. The next year, the journal published an article by the polymath Francis Galton in which he wrote about the findings of an experiment he performed on himself. He looked at a word written on a card and timed with a stopwatch how long it took him to associate two ideas with the word, which he then wrote down. He had seventy-five words and he performed this task in four very different locations at intervals of about a month. His findings surprised him. From his list of seventy-five words, gone over four times, he had produced only 289 different ideas. Almost 25 percent of the time a word brought forth the very same associated words during all four sessions, and in an additional 21 percent of trials, the same associations popped up on three out of the four occasions, showing much less variety than he expected. “The roadways of our minds are worn into very deep ruts,” Galton remarked. He concluded:

Perhaps the strongest of the impressions left by these experiments regards the multifariousness of the work done by the mind in a state of half-unconsciousness, and the valid reason they afford for believing in the existence of still deeper strata of mental operations, sunk wholly below the level of consciousness, which may account for such mental phenomena as cannot otherwise be explained.20
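Galton’s bookkeeping is easy to mimic. The sketch below uses a handful of invented cue words and responses, not Galton’s actual lists, simply to show the kind of tally he describes—how many cue words evoked the same association in all four sessions, and in exactly three of the four:

```python
# A toy re-creation of Galton's tally with invented data (not his word lists).
# For each cue word we record the association produced in each of four
# sessions, then count how often the same association appeared in all four
# sessions and in exactly three of the four.

from collections import Counter

associations = {                      # hypothetical cue word -> four responses
    "abbey":    ["monk", "monk", "monk", "monk"],
    "carriage": ["horse", "horse", "wheel", "horse"],
    "river":    ["boat", "fish", "bridge", "boat"],
}

n = len(associations)
all_four = sum(1 for r in associations.values() if len(set(r)) == 1)
three_of_four = sum(
    1 for r in associations.values() if Counter(r).most_common(1)[0][1] == 3
)

print(f"Same association in all four sessions: {all_four / n:.0%}")
print(f"Same association in three of the four: {three_of_four / n:.0%}")
```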

Concurrently, the conscious mind was about to get its own field. In 1874, a young German professor of physiology, Wilhelm Wundt, published the first textbook in the field of experimental psychology, Principles of Physiological Psychology. In it, he marked out the territory for this new discipline, which included the study of thoughts, perceptions, and feelings. Wundt was particularly interested in analyzing consciousness, and thought that this should be the focus of psychology. He outlined a system to investigate the immediate experiences of consciousness through self-examination. This was to include an objective observation of one’s feelings, emotions, desires, and ideas. Five years later, at the University of Leipzig, he opened the first psychology laboratory, thereby earning himself the moniker “father of experimental psychology.” He argued that law-like regularities in humans’ inner experience could be identified through experimentation. Wundt believed that neurophysiology and psychology studied the same process from different perspectives, one from the inside and the other from the outside.

Freud, the Unconscious Mind, and His Flip-Flop on Mechanism

Meanwhile, the somewhat revolutionary idea of the unconscious mind really caught some traction with the contributions of Sigmund Freud. Was it the shock factor of Freud’s psychoanalytic theories that brought them to prime time? At any rate, early in his career Freud was already biting off more than he could chew. In 1895, he wrote The Project for a Scientific Psychology, in which he championed the ultra-materialist idea that every mental event was identical to a neurological event. He declared that the first step toward the goal of a scientific psychology was to identify and precisely describe the neural event associated with each mental event—an early version of the current quest to find the neural correlates of consciousness. If that weren’t enough, he went on to propose the second step, “eliminative reductionism”: the vocabulary used to describe mental states was to be axed, and a new neurological vocabulary substituted. So instead of speaking of your jealousy, you would instead comment that your area J2 is firing at a specific rate and velocity. Freud proposed this change not just for those studying the brain, but for everyone. Everyone. Poetry would be quite different, as would Valentine’s Day cards: “My p392J fires 95 percent faster when my L987T corresponds with your face.” Probably the number of coordinates for “my” would have taken up too much space.

But almost as soon as the manuscript was finished, Freud changed his mind totally and completely. Owen Flanagan reports, “In 1895, the very year of the Project, Freud stated it was a ‘pointless masquerade to try to account for psychical processes physiologically.’”21 Not only that, he decided that mental events should be spoken of only in the vocabulary of psychology. No reductionism. Flanagan traces one of the roots of this argument to the German philosopher, psychologist, and former priest Franz Brentano, one of Freud’s medical school instructors.

Brentano wanted philosophy and psychology to be practiced with methods as exacting as those used in the natural sciences. He distinguished between two different approaches to psychology, what he called a genetic approach and a descriptive approach. Genetic psychology would study psychology from the traditionally empirical third-person point of view, while descriptive psychology, which he sometimes referred to as “phenomenology,” was aimed at describing consciousness from the subjective first-person point of view. He agreed with the eighteenth-century philosophers that all knowledge was based on experience and argued that psychology must employ introspection in order to empirically study what one experiences in inner perception. Herein lie the roots of another definition of the word “consciousness”: the awareness and subjective feel of phenomenal experience, which we will return to in the next chapter.

Brentano maintained that the difference between mental phenomena and physical phenomena is that physical phenomena are the objects of external perception, whereas mental phenomena have content and are always “about” something, that is, they are directed at an object. That object, Brentano specifies, “is not to be understood here as meaning a thing,”22 but is a semantic object. So while you could desire to see a horse, you could also desire to see a unicorn, which is an entirely imaginary object, or you could desire forgiveness, which, although it may or may not be imaginary, is a semantic object but certainly not an object you can put on the table. Brentano argued that this “aboutness” is the main characteristic of consciousness and referred to the status of the objects of thought with the expression “intentional inexistence.” Owen Flanagan writes, “This view, which has come to be known as ‘Brentano’s thesis,’ implies that no language that lacks the conceptual resources to capture the meaningful content of mental states, such as the language of physics or neuroscience, can ever adequately capture the salient facts about psychological phenomena.”23

Freud, drawing on experiences he had with patients and linking them with the notion of unconscious processes, built a systematic psychological theory. He divided the mind into three levels: the conscious mind, which includes all that we are aware of; the preconscious mind, containing ordinary memory, which can be retrieved and shuttled to the conscious mind; and the unconscious mind, the home of feelings, urges, memories, and thoughts that are outside conscious awareness. The notion that processes involved with emotions, desires, and motivations are inaccessible to conscious reflection was not new. This idea had been tossed about not only by Descartes but also, even earlier, in the fourth century by Augustine and in the thirteenth by Thomas Aquinas, and after Descartes by Spinoza and Leibniz. Freud differed, however, in that he considered most of the contents of the unconscious unseemly. And according to his theory, the unconscious influences nearly all of our thoughts, feelings, motivation, behavior, and experiences.

Oddly, Freud, who championed the idea of a scientific psychology, never allowed his psychoanalytic theories to be tested empirically through the newly developing field of experimental psychology. While some of Freud’s propositions have withstood empirical analysis—for example, it is widely accepted today that most cognitive processes are performed unconsciously—his original theories of psychopathology have not withstood close scrutiny and have generally been consigned to the trash bin.24

Darwin’s Challenge to All

Along with the gradual acceptance of unconscious mental processing, the nineteenth century saw another big idea burst onto the scene, this time out of the British Isles, with Charles Darwin’s publication of On the Origin of Species in 1859. The first editions flew out of bookstores and quickly aroused international interest. In the conclusion of the book, Darwin also separated himself from the mind/body dualists when he wrote: “In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history.”25 While there was some tut-tutting going around initially, by the time he published The Descent of Man in 1871, Darwin’s grand theory of evolution through natural selection was well accepted by the scientific community and much of the general public. In that book, after detailing numerous examples of the continuity of physical and mental attributes that animals and men share, he concluded, “The difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind.”26 Not one to endow animals with immortal souls, Darwin was again arguing against mind/body dualism.

As has been oft noted, Darwin was a soft-spoken man, and almost apologetic for throwing a “monkey” wrench into the beliefs of many people, including those of his wife. He closed On the Origin of Species with an upbeat, rather hopeful note for that constituency:

Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.27

Darwin thought that human mental capacities must also be explained by his theory. This part of his theory was more hotly contested. It met with resistance both from traditional dualists and from the empiricist followers of Locke and Hume, the purveyors of the human brain as a tabula rasa, who thought that all knowledge comes from sensory experience. These disputes stifled progress on the conundrum of consciousness for many years. Eventually a different approach to psychology, with its roots in Hume’s association principles, took up the reins: behaviorism.

*   *   *

BY THE END OF THE nineteenth century, many philosophers were insisting that the mind had to have its physical brain, which somehow contained memories and cognition. Some physiologists also required the spinal nerves, and others insisted that the body, too, was part of the package.

Locke separated the mind from the soul, and infused the mind with rational reflection, ethical action, and free will. The mind is the basis of consciousness, volition, and personhood, but it is fallible and can generate illusions and error, and its only coin is conscious ideas. Nothing bubbles up from unconscious depths. Locke skirted the issue of how matter could produce something like free will by adding an omnipotent God to the equation and saying that he made it so. Hume eliminated those supernatural powers from the equation and tried to establish a true science of human minds. In doing so, he realized the limits of the human mind, how all thought has to be constrained by its capacities. Thus, he even questioned the philosophical basis of Newton’s mechanistic science as a way of looking at the world, by undercutting the foundation of humans’ grasp of physical causality.

Schopenhauer insisted that unconscious motivations and intentions drive us, not conscious thinking, which he cast as a mere after-the-fact apologist. Helmholtz showed that our perceptual systems are not veritable Xerox machines, but rather stitch perceptual information together in a best-guess sort of way. Then along came Darwin, who plopped our brains down on an evolving continuum, instructing us to use natural selection to figure out how they got the way they are and to leave God out of it.

So we enter the twentieth century, still confused, still with the same questions, but with a couple of new strategies to employ: study the difference in reaction times to particular tasks, and employ a new descriptive psychology focusing on the subjective first-person point of view. The next one hundred years were surely going to be full of new insights, new scientific findings, wholly new ways of thinking about consciousness. Humankind cracked the atom, cracked the code of DNA, went to the moon, and could now take pictures of the living human brain. Surely, something had to give on the problem of consciousness.