7

THE FUTURE OF CONSCIOUSNESS

The emerging science of consciousness still faces many challenges. Can we determine the precise moment when consciousness first emerges in babies? Can we figure out whether a monkey, or a dog, or a dolphin is conscious of its surroundings? Can we solve the riddle of self-consciousness, our surprising ability to think about our own thinking? Is the human brain unique in this respect? Does it host distinctive circuits, and if so, can their dysfunction explain the origins of uniquely human diseases such as schizophrenia? And if we manage to analyze those circuits, could we ever duplicate them in a computer, thus giving rise to artificial consciousness?

I sort of resent the idea of science poking its nose into this business, my business. Hasn’t science already appropriated enough of reality? Must it lay claim to the intangible invisible essential self as well?

—David Lodge, Thinks . . . (2001)

In point of fact, the greater one’s science, the deeper the sense of mystery.

—Vladimir Nabokov, Strong Opinions (1973)

The black box of consciousness is now cracked open. Thanks to a variety of experimental paradigms, we have learned to make pictures visible or invisible, then track the patterns of neuronal activity that occur only when conscious access happens. Understanding how the brain handles seen and unseen images has turned out not to be as intractable as we initially feared. Several electrophysiological signatures reliably mark the presence of conscious ignition. These signatures of consciousness have proved solid enough that they are now being used in clinics to probe residual consciousness in patients with massive brain lesions.

No doubt this is only a beginning. The answers to many questions still elude us. In this closing chapter, I would like to outline what I see as the future of consciousness research—the outstanding questions that will keep neuroscientists at work for many more years.

Some of these questions are thoroughly empirical and have already received an inkling of an answer. For instance, when does consciousness emerge—in development as well as in evolution? Are newborns conscious? What about premature infants or fetuses? Do monkeys, mice, and birds share a workspace similar to ours?

Other problems border on the philosophical—and yet I firmly believe that they will ultimately receive an empirical answer, once we find an experimental line of attack. For instance, what is self-consciousness? Surely something particular about the human mind allows it to turn the flashlight of consciousness onto itself and think about its own thinking. Are we unique in this respect? What makes human thought so powerful but also uniquely vulnerable to psychiatric diseases such as schizophrenia? Will this knowledge allow us to build an artificial consciousness—a sentient robot? Would it have feelings, experiences, and even a sense of free will?

No one can claim to know the answers to these conundrums, and I will not pretend that I can resolve them. But I would like to show how we might begin to address them.

Conscious Babies?

Consider the onset of consciousness in childhood. Are babies conscious? What about newborns? Premature infants? Fetuses inside the womb? Surely some degree of brain organization is needed before a conscious mind is born—but exactly how much?

For decades, this controversial question has pitted defenders of the sanctity of human life against rationalists. Provocative statements abound on both sides. For instance, the University of Colorado philosopher Michael Tooley bluntly writes that “new-born humans are neither persons nor quasi-persons, and their destruction is in no way intrinsically wrong.”1 According to Tooley, up to the age of three months at least, infanticide is morally justified because a newborn infant “does not possess the concept of a continuing self, any more than a newborn kitten” does, and therefore it has “no right to life.”2 Continuing this grim message, the Princeton bioethics professor Peter Singer argues that “life only begins in the morally significant sense when there is awareness of one’s existence over time”:

The fact that a being is a human being, in the sense of a member of the species Homo sapiens, is not relevant to the wrongness of killing it; it is, rather, characteristics like rationality, autonomy, and self-consciousness that make a difference. Infants lack these characteristics. Killing them, therefore, cannot be equated with killing normal human beings, or any other self-conscious beings.3

Such assertions are preposterous for many reasons. They clash with the moral intuition that all human beings, from Nobel Prize winners to handicapped children, have equal rights to a good life. They also conflict head-on with our intuitions of consciousness—just ask any mother who has exchanged eye contact and goo-goo-ga-gas with her newborn baby. Most shocking, Tooley and Singer pronounce their confident ukases without the slightest supporting evidence. How do they know that babies have no experiences? Are their views founded upon a firm scientific basis? Not at all—they are purely a priori, detached from experimentation—and, in fact, are often demonstrably wrong. For instance, Singer writes that “in most respects, [coma and vegetative patients] do not differ importantly from disabled infants. They are not self-conscious, rational, or autonomous . . . their lives have no intrinsic value. Their life’s journey has come to an end.” In Chapter 6, we saw that this view is dead wrong: brain imaging reveals residual consciousness in a fraction of adult vegetative patients. Such an arrogant view, denying the complexity of life and consciousness, is appalling. The brain deserves a better philosophy.

The alternative path that I propose is simple: we must learn to do the right experiments. Although the infant mind remains a vast terra incognita, behavior, anatomy, and brain imaging can provide much information about conscious states. The signatures of consciousness, once validated in human adults, can and should be searched for in human babies of various ages.

To be sure, this strategy is imperfect, because it is built upon an analogy. We hope to find, at some point in early childhood development, the same objective markers that we know index subjective experience in adults. If we find them, we will conclude that at this age, children possess a subjective viewpoint on the outside world. Of course, nature could be more complex; the markers of consciousness could change with age. Also, we may not always get an unambiguous answer. Different markers may disagree, and the workspace that operates as an integrated system in adulthood may develop in fragments during infancy, each maturing at its own pace. Still, the experimental method has a unique capacity to inform the objective side of the debate. Any scientific knowledge will be better than the a priori proclamations of philosophical and religious leaders.

So do infants possess a conscious workspace? What does brain anatomy say? In the past century, babies’ immature cortex, replete with scrawny neurons, puny dendrites, and skinny axons that lack their insulating sheath of myelin, led many pediatricians to believe that the mind was not operative at birth. Only a few islands of visual, auditory, and motor cortex, they thought, were sufficiently mature to provide infants with primitive sensations and reflexes. Sensory inputs fused to create “one great blooming, buzzing confusion,” in the famous words of William James. It was widely believed that the higher-level reasoning centers in the babies’ prefrontal cortex remained silent at least until the end of the first year of life, when they finally began to mature. This virtual frontal lobotomy explained infants’ systematic failure on behavioral tests of motor planning and executive control, such as Piaget’s famous A-not-B test.4 To many a pediatrician, it was perfectly obvious, then, that newborns did not experience pain—so why anesthetize them? Injections and even surgeries were routinely performed without any regard for the possibility of infant consciousness.

Recent advances in behavioral testing and brain imaging, however, refute this pessimistic view. The great mistake, indeed, was to confuse immaturity with dysfunction. Even in the womb, starting at around six and a half months of gestation, a baby’s cortex starts to form and to fold. In the newborn, distant cortical regions are already strongly interconnected by long-distance fibers.5 Although they are not covered with myelin, these connections process information, albeit at a much slower pace than in adults. Right from birth, they already promote a self-organization of spontaneous neuronal activity into functional networks.6

Consider speech processing. Babies are immensely attracted to language. They probably begin to learn it inside the womb, because even newborns can distinguish sentences in their mother tongue from those in a foreign language.7 Language acquisition happens so fast that a long line of prestigious scientists, from Darwin to Chomsky and Pinker, has postulated a special organ, a “language acquisition device” specialized for language learning and unique to the human brain. My wife, Ghislaine Dehaene-Lambertz, and I tested this idea directly, by using fMRI to look inside babies’ brains while they listened to their maternal language.8 Swaddled on a comfortable mattress, their ears protected from the machine’s noise by a massive headset, two-month-old infants quietly listened to infant-directed speech while we took snapshots of their brain activity every three seconds.

To our amazement, the activation was huge and definitely not restricted to the primary auditory area. On the contrary, an entire network of cortical regions lit up (figure 34). The activity nicely traced the contours of the classical language areas, at exactly the same place as in the adult’s brain. Speech inputs were already routed to the left hemisphere’s temporal and frontal language areas, while equally complex stimuli such as Mozart music were channeled to other regions of the right hemisphere.9 Even Broca’s area, in the left inferior prefrontal cortex, was already stirred up by language. This region was mature enough to activate in two-month-old babies. It was later found to be one of the earliest-maturing and best-connected regions of the baby’s prefrontal cortex.10

FIGURE 34. The prefrontal cortex is already active in awake infants. Two-month-old infants listened to sentences in their maternal language while their brain was scanned with fMRI. Speech activated a broad language network, including the left inferior frontal region known as Broca’s area. Playing the same tape backward, thus destroying most speech cues, caused a much-reduced activation. Awake infants also activated their right prefrontal cortex. This activity was related to consciousness, because it vanished when the infants fell asleep.

By measuring the speed of activation with MRI, we confirmed that a baby’s language network is working—but at a speed much slower than in an adult, especially in the prefrontal cortex.11 Does this slowness prevent the emergence of consciousness? Do infants process speech in a “zombie mode,” much as a comatose brain unconsciously responds to novel tones? The mere fact that an attentive two-month-old, during language processing, activates the same cortical network as an adult is unfortunately inconclusive, because we know that much of this network (though perhaps not Broca’s area) can activate unconsciously—for instance, during anesthesia.12 Crucially, however, our experiment also showed that babies possess a rudimentary form of verbal working memory. When we repeated the same sentence after a fourteen-second interval, our two-month-olds gave evidence of remembering:13 their Broca’s area lit up much more strongly on the second occasion than on the first. Already at two months, their brain bore one of the hallmarks of consciousness, the capacity to hold information in working memory for a few seconds.

Equally crucially, infants’ responses to speech differed when they were awake and asleep. Their auditory cortex always lit up, but the activity cascaded into the dorsolateral prefrontal cortex only in awake babies; in sleeping babies we saw a flat curve in this area (figure 34). The prefrontal cortex, this crucial node of the adult workspace, therefore seems to already contribute primarily to conscious processing in awake infants.

A much tighter proof that infants only a few months old are conscious comes from the application of the local-global test that I described in Chapter 6 and that probes residual consciousness in vegetative-state adult patients. In that simple test, patients listen to repeated series of sounds such as beep beep beep beep boop while we record their brain waves, using EEG. Occasionally, a rare sequence violates the rule, ending for instance with a fifth beep. When this novelty evokes a global P3 wave, invading the prefrontal cortex and the associated workspace areas, the patient is very likely to be conscious.

Undergoing this test requires no education, no language, and no instruction, and thus it is simple enough to be run in infants (or virtually any animal species). Any child can listen to a sequence of tones and, if its brain is smart enough, work out the regularities. Event-related potentials can be recorded from the first few months of life. The only problem is that babies quickly get fussy when the test is too repetitive. In order to probe this signature of consciousness in babies, my wife Ghislaine, who is a neuropediatrician and a specialist in infant cognition, therefore adapted our local-global test. She turned it into a multimedia show in which attractive faces articulated a sequence of vowels: aa aa aa ee. The constantly changing faces, with their moving mouths, fascinated the babies—and once we managed to capture their attention, we were pleased to see that, at two months of age, their brain already emitted a global conscious response to novelty—a signature of consciousness.14
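The logic of this nested design is simple enough to spell out in a few lines of code. The sketch below is purely schematic: the sequence length, trial counts, and proportions are invented for illustration rather than taken from the actual experiment.

```python
import random

# A schematic reconstruction of the local-global design. Two nested
# regularities are embedded in the stream: a "local" rule within each
# short trial, and a "global" rule about which trial type is frequent.

def make_trial(kind):
    # "xxxY": the last vowel deviates locally (aa aa aa ee)
    # "xxxx": all vowels are identical (aa aa aa aa)
    return ["aa"] * 3 + (["ee"] if kind == "xxxY" else ["aa"])

def make_block(n_trials=100, frequent="xxxY", p_rare=0.2):
    """The `frequent` trial type establishes the global rule; the other
    type becomes the rare, globally deviant event."""
    rare = "xxxx" if frequent == "xxxY" else "xxxY"
    block = []
    for _ in range(n_trials):
        kind = rare if random.random() < p_rare else frequent
        block.append((kind, make_trial(kind)))
    return block

block = make_block()
# A brain that merely detects local novelty reacts to every "ee" (the
# mismatch response); only a system that has extracted the global rule
# reacts to the rare trial type, whatever its local structure.
```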

Most parents will not be surprised to learn that their two-month-old baby already scores high on a test of consciousness—yet our tests also showed that infant consciousness differs in one important respect from that of adults: the latency of the brain responses is dramatically longer. Every processing step seems to take a disproportionately longer time. Our babies’ brains needed one-third of a second to register the vowel change and to generate an unconscious mismatch response. And a full second passed before their prefrontal cortex reacted to global novelty—about three to four times longer than in adults. Thus, the architecture of the baby’s brain, in the first weeks of life, includes a functional global workspace, albeit a very slow one.

My colleague Sid Kouider replicated and extended this finding, this time using vision. He focused on face processing, another domain for which even newborn babies have an innate competence.15 Babies love faces and magnetically orient toward them from birth. Kouider capitalized on this natural tropism to study whether babies are sensitive to visual masking and exhibit the same sort of threshold for conscious access as adults. He adapted, to five-month-olds, the masking paradigm that we had used to study conscious vision in adults.16 An attractive face was flashed for a brief and variable duration, immediately followed by an ugly scrambled picture that served as a mask. The question was, did infants see the face? Were they conscious of it?

You might remember from Chapter 1 that during masking, adult viewers report seeing nothing unless the target picture lasts more than about one-twentieth of a second. Although speechless babies cannot report what they see, their eyes, like those of a locked-in patient, tell a similar story. When the face is flashed below a minimal duration, Kouider found, they do not gaze at it, suggesting that they fail to see it. Once the face is exposed for some threshold duration, however, they orient toward it. Just like adults, they suffer from masking and perceive the face only when it is “supraliminal,” presented above the perception threshold. Critically, the threshold duration turns out to be two to three times longer in infants than in adults. Five-month-olds detect the face only when it is shown for more than 100 milliseconds, whereas in adults the masking threshold typically falls between 40 and 50 milliseconds. Very interestingly, the threshold drops to its adult value when babies reach ten to twelve months of age, precisely the time when behaviors that depend on the prefrontal cortex begin to emerge.17

Having shown the existence of a threshold for conscious access in babies, Sid Kouider, Ghislaine Dehaene-Lambertz, and I went on to record the infants’ brain responses to flashed faces. We saw exactly the same series of cortical processing stages that we had found in adults: a subliminal linear phase followed by a sudden nonlinear ignition (figure 35). During the first phase, activity in the back of the brain increases steadily with face duration, regardless of whether images are below or above threshold: the infant’s brain clearly accumulates the available evidence about the flashed face. During the second phase, only above-threshold faces trigger a slow negative wave over the prefrontal cortex. Functionally and topographically, this late activation shares a lot of similarity with the adult P3 wave. Clearly, if enough sensory evidence is available, even the infant brain can propagate it all the way into the prefrontal cortex, although at a much-reduced speed. Because this two-stage architecture is essentially the same as in conscious adults, who can report what they see, we can assume that babies already enjoy conscious vision, although they cannot yet report it by speaking aloud.

FIGURE 35. Infants exhibit the same signatures of conscious perception as adults, but they process information at a much slower speed. In this experiment, twelve- to fifteen-month-old infants were flashed attractive faces, followed by a mask, for durations that rendered them either visible or invisible. The infant brain exhibited two stages of processing: first a linear accumulation of sensory evidence, then a nonlinear ignition. The late ignition may reflect conscious perception, because it occurred only when the face was presented for 100 milliseconds or more, precisely the duration needed for infants to orient their gaze. Note that conscious ignition started 1 second after the face appeared, which is about three times longer than in adults.
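To see how the two stages dissociate, consider a toy numerical illustration (my own sketch, not the study’s analysis code) in which sensory activity grows linearly with flash duration while the frontal marker ignites nonlinearly around the approximate 100-millisecond infant threshold:

```python
import math

THRESHOLD_MS = 100.0  # approximate infant visibility threshold from the text

def sensory_activity(duration_ms):
    # Stage 1: posterior activity accumulates linearly with flash duration.
    return duration_ms / 200.0

def frontal_ignition(duration_ms, slope=5.0):
    # Stage 2: the late frontal response ignites nonlinearly near threshold.
    return 1.0 / (1.0 + math.exp(-(duration_ms - THRESHOLD_MS) / slope))

for d in (17, 33, 50, 67, 100, 133, 200):  # hypothetical flash durations (ms)
    print(f"{d:4d} ms  sensory={sensory_activity(d):.2f}  "
          f"ignition={frontal_ignition(d):.2f}")
# Below ~100 ms the frontal marker stays near zero even as sensory activity
# keeps growing, dissociating subliminal accumulation from conscious ignition.
```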

In fact, a very slow frontal negativity shows up in all sorts of infant experiments that involve the orienting of attention toward a novel stimulation, be it auditory or visual.18 Other researchers have noticed its similarity to the adult P3 wave,19 which shows up whenever conscious access occurs, regardless of the sensory modality. For instance, the frontal negativity occurs when infants attend to deviant sounds,20 but only when they are awake, not when they are asleep.21 In experiment after experiment, this slow frontal response behaves as a marker of conscious processing.

We can now safely conclude that conscious access exists in babies as in adults, but in a dramatically slower form, perhaps up to four times slower. Why this sluggishness? Remember that the infant brain is immature. The major long-distance fiber tracts that form the adult global workspace are already present at birth,22 but they are not yet electrically insulated. The sheaths of myelin, the fatty membrane that surrounds the axons, continue to mature well into childhood and even adolescence. Their main role is to provide electrical insulation and, as a result, increase the speed and fidelity with which neuronal discharges propagate to distant sites. The baby’s brain web is wired but not yet insulated; information integration therefore operates at a much slower pace. An infant’s sluggishness is perhaps comparable to that of a patient returning from coma. In both cases, adaptive responses can be evoked, but it takes one or two seconds before a smile, a frown, or a stammering syllable emerges from their lips. Think of it as a foggy, dawdling, but definitely conscious mind.

Because the youngest subjects we tested were two-month-olds, we still do not know the exact moment at which consciousness emerges. Is a newborn already conscious, or does it take a few weeks before his or her cortical architecture starts to function properly? I will hedge my bets until all the evidence is in, but I would not be surprised if we discovered that consciousness exists at birth. Long-distance anatomical connections already crisscross the newborn baby’s brain, and their processing depth should not be underestimated. A few hours after birth, infants already exhibit sophisticated behavior, such as the capacity to distinguish sets of objects based on their approximate number.23

The Swedish pediatrician Hugo Lagercrantz and the French neurobiologist Jean-Pierre Changeux have proposed a very interesting hypothesis: birth would coincide with the first access to consciousness.24 In the womb, they argue, the fetus is essentially sedated, bathed in a drug stream that includes “the neurosteroid anesthetics pregnanolone and the sleep-inducing prostaglandin D2 provided by the placenta.” Birth coincides with a massive surge of stress hormones and stimulating neurotransmitters such as catecholamines; in the following hours, the newborn baby is usually awake and energized, his eyes wide open. Is he having his first conscious experience? If these pharmacological inferences turn out to be valid, delivery is an even more significant event than we thought: the genuine birth of a conscious mind.

Conscious Animals?

He who understands baboon would do more towards metaphysics than Locke.

—Charles Darwin, Notebooks (1838)

The same questions that we ask concerning infants should also be asked about our speechless cousins—animals. Animals cannot describe their conscious thoughts, but does that mean they have none? An extraordinary diversity of species has evolved on earth, from patient predators (cheetahs, eagles, moray eels) to careful route planners (elephants, geese), playful characters (cats, otters), clever problem solvers (magpies, octopuses), vocal geniuses (parakeets), and social grandmasters (bats, wolves). I would be very surprised if none of them shared at least part of our conscious experiences. My theory is that the architecture of the conscious workspace plays an essential role in facilitating the exchange of information among brain areas. Thus, consciousness is a useful device that is likely to have emerged a long time ago in evolution and perhaps more than once.

Why should we naïvely suppose that the workspace system is unique to humans? It isn’t. The dense network of long-distance connections that links the prefrontal cortex with other associative cortexes is evident in macaque monkeys, and this workspace system may well be present in all mammals. Even the mouse has tiny prefrontal and cingulate cortexes that get activated when it keeps visual information in mind for a second.25 An exciting question is whether some birds, especially those with vocal communication and imitation, may possess analogous circuitry with a similar function.26

Attributing consciousness to animals should not be based solely on their anatomy. Although they lack language, monkeys can be trained to report what they see by pressing keys on a computer. This approach is providing mounting evidence that they have subjective experiences very similar to ours. For instance, they can be rewarded for pressing one key if they see a light and another if they do not. This motor act can then be used as a proxy for a minimal “report”: a nonverbal gesture equivalent to the animal’s saying “I think I saw a light” or “I didn’t see anything.” A monkey can also be trained to classify the images it perceives, pressing one key for faces and the other for nonfaces. Once trained, the animal can then be tested with the same variety of visual paradigms that probe conscious and unconscious processing in humans.

The results of these behavioral studies prove that monkeys, like us, experience visual illusions. If we show them two different images, one to each eye, they report binocular rivalry: they press the keys in alternation, indicating that they too see only one of the two images at a given time. The images ceaselessly wax and wane in and out of their consciousness at the same rhythm as in any of us.27 Masking also works in monkeys. When we flash them a picture and follow it by a random mask, macaques report that they did not see the hidden image, although their visual cortex still shows a transient and selective neuronal discharge.28 Thus like us, they possess a form of subliminal perception, as well as a precise threshold beyond which the image becomes visible.

Finally, when their primary visual cortex is damaged, monkeys too develop a form of blindsight. In spite of the lesion, they can still accurately point to a light source in their impaired visual field. However, when trained to report the presence or absence of light, they label a stimulus presented in their impaired visual field by using the “no light” key, suggesting that, like human blindsight patients, their perceptual awareness is gone.29

There is little doubt that macaque monkeys can use their rudimentary workspace to think about the past. They easily pass the delayed response task, which requires holding information in mind long after the stimulus is gone. Like us, they do so by maintaining a sustained discharge in their prefrontal and parietal neurons.30 If anything, when passively watching a movie, they tend to activate their prefrontal cortex more than humans.31 We may be superior to monkeys in our ability to inhibit distraction, and when we are watching a movie, our prefrontal cortex can therefore decouple from the incoming stream, letting our mind wander freely.32 But macaque monkeys too possess a spontaneous “default mode” network of regions that activate during rest33—regions similar to those activated when we introspect, remember, or mind-wander.34

What about our litmus test of conscious auditory perception: the local-global test that we used to reveal a residual consciousness in patients recovering from coma? My colleagues Bechir Jarraya and Lynn Uhrig tested whether monkeys notice that beep beep beep beep beep is an anomalous sequence when it occurs amid a flurry of frequent beep beep beep beep boop sounds. They clearly do. Functional MRI shows that the monkeys’ prefrontal cortex lights up only to the globally deviant sequences.35 As in humans, this prefrontal response goes away when the monkeys are anesthetized. Once again, a signature of consciousness seems to exist in monkeys.

In pilot research conducted by Karim Benchenane, even mice seem to pass this elementary test. In future years, as we systematically test a variety of species, I would not be surprised if we discovered that all mammals, and probably many species of birds and fish, show evidence of a convergent evolution to the same sort of conscious workspace.

Self-Conscious Monkeys?

Macaque monkeys undoubtedly possess a global workspace largely similar to ours. But is it identical? In this book, I have focused on the most basic aspect of consciousness: conscious access, or the ability to become aware of selected sensory stimuli. This competence is so basic that we share it with monkeys and probably a great many other species. When it comes to higher-order cognitive functions, however, humans are clearly very different. We have to ask whether the human conscious workspace possesses additional properties that radically set us apart from all other animals.

Self-awareness seems a prime candidate for human uniqueness. Aren’t we sapiens sapiens—the only species who know that they know? Isn’t the capacity to reflect upon our own existence a uniquely human feat? In Strong Opinions (1973), Vladimir Nabokov, a superb novelist but also a passionate entomologist, made precisely this point:

Being aware of being aware of being . . . if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follows—the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape.

Nabokov was wrong, however. “Know thyself,” the famous motto inscribed in the pronaos of the Temple of Apollo at Delphi, is not the privilege of mankind. In recent years, research has revealed the amazing sophistication of animal self-reflection. Even in tasks that require second-order judgments, as when we detect our errors or ponder our success or failure, animals are not as incompetent as we might think.

This domain of competence is called “metacognition”—the capacity to entertain thoughts about our thoughts. Donald Rumsfeld, George W. Bush’s secretary of defense, neatly outlined it when, in a briefing to the Department of Defense, he famously distinguished among the known knowns (“things we know we know”), the known unknowns (“we know there are some things we do not know”), and the unknown unknowns (“the ones we don’t know we don’t know”). Metacognition is about knowing the limits of one’s own knowledge—assigning degrees of belief or confidence to our own thoughts. And evidence suggests that monkeys, dolphins, and even rats and pigeons possess the rudiments of it.

How do we know that animals know what they know? Consider Natua, a dolphin swimming freely in his home coral pool at the Dolphin Research Center in Marathon, Florida.36 The animal has been trained to classify underwater sounds according to their pitch. This he does extremely well, pressing a paddle on the left wall for low pitches, and one on the right wall for high pitches.

The experimenter set the boundary between low and high pitches at a frequency of 2,100 hertz. When the sound is far enough from this reference, the animal quickly swims to the correct side. When the sound frequency is very close to 2,100 hertz, however, Natua’s responses become very slow. He shakes his head before hesitantly swimming to one side, often the wrong one.

Does this hesitant behavior suffice to indicate that the animal “knows” that he is having a hard time deciding? No. In itself, the increase in difficulty at short distances is quite banal. In humans as in many other animals, decision time and error rate typically increase whenever the difference that must be discriminated is reduced. But crucially, in humans a smaller perceptual distance also elicits a second-order feeling of lack of confidence. When the sound is too close to the boundary, we realize that we face a difficulty. We feel unsure, and we know that our decision may well turn out to be wrong. If we can, we bail out, openly reporting that we have no idea of the correct answer. This is typical metacognitive knowledge: I know that I don’t know.

Does Natua have any such knowledge of his own uncertainty? Can he tell whether he knows the correct response or whether he is unsure? Does he have a sense of confidence in his own decisions? To answer these questions, J. David Smith, from the State University of New York, designed a clever trick: the “escape” response. After the initial perceptual training, he introduced the dolphin to a third response paddle. By trial and error, Natua learned that, whenever he presses it, the stimulus sound is immediately replaced by an easy low-pitch sound (at 1,200 hertz), which earns him a small reward. Whenever the third paddle is present, Natua has the option to escape from the main task. However, he is not allowed to opt out on every trial: the escape paddle must be used sparingly; otherwise, the reward is dramatically delayed.

Here is the beautiful experimental finding: during the pitch task, Natua spontaneously decides to use the opt-out response only on difficult trials. He presses the third paddle only when the stimulation frequency is close to the reference of 2,100 hertz—precisely those trials where he is likely to make an error. It looks as if he uses the third key as a second-order “commentary” on his first-order performance. By pressing it, he “reports” that he finds it too hard to respond to the primary task and that he prefers an easier trial. A dolphin is smart enough to discern his own lack of confidence. Like Rumsfeld, he knows what he doesn’t know.

Some researchers dispute this mentalist interpretation. They point out that the task can be described in much simpler behaviorist terms: the dolphin merely exhibits a trained motor behavior that maximizes reward. Its only unusual feature is to allow for three responses instead of two. As usual in a reinforcement learning task, the animal has discovered exactly which stimuli make it more advantageous to press the third key—nothing more than rote behavior.

While many past experiments fall prey to this low-level interpretation, new research in monkeys, rats, and pigeons addresses this criticism and strongly tips the scales toward genuine metacognitive competence. Animals often use the opt-out response more intelligently than reward alone would predict.37 For instance, when given the option to escape after making a choice, but before being told whether they were right or wrong, they finely monitor which trials are subjectively difficult for them. We know this because they indeed perform worse on trials where they opt out than on trials where they stick to their initial response, even when the very same stimulus is presented on both occasions. They seem to internally monitor their mental state and sift out precisely those trials where, for one reason or another, they were distracted and the signal that they processed was not as crisp as usual. It looks as if they can truly evaluate their self-confidence on every trial and opt out only when they feel unconfident.38
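This diagnostic, worse accuracy on the trials an animal would escape even when the stimulus is physically identical, falls naturally out of confidence monitoring. The toy signal-detection simulation below reproduces it; the 2,100-hertz boundary comes from Natua’s task, but the noise level and confidence criterion are invented for illustration:

```python
import random

BOUNDARY_HZ = 2100.0  # the category boundary of Natua's task
NOISE_SD = 150.0      # assumed perceptual noise (invented)

def trial(freq_hz):
    percept = random.gauss(freq_hz, NOISE_SD)      # noisy internal percept
    choice_correct = (percept > BOUNDARY_HZ) == (freq_hz > BOUNDARY_HZ)
    confidence = abs(percept - BOUNDARY_HZ)        # distance from boundary
    return choice_correct, confidence

random.seed(1)
stimulus = 2150.0  # one hard stimulus, physically identical on every trial
results = [trial(stimulus) for _ in range(100_000)]
escape = [ok for ok, conf in results if conf < 100]   # would press the opt-out key
stick  = [ok for ok, conf in results if conf >= 100]  # confident enough to answer

print(f"accuracy on would-be opt-out trials: {sum(escape) / len(escape):.2f}")
print(f"accuracy on confident trials:        {sum(stick) / len(stick):.2f}")
# Same stimulus, different outcomes: the trials the agent would escape are
# exactly those where noise degraded its percept, matching the animal data.
```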

How abstract is animal self-knowledge? In monkeys at least, a recent experiment shows that it is not tied to a single overtrained context; macaques spontaneously generalize the use of the opt-out key beyond the bounds of their initial training. Once they figure out what this key means in a sensory task, they immediately use it appropriately in the novel context of a memory task. Having learned to report I didn’t perceive well, they generalize to I don’t remember well.39

These animals clearly possess some degree of self-knowledge, but might it all be unconscious? We have to be careful here, because as you may remember from Chapter 2, much of our behavior stems from unconscious mechanisms. Even self-monitoring mechanisms may unfold unconsciously. When I mistype a letter on the keyboard or when my eyes are attracted to the wrong target, my brain automatically registers these errors and corrects them, and I may never become aware of them.40 Several arguments, however, suggest that the monkeys’ self-knowledge is not based only on such subliminal automatisms. Their opt-out judgments are flexible and generalize to an untrained task. They involve pondering a past decision for several seconds, a long-term reflection whose duration is unlikely to be within the reach of unconscious processes. They require the use of an arbitrary response signal, the opt-out key. At the neurophysiological level, they involve a slow accumulation of evidence and recruit high-level areas of the parietal and prefrontal lobes.41 If we extrapolate from what we know of the human brain, it seems unlikely that such slow and complicated second-order judgments could unfold in the absence of awareness.

If this inference is correct (and it certainly needs to be validated by more research), then animal behavior bears the hallmark of a conscious and reflexive mind. We are probably not alone in knowing that we know, and the adjective sapiens sapiens should no longer be uniquely attached to the genus Homo. Several other animal species can genuinely reflect upon their state of mind.

Uniquely Human Consciousness?

Although monkeys clearly possess a conscious neuronal workspace and may use it to ponder themselves and the external world, humans undoubtedly exhibit superior introspection. But what exactly sets the human brain apart? Is it sheer brain size? Language? Social cooperation? Long-lasting plasticity? Education?

Answering these questions is one of the most exciting tasks for future research in cognitive neuroscience. Here I will venture only a tentative answer: although we share most if not all of our core brain systems with other animal species, the human brain may be unique in its ability to combine them using a sophisticated “language of thought.” René Descartes was certainly right about one thing: only Homo sapiens “use[s] words or other signs by composing them, as we do to declare our thoughts to others.” This capacity to compose our thoughts may be the crucial ingredient that boosts our inner thoughts. Human uniqueness resides in the peculiar way we explicitly formulate our ideas using nested or recursive structures of symbols.

According to this argument, and in agreement with Noam Chomsky, language evolved as a representational device rather than a communication system—the main advantage that it confers is the capacity to think new ideas, over and above the ability to share them with others. Our brain seems to have a special knack for assigning symbols to any mental representation and for entering these symbols into entire novel combinations. The human global neuronal workspace may be unique in its capacity to formulate conscious thoughts such as “taller than Tom,” “left of the red door,” or “not given to John.” Each of these examples combines several elementary concepts that lie in utterly different domains of competence: size (tall), person (Tom, John), space (left), color (red), object (door), logic (not), or action (give). Although each is initially encoded by a distinct brain circuit, the human mind assembles them at will—not only by associating them, as animals undoubtedly do, but by composing them using a sophisticated syntax that carefully distinguishes, for instance, “my wife’s brother” from “my brother’s wife,” or “dog bites man” from “man bites dog.”
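The difference between association and composition is easy to make concrete. In the following toy sketch, offered only as an illustration of the logic, the same two relations nested in opposite orders denote different people:

```python
# The same two relations, nested in opposite orders, pick out different people.

def wife(x):    return f"the wife of {x}"
def brother(x): return f"the brother of {x}"

print(brother(wife("me")))  # my wife's brother
print(wife(brother("me")))  # my brother's wife
# A system that merely associates the unordered set {me, wife, brother}
# cannot tell these two apart; a syntax that tracks nesting can.
```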

I speculate that this compositional language of thought underlies many uniquely human abilities, from the design of complex tools to the creation of higher mathematics. And when it comes to consciousness, this capacity may explain the origins of our sophisticated capacity for self-consciousness. Humans possess an incredibly refined sense of the mind—what psychologists call a “theory of mind,” an extensive set of intuitive rules that allow us to represent and reason about what others think. Indeed, all human languages have an elaborate vocabulary for mental states. Among the ten most frequent verbs in English, six refer to knowledge, feelings, or goals (find, tell, ask, seem, feel, try). Crucially, we apply them to ourselves as well as to others, using identical constructions with pronouns (I is the tenth most frequent word in English, and you is the eighteenth). Thus, we can represent what we know in the same exact format as what others know (“I believe X, but you believe Y”). This mentalist perspective is present right from the start: even seven-month-old infants already generalize from what they know to what others know.42 And it may well be unique to humans: two-and-a-half-year-old children already surpass adult chimpanzees and other primates in their understanding of social events.43

The recursive function of human language may serve as a vehicle for complex nested thoughts that remain inaccessible to other species. Without the syntax of language, it is unclear that we could even entertain nested conscious thoughts such as He thinks that I do not know that he lies. Such thoughts seem to be vastly beyond the competence of our primate cousins.44 Their metacognition seems to include only two steps (a thought and a degree of belief in it) rather than the potential infinity of concepts that a recursive language affords.

Alone in the primate lineage, the human neuronal workspace system may possess unique adaptations to the internal manipulation of compositional thoughts and beliefs. Neurobiological evidence, although scarce, fits with this assumption. As we discussed in Chapter 5, the prefrontal cortex, a pivotal hub of the conscious workspace, occupies a sizable portion of any primate’s brain—but in the human species, it is vastly expanded.45 Among all primates, human prefrontal neurons are the ones with the largest dendritic trees.46 As a result, our prefrontal cortex is probably much more agile in collecting and integrating information from processors elsewhere in the brain, which may explain our uncanny ability for introspection and self-oriented thinking, detached from the external world.

Regions of the midline and anterior frontal lobe systematically activate whenever we deploy our talents for social or self-oriented reasoning.47 One of these regions, called the frontopolar cortex, or Brodmann’s area 10, is larger in Homo sapiens than in any other ape. (Experts debate whether it exists at all in macaque monkeys.) The underlying white matter, which supports the brain’s long-distance connections, is disproportionately larger in humans compared with any other primate, even after correcting for the massive change in overall brain size.48 All these findings make the anterior prefrontal cortex a major candidate for the locus of our special introspective skills.

Another special region is Broca’s area, the left inferior frontal region that plays a critical role in human language. Its layer-3 neurons, which send long-distance projections, are more broadly spaced in humans than in other apes, again permitting a greater interconnection.49 In this area, as well as in the midline anterior cingulate, another crucial region for self-control, Constantin von Economo discovered giant neurons that may well be unique to the brains of humans and great apes such as chimps and bonobos, as they seem to be absent in other primates, such as macaques.50 With their giant cell bodies and long axons, these cells probably make a very significant contribution to the broadcasting of conscious messages in the human brain.

All these adaptations point to the same evolutionary trend. During hominization, the networks of our prefrontal cortex grew denser and denser, to a larger extent than would be predicted by brain size alone. Our workspace circuits expanded way beyond proportion, but this increase is probably just the tip of the iceberg. We are more than just primates with larger brains. I would not be surprised if, in the coming years, cognitive neuroscientists find that the human brain possesses unique microcircuits that give it access to a new level of recursive, language-like operations. Our primate cousins certainly possess an internal mental life and a capacity to consciously apprehend their surroundings, but our inner world is immensely richer, perhaps because of a unique faculty for thinking nested thoughts.

In summary, human consciousness is the unique outcome of two nested evolutions. In all primates, consciousness initially evolved as a communication device, with the prefrontal cortex and its associated long-distance circuits breaking the modularity of local neuronal circuits and broadcasting information across the entire brain. In humans alone, the power of this communication device was later boosted by a second evolution: the emergence of a “language of thought” that allows us to formulate sophisticated beliefs and to share them with others.

Diseases of Consciousness?

The two successive evolutions of the human workspace must rely on specific biological mechanisms laid down by particular genes. A natural question therefore is: Do diseases target the human conscious machinery? Can genetic mutations or brain impairments invert the evolutionary trend and induce a failure of the global neuronal workspace?

The long-distance cortical connections that support consciousness are likely to be fragile. Compared to any other cell type in the body, neurons are monster cells, since their axon can easily span tens of centimeters. Supporting such a long appendage, more than a thousand times larger than the cell’s main body, poses unique problems of gene expression and molecular trafficking. DNA transcription always takes place in the cell’s nucleus, yet somehow its end products must be routed to synapses located centimeters away. Complex biological machinery is needed to solve this logistics problem. We might therefore expect the evolved system of long-distance workspace connections to be the target of specific impairments.

Jean-Pierre Changeux and I speculate that the mysterious cluster of psychiatric symptoms called schizophrenia may begin to find an explanation at this level.51 Schizophrenia is a common ailment affecting about 0.7 percent of adults. It is a devastating mental illness in which adolescents and young adults lose touch with reality, develop delusions and hallucinations (so-called positive symptoms), and simultaneously experience a general reduction in intellectual and emotional capacity, including disorganized speech and repetitive behaviors (the “negative” symptoms).

It long proved difficult to identify a single principle underlying this variety of manifestations. It is striking, however, that these deficits always seem to affect functions hypothetically associated with the conscious global workspace in humans: social beliefs, self-monitoring, metacognitive judgments, and even elementary access to perceptual information.52

Clinically, schizophrenic patients exhibit a dramatic overconfidence in their bizarre beliefs. Metacognition and theory of mind can be so seriously impaired that patients fail to distinguish their own thoughts, knowledge, actions, and memories from those of others. Schizophrenia drastically alters the conscious integration of knowledge into a coherent belief network, leading to delusions and confusions. As an example, patients’ conscious memories can be flagrantly wrong—minutes after seeing a list of pictures or words, they often do not remember seeing some items, and their metacognitive knowledge of whether, when, and where they saw or learned something is often terrible. Yet, remarkably, their implicit unconscious memories may remain completely intact.53

Given this background, my colleagues and I wondered whether there might be a basic deficit of conscious perception in schizophrenia. We investigated schizophrenics’ experience of masking—the subjective disappearance of a word or picture when it is followed, at a short interval, by another image. Our findings were very clear: the minimal duration of presentation needed to see a masked word was strongly altered in schizophrenics.54 The threshold for conscious access was elevated: schizophrenics stayed in the subliminal zone for much longer, and they needed much more sensory evidence before they reported the experience of conscious seeing. Remarkably, their unconscious processing was intact. A subliminal digit flashed for only 29 milliseconds led to a detectable unconscious priming effect, exactly as in normal subjects. The preservation of such a subtle measure indicates that the feed-forward chain of unconscious processing, from visual recognition to the attribution of meaning, remains largely unaffected by the disease. Schizophrenics’ main problem seems to lie in the global integration of incoming information into a coherent whole.

My colleagues and I have observed a similar dissociation between intact subliminal processing and impaired conscious access in patients with multiple sclerosis, a disease that affects the white matter connections of the brain.55 At the very onset of the disease, before any other major symptoms arise, patients fail to consciously see flashed words and digits, but they still process them unconsciously. The severity of this deficit in conscious perception can be predicted from the amount of damage to the long-distance fibers that link the prefrontal cortex to the posterior regions of the visual cortex.56 These findings are important, first, because they confirm that white matter impairments can selectively affect conscious access; and second, because a small fraction of patients with multiple sclerosis develops psychiatric disorders akin to schizophrenia, again suggesting that the loss of long-distance connections may play a crucial role in the onset of mental illness.

Brain imaging of schizophrenic patients proves that their capacity for conscious ignition is dramatically reduced. Their early visual and attentional processes can be largely intact, but they lack the massive synchronous activation that creates a P3 wave at the surface of the head and signals a conscious percept.57 Another signature of conscious access, the sudden emergence of a coherent brain web with massive correlations between distant cortical regions in the range of beta frequencies (13–30 hertz), is also characteristically deficient.58

Is there even more direct evidence for an anatomical alteration of global workspace networks in schizophrenia? Yes. Diffusion tensor imaging reveals massive anomalies of the long-distance bundles of axons that link cortical regions. The fibers of the corpus callosum, which interconnect the two hemispheres, are particularly impaired, as are the connections that link the prefrontal cortex with distant regions of the cortex, hippocampus, and thalamus.59 The outcome is a severe disruption of resting-state connectivity: during quiet rest, in schizophrenic patients the prefrontal cortex loses its status as a major interconnected hub, and activations are much less integrated into a functional whole than in normal controls.60

At a more microscopic level, the huge pyramidal cells in the dorsolateral prefrontal cortex (layers 2 and 3), with their extensive dendrites capable of receiving thousands of synaptic connections, are much smaller in schizophrenic patients. They exhibit fewer spines, the terminal sites of excitatory synapses whose enormous density is characteristic of the human brain. This loss of connectivity may well play a major causal role in schizophrenia. Indeed, many of the genes that are disrupted in schizophrenia affect either or both of two major molecular neurotransmission systems, the dopamine D2 and glutamate NMDA receptors, which play a key role in prefrontal synaptic transmission and plasticity.61

Most interesting, perhaps, is that normal adults experience a transient schizophrenia-like psychosis when taking drugs such as phencyclidine (better known as PCP, or angel dust) and ketamine. These agents act by blocking neuronal transmission, quite specifically, at excitatory synapses of the NMDA type, which are now known to be essential for the transmission of top-down messages across the long distances of the cortex.62 In my computer simulations of the global workspace network, NMDA synapses were essential for conscious ignition: they formed the long-distance loops that linked high-level cortical areas, in a top-down manner, to the lower-level processors that originally activated them. Removing NMDA receptors from our simulation resulted in a dramatic loss of global connectivity, and ignition disappeared.63 Other simulations show that NMDA receptors are equally important for the slow accumulation of evidence underlying thoughtful decision making.64
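The essence of that result can be conveyed by a drastically simplified rate model, a toy sketch with invented gains and time constants that is far cruder than the published simulations. A slow “NMDA-like” top-down loop turns a brief sensory pulse into self-sustained ignition; removing it abolishes the effect:

```python
import math

def f(x):
    """Saturating neuronal nonlinearity (a rectified tanh)."""
    return math.tanh(max(x, 0.0))

def simulate(g, steps=300, dt=1.0, tau=20.0):
    low, high = 0.0, 0.0              # firing rates of a lower and a higher area
    for t in range(steps):
        inp = 1.0 if t < 30 else 0.0  # brief sensory pulse, then silence
        low  += dt / tau * (-low + f(inp + g * high))  # g scales the top-down loop
        high += dt / tau * (-high + f(1.5 * low))      # feed-forward drive
    return high

for g in (2.0, 0.0):  # g = 0 mimics NMDA blockade, as with ketamine or PCP
    print(f"top-down gain g={g}: activity long after the stimulus = {simulate(g):.2f}")
# With the loop intact (g=2), activity persists after the input is gone, a
# toy analogue of ignition; remove the NMDA-like feedback (g=0) and the same
# pulse leaves no lasting trace.
```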

A global loss of top-down connectivity may go a long way toward explaining the negative symptoms of schizophrenia. It would not affect the feed-forward transmission of sensory information, but it would selectively prevent its global integration via long-distance top-down loops. Thus schizophrenic patients would present entirely normal feed-forward processing, including the subtle operations that induce subliminal priming. They would experience a deficit only in the subsequent ignition and information broadcasting, disrupting their capacities for conscious monitoring, top-down attention, working memory, and decision making.

What about patients’ positive symptoms, their bizarre hallucinations and delusions? The cognitive neuroscientists Paul Fletcher and Chris Frith have proposed a precise explanatory mechanism, also based on an impaired propagation of information.65 As we discussed in Chapter 2, the brain acts like a Sherlock Holmes, a sleuth that draws maximal inferences from its various inputs, whether perceptual or social. Such statistical learning requires a bidirectional exchange of information:66 sensory regions send their messages upward in the hierarchy, and higher regions respond with top-down predictions, as part of a learning algorithm that constantly strives to account for the information arising from the senses. Learning stops when the higher-level representations are so accurate that their predictions fully match the bottom-up inputs. At this point, the brain perceives a negligible error signal (the difference between predicted and observed signals), and as a consequence, surprise is minimal: the incoming signal is no longer interesting and thus no longer triggers any learning.

Now imagine that, in schizophrenia, the top-down messages are reduced, because of impaired long-distance connections or dysfunctional NMDA receptors. This, Fletcher and Frith argue, would result in a strong mistuning of the statistical learning mechanism. Sensory inputs would never be satisfactorily explained. Error signals would forever remain, triggering an endless avalanche of interpretations. Schizophrenics would continually feel that something remains to be explained, that the world contains many hidden layers of meaning, deep levels of explanation that only they can perceive and compute. As a result, they would continually concoct far-fetched interpretations of their surroundings.
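This mistuning can be captured in a minimal predictive-coding loop, a schematic of the Fletcher-Frith proposal with invented parameters. One simplifying assumption is built in: the predicting units saturate, so they cannot simply shout louder to compensate for a weakened top-down channel.

```python
def residual_error(top_down_gain, lr=0.1, steps=500, sensory=1.0, cap=1.5):
    """Learn to predict a constant input; the prediction reaches the sensory
    level scaled by `top_down_gain`, and the predicting unit saturates at `cap`."""
    prediction = 0.0
    for _ in range(steps):
        error = sensory - top_down_gain * prediction    # prediction-error signal
        prediction = min(cap, prediction + lr * error)  # saturating update
    return error

print(f"intact top-down (gain 1.0):   residual error = {residual_error(1.0):.2f}")
print(f"weakened top-down (gain 0.3): residual error = {residual_error(0.3):.2f}")
# With intact feedback the input is eventually "explained away" and surprise
# vanishes; with weakened feedback an error signal persists forever, so even
# familiar events keep demanding new interpretations.
```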

Consider, for instance, how the schizophrenic brain would monitor its own actions. Normally, whenever we move, a predictive mechanism cancels out the sensory consequences of our actions. Thanks to it, we are not surprised when we grab a coffee cup: the warm touch and light weight that our hand senses are highly predictable, and even before we act, our motor areas send a top-down prediction to our sensory areas to inform them that they are about to experience a grabbing action. This forecast works so well that when we act, we are generally unaware of touch—we become keenly aware only when our prediction goes wrong, as when we grab an unexpectedly hot mug.

Next, imagine living in a world in which top-down prediction systematically fails. Even your coffee cup feels wrong: when you grab it, its touch subtly deviates from your expectations, causing you to wonder who or what is altering your senses. Above all, speaking feels strange. You can hear your own voice while speaking, and it sounds funny. Oddities in the incoming sound constantly draw your attention. You begin to think that someone is tampering with your speech. From there it is a short step to becoming convinced that you hear voices in your head, and that evil agents, perhaps your neighbor or the CIA, control your body and perturb your life. You find yourself constantly searching for hidden causes of mysterious events that others do not even notice—a pretty accurate picture of schizophrenic symptoms.

In brief, schizophrenia seems to be a strong candidate for a disease of the long-distance connections that broadcast signals throughout the brain and form the conscious workspace system. I am not suggesting, of course, that patients with schizophrenia are unconscious zombies. My view is simply that, in schizophrenia, conscious broadcasting is far more severely impaired than automatic processes are. Diseases tend to respect the functional boundaries of the nervous system, and schizophrenia may specifically affect the biological mechanisms that sustain long-distance top-down neuronal connections.

In schizophrenics, this breakdown is not complete; otherwise, the patient would simply fall unconscious. Can such a dramatic medical condition exist? In 2007 neurologists at the University of Pennsylvania discovered an amazing new disease.67 Young people were entering the hospital with a variety of symptoms. Many were women with ovarian cancer, but others just complained of headache, fever, or flu-like symptoms. Quickly, their disease took an unexpected turn. They developed “prominent psychiatric symptoms, including anxiety, agitation, bizarre behavior, delusional or paranoid thoughts, and visual or auditory hallucinations”—an acute, acquired, and quickly evolving form of schizophrenia. Within three weeks, the patients’ consciousness started to decline. Their EEG began to exhibit slow brain waves, as when people fall asleep or into a coma. They became motionless and ceased to respond to stimulation or even to breathe by themselves. Several of them died within a few months. Others later recovered and had a normal life and mental health but confirmed that they had no memories of the unconscious episode.

What was happening? A careful inquiry revealed that all these patients suffered from a massive autoimmune disease. Their immune system, instead of watching for external intruders such as viruses or bacteria, had turned against its own body, selectively destroying a single molecule: the NMDA receptor for the neurotransmitter glutamate. As we saw earlier, this essential element of the brain plays a key role in the top-down transmission of information at cortical synapses. When a culture of neurons was exposed to serum from the patients, its NMDA synapses literally vanished within hours—but the receptor returned as soon as the pathological serum was removed.

It is fascinating that a single molecule, when wiped out, suffices to cause a selective loss of mental health and, eventually, consciousness itself. We may be witnessing the first medical condition in which a disease selectively disrupts the long-distance connections that, according to my global neuronal workspace model, underlie any conscious experience. This focused attack quickly disrupts consciousness, first inducing an artificial form of schizophrenia, then destroying the very possibility of maintaining a vigilant state. In future years, this medical condition may serve as a model disease whose molecular mechanisms shed light on psychiatric diseases, their onset, and their link to conscious experience.

Conscious Machines?

Now that we are beginning to understand the function of consciousness, its cortical architecture, its molecular basis, and even its diseases, can we envisage simulating it in the computer? Not only do I fail to see any logical problem with this possibility, I consider it to be an exciting avenue of scientific research—a grand challenge that computer science may resolve over the next decades. We are nowhere near having the capacity to build such a machine yet, but the very fact that we can make a concrete proposal about some of its key features indicates that the science of consciousness is moving forward.

In Chapter 5, I outlined a general scheme for a computer simulation of conscious access. Those ideas could serve as a basis for a new kind of software architecture. Much as a modern computer runs many special-purpose programs in parallel, our software would contain a great many specialized programs, each dedicated to a certain function, such as face recognition, movement detection, spatial navigation, speech production, or motor guidance. Some of these programs would take their inputs from inside rather than from outside the system, thus providing it with a form of introspection and self-knowledge. For instance, a specialized device for error detection might learn to predict whether the organism is likely to deviate from its current goal. Current computers possess the rudiments of this idea, since they increasingly come equipped with self-monitoring devices that probe remaining battery life, disk space, memory integrity, or internal conflicts.

I see at least three critical functions that current computers lack: flexible communication, plasticity, and autonomy. First, the programs should flexibly communicate with one another. At any given time, the output of one of the programs would be selected as the focus of interest for the entire organism. The selected information would enter the workspace, a limited-capacity system that operates in a slow and serial manner but has the huge advantage of being able to broadcast the information back to any other program. In current computers, such exchanges are usually forbidden: each application executes in a separate memory space, and its outputs cannot be shared. Programs have no general means of exchanging their expert knowledge, aside from the clipboard, which is rudimentary and under the user’s control. The architecture I have in mind would dramatically enhance the flexibility of information exchange by providing a sort of universal and autonomous clipboard: the global workspace.

How would the receiving programs use the information broadcast by this clipboard? My second key ingredient is a powerful learning algorithm. The individual programs would not be static but would be endowed with a capacity to discover the best use for the information they receive. Each program would adjust itself according to a brainlike learning rule that captures the many predictive relationships among its inputs. Thus, the system would adapt to its environment and even to the quirks of its own architecture, rendering it robust, for instance, to the failure of a subprogram. It would discover which of its inputs are worthy of attention and how to combine them to compute useful functions.

And that leads me to my third desired feature: autonomy. Even in the absence of any user interaction, the computer would use its own value system to decide which data are worthy of a slow, conscious examination in the global workspace. Spontaneous activity would constantly let random “thoughts” enter the workspace, where they would be retained or rejected depending on how well they fit the organism’s basic goals. Even in the absence of inputs, a serial stream of fluctuating internal states would arise.
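Here is one way these three ingredients might fit together in code. The sketch below is a toy of my own devising, not a specification of the global neuronal workspace: the class names, the bidding rule, and the value function are all illustrative assumptions. It shows only how specialized processors, a serial broadcast bottleneck, and an internal value system could be combined.

```python
import random

class Processor:
    """A specialized module: bids for workspace access with a salience
    score and receives whatever the workspace broadcasts back."""
    def __init__(self, name):
        self.name = name

    def propose(self, context):
        # Each module offers an interpretation; salience is random here.
        return random.random(), f"{self.name} on {context}"

    def receive(self, content):
        # A full system would learn from each broadcast (the plasticity
        # ingredient); this stub merely stands in for that adjustment.
        pass

class GlobalWorkspace:
    """A limited-capacity, serial bottleneck: one content per cycle,
    selected, checked against the organism's values, and broadcast."""
    def __init__(self, processors, value):
        self.processors = processors
        self.value = value  # the organism's own value function (autonomy)

    def cycle(self, stimulus=None):
        context = stimulus if stimulus is not None else "spontaneous activity"
        bids = [p.propose(context) for p in self.processors]
        salience, content = max(bids)  # serial selection of a single winner
        if self.value(content) > 0:    # retain or reject against basic goals
            for p in self.processors:
                p.receive(content)     # global broadcast to every module
            return content
        return None

modules = [Processor(n) for n in ("faces", "speech", "navigation", "error monitor")]
mind = GlobalWorkspace(modules, value=lambda content: 1)
for _ in range(3):       # even without input, a serial stream of
    print(mind.cycle())  # fluctuating internal states arises
```

Even this skeleton displays, in caricature, the behaviors just described: one content at a time wins access, is gated by the system’s own values, and is broadcast back to every module.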

The behavior of such a simulated organism would be reminiscent of our own variety of consciousness. Without any human intervention, it would set its own goals, explore the world, and learn about its own internal states. And at any time, it would focus its resources on a single internal representation—what we may call its conscious content.

Admittedly, these ideas remain vague. Much work will be needed to turn them into a detailed blueprint. But at least in principle, I see no reason why they would not lead to an artificial consciousness.

Many thinkers disagree. Let us briefly consider their arguments. Some believe that consciousness cannot be reduced to information processing, because no amount of information processing will ever cause a subjective experience. The NYU philosopher Ned Block, for instance, concedes that the workspace machinery may explain conscious access, but he argues that it is inherently incapable of explaining our qualia—the subjective states or raw feelings of “what it is like” to experience a feeling, a pain, or a beautiful sunset.68

David Chalmers, a philosopher at the University of Arizona, similarly maintains that even if workspace theory explains which operations may or may not be performed consciously, it will never explain the riddle of first-person subjectivity.69 Chalmers is famous for introducing a distinction between the easy and the hard problems of consciousness. The easy problem of consciousness, he argues, consists in explaining the many functions of the brain: How do we recognize a face, a word, or a landscape? How do we extract information from the senses and use it to guide our behavior? How do we generate sentences to describe what we feel? “Although all these questions are associated with consciousness,” Chalmers argues, “they all concern the objective mechanisms of the cognitive system, and consequently, we have every reason to expect that continued work in cognitive psychology and neuroscience will answer them.”70 By contrast, the hard problem is

the question of how physical processes in the brain give rise to subjective experience . . . : the way things feel for the subject. When we see, for example, we experience visual sensations, such as that of vivid blue. Or think of the ineffable sound of a distant oboe, the agony of an intense pain, the sparkle of happiness or the meditative quality of a moment lost in thought. . . . It is these phenomena that pose the real mystery of the mind.

My opinion is that Chalmers swapped the labels: it is the “easy” problem that is hard, while the hard problem merely seems hard because it engages ill-defined intuitions. Once our intuition has been educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate. The hypothetical concept of qualia, pure mental experience detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism—the misguided nineteenth-century idea that, however much detail we gather about the chemical mechanisms of living organisms, we will never account for the unique qualities of life. Modern molecular biology shattered this belief by showing how the molecular machinery inside our cells forms a self-reproducing automaton. Likewise, the science of consciousness will keep eating away at the hard problem until it vanishes. For instance, current models of visual perception already explain not only why the human brain suffers from a variety of visual illusions but also why such illusions would appear in any rational machine confronted with the same computational problem.71 The science of consciousness already explains significant chunks of our subjective experience, and I see no obvious limits to this approach.

A related philosophical argument proposes that, however hard we try to simulate the brain, our software will always lack a key feature of human consciousness: free will. To some people, a machine with free will is a contradiction in terms, because machines are deterministic; their behavior is determined by their internal organization and their initial state. Their actions may not be predictable, due to measurement imprecision and chaos, but they cannot deviate from the causal chain that is dictated by their physical organization. This determinism seems to leave no room for personal freedom. As the poet and philosopher Lucretius wrote in the first century BC:

If all movement is always interconnected, the new arising from the old in a determinate order—if the atoms never swerve so as to originate some new movement that will snap the bonds of fate, the everlasting sequence of cause and effect—what is the source of the free will possessed by living things throughout the earth?72

Even top-notch contemporary scientists find this problem so insuperable that they search for new laws of physics. Only quantum mechanics, they argue, introduces the right element of freedom. John Eccles (1903–1997), who received the Nobel Prize in 1963 for his major discoveries on the chemical basis of signal transmission at synapses, was one of these neuroskeptics. For him, the main problem of neuroscience was to figure out “how the self controls its brain,” as the title of one of his numerous books put it73—a questionable expression that smacks of dualism. He ended up gratuitously supposing that the mind’s immaterial thoughts act on the material brain by tweaking the probabilities of quantum events at synapses.

Another brilliant contemporary scientist, the physicist Sir Roger Penrose, agrees that consciousness and free will require quantum mechanics.74 Penrose, together with the anesthesiologist Stuart Hameroff, developed the fanciful view of the brain as a quantum computer. The ability of a quantum physical system to exist in multiple superimposed states would be exploited by the human brain to explore nearly infinite options in finite time, somehow explaining how mathematicians can grasp truths that, according to Gödel’s theorem, escape formal proof.

Unfortunately, these baroque proposals rest on no solid neurobiology or cognitive science. Although the intuition that our mind chooses its actions “at will” begs for an explanation, quantum physics, the modern version of Lucretius’s “swerving atoms,” is no solution. Most physicists agree that the warm, wet milieu of the brain is incompatible with quantum computing, which requires very cold temperatures to avoid a rapid loss of quantum coherence. And the time scale at which we become aware of aspects of the external world is utterly unrelated to the femtosecond (10⁻¹⁵ second) scale at which quantum decoherence typically occurs.

Most crucially, even if quantum phenomena influenced some of the brain’s operations, their intrinsic unpredictability would not satisfy our notion of free will. As convincingly argued by the contemporary philosopher Daniel Dennett, a pure form of randomness in the brain does not provide us with any “kind of freedom worth having.”75 Do we really want our bodies to be haphazardly shaken around by uncontrollable swerves generated at the subatomic level—like the random twitches and tics of a patient with Tourette syndrome? Nothing could be further from our concept of freedom.

When we discuss “free will,” we mean a much more interesting form of freedom. Our belief in free will expresses the idea that, under the right circumstances, we have the ability to guide our decisions by our higher-level thoughts, beliefs, values, and past experiences, and to exert control over our undesired lower-level impulses. Whenever we make an autonomous decision, we exercise our free will by considering all the available options, pondering them, and choosing the one that we favor. Some degree of chance may enter into a voluntary choice, but this is not an essential feature. Most of the time our willful acts are anything but random: they consist in a careful review of our options, followed by the deliberate selection of the one we favor.

This conception of free will requires no appeal to quantum physics and can be implemented in a standard computer. Our global neuronal workspace allows us to collect all the necessary information, both from our current senses and from our memories, synthesize it, evaluate its consequences, ponder them for as long as we want, and eventually use this internal reflection to guide our actions. This is what we call a willed decision.
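A toy example, with entirely invented options and weights, makes the point. The loop below is fully deterministic, yet it does exactly what we describe as willing: it gathers the evidence bearing on each option, weighs it against stored values, and commits to the preferred course of action.

```python
def willed_decision(options, evidence, values):
    """A deterministic deliberation: score each option by weighing its
    evidence against the agent's stored values, then commit to the best."""
    scores = {
        option: sum(values.get(feature, 0.0) * weight
                    for feature, weight in evidence[option].items())
        for option in options
    }
    return max(scores, key=scores.get)  # deliberate choice, no randomness

choice = willed_decision(
    options=["accept the offer", "decline the offer"],
    evidence={"accept the offer":  {"salary": 0.8, "commute": 0.9},
              "decline the offer": {"free time": 0.7}},
    values={"salary": 1.0, "commute": -0.5, "free time": 0.6},
)
print(choice)  # -> "decline the offer": the values, not chance, decide
```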

In thinking about free will, we therefore need to sharply distinguish two intuitions about our decisions: their fundamental indeterminacy (a dubious idea) and their autonomy (a respectable notion). Our brain states are clearly not uncaused and do not escape the laws of physics—nothing does. But our decisions are genuinely free whenever they are based on a conscious deliberation that proceeds autonomously, without any impediment, carefully weighing the pros and cons before committing to a course of action. When this occurs, we are correct in speaking of a voluntary decision—even if it is, of course, ultimately caused by our genes, our life history, and the value functions they have inscribed in our neuronal circuits. Because of fluctuations in spontaneous brain activity, our decisions may remain unpredictable, even to us. Yet this unpredictability is not a defining feature of free will; nor should it be confused with absolute indeterminacy. What counts is autonomous decision making.

In my opinion, a machine with free will is therefore not a contradiction in terms, just a shorthand description of what we are. I have no problem imagining an artificial device capable of willfully deciding on its course of action. Even if our brain architecture were fully deterministic, as a computer simulation might be, it would still be legitimate to say that it exercises a form of free will. Whenever a neuronal architecture exhibits autonomy and deliberation, we are right in calling it “a free mind”—and once we reverse-engineer it, we will learn to mimic it in artificial machines.

In brief, neither qualia nor free will seems to pose a serious philosophical problem for the concept of a conscious machine. Reaching the end of our journey into consciousness and the brain, we realize how carefully we should treat our intuitions of what a complex neuronal machinery can achieve. The richness of information processing that an evolved network of sixteen billion cortical neurons provides lies beyond our current imagination. Our neuronal states ceaselessly fluctuate in a partially autonomous manner, creating an inner world of personal thoughts. Even when confronted with identical sensory inputs, they react differently depending on our mood, goals, and memories. Our conscious neuronal codes also vary from brain to brain. Although we all share the same overall inventory of neurons coding for color, shape, or movement, their detailed organization results from a long developmental process that sculpts each of our brains differently, ceaselessly selecting and eliminating synapses to create our unique personalities.

The neuronal code that results from this crossing of genetic rules, past experiences, and chance encounters is unique to each moment and to each person. Its immense number of states creates a rich world of inner representations, linked to the environment but not imposed by it. Subjective feelings of pain, beauty, lust, or regret correspond to stable neuronal attractors in this dynamic landscape. They are inherently subjective, because the dynamics of the brain embed its present inputs into a tapestry of past memories and future goals, thus adding a layer of personal experience to raw sensory inputs.

What emerges is a “remembered present,”76 a personalized cipher of the here and now, thickened with lingering memories and anticipated forecasts, and constantly projecting a first-person perspective on its environment: a conscious inner world.

This exquisite biological machinery is clicking right now inside your brain. As you close this book to ponder your own existence, ignited assemblies of neurons literally make up your mind.