It is now possible to read and explore our thoughts by decoding patterns of brain activity. With this we can begin to establish, for example, whether a vegetative patient is aware or not. We can also explore dreams and resolve whether they actually happened as we remember them, or are just a narrative created by our brain as we woke up. Who wakes up when consciousness is awoken? What happens in that precise moment?
Consciousness, like time and space, is something we are all familiar with but have trouble defining. We feel it and we sense it in others, yet it is almost impossible to say what it is made of. It is so elusive that many of us often fall into some form of dualism, invoking a non-physical and non-spatial entity to represent the conscious mind.
On 8 May 1794, in Paris, one of the finest French scientists was guillotined by Maximilien Robespierre’s troops after being accused of treason. Antoine Lavoisier was fifty years old and, among his many other legacies, left behind his Elementary Treatise on Chemistry, which was destined to change the world’s economic and social order.
In the splendour of the Industrial Revolution, the steam engine was the motor of economic progress. The physics of heat, which up until then had been merely a matter of intellectual curiosity, took centre stage. The entrepreneurs of the age were urged to improve the efficiency of steam machines. Building on Lavoisier’s studies, Nicolas Léonard Sadi Carnot, in his Borgesian Reflections on the Motive Power of Fire, then sketched out once and for all the ideal machine.
Seen today with privileged hindsight, there is something odd in this scientific epic that is reminiscent of the present situation with consciousness. Lavoisier and Carnot didn’t have the faintest idea what heat was. Even worse, they were stuck between myths and wrong-headed concepts. For example, they believed that heat was a fluid called caloric that flowed from a hotter body to a cooler one. Today we know that heat is really the agitated movement of the particles of matter. For those versed in the subject, the idea of the caloric seems childish, almost absurd.
What will future experts in consciousness think of our contemporary ideas? Today’s neuroscience is at a stage of understanding somewhere between Lavoisier and Carnot. The steam engine changed the eighteenth-century world in the same way that computers and ‘thinking machines’ are changing ours now. Will these new machines be able to feel? Will they have their own wills, conceptions, desires and goals? Will they have consciousness? As with heat in the eighteenth century, science must move rapidly towards an understanding of consciousness, about whose fundamental substratum we still know almost nothing.
I like to think of Sigmund Freud as the Lavoisier of consciousness. Freud’s great speculation was that conscious thought is just the tip of the iceberg, that the human mind is built on a foundation of unconscious thought. We only consciously access the conclusions, the outcomes, the actions evoked by this massively parallel device of unconscious thought. Freud made this discovery blindly, by observing remote and indirect traces of consciousness. Today, unconscious cerebral processes can be seen, brought to light in real time and with high resolution.
The bulk of Freud’s work and almost all of his intellectual lineage were built on a psychological framework. However, over the course of his life, he also formed a neurophysiological theory of mental processes. This progression seems reasonable. To understand breathing, a pulmonologist analyses how the bronchioles work and why they become inflamed. In much the same way, the observation of the structure and functioning of the brain and its tangle of neurons is a natural path for those wanting to understand thought. Sigmund Freud, a brilliant professor of neuropathology in addition to his work as the founding father of psychoanalysis, declared his intentions in one of his first texts, Project for a Scientific Psychology, which was published posthumously: to build a psychology that was a natural science, explaining the psychic processes as quantitative states determined by distinguishable materials of the nervous system. He added that the particles which make up psychic matter are neurons. This last conjecture–which has rarely been recognized–reveals Freud’s magnificent intuition.
In the last years of the nineteenth century, the scientists Santiago Ramón y Cajal and Camillo Golgi were embroiled in a very heated argument. Cajal maintained that the brain was made up of interconnected neurons. Golgi, on the other hand, believed that the brain was like a reticulum, a continuous net. This epic scientific battle was settled by the microscope. Golgi, the great experimenter, developed a staining technique–still known today as Golgi’s method–to see what was previously invisible. This stain added contrast to the grey edges on a grey background of brain tissue and made them visible under the microscope, shiny as gold. Cajal used the same tool. But he was wonderfully skilled at drawing, which made him highly observant and, where Golgi saw a continuum, Cajal saw the opposite: separate pieces (neurons) that scarcely touched. Altogether demolishing the image of science as a world of objective truths, the two bitter enemies shared the 1906 Nobel Prize in Physiology or Medicine. It is one of the loveliest examples of science celebrating, with its highest award and at the same ceremony, two opposing ideas.
Many years and far more powerful microscopes later, we now know that Cajal was right. His work was the foundation of neuroscience, the science that studies neurons and the organ those neurons make up, along with the ideas, dreams, words, desires, decisions, yearnings and memories that they manufacture. But when Freud began his Project for a Scientific Psychology and sketched his model of the brain as a network of connected neurons, the debate between neurons and reticula was still unresolved.
Freud understood that the conditions were not yet ripe for a natural science of thought and that, as such, he would not be the one to pursue his Project. Yet today we–the heirs to Freud’s work–are no longer working blindly as he was then, and we can take up the baton. The time may now be ripe for the Project of a psychology grounded in the biology of the brain.
In his Project, Freud sketched out the first neuronal network in the history of science. This network captured the essence of the more sophisticated models that today emulate the cerebral architecture of consciousness. It was made up of three types of neurons, phi, psi and omega, that functioned like a hydraulic device.
The phi (Φ) are sensory neurons and form rigid circuits that produce stereotypical reactions, such as reflexes. Freud predicted a property of these neurons that today has been proven by much experimental evidence: they live in the present. The Φ neurons fire rapidly because they are composed of permeable walls that release pressure soon after acquiring it. Thus they encode the stimulus received and, almost instantly, forget it. Freud was wrong about the physics–the neurons fire electrically and not hydraulically–but the principle is almost equivalent; the sensory neurons of the primary visual cortex are biophysically characterized by having rapid charge and discharge times.
The Φ neurons also detect our inner world. For example, they react when the body registers that hydration is necessary, by producing a feeling of thirst. So these neurons transmit an objective, a sort of raison d’être–searching out water in this case–but they do not have memory or consciousness.
Freud then introduced another type of neuron, called the psi (Ψ), which is capable of forming memories, allowing the network to detach from the immediacy of the present. Ψ neurons are made up of an impermeable wall that accumulates and stores, in isolation, our history of sensations. Today we know that the neurons in the parietal and frontal cortices–that codify working memory (active, for example, when remembering a phone number or an address for several seconds)–function similarly to Freud’s conjecture. Except that, instead of having an impermeable casing, they manage to keep their activity alive through a feedback mechanism; like a loop that allows them to recoup the charge they are constantly losing. Yet long-term memories–for example, a childhood memory–work in a very different way from what Freud put forth. The mechanism is complex but, in large part, the memory establishes itself in the pattern of connection between neurons and in their structural changes, not in their dynamic electrical charge. This results in much more stable and less costly memory systems.
Freud was visionary in his anticipation of another conundrum. Since consciousness feeds on past experiences and representations of the future, it cannot be attached to the Φ system, which only codifies the present. And since the contents of consciousness–which is to say, what we are thinking–are constantly changing, they cannot correspond to the Ψ system, which doesn’t change over time. With manifest annoyance, Freud then described a new system of neurons that he called omega (Ω). These neurons can–like those of memory–accumulate charge over time and, therefore, organize themselves in episodes. His hypothesis was that the activation of these neurons was related to awareness and that they could integrate information over time and jump, like in hopscotch, between states to the rhythm of an internal clock.
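To make Freud’s sketch concrete, here is a minimal toy simulation in Python (my own illustration, with invented parameters, not Freud’s notation): a leaky Φ unit that forgets almost as soon as it is stimulated, a Ψ unit that holds on to its charge through a feedback loop, and an Ω unit that integrates the Ψ signal step by step, a crude stand-in for the rhythm of that internal clock.

```python
import numpy as np

def simulate(stimulus, leak_phi=0.3, feedback_psi=0.95, gain_omega=0.1):
    """Toy versions of Freud's three neuron types (hypothetical parameters)."""
    phi = psi = omega = 0.0
    trace = []
    for s in stimulus:
        phi = leak_phi * phi + s               # charges and discharges quickly: lives in the present
        psi = feedback_psi * psi + 0.05 * phi  # a loop recoups the charge it keeps losing: memory
        omega += gain_omega * psi              # accumulates over time, step by step: awareness
        trace.append((phi, psi, omega))
    return np.array(trace)

# A brief stimulus: phi decays almost immediately, psi persists, omega keeps integrating.
trace = simulate([1.0, 1.0, 1.0] + [0.0] * 17)
print(trace[[3, 10, 19]].round(3))   # rows of (phi, psi, omega) at steps 4, 11 and 20
```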
We will see that this clock does indeed exist inside our brains, organizing conscious perception into a sequence of film stills. As we will see at the end of this chapter, the existence of such a clock can explain an intriguing and common illusion that Freud could not have seen: for example, why, when we are watching a motor race, do the wheels sometimes seem to turn in the wrong direction?
One of the most powerful ideas in Freud’s neuronal circuit was barely hinted at in his Project. The Φ neurons (sensations) activate the Ψ neurons (memory), which in turn activate the Ω neurons (awareness). In other words, consciousness originates in the unconscious circuits, not in the conscious ones. This flow set a precedent for three interwoven ideas that proved decisive in the study of awareness:
(1) Almost all mental activity is unconscious.
(2) The unconscious is the true motor of our actions.
(3) The conscious mind inherits and, to a certain extent, takes charge of those sparks from the unconscious. Consciousness, thus, is not the genuine author of our (conscious) actions. But it, at least, has the ability to edit, modify and censure them.
This triad, a century later, has become tangible through experiments that hack into the brain, questioning and delineating the notion of free will. When we choose something, was there ever really any other option? Or was everything already determined and we only had the illusion of being in control?
Free will leaped into the scientific arena in the early 1980s with a foundational experiment by Benjamin Libet. The first trick was to reduce freedom of expression to its most rudimentary form: a person freely choosing when to push a button. That reduced it to a single act of just one bit. It is a simple, minimal freedom, but freedom nonetheless. After all, we are all free to push the button whenever we feel like it. Isn’t that so?
Libet understood that in order to reveal this fundamental enigma he had to register three channels simultaneously.
First of all, the exact moment in which a supposedly free decision-maker believes he or she is making a decision. Imagine, for example, that you are on a diving board, deliberating over whether or not to dive into a pool. The process can be long, but there is a fairly precise moment in which you decide whether to dive. With a high-precision watch, and switching the vertigo of the diving board for a mere button, Libet recorded the exact moment in which the participants felt they were making the decision to push the button. This measurement reflects a subjective belief, the story that we tell ourselves about our own free will.
Libet also recorded participants’ muscular activity in order to pinpoint the precise moment when they made use of their supposed freedom and pushed the button. And he discovered that there was a small lag of one third of a second between when they believed they had made a decision and when they carried it out. This is reasonable and simply reflects the conduction time of the motor signal needed to execute the action. To measure brain activity, he used an electroencephalogram (EEG): a few small electrodes placed on the surface of the scalp. And the extraordinary finding in Libet’s experiment showed up in this third channel. He discovered a trail of cerebral activity that allowed him to identify the moment at which participants would press the button, half a second before they themselves recognized their intention. It was the first clear demonstration in the history of science of an observer able to decode cerebral activity in order to predict another person’s intention. In other words, to read their thoughts.
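Putting Libet’s three channels on a common clock, with the approximate lags quoted above (the exact values vary from study to study), gives a simple arithmetic sketch:

```python
# All times in seconds, relative to the moment participants reported deciding.
reported_intention = 0.0
brain_precursor = reported_intention - 0.5   # EEG trail, about half a second earlier
button_press = reported_intention + 1 / 3    # muscular activity, a third of a second later

lead = button_press - brain_precursor
print(f"The brain announces the action roughly {lead:.2f} s before it is executed.")
```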
Libet’s experiment gave rise to a field of investigation that produced countless new questions, details and objections. Here we will only look at three of them. The first two are easily solved. The third opens up a door to something about which we have very little knowledge.
A general criticism of this experiment (made by Libet himself and many other scientists who followed this work) is that the moment in which the decision is made is not always clear. And even if it were, his method allowed for a degree of imprecision in the recording. A second natural objection is that before making a decision there is a process of preparation. One can get into diving position before having decided to dive into the pool. Many of us, in fact, glumly retreat from the board without taking the plunge. Perhaps what Libet observed was the brain’s preparatory circling around the decision.
These two objections are resolved in a contemporary version of Libet’s experiment, conducted by John-Dylan Haynes in 2008, with two subtle but decisive differences. First of all, the resolution of the measuring instrument is improved: functional magnetic resonance imaging replaces the few-channel electroencephalogram that Libet employed, allowing greater precision in decoding cerebral states.
Secondly, participants’ freedom of expression is doubled: they can now choose between two buttons. This minimal variation allowed Haynes to distinguish the choice (right or left button) from the action (the moment of pushing one of the buttons).
With this addition of the second button and the new technology, the magnifying glass used for searching out an unconscious seed in our apparently free and conscious decision-making became much more effective. Based on the pattern of activity in a region of the frontal cortex, it was possible to decipher the content of a decision up to ten seconds before a person felt that they were making it. The region of the brain that heralds our future actions is vast, but it specifically includes a zone in its most frontal and medial part that we are already familiar with: Brodmann Area 10, which coordinates our inner states with the outer world. In other words, by the time a person feels they are making a decision, it has in fact already been made a few seconds earlier, without their knowing it.
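In practice, ‘deciphering the content of a decision’ means training a classifier on the activity patterns recorded before each choice. The sketch below reconstructs that logic with simulated data (the voxel patterns and the weak built-in signal are invented; it is not the original analysis pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-in for the data: one row per trial, containing the frontal
# activity pattern recorded several seconds before the reported decision;
# the label is the button (0 = left, 1 = right) eventually pressed.
rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :5] += 0.3                         # a weak, invented predictive signal

# If the early pattern carries information about the upcoming choice,
# a simple classifier decodes it above chance.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```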
The more difficult problem with Libet’s experiment is knowing what happens if someone intentionally decides to push the button but then deliberately halts before doing so. Libet himself responded to this, arguing that consciousness has no vote but does have a veto. Which is to say, it doesn’t have the capability or the freedom to set an action into motion–the task of the unconscious–but it can, once this action becomes observable to it, manipulate it and eventually stop it. Consciousness, in this scenario, is like a sort of preview of our actions in order to filter and mould them.
In Libet’s experiment, if someone decides to press the button and then changes their mind, a series of cerebral processes can be observed; the first codifies the intent to act that is never realized; later, a very different second process reveals a system of monitoring and censorship governed by another structure in the frontal part of the brain that we have already looked at, the anterior cingulate.
Does the conscious decision to halt an action also stem from another unconscious seed? This is still–as I understand it–a mystery. The problem is sketched in Borges’s fable about chess pieces:
God moves the player, who moves the piece.
What God behind God gives rise to the plot
of dust and time and dreams and agony?
In this endless recursion of wills that control wills (the decision to dive into the pool, then the hesitation and the decision to stop, then another that soothes the fear so the first decision can continue its course …) a loop emerges. It is the brain’s ability to observe itself. And this loop is perhaps, as we will see further on, the basis of the principle of consciousness.
The brain’s two hemispheres are connected by a massive structure of neuronal fibres called the corpus callosum. It is like a system of bridges that coordinates traffic between the two halves of a city divided by a river; without the bridges, the city is split in two. Without the corpus callosum, the cerebral hemispheres are isolated from each other. Some years back, in order to remedy some forms of epilepsy that were resistant to pharmaceutical treatment, some patients underwent a corpus callosotomy, a surgical procedure in which the two hemispheres were split apart. Epilepsy is, to a certain extent, a problem of brain connectivity that results in cycles of neuronal activity that feed on themselves. This surgical procedure interrupts the flow of currents in the brain and is a dramatic but effective way of putting paid to these cycles and, with them, epilepsy.
What happens to the language, emotions and decisions of a body governed by two hemispheres that no longer communicate with each other? The methodical answer to this question, which also allows us to understand how the hemispheres distribute functions, earned Roger Sperry the Nobel Prize–shared with Torsten Wiesel and David Hubel–in 1981. Sperry, along with his student Michael Gazzaniga, discovered an extraordinary fact that, just like Libet’s experiment, changed how we understand our construction of reality and, with it, the fuel of consciousness.
Without the corpus callosum, the information available to one hemisphere cannot be accessed by the other. Therefore, each hemisphere creates its own narrative. But these two versions are enacted by the same body. The right hemisphere only sees the left part of the world and also controls the left part of the body. And vice versa. Additionally, a few cognitive functions are fairly compartmentalized in each hemisphere. Typical cases are language (left hemisphere) and the ability to draw and represent an object in space (right hemisphere). So if patients with separated hemispheres are shown an object on the left side of their visual field, they can draw it but not name it. Conversely, an object to the right of their visual field is accessed by the left hemisphere and as such can be named but not really drawn.
Sperry’s great discovery was understanding how our consciousness creates a narrative. Imagine the following situation: patients with separated hemispheres are given an instruction in their left visual field; for example, that they will be paid money to lift up a bottle of water. Since it was presented to the left visual field, this instruction is only accessible to the right hemisphere. The patients pick up the bottle. The experimenters then ask them why they picked it up, a question that only the other hemisphere, the one that controls language, can answer. What do they reply? The correct answer, from the perspective of the left hemisphere–which did not see the instruction–should be ‘I don’t know.’ But that’s not what the patients say. Instead, they invent a story. They put forth reasons, such as that they were thirsty or that they wanted to pour water for someone else.
The left hemisphere reconstructs a plausible story to justify the participants’ action, since the real motive behind it is inaccessible to them. So the conscious mind acts not only as a front man but also as an interpreter, a narrator who creates a story to explain in hindsight our often inexplicable actions.
Perhaps the most striking aspect of these fictional narratives created by patients with separated hemispheres is that they aren’t deliberate falsifications to hide their ignorance. The narrative is true, even to those fabricating it. Consciousness’s ability to act as an interpreter and invent reasons is much more common than we recognize.
A group of Swedes from Lund–near Ystad, where the detective Kurt Wallander also deals, in his own way, with the intricate tricks of the mind–produced a more spectacular version of the interpreter experiment. In addition to being scientists, these Swedes are magicians and, as such, know better than anyone how to force their audience’s choices, in making them believe illusions in a magic show as well as making them think they’ve made a decision completely freely in a science laboratory. Their way of putting free will in check is the show business equivalent of the project begun by Libet.
The experiment or trick, which here is the same thing, works this way: people are shown two cards, each showing a woman’s face, and they must choose which woman they consider more attractive and then justify their decision. That much is pretty straightforward. But sometimes the scientist–who also acts as the magician–gives the participants–who also act as the audience–the card they didn’t choose. Of course, the scientist does so using sleight of hand so that the switch is imperceptible. And then something extraordinary happens. Instead of saying, ‘Excuse me, I chose the other card’, most of the participants start to give arguments in favour of a choice they actually never made. Again they resort to fiction; again our interpreters create a story in retrospect to explain the unknown course of events.
In Buenos Aires, I set up, with my friend and colleague Andrés Rieznik, a combination of magic and research to develop our own performiments, performances that are also experiments. Andrés and I were investigating psychological forcing, a fundamental concept in magic that is almost the opposite of free will. It uses a set of precise tools to make spectators choose to see or do what the magician wants them to. In his book Freedom of Expression, the great Spanish magician Dani DaOrtiz explains exactly how the use of language, pacing and gaze allows magicians to make audience members do what they want them to. In the performiments, when the magician asks the crowd whether they saw something or not, or whether they chose the card ‘they really want’, the performer is actually following a precise, methodical script to investigate how we perceive, remember, and make decisions.
Using these tools we proved what magicians already knew in their bones: spectators don’t have the slightest idea that they are being forced and, in fact, believe they are making their choices with complete freedom. The spectators later create narratives–sometimes very odd ones–to explain and justify choices they never made, but truly believe that they have made.
We then moved this experiment from the stage to the lab, where we performed an electronic version of a forcing trick. We showed participants a very rapid sequence of cards, one of which was presented for slightly more time. This change went unnoticed by our participants, but it led them to choose the ‘forced’ card in almost half of the trials. The advantage of doing this experiment in the laboratory is that, while participants were observing the flashing deck and making their choices, we could measure their pupil dilation, an autonomic and unconscious response that reflects, among other things, a person’s degree of attention and concentration. And with this we discovered that there are indications in the spectators’ bodies that reveal whether a choice was freely made or not. Approximately one second after a choice, the pupil dilates almost four times more when people choose the forced card. In other words, the body knows whether it has been forced to choose or not. But the spectator has no conscious record of that. So our eyes are more reliable indicators of the true reasons behind a decision than our thoughts.
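The analysis behind that claim is conceptually simple; here is a sketch with invented numbers chosen to mimic the roughly fourfold effect described above (real pupillometry involves baseline correction and per-subject statistics that are left out here):

```python
import numpy as np

# Hypothetical per-trial pupil dilation measured ~1 s after the choice (arbitrary units).
rng = np.random.default_rng(1)
dilation = np.concatenate([rng.normal(0.1, 0.05, 100),    # freely chosen trials
                           rng.normal(0.4, 0.05, 100)])   # forced trials
forced = np.array([False] * 100 + [True] * 100)

ratio = dilation[forced].mean() / dilation[~forced].mean()
print(f"pupil dilation after forced choices is ~{ratio:.1f}x larger")
```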
These experiments deal with the old philosophical dilemma of responsibility and, to a certain extent, question the simplistic notion of free will. But they do not topple this notion, not by a long chalk. We don’t know where or how Libet’s unconscious spark originates. At this point we can only make conjectures about the answers to these questions, as Lavoisier did with his theory of the caloric.
We saw that the brain is capable of observing and monitoring its own processes in order to control them, inhibit them, shape them, halt them or simply manage them, and this gives rise to a loop that is the prelude to consciousness. Now we will see how three seemingly innocuous and mundane questions can help us to reveal and understand the origin of and reason for this loop, and its consequences.
We can touch ourselves, watch ourselves, caress ourselves, but we can’t tickle ourselves. Charles Darwin, the great naturalist and father of contemporary biology, took on this question in depth and with rigour. His idea was that tickling only works if one is taken by surprise, and that unexpected factor disappears when we do it to ourselves. It sounds logical, but it is false. Anyone who has ever tickled someone knows that it is just as effective–or even more so–if the victim is warned ahead of time. The problem of the reflexive impossibility of tickling oneself then becomes much more mysterious; it is not only that it isn’t a surprise.
In 1971, Larry Weiskrantz published an article in Nature entitled ‘Preliminary Observations on Tickling Oneself’. For the first time, tickles took centre stage in the research on consciousness. Then it was Chris Frith, another illustrious figure in the history of human neuroscience, who began to take tickling seriously–despite the oxymoron–as a privileged window into the study of the conscious mind.
Frith built a tickler, a mechanical device that allows people to tickle themselves. The detail that converted the game into science is the ability to change the intensity of its action and the delay before it. When the tickler works with a delay of barely half a second, the tickling is felt as if someone else were doing it. When some time passes between our actions and their consequences, the resulting strangeness makes them feel as though they were performed by someone else.*
Our eyes are in constant movement. They make an average of three saccades or abrupt shifts per second. In each one, our eyes move at top speed from one side of an image to the other. If our eyes are moving all the time, why is the image they construct in our brains still?
We now know that the brain edits the visual narrative. It is like the camera director of the reality we construct. The stabilization of the image depends on two mechanisms that are now being tested out in digital cameras. The first is saccadic suppression; the brain literally stops recording when we are moving our eyes. In other words, for the split-second when our eyes are in motion we are blind.
This can be shown in a quick experiment at home: stop in front of a mirror and direct your gaze to one eye and then the other. When you do this your eyes will, of course, move. Yet, what you will see in the mirror are your immobile eyes. That is the consequence of the microblindness that occurs in the exact moment that our eyes are moving.
Even if the brain edits out the moments when our eyes move, there is still a problem. After a saccade, the image should jump the way it does in home movies or in Dogme films, when the frame instantly shifts from one point in the image to another. But that doesn’t happen. Why not? It turns out that the receptive fields of the neurons in the visual cortex–somewhat analogous to the receptors that codify each pixel of an image–also move to compensate for the eye movement. That generates a smooth perceptive flow, in which the image remains static despite the frame constantly shifting. This is one of many examples of how our sensory apparatus reconfigures itself drastically according to the knowledge the brain has of the actions it is about to carry out. Which is to say, the visual system is like an active camera that knows itself and changes its way of recording depending on how it is planning to move. This is another footprint of the beginning of the loop. The brain reports on itself; it keeps a record of its own activity. This is the prelude to consciousness.
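The two mechanisms, saccadic suppression and the remapping of receptive fields, can be caricatured in a few lines of code. In this sketch (my own simplification, not a model of real neurons) the world is a one-dimensional array, an eye movement is a known shift, frames captured mid-saccade are dropped, and the remaining frames are shifted back by the planned eye displacement, so the reconstructed scene never appears to move.

```python
import numpy as np

scene = np.arange(10)                        # a static one-dimensional 'world'

def capture(eye_position):
    """What the retina sees from a given eye position: a shifted view of the scene."""
    return np.roll(scene, -eye_position)

def stabilise(planned_positions, saccading):
    frames = []
    for position, moving in zip(planned_positions, saccading):
        if moving:
            continue                                          # saccadic suppression: stop recording
        frames.append(np.roll(capture(position), position))   # remap by the planned shift
    return frames

positions = [0, 0, 3, 3, 3]                  # the eye jumps from position 0 to 3
moving    = [False, False, True, False, False]
frames = stabilise(positions, moving)
print(all(np.array_equal(f, scene) for f in frames))   # True: the perceived image never moves
```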
Though in a very different domain, this is the same idea that governs the impossibility of tickling ourselves. The brain foresees the movement it is about to make, and that warning produces a sensory change. This anticipation does not operate consciously–one cannot deliberately decide to feel no tickling, nor voluntarily edit the visual flow–but therein lies the seed of consciousness.
We talk to ourselves all day long, almost always in a near whisper. In schizophrenia, this dialogue melds with reality in thoughts plagued with hallucinations. Chris Frith’s thesis is that these hallucinations result from the inability of schizophrenic patients to recognize that they are the creators of their inner voices. And since they don’t recognize them as their own, as with tickling, they cannot control them.
This argument withstands fierce experimental scrutiny. The region of the brain that codifies sounds–the auditory cortex–responds in a subdued way when we hear our own voice in real time. But if the same speech is played back and heard in a different context, it generates a cerebral response of greater magnitude. This difference is not observed in the auditory cortices of schizophrenics, whose brains do not distinguish when their voices are presented in real time or in a replay.
It turns out to be very difficult to understand the mind’s quirks when we don’t experience them ourselves. How can someone perceive their inner mental conversations as if they were external voices? They are inside us, we produce them, they are obviously ours. Yet there is a space in which almost all of us make the same mistake, again and again: in dreams. They are also fictions created by our imagination, but dreams exercise their own sovereignty; it is difficult, almost impossible, for us to appropriate their stories. What’s more, many times it is impossible to recognize them as dreams or products of our imaginations. That is why we feel relief when we wake up from a nightmare. In some sense, then, dreams and schizophrenia have similarities, since they both revolve around not recognizing the authorship of our own creations.*
These three phenomena suggest a common starting point. When an action is carried out, the brain not only sends a signal to the motor cortex–so that the eyes and hands move–but also alerts the rest of itself so that it can adapt in advance: so that it can stabilize the camera, so that it can recognize inner voices as its own. This mechanism is called the efferent copy, and it is one way the brain has of observing and monitoring itself.
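A standard way to sketch the efferent copy (a textbook comparator model, not a description of any particular experiment) is this: the brain predicts the sensory consequence of its own command, subtracts the prediction from what actually arrives, and attenuates whatever it recognizes as self-generated. Remove the prediction, or delay the touch so the prediction no longer matches, and the very same contact feels external, and ticklish.

```python
def perceive(actual, predicted, threshold=0.3, attenuation=0.2):
    """Comparator sketch: sensations that match the efferent copy are tagged
    as self-generated and attenuated; mismatches are tagged as external."""
    error = abs(actual - predicted)
    if error < threshold:
        return "self", actual * attenuation
    return "external", actual

# A well-predicted, self-produced touch is barely felt...
print(perceive(actual=1.0, predicted=1.0))   # ('self', 0.2)
# ...while the same touch, no longer matching the prediction, feels external.
print(perceive(actual=1.0, predicted=0.0))   # ('external', 1.0)
```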
We have already seen that the brain is a source of unconscious processes, some of which are expressed in motor actions. Shortly before being carried out, they become visible to the brain itself, which identifies them as its own. This sort of cerebral signature has consequences. It happens when we move our eyes, when we can’t tickle ourselves, when we mentally recognize our own voice; we can think generically of this mechanism as an internal communication protocol.
A useful analogy here might be how, when a company decides to launch a new product, it lets its different departments know so that they can coordinate the process: marketing, sales, quality control, public relations, etc. When the company’s internal communications (its efferent copy) fail, incoherencies result. For example, the purchasing group observes that there is less availability of some raw material and has to guess the reason because it is not aware of the new product launch. In the same way, due to the lack of internal information, the brain comes up with its idea of the most plausible scenario for explaining the state of things. We can see in this analogy a metaphor for schizophrenia. It serves to convey the image of how delusions arise from a deficit in an internal communication protocol.
This is, of course, just a thought experiment. There is no doubt that the company is not conscious of itself. But it meets a prerequisite of consciousness when it begins to inform itself of its own knowledge and its own states in a way that can be broadcast to its different sections. However, this discussion may become less rhetorical and more concrete in the near future, when we build machines that can express all the features of consciousness. Will we consider them conscious? What rights and obligations will they have?
We live in unprecedented times, in which the factory of thoughts has lost its opacity and is observable in real time. How does brain activity change when we are conscious of a process?
The most direct way to tackle this question is to compare cerebral responses to two identical sensory stimuli that, due to internal fluctuations–in attention, concentration or the waking state of the subjects–follow completely different subjective trajectories. In one case we consciously recognize the stimulus: we can talk about it and report on it. In the other, the stimulus leaves no conscious trace: it affects the sensory organs and continues its cerebral trajectory without producing any qualitative change in our subjective experience. This is an unconscious, or subliminal, stimulus. Consider the most tangible and common case: imagine that someone is speaking to us while we are placidly falling asleep. The words progressively vanish from consciousness, yet the sound keeps arriving at our ears.
Let us begin by seeing how a subliminal image is represented in the brain. Sensory information arrives, for example, in the form of light to the retina, and turns into electrical and chemical activity that spreads through the axons to the thalamus, in the very centre of the brain. From there, the electrical activity spreads to the primary visual cortex, located in the back of the brain, near the nape. So, about 170 milliseconds after a stimulus reaches the retina, a wave of activity occurs in the brain’s visual cortex. This delay is not only due to the conduction times in the brain but also to the construction of a cerebral state that codifies the stimulus. Our brain lives, literally, in the past.
The activation of the visual cortex codifies the properties of the stimulus–colour, luminosity, movement–so well that in the laboratory an image can be reconstructed based on the pattern of cerebral activation produced. What is most surprising is that this happens even if the image is presented subliminally. In other words, an image remains recorded for a while (at least) in the brain, even though that cerebral activity doesn’t produce a conscious mental image. With the proper technology, this recorded image can be reconstructed and projected. So today we are literally able to see the unconscious.
This whole river of cerebral activity that happens in the underground of consciousness is similar to that provoked by a privileged stimulus which is able to access the narrative of consciousness. This is interesting in and of itself and represents the cerebral trail of unconscious conditioning that was sketched out by Freud. But the unconscious is, in phenomenological and subjective terms, very different from the conscious mind. What happens in the brain to differentiate one process from the other?
The solution is very similar to what makes a fire spread or a tweet go viral. Some messages circulate in a local sphere, and certain fires remain confined to small sectors of a forest. But every once in a while, due to circumstances intrinsic to the object (the content of the tweet or the intensity of the fire) or to the network (the dampness of the ground, or the time of day on a social network), the fire or the tweet takes over the entire network. It spreads massively, in an expansive phenomenon that begins to fuel itself. It becomes viral, and uncontrollable.
In the brain, when the intensity of the neuronal response to a stimulus exceeds a certain threshold, a second wave of cerebral activity is produced, about 300 milliseconds after the stimulus occurs. This second wave is no longer confined to the brain regions related to the sensory nature of the stimulus (the visual cortex for an image or the auditory cortex for a sound); like a wildfire, it spreads throughout the brain.
If this second massive wave takes over the brain almost entirely, the stimulus is conscious. Otherwise, it isn’t. The cerebral activity leaves a mark that is a sort of digital signature of consciousness, allowing us to know if a person is conscious or not, to access their subjectivity, and to know the contents of their mind.
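The threshold-and-spread dynamic can be caricatured with a toy model in the spirit of this account (the equation and parameters below are invented for illustration): a workspace unit excites itself, so a brief stimulus above threshold leaves it in a self-sustained high state, the analogue of the second wave, while a weaker stimulus fades without a trace.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sustained_activity(stimulus, self_excitation=8.0, bias=4.0, steps=40):
    """Toy ignition unit: recurrent self-excitation makes the response all-or-none."""
    x = 0.0
    for t in range(steps):
        inp = stimulus if t < 5 else 0.0    # a brief sensory input
        x = sigmoid(self_excitation * x - bias + inp)
    return x                                # activity long after the stimulus is gone

for s in (0.5, 1.0, 2.5, 3.5):
    x = sustained_activity(s)
    print(f"stimulus {s:.1f} -> late activity {x:.2f} ({'ignited' if x > 0.5 else 'faded'})")
```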
This wave of cerebral activity, which is only registered in conscious processes, is:
(1) MASSIVE. A state of great cerebral activity propagated and distributed throughout the entire brain.
(2) SYNCHRONIZED AND COHERENT. The brain is made up of different modules that carry out specific activities. When a stimulus accesses consciousness, all of these cerebral modules synchronize.
(3) MEDIATED. How does the brain manage to create a state of massive, coordinated activity among modules that usually work independently? What performs that task? The answer is, again, analogous to social networks. What makes information go viral? On the web there are hubs, traffic centres that function as huge propagators of information. For example, if Google prioritizes a particular piece of information in a search, its diffusion increases.
In the brain there are at least three structures that carry out that role:
(a) The frontal cortex, which acts sort of like a control tower.
(b) The parietal cortex, which has the virtue of establishing dynamic route changes between different brain modules, sort of like a railway switch that allows a train to pass from one track to another.
(c) The thalamus, which is in the centre of the brain, connected to all the cortices and in charge of linking them. When the thalamus is inhibited, traffic in the cerebral network breaks down–as if Google shut down one day–and the different modules of the cerebral cortex can no longer synchronize, making consciousness vanish.
(4) COMPLEX. The frontal cortex, the parietal cortex and the thalamus allow the different actors within the brain to act in a coherent manner. But how coherent does brain activity have to be in order to be effective? If the activity were completely disorganized, the traffic and flow of information between different modules would become impossible. Full synchrony, on the other hand, is a state in which ranks and hierarchies are lost, and in which no modules or compartments capable of specialized functions can form. In the extreme states of completely ordered or completely chaotic cerebral activity, consciousness disappears.
This means that the synchronization must have an intermediate degree of complexity and internal structure. We can understand it with an analogy to musical improvisation; if it is totally disorganized, the result is pure noise; if the music is homogeneous and no instrument offers any variation from the others, all musical richness is lost. What’s most interesting happens at an intermediate degree between those two states, in which there is coherence between the different instruments but also a certain freedom. It is the same with consciousness.
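The idea of an intermediate degree of complexity can be made tangible by compressing signals (this is only an illustration of the concept; the measures applied to real recordings are more involved): a fully synchronized signal compresses down to almost nothing, pure noise barely compresses at all, and a structured-but-varied signal, the regime associated above with consciousness, falls in between.

```python
import zlib, random

def complexity(bits):
    """Compressed length of a binary signal: a rough proxy for its complexity."""
    return len(zlib.compress(bytes(bits)))

random.seed(0)
n = 4000
synchronized = [1] * n                                   # everything in lockstep
noise        = [random.randint(0, 1) for _ in range(n)]  # no coordination at all
structured   = [((i // 50) % 2) ^ (1 if random.random() < 0.1 else 0) for i in range(n)]
#               a coordinated slow rhythm with occasional deviations

for name, signal in (("synchronized", synchronized), ("structured", structured), ("noise", noise)):
    print(f"{name:>12}: {complexity(signal)} bytes")
```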
In July 2005 a woman had a car accident that left her in a coma. After the routine procedures, including surgery to reduce pressure in the brain caused by various haemorrhages, the days passed with no signs of her recovering consciousness. From that moment on, and over weeks and months, the woman opened her eyes spontaneously, and had cycles of sleep and wakefulness and some reflexes. But she made no gesture that indicated a voluntary response. All these observations corresponded with the diagnosis of a vegetative state. Was it possible that, against all clinical evidence, the patient had a rich mental life, with a subjective landscape similar to that of a person in a state of full consciousness? How could we know? How can we investigate the mental life in someone else’s mind if they can’t communicate their thoughts?
In general, other people’s mental states–happiness, desire, boredom, weariness, nostalgia–are inferred from their gestures and their verbal expression. Language allows us to share, in a more or less rudimentary way, our own private states: love, desire, pain, a special memory or image. But a person who is unable to externalize this mental life, as happens for example while sleeping, is locked in. Vegetative patients do not externalize their thoughts and, therefore, it was natural to assume that they might not have consciousness. This has all changed. The properties of conscious activity that we enumerated become dramatically relevant because they allow us to decide, in an objective way, whether a person shows the signatures of consciousness. They work as a tool to read and decipher other people’s mental states, something that becomes all the more pertinent when it is the only way to do so, as in the case of vegetative patients.*
Some seven months after the car accident that left her in a vegetative state, doctors studied the woman using functional magnetic resonance imaging. Could tracing her cerebral activity provide a view of her thoughts? Her brain activity, when hearing different phrases, was comparable with that of a healthy person. Most interestingly, the response was more pronounced when a phrase was ambiguous. This suggested that her brain was struggling with that ambiguity, which indicates an elaborate form of processing. Perhaps the woman wasn’t truly in a vegetative state? The observations of her brain were not enough to answer such a significant question conclusively. During deep sleep or under anaesthesia–when one presumes that a person is indeed unconscious–the brain also responds elaborately to phrases and sounds. How can the signature of consciousness be examined more precisely?
When a conscious person imagines that they are playing tennis, the part of the brain that activates the most is known as the supplementary motor area (SMA). This region controls muscular movement.* On the other hand, when someone imagines walking through their house–we can all mentally follow a route through many maps, train lines, friends’ grandmothers’ houses, cities, trails–a network activates, which primarily involves the parahippocampus and the parietal cortex.
The regions that activate when someone imagines they are playing tennis are very different from the ones that activate when they imagine walking through their house. This can be used to decipher thought in a rudimentary but effective way. It is no longer necessary to ask someone if they are imagining tennis or imagining moving through their house. It is possible to decode it precisely just by observing their cerebral activity. In effect, we can read someone else’s mind; at least along a binary code of tennis or house. This tool becomes particularly relevant when we cannot ask questions. Or, more accurately, when the person cannot answer them.
Could that 23-year-old woman in a vegetative state imagine? The British neuroscientist Adrian Owen and his colleagues posed that question in the scanner in January 2006. They asked the patient to imagine playing tennis and then imagine walking around her house, then tennis again, then walking again, and so on, alternately imagining one activity and then the other.
The cerebral activation was indistinguishable from that of a healthy person. So it can be reasonably inferred that she was capable of imagining and, therefore, that she had conscious thoughts that could not have been supposed by her doctors based on clinical observation.
The moment when she managed to break through the opaque shell that had confined her thoughts for months–as Owen and his team observed her thinking directly in her brain–was a landmark in the history of human communication.
The demonstration with tennis and spatial navigation has an even greater significance: it is a way of communicating. A rudimentary but effective one.
With this we can establish a sort of Morse code. Every time you want to say ‘yes’, imagine you are playing tennis. Every time you want to say ‘no’, imagine you are walking through your house. In this way, Owen’s group could communicate for the first time with a vegetative patient, who was twenty-nine years old. When they asked him if his father’s name was Alexander, the supplementary motor area activated, indicating that he was imagining tennis and meaning, in this code, a ‘yes’. Then they asked the patient if his father was named Tomás and the parahippocampus activated, indicating spatial navigation and representing a ‘no’ in the code they’d established. They asked him five questions, which he responded to correctly with this method. But he didn’t respond to the sixth.
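Written as a decoding rule, this ‘Morse code’ is tiny. The sketch below is schematic, with made-up activation values (real analyses compare whole statistical maps across many repetitions): whichever of the two regions responds more clearly during the answer period is read out as ‘yes’ or ‘no’, and if neither does, no answer can be read at all.

```python
def decode_answer(sma, parahippocampus, margin=0.5):
    """Schematic yes/no decoder: imagined tennis (SMA) codes 'yes',
    imagined navigation (parahippocampus) codes 'no'."""
    if sma - parahippocampus > margin:
        return "yes"
    if parahippocampus - sma > margin:
        return "no"
    return "no readable answer"

# Hypothetical activation values (arbitrary units) for three questions:
print(decode_answer(sma=2.1, parahippocampus=0.3))   # -> yes
print(decode_answer(sma=0.2, parahippocampus=1.9))   # -> no
print(decode_answer(sma=0.4, parahippocampus=0.5))   # -> no readable answer
```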
The researchers argued that perhaps he hadn’t heard the question, or maybe he had fallen asleep. This, of course, is very difficult to determine in a vegetative patient. At the same time, the result shows the infinite potential of this window on to a previously inaccessible world, as well as raising a certain scepticism.
This scepticism, as I see it, is a pertinent and necessary warning about a ‘broken link’ in science communication, one that distorts reality. The traces of communication in vegetative patients are promising but still very rudimentary. It is likely that the current limitations will be overcome as the technology improves, but it is deceptive to believe–or to make others believe–that these measures indicate an awareness similar in form or content to that of normal life. Perhaps it is a much more confused and disordered state, a disintegrated, fragmented mind. How can we know?
Tristán Bekinschtein, a friend and companion in many adventures, and I set out to approach this question. Our approach was somewhat minimalist: we tried to identify the minimum behaviour that defines consciousness. And we found the solution in an experiment that Larry Squire, the great neurobiologist of memory, had done by adapting Pavlov’s classic demonstration.
The experiment works like this: a person watching a film–by Charlie Chaplin–hears a sequence of tones: beep buup beep beep buup… One is high-pitched and the other is low. Each time the low tone* is heard, a second later that person receives a slightly annoying burst of air on to one eyelid.
Close to half of the participants recognized the structure: the low tone was always followed by the puff of air. The other half didn’t learn the relationship; they didn’t discover the rules of the game. They could describe the tones and the bothersome burst of air but didn’t perceive any relationship between them. Only those who could consciously describe the rule acquired the conditioned reflex of closing their eyelid after the low-pitched tone, anticipating the air and attenuating its bothersome effect.
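The behavioural marker here is easy to state in code (a sketch with hypothetical trial data, not the original analysis): on each low-tone trial, note whether the eyelid began to close in the interval between the tone and the puff. Learners blink in anticipation more and more often as the session goes on; non-learners do not.

```python
# Hypothetical sessions: one Boolean per low-tone trial, True if the eyelid
# began closing before the puff of air arrived (an anticipatory blink).
learner     = [False, False, True, False, True, True, True, True, True, True]
non_learner = [False, True, False, False, False, True, False, False, False, False]

def anticipation_rate(trials, last_n=5):
    """Fraction of anticipatory blinks over the final trials of a session."""
    recent = trials[-last_n:]
    return sum(recent) / len(recent)

for name, trials in (("learner", learner), ("non-learner", non_learner)):
    print(f"{name}: anticipatory blinks on {anticipation_rate(trials):.0%} of late trials")
```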
Squire’s results seem innocent enough but are actually quite meaningful. This extremely simple procedure establishes a minimal test–a sort of Turing test–for the existence of consciousness. It is the perfect bridge between what we wanted to know–whether vegetative patients have consciousness–and what we could measure: whether or not they blink, something that vegetative patients can do. So Tristán and I built that bridge to measure consciousness in vegetative patients.
I remember the moment as one of the few in my scientific career when I felt the giddiness of discovery: Tristán and I were in Paris, and we discovered that a patient was capable of learning just as well as people with full consciousness. Then, by laboriously repeating the procedure, we found that only three out of the thirty-five patients we had examined showed this residual form of consciousness.
We spent many years refining the process in order to explore in further detail how reality is seen from the perspective of a vegetative patient who has traces of consciousness. In order to do so, Tristán adapted the experiment with beeps and puffs of air into a more sophisticated version. This time, the participants had to discover that different words in a single semantic category were preceded by a puff of air. Being fully aware wasn’t enough to learn that relationship; they also had to be able to direct their attention to the words. Which is to say, those who were distracted learned in a much more rudimentary way.
So we were able to probe the attentional focus of vegetative patients, and we found that their way of learning was very similar to that of distracted people. Perhaps this is a better metaphor for the functioning of the minds of some vegetative patients with signs of consciousness: flightier ways of thinking, in a much more fluctuating, less attentive and more disordered state.
Consciousness has many signatures. These can be naturally combined in order to determine whether a person has consciousness, but the argument for or against the determination of a patient’s consciousness can never be definitive or certain. If their frontal and thalamic activity is normal, if their cerebral activity has an intermediate range of coherence, if certain stimuli generate synchronous activity and after about 300 milliseconds produce a massive wave of cerebral activity, and if, in addition, there is a trail of directed imagination and forms of learning that require consciousness–if all of these conditions are simultaneously found, then it is very plausible that the patient has consciousness. If only some of them exist, then there is less certainty. All these tools in conjunction are the best means we have today of coming up with an objective diagnosis of conscious activity.
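None of these signatures is decisive on its own, which is why they are combined. The sketch below is only a schematic checklist (not a clinical instrument; the thresholds and labels are invented) meant to convey that the diagnosis is graded rather than all-or-nothing.

```python
SIGNATURES = (
    "normal frontal and thalamic activity",
    "intermediate coherence of cerebral activity",
    "late (~300 ms) massive wave to salient stimuli",
    "directed imagination (tennis / house)",
    "learning that requires consciousness",
)

def plausibility(observed):
    """Fraction of signatures observed, with a rough verbal label."""
    score = sum(sig in observed for sig in SIGNATURES) / len(SIGNATURES)
    if score >= 0.8:
        return score, "consciousness very plausible"
    if score >= 0.4:
        return score, "uncertain: only some signatures present"
    return score, "little evidence of consciousness"

print(plausibility({"directed imagination (tennis / house)",
                    "late (~300 ms) massive wave to salient stimuli"}))
```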
Research into others’ thoughts is also a window into the mysterious universe of newborns’ thinking. How does consciousness develop before a child can express it in gestures and concise words?*
Newborns have a much more sophisticated and abstract thought organization than we imagine. They are able to form numerical and moral concepts, as we saw in Chapter 1. But these ways of thinking could be unconscious and don’t tell us much about the subjective experience during development. Are babies consciously aware of what is happening to them, of their memories, their loved ones, or their sadness? Or do they merely express reflexes and unconscious thinking?
This is a very new field of investigation. And it was my friend and colleague of many years Ghislaine Dehaene-Lambertz who took the first stab at it. The strategy is simple; it involves observing whether babies’ brain activity has the cerebral signatures that indicate conscious thought in adults. The trick is very similar to the experiment to understand how, in the adult brain, a conscious process diverges from an unconscious one.
At five months old, the first phase of cerebral response is practically established. This phase codifies visual stimuli, independently of whether they access consciousness. At this point, the visual cortex is already able to recognize faces and does so in similar ways and at a similar speed as adults do.
The second wave–exclusive to conscious perception–changes during development. At one year of age it is already practically consolidated and takes a form very similar to an adult’s, but with one revealing exception: it is much slower. Instead of arising at 300 milliseconds, it consolidates almost a second after seeing a face, as if babies’ conscious film had a slight lag, like when we watch a game being broadcast with a delay and hear our neighbours shouting ‘goal’ before we see it.
This lag in response is much more exaggerated in five-month-old babies. Long before developing use of speech, before crawling, when they can barely sit up, babies already have cerebral activity denoting an abrupt and extended response throughout the brain, which persists after the stimulus disappears.
It is the best proof we have for supposing that they have consciousness of the visual world. Surely less anchored to precise images, probably more confused, slower and hesitant, but consciousness nonetheless. Or at least that is what their brains tell us.
This is the first approximation in science to navigating a previously completely unknown territory: babies’ subjective thought. Not what they are able to do, respond to, observe or remember, but something much more private and opaque, that which they are able to perceive from their conscious minds.
Deciding on the state of consciousness of a baby or of a person in a vegetative state is no longer a matter of mere intuition. Today we have tools that allow us to enter, in real time, the factory of thought. These tools allow us to break through one of the most hermetic and opaque barriers of solitude.
Today we still know very little about the material substratum of consciousness, as was the case before with the physics of heat. But what’s most striking is that despite so much ignorance we can today manipulate consciousness: turn it on and off, read it and recognize it.