CHAPTER 10

Consolidation

Consider a first grader who successfully deployed the three pillars of learning and quickly learned to read. He actively engaged in reading, with curiosity and enthusiasm. He learned to pay attention to every letter of every word, from left to right. And, over the months, as his errors receded, he began to accurately decipher the correspondence between letters and sounds and to store the spellings of irregular words. However, he is not a fluent reader yet, and reads slowly and with effort. What’s missing? He still has to deploy the fourth pillar of learning: consolidation. His reading, which, at this stage, mobilizes all his attention, has to become automatic and unconscious.

The analysis of his reading times is revealing: the longer a word is, the longer it takes him to decipher it (see figure 18 in the color insert). The function is linear: response time increases by a fixed amount of about one-fifth of a second for each additional letter. This is characteristic of a serial, step-by-step operation—and it is completely normal: at his age, reading relies on deciphering letters or letter groups one by one, in a slow and attention-demanding manner.1 But this dysfluent phase should not last forever: with practice, in the two years that follow, the child’s reading will accelerate and become more fluid. After two or three years of intensive practice, the effect of word length will disappear entirely. Dear reader, at this very moment, as your expert brain deciphers my words, you take the exact same amount of time to read any word between three and eight letters long. It takes, on average, about three years of training for visual word recognition to move from sequential to parallel. Ultimately, our visual word form area processes all the letters of a word simultaneously rather than serially.
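The contrast between the beginner’s linear pattern and the expert’s flat one can be captured in a toy model. The sketch below is purely illustrative: the parameters (a start-up cost, roughly one-fifth of a second per letter, a flat expert time) are invented for the example, not fitted to the data from the studies cited above.

```python
def reading_time_ms(word, base_ms=400.0, per_letter_ms=200.0):
    """Toy model of a beginner's serial decoding: a fixed start-up
    cost plus about one-fifth of a second (200 ms) per letter.
    Illustrative parameters only, not real measurements."""
    return base_ms + per_letter_ms * len(word)

def expert_reading_time_ms(word, flat_ms=600.0):
    """Toy model of an expert: letters are processed in parallel,
    so words of three to eight letters take roughly the same time."""
    return flat_ms if 3 <= len(word) <= 8 else flat_ms + 100.0

# The beginner's times grow linearly with word length...
print(reading_time_ms("cat"), reading_time_ms("elephant"))  # 1000.0 2000.0
# ...while the expert's stay flat across that same range.
print(expert_reading_time_ms("cat"), expert_reading_time_ms("elephant"))  # 600.0 600.0
```

Plotting the beginner’s function against word length would give exactly the straight line described above; the expert’s would be a horizontal line between three and eight letters.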

This is an excellent example of the consolidation that occurs in all domains: a shift from slow, conscious, and effortful processing to fast, unconscious, and automatic expertise. Our brains never stop learning. Even when a skill is mastered, we continue to overlearn it. Automatization mechanisms “compile” the operations we regularly use into more efficient routines. They transfer them to other brain circuits, outside our conscious awareness, where they can unfold independently without disrupting other operations in progress.

FREEING UP BRAIN RESOURCES

When you scan the brain of a beginner reader, what do you see? In addition to activation of the normal reading circuit—which includes visual areas for letter recognition and temporal-lobe areas for phoneme, syllable, and word processing—a massive activation of parietal and prefrontal regions is also present.2 This intense and energy-hungry activity, reflecting effort, attention, and conscious executive control, will gradually disappear as learning consolidates (see figure 18 in the color insert). In an expert reader, these regions no longer contribute to reading—they are activated only if you disturb reading, for example by spacing out the  l  e  t  t  e  r  s  , forcing the expert brain to revert to the slow, beginner mode.3

Automating reading means setting up a restricted and specialized circuit for the efficient processing of the strings of letters that we regularly encounter. As we learn, we develop an extraordinarily effective circuit for recognizing the most common characters as well as their combinations.4 Our brain compiles statistics: it determines which letters are most frequent, where they appear most often, and in which associations they occur. Even the primary visual cortex adapts to the shapes and positions of the most frequent letters.5 After a few years of overlearning, this circuit goes into routine mode and manages to function without the slightest conscious intervention.6 At this stage, the activation of the parietal and prefrontal cortex has vanished: we can now read effortlessly.

What is true for reading also applies to all other areas of learning. Whether we learn to type, play a musical instrument, or drive a car, our gestures are initially under the control of the prefrontal cortex: we produce them slowly and consciously, one by one. Practice, however, makes perfect: over time, all effort evaporates, and we can exercise those skills while talking or thinking about something else. Repeated practice turns control over to the motor cortex and especially the basal ganglia, a set of subcortical circuits that record our automatic and routine behaviors (including prayers and swearing!). The same shift happens for arithmetic. For a beginner child, each calculation problem is an Everest that requires great effort to climb and mobilizes the circuits of the prefrontal cortex. At this stage, calculation is sequential: to solve 6 + 3, children will typically count the steps one by one: “Six . . . seven, eight . . . nine!” As consolidation progresses, children begin to retrieve the result straight from memory, and the prefrontal activity fades away in favor of specialized circuits in the parietal and ventral temporal cortex.7
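The two arithmetic strategies described above, sequential counting versus direct retrieval, can be contrasted in a few lines of toy code. This is only a sketch of the behavioral difference, with an invented one-entry memory table standing in for a child’s consolidated number facts.

```python
def count_on(a, b):
    """Beginner strategy: start at a and count b steps one by one,
    keeping track of each intermediate step ("Six... seven, eight... nine!")."""
    steps = []
    total = a
    for _ in range(b):
        total += 1
        steps.append(total)
    return total, steps

def retrieve(a, b, memory={(6, 3): 9}):
    """Consolidated strategy: fetch the answer straight from memory,
    with no intermediate steps. The memory table here is a stand-in."""
    return memory[(a, b)]

print(count_on(6, 3))  # (9, [7, 8, 9]) -- serial, one step per addend unit
print(retrieve(6, 3))  # 9 -- a single lookup, no counting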
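```

The key point is visible in the structure of the code: the beginner’s time grows with the size of the second addend, while retrieval takes the same single step no matter the problem—just as prefrontal, step-by-step computation gives way to direct parietal and temporal retrieval.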

Why is automatization so important? Because it frees up the cortex’s resources. Remember that the parietal and prefrontal executive cortices operate as a generic executive control network that imposes a cognitive bottleneck: it cannot multitask. While our brain’s central executive is focused on one task, all other conscious decisions are delayed or canceled. Thus, as long as a mental operation remains effortful, because it has not yet been automated by overlearning, it absorbs valuable executive attention resources and prevents us from focusing on anything else. Consolidation is essential because it makes our precious brain resources available for other purposes.

Let us take a concrete example. Imagine if you had to solve a math problem, but your reading had remained at the beginner’s level: “A dryver leevz Bawstin att too oh clok and heds four Noo Yiorque too hunjred myels ahwey. Hee ar eye-vz at ate oh clok. Wat waz hiz avrij speed?” I think you get my point: it is practically impossible to do both things at the same time. The difficulty of reading destroys any capacity for arithmetic reflection. To progress, it is essential that the mental tools most useful to us, such as reading or arithmetic, become second nature—that they operate unconsciously and effortlessly. We cannot reach the highest levels of the educational pyramid without first consolidating its foundations.

THE KEY ROLE OF SLEEP

We have already seen that learning is much more efficient when done at regular intervals: rather than cramming an entire lesson into one day, we are better off spreading out the learning. The reason is simple: every night, our brain consolidates what it has learned during the day. This is one of the most important neuroscience discoveries of the last thirty years: sleep is not just a period of inactivity or a garbage collection of the waste products that the brain accumulated while we were awake. Quite the contrary: while we sleep, our brain remains active; it runs a specific algorithm that replays the important events it recorded during the previous day and gradually transfers them into a more efficient compartment of our memory.

The discovery dates back to 1924. That year, two American psychologists, John Jenkins (1901–48) and Karl Dallenbach (1887–1971), revisited the classical studies on memory.8 They re-examined the work of the pioneer of memory, German Hermann Ebbinghaus (1850–1909), who, as early as the end of the nineteenth century, had discovered a basic psychological law: the more time goes by, the less you remember what you learned. The Ebbinghaus forgetting curve is a beautiful, monotonically decreasing exponential. What Jenkins and Dallenbach noticed, however, is that the curve presented a single anomaly: it showed no memory loss between eight and fourteen hours after learning something new. Jenkins and Dallenbach had an epiphany: in the Ebbinghaus experiment, the eight-hour interval corresponded to tests taken on the same day, and the fourteen-hour interval to tests separated by a night of sleep. To get to the bottom of this, they designed a new experiment that disentangled these two variables: the time elapsed before memory was tested, and whether or not the participants had the opportunity to sleep. To do so, they taught their students random syllables either around midnight, just before going to sleep, or in the morning. The result was clear: what we learn in the morning fades away with time, according to Ebbinghaus’s exponential law; what is learned at midnight, on the other hand, remains stable over time (provided the students had at least two hours of sleep). In other words, sleeping prevents forgetting.
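Jenkins and Dallenbach’s insight amounts to a small modification of Ebbinghaus’s exponential law: forgetting proceeds only during waking hours. The sketch below makes that idea concrete; the decay rate is invented for illustration, not taken from either study.

```python
import math

def retention(hours_elapsed, hours_asleep=0.0, decay_per_hour=0.12):
    """Toy forgetting curve in the spirit of Ebbinghaus: retention
    decays exponentially, but -- following Jenkins and Dallenbach's
    observation -- only the waking hours count. The decay rate is
    an illustrative assumption, not a fitted value."""
    waking_hours = hours_elapsed - hours_asleep
    return math.exp(-decay_per_hour * waking_hours)

# Learn in the morning: eight waking hours erode the memory...
morning = retention(8)
# Learn at midnight, then sleep eight hours: fourteen hours later,
# only six waking hours have passed, so retention is higher
# despite the longer total delay.
midnight = retention(14, hours_asleep=8)
print(round(morning, 2), round(midnight, 2))
```

With sleep hours subtracted, the “anomaly” disappears: the curve is a clean exponential in waking time, and the flat segment between eight and fourteen hours is simply the night.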

Several alternative interpretations of these results come to mind. Perhaps memory decays during the day because, while awake, the brain accumulates toxic substances that are eliminated during sleep; or perhaps memory suffers from interference with other events that occur in the interval between learning and testing, which does not happen during sleep. But these alternatives were definitively rejected in 1994, when Israeli researchers demonstrated that sleep causes additional learning: without any extra training, cognitive and motor performance improved after a period of sleep.9 The experiment was simple. During the day, volunteers learned to detect a bar at a specific point on the retina. Their performance improved slowly, and it plateaued after a few hours of training: the limit seemed to have been reached. Send the participants to sleep, however, and surprise: when they wake up the next morning, their performance is much improved, and remains so throughout the following days. Sleep demonstrably causes the extra learning, because if we wake subjects up during the night each time they enter REM sleep, they show no improvement in the morning.

Numerous studies have confirmed and extended these early discoveries.10 The amount of nightly gain varies according to the quality of sleep, which can be assessed by placing electrodes on the scalp and monitoring the slow waves that characterize deep sleep. Both the duration and the depth of sleep predict a person’s performance improvement upon waking. The relationship also operates in the converse direction: the need for sleep seems to depend on the amount of stimulation and learning that occurred during the previous day. In animals, a gene involved in cerebral plasticity, zif-268, increases its expression in the hippocampus and cortex during REM sleep, specifically when the animals were previously exposed to an enriched environment: the increased stimulation leads to a surge in nocturnal brain plasticity.11

The respective roles of the different stages of sleep are not yet perfectly established, but it seems that deep sleep allows for the consolidation and generalization of knowledge (what psychologists call semantic or declarative memory), while REM sleep, during which brain activity is close to a state of wakefulness, reinforces perceptual and motor learning (procedural memory).

THE SLEEPING BRAIN RELIVES THE PREVIOUS DAY

While the psychological demonstrations of the effects of sleep were quite convincing, the neural mechanism by which a sleeping brain could learn, even better than while awake, remained to be identified. In 1994, neurophysiologists Matthew Wilson and Bruce McNaughton made a remarkable discovery: in the absence of any external stimulation, neurons in the hippocampus spontaneously activate during sleep.12 And this activity is not random: it retraces the footsteps that the animal took during the day!

As we saw in Chapter 4, the hippocampus contains place cells, i.e., neurons that fire when an animal is (or believes itself to be) at a certain point in space. The hippocampus is packed with a variety of place-coding neurons, each of which prefers a different location. If you record enough of them, you find that they span the entire space in which the animal walks. When a rat moves through a corridor, some neurons fire at the entrance, others in the middle, and yet others toward the end. Thus, the path that the rat takes is reflected by the successive firing of a whole series of place cells: movement in actual space becomes a temporal sequence in neural space.

And this is where Wilson and McNaughton’s experiments fit in. They discovered that when the rat falls asleep, the place cells in its hippocampus start firing again, in the same order. The neurons literally replay the trajectories of the preceding wake period. The only difference is speed: during sleep, neuronal discharges can be accelerated by a factor of twenty. In their sleep, rats dream of a high-speed race through their environment!

The relationship between the firing of hippocampal neurons and the position of the animal is so faithful that neuroscientists have managed to reverse the process, decoding the content of a dream from the animal’s neuronal firing patterns.13 During wakefulness, as the animal walks around in the real world, the systematic mapping between its location and its brain activity is recorded. These data make it possible to train a decoder, a computer program that reverses the relationship and guesses the animal’s position from the pattern of neuronal firing. When this decoder is applied to sleep data, we see that while the animal dozes, its brain traces out virtual trajectories in space.
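The decoding logic described above can be sketched in miniature. The code below is not the analysis used in the actual experiments (which involve dozens of simultaneously recorded neurons and probabilistic decoders); it is a toy with five simulated place cells and a nearest-neighbor lookup, with all tuning parameters invented for the example.

```python
import math

# Each simulated place cell prefers one location along a 0-to-1 track.
PREFERRED = [0.0, 0.25, 0.5, 0.75, 1.0]

def firing_pattern(position, width=0.2):
    """Simulated firing rates: Gaussian tuning around each cell's
    place field center (illustrative parameters)."""
    return [math.exp(-((position - c) / width) ** 2) for c in PREFERRED]

# "Wake" phase: record the mapping from position to firing pattern
# as the animal walks the track.
training = [(p / 20, firing_pattern(p / 20)) for p in range(21)]

def decode(pattern):
    """Invert the mapping: return the recorded position whose
    firing pattern is closest to the observed one."""
    best = min(training,
               key=lambda entry: sum((a - b) ** 2
                                     for a, b in zip(entry[1], pattern)))
    return best[0]

# "Sleep" phase: a replayed pattern is decoded back into a position.
replayed = firing_pattern(0.6)
print(decode(replayed))  # → 0.6
```

Applied to a whole night of recordings, this kind of inversion turns sequences of firing patterns into virtual trajectories—which is how the dreamed high-speed runs were read out.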

The rat’s brain thus replays, at a high speed, the patterns of activity it experienced the day before. Every night brings back memories of the day. And such replay is not confined to the hippocampus, but extends to the cortex, where it plays a decisive role in synaptic plasticity and the consolidation of learning. Thanks to this nocturnal reactivation, even a single event of our lives, recorded only once in our episodic memory, can be replayed hundreds of times during the night (see figure 19 in the color insert). Such memory transfer may even be the main function of sleep.14 It is possible that the hippocampus specializes in the storage of the events of the preceding day, using a fast single-trial learning rule. During the night, the reactivation of these neuronal signals spreads them to other neural networks, mainly located in the cortex and capable of extracting as much information as possible from each episode. Indeed, in the cortex of a rat that learns to perform a new task, the more a neuron reactivates during the night, the more it increases its participation in the task during the following day.15 Hippocampal reactivation leads to cortical automation.

Does the same phenomenon exist in humans? Yes. Brain imaging shows that during sleep, the neural circuits that we used during the preceding day get reactivated.16 After playing hours of Tetris, gamers were scanned the following night: they literally hallucinated a cascade of geometric shapes in their dreams, and their eyes made corresponding movements, from top to bottom. What’s more, in a recent study, volunteers fell asleep in an MRI machine and were suddenly awakened as soon as their electroencephalogram suggested that they were dreaming. The MRI showed that many areas of their brains had spontaneously activated just before they were woken, and that the recorded activity predicted the content of their dreams. If a participant reported, for instance, the presence of people in their dream, the experimenters detected sleep-induced activity in the cortical area associated with face recognition. Other experiments showed that the extent of this reactivation predicts not only the content of the dream, but also the amount of memory consolidation after waking up. Some neurosurgeons are even beginning to record single neurons in the human brain, and they see that, as in rats, their firing patterns trace out the sequence of events experienced on the preceding day.

Sleep and learning are strongly linked. Numerous experiments show that spontaneous variations in the depth of sleep correlate with variations in performance on the next day. When we learn to use a joystick, for example, during the following night, the frequency and intensity of slow sleep waves increase in the parietal regions of the brain involved in such sensorimotor learning—and the stronger the increase, the more a person’s performance improves.17 Similarly, after motor learning, brain imaging shows a surge of activity in the motor cortex, hippocampus, and cerebellum, accompanied by a decrease in certain frontal, parietal, and temporal areas.18 Experiment after experiment gives convergent results: after sleeping, brain activity shifts around, and a portion of the knowledge acquired during the day is strengthened and transferred to more automatic and specialized circuits.

Although automation and sleep are tightly related, every scientist knows that correlation is not causation. Is the link a causal one? To verify this, we can artificially increase the depth of sleep by creating a resonance effect in the brain. During sleep, brain activity oscillates spontaneously at a slow frequency, on the order of forty to fifty cycles per minute. By giving the brain a small additional kick at just the right frequency, we can make these rhythms resonate and increase their intensity—a bit like when we push a swing at just the right moments, until it oscillates with a huge amplitude. German sleep scientist Jan Born did precisely this in two different ways: by passing tiny currents through the skull, and by simply playing a sound synchronized with the brain waves of the sleeper. Whether electrified or soothed by the sound of waves, the sleeping person’s brain was carried away by this irresistible rhythm and produced significantly more slow waves characteristic of deep sleep. In both cases, on the following day, this resonance led to a stronger consolidation of learning.19

A French start-up has begun exploiting this effect: it sells headbands that supposedly make it easier to fall asleep and increase the depth of sleep by playing quiet sounds that stimulate the slow rhythms of the nocturnal brain. Other researchers attempt to increase learning by forcing the brain to reactivate certain memories at night. Imagine learning certain facts in a classroom heavily scented with the smell of roses. Once you enter deep sleep, we spray your bedroom with the same fragrance. Experiments indicate that the information you learned is much better consolidated the next morning than if you had slept while being exposed to another smell.20 The perfume of roses serves as an unconscious cue that biases your brain to reactivate this particular episode of the day, thus increasing its consolidation in memory.

The same effect can be achieved with auditory cues. Imagine that you are asked to memorize the locations of fifty images, each associated with a given sound (a cat meows, a cow moos, etc.). Fifty items are a lot to remember . . . but the night is there to help. In one experiment, during the night, the researchers stimulated the subjects’ brains with half of the sounds. Hearing them unconsciously during deep sleep biased the nocturnal neuronal replay—and the next morning, the participants remembered the locations of the corresponding images much better.21

In the future, will we all fiddle with our sleep in order to learn better? Many students already do this spontaneously: they review an important lesson just before falling asleep, unknowingly attempting to bias their nocturnal replay. But let’s not confuse such useful strategies with the misconception that one can acquire entirely new skills while sleeping. Some charlatans sell audio recordings that are supposed to teach you a foreign language unconsciously while you sleep. The research is clear—such tapes have no effect whatsoever.22 Although there might be a few exceptions, the bulk of the evidence suggests that the sleeping brain does not absorb new information: it can only replay what it has already experienced. To learn a skill as complex as a new language, the only thing that works is practice during the day, then sleep during the night to reactivate and consolidate what we acquired.

DISCOVERIES DURING SLEEP

Does sleeping merely strengthen memory? Many scientists think otherwise: they report making discoveries during the night. The most famous case is the German chemist August Kekule von Stradonitz (1829–96), who first dreamed up the structure of benzene—an unusual molecule, because its six carbon atoms form a closed loop, like a ring or . . . a snake that bites its tail. This is how Kekule described his dream on that fateful night:

Again the atoms were gamboling before my eyes. . . . My mental eye, rendered more acute by repeated visions of this kind, could now distinguish larger structures of manifold conformation; long rows sometimes more closely fitted together, all twining and twisting in snake-like motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes.

And Kekule concluded: “Let us learn to dream, gentlemen, and then perhaps we shall learn the truth.”

Can sleep really increase our creativity and lead us to truth? While science historians are divided on the authenticity of Kekule’s Ouroboros episode, the idea of a nightly incubation is widespread among scientists and artists. The designer Philippe Starck said with humor in a recent interview, “Every night after putting my book down . . . I say to my wife: ‘I’m off to work.’”23 I myself have often had the experience of discovering the solution to a difficult problem upon waking up. However, a collection of anecdotes does not make for a proof. You have to experiment—and that’s exactly what Jan Born and his team did.24 During the day, these researchers taught volunteers a complex algorithm, which required applying a series of calculations to a given number. However, unbeknownst to the participants, the problem contained a hidden shortcut, a trick that cut the calculation time by a large amount. Before going to sleep, very few subjects had figured it out. However, a good night’s sleep doubled the number of participants who discovered the shortcut, while those who were prevented from sleeping never experienced such a eureka moment. Moreover, the results were the same regardless of the time of day at which participants were tested. Thus, elapsed time was not the determining factor: only sleep led to genuine insight.

Nocturnal consolidation is therefore not limited to the strengthening of existing knowledge. The discoveries from the day are not only stored, but also recoded in a more abstract and general form. Nighttime neuronal replay undoubtedly has a crucial role in this process. Every night, our floating ideas from the day are reactivated hundreds of times at an accelerated rate, thus multiplying the chances that our cortex eventually discovers a rule that makes sense. In addition, the twentyfold acceleration of neural discharges compresses information. High-speed replay implies that the neurons that were activated at long intervals while awake now find themselves adjacent in the night sequence. This mechanism seems ideal for gathering, synthesizing, compressing, and “converting raw information into useful and exploitable knowledge”—the very definition of intelligence according to artificial intelligence mogul Demis Hassabis.

In the future, will intelligent machines have to sleep like we do? The question seems crazy, yet I think that, in a certain sense, they will: their learning algorithms will probably incorporate a consolidation phase similar to what we call sleep. Indeed, computer scientists have already designed several learning algorithms that mimic the sleep/wake cycle.25 These algorithms provide inspiring models for the new vision of learning that I defend in this book, in which learning consists of building an internal generative model of the outside world. Remember that our brain contains massive internal models, capable of resynthesizing a variety of truer-than-life mental images, realistic dialogues, and meaningful deductions. In the awake state, we adjust these models to our environment: we use the sensory data that we receive from the outside world to select whichever model best fits the world around us. During this stage, learning is primarily a bottom-up operation: the unexpected incoming sensory signals, when confronted with the predictions of our internal models, generate prediction-error signals that climb up the cortical hierarchy and adjust the statistical weights at each step, so that our top-down models progressively gain in accuracy.

The new idea is that during sleep, our brain works in the opposite direction: from top to bottom. During the night, we use our generative models to synthesize new, unanticipated images, and part of our brain trains itself on this array of images created from scratch. This enhanced training set allows us to refine our ascending connections. Because both the parameters of the generative model and its sensory consequences are known, it is now much easier to discover the link between them. This is how we become more and more effective in extracting the abstract information that lies behind a specific sensory input: after a good night’s sleep, the slightest clue suffices to identify the best mental model of reality, however abstract it may be.
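The computational advantage of this top-down phase can be shown with a deliberately tiny example. In the sketch below, a toy generative model turns a hidden cause into a sensory observation; during the “sleep” phase, the hidden cause of every dreamed sample is known by construction, so fitting the bottom-up recognition model becomes a simple supervised problem. Everything here (the linear model, the noise level, the sample count) is an invented illustration of the principle, not any published algorithm.

```python
import random
random.seed(0)

# Toy generative (top-down) model: a hidden cause z produces an
# observation x = 2*z plus a little sensory noise.
def generate(z):
    return 2.0 * z + random.gauss(0.0, 0.05)

# "Sleep" phase: because the brain chose z itself, every dreamed
# sample arrives labeled with its hidden cause -- perfect training
# data for the recognition (bottom-up) model.
dreams = [(z, generate(z)) for z in [random.uniform(0, 1) for _ in range(500)]]

# Fit the inverse mapping z ≈ w * x by least squares.
num = sum(z * x for z, x in dreams)
den = sum(x * x for _, x in dreams)
w = num / den
print(round(w, 2))  # close to 0.5, the inverse of the generative slope
```

The recognition model recovers (approximately) the inverse of the generative mapping without a single real-world observation: the generative model manufactured its own labeled training set, which is exactly the role this account assigns to dreaming.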

According to this idea, dreams are nothing more than an enhanced training set of images: our brain relies on internal reconstructions of reality to multiply its necessarily limited experience of the day. Sleep seems to solve a problem that all learning algorithms face: the scarcity of the data available for training. To learn, current artificial neural networks need huge data sets—but life is too short, and our brain has to make do with the limited amount of information it can gather during the day. Sleep may be the solution that the brain found to simulate, in an accelerated manner, myriad events that an entire life would not suffice to experience for real.

During these thought experiments, we occasionally make discoveries. There is nothing magical about this: as our mental simulation engine runs, it sometimes hits upon unexpected outcomes—a bit like a chess player, once she has mastered the rules, can spend years exploring their consequences. Indeed, humanity owes to mental imagery some of its greatest scientific discoveries—when Einstein dreamed of riding a photon, for instance, or when Newton imagined the moon falling onto the earth like an apple. Even Galileo’s most famous experiment, in which he dropped objects from the Tower of Pisa to prove that their free-falling speed does not depend on their mass, probably never took place. A thought experiment sufficed: Galileo imagined dropping two spheres, one light and one heavy, from the top of the tower; assumed that the heavier one would fall faster; and used his mental models to show that this led to a contradiction. Suppose, he said, that I connect the two spheres with a wire of negligible mass. The resulting two-sphere system, now forming a heavier object, should fall even faster. But this is absurd, because the lighter sphere, which falls less quickly, should slow down the heavier one. Such contradictions leave only one possibility: all objects fall at the same speed regardless of their mass.

This is the kind of reasoning that our mental simulator affords, day or night. The very fact that we can conjure such complex mental scenes highlights the extraordinary array of algorithms in our brain. Of course, we learn during the day, but nocturnal neuronal replay multiplies our potential. This may indeed be one of the secrets of the human species, because suggestive data indicate that our sleep may be the deepest and most effective of all primates.26

SLEEP, CHILDHOOD, AND SCHOOL

What about children? Everyone knows that infants spend most of their time sleeping, and that sleep shortens with age. This is logical: early childhood is a privileged period during which our learning algorithms have a heavier workload. In fact, experimental data show that, for the same length of time, a child’s sleep is two to three times more effective than that of an adult. After intensive learning, ten-year-old children dive much faster into deep sleep than adults. Their slow waves are more intense, and the result is clear: when they study a sequence, sink into sleep, and wake up the next day refreshed and rested, they discover more regularities than adults.27

Nocturnal consolidation is already at work during the first few months of life. Infants under one year of age rely on it, for example, when they learn a novel word. Babies who take even a short nap, an hour and a half long, are much better at retaining the words they learned in the few hours before falling asleep.28 Above all, they generalize them better: the first time babies hear the word “horse,” they associate it only with one or two specific instances of horses, but after having slept, their brains manage to associate this word with new specimens that they have never seen before. Like Kekule in the crib, these budding scientists make discoveries during their sleep and wake up with a much better theory of the word horse.

What about school-age children? Research is equally clear: in preschool, even a brief afternoon nap strengthens the memory of what the children learned in the morning.29 For maximum benefit, sleep should occur within hours of learning. This benefit, however, exists only in children who regularly take naps. Since the brain naturally regulates its need for sleep according to the stimulation of the day, it does not seem useful to force children to nap, but we should encourage napping for those who feel the need.

Unfortunately, with TV, smartphones, and internet galore, children’s sleep, like that of adults, is now threatened on all fronts. What are the consequences? Can chronic sleep deprivation go so far as to cause specific learning disabilities, which are apparently on the rise? This is still only a hypothesis, but there are some suggestive hints.30 For instance, a subset of hyperactive children with attention disorders may simply be suffering from a chronic lack of sleep. Some experience sleep apneas that prevent them from falling into deep sleep—and simply clearing out the airways suffices to eliminate not only their chronic sleep deficit, but also their attention impairment. Recent experiments even suggest that electrical stimulation of the brain, by increasing the depth of slow sleep waves, may mitigate the learning deficit in hyperactive children.

Let me be clear: these recent data still need to be replicated, and I am in no way denying the existence of genuine attention disorders (in children for whom attention training, or sometimes the drug Ritalin, can have very positive effects). From an educational perspective, however, there is little doubt that improving the length and quality of sleep can be an effective intervention for all children, especially those with learning difficulties.

This idea has been tested in teenagers. Around puberty, chronobiology shows that the sleep cycle shifts: adolescents do not feel the need to go to bed early, but, as everyone may have experienced, they have the greatest difficulty getting up. This is not unwillingness but a simple consequence of the massive turmoil in the neural and hormonal networks that control their sleep/wake cycle. Unfortunately, no one seems to have informed school principals, who continue to require students to be present early in the morning. What would be so bad about changing this arbitrary convention? The experiment has been done, with promising results: once the start of school is delayed by half an hour to an hour, teenagers get more sleep, school attendance increases, attention in class improves, and grades shoot up.31 And the list of positive effects could go on: the American Academy of Pediatrics strongly recommends delaying school start times as an efficient countermeasure to teenage obesity, depression, and accidents (e.g., drowsy driving). That children’s general physical and mental well-being can be so easily improved, at strictly no cost, provides a magnificent example of adapting the educational system to the constraints of brain biology.