In September 2009, an extraordinary child forced me to drastically revise my ideas about learning. I was visiting the Sarah Hospital in Brasilia, a neurological rehabilitation center whose white architecture was inspired by Oscar Niemeyer and with which my laboratory has collaborated for about ten years. The director, Lucia Braga, asked me to meet one of her patients, Felipe, a young boy only seven years old, who had spent more than half his life in a hospital bed. She explained to me how, at the age of four, he had been shot in the street—unfortunately not such a rare event in Brazil. The stray bullet had severed his spinal cord, thus rendering him almost completely paralyzed (tetraparetic). It also destroyed the visual areas of his brain: he was fully blind. To help him breathe, an opening was made in his trachea, at the base of his neck. And for over three years, he had been living in a hospital room, locked within the coffin of his inert body.
In the corridor leading to his room, I remember bracing myself at the thought of having to face a broken child. And then I meet . . . Felipe, a lovely little boy like any other seven-year-old—talkative, full of life, and curious about everything. He speaks flawlessly with an extensive vocabulary and asks me mischievous questions about French words. I learn that he has always been passionate about languages and never misses an opportunity to enrich his trilingual vocabulary (he speaks Portuguese, English, and Spanish). Although he is blind and bedridden, he escapes into his imagination by writing his own novels, and the hospital team has encouraged him in this path. In a few months, he learned to dictate his stories to an assistant, then write them himself using a special keyboard connected to a computer and sound card. The pediatricians and speech therapists take turns at his bedside, transforming his writings into real, tactile books with embossed illustrations that he proudly sweeps with his fingers, using the little sense of touch that he has left. His stories speak of heroes and heroines, mountains and lakes that he will never see, but that he dreams of like any other little boy.
Meeting Felipe deeply moved me, and it also persuaded me to take a closer look at what is probably the greatest talent of our brain: the ability to learn. Here was a child whose very existence posed a challenge to neuroscience. How do our brain’s cognitive faculties resist such a radical upheaval of their environment? Why could Felipe and I share the same thoughts, given our extraordinarily different sensory experiences? How do different human brains converge on the same concepts, almost regardless of how and when they learn them?
Many neuroscientists are empiricists: together with the English Enlightenment philosopher John Locke (1632–1704), they presume that the brain simply draws its knowledge from its environment. In this view, the main property of cortical circuits is their plasticity, their ability to adapt to their inputs. And, indeed, nerve cells possess a remarkable ability to constantly adjust their synapses according to the signals they receive. Yet if this were the brain’s main drive, my little Felipe, deprived of visual and motor inputs, should have become a profoundly limited person. By what miracle did he manage to develop strictly normal cognitive abilities?
Felipe’s case is by no means unique. Everybody knows the story of Helen Keller (1880–1968) and Marie Heurtin (1885–1921), both of whom were born deaf and blind and yet, after years of grueling social isolation, learned sign language and ultimately became brilliant thinkers and writers.1 Throughout these pages, we will meet many other individuals who, I hope, will radically alter your views on learning. One of them is Emmanuel Giroux, who has been blind since the age of eleven but became a top-notch mathematician. Paraphrasing the fox in Antoine de Saint-Exupéry’s The Little Prince (1943), Giroux confidently states: “In geometry, what is essential is invisible to the eye. It is only with the mind that you can see well.” How does this blind man manage to swiftly navigate within the abstract spaces of algebraic geometry, manipulating planes, spheres, and volumes without ever seeing them? We will discover that he uses the same brain circuits as other mathematicians, but that his visual cortex, far from remaining inactive, has actually repurposed itself to do math.
I will also introduce you to Nico, a young painter who, while visiting the Marmottan Museum in Paris, managed to make an excellent copy of Monet’s famous painting Impression, Sunrise (see figure 1 in the color insert). What is so exceptional about this? Nothing, besides the fact that he accomplished it with only a single hemisphere, his left one—the right half of his brain was almost fully removed at the age of three! Nico’s brain learned to squeeze all his talents into half a brain: speech, writing, and reading, as usual, but drawing and painting too, which are generally thought to be functions of the right hemisphere, and also computer science and even wheelchair fencing, a sport in which he has reached the rank of champion in Spain. Forget everything you were told about the respective roles of both hemispheres, because Nico’s life proves that anyone can become a creative and talented artist without a right hemisphere! Cerebral plasticity seems to work miracles.
We will also visit the infamous orphanages of Bucharest, where children were left from birth in near-total abandonment—and yet, years later, some of them, adopted before the age of one or two, have had almost normal school experiences.
All these examples illustrate the extraordinary resilience of the human brain: even major trauma, such as blindness, the loss of a hemisphere, or social isolation, cannot extinguish the spark of learning. Language, reading, mathematics, artistic creation: all these unique talents of the human species, which no other primate possesses, can resist massive injuries, such as the removal of a hemisphere or the loss of sight and motor skills. Learning is a vital principle, and the human brain has an enormous capacity for plasticity—to change itself, to adapt. Yet we will also discover dramatic counterexamples, where learning seems to freeze and remain powerless. Consider pure alexia, the inability to read a single word. I have personally studied several adults, all of whom were excellent readers, who had a tiny stroke restricted to a minuscule brain area that rendered them incapable of deciphering words as simple as “dog” or “mat.” I remember a brilliant trilingual woman, a faithful reader of the French newspaper Le Monde, who was deeply sorrowed at the fact that, after her brain injury, every page of the daily press looked like Hebrew. Her determination to relearn to read was every bit as strong as her stroke had been severe. However, after two years of perseverance, her reading level still did not exceed that of a kindergartner: it took her several seconds to decipher a single word, letter by letter, and she stumbled over each one. Why couldn’t she learn? And why do some children, who suffer from dyslexia, dyscalculia, or dyspraxia, struggle just as hopelessly to acquire reading, calculation, or writing, while others sail smoothly through those same fields?
Brain plasticity almost seems temperamental: sometimes it overcomes massive difficulties, and other times it leaves children and adults who are otherwise highly motivated and intelligent with debilitating disabilities. Does it depend on particular circuits? Do these circuits lose their plasticity over the years? Can plasticity be reopened? What are the rules that govern it? How can the brain be so effective from birth and throughout a child’s youth? What algorithms allow our brain circuits to form a representation of the world? Would understanding them help us learn better and faster? Could we draw inspiration from them in order to build more efficient machines, artificial intelligences that would ultimately imitate us or even surpass us? These are some of the questions that this book attempts to answer, in a radically multidisciplinary manner, drawing on recent scientific discoveries in cognitive science and neuroscience, but also in artificial intelligence and education.
Why do we have to learn in the first place? The very existence of the capacity to learn raises questions. Wouldn’t it be better for our children to immediately know how to speak and think, right from day one, like Athena, who, according to legend, emerged into the world from Zeus’s skull, fully grown and armed, as she let out her war cry? Why aren’t we born pre-wired, with pre-programmed software and exactly the pre-loaded knowledge necessary to our survival? In the Darwinian struggle for life, shouldn’t an animal who is born mature, with more knowledge than others, end up winning and spreading its genes? Why did evolution invent learning in the first place?
My answer is simple: a complete pre-wiring of the brain is neither possible nor desirable. Impossible, really? Yes, because if our DNA had to specify all the details of our knowledge, it simply would not have the necessary storage capacity. Our twenty-three chromosomes contain three billion pairs of the “letters” A, C, G, T—the molecules adenine, cytosine, guanine, and thymine. How much information does that represent? Information is measured in bits: a binary decision, 0 or 1. Since each of the four letters of the genome codes for two bits (we can code them as 00, 01, 10, and 11), our DNA therefore contains a total of six billion bits. Remember, however, that in today’s computers, we count in bytes, which are sequences of eight bits. The human genome can thus be reduced to about 750 megabytes—the contents of an old-fashioned CD-ROM or a small USB key! And this basic calculation does not even take into account the many redundancies that abound in our DNA.
From this modest amount of information, inherited from millions of years of evolution, our genome, initially confined to a single fertilized egg, manages to set up the whole body plan—every molecule of every cell in our liver, kidneys, muscles, and, of course, our brain: eighty-six billion neurons, a thousand trillion connections. . . . How could our genome possibly specify each one of them? Assuming that each of our nerve connections encodes only one bit, which is certainly an underestimate, the capacity of our brain is on the order of one hundred terabytes (about 10¹⁵ bits), or a hundred thousand times more than the information in our genome. We are faced with a paradox: the fantastic palace that is our brain contains a hundred thousand times more detail than the architect’s blueprints that are used to build it! I see only one explanation: the structural frame of the palace is built following the architect’s guidelines (our genome), while the details are left to the project manager, who can adapt the blueprints to the terrain (the environment). Pre-wiring a human brain in all its detail would be strictly impossible, which is why learning is needed to supplement the work of genes.
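The arithmetic behind this bookkeeping argument can be checked in a few lines. The sketch below uses the chapter's own round numbers and its deliberate simplifications (exactly two bits per DNA letter, one bit per synaptic connection); it is an illustration, not a biological measurement.

```python
# Back-of-the-envelope check of the genome-versus-brain bookkeeping.
# Simplifying assumptions from the text: each DNA base pair carries
# exactly 2 bits, and each synaptic connection stores just 1 bit.

BASE_PAIRS = 3_000_000_000            # "letters" in the human genome
BITS_PER_LETTER = 2                   # A, C, G, T -> 00, 01, 10, 11

genome_bits = BASE_PAIRS * BITS_PER_LETTER       # six billion bits
genome_megabytes = genome_bits / 8 / 1_000_000   # 8 bits per byte

CONNECTIONS = 10**15                  # "a thousand trillion connections"
brain_bits = CONNECTIONS * 1          # 1 bit each, surely an underestimate

print(genome_megabytes)               # 750.0 -> about one CD-ROM
print(brain_bits / genome_bits)       # ~166,667 -> "a hundred thousand times"
```

The ratio comes out near 1.7 × 10⁵, which is indeed "on the order of a hundred thousand times" more storage in the brain than in the genome.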
This simple bookkeeping argument, however, fails to explain why learning is so universally widespread in the animal world. Even simple organisms devoid of any cortex, such as earthworms, fruit flies, and sea cucumbers, learn many of their behaviors. Take the little worm called the “nematode,” or C. elegans. In the past twenty years, this millimeter-size animal became a laboratory star, in part because its architecture is under strong genetic determinism and can be analyzed down to the smallest detail. Most individual specimens have exactly 959 cells, including 302 neurons, whose connections are all known and reproducible. And yet it learns.2 Researchers initially considered it as a kind of robot just able to swim back and forth, but they later realized that it possesses at least two forms of learning: habituation and association. Habituation refers to an organism’s capacity to adapt to the repeated presence of a stimulus (for example, a molecule in the water in which the animal lives) and eventually cease to respond to it. Association, on the other hand, consists of discovering and remembering what aspects of the environment predict sources of food or danger. The nematode worm is a champion of association: it can remember, for instance, which tastes, smells, or temperature levels were previously associated with food (bacteria) or with a repellent molecule (the smell of garlic) and use this information to choose an optimal path through its environment.
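The two forms of learning just described can be caricatured with standard textbook models. The sketch below is purely illustrative and makes no claim about actual nematode circuitry: habituation is modeled as a response that shrinks with repetition, and association as a Rescorla-Wagner-style update in which a cue's predicted value is nudged toward the observed outcome.

```python
# Toy models of the two forms of learning described above (illustrative
# only; not a simulation of real C. elegans neurons).

def habituate(response, rate=0.5):
    """Each repeated, inconsequential presentation shrinks the response."""
    return response * (1 - rate)

def associate(value, outcome, rate=0.3):
    """Nudge the cue's predicted value toward what actually happened
    (a Rescorla-Wagner-style delta update)."""
    return value + rate * (outcome - value)

# Habituation: the same harmless stimulus, presented ten times.
r = 1.0
for _ in range(10):
    r = habituate(r)        # the response gradually fades away

# Association: a smell repeatedly paired with food (outcome = 1.0).
v = 0.0
for _ in range(10):
    v = associate(v, outcome=1.0)   # the cue comes to predict food

print(r, v)   # r is near 0 (habituated); v is near 1 (learned association)
```

Both mechanisms boil down to the same principle the chapter keeps returning to: a parameter is adjusted, trial after trial, in the direction that reduces the mismatch between prediction and experience.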
With such a small number of neurons, the worm’s behavior could have been fully pre-wired. However, it is not. The reason is that it is highly advantageous, indeed indispensable for its survival, to adapt to the specific environment in which it is born. Even two genetically identical organisms will not necessarily encounter the same ecosystem. In the case of the nematode, the ability to quickly adjust its behavior to the density, chemistry, and temperature of the place in which it lands allows it to be more efficient. More generally, every animal must quickly adapt to the unpredictable conditions of its current existence. Natural selection, Darwin’s remarkably efficient algorithm, can certainly succeed in adapting each organism to its ecological niche, but it does so at an appallingly slow rate. Whole generations must die, due to lack of proper adaptation, before a favorable mutation can increase the species’ chance of survival. The ability to learn, on the other hand, acts much faster—it can change behavior within the span of a few minutes, which is the very quintessence of learning: being able to adapt to unpredictable conditions as quickly as possible.
This is why learning evolved. Over time, the animals that possessed even a rudimentary capacity to learn had a better chance of surviving than those with fixed behaviors—and they were more likely to pass their genome (now including genetically driven learning algorithms) on to the next generation. In this manner, natural selection favored the emergence of learning. The evolutionary algorithm discovered a good trick: it is useful to let certain parameters of the body change rapidly in order to adjust to the most volatile aspects of the environment.
Naturally, several aspects of the physical world are strictly invariable: gravitation is universal; the propagation of light and sound does not change overnight; and that is why we do not have to learn how to grow ears, eyes, or the labyrinths that, in our vestibular system, keep track of our body’s acceleration—all these properties are genetically hardwired. However, many other parameters, such as the spacing of our two eyes, the weight and length of our limbs, or the pitch of our voice, all vary, and this is why our brain must adapt to them. As we shall see, our brains are the result of a compromise—we inherit, from our long evolutionary history, a great deal of innate circuitry (coding for all the broad intuitive categories into which we subdivide the world: images, sounds, movements, objects, animals, people . . .) but also, perhaps, to an even greater extent, some highly sophisticated learning algorithm that can refine those early skills according to our experience.
If I had to sum up, in one word, the singular talents of our species, I would answer with “learning.” We are not simply Homo sapiens, but Homo docens—the species that teaches itself. Most of what we know about the world was not given to us by our genes: we had to learn it from our environment or from those around us. No other animal has managed to change its ecological niche so radically, moving from the African savanna to deserts, mountains, islands, polar ice caps, cave dwellings, cities, and even outer space, all within a few thousand years. Learning has fueled it all. From making fire and designing stone tools to agriculture, exploration, and atomic fission, the story of humanity is one of constant self-reinvention. At the root of all these accomplishments lies one secret: the extraordinary ability of our brain to formulate hypotheses and select those that fit with our environment.
Learning is the triumph of our species. In our brain, billions of parameters are free to adapt to our environment, our language, our culture, our parents, or our food. . . . These parameters are carefully chosen: over the course of evolution, the Darwinian algorithm carefully delineated which brain circuits should be pre-wired and which should be left open to the environment. In our species, the contribution of learning is particularly large since our childhood extends over many more years than it does for other mammals. And because we possess a unique knack for language and mathematics, our learning device is able to navigate vast spaces of hypotheses that recombine into potentially infinite sets—even if they are always grounded in fixed and invariable foundations inherited from our evolution.
More recently, humanity discovered that it could increase this remarkable ability even further with the help of an institution: the classroom. Pedagogy is an exclusive privilege of our species: no other animal actively teaches its offspring by setting aside specific time to monitor their progress, difficulties, and errors. The invention of the school, an institution which systematizes the informal education present in all human societies, has vastly increased our brain potential. We have discovered that we can take advantage of the exuberant plasticity of the child brain to instill in it a maximum amount of information and talent. Over centuries, our school system has continued to improve in efficiency, starting earlier and earlier in childhood and now lasting for fifteen years or more. Increasing numbers of brains benefit from higher education. Universities are neural refineries where our brain circuits acquire their best talents.
Education is the main accelerator of our brain. It is not difficult to justify its presence in the top spots in government spending: without it, our cortical circuits would remain diamonds in the rough. The complexity of our society owes its existence to the multiple improvements that education brings to our cortex: reading, writing, calculation, algebra, music, a sense of time and space, a refinement of memory. . . . Did you know, for example, that the short-term memory of a literate person, the number of syllables she can repeat, is almost double that of an adult who never attended school and remained illiterate? Or that IQ increases by several points for each additional year of education and literacy?
Education magnifies the already considerable faculties of our brain—but could it perform even better? At school and at work, we constantly tinker with our brain’s learning algorithms, yet we do so intuitively, without paying attention to how to learn. No one has ever explained to us the rules by which our brain memorizes and understands or, on the contrary, forgets and makes mistakes. It truly is a pity, because the scientific knowledge is extensive. An excellent website, put together by the British Education Endowment Foundation (EEF),3 lists the most successful educational interventions—and it gives a very high ranking to the teaching of metacognition (knowing the powers and limits of one’s own brain). Learning to learn is arguably the most important factor for academic success.
Fortunately, we now know a lot about how learning works. Thirty years of research, at the boundaries of computer science, neurobiology, and cognitive psychology, have largely elucidated the algorithms that our brain uses, the circuits involved, the factors that modulate their efficacy, and the reasons why they are uniquely efficient in humans. In this book, I will discuss all those points in turn. When you close this book, I hope you will know much more about your own learning processes. It seems fundamental, to me, that every child and every adult realize the full potential of his or her own brain and also, of course, its limits. Contemporary cognitive science, through the systematic dissection of our mental algorithms and brain mechanisms, gives new meaning to the famous Socratic adage “Know thyself.” Today, the point is no longer just to sharpen our introspection, but to understand the subtle neuronal mechanics that generate our thoughts, in an attempt to use them in optimal accordance with our needs, goals, and desires.
The emerging science of how we learn is, of course, of special relevance to all those for whom learning is a professional activity: teachers and educators. I am deeply convinced that one cannot properly teach without possessing, implicitly or explicitly, a mental model of what is going on in the minds of the learners. What sort of intuitions do they start with? What steps do they have to take in order to move forward? What factors can help them develop their skills?
While cognitive neuroscience does not have all the answers, we begin to understand that all children start off life with a similar brain architecture—a Homo sapiens brain, radically different from that of other apes. I am not denying, of course, that our brains vary: the quirks of our genomes, as well as the whimsies of early brain development, grant us slightly different strengths and learning speeds. However, the basic circuitry is the same in all of us, as is the organization of our learning algorithms. There are therefore fundamental principles that any teacher must respect in order to be most effective. In this book, we will see many examples. All young children share abstract intuitions in the domains of language, arithmetic, logic, and probability, thus providing a foundation on which higher education must be grounded. And all learners benefit from focused attention, active engagement, error feedback, and a cycle of daily rehearsal and nightly consolidation—I call these factors the “four pillars” of learning, because, as we shall see, they lie at the foundation of the universal human learning algorithm present in all our brains, children and adults alike.
At the same time, our brains do exhibit individual variations, and in some extreme cases, a pathology can appear. The reality of developmental pathologies, such as dyslexia, dyscalculia, dyspraxia, and attention disorders, is no longer a subject of doubt. Fortunately, as we increasingly understand the common architecture from which these quirks arise, we also discover that simple strategies exist to detect and compensate for them. One of the goals of this book is to spread this growing scientific knowledge, so that every teacher, and also every parent, can adopt an optimal teaching strategy. While children vary dramatically in what they know, they still share the same learning algorithms. Thus, the pedagogical tricks that work best with all children are also those that tend to be the most efficient for children with learning disabilities—they must be applied only with greater focus, patience, systematicity, and tolerance to error.
And the latter point is crucial: while error feedback is essential, many children lose confidence and curiosity because their errors are punished rather than corrected. In schools worldwide, error feedback is often synonymous with punishment and stigmatization—and later in this book I will have much to say about the role of school grades in perpetuating this confusion. Negative emotions crush our brain’s learning potential, whereas providing the brain with a fear-free environment may reopen the gates of neuronal plasticity. There will be no progress in education without simultaneously considering the emotional and cognitive facets of our brain—in today’s cognitive neuroscience, both are considered key ingredients of the learning cocktail.
Today, human intelligence faces a new challenge: we are no longer the only champions of learning. In all fields of knowledge, learning algorithms are challenging our species’ unique status. Thanks to them, smartphones can now recognize faces and voices, transcribe speech, translate foreign languages, control machines, and even play chess or Go—much better than we can. Machine learning has become a billion-dollar industry that is increasingly inspired by our brains. How do these artificial algorithms work? Can their principles help us understand what learning is? Are they already able to imitate our brains, or do they still have a long way to go?
While the current advances in computer science are fascinating, their limits are evident. Conventional deep learning algorithms mimic only a small part of our brain’s functioning, the one that, I argue, corresponds to the first stages of sensory processing, the first two or three hundred milliseconds during which our brain operates in an unconscious manner. This type of processing is in no way superficial: in a fraction of a second, our brain can recognize a face or a word, put it in context, understand it, and even integrate it into a small sentence. . . . The limitation, however, is that the process remains strictly bottom-up, without any real capacity for reflection. Only in the subsequent stages, which are much slower, more conscious, and more reflective, does our brain manage to deploy all its abilities of reasoning, inference, and flexibility—features that today’s machines are still far from matching. Even the most advanced computer architectures fall short of any human infant’s ability to build abstract models of the world.
Even within their fields of expertise—for example, the rapid recognition of shapes—modern-day algorithms encounter a second problem: they are much less effective than our brain. The state of the art in machine learning involves running millions, even billions, of training attempts on computers. Indeed, machine learning has become virtually synonymous with big data: without massive data sets, algorithms have a hard time extracting abstract knowledge that generalizes to new situations. In other words, they do not make the best use of data.
In this contest, the infant brain wins hands down: babies do not need more than one or two repetitions to learn a new word. Their brain makes the most of extremely scarce data, a competence that still eludes today’s computers. Neuronal learning algorithms often come close to optimal computation: they manage to extract the true essence from the slightest observation. If computer scientists hope to achieve the same performance in machines, they will have to draw inspiration from the many learning tricks that evolution integrated into our brain: attention, for example, which allows us to select and amplify relevant information; or sleep, an algorithm by which our brain synthesizes what it learned on previous days. New machines with these properties are beginning to emerge, and their performance is constantly improving—they will undoubtedly compete with our brains in the near future.
According to an emerging theory, the reason that our brain is still superior to machines is that it acts as a statistician. By constantly attending to probabilities and uncertainties, it optimizes its ability to learn. During its evolution, our brain seems to have acquired sophisticated algorithms that constantly keep track of the uncertainty associated with what it has learned—and such a systematic attention to probabilities is, in a precise mathematical sense, the optimal way to make the most of each piece of information.4
Recent experimental data support this hypothesis. Even babies understand probabilities: from birth, they seem to be deeply embedded in their brain circuits. Children act like little budding scientists: their brains teem with hypotheses, which resemble scientific theories that their experiences put to the test. Reasoning with probabilities, in a largely unconscious manner, is deeply inscribed in the logic of our learning. It allows any of us to gradually reject false hypotheses and retain only the theories that make sense of the data. And, unlike other animal species, humans seem to use this sense of probabilities to acquire scientific theories from the outside world. Only Homo sapiens manages to systematically generate abstract symbolic thoughts and to update their plausibility in the face of new observations.
Innovative computer algorithms are beginning to incorporate this new vision of learning. They are called “Bayesian,” after the Reverend Thomas Bayes (1702–61), who outlined the rudiments of this theory as early as the eighteenth century. My hunch is that Bayesian algorithms will revolutionize machine learning—indeed, we will see that they are already able to extract abstract information with an efficiency close to that of a human scientist.
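To make the idea concrete, here is a minimal sketch of Bayesian updating with invented numbers: two hypotheses about a coin, reweighted after each observed flip according to Bayes' rule, P(H | data) ∝ P(data | H) × P(H).

```python
# A minimal, hypothetical illustration of Bayesian belief updating.
# Two hypotheses about a coin: "fair" (P(heads)=0.5) or "biased"
# toward heads (P(heads)=0.8). Each flip reweights the hypotheses.

hypotheses = {"fair": 0.5, "biased": 0.8}   # P(heads) under each hypothesis
priors = {"fair": 0.5, "biased": 0.5}       # start undecided

def update(beliefs, heads):
    """One step of Bayes' rule after observing a single flip."""
    likelihoods = {h: (p if heads else 1 - p) for h, p in hypotheses.items()}
    unnormalized = {h: likelihoods[h] * beliefs[h] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: w / total for h, w in unnormalized.items()}

beliefs = priors
for flip in [True, True, True, False, True, True]:   # mostly heads
    beliefs = update(beliefs, flip)

print(beliefs)   # the "biased" hypothesis now outweighs "fair"
```

Notice that the losing hypothesis is never deleted outright: its probability merely shrinks, so a surprising run of tails could still revive it. That graded, revisable bookkeeping is exactly what the chapter means by the brain keeping track of uncertainty.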
Our journey into the contemporary science of learning is a three-part trip.
In the first part, entitled “What Is Learning?”, we start by defining what it means for humans or animals—or indeed any algorithm or machine—to learn something. The idea is simple: to learn is to progressively form, in silicon and neural circuits alike, an internal model of the outside world. When I walk around a new town, I form a mental map of its layout—a miniature model of its streets and passageways. Likewise, a child who is learning to ride a bike is shaping, in her neural circuits, an unconscious simulation of how the actions on the pedals and handlebars affect the bike’s stability. Similarly, a computer algorithm learning to recognize faces is acquiring template models of the various possible shapes of eyes, noses, mouths, and their combinations.
But how do we set up the proper mental model? As we shall see, the learner’s mind can be likened to a giant machine with millions of tunable parameters whose settings collectively define what is learned (for instance, where the streets are likely to be in our mental map of the neighborhood). In the brain, the parameters are synapses, the connections between neurons, which can vary in strength; in most present-day computers, they are the tunable weights or probabilities that specify the strength of each candidate hypothesis. Learning, in both brains and machines, thus requires searching for an optimal combination of parameters that, together, define the mental model in every detail. In this sense, learning is a massive search problem—and in order to understand how learning works in the human brain, it greatly helps to examine how learning algorithms operate in present-day computers.
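To see this search in its most stripped-down form, here is a deliberately tiny example with invented data: a model with a single tunable parameter w, predicting y = w · x, whose value is found by gradient descent. Real learners tune millions or billions of such parameters, but the logic is the same.

```python
# "Learning as parameter search" in miniature (illustrative data):
# tune one weight w so that predictions y = w * x fit the observations.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]   # (x, y) pairs; true w is 3

w = 0.0              # initial guess for the single parameter
learning_rate = 0.05

for _ in range(100):                     # repeated passes over the data
    for x, y in data:
        error = w * x - y                # mismatch between model and world
        w -= learning_rate * error * x   # nudge w to shrink squared error

print(round(w, 3))   # converges to ~3.0
```

Each update moves the parameter a small step in the direction that reduces the prediction error, which is the same error-correction principle that reappears later in the book as one of the four pillars of learning.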
By comparing the performance of computer algorithms with those of the brain, in silico versus in vivo, we will progressively get a sharper picture of what learning means at the brain level. To be sure, mathematicians and computer scientists haven’t managed to design learning algorithms as powerful as the human brain—yet. But they are beginning to home in on a theory of the optimal learning algorithm that any system should use if it aims for the greatest efficiency. According to this theory, the best learner operates as a scientist who makes rational use of probabilities and statistics. A new model emerges: that of the brain as a statistician, of cerebral circuits as computing with probabilities. This theory specifies a clear division of labor between nature and nurture: the genes first set up vast spaces of a priori hypotheses—and the environment then selects the hypotheses which best match the external world. The set of hypotheses is genetically specified; their selection is experience-dependent.
Does this theory correspond to how the brain works? And how is learning implemented in our biological circuits? What changes in our brains when we acquire a novel competence? In the second part, “How Our Brain Learns,” we will turn to psychology and neuroscience. I will focus on babies, who are genuine learning machines without rivals. Recent data show that infants are indeed the budding statisticians predicted by the theory. Their remarkable intuition in the fields of language, geometry, numbers, and statistics confirms that they are anything but a blank slate, a tabula rasa. From birth, children’s brain circuits are already organized and project hypotheses onto the outside world. But they also have a considerable margin of plasticity, which is reflected in the brain’s perpetual effervescence of synaptic changes. Within this statistical machine, nature and nurture, far from opposing each other, join forces. The result is a structured yet plastic system with an unmatched ability to repair itself in the face of brain injury and to recycle its brain circuits in order to acquire skills unanticipated by evolution, such as reading or mathematics.
In the third part, “The Four Pillars of Learning,” I detail some of the tricks that make our brain the most effective learning device known today. Four essential mechanisms, or “pillars,” massively modulate our ability to learn. The first is attention: a set of neural circuits that select, amplify, and propagate the signals that we view as relevant—multiplying their impact in our memory a hundred fold. My second pillar is active engagement: a passive organism learns almost nothing, because learning requires an active generation of hypotheses, with motivation and curiosity. The third pillar, and the flip side to active engagement, is error feedback: whenever we are surprised because the world violates our expectations, error signals spread throughout our brain. They correct our mental models, eliminate inappropriate hypotheses, and stabilize the most accurate ones. Finally, the fourth pillar is consolidation: over time, our brain compiles what it has acquired and transfers it into long-term memory, thus freeing neural resources for further learning. Repetition plays an essential role in this consolidation process. Even sleep, far from being a period of inactivity, is a privileged moment during which the brain revisits its past states, at a faster pace, and recodes the knowledge acquired during the day.
These four pillars are universal: babies, children, and adults of all ages continually deploy them whenever they exercise their ability to learn. This is why we should all learn to master them—it is how we can learn to learn. In the conclusion, I will come back to the practical consequences of these scientific advances. Changing our practices at school, at home, or at work is not necessarily as complicated as we think. Very simple ideas about play, curiosity, socialization, concentration, and sleep can augment what is already our brain’s greatest talent: learning.