Of all the places we travel throughout our lifetimes, the most extraordinary is certainly the land of childhood: a territory that, looked back on by the adult, becomes a simple, naive, colourful, dreamlike, playful and vulnerable space.
It’s odd. We were all once citizens of that country, yet it is hard to remember and reconstruct it without dusting off photos in which, from a distance, we see ourselves in the third person, as if that child were someone else and not us in a different time.
How did we think and conceive of the world before learning the words to describe it? And, while we are at it, how did we discover those words without a dictionary to define them? How is it possible that before three years of age, in a period of utter immaturity in terms of formal reasoning, we were able to discover the ins and outs of grammar and syntax?
Here we will sketch out that journey, from the day we entered the world to the point where our language and thought resemble what we employ today as adults. The trajectory makes use of diverse vehicles, methods and tools. It intermingles reconstructions of thought from our gazes, gestures and words, along with the minute inspection of the brain that makes us who we are.
We will see that, from the day we are born, we are already able to form abstract, sophisticated representations. Although it sounds far-fetched, babies have notions of mathematics, language, morality, and even scientific and social reasoning. These notions form a repertoire of innate intuitions that structure what they will learn–what we all learned–in social, educational and family spaces over the years of childhood.
We will also discover that cognitive development is not the mere acquisition of new abilities and knowledge. Quite the contrary, it often consists in undoing habits that impede children from demonstrating what they already know. On occasion, and despite it being a counterintuitive idea, the challenge facing children is not acquiring new concepts but rather learning to manage those they already possess.
I have observed that we, as adults, often draw babies poorly because we don’t realize that their body proportions are completely different from ours. Their arms, for example, are barely the size of their heads. Our difficulty in seeing them as they are serves as a morphological metaphor for understanding what is most difficult to sense in the cognitive sphere: babies are not miniature adults.
In general, for simplicity and convenience, we speak of children in the third person, which erroneously assumes a distance, as if we were talking about something that is not us. Since this book’s intention is to travel to the innermost recesses of our brain, this first excursion, to the child we once were, will be in the first person in order to delve into how we thought, felt and represented the world in those days we can no longer recall, simply because that part of our experience has been relegated to oblivion.
In the late seventeenth century, an Irish philosopher, William Molyneux, suggested the following mental experiment to his friend John Locke:
Suppose a man born blind, and now adult, and taught by his touch to distinguish between a cube and a sphere […] Suppose then the cube and the sphere placed on a table, and the blind man made to see: query, Whether by his sight, before he touched them, he could now distinguish and tell which is the globe, which the cube?
Could he? In the years that I have been asking this question I’ve found that the vast majority of people believe that the answer is no. That the virgin visual experience needs to be linked to what is already known through touch. Which is to say, that a person would need to feel and see a sphere at the same time in order to discover that the gentle, smooth curve perceived by the fingertips corresponds to the image of the sphere.
Others, the minority, believe that the previous tactile experience creates a visual mould. And that, as a result, the blind man would be able to distinguish the sphere from the cube as soon as he could see.
John Locke, like most people, thought that a blind man would have to learn how to see. Only by seeing and touching an object at the same time would he discover that those sensations are related, requiring a translation exercise in which each sensory mode is a different language, and abstract thought is some sort of dictionary that links the tactile words with the visualized words.
For Locke and his empiricist followers, the brain of a newborn is a blank page, a tabula rasa ready to be written on. As such, experience goes about sculpting and transforming it, and concepts are born only when they acquire a name. Cognitive development begins on the surface with sensory experience, and, then, with the development of language, it acquires the nuances that explain the deeper and more sophisticated aspects of human thought: love, religion, morality, friendship and democracy.
Empiricism is based on a natural intuition. It is not surprising, then, that it has been so successful and that it dominated the philosophy of mind from the seventeenth century to the time of the great Swiss psychologist Jean Piaget. However, reality is not always intuitive: the brain of a newborn is not a tabula rasa. Quite the contrary. We already come into the world as conceptualizing machines.
This typical café-discussion reasoning comes up hard against reality in a simple experiment carried out by the psychologist Andrew Meltzoff, who tested a version of Molyneux’s question in order to refute the empiricist intuition. Instead of a sphere and a cube, he used two dummies: one smooth and rounded, the other bumpy, with nubs. The method is simple. In complete darkness, each baby sucked on one of the two dummies. Then the dummies were placed on a table and the light was turned on. The babies looked longer at the dummy they had had in their mouths, showing that they recognized it.
The experiment is very simple and destroys a myth that had persisted over more than three hundred years. It shows that a newborn with only tactile experience–contact with the mouth, since at that age tactile exploration is primarily oral as opposed to manual–of an object has already conceived a representation of how it looks. This contrasts with what parents typically perceive: that newborn babies’ gazes often seem to be lost in the distance and disconnected from reality. As we will see later, the mental life of children is actually much richer and more sophisticated than we can intuit based on their inability to communicate it.
Meltzoff’s experiment gives–against all intuition–an affirmative response to Molyneux’s question: newborn babies can recognize by sight two objects that they have only touched. Does the same thing happen with a blind adult who begins to see? The answer to this question only recently became possible once surgeries were able to reverse the thick cataracts that cause congenital blindness.
The first actual materialization of Molyneux’s mental experiment was done by the Italian ophthalmologist Alberto Valvo. John Locke’s prophecy was correct; for a congenitally blind person, gaining sight was nothing like the dream they had longed for. This was what one of the patients said after the surgery that allowed him to see:
I had the feeling that I had started a new life, but there were moments when I felt depressed and disheartened, when I realized how difficult it was to understand the visual world. […] In fact, I see groups of lights and shadows around me […] like a mosaic of shifting sensations whose meaning I don’t understand. […] At night, I like the darkness. I had to die as a blind person in order to be reborn as a seeing person.
This patient felt so challenged by suddenly gaining sight because, while the surgery had ‘opened’ his eyes, he still had to learn to see. It took a long and tiring effort to bring the new visual experience together with the conceptual world he had built through hearing and touch. Meltzoff proved that the human brain has the ability to establish spontaneous correspondences between sensory modalities. And Valvo showed that this ability atrophies when it goes unused over the course of a life of blindness.
Conversely, when we do experience different sensory modalities, some correspondences between them consolidate spontaneously over time. To prove this, my friend and colleague Edward Hubbard, along with Vilayanur Ramachandran, created the two shapes that we see here. One is Kiki and the other is Bouba. The question is: which is which?
Almost everyone answers that the one on the left is Bouba and the one on the right is Kiki. It seems obvious, as if it couldn’t be any other way. Yet there is something strange in that correspondence; it’s like saying someone looks like a Carlos. The explanation is that when we pronounce the vowels /o/ and /u/, our lips form a wide circle, which corresponds to the roundness of Bouba. And when saying the /k/ or the /i/, the back part of the tongue rises and touches the palate in a very angular configuration. So the pointy shape naturally corresponds with the name Kiki.
These bridges often have a cultural basis, forged by language. For example, most of the world thinks that the past is behind us and the future is forward. But that is arbitrary. For example, the Aymara, a people from the Andean region of South America, conceive of the association between time and space differently. In Aymara, the word ‘nayra’ means past but also means in front, in view. And the word ‘quipa’, which means future, also indicates behind. Which is to say that in the Aymaran language the past is ahead and the future behind. We know that this reflects their way of thinking, because they also express that relationship with their bodies. The Aymara extend their arms backwards to refer to the future and forwards to allude to the past. While on the face of it this may seem strange, when they explain it, it seems so reasonable that we feel tempted to change our own way of envisioning it; they say that the past is the only thing we know–what our eyes see–and therefore it is in front of us. The future is the unknown–what our eyes do not know–and thus it is at our backs. The Aymara walk backwards through their timeline. Thus, the uncertain, unknown future is behind and gradually comes into view as it becomes the past.
We designed an atypical experiment, with the linguist Marco Trevisan and the musician Bruno Mesz, in order to find out whether there is a natural correspondence between music and taste. The experiment brought together musicians, chefs and neuroscientists. The musicians were asked to improvise on the piano, based on the four canonical flavours: sweet, salty, sour and bitter. Of course, coming from different musical schools and styles (jazz, rock, classical, etc.) each one of them had their own distinctive interpretation. But within that wide variety we found that each taste inspired consistent patterns: the bitter corresponded with deep, continuous tones; the salty with notes that were far apart (staccato); the sour with very high-pitched, dissonant melodies; and the sweet with consonant, slow and gentle music. In this way we were able to salt ‘Isn’t She Lovely’ by Stevie Wonder and to make a sour version of The White Album by the Beatles.
Our representation of time is arbitrary and fickle. The phrase ‘Christmas is fast approaching’ is strange. Approaching from where? Does it come from the south, the north, the west? Actually, Christmas isn’t located anywhere; it is in time. This phrase, or the analogous ‘we’re getting close to the end of the year’, reveals something of how our minds organize thought: we do it with our bodies. That is why we talk of the head of government, of someone’s right-hand man, of the armpit of the world and of many other metaphors* that reflect how we organize thought on a template defined by our own bodies. And because of that, when we think of others’ actions, we do so by acting them out ourselves, speaking others’ words in our own voice, yawning someone else’s yawn and laughing someone else’s laugh. You can do a simple experiment at home to test this mechanism. During a conversation, cross your arms. It’s very likely that the person you are speaking to will do the same. You can take it further with bolder gestures, like touching your head, scratching yourself, or stretching. The probability that the other person will imitate you is high.
This mechanism depends on a cerebral system made up of mirror neurons. Each of these neurons codifies a specific gesture, like moving an arm or opening a hand, but it does so whether the action is our own or someone else’s. Just as the brain has a mechanism that spontaneously amalgamates information from different sensory modes, the mirror system allows–also spontaneously–our own actions and others’ actions to be brought together. Lifting your arm and watching someone else do it are very different processes: one is done by you and the other is not, so one is a motor process and the other a visual one. However, from a conceptual standpoint, they are quite similar. They both correspond to the same gesture in the abstract world.
Now that we have seen how we adults merge sensory modalities–in music, in shapes and sounds, and in language–and how we bring together perception and action, let us go back to the infant mind, specifically to ask whether the mirror system is learned or innate. Can newborns understand that their own actions correspond to the actions they observe in another person? Meltzoff also tested this, to put an end to the empiricist idea that considers the brain a tabula rasa.
Meltzoff proposed another experiment, in which he made three different types of face at a baby: sticking out his tongue, opening his mouth, and pursing his lips as if he were about to give the child a kiss. He observed that the baby tended to repeat each of his gestures. The imitation wasn’t exact or synchronized; the mirror is not a perfect one. But, on average, it was much more likely that the baby would replicate the gesture he or she observed than make one of the other two. Which is to say that newborns are capable of associating observed actions with their own, although the imitation is not as precise as it will later become when language is introduced.
Meltzoff’s two discoveries–the associations between our actions and those of others, and between varying sensory modalities–were published in 1977 and 1979. By 1980, the empiricist dogma was almost completely dismantled. In order to deal it a final death blow, there was one last mystery to be solved: Piaget’s mistake.*
One of the loveliest experiments done by the renowned Swiss psychologist Jean Piaget is the one called A-not-B. The first part goes like this: there are two napkins on a table, one on each side. A ten-month-old baby is shown an object, then it is covered with the first napkin (called ‘A’). The baby finds it without difficulty or hesitation.
Behind this seemingly simple task is a cognitive feat known as object permanence: in order to find the object, the baby’s reasoning must go beyond what is on the surface of the senses. The object did not disappear; it is merely hidden. To comprehend this, a baby must have a view of the world in which things do not cease to exist when we no longer see them. That, of course, is abstract.*
The second part of the experiment begins in exactly the same way. The same ten-month-old baby is shown an object, which is then covered up by napkin ‘A’. But then, before the baby does anything, the person running the experiment moves the object to underneath the other napkin (called ‘B’), making sure that the baby sees the switch. And here is where it gets weird: the baby lifts the napkin where the object was first hidden, as if they had not observed the switch just made in plain sight.
This error is ubiquitous. It happens in every culture, almost unfailingly, in babies about ten months of age. The experiment is striking and precise, and shows fundamental traits of our way of thinking. But Piaget’s conclusion, that babies of this age still do not fully understand the abstract idea of object permanence, is erroneous.
When revisiting the experiment decades later, the more plausible–and much more interesting–interpretation is that babies know the object has moved but cannot use that information. They have, as happens in a state of drunkenness, very shaky control of their actions. More precisely, ten-month-old babies have not yet developed a system of inhibitory control, which is to say, the ability to prevent themselves from doing something they had already planned to do. In fact, this example turns out to be the rule. We will see in the next section how certain aspects of thought that seem sophisticated and elaborate–morality or mathematics, for example–are already sketched out from the day we are born. On the other hand, others that seem much more rudimentary, like halting a planned action, mature gradually and steadily. To understand how we came to know this, we need to take a closer look at the executive system, the brain’s ‘control tower’, formed by an extensive neural network distributed across the prefrontal cortex that matures slowly during childhood.
The network in the frontal cortex that organizes the executive system defines us as social beings. Let’s give a small example. When we grab a hot plate, the natural reflex would be to drop it immediately. But an adult, generally, will inhibit that reflex while quickly evaluating if there is a nearby place to set it down and avoid breaking the plate.
The executive system governs, controls and administers all these processes. It establishes plans, resolves conflicts, manages our attention focus, and inhibits some reflexes and habits. Therefore the ability to govern our actions depends on the reliability of the executive function system.* If it does not work properly, we drop the hot plate, burp at the table, and gamble away all our money at the roulette wheel.
The frontal cortex is very immature in the early months of life and it develops slowly, much more so than other brain regions. Because of this, babies can only express very rudimentary versions of the executive functions.
A psychologist and neuroscientist, Adele Diamond, carried out an exhaustive and meticulous study on physiological, neurochemical and executive function development during the first year of life. She found that there is a precise relationship between some aspects of the development of the frontal cortex and babies’ ability to perform Piaget’s A-not-B task.
What impedes a baby’s ability to solve this apparently simple problem? Is it that babies cannot remember the different positions the object could be hidden in? Is it that they do not understand that the object has changed place? Or is it, as Piaget suggested, that the babies do not even fully understand that the object hasn’t ceased to exist when it is hidden under a napkin? By manipulating all the variables in Piaget’s experiment–the number of times that babies repeat the same action, the length of time they remember the position of the object, and the way they express their knowledge–Diamond was able to demonstrate that the key factor impeding the solution of this task is babies’ inability to inhibit the response they have already prepared. And with this, she laid the foundations of a paradigm shift: children don’t always need to learn new concepts; sometimes they just need to learn how to express the ones they already know.
So we know that ten-month-old babies cannot resist the temptation to extend their arms where they were planning to, even when they understand that the object they wish to reach has changed location. We also know that this has to do with a quite specific immaturity of the frontal cortex in the circuits and molecules that govern inhibitory control. But how do we know if babies indeed understand that the object is hidden in a new place?
The key is in their gaze. While babies extend their arms towards the wrong place, they stare at the right place. Their gazes and their hands point to different locations. Their gaze shows that they know where it is; their hand movement shows that they cannot inhibit the mistaken reflex. They are–we are–two-headed monsters. In this case, as in so many others, the difference between children and adults is not what they know but rather how they are able to act on the basis of that knowledge.
In fact, the most effective way of figuring out what children are thinking is usually by observing their gaze.* Going with the premise that babies look more at something that surprises them, a series of games can be set up in order to discover what they can distinguish and what they cannot, and this can give answers as to their mental representations. For example, that was how it was discovered that babies, a day after being born, already have a notion of numerosity, something that previously seemed impossible to determine.
The experiment works like this. A baby is shown a series of images. Three ducks, three red squares, three blue circles, three triangles, three sticks… The only regularity in this sequence is an abstract, sophisticated element: they are all sets of three. Later the baby is shown two images. One has three flowers and the other four. Which do the newborns look at more? The gaze is variable, of course, but they consistently look longer at the one with four flowers. And it is not that they are looking at the image because it has more things in it. If they were shown a sequence of groups of four objects, they would later look longer at one that had a group of three. It seems they grow bored of always seeing the same number of objects and are surprised to discover an image that breaks the rule.
Liz Spelke and Véronique Izard proved that the notion of numerosity persists even when the quantities are expressed in different sensory modalities. Newborns that hear a series of three beeps expect there then to be three objects and are surprised when that is not the case. Which is to say, babies assume a correspondence of amounts between the auditory experience and the visual one, and if that abstract rule is not followed through, their gaze is more persistent. These newborns have only been out of the womb for a matter of hours yet already have the foundations of mathematics in their mental apparatus.
Cognitive faculties do not develop homogeneously. Some, like the ability to form concepts, are innate. Others, like the executive functions, are barely sketched in the first months of life. The most clear and concise example of this is the development of the attentional network. Attention, in cognitive neuroscience, refers to a mechanism that allows us to selectively focus on one particular aspect of information and ignore other concurrent elements.
We all sometimes–or often–struggle with attention. For example, when we are talking to someone and there is another interesting conversation going on nearby.* Out of courtesy, we want to remain focused on our interlocutor, but our hearing, gaze and thoughts generally direct themselves the other way. Here we recognize two ingredients that lead and orient attention: one endogenous, which happens from inside, through our own desire to concentrate on something, and the other exogenous, which happens due to an external stimulus. Driving a car, for example, is another situation of tension between those systems, since we want to be focused on the road but alongside it there are tempting advertisements, bright lights, beautiful landscapes–all elements that, as admen know well, set off the mechanisms of exogenous attention.
Michael Posner, one of the founding fathers of cognitive neuroscience, separated the mechanisms of attention* and found that they were made up of four elements:
(1) Endogenous orientation.
(2) Exogenous orientation.
(3) The ability to maintain attention.
(4) The ability to disengage it.
He also discovered that each of these processes involves different cerebral systems, which extend throughout the frontal, parietal and anterior cingulate cortices. In addition, he found that each one of these pieces of the attentional machinery develops at its own pace and not in unison.
For example, the system that allows us to orient our attention towards a new element matures much earlier than the system that allows us to disengage it. Therefore, voluntarily shifting our attention away from something is much more difficult than we imagine. Knowing this can be of enormous help when dealing with a child; a clear example is how to stop a small child’s inconsolable crying. A trick that some parents hit upon spontaneously, and that emerges naturally once one understands attentional development, is not to ask their offspring to just cut it out, but rather to offer another option that attracts their attention. Then, almost by magic, the inconsolable crying stops on the spot. In most cases, the baby wasn’t sad or in pain; the crying was, by then, pure inertia. That this happens the same way for children all around the world is not magic or coincidence. It reflects how we are–how we were–in that developmental period: able to draw our attention towards an exogenous stimulus, and unable to voluntarily disengage it.
Separating out the elements that comprise thought allows for a much more fluid relationship between people. No parent would ask a six-month-old to run, and they certainly wouldn’t be frustrated when it didn’t happen. In much the same way, familiarity with attentional development can avoid a parent asking a small child to do the impossible; for example, to just quit crying.
In addition to being wired for concept formation, a newborn’s brain is also predisposed for language. That may sound odd. Is it predisposed for French, Japanese or Russian? In fact, the brain is predisposed for all languages because they all have–in the vast realm of sounds–many things in common. This was the linguist Noam Chomsky’s revolutionary idea.
All languages have similar structural properties. They are organized in an auditory hierarchy of phonemes that are grouped into words, which in turn are linked to form sentences. And these sentences are organized syntactically, with a property of recursion that gives the language its wide versatility and effectiveness. On this empirical premise, Chomsky proposed that language acquisition in infancy is limited and guided by the constitutional organization of the human brain. This is another argument against the notion of the tabula rasa: the brain has a very precise architecture that, among other things, makes it ideal for language. Chomsky’s argument has another advantage, since it explains why children can learn language so easily despite its being filled with very sophisticated and almost always implicit grammatical rules.
This idea has now been validated by many demonstrations. One of the most intriguing was presented by Jacques Mehler, who had French babies younger than five days old listen to a succession of various phrases spoken by different people, both male and female. The only thing common to all the phrases was that they were in Dutch. Every once in a while the phrases abruptly changed to Japanese. He was trying to see if that change would surprise a baby, which would show that babies are able to codify and recognize a language.
In this case, the way to measure their surprise wasn’t the persistence of their gaze but the intensity with which they sucked on their dummies. Mehler found that when the language changed, the babies sucked harder–like Maggie Simpson–indicating that they perceived that something relevant or different was occurring. The key is that this did not happen when he repeated the same experiment with the sound of all the phrases reversed, like a record played backwards. That means that the babies didn’t have the ability to recognize categories from just any sort of sound but rather they were specifically tuned to process languages.
We usually think that innate is the opposite of learned. Another way of looking at it is thinking of the innate as actually something learned in the slow cooker of human evolutionary history. Following this line of reasoning, since the human brain is already predisposed for language at birth, we should expect to find precursors of language in our evolutionary cousins.
This is precisely what Mehler’s group proved by showing that monkeys also have auditory sensibilities attuned to language. Just like babies, tamarin monkeys reacted with the same surprise every time the language they were hearing in the experiment changed. As with babies, this was specific to language, and did not happen when phrases were played backwards.
This was a spectacular revelation, not to mention a gift for the media… ‘Monkeys Speak Japanese’ is a prime example of how to destroy an important scientific finding with a lousy headline. What this experiment proves is that languages are built upon a sensitivity of the primate brain to certain combinations of sounds. This in turn may explain in part why most of us learn to understand spoken language so easily at a very young age.
Our brains are prepared and predisposed for language from the day we are born. But this predisposition does not seem to materialize without social experience, without using language with other people. This conclusion comes from studies of feral children who grew up without any human contact. One of the most emblematic cases is that of Kaspar Hauser, magnificently portrayed in the eponymous film directed by Werner Herzog. Kaspar Hauser’s story of confinement for the duration of his childhood* shows that it is very difficult to acquire language when it has not been practised early in life. The ability to speak a language is, to a large extent, learned in a community; a child who grows up in complete isolation from others is largely impaired in the ability to learn one. Herzog’s film is, in many ways, a portrait of that tragedy.
The brain’s predisposition for a universal language becomes fine-tuned by contact with others, acquiring new knowledge (grammatical rules, words, phonemes) or unlearning distinctions that are irrelevant to one’s mother tongue.
The specialization of language happens first with phonemes. For example, Spanish has five vowel sounds, while French, depending on the dialect, has up to seventeen (including four nasal vowels). Non-French speakers often do not perceive the difference between some of these sounds. For instance, native Spanish speakers typically do not distinguish the French words cou (pronounced [ku]) and cul (pronounced [ky]), which may lead to some anatomical misunderstanding, since cou means neck and cul means bum. The vowels that a Spanish speaker hears as [u] in both cases sound completely different to a French speaker–as different as an ‘e’ and an ‘a’ do to Spanish speakers. But the most interesting part is that all the children of the world, French or not, can recognize these differences during the first few months of life. At that point in our development we are able to detect differences that would be impossible for us to detect as adults.
Indeed, a baby has a universal brain, able to distinguish phonological contrasts in every language. Over time, each brain develops its own phonological categories and boundaries, which depend on the specific use of its language. In order to understand that an ‘a’ pronounced by different people, in varying contexts, at different distances, with head colds and without, corresponds to the same ‘a’, one has to establish a category of sounds. Doing this means, unfailingly, losing resolution. These boundaries for identifying phonemes in the space of sounds are established between six and nine months of life. And they depend, of course, on the language we hear during development. That is the age when our brain stops being universal.
After the early stage in which phonemes are established, it is time for words. Here there is a paradox that, on the face of it, seems hard to resolve. How can babies know which are the words in a language? The problem is not only how to learn the meaning of the thousands of words that make it up. When someone hears a phrase in German for the first time, not only do they not know what each word means but they can’t even distinguish them in the sound continuum of the phrase. That is due to the fact that in spoken language there are no pauses that are equal to the space between written words. Thatmeansthatlisteningtosomeonespeakisliketryingtoreadthis.* And if babies don’t know which are the words of a language, how can they recognize them in that big tangle?
One solution is talking to babies–as we do when speaking Motherese–slowly and with exaggerated enunciation. In Motherese there are pauses between words, which facilitates the baby’s heroic task of dividing a sentence into the words that make it up.
But this doesn’t explain how eight-month-olds already begin to form a vast repertoire of words, many of which they don’t even know how to define. To do this, the brain uses a principle similar to the one many sophisticated computers employ to detect patterns, known as statistical learning. The recipe is simple: track the frequency of transitions between syllables. Since the word hello is used frequently, every time the syllable ‘hel’ is heard, there is a high probability that it will be followed by the syllable ‘lo’. Of course, these are just probabilities, since sometimes the word will be helmet or hellraiser, but a child discovers, through an intense calculation of these transitions, that the syllable ‘hel’ has a relatively small number of frequent successors. And so, by forming bridges between the most frequent transitions, the child can amalgamate syllables and discover words. This way of learning, obviously not a conscious one, is similar to the way smartphones complete words with the ending they find most probable; as we know, they don’t always get it right.
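To make the idea concrete, here is a minimal sketch of this kind of statistical learning, written in Python. The syllable-split utterances, the threshold and the toy ‘baby talk’ corpus are all invented for illustration; the point is only to show how tracking transition frequencies between syllables can reveal word boundaries without a dictionary.

```python
from collections import defaultdict

# Toy "speech stream": utterances already split into syllables, with no
# word boundaries marked. Syllables and words are invented for illustration.
utterances = [
    ["hel", "lo", "ba", "by", "hel", "lo", "mum", "my"],
    ["pret", "ty", "ba", "by", "hel", "lo", "pret", "ty", "mum", "my"],
    ["hel", "lo", "pret", "ty", "ba", "by"],
]

# Count how often each syllable is followed by each other syllable.
pair_counts = defaultdict(lambda: defaultdict(int))
first_counts = defaultdict(int)
for utt in utterances:
    for a, b in zip(utt, utt[1:]):
        pair_counts[a][b] += 1
        first_counts[a] += 1

def transition_prob(a, b):
    """P(next syllable is b | current syllable is a)."""
    if first_counts[a] == 0:
        return 0.0
    return pair_counts[a][b] / first_counts[a]

def segment(stream, threshold=0.5):
    """Cut the stream wherever the transition probability dips below the
    threshold -- low-probability transitions tend to be word boundaries."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if transition_prob(a, b) < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

print(segment(["hel", "lo", "ba", "by", "mum", "my"]))
# -> ['hello', 'baby', 'mummy']
```

Within a word (‘hel’ to ‘lo’) the transition is frequent and therefore probable; across a word boundary (‘lo’ to ‘ba’) almost anything can follow, so the probability dips, and the dip marks the cut. Infants, of course, do nothing like running a script; the sketch only mirrors the statistical principle.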
This is how children learn words. It is not a lexical process, as if filling in a dictionary in which each word is associated with its meaning or an image. Rather, the first approach to words is rhythmic, musical, prosodic. Only later are they tinged with meaning. Marina Nespor, an extraordinary linguist, suggests that one of the difficulties of studying a second language in adulthood is that we no longer use that process. When adults learn a language, they usually do so deliberately and by using their conscious apparatus; they try to acquire words as if memorizing them from a dictionary rather than through the musicality of the language. Nespor maintains that if we were to imitate the natural mechanism–first consolidating the music of the words and the regularities of the language’s intonation–our learning process would be much simpler and more effective.
One of the most passionately debated examples of the collision between biological and cultural predispositions is bilingualism. On one hand, there is a very common intuitive assumption: ‘Poor child, just learning to talk is difficult, the kid’s gonna get all mixed up having to learn two languages.’ On the other hand, there is the perception that bilingualism implies a certain cognitive virtuosity.
Bilingualism actually offers a concrete example of how some social norms are established without the slightest rational reflection. Society usually considers monolingualism to be the norm, so the performance of bilinguals is perceived as a deficit or an increment in relation to it. But that is merely a convention: bilingual children have an advantage in the executive functions, yet this is never framed as a deficit in monolinguals’ potential development. Curiously, the monolingual norm is not defined by its popularity; in fact, most children in the world grow up exposed to more than one language. This is especially true in countries with large immigrant populations, where languages can be combined at home in all sorts of ways. As a boy, Bernardo Houssay (later awarded the Nobel Prize in Physiology or Medicine) lived in Buenos Aires, Argentina (where the official language is Spanish) with his Italian grandparents. His parents spoke little of their parents’ language, and he and his brothers spoke none. So he believed that people, as they aged, turned into Italians.
Cognitive neuroscientific research has conclusively shown that, contrary to popular belief, the most important landmarks in language acquisition–the moment of comprehending the first words, the development of sentences, among others–are very similar in monolinguals and bilinguals. One of the few differences is that, during infancy, monolinguals have a bigger vocabulary. However, this effect disappears–and even reverses–when the words a bilingual can use in both languages are counted.
A second popular myth is that languages shouldn’t be mixed and that each person should always speak to a child in the same language. That is not the case. Some studies of bilingualism are conducted with parents who each speak one language exclusively to their children, which is typical in border regions such as where Slovenia meets Italy. In other studies, in bilingual regions such as Quebec or Catalonia, both parents speak both languages. The developmental landmarks in these two situations are identical. And the reason babies don’t get confused by one person speaking two languages is that, in order to produce the phonemes of each language, speakers give gestural indications–in the way they move their mouths and faces–of which language they are speaking. Let’s say that one makes a French or an Italian facial expression. These are easy clues for a baby to recognize.
On the other hand, another large group of evidence indicates that bilinguals have a better and faster development of the executive functions; more specifically, in their ability to inhibit and control their attention. Since these faculties are critical in a child’s educational and social development, the advantage of bilingualism now seems quite obvious.
In Catalonia, children grow up in a sociolinguistic context in which Spanish and Catalan are often used in the same conversation. As a consequence, Catalan children develop skills to shift rapidly from one language to the other. Will this social learning process extend to task-switching beyond the domain of language?
To answer this question, César Ávila and his colleagues compared the brain activity of monolinguals and Catalan bilinguals while they switched between non-linguistic tasks. Participants saw a sequence of objects flashing rapidly in the centre of a screen. For a number of trials they were asked to respond with one button if the object was red and with another if it was blue. Then, suddenly, they were asked to forget about colour and respond with the same buttons according to the shape of the object (right button for a square, left button for a circle).
As simple as this sounds, when the task instructions switch from colour to shape, most people respond more slowly and make more errors. This effect is much smaller in Catalan bilinguals. Ávila also found that the brain networks used by monolinguals and bilinguals to solve this task are very different. It is not that bilinguals simply increase slightly the amount of activity in one region; the brain solves the problem in an altogether different manner.
To switch between tasks, monolinguals use brain regions of the executive system such as the anterior cingulate and some regions in the frontal cortex. Bilinguals instead engage brain regions of the language network, the same regions they engage to switch between Spanish and Catalan in a fluid conversation.
This means that in task-switching, even if the tasks are nonlinguistic (in this case switching between colour and shape), bilinguals engage brain networks for language. Which is to say, bilinguals can recycle those brain structures that are highly specialized for language in monolinguals, and use them for cognitive control beyond the domain of language.
Speaking more than one language also changes the brain’s anatomy. Bilinguals have a greater density of white matter–bundles of neuronal projections–in the anterior cingulate than monolinguals do. And this effect doesn’t pertain only to those who learned more than one language during childhood. It is a characteristic that has been seen also in those who became bilingual later in life, and as such it might be particularly useful in old age, because the integrity of the connections is a decisive element in cognitive reserve. This explains why bilinguals, even when we factor in age, socioeconomic level and other relevant factors, are less prone to developing senile dementias.
To sum up, the study of bilingualism allows us to topple two myths: language development doesn’t slow down in bilingual children, and the same person can mix languages with no problem. What’s more, the effects of bilingualism may go above and beyond the domain of language, helping develop cognitive control. Bilingualism helps children to be captains of their own thought, pilots of their existence. This ability is decisive in their social inclusion, health and future. So perhaps we should promote bilingualism. Amidst so many less effective and more costly methods of stimulating cognitive development, this is a much simpler, beautiful and enduring way to do so.
Children, from a very young age, have a sophisticated mechanism for seeking out and building knowledge. We were all scientists in our childhood,* and not only out of a desire to explore, to break things apart to see how they work–or used to work–or to pester adults with an infinite number of questions beginning ‘Why?’ We were also little scientists because of the method we employed to discover the universe.
Science has the virtue of being able to construct theories based on scant, ambiguous data. From the paltry remnants of light from some dead stars, cosmologists were able to build an effective theory on the origin of the universe. Scientific procedure is especially effective when we know the precise experiment to discriminate between different theories. And kids are naturally gifted at this job.
A game with buttons (push buttons, keys or switches) and functions (lights, noise, movement) is like a small universe. As they play, children make interventions that allow them to reveal mysteries and discover the causal rules of that universe. Playing is discovering. In fact, the intensity of a child’s game depends on how much uncertainty the child has with regard to the rules that govern it. And when children don’t know how a simple machine works, they usually spontaneously play in the way that is most effective to discover its functioning mechanism. This is very similar to a precise aspect of the scientific method: investigation and methodical exploration in order to discover and clarify causal relationships in the universe.
But children’s natural exploration of science goes even further: they construct theories and models according to the most plausible explanation for the data they observe.
There are many examples of this, but the most elegant begins in 1988 with an experiment by Andrew Meltzoff–again–which produced the following scene. An actor enters a room and sits in front of a box with a large plastic button, pushes the button with their head and, as if the box were a slot machine paying out, there is a fanfare with colourful lights and sounds. Afterwards, a one-year-old baby who has been observing the scene is seated, on their mother’s lap, in front of the same machine. And then, spontaneously, the young child leans forward and presses the button with their head.
Did they simply imitate the actor, or had the one-year-old discovered a causal relationship between the button and the lights? Deciding between these two possibilities required a new experiment, proposed by the Hungarian psychologist György Gergely fourteen years later. Meltzoff thought that the babies were imitating the actor when they pressed the button with their head. Gergely had another, much bolder and more interesting idea: babies understand that the adult is intelligent and, because of that, if the adult didn’t push the button with a hand–which would be more natural–it must have been because pushing it with the head was strictly necessary.
This bold theory suggests that the reasoning of babies turns out to be much more sophisticated, and includes a theory of how things and people work. But how can one detect such reasoning in a child that doesn’t yet talk? Gergely solved it in a simple, elegant way. Imagine an analogous situation in everyday life. A person is walking with many bags and opens a door handle with an elbow. We all understand that door handles are not meant to be opened with your elbow and the person did that because there was no other option. What would happen if we replicated this idea in Meltzoff’s experiment? The same actor arrives, loaded down with bags, and pushes the button with their head. If the babies are simply imitating the actor, they would do the same. But if, on the other hand, they are capable of thinking logically, they will understand that the actor pushed it with their head because their hands were full and, therefore, all the babies needed to do to get the colourful lights and sounds was to push the button, with any part of their body.
They carried out the experiment. The baby observed the actor, laden with shopping bags, pushing the button with their head. Then the child sat on their mother’s lap and pushed the button with their hands. It was the same baby who, upon seeing the actor do the same thing but with their hands free, had pushed the button with their head.
One-year-olds construct theories on how things work based on what they observe. And among those observations is that of perceiving other people’s perspectives, working out how much they know, what they can and cannot do. In other words, exploring science.
We began this chapter with the arguments of the empiricists, according to which all logical and abstract reasoning occurs after the acquisition of language. Nevertheless, we saw that even newborns form abstract and sophisticated concepts, that they have notions of mathematics, and that they display some understanding of language. At just a few months old, they already exhibit sophisticated logical reasoning. Now we will see that young children who do not yet speak have also forged moral notions, perhaps one of the fundamental pillars of human social interaction.
Infants’ ideas of good and bad, fairness, property, theft and punishment–which are already quite well established–cannot be fluently expressed because their control tower (the circuits of the prefrontal cortex) is immature. Hence, as occurs with numerical and linguistic concepts, the richness of infants’ moral notions is masked by their inability to express them.
One of the simplest and most striking scientific experiments to demonstrate babies’ moral judgements was done by Karen Wynn in a wooden puppet theatre with three characters: a triangle, a square and a circle. In the experiment, the triangle goes up a hill. Every once in a while it backs up only to later continue to ascend. This gives a vivid impression that the triangle has an intention (climbing to the very top) and is struggling to achieve it. Of course, the triangle doesn’t have real desires or intentions, but we spontaneously assign it beliefs and create narrative explanations of what we observe.
A square shows up in the middle of this scene and bumps into the triangle on purpose, sending it down the hill. Seen with the eyes of an adult, the square is clearly despicable. As the scene is replayed, the circumstances change. While the triangle is going up, a circle appears and pushes it upwards. To us the circle becomes noble, helpful and gentlemanly.
This conception of good circles and bad squares needs a narrative–which comes automatically and inevitably to adults–that, on the one hand, assigns intentions to each object and, on the other, morally judges each entity based on those intentions.
As humans, we assign intentions not only to other people but also to plants (‘sunflowers seek out the sun’), abstract social constructions (‘history will absolve me’ or ‘the market punishes investors’), theological entities (‘God willing’) and machines (‘damn washing machine’). This ability to theorize, to turn data into stories, is the seed of all fiction. That is why we can cry in front of a television set–it is strange to cry because something happens to some tiny pixels on a screen–or destroy blocks on an iPad as if we were in a trench on the Western Front during the First World War.
In Wynn’s puppet show there are only triangles, circles and squares, but we see them as someone struggling, a bad guy who hinders progress, and a do-gooder who helps. Which is to say that, as adults, we have an automatic tendency to assign moral values. Do six-month-olds have that same abstract thought process? Would babies be able spontaneously to form moral conjectures? We can’t know by asking because they don’t yet talk, but we can infer this narrative by observing their preferences. The constant secret of science consists, precisely, in finding a way of bridging what we want to know–in this case, whether babies form moral concepts–with what we can measure (which objects the babies choose).
After watching one object helping the triangle climb the hill and another bumping it down, infants were encouraged to reach for one of them. Twenty-six of twenty-eight chose the helper (including twelve out of twelve six-month-olds). Then the video recordings of the infants watching the helper and hinderer scenes were shown to an experimenter who, relying on the infants’ facial gestures and expressions alone, could predict almost perfectly whether each infant had just seen the helper or the hinderer.
Six-month-old infants, before crawling, walking or talking, when they are barely discovering how to sit up and eat with a spoon, are already able to infer intentions, desires, kindness and evil, as can be deduced from examining their choices and gestures.
The construction of morality is, of course, much more sophisticated. We cannot judge a person to be good or bad just by knowing they did something helpful. For example, helping a thief is usually considered ignoble. Would the babies prefer someone who helps a thief to someone who thwarts one? We are now in the murky waters that are the origins of morality and law. But even in this sea of confusion, babies between nine months and a year of age already have an established opinion.
The experiment that proves it goes like this. Babies see a hand puppet trying to lift the top off a box in order to pull out a toy. Then a helpful puppet shows up and helps it open the lid and get the toy. But in another scene an anti-social puppet jumps maliciously on to the box, slamming it shut and keeping the first puppet from getting at the toy. When choosing between the two puppets, the babies prefer the helper. But here Wynn was going for something much more interesting: identifying what the babies think about stealing from an evildoer, long before they know those words.
To do this she designed a third act for the puppet theatre, and the helper puppet now loses a ball. In some cases, in this garden of forking paths, a new character appears on the scene and returns the ball. At other times, another character comes in, steals it and runs away. The babies prefer the character that returns the ball.
But the most interesting and mysterious part happens when these scenes feature the antisocial puppet that jumped maliciously on the box. In this case, the babies change their preference. They sympathize with the one who steals the ball and runs away. For nine-month-olds, the one who gives the bad guy his comeuppance is more lovable than the one who helps him, at least in that world of puppets, boxes and balls.*
Preverbal babies, still unable to coordinate their hands in order to grab an object, do something much more sophisticated than judging others by their actions. They take into account the contexts and the history, which turns out to give them a pretty sophisticated notion of justice. That’s how incredibly disproportionate cognitive faculties are during the early development of a human being.
We adults are not unbiased when we judge others. Not only do we keep in mind their previous history and the context of their actions (which we should), but we also have very different opinions of the person committing the actions, or being the victim of them, if they look like us or not (which we shouldn’t).
Throughout all cultures, we tend to form more friendships and have more empathy with those who look like us. On the other hand, we usually judge more harshly and show more indifference to the suffering of those who are different. History is filled with instances in which human groups have massively supported–or, at best, looked on with indifference at–violence directed at individuals who were not like them.
This even manifests itself in formal justice proceedings. Some judges hand down sentences that display a racial bias, most probably without being aware that race is influencing their judgement. In the United States, African American males have been incarcerated at about six times the rate of white males. Is this difference in the incarceration rate a result (at least in part) of judges having different sentencing practices? This seemingly simple and direct question turns out to be hard to answer, because it is difficult to separate this psychological factor from possible racial differences in case characteristics. To overcome this problem, Sendhil Mullainathan, Professor of Economics at Harvard University, found an ingenious solution, exploiting the fact that in the United States cases are randomly assigned to judges. Hence, on average, the type of case and the nature of the defendants are the same for all judges. A racial difference in sentencing could potentially be explained by case characteristics or by a difference in the quality of the assigned attorneys (which is not random). But if this were all, then the difference should be the same for all judges. Instead, Mullainathan found a huge disparity–of almost 20 per cent–between judges in the racial gap in sentencing. While this may be the most convincing demonstration that race matters in the courtroom, the method is partly limited, since it cannot tell whether the variability between judges is due to some of them discriminating against African Americans, some discriminating against whites, or a mixture of both.
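To make the logic of that argument concrete, here is a small, entirely hypothetical simulation in Python; all the numbers are invented. It illustrates why random assignment matters: if only case characteristics drove the racial gap in sentencing, every judge would show roughly the same gap, whereas spread in the gap across judges points at the judges themselves.

```python
import random
import statistics

random.seed(0)

def simulate_gaps(judge_biases, cases_per_judge=5000):
    """Return each judge's black-white sentencing gap under random case assignment.

    'bias' is the extra sentence length a judge adds for black defendants
    (0 = unbiased). Case characteristics are drawn from the same distribution
    for every judge, which is what random assignment guarantees on average.
    All numbers here are invented for illustration.
    """
    gaps = []
    for bias in judge_biases:
        sentences = {"black": [], "white": []}
        for _ in range(cases_per_judge):
            race = random.choice(["black", "white"])
            # Case characteristics may differ by race (e.g. charge mix),
            # but they are distributed identically across judges.
            severity = random.gauss(10, 3) + (1.5 if race == "black" else 0.0)
            sentence = severity + (bias if race == "black" else 0.0)
            sentences[race].append(sentence)
        gaps.append(statistics.mean(sentences["black"]) -
                    statistics.mean(sentences["white"]))
    return gaps

# Unbiased judges: a racial gap exists (case characteristics differ by race),
# but it is essentially the same for every judge.
print([round(g, 2) for g in simulate_gaps([0.0, 0.0, 0.0, 0.0])])

# Judges with different biases: the gap now varies from judge to judge --
# the kind of between-judge disparity the analysis detects.
print([round(g, 2) for g in simulate_gaps([0.0, 0.5, 1.0, 2.0])])
```

Run it and the first list of gaps comes out nearly identical across judges, while the second varies widely; that variation is what a between-judge comparison can pick up even without knowing any individual judge’s bias.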
Physical appearance also affects whether someone is likely to be hired after a job interview. Since the early seventies, several studies have shown that attractive applicants are typically judged to have a more appropriate personality for a job, and to perform better, than their less attractive counterparts. Of course, this was not just a matter of opinion: applicants who were judged to be more attractive were also more likely to be hired. As we will see in Chapter 5, we all tend to construct retrospective explanations that serve to justify our choices. Hence the most likely sequence of events is this: first the interviewer decides to hire the applicant (based, among other things, on his or her looks) and only then generates, ad hoc, a long list of attributes (he or she was more capable, more suited to the job, more reliable …) that serve to justify a choice which in fact had nothing to do with these considerations.
The similarities that generate these predispositions can be based on physical appearance, but also on religious, cultural, ethnic, political or even sports-related questions. This last example, because it is presumed to be more harmless–although, as we know, even sporting differences can have dramatic consequences–is easier to assimilate and recognize. Someone forms part of a consortium, a club, a country, a continent. That person suffers and celebrates collectively with that consortium. Pleasure and pain are synchronized between thousands of people whose only similarity is belonging to a tribe (sharing a jersey, a neighbourhood or a history) that unites them. But there is something more: pleasure at the suffering of other tribes. Brazil celebrates Argentina’s defeats, and Argentina celebrates Brazil’s. A fan of Liverpool cheers for the goal scored against Manchester United. When rooting for our favourite sports teams, we often feel less inhibited about expressing Schadenfreude, our pleasure at the suffering of those unlike us.
What are the origins of this? One possibility is that it has ancestral evolutionary roots, that the drive to collectively defend what belonged to one’s tribe was advantageous at some point in human history and, as a result, adaptive. This is merely conjecture, but it has a precise, observable footprint that can be traced. If Schadenfreude is a constituent aspect of our brains (the product of a slow learning process within evolutionary history), it should manifest itself early in our lives, long before we establish our political, sports or religious affiliations. And that is exactly what happens.
Wynn performed an experiment to ask whether infants prefer those who help, or those who harm, individuals unlike themselves. This experiment was also carried out in a puppet theatre. Before entering the theatre, a baby between nine and fourteen months old, seated comfortably on their mother’s lap, chose between crackers and green beans. Apparently, food choices reveal tendencies and strong allegiances.
Then two puppets came in, one after the other, with a considerable amount of time between the two entrances. One puppet showed an affinity with the baby, saying it loved the food the child had chosen; the other preferred the food the child had rejected. The puppets then left and, just as before, there was another scene in which the puppet with similar tastes played with a ball, dropped it and had to deal with two different puppets: one who helped and another who stole the ball. The babies were then asked to pick up one of the two puppets, and they showed a clear preference for the helper. One who helps someone similar to us is good. But when the puppet who lost the ball was the one who had chosen the other food, the babies more often chose the ball robber. As with the thief, this is gastronomic Schadenfreude: the babies sympathized with the puppet that hassled the one with different taste preferences.
Moral predispositions leave robust, and sometimes unexpected, traces. The human tendency to divide the social world into groups, to prefer our own group and go against others, is inherited, in part, from predispositions that are expressed very early in life. One example that has been particularly well studied is language and accent. Young children look more at a person who has a similar accent and speaks their mother tongue (another reason to advocate bilingualism). Over time, this bias in our gazes disappears but it transforms into other manifestations. At two years old, children are more predisposed to accept toys from those who speak their native language. Later, at school age, this effect becomes more explicit in the friends they choose. As adults, we are already familiar with the cultural, emotional, social and political segregations that emerge simply based on speaking different languages in neighbouring regions. But this is not only an aspect of language. In general, throughout their development, children choose to relate to the same type of individuals they would have preferentially directed their gaze at in early childhood.
As happens with language, these predispositions develop, transform and reconfigure with experience. Of course, there is nothing within us that is exclusively innate; to a certain extent, everything takes shape on the basis of our cultural and social experience. This book’s premise is that revealing and understanding these predispositions can be a tool for changing them.
In Émile, or Concerning Education, Jean-Jacques Rousseau sketches out how an ideal citizen should be educated. The education of Émile would today be considered somewhat exotic. During his entire childhood there is no talk of morality, civic values, politics or religion. He never hears the arguments we parents of today so often go on about, like how we have to share or be considerate of others, among so many other rudimentary appeals to fairness. No. Émile’s education is far more similar to the one Mr Miyagi gives Daniel LaRusso in The Karate Kid: pure praxis and no words.
So, through experience, Émile learns the notion of property at twelve years old, at the height of his enthusiasm for his vegetable garden. One day he shows up with watering-pot in hand and finds his garden plot destroyed.
But oh, what a sight! What a misfortune! […] What has become of my labour, the sweet reward of all my care and toil? Who has robbed me of my own? Who has taken my beans away from me? The little heart swells with the bitterness of its first feeling of injustice.*
Émile’s tutor, who destroyed his garden on purpose, conspires with the gardener, asking him to take responsibility for the damage and supplying him with a reason to justify it. Thus the gardener accuses Émile of having ruined the melons that he had planted earlier in the same plot. Émile finds himself embroiled in a conflict between two legal principles: his conviction that the beans belong to him because he toiled to produce them, and the gardener’s prior right as legitimate owner of the land.
The tutor never explains these ideas to Émile, but Rousseau maintains that this is the best possible introduction to the concept of ownership and responsibility. As Émile meditates on this painful loss and discovers the consequences of his actions on others, he understands the need for mutual respect in order to avoid conflicts like the one he has just suffered. Only after having lived through this experience is he prepared to reflect on contracts and exchanges.
The story of Émile has a clear moral: do not saturate our children with words that have no meaning for them. First they have to learn what those words mean through concrete experience. Despite this being a recurrent intuition in human thought, repeated in various landmark texts of the history of philosophy and education,* today hardly anyone follows that recommendation. In fact, almost all of us parents recite an endless enumeration of principles that we then contradict with our actions: on the use of telephones, what we should eat, what we should share, how we should say thank you, sorry and please, not to insult others, and so on.
I have the impression that the entire human condition can be expressed with a piñata. If a Martian arrived and saw the highly complex situation that suddenly arises when the papier mâché breaks and the sweets rain down, it would understand all of our yearnings, vices, compulsions and repressions. Our euphoria and our melancholy. It would see the children scrambling to gather up the sweets until their hands can’t hold any more; the one who hits another to gain a time advantage over a limited resource; the father who lectures another kid into sharing their excessive haul; the overwhelmed youngster crying in a corner; the exchanges on the official market and the black market; and the societies of parents who form like micro-governments to avoid what Garrett Hardin called the tragedy of the commons.
Long before becoming great jurists, philosophers, or noted economists, children–including the children that Aristotle, Plato and Piaget once were–already had intuitions about property and ownership. In fact, children use the pronouns my and mine before using I or their own names. This language progression reflects an extraordinary fact: the idea of ownership precedes the idea of identity, not the other way around.
In early battles over property, the principles of law are also rehearsed. The youngest children claim ownership of something based on the argument of their own desires: ‘It’s mine because I want it.’* Later, around two years of age, they begin to argue with an acknowledgement of others’ rights to claim the same property for themselves. Understanding others’ ownership is a way of discovering that there are other individuals. The first arguments outlined by children are usually: ‘I had it first’; ‘They gave it to me.’ This intuition that the first person to touch something wins indefinite rights to its usage does not disappear in adulthood. Heated discussions over a parking spot, a seat on a bus, or the ownership of an island by the first country to plant its flag there are private and institutional examples of these heuristics. Perhaps because of that, it is unsurprising that large social conflicts, like those in the Middle East, are perpetuated by arguments very similar to those deployed in a dispute between two-year-olds: ‘I got here first’; ‘They gave it to me.’
On the local 5-a-side football pitch, the owner of the ball is, to a certain extent, also the owner of the game. Ownership gives them privileges such as deciding the teams and declaring when the game ends. These advantages can also be used to negotiate. The philosopher Gustavo Faigenbaum, in Entre Ríos, Argentina, and the psychologist Philippe Rochat, in Atlanta, in the USA, set out to understand this world: basically, how the concepts of owning and sharing are established in children, amid intuitions, praxis and rules. Thus they invented the sociology of the playground. Faigenbaum and Rochat, in their voyage to the land of childhood,* researched swapping, gifts and other transactions that took place in a primary school playground. Studying the exchange of little figurines, they found that even in the supposedly naïve world of the playground the economy becomes formalized. As children grow up, lending and the assignment of vague future values give way to more precise exchanges, the notion of money, usefulness and the prices of things.
As in the adult world, not all transactions in the country of childhood are licit. There are thefts, scams and betrayals. Rousseau’s conjecture is that the rules of citizenship are learned in discord. And it is the playground, which is more innocuous than real life, that becomes the breeding ground in which to play at the game of law.
The contrasting observations of Wynn and her colleagues suggest that very young children should already be able to sketch out moral reasoning. On the other hand, the work of Piaget, an heir to Rousseau’s tradition, indicates that moral reasoning only begins at around six or seven years old. Gustavo Faigenbaum and I set out to reconcile these great thinkers in the history of psychology. And, along the way, to understand how children become citizens.
We showed a group of children between four and eight years of age a video with three characters: one had chocolates, another asked to borrow them and a third stole them. Then we asked a series of questions to measure varying depths of moral comprehension: whether they preferred to be friends with the one who stole or the one who borrowed* (and why), and what the thief had to do to make things right with the victim. In this way we were able to investigate the notion of justice in playground transactions.
Our hypothesis was that the preference for the borrower over the thief, an implicit manifestation of moral preferences–as in Wynn’s experiments–should already be established even in the younger kids. And that, on the contrary, the justification of these choices and the understanding of what had to be done to compensate for the damage caused–as in Piaget’s experiments–should develop at a later stage. That is exactly what we found. Even among the four-year-olds, the children preferred to play with the borrower rather than with the thief. We also discovered that they preferred to play with someone who stole under extenuating circumstances rather than under aggravating ones.
But our most interesting finding was this: when we asked four-year-old children why they chose the borrower over the thief, or the one who robbed under extenuating circumstances over the one who did so under aggravating ones, they gave responses like ‘Because he’s blond’ or ‘Because I want her to be my friend.’ Their moral criteria seemed completely blind to causes and reasons.
Here we find again an idea which has appeared several times in this chapter. Children have very early (often innate) intuitions–what the developmental psychologists Liz Spelke and Susan Carey refer to as core knowledge. These intuitions are revealed in very specific experimental circumstances, in which children direct their gaze or are asked to choose between two alternatives. But core knowledge is not accessible on demand in most real-life situations where it may be needed. This is because at a younger age core knowledge cannot be accessed explicitly and represented in words or concrete symbols.
Specifically, in the domain of morality, our results show that children have from a very young age intuitions about ownership which allow them to understand whether a transaction is licit or not. They understand the notion of theft, and they even comprehend subtle considerations which extenuate or aggravate it. These intuitions serve as a scaffold to forge, later in development, a formal and explicit understanding of justice.
But every experiment comes with its own surprises, revealing unexpected aspects of reality. This one was no exception. Gustavo and I came up with the experiment to study the price of theft. Our intuition was that the children would respond that the chocolate thief should give back the two they stole plus a few more as compensation for the damages. But that didn’t happen. The vast majority of the children felt that the thief had to return exactly the two chocolates that had been stolen. What’s more, the older the kids, the higher the fraction of those who advocated an exact restitution. Our hypothesis was mistaken. Children are much more morally dignified than we had imagined. They understood that the thieves had done wrong, that they would have to make up for it by returning what they’d stolen along with an apology. But the moral cost of the theft could not be resolved in kind, with the stolen merchandise. In the children’s justice, there was no reparation that absolved the crime.
If we think about the children’s transactions as a toy model of international law, this result, in hindsight, is extraordinary. An implicit, though not always respected, norm of international conflict resolution is that there should be no escalation in reprisals. And the reason is simple: if someone steals two and, in order to settle the peace, the victim demands four, the exponential growth of reprisals would end up harming everyone. Children seem to understand that even in war there ought to be rules.
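A back-of-the-envelope illustration of that escalation (the doubling factor is purely an assumption for the example, not a figure from the experiment): if each reprisal doubles the previous demand, then starting from a theft of two the claim after $n$ rounds is

\[
d_0 = 2, \qquad d_{n+1} = 2\,d_n \;\Longrightarrow\; d_n = 2^{\,n+1},
\]

so two stolen chocolates become a demand for sixty-four after only five rounds–one way of seeing why exact restitution, rather than escalation, is what keeps the peace.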
Jacques Mehler is one of many Argentinian political and intellectual exiles. He studied with Noam Chomsky at the Massachusetts Institute of Technology (MIT) at the heart of the cognitive revolution. From there he went to Oxford and then to France, where he founded the extraordinary school of cognitive science in Paris. He was exiled not just as a person, but as a thinker. He was accused of being a reactionary for claiming that human thought had a biological foundation. It was the oft-mentioned divorce between the human sciences and the exact sciences, which in psychology was particularly marked. I like to think of this book as an ode to and an acknowledgement of Jacques’s career. A space of freedom earned by an effort that he began, swimming against the tide. An exercise in dialogue.
In the epic task of understanding human thought, the division between biology, psychology and neuroscience is a mere declaration of caste. Nature doesn’t care a fig for such artificial barriers between types of knowledge. As a result, throughout this chapter, I have interspersed biological arguments, such as the development of the frontal cortex, with cognitive arguments, such as the early development of moral notions. In other examples, like that of bilingualism and attention, we’ve delved into how those arguments combine.
Our brains today are practically identical to those of at least 60,000 years ago, when modern humans migrated from Africa and culture was completely different. This shows that individuals’ paths and potential for expression are forged within their social niches. One of the arguments of this book is that it is also virtually impossible to understand human behaviour without taking into consideration the traits of the organ that produces it: the brain. The way in which social knowledge and biological knowledge interact and complement each other depends, obviously, on each case and its circumstances. There are some cases in which biological constitution is decisive. And others that are determined primarily by culture and the social fabric. It is not very different from what happens with the rest of the body. Physiologists and coaches know that physical fitness can change enormously over the course of our lives while, on the other hand, our running speed, for example, does not have such a wide range of variation.
The biological and the cultural are always intrinsically related. And not in a linear manner. In fact, a completely unfounded intuition is that biology precedes behaviour, that there is an innate biological predisposition that can later follow, through the effect of culture, different trajectories. That is not true; the social fabric affects the very biology of the brain. This is clear in a dramatic example observed in the brains of two three-year-old children. One is raised with affection in a normal environment while the other lacks emotional, educational and social stability. The brain of the latter is not only abnormally small but its ventricles, the cavities through which cerebrospinal fluid flows, have an abnormal size as well.
So different social experiences result in completely distinct brains. A caress, a word, an image–every life experience leaves a trace in the brain. These traces modify the brain and, with it, one’s way of responding to things, one’s predisposition to relating to someone, one’s desires, wishes and dreams. In other words, the social context changes the brain, and this in turn defines who we are as social beings.
A second unfounded intuition is to think that because something is biological it is unchangeable. Again, this is simply not true. For instance, the predisposition to music depends on the biological constitution of the auditory cortex. This is a causal relation between an organ and a cultural expression. However, this connection does not imply developmental determinism. The auditory cortex is not static: anyone can change it simply by practising and exercising.
Thus the social and the biological are intrinsically related in a network of networks. This categorical division is not a property of nature, but rather of our obtuse way of understanding it.