Our choices define us. We choose to take risks or live conservatively, to lie when it seems convenient or to make the truth a priority, no matter what the cost. We choose to save up for a distant future or live in the moment. The vast sum of our actions makes up the outline of our identities. As José Saramago put it in his novel All the Names: ‘We don’t actually make decisions, the decisions make us.’ Or, in a more contemporary version, when Albus Dumbledore lectures Harry Potter: ‘It is our choices, Harry, that show what we truly are, far more than our abilities.’
Almost all decisions are mundane, because the overwhelming majority of our lives are spent in the day-to-day. Deciding whether we’ll visit a friend after work, whether to take the bus or the Underground; choosing between chips and a salad. Imperceptibly, we compare the universe of possible options on a mental scale, and after thinking it over we finally choose (chips, of course). When choosing between these alternatives, we activate the brain circuits that make up our mental decision-making machine.
Our decisions are almost always made based on incomplete information and imprecise data. When a parent chooses what school to send their child to, or an economy minister decides to change tax policy, or a football player opts to shoot at goal instead of passing to a teammate in the penalty area–on each and every one of these occasions it is only possible to sketch an approximate idea of the eventual consequences of our decisions. Making decisions is a bit like predicting the future, and as such is inevitably imprecise. Eppur si muove–and yet it moves. The machine works. That is what’s most extraordinary.
On 14 November 1940, some 500 Luftwaffe planes flew, almost unchallenged, to Britain and bombed the industrial city of Coventry for seven hours. Many years after the war had ended, Captain Frederick William Winterbotham revealed that Winston Churchill* could have avoided the bombing and the destruction of the city if he had decided to use a secret weapon discovered by the young British mathematician Alan Turing.
Turing had achieved a scientific feat that gave the Allies a strategic advantage that could decide the outcome of the Second World War. He had created an algorithm capable of deciphering Enigma, the sophisticated mechanical system made of circular pieces–like a combination lock–that allowed the Nazis to encode their military messages. Winterbotham explained that, with Enigma decoded, the secret service men had received the coordinates for the bombing of Coventry with enough warning to take preventive measures. Then, in the hours leading up to the bombing, Churchill had to decide between two options: one emotional and immediate–avoiding the horror of a civilian massacre–and the other rational and calculated–sacrificing Coventry, not revealing their discovery to the Nazis, and holding on to that card in order to use it in the future. Churchill decided, at a cost of 500 civilian lives, to keep Britain’s strategic advantage over his German enemies a secret.
Turing’s algorithm evaluated all of the configurations in unison–each one corresponding to a possible code–and updated each configuration’s probability according to its capacity to predict a series of likely messages. This procedure continued until the likelihood of one of the configurations reached a sufficiently high level. The discovery, in addition to precipitating the Allied victory, opened up a new window for science. Half a century after the war’s end it was discovered that the algorithm that Turing had come up with to decode Enigma was the same one that the human brain uses to make decisions. The great English mathematician, who was one of the founders of computation and artificial intelligence, created–in the urgency of wartime–the first, and still the most effective, model for understanding what happens in our brains when we make a decision.
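In modern terms, what Turing built is a sequential Bayesian update. Here is a minimal sketch in Python, assuming a generic likelihood function; the function names and the toy coin example are mine, purely illustrative of the spirit of the procedure, not Turing’s actual wartime machinery:

```python
def decode(configurations, likelihood, observations, threshold=0.99):
    """Evaluate all candidate configurations 'in unison': each observation
    multiplies every configuration's probability by how well that
    configuration predicts it, until one candidate is probable enough."""
    probs = {c: 1.0 / len(configurations) for c in configurations}  # flat prior
    leader = max(probs, key=probs.get)
    for obs in observations:
        probs = {c: p * likelihood(c, obs) for c, p in probs.items()}
        total = sum(probs.values())
        probs = {c: p / total for c, p in probs.items()}  # renormalize
        leader = max(probs, key=probs.get)
        if probs[leader] >= threshold:
            return leader, probs[leader]  # enough evidence: decide
    return leader, probs[leader]          # data ran out: return the best guess

# Toy usage: which of two biased coins produced this stream of flips?
data = [1, 1, 0, 1, 1, 1, 0, 1] * 5
bias = {"A": 0.8, "B": 0.5}
print(decode(["A", "B"], lambda c, x: bias[c] if x == 1 else 1 - bias[c], data))
```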
As in the procedure sketched out by Turing, the cerebral mechanism for making decisions is built on an extremely simple principle: the brain elaborates a landscape of options and starts a winner-take-all race between them.
The brain converts the information it has gathered from the senses into votes for one option or the other. The votes pile up in the form of ionic currents accumulated in a neuron until they reach a threshold where the brain deems there is sufficient evidence. These circuits that coordinate decision-making in the brain were discovered by a group of researchers headed by William Newsome and Michael Shadlen. Their challenge was to design an experiment simple enough to be able to isolate each element of the decision and, at the same time, sophisticated enough to represent decision-making in real life.
This is how the experiment works: a cloud of dots moves on a screen. Many of the dots move in a chaotic, disorganized way. Others move coherently, in a single direction. A player (an adult, a child, a monkey and, sometimes, a computer) decides which way that cloud of dots is moving. It is the electronic version of a sailor lifting a finger to decide, in the midst of choppy waters, which way the wind is blowing. Naturally, the game becomes easier when more dots are moving in the same direction.
Monkeys played this game thousands of times, while the researchers recorded their neuronal activity as reflected by the electrical currents produced in their brains. After studying this exercise for many years, and in many variations, they revealed the three principles of Turing’s algorithm for decision-making, sketched in code after the list:
(1) A group of neurons in the visual cortex receives information from the retina. These neurons’ currents reflect the quantity and direction of movement at each moment, but do not accumulate a history of these observations.
(2) The sensory neurons are connected to other neurons in the parietal cortex, which amass this information over time. The neuronal circuits of the parietal cortex thus codify how the predisposition towards each possible action evolves during the course of the decision.
(3) As information favouring one option accumulates, the parietal neurons that codify this option increase their electrical activity. When the activity reaches a certain threshold, a circuit of neurons in structures deep in the brain–known as the basal ganglia–sets off the corresponding action and restarts the process to make way for the next decision.
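These three principles lend themselves to a toy simulation, in the spirit of the race and drift-diffusion models used to describe such experiments. The threshold and noise values below are invented for illustration, not fitted to any data:

```python
import random

def decide(coherence, threshold=30.0, noise=1.0):
    """Race between two accumulators. At each instant, momentary sensory
    evidence ('votes' for rightward motion, set by the dot coherence)
    plus noise is added; the first accumulator to cross the threshold
    triggers the choice, and the accumulators reset for the next trial."""
    right = left = 0.0
    t = 0
    while right < threshold and left < threshold:
        right += coherence + random.gauss(0.0, noise)  # evidence + noise
        left += random.gauss(0.0, noise)               # noise alone
        t += 1
    return ("right" if right >= threshold else "left"), t
```

When the coherence is low, the ramp of accumulation is shallow, so crossing the same threshold takes longer: the slow flame described below.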
The best way to prove that the brain decides through a race in the parietal cortex is to show that a monkey’s response can be conditioned by injecting a current into the neurons that codify evidence in favour of a certain option. Shadlen and Newsome did exactly that experiment. While a monkey was watching a cloud of dots that moved completely at random, they used an electrode to inject an electrical current into the parietal neurons that codify movement to the right. And, despite the sensory evidence being perfectly tied between the two directions, the monkeys always responded that the dots were moving to the right. It is like emulating electoral fraud, manually stuffing certain votes into the ballot box.
Additionally, this series of experiments allowed for the identification of three fundamental traits of the decision-making process. What relationship is there between the clarity of the evidence and the time we take to make a decision? How are options biased by prejudices or prior knowledge? When is there enough evidence in favour of one option to call the race? The answers to these three questions are interrelated. The more incomplete the information is, the slower the accumulation of evidence will be. In the moving-dot experiment, when almost all the dots move at random, the ramp of activation in the neurons in the parietal cortex that amass the evidence is not very steep. And if the threshold of evidence needed remains the same, it will take more time to cross it; which is to say, to reach the same degree of reliability. The decision cooks over a slow flame, but eventually it will reach the same temperature.
And how is the threshold established? Or, to put it another way, how does the brain determine when enough is enough? This depends on a calculation that the brain makes in a stunningly precise way, by pondering the cost of making a mistake and the time available for the decision-making.
The brain determines that threshold in order to optimize the gains from a decision. To do so it combines neuronal circuits that codify:
(1) The value of the action.
(2) The cost of time invested.
(3) The quality of the sensory information.
(4) An endogenous urgency to respond, something that we recognize as anxiety or impatience to decide.
If, in the random-dots game, mistakes are punished severely, the players (humans or monkeys) raise the threshold, taking more time to decide and accumulating more evidence. If, on the other hand, mistakes don’t count, then the players lower that same threshold, once again adopting the best strategy, which here is to respond as quickly as possible. The most notable aspect of this adaptive adjustment is that in most cases it is not conscious, and often far closer to optimal than we would imagine.
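Reusing the decide function from the sketch above, we can make this trade-off explicit: raising the threshold buys reliability at the price of time. The payoff structure is only hinted at here, and the numbers are illustrative:

```python
def accuracy_and_speed(coherence, threshold, trials=2000):
    """Estimate accuracy and mean decision time for a given threshold."""
    correct = total_t = 0
    for _ in range(trials):
        choice, t = decide(coherence, threshold)
        correct += (choice == "right")   # 'right' is the true direction here
        total_t += t
    return correct / trials, total_t / trials

# Severe punishment for errors -> raise the threshold: slower, more accurate.
# Errors don't count -> lower the threshold: fast and more error-prone.
for th in (10.0, 30.0, 90.0):
    acc, rt = accuracy_and_speed(coherence=0.2, threshold=th)
    print(f"threshold {th:5.1f}: accuracy {acc:.2f}, mean time {rt:.0f} steps")
```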
Consider, for example, a driver stopping at a traffic light. The driver’s brain is making a great number of estimations: the probability that the light may turn amber or red, the distance to the crossing, the speed of the car, the effectiveness of the brakes, the traffic, etc. Not only this: the driver’s brain is also pondering the urgency of the trip, the consequences of an accident… In the vast majority of cases (except when something goes wrong and the brain’s monitoring system takes control) these considerations are not explicit. We are not aware of all these calculations. Yet our brains do make this sophisticated calculus, which results in a decision about when and how hard to hit the brake pedal. This specific example reveals a general principle: decision-makers know much more than they believe they do.
In contrast, in some conscious deliberations (which are the only ones we remember at the end of the day) the brain often sets a very inefficient threshold for reaching a decision. We have all mulled over some matters far longer than they deserved. For example, most of us recall deliberating ad infinitum in a restaurant between two choices even though, deep down, we knew we would greatly enjoy either option.
Even though in the laboratory we study simple decisions, what we are ultimately more interested in revealing is how the brain makes everyday decisions: the driver who decides whether or not to jump an amber light; the judge who condemns or exonerates a defendant; the voter who casts a ballot for one candidate or another; the shopper who takes advantage of or falls victim to a special deal. The conjecture is that all of these decisions, despite belonging to different realms and having their own idiosyncrasies, are the result of the same decision-making mechanism.
One of the main principles of this procedure, which is at the heart of Turing’s design, consists in how one realizes when it is time to stop gathering evidence. The problem is reflected in the paradox described by a medieval philosopher, Jean Buridan: a donkey hesitates endlessly between two identical piles of hay and, as a result, ends up dying of hunger. In fact, the paradox presents a problem for Turing’s pure model. If the number of votes in favour of each alternative is identical, the cerebral race is stuck in a tie. The brain has a way of avoiding the tie: when it considers that sufficient time has passed, it invents neuronal activity that it randomly distributes among the circuits that codify each option. Since this current is random, one of the options ends up having more votes and, as such, wins the race. It’s as if the brain tossed a coin and let fate break the tie. How much time is reasonable for making a decision depends on internal states of the brain–for example, if we are more or less anxious–and on external factors that affect how the brain counts the time.
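In the toy race sketched earlier, this coin toss comes for free: with zero coherent evidence, the randomly injected noise alone eventually pushes one accumulator over the threshold.

```python
# Buridan's donkey, resolved: no coherent evidence, yet a choice still emerges,
# because the randomly injected activity eventually breaks the tie.
choice, t = decide(coherence=0.0)
print(choice, t)   # 'left' or 'right' with equal probability; t is large
```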
One of the ways that the brain estimates time is simply by counting pulses: steps, heartbeats, breaths, the swinging of a pendulum or music’s tempo. For example, when we exercise, we mentally estimate a minute faster than when we are at rest, because each heartbeat–and therefore each pulse of our inner clock–is quicker. The same happens with tempo in music. The clock accelerates with the rhythm and, thus, time passes more rapidly. Do these changes in our internal clock make us decide more quickly and lower our decision threshold?
Indeed, music has much more direct consequences for our decisions than we recognize. We drive, shop and walk differently depending on the music we are listening to at the time. As the musical tempo rises, our decision-making threshold lowers and, as a result, risk increases in almost every decision. Drivers change lanes more frequently, run more amber lights, overtake more often and exceed the speed limit more as the tempo of the music they are listening to increases. Musical tempo also dictates the amount of time we are willing to wait patiently in a waiting room or the number of products we tend to buy in a supermarket. Many supermarket managers know that the piped-in music is a key to sales and use it to their advantage, with no need to be familiar with Turing’s work. That’s how predictable our decision-making machine is, yet we are almost completely unaware of its workings.
Another key factor that affects the decision-making machine is determining where the race begins. When there is a bias towards one of the alternatives, the neurons that accumulate information in its favour start with an initial electrical charge, which is similar to giving them a head start in the race. In some cases, biases can have a fundamental influence; for example, in the decision to donate organs.
Demographic studies of organ donation group different countries into two classes: those in which almost all the inhabitants agree to donate organs, and those in which almost no one does. It doesn’t take a master statistician to see that what’s striking is the absence of intermediate classes. The reason turns out to be extremely simple: what ends up determining whether a person chooses to donate organs is the wording on the form. In the countries where the form says: ‘If you wish to donate organs, sign here’, almost no one does. On the other hand, in countries where it says: ‘If you do NOT wish to donate organs, sign here’, almost everyone donates. The explanation for both phenomena comes from an almost universal trait that has nothing to do with religion or with life and death, but rather with the simple fact that almost no one fills out the form.
When we are offered a wide variety of options, they don’t all start running from the same point; those that are given by default begin with an advantage. If, in addition, the problem is one that is hard to resolve, meaning that evidence in favour of any of the options is scarce, the one that started out with the advantage wins. This is a very clear example of how governments can guarantee freedom of choice but, at the same time, bias–and, in practice, dictate–what we decide. But this also reveals a characteristic of human beings, be they Dutch, Mexican, Catholic, Protestant or Muslim: our decision-making mechanism collapses when faced with difficult situations. Then we merely accept what we are offered, by default.
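In the same toy model, a default is simply an option whose accumulator starts the race with an initial charge. A minimal sketch, with invented values, of how a head start decides a hard choice in which neither option receives any net evidence:

```python
import random

def decide_with_default(head_start=20.0, threshold=30.0, noise=1.0):
    """A hard choice: neither option receives net evidence, only noise,
    but the default option begins the race with an initial charge."""
    default, other = head_start, 0.0
    while default < threshold and other < threshold:
        default += random.gauss(0.0, noise)
        other += random.gauss(0.0, noise)
    return "default" if default >= threshold else "other"

wins = sum(decide_with_default() == "default" for _ in range(1000))
print(wins / 1000)   # well above 0.5: the pre-ticked box usually wins
```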
Until now we’ve talked about decision-making processes as if they were all of one class, governed by the same principles and carried out in the brain by similar circuits. However, we all perceive that the decisions we make belong to at least two qualitatively distinct types; some are rational and we can put forward the arguments behind them. Others are hunches, inexplicable decisions that feel as if they are dictated by our bodies. But are there really two different ways of deciding? Is it better to choose something based on our intuitions, or to carefully and rationally deliberate each decision?
In general we associate rationality with science, while the nature of our emotions seems mysterious, esoteric and essentially inexplicable. We will topple this myth with a simple experiment.
Two neuroscientists, Lionel Naccache and Stanislas Dehaene–my mentor in Paris–did an experiment in which they flashed numbers on screens so fleetingly that the participants believed they’d seen nothing. This type of presentation, which doesn’t activate consciousness, is called subliminal. They then asked the participants to say whether the number was higher or lower than five and, much to their own surprise, the participants answered correctly in most cases. The person making the decision perceives it as a hunch, but from the experimenter’s perspective it is clear that the decision was induced unconsciously by a mechanism very similar to that of conscious decision-making.
Which is to say that, in the brain, hunches aren’t so different from rational decisions. But the previous example doesn’t capture all the richness of the physiology of unconscious decisions. In this case, popular expressions such as ‘trust your heart’ or ‘go with your gut feelings’ turn out to be quite accurate and shed light on how intuitions are forged.
All it takes to understand this is putting a pencil between your teeth, lengthwise. Inevitably, your lips will rise in an imitation of a smile. This is obviously a mechanical effect, not a reflection of an emotion. But that doesn’t matter: it still gives a certain sense of wellbeing. The mere gesture of the smile is enough. A film scene will seem more entertaining to us if we watch it with a pencil held in our mouth that way than if we hold it between our lips, as if scowling. So deciding whether something is fun or boring originates not only in an evaluation of the external world but also in visceral reactions produced in our internal worlds. Crying, sweating, trembling, an increasing heart rate or secreting adrenaline are not merely reactions by which the body communicates an emotion. The brain also reads and identifies these bodily variables in order to encode and produce feelings and emotions.
That corporeal states can affect our decision-making process is a physiological and scientific demonstration of what we perceive as a hunch. When making a decision unconsciously, the cerebral cortex evaluates different alternatives and, in doing so, estimates the possible risks and benefits of each option. The result of this computation is expressed in corporeal states through which the brain can recognize risk, danger or pleasure. The body becomes a reflection and a resonating chamber of the external world.
The key experiment showing how decisions are based on hunches was done with two decks of cards.
As in so many board games, this experiment employs ingredients from real-life decision-making: winnings, losses, uncertainty and risk. The game is simple but unpredictable. On each turn, the player merely chooses which deck to pick a card from. The number on the card chosen indicates the coins that the player wins (or loses, if it is negative). Since the cards are face down, the player has to evaluate, over the course of the entire experiment, which of the two decks is more profitable.
This is like someone in a casino who has to choose between two one-armed bandits just by observing how many times and how much each one pays out over a period of time. But, unlike in the casino, this game thought up by a neurobiologist, Antonio Damasio, is not purely random: there is one deck that on average pays out more than the other. If this rule is discovered, then the next step is simple: always choose the deck that pays more. Lo and behold, an infallible system.
The difficulty lies in the fact that the player has to discover this rule through pondering a long history of payouts amid large fluctuations. After much practice, almost everyone discovers the rule, is able to explain it and, naturally, starts to choose cards from the correct deck every time. But the real finding happens along the way to this discovery, among intuitions and hunches. Even before being able to articulate the rule, the players start to play well and more frequently choose cards from the correct deck. In this phase, despite playing much better than when they were choosing randomly, the players cannot explain why they opt for the correct deck (the one that pays out more in the long term). Sometimes they don’t even know they are choosing one deck more than the other. But unequivocal signs show up in their bodies. In this part of the experiment, when players are about to choose the incorrect deck, their skin conductance increases, indicating a rise in sweating, which is in turn a reflection of an emotional state. Which is to say that the players cannot explain that one of the decks gives better results than the other, but their bodies already know it.
My colleague María Julia Leone, a neuroscientist and international chess master, and I carried out this experiment on the chessboard, following the Borgesian concept of chess as a metaphor for life. Two masters face off. They have thirty minutes to make a series of decisions that will organize their armies. On the board, it is a battle to the death and emotions are running high. During the game we trace the players’ heartbeats. Heart rate–just like stress–increases over the course of the game, as time runs out and the end of the battle approaches. Their heart rates also spike when their opponent commits an error that will decide the outcome of the game.
But the most significant discovery we made was this: a few seconds before the players made a mistake, their heart rate changed. This means that in a situation with countless options, with a complexity that is similar to that of life itself, the heart panics before making a bad decision. If the players could recognize that, if they were able to listen to what their hearts are telling them, they could perhaps avoid many of their errors.
This is possible because the body and the brain hold the keys to decision-making long before we are consciously aware of those elements; the emotions expressed in our bodies function as an alarm to alert us to possible risks and mistakes. This destroys the idea that intuition belongs to the realm of magic or soothsaying. There is no conflict between hunches and science; in fact, quite the opposite: intuition functions hand in hand with reason and deliberation, fully in the realm of science.
Once we have discovered that hunches and intuitions are unconscious deliberations we can proceed to a question of more practical relevance. When should we trust our hunches and intuitions and when not? For those questions that matter most to us, should we trust our hunches or our rational deliberations?
The answer is conclusive: it depends. A social psychologist, Ap Dijksterhuis, found, in an experiment that is still generating controversy, that the complexity of a decision is what dictates when it’s best to deliberate consciously or act intuitively. Dijksterhuis found that to be the rule both in ‘mock’ decisions in the lab and in real-life decisions.
In the laboratory, he constructed a game in which participants had to evaluate two options–for example, two cars–and choose which was preferable in terms of utility. Sometimes, the two alternatives only differed in price. In that case, the decision was simple: the cheaper one was better. Then the problem became progressively more complex, when the two cars varied not only in price, but in petrol consumption, safety, comfort, risk of theft, engine capability and pollution levels.
Dijksterhuis’s most surprising discovery was that when there are many elements in play, hunches are more effective than deliberation. The same pattern appears in decisions in the real world. This was observed in an experiment whereby people who had just bought toothpaste–undoubtedly one of life’s easier choices–were asked how they had made their decision. A month later, those who had pondered their decisions were more satisfied than those who hadn’t. On the other hand, they observed the opposite result when interviewing people who had just bought furniture (a complex decision, with many more variables such as price, size, quality, aesthetic appeal). Just like in the lab, those who thought less chose better.
The methodologies of these experiments are quite different, but the conclusion is the same. When we make a decision by carefully thinking over a small number of elements, we choose better if we take our time. Yet when the problem is complex, in general we make better choices by following our intuition than if we stew over it.
The conscious mind is fairly limited in size and can hold little information. Our unconscious, however, is vast. This explains why, when making decisions with few variables in play–price, quality and size of a product, for example–we are best served by thinking it over before acting. In situations where we can mentally evaluate all the elements at the same time, the rational decision is more effective, and therefore better. It also explains why–when there are many more variables in play than our conscious mind can juggle at once–our unconscious, rapid, intuitive decisions are more effective, even when based on approximate calculations.
Perhaps the most important and complex decisions that we make are social and emotional. It may seem strange, almost absurd, to decide whom to fall in love with in a deliberate way, by some arithmetic evaluation of arguments for and against that person we feel so drawn to. That’s just not how it works. We fall in love for reasons that are generally mysterious and can only be determined sketchily after some time has passed.
At pheromone parties, each participant sniffs the clothing that’s been worn for a few days by other guests. Based on the odour print that attracts them, they decide whom to approach at the party. Choosing this way seems natural because we associate our sense of smell with intuition, like when we say that ‘something smells fishy.’ And because we all recognize how evocative the intimate and indescribable scent of our lover’s sheets is. But, at the same time, it’s weird because, obviously, our sense of smell isn’t the most precise of our senses. So it seems fairly likely that someone could be sorely disappointed by the partner their sniffing leads them to, and run off cursing their ridiculous nose.
Claus Wedekind, a Swiss biologist, made a phenomenal experiment out of this game. He had a group of men wear the same T-shirt for several days, with no deodorants or perfumes. Then a series of women smelled the shirts and articulated how pleasurable they found each scent–and, of course, he also did the reverse, having the men sniff the women’s well-worn T-shirts. Wedekind wasn’t just fishing with this experiment to see what he would find: he had based it on a hypothesis constructed from observing the behaviour of rodents and other species. He was exploring the premise that as far as scent, taste and unconscious preferences were concerned, we are very similar to our inner ‘beasts.’
Each individual has a different immune repertoire, which explains, in part, why, when exposed to the same virus, some of us get sick and others don’t. We can think of each immune system as a shield. If two shields are placed one on top of the other protecting the same space, they become redundant. However, two shields covering different, contiguous spaces can together protect a larger surface area. The same idea can be transferred–with certain drawbacks that we will ignore for the moment–to the immune repertoire: two individuals with very different immune repertoires give rise to progeny with a more effective immune system.
In rodents, who use their sense of smell much more than we do when choosing a mate, the preference largely follows a simple rule governed by this principle: they tend to choose mates with a different immune repertoire. This was the basis for Wedekind’s experiment. He measured each participant’s MHC (major histocompatibility complex), a family of genes involved in the differentiation between our own and others’ immune systems. And the extraordinary result is that when we judge by our sense of smell, we do so according to the same premise as our rodent cousins: on average, women will be more attracted to the scent of men who have a different MHC. So pheromone parties* promote diversity. At least in terms of immune repertoires.
But this rule has a notable exception. A female mouse’s scent preferences invert when she is pregnant. Then she prefers the smell of mice with MHCs that are similar to hers. The simplified, narrative version of this result is that while the search for complementarity can be beneficial when mating, once there is already a baby in the womb it makes sense to remain close to a known nest, among kin, with those who are similar.
Does the same shift in olfactory preference happen in women? It seems plausible since, amid the hormonal revolution of pregnancy, changes in a woman’s perception of smell and taste are among the most distinctive effects. Wedekind studied how olfactory preference changes when a woman is taking birth-control pills containing steroids that induce a hormonal state very similar to pregnancy. It was thus discovered that, just as in rodents, the preference was turned on its head: the smell of T-shirts worn by men with similar MHCs became more appealing.
This experiment illustrates a much more general concept. Many of our emotional and social decisions are much more stereotypical than we recognize. In general, this mechanism is masked by the mystery of the unconscious and, therefore, we do not perceive the process of deliberation. But it is there, in the underground workings of an apparatus that may have been forged long before we were able even to begin pondering these questions.
In short, decisions based on hunches and intuition, which because they are unconscious are often perceived as magical, spontaneous and unfounded, are actually regulated and sometimes markedly stereotypical. Given the virtues and limitations of our conscious machinery, it seems wise to delegate ‘simple’ decisions to rational thought and leave the complex ones to our smell, sweat and heart.
When making a decision, in addition to carrying out the chosen option, the brain generates a belief. That is what we perceive as trust or conviction in what we are doing. Sometimes we buy a chocolate bar at a kiosk, certain that it’s exactly what we want. Other times we walk away hoping that the chocolate sweetens our frustration at knowing we haven’t chosen well. The treat is the same, but the bitter perception of having made a clumsy decision is very different.
We have all, at some point, blindly trusted a decision that later turned out to have been wrong. Or, conversely, in many situations we act without conviction when we actually have all the arguments to be heady with confidence. How is this feeling of trust in our decisions constructed? Why do some people constantly walk around excessively confident, no matter what they do, and others live in doubt?
The scientific study of this trust–or hesitation–turns out to be particularly tempting because it opens up a window on subjectivity; it is not a study of our observable actions but rather of our private beliefs. Which is not to say that it is a minor matter from a purely practical standpoint, since our confidence (or lack thereof) in ourselves and our actions defines our manner of being.
The simplest way to study confidence is to ask someone to mark a point on a line where one end represents absolute confidence and the other represents doubt about a decision that’s been made. Another way to detect confidence is to ask decision-makers whether they prefer to charge a fixed amount for the decision or to bet on it to earn more. If they are very confident about the decision they’ve just made, they’ll be inclined to bet (two birds in the bush). If, on the other hand, they are hesitant about their choice, they’ll prefer the fixed amount (bird in the hand). These two means of measuring confidence are very consistent: those who display firm conviction on the line also bet boldly. And the opposite is also true: those who tend to express low confidence in their decisions are not inclined to bet on them.
This parallel between confidence and betting has obvious relevance for daily life. Betting or investing poorly in financial, emotional, professional, political and family questions has a high cost. But this parallel also has scientific consequences. This type of experiment allows us to question our subjectivity in areas that previously seemed to be impossible to broach. When measuring someone’s predisposition to betting we are discovering something about confidence perceived by those who cannot express their beliefs in words.
The way each person constructs confidence is almost like a fingerprint. Some people express confidence in intermediate shades, and others only in extremes of doubt or conviction. There are also cultural traits: the way certainty is expressed in some parts of Asia differs from how it is expressed in the West.
Almost all of us have witnessed examples where we assign confidence in a fairly imprecise way, such as when we think we did well in an exam and it turns out we failed it. And most of us have also known people who are quite accurate in assessing their own knowledge and therefore have a precise, dependable sense of confidence, knowing when to bet and when not to. Confidence is then a window into one’s own knowledge.
The accuracy of one’s confidence is a personal trait, similar to height or eye colour. But unlike those physical traits, there is a certain amount of room to change and modify this thought pattern. As an identity trait, it has a signature in the brain’s anatomical structure. Those who have more accurate confidence systems have a greater number of connections–measured in density of axons–in a region of the lateral frontal cortex called Brodmann Area 10, or BA10. Additionally, those with a more precise sense of confidence organize their cerebral activity in such a way that this BA10 region is more efficiently connected to other cortical structures in the brain, such as the angular gyrus and the lateral frontal cortex.
This difference in functional brain connectivity between those who have an accurate sense of confidence and those who don’t is only observed when a person turns their attention inwards–for example, by focusing on their breathing–and not when their attention is focused outwards on the external world. This establishes a bridge between two variables that were seemingly scarcely related: our sense of confidence and our knowledge of our own body. What they have in common is that they both lead our thoughts inwards. And that suggests that a natural way of improving accurate confidence in our decision-making system is learning to observe and focus on our own body states.
To sense whether we trust ourselves or not, our brain uses endogenous variables. For instance, it will sense lack of confidence if we sweat, stutter, lower our gaze or express other bodily signs of doubt. These body signs, which we use to sense confidence in others, also allow us to sense that about ourselves.
When the balance between doubt and certainty applies to outcomes in the unknowable future, our sense of confidence divides us into optimists or pessimists. Optimists are sure they will make every shot, win every big game, never lose their job and can have unprotected sex or drive recklessly because, after all, they are immune to the risks. What’s mysterious is how optimism survives despite knock-backs and the evidence to the contrary we receive each and every day. The solution to this conundrum in an optimist’s brain is selective forgetting. Every Monday, like every 1 January, is filled with repeated promises; every love is the love of our lives, and this year we are absolutely going to win the championship. Each of these affirmations completely ignores the fact that there have been plenty of other Mondays and plenty of other disappointments. Are we really so blind to the evidence? What are the mechanisms in our brains that bring about this fundamentalist adherence to optimism? And what do we do with this persistent optimism while accepting that it’s based on an illusion?
One of the most common models of human learning–now widely adopted in robotics and artificial intelligence–is prediction error. It is simple and intuitive. The first premise is that each action we carry out, from the most mundane to the most complex, is built on an internal model, a sort of simulated prelude to what will happen. For example, when we greet someone in a lift we presume that there will be a positive response from that person. If the response is different from what we are expecting–exaggeratedly warm or coldly reluctant–we are surprised.
Prediction error expresses the difference between what we expect and what we actually observe, and it is codified in a neuronal circuit in the basal ganglia that generates dopamine. Dopamine is a neurotransmitter whose functions include acting as a messenger of surprise, carrying that signal to various brain structures. The dopaminergic signal registers the dissonance between what is predicted and what is found, and it is the fuel of learning, because circuits irrigated with dopamine become malleable and predisposed to change. In the absence of dopamine, neuronal circuits are generally rigid and not very malleable.
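The rule itself fits in a few lines. A minimal sketch of prediction-error learning (the ‘delta rule’ familiar from reinforcement learning); the learning rate stands in for the malleability that dopamine confers, and the numbers are illustrative:

```python
def update(expectation, outcome, learning_rate=0.1):
    """Move the internal model towards reality by a fraction of the
    prediction error: the gap between what happened and what was expected."""
    prediction_error = outcome - expectation       # the 'surprise' signal
    return expectation + learning_rate * prediction_error

# A warm greeting is expected (say, 1.0); a cold shrug (0.0) arrives.
# The surprise nudges the internal model down for the next encounter.
belief = 1.0
for _ in range(3):
    belief = update(belief, 0.0)
    print(round(belief, 2))   # 0.9, 0.81, 0.73: expectations adjust gradually
```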
The cyclical renewal of our hopes, every Monday and every New Year’s Eve, forces us to hack into this learning system. If the brain didn’t generate a signal of dissonance when reality is worse than what we were expecting, we would indefinitely renew our hopes. Is that what happens? And if so, how? Is this the optimist’s secret gift?
All these questions are answered in unison by a relatively simple experiment conducted by a British neuroscientist, Tali Sharot. In it, she asked people to estimate the probability that various unfortunate events would occur. What is the likelihood of dying before the age of sixty? Of developing a degenerative disease? Of having a car accident?
A large majority of those asked assume that the odds of something bad happening to them are lower than what statistics suggest. Which is to say, when trying to evaluate our risks–flying in a plane and urban violence are clear exceptions–we are almost all decidedly optimistic.
But the most interesting thing is what happens when our beliefs do not coincide with reality. For example, in the experiment, participants were asked to estimate the likelihood of suffering from cancer and later on were told that the average likelihood of someone like them doing so is close to 30 per cent.
According to the model of prediction error, people should use this information to modify their beliefs. And that is exactly what happens when most people discover that things are better than they supposed. Participants who believed that their likelihood of suffering from cancer was higher than it really is adjusted their estimates to a value very close to the real one. For instance, those who believed that their likelihood was 50 per cent responded in subsequent interviews with values around 35 per cent, quite close to the real value of 30 per cent.
But–and herein lies the key–those who believed that their probability of suffering cancer was less than in reality (for instance those responding that it was 10 per cent) changed their beliefs very little. When asked again in subsequent interviews and after having heard the bad news that the likelihood of suffering from cancer was in fact 30 per cent, they adjusted their estimates by a mere 1 or 2 per cent (to 11 or 12 per cent). That is to say, the adjustment those people make is much less–almost nil–when they discover that the truth is worse than they imagined it to be.
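One way to capture this asymmetry is to let good news and bad news be absorbed at different rates. In the sketch below the two learning rates are mine, chosen simply to reproduce the interview numbers quoted above:

```python
def update_risk(belief, evidence, rate_good=0.75, rate_bad=0.10):
    """Optimism as asymmetric updating: news that the risk is lower than
    believed (good news) is absorbed quickly; news that it is higher
    (bad news) barely registers."""
    error = evidence - belief
    rate = rate_good if error < 0 else rate_bad   # lower risk = good news
    return belief + rate * error

print(update_risk(50, 30))   # good news: 50% -> 35%, close to the true 30%
print(update_risk(10, 30))   # bad news: 10% -> 12%, an almost nil adjustment
```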
And what happens meanwhile in the brain? Every time we discover a desirable or beneficial piece of information, a group of neurons in a small part of the left prefrontal cortex called the inferior frontal gyrus is activated. When we receive undesired information, on the other hand, another group of neurons activates in the homologous region of the right hemisphere. A sort of equilibrium between good and bad news is established between these brain regions. But this equilibrium has two catches: the first is that it weighs good news much more heavily than bad, which, on average, creates a tendency towards optimism; the second–and most interesting–is that the bias in the balance varies from person to person, revealing the machinery behind optimism.
The activation of neurons in the frontal gyrus of the left hemisphere is similar in everyone when we discover that the world is better than we had thought. On the other hand, the activation of the frontal gyrus of the right hemisphere varies widely from one individual to the next when we find out that the world is worse than we believed it to be. In more optimistic people, this activation is minimized, as if they literally turned a blind eye to bad news. In more pessimistic people, the opposite happens: the activation is amplified, accentuating and multiplying the impact of that negative information. Here is the biological recipe that separates the optimists from the pessimists: it is not their capacity to value what’s good but rather their ability to ignore and forget what’s bad.
Many mothers, for example, have only a vague recollection of the pain they experienced during childbirth. That selective forgetting eloquently illustrates the mechanism of optimism. If the pain were much more present in their memory, perhaps we would see many more only children. Something similar happens among newlyweds; none of them believe they will ever divorce. Yet between 30 and 50 per cent of them will, according to statistics that vary based on time and place. Of course, the moment when they swear eternal love–whatever is meant by love and eternity–isn’t the most appropriate time for statistical reflections on human relationships.
The costs and benefits of excessive or insufficient optimism are pretty tangible. There are intuitive reasons to encourage a certain amount of naïve optimism, since it turns out to be a driving force behind action, adventure and innovation. Without optimism we would never have landed on the moon. Optimism is also associated in a fairly generic way with better health and a more satisfying life. So we could think of optimism as a sort of little insanity that pushes us to do things that we otherwise wouldn’t do. Its flip side, pessimism, will lead to inaction and, in its chronic version, to depression.
But there are also good reasons to temper excessive optimism when it encourages risky and unnecessary decisions. Conclusive statistics associating the risk of car accidents with being inebriated, using a mobile phone or not wearing a seatbelt continue to pile up. Optimists know these risks but act as if they are immune to them. They feel they are exempt from the statistics and this, of course, is false: if we were all exceptions, the rule would not exist. This expansive optimism–which usually does not consider itself as such–can lead to fatal yet avoidable consequences.
A much more mundane example of excessive optimism is our waking up each day. Often bedtime is filled with promises about the next morning: we plan to get up much earlier than usual to, for example, exercise. That intention is built on a genuine desire and on an expectation of some value to us, such as increased health and fitness. But, except for larks, the panorama the next day is a very different one. The person who made the decision the night before to get up early disappears by next morning. At 7 a.m. we are somebody different altogether, overcome by sleepiness and the strictly hedonistic pleasure of staying in bed.
The outlines of identity are blurred. Or, to put it more precisely, each of us makes up a consortium of identities that are expressed in different, sometimes contradictory, ways in varying circumstances. The dissociation between the various members of the consortium has two clear projections: one hedonistic and bold, which ignores risks and future consequences (the optimist), and another that ponders those risks and consequences (the pessimist). This dynamic is particularly exacerbated in two quite different scenarios: in certain neurological and psychiatric pathologies, and in adolescence.
The predisposition to ignore risk grows with the activation of the nucleus accumbens in the limbic system, which corresponds to the perception of hedonistic pleasure. In fact, in an experiment that shocked some of his colleagues at the Massachusetts Institute of Technology, Dan Ariely recorded this in a detailed, quantitative way with regard to a precise aspect of pleasure: sexual arousal. He found that the more excited people get, the more predisposed they are to doing things that they would otherwise consider aberrant or unacceptable. Such things of course included taking the risk of having unprotected sex with strangers.
Adolescence is a period plagued with excessive optimism and exposure to risky situations. This happens because the brain’s development, like the body’s, is not homogeneous. Some cerebral structures develop very quickly and mature within the first few years of life, while others are still immature when we become teenagers. One popular neuroscientific myth is that adolescence is a time of particular risk because of the immaturity of the prefrontal cortex, a structure that evaluates future consequences and coordinates and inhibits impulses. However, the later development of the control structures in the frontal cortex cannot by itself explain the spike in risk predisposition recorded during the teenage years. In fact, children, whose prefrontal cortex is even more immature, expose themselves to less risk. What is characteristic of adolescence is the combination of a relatively immature prefrontal cortex–and, as a result, a reduced ability to inhibit or control certain impulses–with a fully developed nucleus accumbens.
The naïve clumsiness of those teenage years, in a body that is growing faster than its capacity to control itself, can be seen as a reflection of the adolescent cerebral structure. Understanding this, and taking into account the uniqueness of this time in our lives, can help us to empathize and, as a result, engage in dialogue more effectively with teenagers.
This understanding of the brain structure is also relevant for making public decisions. For example, in many countries there is debate surrounding whether teenagers should be allowed to vote. These debates would benefit from taking into account an informed view of the development of reasoning and the process of decision-making during adolescence.
The work done by Valerie Reyna and Frank Farley on risk and rationality in teenagers’ decision-making shows that, even when they don’t have good control of their impulses, teenagers are intellectually indistinguishable from adults in terms of rational thought. Which is to say, they are capable of making informed decisions about their future despite the fact that they struggle, more than an adult would, to rein in their impulses in emotionally charged states.
But, of course, we don’t need a biologist to tell us that we alternate between reason and impulse, and that our impulsivity shows up in the heat of the moment even beyond our teenage years. This is expressed in the myth of Odysseus and the Sirens, which also gives us perhaps the most effective solution for dealing with this consortium that comprises our identity. When heading off on his voyage home to Ithaca, Odysseus asks his sailors to tie him to the boat’s mast so that he won’t act on the inevitable temptation to follow the Sirens’ song. Odysseus knows that in the heat of the moment, the craving will be irresistible,* but instead of cancelling his voyage he decides to make a pact with himself, binding together his rational self with his anticipated future impulsive one.
The analogies with our daily life are often much more banal; for many of us, our mobile phones ring out with the contemporary version of the Sirens’ song, virtually impossible to ignore. To such an extent that, although we know the clear risks of answering a text while at the wheel, we do it even when the message is something completely irrelevant. Ignoring the temptation to use our phone while driving seems difficult, but if we leave it somewhere inaccessible–such as in the boot of the car–we, like Odysseus, can force our rational thinking to control our future recklessness.
Our brain has evolved mechanisms to ignore–literally–certain negative aspects of the future. And this recipe for creating optimists is just one of the many ways the brain produces a disproportionate sense of confidence. Studying human decisions in the social and economic problems of daily life, Daniel Kahneman, a psychologist and Nobel Prize laureate in Economics, identified two archetypal flaws in our sense of confidence.
The first is that we tend to confirm that which we already believe. That is to say, we are generally headstrong and stubborn. Once we believe something, we look to nourish that prejudice with reaffirming evidence.
One of the most famous examples of this principle was discovered by the great psychologist Edward Thorndike when he asked a group of military leaders what they thought of various soldiers. The opinions dealt with different aptitudes, including physical traits, leadership abilities, intelligence and personality. Thorndike showed that the evaluation of a person mixes together abilities that, on the face of it, have no relationship to each other. That was why the generals rated the strong soldiers as intelligent and good leaders, although there is no necessary correlation between strength and intelligence.* Which is to say that when we evaluate one aspect of a person, we do so under the influence of our perception of their other traits. This is called the halo effect.
This flaw of the decision-making mechanism is pertinent not only to daily life but also to education, politics and the justice system. No one is immune to the halo effect. For example, faced with identical circumstances, judges are more lenient with defendants who are more attractive. This is an excellent example of the halo effect and the distortions it causes: those who are lovely to look at are presumed to be good people. The same effect, of course, weighs on the free and fair mechanism of democratic elections. Alexander Todorov showed that a brief glance at the faces of two candidates allows one to predict the winner with striking accuracy–close to 70 per cent–even without any data on the candidates’ history, thoughts and deeds, or their electoral platforms and promises.
The confirmation bias–the generic principle from which the halo effect derives–cuts reality down so that we see only what is coherent with what we already believe to be true. ‘If she looks competent, she’ll be a good senator.’ This inference, which ignores facts pertinent to the assessment and is based entirely on a first impression, turns out to be much more frequent than we realize or will admit to in our day-to-day decisions and beliefs.
In addition to the confirmation bias, a second principle that inflates confidence is our ability to completely ignore the variance of data. Think about the following problem: a bag holds 10,000 balls. You take out the first one and it’s red. You take out a second, and it’s red too. So are the third and the fourth. What colour will the fifth one be? Red, of course. Our confidence in that conclusion far outweighs what the evidence warrants: there are still 9,996 balls in the bag. As Woody Allen says, ‘Confidence is what you have before you understand the problem.’ To a certain extent, confidence is ignorance.
Postulating a rule based on just a few cases is both a virtue and a vice of human thought. It is a virtue because it allows us to identify rules and regularities with consummate ease. But it is a vice because it pushes us towards definitive conclusions when we have barely observed a tiny slice of reality. Kahneman proposed the following thought experiment. A survey of 200 people indicates that 60 per cent would vote for a candidate named George. Very shortly after finding out about that survey, the only thing we remember is that 60 per cent would vote for George. The effect is so strong that many people will read this and think that I wrote the same thing twice. The difference is the size of the sample. The first phrasing explicitly states that it is the opinion of only 200 people. In the second, that information has disappeared. This is the second filter that distorts confidence. In formal terms, a survey showing that, out of 30 million people, 50.03 per cent would vote for George would be far more decisive, but the belief system in our brains mostly forgets to weigh whether the data come from a massive sample or from three balls in a bag of 10,000. As the recent ‘Brexit’ vote in the UK and the Donald Trump vs Hillary Clinton election showed, in the build-up to an election pollsters often forget this basic rule of statistics and draw firm conclusions from a strikingly small and often biased amount of data.
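The statistic the brain forgets to weigh is easy to make explicit. The sketch below applies the standard margin-of-error formula for a proportion to the two surveys in the text:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# 200 people, 60% for George: a large lead, but coarsely measured.
print(f"n=200:        +/- {100 * margin_of_error(0.60, 200):.1f} points")
# 30 million people, 50.03%: a tiny lead, resolved far more finely.
print(f"n=30,000,000: +/- {100 * margin_of_error(0.5003, 30_000_000):.3f} points")
```

The small survey measures its ten-point lead with a resolution of almost seven points; the massive survey measures its fraction-of-a-point lead hundreds of times more finely, which is what makes it the firmer result.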
In short, the confirmatory effect and variance blindness are two ubiquitous mechanisms that, in our minds, allow us to base opinions on just a small portion of the coherent world while ignoring an entire sea of noise. The direct consequence of these mechanisms is inflated confidence.
A vital question in understanding and improving our decision-making is to explore if these confidence flaws are native to complex social decisions or if they are seen throughout the vast spectrum of decision-making. Ariel Zylberberg, Pablo Barttfeld and I set out to solve this mystery by studying extremely simple decisions, such as which is the brighter of two points of light. We found that the principles which inflate confidence in social decisions, such as the confirmatory effect or ignoring variance, are traits that persist even in the simplest perceptual decisions.
It is a common trait of our brains to generate beliefs that are more optimistic than the actual data suggest. This was confirmed by a series of studies recording the neuronal activity in different parts of the cerebral cortex. It was consistently observed that our brains–and the brains of many other species–are constantly mixing sensory information from the outside world with our own internal hypotheses and conjectures. Even our vision, the brain function we imagine to be most anchored to reality, is filled with illusions. Vision doesn’t function passively, depicting reality like a camera; rather, it works like an organ that interprets and constructs detailed images from limited and imprecise information. Even in the first processing station of the visual cortex, neurons respond to a conjunction of information received from the retina and information received from other parts of the brain–parts that codify memory, language, sound–which establish hypotheses and conjectures about what is being seen.
Our perception always involves some imagination. It is more similar to painting than to photography. And, in line with the confirmation effect, we blindly trust the reality we construct. This is best witnessed in visual illusions, which we perceive with full confidence, as if there were no doubt that we were portraying reality faithfully. One interesting way of discovering this–in a simple game that can be played at any moment–is the following. Whenever you are with another person, ask them to close their eyes, then ask questions about what is nearby–not minute details but the most striking elements of the scene. What colour is the wall? Is there a table in the room? Does that man have a beard? You will see that most of us are quite ignorant about what lies around us. This is not so puzzling. The most extraordinary fact is that we completely disregard this ignorance.
Both in everyday life and in formal law we judge others’ actions not so much by their consequences as by their determining factors and motivations. Even though the consequence may be the same, it is morally very different to injure a rival on a playing field through an unfortunate, involuntary action than through premeditation. Therefore, in order to decide whether the player acted with bad intentions, observing the consequences of their actions is not enough. We must put ourselves in their place and see what happened from their perspective. Which is to say, we have to employ what is known as the theory of mind.
Let us consider two fictional situations. Joe picks up a sugar bowl and serves a spoonful into his friend’s tea. Before he did so, someone switched the sugar for a poison of the same colour and consistency. Of course, Joe didn’t know that. His friend drinks the tea and dies. The consequences of Joe’s action are tragic. But was what he did wrong? Is he guilty? Almost all of us would say no. In order to arrive at that conclusion, we put ourselves in his shoes, recognizing what he knows and doesn’t know, and seeing that he had no intention of hurting his friend. Not only that, but in most people’s minds he was not even negligent in any way. Joe is a good guy.
Same sugar bowl, same place. Peter takes the bowl and replaces the sugar with poison because he wants to kill his friend. He spoons the poison into his friend’s tea but it has no effect on him, and his friend walks away alive and kicking. In this case, the consequences of Peter’s action are innocuous. However, we almost all believe that Peter did the wrong thing, that his action is reprehensible. Peter is a bad guy.
The theory of mind is the result of the articulation of a complex cerebral network, with a particularly important node in the right temporoparietal junction. As its name suggests, it is found in the right hemisphere, between the temporal and parietal cortices, but its location is the least interesting thing about it. What matters is not cerebral geography per se but the fact that localizing a function in the brain gives us a window for probing the causal relationships in its workings.*
If our right temporoparietal junction were to be temporarily silenced, we would no longer consider Joe and Peter’s intentions when judging their actions. If that region of our brains is not functioning as it should, we would believe that Joe did wrong (because he killed his friend) and that Peter did right (because his friend is in perfect health). We wouldn’t take into consideration that Joe didn’t know what was in the sugar bowl and that Peter had failed to carry out his macabre plan only through chance. These considerations require a precise function, the theory of mind, and without it we lose the mental ability to separate the consequences of an action from its network of intentions, knowledge and motivations.
This example, demonstrated by Rebecca Saxe, is proof of a concept that goes beyond the theory of mind, morality and judgement. It indicates that our decision-making machinery is composed of a combination of pieces that carry out particular functions. And when the biological support for one of those functions is disrupted, the way we believe, form opinions and judge changes radically.
More generally, it suggests that our notion of justice does not result from pure and formal reasoning, but that instead it is conceived in a particular state of the brain.*
But in fact there is no need for sophisticated brain-stimulating devices to prove this concept. The old quip that justice is ‘what the judge ate for breakfast’ turns out to be uncomfortably close to the truth. The percentage of favourable rulings by US judges drops dramatically during the morning, rebounds abruptly after the lunch break, and then drops again substantially over the next session. This study of course cannot tease apart the many variables that change between breaks, such as glucose, fatigue or accumulated stress. But it shows that simple extraneous factors conditioning the state of the judge’s brain have a strong influence on the outcome of court decisions.
Moral dilemmas are hypothetical situations taken to an extreme that help us reflect on the underpinnings of our morality. The most famous of them is the ‘trolley problem’, which goes like this:
You are on a tram without brakes, travelling along a track where five people are standing. You are well acquainted with how the tram works and know beyond a shadow of a doubt that there is no way to stop it, and that it will run over those five people. There is only one option: you can turn the wheel and take another track, where only one person will be run over.
Would you turn the wheel? In Brazil, Thailand, Norway, Canada and Argentina, almost everyone–young or old, liberal or conservative–chooses to turn it on the basis of a reasonable, utilitarian calculation. The choice seems simple: five deaths or one? Most people across the world choose to kill one person and save five. Yet experiments show that there is a minority of people who consistently decide not to turn the wheel.
The dilemma consists in doing something that will cause the death of one person, or doing nothing and letting five people die. Some people reason that fate has already chosen a path and that they should not play God and decide who dies and who lives, even when the maths favours intervening. They reason that we have no right to intervene, bringing about the death of somebody who would have been fine if not for our action. We judge responsibility for action differently from responsibility for inaction. It is a universal moral intuition, one expressed in almost every legal system.
Now, another version of the dilemma:
You are on a bridge from which you see a tram hurtling down a track where five people are standing. You are completely sure that there is no way to stop it and that it will run over those five people. There is only one alternative. On the bridge, a large man is sitting on the railing, watching the scene. You know for certain that if you push him he will die, but also that his body will derail the tram and save the other five people.
Would you push him? In this case, almost everyone chooses not to. And the difference is perceived in a clear, visceral way, as if it were decided by our bodies. We don’t have the right deliberately to push someone to save someone else’s life. This is supported by our penal and social system–both the formal one and the judgements of our peers: neither would consider these two cases to be equal. But let’s forget about that factor. Let’s imagine that we are alone, that the only possible judgement is our own conscience. Who would push the man from the bridge and who would turn the wheel? The results are conclusive and universal: even completely alone, with no one watching, almost all of us would turn the wheel and almost no one would push the man from the bridge.
In some sense, both dilemmas are equivalent. Seeing this is not easy because it requires going against our intuitive body signals. But from a purely utilitarian perspective, from the motivations and consequences of our actions, the dilemmas are identical. We choose to act in order to save five at the expense of one. Or we choose to let fate take its course because we feel that we do not have the moral right to intervene, condemning someone who was not destined to die.
Yet from another perspective, both dilemmas are very different. In order to exaggerate the contrasts between them, we present an even more far-fetched dilemma.
You are a doctor on an almost deserted island. There are five patients, each with a different failing organ; a single transplant, you know, would undoubtedly leave each of them in perfect health. Without the transplants they will all die. Someone shows up at the hospital with the flu. You know you could anaesthetize them and use their organs to save the other five. No one would ever know, and the only judge would be your own conscience.
In this case, the vast majority of people presented with the dilemma would not take out the organs to save the other five, even to the point of considering the possibility aberrant. Only a few very extreme pragmatists choose to kill the patient with the flu. This third case again shares motivations and consequences with the previous dilemmas. The pragmatic doctor works according to a reasonable principle, that of saving five patients when the only option in the universe is one dying or five dying.
What changes in the three dilemmas, making them progressively more unacceptable, is the action one has to take. The first act is turning a wheel; the second, pushing someone; and the third, cutting into people with a surgical knife. Turning the wheel isn’t a direct action on the victim’s body. Furthermore, it seems innocuous and involves a frequent action unconnected to violence. On the other hand, the causal relationship between pushing the man and his death is clearly felt in our eyes and our stomachs. In the case of the wheel, this relationship was only clear to our rational thought. The third takes this principle even further. Slaughtering a person seems and feels completely impermissible.
The first argument (five or one) is utilitarian and rational, and is dictated by a moral premise, maximizing the common good or minimizing the common evil. This part is identical in all three dilemmas. The second argument is visceral and emotional, and is dictated by an absolute consideration: there are certain things that are just not done. They are morally unacceptable. This specific action that needs to be done to save five lives at the expense of one is what distinguishes the three dilemmas. And in each dilemma we can almost feel the setting in motion of a decision-making race between emotional and rational arguments, à la Turing, in our brain. This battle that invariably occurs in the depths of each of us is replicated throughout the history of culture, philosophy, law and morality.
One of the canonical moral positions is deontological–this word derives from the Greek deon, referring to obligation and duty–according to which the morality of actions is defined by their nature and not their consequences. In other words, some actions are intrinsically bad, regardless of the results they produce.
Another moral position is utilitarianism: one must act in a way that maximizes collective wellbeing. The person who turns the wheel, or pushes the man, or slices open the flu sufferer would be acting according to a utilitarian principle. And the person who does not do any of those actions would be acting according to a deontological principle.
Very few people identify with either of those two positions in the extreme. Each person has a different equilibrium point between the two. If the action necessary to save the majority is sufficiently horrific, deontology wins out. If the common good at stake grows much larger–for example, if a million people are being saved instead of five–utility moves into the foreground. If we see the face, expression or name of the person to be sacrificed for the majority–particularly when it is a child, a relative or someone attractive–deontology again carries more weight.
The race between the utilitarian and the emotional is waged in two different nodes of the brain. Emotional arguments are codified in the medial part of the frontal cortex, and evidence in favour of utilitarian considerations is codified in the lateral part of the frontal cortex.
Just as one can alter the part of the brain that allows us to understand another person’s perspective and hack into our ability to use the theory of mind, we can also intervene in those two cerebral systems in order to inhibit our more emotional part and foster our utilitarianism. Great leaders, such as Churchill, usually develop resources and strategies to silence their emotional part and think in the abstract. It turns out that emotional empathy can also lead us to commit all sorts of injustices. From a utilitarian and egalitarian perspective of justice, education and political management, it would be necessary to detach oneself–as Churchill did–from certain emotional considerations. Empathy, a fundamental virtue in concern for our fellow citizens, fails when the goal is to act in the common good without privilege and distinctions.
In everyday life there are very simple ways to give more weight to one system or the other. One of the most spectacular was demonstrated by my Catalan friend Albert Costa. His thesis is that when making the cognitive effort to speak a second language we place ourselves in a mode of cerebral functioning that favours control mechanisms. As such, it also favours the lateral part of the frontal cortex that governs the utilitarian and rational system of the brain. According to this premise, we could all change our ethical and moral stance depending on the language we are speaking. And this does, in effect, occur.
Albert Costa showed that native Spanish-speakers are more utilitarian when speaking English. If a Spanish-speaker is presented with the man-on-the-bridge dilemma in a foreign language, in many cases he or she will be more willing to push him. The same thing happens with English-speakers: they become more pragmatic when evaluating similar dilemmas in Spanish. For a native English-speaker, it is easier to push a man in Spanish.
Albert proposed a humorous conclusion to his study, but one which surely has some truth to it. The battle between the utilitarian and the emotional is not exclusive to abstract dilemmas. Actually, these two systems are invariably expressed in almost all of our deliberations. And, many times, in the safety of our homes more ardently than anywhere else. We are in general more aggressive, sometimes violently and mercilessly, with those we love most. This is a strange paradox in love. Within the trust of an unvarnished, unprejudiced relationship with vast expectations, jealousy, fatigue and pain, sometimes irrational rage emerges. The same argument between a couple that seems unbearable when we are living through it becomes insignificant and often ridiculous when seen from a third-person perspective. Why are they fighting over something so stupid? Why doesn’t one or the other simply give in so they can reach an agreement? The answer is that the consideration is not utilitarian but rather capriciously deontological. The deontology threshold drops precipitously and we are not willing to make the slightest effort to resolve something that would alleviate all the tension. Clearly we would be better off being more rational. The question is, how? And Albert, half joking and half serious, suggests that the next time we are fighting with our significant other, we should switch to Spanish (or any other non-native language). This would allow us to bring the argument into a more rational arena, one less burdened by visceral epithets.
The moral balance is complicated. In many cases, acting pragmatically or in a utilitarian manner requires detaching ourselves from strong emotional convictions. And it implies (most often implicitly) assigning a value (or a prize) to issues that, from a deontological perspective, it seems impossible to rationalize and convert into numbers.
Let’s perform a concrete mental experiment to illustrate this point. Imagine that you are going to be late for an important meeting. You are driving, and right after you have crossed a railway line you realize that the warning signs at the level crossing are not working. You feel lucky that no train was passing when you drove across. But you understand that, as the traffic gets heavier, someone will be hit by a train and will most likely die. You then call 999 to inform the emergency services, but as you dial you realize that, if you don’t make the call, the fatal accident will close the streets just behind you and hold back the traffic coming from various parts of the city. And with that, you will make it to your meeting on time. Would you hang up and let someone die to gain a few minutes and arrive on time? Of course not. The question seems absurd.
Now imagine that there are five of you travelling together in the same car. You are the only one who realizes that the warning signals are not working–maybe because as a child you were fascinated by level crossings. Same question, and surely the same answer. Even if no one would ever know ‘your sin’, you would make the call and prevent the accident. It does not matter whether it is one, five, ten or a million people arriving late; the answer is the same. No number of people being late would add up to the value of one life. And this principle seems to be quite general. Most of us have a strong conviction that, regardless of the dilemma, an argument about life and death trumps all other considerations.
However, we may not live up to this conviction. As absurd as the previous dilemma seems, similar considerations are made daily by every driver and by the policy-makers who regulate traffic in major cities. In Great Britain alone, about 1,700 people die every year as a result of road accidents. And even if this is a dramatic decrease from the figures of the 1980s (close to 6,000), these numbers would be far lower if traffic speed were further restricted to, say, 25 mph. But this, of course, has a cost. It would take us twice the time to get to work.*
If we forget the cases in which fast driving actually saves lives, as with ambulances, it becomes clear that we are all making an unconscious and approximate comparison that has time, urgency, production and work on one side of the equation, and life and death on the other.
Establishing rules and principles for morality is a huge subject that is at the heart of our social pact and, obviously, goes far beyond any analysis of how the brain constructs these judgements. Knowing that certain considerations make us more utilitarian can be useful for those who are struggling to behave more in that way, but it has no value in justifying one moral position over another. These dilemmas are only helpful in getting to know ourselves better. They are mirrors that reflect our reasons and our demons so that, eventually, we can have them more at our disposal and not simply silently dictating our actions.
Ana is seated on a bench in a square. She is going to play a game with another person, chosen at random among the many people in the square. They don’t know each other, and they do not see each other or exchange a single word. It is a game between strangers.
The organizer explains the rules of the game, and gives Ana fifty dollars. It is clearly a gift. Ana has only one option: she can choose how to divide the money with the other person, who will remain unaware of her decision. What will she do?
The choices vary widely, from altruists who divide the money fairly to egotists who keep it all. This seemingly mundane game, known as the ‘Dictator Game’, became one of the pillars of the intersection between economics and psychology. The point is that most people do not choose to maximize their earnings: they share at least part of what they are given (in the laboratory, usually tokens that are later exchanged for money), even when the game is played in a dark room where no record is kept of the dictator’s decision. How much is offered depends on many variables that define the contours of our predisposition to share.
Just to name a few: women are more generous, and share more tokens independently of the tokens’ monetary value. Men, by contrast, tend to be less generous, and become less so as the value of the tokens increases. People also behave more generously under the gaze of real human eyes. What is even more surprising is that merely displaying artificial eye-like images on the screen on which players make their choice leads them to share more tokens. This shows that even minimal and implicit social cues can shape our social behaviour. Names also matter. Even when playing with recipients they have never met and cannot see, dictators share more of their tokens when the recipient’s name is mentioned. And, on the contrary, more selfish behaviour can be primed if the game is presented using a market frame of buyers and sellers. Lastly, ethnicity and physical attractiveness also dictate the way people share, but in a more sophisticated manner. In a seminal study conducted in Israel, Uri Gneezy showed that in a trust game involving reciprocal negotiations, participants transferred more money when the recipient was of Ashkenazic origin than when they were of Eastern origin. This was true even when the donor was of Eastern origin, showing that people discriminate against their own group. However, in the dictator game, players shared similarly with recipients of both origins. Gneezy’s interpretation is that discrimination is the result of ethnic stereotypes (the belief that some groups are less trustworthy) and not a reflection of an intrinsic taste for discrimination (the desire to harm or offer less to some groups per se). Attractiveness also modulates sharing behaviour, in a more complicated way. Attractive recipients tend to receive more, but this seems to depend a great deal on the specific conditions under which the game is played. One study found that differences based on attractiveness are more marked when the dictators can not only see the recipient but also hear them. And the list is much longer. The point is that a great number of variables, from very sophisticated social and cultural constructs to very elementary artificial features, shape our sharing behaviour in a very predictable manner. And, most often, without our knowing anything about it.
Eva takes part in another game. It also begins with a gift, of fifty dollars that can be shared at will with a stranger named Laura. In this game, the organizers will triple what Eva gives Laura. For example, if she decides to give Laura twenty dollars, Eva will be left with thirty and Laura will get sixty. If she decides to give Laura all of it, Eva will have nothing and Laura will get 150. At the same time, when Laura gets her money, she has to decide how she wants to share it with Eva. If the two players can come to an agreement, the best strategy would be for Eva to give her all the money and then Laura would split it equally. That way they would each get seventy-five dollars. The problem is that they don’t know each other and they aren’t given the opportunity to make that negotiation. It is an act of faith. If Eva decides to trust that Laura will reciprocate her gracious action, she should give her all the money. If she thinks that Laura will be stingy, then she shouldn’t give her any. If–like most of us–she thinks a little of both, perhaps she should act Solomonically and keep a little–a sure thing–and invest the rest, accepting the social risk.
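The arithmetic of this game is easy to pin down. Here is a minimal sketch of the payoff rules just described (the function and its arguments are my own illustration):

```python
# Payoffs in the Trust Game described above: whatever Eva sends to Laura is
# tripled by the organizers, and Laura then returns some fraction of her gains.
def trust_game(endowment, sent, returned_fraction):
    """Return the final payoffs (Eva, Laura) for one round of the game."""
    pot = 3 * sent                          # the organizers triple the transfer
    eva = endowment - sent + returned_fraction * pot
    laura = (1 - returned_fraction) * pot
    return eva, laura

print(trust_game(50, 20, 0.0))   # (30.0, 60.0): Eva gives 20, Laura keeps it all
print(trust_game(50, 50, 0.5))   # (75.0, 75.0): full trust plus an equal split
```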
This game, called the ‘Trust Game’, directly evokes something we have already covered in the realm of optimism: the benefits and risks of trust. Basically, there are plenty of situations in life in which we would do better by trusting more and cooperating with others. Seen the other way around, distrust is costly, and not only in economic decisions but also in social ones–surely the most emblematic example being in couple relationships.
The advantage of taking this concept to its most minimal version in a game/experiment is that it allows us to exhaustively investigate what makes us trust someone else. We had already guessed some elements. For example, many players in the experiment often find a reasonable balance between trusting and not exposing themselves completely. In fact, the first player usually offers an amount close to half. And trusting the other person depends on the similarities between the players, in terms of accent, facial and racial features, etc. So again we see the nefarious effects of a morality based on superficiality. And what a player offers also depends on how much money is at stake. Someone who may be willing to offer half when playing with ten quid might not do the same when playing with 10,000. Trust has a price.
In another variant of these games, known as the ‘Ultimatum Game’, the first player, as always, must decide how to distribute what they have been given. The second player can accept or reject the proposal. If they reject it, neither of them gets anything. This means that the first player has to find a fair balance point that is usually a little above nothing and a little below half. Otherwise, both players lose.
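The strategic logic is simple to make explicit. In the sketch below, the responder’s rejection threshold is an assumption for the sake of illustration; the game itself leaves it to each player:

```python
# Payoffs in the Ultimatum Game: the responder can veto the split, in which
# case both players walk away with nothing.
def ultimatum(endowment, offer, rejection_threshold):
    """Return (proposer, responder) payoffs for a proposed offer."""
    if offer < rejection_threshold * endowment:
        return 0, 0                    # an offer deemed unfair is punished
    return endowment - offer, offer

print(ultimatum(50, 20, 0.3))  # (30, 20): the offer clears the threshold
print(ultimatum(50, 5, 0.3))   # (0, 0): rejected, and both players lose
```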
Bringing this game to fifteen small communities in remote parts of the planet, and in search of what he called homo economicus, the anthropologist Joseph Henrich discovered that cultural forces establish quite precise rules for this type of decision. For example, in the Au and Gnau peoples of Papua New Guinea, many participants offered more than half of what they received, a generosity rarely observed in other cultures. And, to top it all off, the second player usually rejected the offer. This seems inexplicable until one understands the cultural idiosyncrasy of these Melanesian tribes. According to implicit rules, accepting gifts, even unsolicited ones, implies a strict obligation to repay them at some point in the future. Let’s just say that accepting a gift is like taking on some sort of a mortgage.*
Two large-scale studies, one carried out in Sweden and the other in the United States, using both monozygotic twins (identical) and dizygotic twins (fraternal, whose genomes are as different as those of any other siblings), show that the individual differences in generosity seen in the trust game also have a genetic predisposition. If a twin tends to be very generous, in most cases their identical twin will be too. And the opposite is also true: if one decides to keep all the money, there is a high likelihood that the identical twin will do the same. This relationship is found to a lesser extent in dizygotic twins, which allows us to rule out the possibility that the similarity is merely a result of having grown up together, side by side, in the same home. That, of course, doesn’t contradict what we have already seen and intuited: that social and cultural differences influence cooperative behaviour. It merely shows that they aren’t the only forces governing generosity.
Finding a genetic footprint in the predisposition to trust and cooperate leads to a somewhat uncomfortable question. What chemical, hormonal and neuronal states make a person more predisposed to trust others? As with olfactory preferences, a natural starting point for studying the chemistry of cooperation is examining what happens in other animals. And a likely chemical candidate emerges: oxytocin, a hormone that modulates brain activity and plays a key role in the predisposition to social bonding. A player who inhales oxytocin plays the trust game much more generously than a player who inhales a placebo.
Oxytocin is involved in parenting behaviour. In fact, it plays a primary role in activating the uterus during childbirth, which explains its etymology: from the Greek oxys, meaning ‘quick’, and tokos, ‘birth’. It is also released by suckling at the nipple, which facilitates nursing. But oxytocin does not only prepare the body for motherhood: it also prepares the mother’s character for the huge feat she is about to undertake. Virgin sheep given oxytocin behave maternally towards the lambs of others, as if they were their own. They become great mothers. Conversely, mother sheep given antagonists that block the action of oxytocin lose their typical maternal behaviours and neglect their offspring. So oxytocin became established as the molecule of maternal love and, more generically, of all love.
Since then, a large body of research has shown, in one experiment after another, that administering a single dose of oxytocin improves different aspects of social cognition: trust, emotional recognition, the ability to direct and sustain one’s gaze at others, understanding, cooperation, and reasoning about high-level social interactions. Oxytocin emerged in the media, with some reason, as the holy grail of empathy, social interaction, emotion and love. Could we simply dispense oxytocin at large and make our world a better one? Would dreams of peace, trust, bonding, and a more just and caring society be realized by increasing the dose of a hormone?
The oxytocin hype was fuelled even more when genetics came in to show that variations of the OXTR gene that encodes the receptor of oxytocin were linked to deficits in social behaviour. This was shown directly in animal models, where this gene can be manipulated, but it was also found that variations in this gene could increase the risk of autism.
This closed the loop. Impaired social interaction is a diagnostic hallmark of Autism Spectrum Disorders. Autism has a very high prevalence (estimated at 1 in every 68 individuals), and there is currently no satisfactory (or even close to satisfactory) medical treatment, despite the enormous effort that has been devoted to finding one. In the light of this, oxytocin offered huge promise to a population avid for solutions. The first studies showed that, as with normal adults, single doses of oxytocin could increase social cognition in autistic children. But numbers matter. The effect was quite modest: children’s performance in a task in which they had to infer another person’s emotion by looking at their eyes improved, on average, from 45 to 49 per cent. This is still very far from people without autism, who on average score above 70 per cent in the same task.
Oxytocin worked, but the effect was very small, almost negligible. And there were more important reasons to temper the oxytocin hype. Most drugs behave very differently when used in a single dose compared with extended treatments of multiple doses. And here the results from animal research were not so promising. The same drug that boosted social behaviour in mice, sheep and voles when administered once resulted in very strange social behaviour in the long term, especially after repeated exposures. And, indeed, ten years of research have not shown any consistent effect of sustained oxytocin treatment in improving social deficits in autistic children. Adam Guastella, one of the world leaders in oxytocin research, published a review paper in 2016 analysing all the current evidence and concluding that repeated doses of oxytocin have very limited therapeutic potential.
Let’s put all this together, because it gives us an important lesson, not only about oxytocin and social cognition but, more generally, about how naive interpretations of neuroscientific findings can be strongly misleading. It is true that oxytocin plays a role in social behaviour–there is ample evidence to support this idea. Oxytocin is expressed in nature during motherhood (a moment of maximal social bonding); removing oxytocin usually leads to different forms of social neglect and lack of trust; and, conversely, providing oxytocin leads to increased trust, empathy, emotional recognition and understanding. Hacking the genes for oxytocin receptors produces animals with very strange social patterns, and people who carry atypical variants of these genes are more likely to express autism or other conditions that affect social behaviour. So the evidence spans genes, molecules, pathology, and laboratory psychological experiments on humans and animals, in a consistent picture. But the fact that a molecule plays a role in a process does not imply that consistently boosting that molecule will amplify the behaviour. This caveat is often hidden or left unsaid in broad media reports of these studies, which mainly follow the natural desire to make a story simple, and often more beautiful and optimistic than it really is.
Oxytocin is a biological and chemical trail that lays the foundations which predispose a person to cooperate, but it is a huge and unfounded leap to assume that this implies that a network of trust, love and social understanding can be built by popping a pill.
Trust is the foundation of human society. On every scale, in every stratum, trust is the glue of institutions. It is the key to friendship and love, and the basis of commerce and politics. When there is no trust, the bridges connecting people break, and societies fall apart. Everything collapses. And this idea of everything breaking is expressed in the Latin roots con (altogether) and rumpere (to break), which is where we get the word corrupt. Corruption leaves nothing intact.* It destroys the fabric of society.
The world map of perceived corruption is not hard to imagine.** The Nordic countries, Canada and Australia are pale yellow, indicating a very low perceived level of corruption. Europe shows a gradient, with corruption increasing from north to south and from west to east. The United States and Japan have intermediate orange values, while Russia and most of Asia, Africa and South America (with the notable exceptions of Chile and Uruguay) show up as the places with the highest perceived levels of corruption in the world.
Many economists think that endogenous, structural corruption, filtered through all the pores of a society, is a fundamental obstacle to development. Understanding why perceived corruption varies so widely is therefore pertinent, especially if analysing its mechanisms can offer clues that might eventually change the course of things.
Rafael Di Tella, an economist, Argentinian Olympic fencer and Harvard professor, developed–along with his doctoral student Ricardo Pérez-Truglia–a modest project within this larger objective, which sought to detect one of the seeds of corruption. Rafael’s premise begins with a quote from Molière: ‘He who wants to kill his dog, accuses it of rabies.’ What he implies is that the perpetrator of a wrong action gets away with it by assigning the blame to the victim.
From a normative perspective we should construct opinions about others based on what they have or have not done; yet we do so based on the shape of their face, the structure of their speech or the way they walk. The consequences of Molière’s conjecture are even more unsettling. It implies that we construct unfounded opinions about others to justify our aggression.
Rafael took Molière’s idea to the laboratory with an ingenious experiment called the ‘Corruption Game’.
Like all games derived from the ‘Dictator Game’, it begins with a player–the agent–who decides how to allocate twenty tokens, which are the payment for a boring job done by the agent and another player (the recipient), who never meet throughout the experiment. The fundamental aspects of the Corruption Game are as follows:
Some agents can choose, completely freely, how many tokens they want to keep. Others have a small margin of action: they can only choose to keep ten, eleven or twelve tokens. According to the rules of this version of the game they are forced to distribute at least eight to the other player. This controls how much the agent can mistreat the other player, so as to see later what the agent thinks about the recipient.
The recipient receives the tokens in sealed envelopes, without knowing how they were distributed, and can then trade both sets of tokens–their own and the agent’s–for cash. When doing so, the recipient has to make a decision: trade them fairly–five dollars for each token–or trade them corruptly, according to the arrangement offered by the cashier, who will pay only $2.50 for each token but will offer a bribe in exchange. The arrangement benefits the cashier and the recipient, and cheats the agent.
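To keep the payoffs straight, here is a minimal sketch of that trade. The size of the bribe is an assumption of mine–the text does not specify it–chosen so that, as described, the corrupt arrangement actually pays off for the recipient:

```python
# Cashing out in the Corruption Game: a corrupt trade halves the exchange
# rate on everyone's tokens, but pays the recipient a private bribe
# (bribe amount assumed for illustration; the text does not give it).
def cash_out(agent_tokens, recipient_tokens, corrupt, bribe=25.0):
    """Return (agent, recipient) cash payoffs for the recipient's choice."""
    rate = 2.5 if corrupt else 5.0          # the cashier pays $2.50 instead of $5
    agent = agent_tokens * rate
    recipient = recipient_tokens * rate + (bribe if corrupt else 0.0)
    return agent, recipient

# Example: an agent who kept twelve tokens and distributed eight.
print(cash_out(12, 8, corrupt=False))  # (60.0, 40.0): an honest trade
print(cash_out(12, 8, corrupt=True))   # (30.0, 45.0): the agent is cheated,
                                       # the recipient and the cashier gain
```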
In this game, the agent can act in a generous or selfish way, and the recipient can act in an honest or corrupt way. The question (Molière’s) is whether the selfish agents justify their action by arguing that their recipients were going to be corrupt. The fundamental key is that the tokens are in a sealed envelope and therefore, when deciding to trade them, the recipient still doesn’t know how they were distributed. In this game, the player who acts corruptly does so on the basis only of personal predisposition, not for revenge or payback.
Despite that, Molière governs the game. Those agents who were given more freedom to play aggressively tend to deem the recipients more corrupt. And this is true both with regard to their fellow players–whom they have never met–and in their view of the general public. When we can choose to be more hostile and aggressive, we tend to think that others are corrupt. Then all dogs are rabid.
It remained to be seen how this plot is perpetuated; how opinions emanating from our own actions can, in turn, condition what we do, leading to a domino effect of corruption in the social network. In order to find this out, Andrés Babino, a doctoral student in my lab, and I joined the team Rafael Di Tella had put together.
The key was observing how the agent acted according to the opinion they had of the other player. We created a new experiment in which the recipient had to act according to one of three instructions. The recipient:
(1) is forced to trade each token for its face value;
(2) can choose to act corruptly or not;
(3) is forced to trade the tokens for half of their value and keep the commission–in other words, is forced to act corruptly.
It could be expected that the agent–who knew which of the three rules the other was bound by–would distribute more when knowing that the recipient would not act corruptly, slightly less when uncertain whether the other player would act corruptly, and even less when warned that the recipient was forced to act corruptly.
However, that was not what happened. In fact, the agent distributed tokens with equal generosity in both conditions where the recipient had no freedom of choice; it did not matter whether the forced way the recipient traded turned out to be more or less favourable for the agent. And the agent was much less generous when unsure what the recipient would do. In the game of beliefs and trust, it is ambiguity that is the real killer.
The same argument can be applied in the opposite way. We are hostile with those we believe could betray us. It is the fear of being made a fool of, of trusting someone who will not reward us in the same way. So, putting together the two pieces of the puzzle, our own selfish actions turn into harmful beliefs about others (‘everyone is corrupt’), and ambiguity about others’ beliefs (‘they may be corrupt’) makes us selfish and aggressive. It is a vicious circle that is only remedied by firmly sowing certainty and trust. And this, at least in the laboratory, is possible. In order to do so we must enter the deepest recesses of words and the deepest structures of the brain.
When players make a confident, cooperative and altruistic decision in the trust game, the dopaminergic circuits of pleasure and reward in their brains are activated. In other words, our brains react to displaying solidarity much as they react to something pleasurable–sex, chocolate, money. Being good has value. And there is a foundation for this hypothesis: it explains why, in all the economic games, we rarely see decisions that exclusively maximize financial profit and ignore all social considerations. Social capital is not only lovely and honourable–it pays.
When playing the trust game repeatedly, players learn and align into a pattern: if one player distributes generously, the other becomes progressively more generous. And the opposite is also true: if one is not generous, the other distributes in an increasingly selfish manner. In general, the game converges on one of two solutions: the perfectly cooperative, in which both players win more, and the selfish, in which the first player wins less and the second gets nothing. The brain discovers the other player’s inclinations using the same learning mechanism that explains the neuroscience of optimism. Before playing, a person already has an expectation of their fellow player: whether that player will cooperate or not. When they find a discrepancy, the brain’s caudate nucleus activates and releases dopamine.
This produces a signal of prediction error that in turn makes us learn to calculate more precisely whether the other player will cooperate in the future. As this calculation becomes more exact, we learn to know our neighbours. As such there is less of a discrepancy between what is expected and what is found, and the dopaminergic signal lessens. It is the neuronal circuit of social reputation.
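This learning rule can be written in a few lines. The sketch below is a generic delta-rule update–my own illustration of the prediction-error principle, not the exact model fitted in these studies:

```python
# Delta-rule sketch of social learning: the expectation of cooperation is
# nudged in proportion to the prediction error, so surprises drive learning
# and the update signal fades as the partner becomes predictable.
def update_trust(expected, observed, learning_rate=0.3):
    """One learning step; returns the new expectation and the prediction error."""
    error = observed - expected            # the 'discrepancy' signal
    return expected + learning_rate * error, error

trust = 0.5                                # start with an even expectation
for outcome in [1, 1, 1, 1]:               # the partner cooperates repeatedly
    trust, error = update_trust(trust, outcome)
    print(round(trust, 2), round(error, 2))  # the error shrinks round by round
```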
What’s most interesting here is understanding how this slow cooking of trust thickens into an obstinate tendency to trust others. This can perhaps explain in part the idiosyncratic differences between Argentinians, Chileans, Venezuelans and Uruguayans in their predisposition to trust others or, alternatively, to become corrupt.
The key experiment was done by a neurobiologist, Elizabeth Phelps, in New York. A person repeatedly plays trust games with different players, each of whom has previously been described in a brief made-up biography marking them as morally upstanding or immoral.
And she discovered something extraordinary in the brains of those playing with someone described as morally upstanding who nonetheless behaves selfishly. Since the brain learns from discrepancies, we would expect that a prediction error would be produced in the caudate nucleus, releasing dopamine and in turn allowing for a revision in the opinion of the other person. Their good reputation would have to be adjusted to take into account the bad action just observed. But that doesn’t happen. The brain turns a blind eye when there is a discrepancy between your moral expectation of a person and their actions. The caudate nucleus does not activate, the dopamine circuits shut off and there is no learning. This obstinacy is a lasting social capital that can resist certain setbacks. Those who establish (based on the biography they were given) that the other player will act morally do not change that belief merely because they find an exception. Which is to say, the trust network is robust and sturdy. The seed of social confidence is optimism’s first cousin.
We can recognize this in a more mundane situation; for example, when someone whose taste in movies we respect enthusiastically recommends a film that we think is without merit. We curse their name in the moment, but our trust in them persists. There would have to be many more failed recommendations before we began to question their judgement in recommending films to see. Yet if a person we barely know recommends a bad book to us, we will be unlikely to take their advice ever again.
Over the course of this chapter we have travelled far and wide through human decision-making, from our simplest choices to our most sophisticated and profound: the decisions that define our morality, our notion of what is fair, whom we love. The ones that, as José Saramago says, ‘make us’.
Over the course of this journey a latent and implicit tension naturally appeared. On one hand, we spoke of the existence of a common neuronal circuit that mediates practically every human decision. On the other, we have shown that our ways of deciding are markedly personal, and that our decisions define us. Some are utilitarian and pragmatic; some are trusting and willing to take risks; others are prudent and spineless. What’s more, this mishmash of decisions coexists deep within each of us. How is it possible that one single cerebral mechanism can produce such a wide range of decisions? The key is that the machine has various screws, and the way they are tightened can result in decisions that seem very different despite having structural similarities. So a slight change in the balance between the lateral frontal cortex and the medial frontal cortex defines us as cold and calculating, or emotional and hypersensitive. Often what we perceive as opposing decisions are, in actuality, only a very slight disturbance in a single mechanism.
This is not only true of the decision-making machinery. It turns out to be perhaps the essence of the biology that defines us: diversity within regularity. Noam Chomsky caused a stir when he explained that all languages, each with its own history, idiosyncrasies, usage and customs, share a common skeleton. This holds true for the language of genetics. We all roughly share the same genes; otherwise it would be impossible to talk about a ‘human genome’. But the genes are not identical. For example, there are certain places on the genome–called polymorphisms–that are wildly varied and, to a large extent, define the unique individuals we each are.
Of course, these seeds take shape within a social and cultural breeding ground. Despite a genetic predisposition to and a biological seed for cooperation, it would be absurd from every standpoint to believe that Norwegians are less corrupt than Argentinians because of a different biological makeup. However, here there is an important nuance. It is not impossible–in fact, it’s quite likely–that the brain’s shape and organization develop differently depending on whether it’s reared in a culture based on trust or distrust. It is within cultures that the machine’s screws are tightened, its parameters are configured, and the results are expressed in how we make decisions and how we trust. In other words, culture and brain are intertwined in an eternal golden braid.*