In the early 1980s, while I was getting used to life in New York City and psychologists around the world were beginning to pay more attention to unconscious mechanisms, a neurologist at Salpêtrière Hospital in Paris was treating two older patients who had recently suffered strokes. The doctor’s name was François Lhermitte. He was round-headed, balding, wore glasses, and sported a tie beneath his white lab coat—the image of medical expertise at the domed, four-hundred-year-old hospital where he worked. His patients, a man and a woman, were both acting strangely, and in the same way. Their behavior seemed to be entirely driven by cues in their environment, as if they no longer had independent control over what they did. “An excessive control of behavior by external stimuli at the expense of behavioral autonomy” is how Lhermitte described it. Naturally curious about what he might learn from their odd openness to outside influence, he decided to expose them to a variety of everyday contexts and observe what might happen.
Lhermitte started simply. Filling two glasses with water, he set them down in front of the patients, who promptly drank them right down. Nothing unusual there, of course. Except Lhermitte kept filling the glasses, and the patients kept draining them, glass after glass, even while complaining about being painfully full. They could not help but drink the full glasses of water placed in front of them. On a different occasion, the doctor took the man to his home, an apartment. He led the man out onto his balcony, which overlooked a nearby park, and they admired the view together. Right before reentering the apartment, Lhermitte softly said “museum,” and when back inside the patient proceeded to scrutinize the paintings and posters on the walls with great interest, also lavishing his attention on common objects that sat on the tables—plates and cups with little aesthetic interest—as if they too were actual works of art. On next being shown the bedroom, the man looked at the bed, proceeded to undress, and got into it. Soon he was asleep.
What was going on here? It certainly didn’t seem like these two previously normal individuals were acting with conscious intent. As Lhermitte and other neuropsychologists of his era knew (before brain scanning technology was invented), stroke victims often provide fascinating opportunities to understand the hidden operations of the mind, to part the curtain of behavior and peek into the backstage of its causes. The problems that people outwardly manifested after their strokes—in speech, vision, emotion, or memory—were important clues about the function and purpose of the region of the brain that had been damaged. So what did his two patients’ bewildering suggestibility, a kind of blind obedience to their environment, reveal?
Lhermitte continued his experiments in new locales around Paris that seemed to bring out his two patients’ bold and industrious nature. On the paths of the Tuileries Gardens, near the Louvre, they came across some gardening equipment: a watering hose and some rakes. Sure enough, both the man and woman grabbed the implements and spontaneously went to work, raking and watering, as if they themselves were gardeners. Despite their advanced years, they went on like this for hours, until the good doctor finally stopped them. On another occasion, in his medical office, the woman gave Lhermitte a physical exam, or at least her idea of what a medical exam was like. She went so far as to ask him to lower his trousers for an injection, and, good sport that he was, Dr. Lhermitte complied (and even included a photograph of the scene when he published this research). Later, when he questioned them about their behavior, neither patient seemed to notice or find anything unusual or strange about it. They appeared unconsciously compelled by the naturally occurring primes in their environment, yet they had no trouble consciously justifying all these activities—their water-chugging, art appreciation, gardening, and practicing medicine without a license. Their strokes had fundamentally changed the nature of their behavior. The brain’s fine-tuned responses learned in the past—or guided by the future, in relation to any plans or goals they might have had—had been replaced by a hypersensitivity to the present, and seemingly only the present.
Eventually, Lhermitte’s two helplessly whimsical and hardworking stroke patients passed away. Careful examination of their brains revealed their strokes had damaged or destroyed the same location in both patients—areas of their prefrontal cortex that are critical for the planning and control of action. The patients were able to receive cues to behavior coming in from their environment through their five senses, but they lacked the complementary brain region that exerted intentional control over these impulses and their subsequent behavior. We more fortunate individuals have both of these, of course, but before Lhermitte’s discovery (and Gazzaniga’s, described earlier, at about the same time), scientists were only aware of the intentional control component. Lhermitte showed that we also have this second influence over our behavior, the outside environment, which suggests actions that are typical and appropriate for our current situation; without the conscious control component in place, those environmental cues can run the show all by themselves, with no conscious input or control necessary (but highly desirable, of course). Lhermitte humbly called this “environmental dependency syndrome,” but it soon became more widely known in his honor as “Lhermitte’s syndrome.”
With the aid of brain imaging scanners that Lhermitte did not have in the 1980s, neuroscience research has subsequently confirmed his conclusions. A major review by neuroscientist Chris Frith and his colleagues at University College London concluded that our brains store our current behavioral intentions in the prefrontal and premotor cortex areas, but the areas that are actually used to guide that behavior are in an anatomically separate part of the brain, the parietal cortex. This discovery helps to explain how priming and other unconscious influences can affect our behavior, and how Lhermitte’s patients could have been so influenced by their environments without having any intentional control over those influences. Priming and outside influences on our behavior can activate the guiding behavior in one part of the brain independent of the intention to perform that behavior, which is located somewhere else entirely.
Lhermitte’s stroke patients were certainly behaving without consciously choosing or controlling their behavior, showing that the act of conscious choice is not necessary to produce sophisticated patterns of action. Rather, it seems that William James had it right (writing in 1890, he was remarkably prescient about many things) when, in his famous chapter on “The Will,” he argued that our behavior actually springs from unconscious and unintentional sources, including behaviors appropriate to and suggested by what we are currently seeing and experiencing in our world. Our conscious acts of will, James said, are acts of control over these unconscious impulses, allowing some through but not others. The “control center” was exactly the part of the brain that was found damaged in Lhermitte’s stroke patients. Every human mind, then, is a kind of mirror, generating potential behaviors that reflect back the situations and environments in which we find ourselves—a glass of water says “drink me,” flower beds say “tend me,” beds say “sleep in me,” museums say “admire me.” We are all programmed this way, to react to these external stimuli as much as Lhermitte’s patients did. Before you know it, what you see is what you do.
Now, thirty years after the French doctor published his important observations, modern neuroscience has made remarkable advances in our knowledge of the brain and of the specializations of different brain regions, and how they interact with each other. Further research confirmed that, indeed, Lhermitte’s patients were simply exhibiting in their behavior uninhibited unconscious impulses to action that all of us have. Fortunately for the rest of us, who have intact behavior control systems, we have those other operations in the brain, the will that William James described, which serves as a gatekeeper or filter on those constant impulses. So what does it mean that, deep down in our brains, we are always involuntarily generating responses that mirror not only what’s directly going on around us, but also what’s implied by the situation or context in which we find ourselves? At first glance, it might seem that we are mindless automatons, pack animals, following the rest of the herd. Aren’t we, you might wonder, singular beings whose minds only express our unique nature as we think, talk, and do? Yes, and no—but with a lot more no.
We are much more like Lhermitte’s patients than we realize or perhaps care to admit. Our hidden impulses shape how we act in the present in extensive and powerful ways. The behavior and emotions of others are contagious to us, not only when we witness them directly and in person, but even when we read about them, or see signs of them after the fact (that is, their visible consequences). The “suggestions” for how to act that come from what we are perceiving at the moment extend beyond the physical actions of others that we might unconsciously imitate, to rather complex and abstract forms of behavior that we have learned are appropriate for our particular environment (what people generally do when in a garden, a museum, or a bedroom). Subtle cues that drive us to behave both nobly and badly are continuously coming in through our senses to influence our mind as it navigates the present. And like Lhermitte’s patients, we are unaware of these influences and so believe that we are acting autonomously.
We pay a lot of attention to the other people around us. Constantly, every day, we perceive other people doing things: their gestures or mannerisms, postures and emotional expressions, their tone of voice and speaking volume, and the content of what they say or write or post on social media. And what we see and hear has the natural effect of causing us to be more likely to do the same things ourselves—to unconsciously imitate them. We are not consciously aware of intending to do so. (As Darwin said was true of our emotional expressions, so too can we mimic and imitate intentionally as well as unintentionally, but mostly we do so without realizing it.) This adaptive tendency is not unique to humans, of course. We’ve all marveled at how schools of fish, and flocks of birds, seem to move as one, in unison. This does not happen because Fred Bird looks over at Susie Bird and decides, Hey, Susie is going that way, I think I will too! The movements are too fast and the synchrony too perfect to depend on a bunch of birdbrained intentional choices. Rather, the effect must be based on a hardwired connection between perception and behavior, an immediate impulse to action driven by the perception of the other birds’ motion and direction. We humans have that same hardwired connection, the perception-action link; it’s just that we have more intentional control over it if we become aware of its influence. In the late 1990s, I set out with my students to better understand this relatively unplumbed depth of the mind. We wanted to see whether people mimicked each other unintentionally, without meaning to or trying to.
In our experiment design, we strove to create a situation in which participants would not be focused on each other or on trying to make friends, since it was known that people intentionally mimic each other more when they are trying to establish a relationship. Would imitation and mimicry occur even without this motivation? Would the mimicry follow from merely seeing what the other person was doing? To test this in our lab at NYU, Tanya Chartrand and I told the unsuspecting participants that we were developing a new kind of projective personality test, like the old Rorschach test but with photographs instead of inkblots. They would just pick up a photograph from a stack on the table between them and say whatever came to mind when they looked at it. We wanted their interaction to be about each other as little as possible, and so we focused them instead on the photographs on the table.
Now, only one of the people at the table was an actual participant; the other was part of our experimental team, a confederate who exhibited one of two types of behaviors while working on the photograph task. We had two of these confederates, and the participant worked on the photographs with first one, then the other. The key was that one of the confederates posing as another participant would, by design, cross her legs and shake one of her feet in a kind of nervous manner. The other one would not shake her foot at all, but would touch her own head and face with her hand, tug her ear, rest her face in her hand, kind of like the famous Rodin sculpture, The Thinker. So the real participant and the first confederate took turns talking about the photographs, and after a while, we broke them up and the participant went into another room and did the same task with the second confederate. We predicted that the participants would act like human chameleons, changing their own behavior to match that of the person they were with, just as a real chameleon changes its color and spots to match the background of wherever it happens to be at the moment.
We secretly videotaped each of these interactions so that we could later measure how much face-touching and foot-shaking the actual participants did in each situation. The videotapes revealed that the participants indeed copied the behavior of the person they were with, and changed that behavior when moving on to be with a different person. They touched their faces and didn’t shake their feet when with the face-touching confederate, and then they stopped touching their faces and shook their feet more when with the foot-shaking confederate. When questioned after the study, they showed no awareness of having imitated the two confederates during the experiment. The mirroring was entirely automatic and involuntary.
Examples of this chameleon effect abound in the world. All you have to do is look around. In fact, after our study was published, a CNN news crew that was doing a report on it went around New York’s Central Park filming pairs and groups of people sitting on park benches, standing talking to each other, or walking in step—countless real-world examples of people unconsciously mimicking each other. The producer told us they had no trouble at all finding examples of the effect to film for their report.
So why do we have this link between what we see and what we do? The answer lies in our past and in our genes. Infants and babies imitate and mimic others just like adults do (even more so, in fact); it is not something we have to learn how to do, or try to do. If it is an innate tendency we are born with, then it most probably served us well over evolutionary time, producing adaptive advantages that aided our survival as a species. One benefit, concluded Andrew Meltzoff, one of the pioneers of research on imitation and mimicry in infants, is that young children learn much about how to react and behave in various situations just by imitating fellow children and their adult caretakers. Infants are wide open to such imitative tendencies because they haven’t yet developed the ability to control those impulses (which starts around age three or four). In this way, they are much like Lhermitte’s patients, with just the primitive imitative responses spurred by their surroundings (as well as internal impulses from being hungry or having gas), but not the ability to suppress or inhibit them. But what exactly is happening neurologically?
It turns out that our brains are wired to take in different streams of information from the eyes: one for purposes of understanding and knowing, and the other for the purpose of behaving appropriately. The first stream flows more or less into a conscious estuary, and the second one into a more automatic, unconscious one. These two visual streams were discovered in the 1990s by the neuropsychologists David Milner and Melvyn Goodale. Each stream comes out of the retina and then heads for the primary visual cortex of the brain for further analysis. One stream then goes to brain regions responsible for our knowing, such as identifying an object, and supplies the kind of information we use to answer questions about it. The other stream goes directly to a region responsible for doing, for how we respond. This doing stream of visual information operates mainly outside of conscious awareness, while the one for understanding and recognizing is normally accessible to consciousness.
Again, this discovery comes thanks to stroke patients who willingly allow themselves to be studied and thereby advance our understanding of brain regions and functions. Milner and Goodale noticed that a stroke patient who had brain damage in one small area could not tell correctly what it was that a researcher was holding (say, a book), but could nonetheless orient his hand correctly to take it (vertically or horizontally) when casually handed the object. But other patients could say correctly what the researcher was holding up, and yet were not able to orient their hands correctly when the object was passed to them. It turned out that different regions of the brain had been damaged in the two cases; damage in one region blocked the “knowing” visual stream but left the “doing” stream intact, while damage to the other region blocked the “doing” visual stream but left “knowing” intact. We are literally born mimics.
As a result of our neural structure, though, when we mimic, we usually don’t know it. Information we perceive from another person’s actions can affect our “doing” tendency separately from our knowing about it (and our conscious minds are usually focused on other things). The chameleon effect—together with Milner and Goodale’s discovery of the two visual streams, and Lhermitte’s discovery of environmental dependency syndrome—shows that seeing can lead directly to doing in the absence of knowing. Our brains and minds evolved not only to think and know, but especially to act, and to act quickly if need be. But besides supporting us in our infancy and toddlerhood as we learn how to behave appropriately—a huge benefit in itself, to be sure—what other beneficial consequences does the chameleon effect produce? The answer has many layers, but first and foremost it greatly helps us to collaborate and cooperate with each other.
The mirroring that we engage in is a form of social glue. It holds two or more people together. Unconscious mimicry promotes bonding. My lab saw this in action in a second study we did after the first “chameleon” one. We reversed the roles of the first study, and had the confederate try to mimic, in a subtle fashion, the body posture and body movements of the research participant while they both discussed the photographs on the table between them. In the control condition, the confederate did not try to mimic the participant. Afterward, the confederate left the room and we asked the participant various questions about the experiment, including how much they liked the other participant (actually the confederate) and how smoothly they felt their interaction had gone. If they had been in the condition in which the confederate had mimicked them while they discussed the photos together, the participants liked the confederate more, and also thought their interaction had gone more smoothly, compared to in the no-mimicry condition. When someone acts similarly to us, even in a subtle way, we pick it up and like the person more, and feel more of a bond with them; also, our interaction with them goes more smoothly, our actions seeming more coordinated and in synch. Our natural tendency to do what others are doing in the moment pays off in greater feelings of togetherness and friendliness. And, just like with Dante and his poetic coldness, this effect of behavioral synchrony and bonding is apparently something human cultures have been aware of for thousands of years—unconsciously, of course.
For millennia, we have known about the bonding force of ritualized behavior, when everyone does the same thing at the same time. For most of recorded history, military bands and drummers have marched with armies to keep them in step. The Romans dragged a band along with them while conquering Europe around 200 BC. Soldiers not only marched to the rhythm of the band’s music but frequently used rousing songs to sustain them over long marches for days, weeks, and sometimes months. (During World War I, Belgian citizens were quoted as saying one of the worst parts of the German occupation of their country was having to listen to the soldiers’ constant singing.) While military units no longer march into battle with their bands, there are still many aspects of public life that we perform in unison. In religious services, for example, we often stand, kneel, and sing or chant in unison. Likewise, we all stand and sing our national anthem in unison before sporting events—secular religious events if there ever were any—in part to remind us that while we may cheer for different teams (and uniforms), we are all part of the same national community. We can even use the unconscious power of mimicry and affiliation to change the behaviors of others, including criminals from whom we need to extract information. By exploiting our unconscious urge to identify with others, law enforcement officials may be able to open new, noncoercive avenues for solving crimes. That is, if they so choose.
Unfortunately, the traditional and still most common approach of interrogators has been to create the exact opposite type of atmosphere—to threaten, bully, and even torture suspects in order to get important information out of them. One of the first things you see when taking a tour of the Tower of London, right inside the central “Bloody Tower,” where enemies of the state were taken, is the rack on which prisoners were interrogated while their bones were slowly pulled apart and their bodies broken. And still today, five hundred years later, similar torture occurs.
In October 2002, a man named Abu Zubaydah was being held in a CIA “black site” detention center in Thailand. (Two months earlier, he had been captured in Pakistan by covert American forces. He was shot in the ensuing skirmish, then medical personnel tended to his wounds to ensure that he survived.) The CIA believed—erroneously—that he was a high-level Al Qaeda operative who possessed valuable information about 9/11, Osama bin Laden, and terrorist training camps in Afghanistan. To get this information, interrogators used what the government euphemistically called “enhanced interrogation techniques” to force the prisoner into a submissive, willing state. The CIA’s enhanced technique was waterboarding—and they subjected Zubaydah to this medievally cruel practice on a staggering eighty-three occasions. It’s painful even to imagine what his experience was like, but it’s an important exercise to do so.
Likely already in an exhausted fragile state—Zubaydah had also somehow lost an eye since being detained—he would have felt interrogators fix his body to an inclined board, but he wouldn’t have seen what happened next, since they put a piece of fabric over his face. Then his captors poured water through the fabric and into his mouth. This created a drowning sensation, as well as an accompanying state of physiological panic. Between Zubaydah’s gasps and chokes, the CIA agents asked for information, then went on to pour more water over the fabric and into his mouth. The sounds were surely horrendous—gurgling, gasping, choking, moans. Then the agents increased the amount of water, blocking his airway until his body seized violently. After what must have seemed an eternity, Zubaydah would feel the board tilt up, allowing him to breathe again. Then came more demands for information that he didn’t have. But the inhumane treatment he received didn’t stop there.
In a truly upsetting 2016 article, the scholar Rebecca Gordon studied Zubaydah’s case from its sinister beginning to its outrageous lack of an end. Not only was he waterboarded; he was also deprived of sleep for days on end, slammed repeatedly against a supposedly soft wall, and forced to listen to loud sounds for psychosis-inducing lengths of time. The trauma of 9/11 had spurred the forces of the United States to inflict deep trauma on others in the name of what they believed was a higher cause. The ends justified the means, in their view. President George W. Bush used the information that they extracted from Zubaydah to justify the invasion of Iraq, and to justify the “enhanced techniques”—torture—used upon countless other prisoners during the so-called War on Terror. Except, as they later admitted, the information they got from Zubaydah using these methods was totally worthless. Everything about the interrogation approach was wrong.
We still live in a world in which terrorists kill innocent people, and in which the United States and other governments use various tactics to get information from people they detain, many of which continue to be quite inhumane. That’s the (very) bad news. The good news is that new work by forensic scientists in the field of criminal justice has focused on the unconscious psychology of mimicry and imitation, and has begun to offer an alternative and much less cruel paradigm for how authorities conduct interrogations and extract information from suspects and enemies. This new approach, they report, also gains more valid and reliable information from the person being questioned than do the hard-line traditional methods, in which the suspect tends to tell the officials anything they want to hear, just to stop the unbearable pain and distress. Imitation and mimicry signal similarity, that I share your feelings and reactions to what is going on right now. It strengthens bonding and creates rapport between former strangers. As used in rituals by large social groups for thousands of years, it facilitates sharing and cooperation. So one would think that a good way to get someone who is uncooperative to be cooperative would be to try to establish rapport with them.
Mark Frank and his colleagues at the University at Buffalo looked at how that approach might play out in the arena of crime investigation and criminal interrogation. Cooperative witnesses are the primary sources of information for investigations. If a positive feeling is established between the person being questioned and the person asking the questions, the suspect or witness is more likely to cooperate. And if he is more cooperative, then he is more likely to provide valid and valuable information. Frank and his team conducted a study of the effect of such rapport on the accuracy and completeness of eyewitness reports. They used a video of a real-life event that all the participants saw, one time only, just as a real witness would see a key event only once. It was a vivid, minute-long color video in which a male bystander suddenly ran and dove into a burning car (apparently committing suicide), with agitated sounds from off-screen bystanders, and fire trucks arriving at the end. Then the participants were interviewed using one of three styles: either sympathetically to establish rapport, abruptly and coldly, or with the standard neutrality in which most law enforcement officials are trained.
In the first group, the interviewer established rapport by adopting a more relaxed body posture, using a gentler tone, and referring to the participant by name. In the second group, the interviewer used a harsher, staccato rhythm and a stiff body posture, and did not refer to the participant by name. Then there was the traditional, neutral group. The results showed that being nice works.
Participants in the rapport condition talked longer and provided considerably more (50 percent more) correct bits of information about what happened in the tape than did the other groups. A mere five minutes of rapport building paid off in a significant gain in accurate information from the witness.
While that first study didn’t specifically use imitation or mimicry to create the rapport, Frank’s next one, with Paul Ekman and John D. Yarbrough, did just that. They developed what is called the IIE, “Improving Interpersonal Evaluations,” for law enforcement and national security. The basic premise of the IIE is that good, effective interviewers create a more comfortable environment by building rapport with a subject. One technique they use to facilitate rapport is mimicry, in which the interviewer tries to match the behaviors of the interviewee. This involves the same kinds of behaviors that Chartrand and I had manipulated in our original chameleon studies—seating posture, resting a hand on the chin. Frank and his team added vernacular mimicry, the use of the same level of vocabulary as the witness. The stated goal of using mimicry in this interrogation technique is to establish synchrony of behavior between the interviewer and interviewee, because synchrony (as in group rituals) causes increased bonding and feelings of liking, which in turn lead to a sense of trust and to cooperation—a rapidly manufactured glue between two people. In fact, the instructions on using the IIE technique explicitly suggest periodically testing whether rapport remains established, by adjusting one’s own position deliberately to see if the person being questioned follows suit (mimics back). The IIE is now widely used in the training of law enforcement officers because it is a proven improvement over traditional questioning techniques.
Interrogators are certainly not the only ones who are (or could be) making use of the positive effects of mimicry. In one Dutch study, waitresses were instructed either to repeat back their customers’ orders (the mimicry condition) or not, without knowing why they were doing so (they did not know what the study was about). Those who mimicked the order back to the customer received significantly larger tips than those who didn’t—the mimicry apparently increased the liking and bond between the waitress and customer, and the more positive experience resulted in a bigger tip at the end. And in a study conducted in the home electronics section of a large French department store, four twenty-something male salesclerks took turns either mimicking or not mimicking their customers’ questions about the various MP3 players for sale. Which customers were mimicked and which were not was determined randomly. For example: “Can you help me choose an MP3 player for my grandson?” “Hello, yes, of course. I can help you choose an MP3 player for your grandson. How old is he?” These customers were later approached in the store parking lot and were asked to rate their store experience and liking for the clerk who helped them. They were also asked whether or not they had actually purchased the MP3 player. Nearly 80 percent had bought a player if they had been mimicked, compared to 62 percent of those who had not been mimicked; moreover, there was greater liking for the clerk and for the store itself in the mimicry than in the no-mimicry condition. These field studies demonstrate the power of mimicry in liking and bonding in our daily lives.
If what we see is what we do, then it follows that the more we see a certain person in our daily lives, the more opportunities we will have to do what they do. And who do we typically see more than anyone else? Our life partners.
Another consequence of our chameleonic nature has a fascinating physical effect in the context of long-term romantic relationships. Think of your typical middle-aged or elderly couple who has been married for twenty-five or thirty years or more. They see each other every day, they talk with each other, and they are constant witnesses, consciously or unconsciously, to each other’s facial expressions and emotional reactions. If your partner is mainly smiling and happy, you likely will be, too; if they are sad and downcast, you will be more likely to be that way, too. As the two of you spend your lives together, you are unconsciously mimicking your loved one on a daily, even moment-to-moment basis. As a result, over the decades you will tend to use the same facial muscles in the same ways, sharing each other’s emotions and expressions, so that eventually over many years you will come to develop the same muscle and line patterns on your face. In other words, in theory you should actually start to look more like each other, the longer you are together. But do you?
To test this prediction, my graduate school advisor Bob Zajonc and his colleagues at the University of Michigan analyzed photos of newlyweds—individual photos of each, not those showing them together as a couple—and then analyzed photos of the same people after twenty-five years of marriage. Each individual photo was paired both with the spouse’s picture and with pictures of strangers of the same age, and the similarities were rated by a group of people who did not know any of the subjects, or that any of them were married to each other. The raters judged that members of a couple looked more alike than two people who were strangers to each other. More important, the couples were rated as looking more alike after twenty-five years of marriage than when they had married, consistent with the explanation that they came to look alike because they paid close attention to each other and shared the same emotional reactions to their many life events. In addition, the more the two members of a couple were rated as looking alike, the happier the couple reported themselves to be. I tell the students in my classes to be careful who they marry, because they will end up looking like them! Imitation isn’t just the highest form of flattery—it’s a love potion, too.
However, our imitation and mimicry hardwiring does not cause us to reflexively trust and cooperate with just anyone. For example, what if a person has shown by her behavior that you can’t trust her? Recall from Chapter 6 the study led by Oriana Aragon, in which the brain waves of participants were measured while they observed the finger movements of another person. The particular brain waves we measured were part of what is called the mirror neuron system, which is one of the brain’s very first responses to our perception of other people’s behaviors, and helps produce the tendency to make (mirror) those same movements. We found that this system normally became active when the participant observed the finger movements of another person, but it did not become active—this very first, immediate stage of imitation—when the participant watched the movements of someone who had just betrayed her in an economics game. Our brain’s imitation mechanism is sensitive to who can and who cannot be trusted, and this happens on a level of which we aren’t even aware. After all, it wasn’t that the participant chose not to imitate that deceitful person. Rather, the unconscious machinery supporting imitation shut down so early that the participant never had the chance to make that choice.
We all want to have positive social relationships and to not be alone or isolated. But life doesn’t always go the way we’d like it to, and in the school of hard knocks, sometimes we are excluded or rejected by others, like the poor kid on the playground at recess whom no one picks for their team. Or, as adults, when a group leaves after work to go get a drink together and no one thinks to ask us along. Now that’s cold! Research has shown that when we find ourselves in such situations, we become more motivated than usual to try to form new bonds with the people we meet, and at those times we are also more likely than usual to mimic and imitate others. It is as if our goal of making friends and getting others to like us already has the benefits of the chameleon effect wired into it. A similar dynamic comes into play during romantic pursuit, which, as we all know, often requires significant work in order for us to attain our goal. Evolution has also folded the chameleon effect into our instinctive bag of tricks for courtship. For our selfish genes, dating and mating are all about reproduction, getting those genes safely into the next generation. Thus it makes sense that in one experiment, men engaged in greater imitation and mimicry of a woman they were interacting with if, unbeknownst to the men, the woman happened to be at the most fertile stage of her ovulation cycle at the time.
The other side of the same coin is that we will tend to resist these external influences on our behavior—both the chameleon effect of imitating others, and the Lhermitte effect of doing what the situation naturally calls for—if giving in to them conflicts with a goal or important motivation. At one conference about twenty years ago, just after I’d presented our chameleon effect findings, the Scottish psychologist Neil Macrae went up to the podium for his own presentation. He asked everyone in the room to raise their hands if they’d seen the film The Full Monty. This was a popular movie at the time, about a group of downtrodden English men who decide to put on a striptease show. Many in the audience had seen it and raised their hands. Then Macrae asked them to keep their hands up if, during the famous scene in the movie where the dancing male leads take off all of their clothes onstage—the proverbial “full monty”—they too had stood up in the theater and taken off all their clothes. The audience laughed, and only a few jokers kept their hands up. But everyone got his point.
Behavioral contagion effects are not obligatory and uncontrollable, because, unlike Lhermitte’s patients, we have a good deal of control over whether we do the same thing as the other person or not (if we realize we are doing it) and can also imitate intentionally if we want to. Recall that Darwin said the same thing about our emotional expressions. At receptions following talks I gave on the chameleon effect, I witnessed firsthand many attempts by people to control the effect, once they realized they were engaging in it themselves. Because I’d just been talking about the effect for the past hour or so, those at the reception were far more likely to notice themselves doing it, and it was fun to watch everyone trying very hard not to imitate each other. I would stand in front of a person chatting with my arms crossed on my chest. He would be doing the same, then realize it and suddenly jerk his arms into some other position! (And we’d both laugh at that point, knowing what was going on.) As Macrae’s example suggested, chameleon effects will be less likely to occur when there are perceived costs to doing what others are doing. Remember as a child when you would pester your parents to let you do something, telling them all your friends were doing it? And remember their canned reply? If your friends all jumped off a cliff, would you want to, too?
Well, no, we wouldn’t. Macrae and his colleague Lucy Johnston demonstrated this limitation of behavioral contagion effects in a two-part study. First, they primed participants with words related to helpfulness using the standard language-test procedure, creating a Lhermitte-like impulse to help. The participants were then thanked and left thinking the experiment was over. But on the elevator leaving the building, the real part of the experiment took place. A person on the elevator who was part of the experimental team dropped many pens on the floor, seemingly accidentally. What happened next? Those who had been primed with helpfulness words were more likely to bend down and help pick up the pens than were participants who had not been thus primed. The help-related words on the language test had their intended effect of increasing the participants’ tendency to help—except when the pens were messy and leaking. In that condition, very few participants wanted to help pick them up, even if they’d previously seen the helpfulness-related words on the language test. The costs or disincentives of “doing what others are doing” came into play and blocked the unconscious influence.
This “leaky pen” study also illustrates that at any given time, we can be receiving unconscious suggestions regarding more than one type of behavior, and it is possible for them to be in conflict with each other. The pen-study participants primed with helpfulness had the impulse to help (in the non-leaky-pen condition, they were more likely to help than the unprimed participants), but they also had an even stronger impulse not to pick up the messy, leaky pens, as if they were transmitters of germs or disease. You may recall a cruel yet revealing hidden-camera stunt in which the show’s producers stuck a hundred-dollar bill to a piece of dog poop and left it on the sidewalk. Different people, it turns out, have different cost thresholds when faced with such a dilemma. (And different degrees of need for the money as well.) Some people grabbed the poopy Ben Franklin, while others didn’t. Unfortunately, it is not only cooperative behaviors that can be increased by cues from the outside environment, but rude and antisocial behaviors as well.
Just as Macrae and Johnston showed that a subject’s helpfulness could be increased in the elevator merely by having just seen and used words related to helpfulness, our lab showed that rudeness (as well as politeness) could be increased in the same fashion. Our participants were NYU students who came into our lab on Washington Place for an experiment on “language ability.” They first completed a short scrambled-sentence test that featured words related to rudeness, words related to politeness, or, in the control condition, words related to neither concept. They were told that when they had completed that test they were to come down the hallway to find the experimenter, who would give them the second task of the study, after which they would be done.
When they had completed the language test and walked down the hall, however, the experimenter was busy talking to someone else, apparently another participant in the study. They could see the experimenter standing in a doorway, talking into a room from which another person’s voice could be heard. This other person (who was actually part of the experimental team) kept asking questions about the task she’d just been given, the experimenter would answer, and this conversation continued with the experimenter’s attention focused entirely on the other person while the actual participant stood nearby. We wanted to see how long the participant would stand there waiting to get her second task before interrupting the conversation, and how “polite” or “rude” her response would be. As soon as the participant had approached him in the hallway, the experimenter secretly started a silent stopwatch in his pocket.
While the participant stood there waiting to be given the second task, the experimenter kept talking. This went on until the participant finally interrupted him to ask for the second task—or until ten minutes had gone by, at which point he stopped the clock and gave her the second task. (As a matter of fact, when we first proposed the experiment to the university committee that screens and approves psychology studies, we did not include this ten-minute time limit, and they told us to put it in, since otherwise the participant might be standing there forever! That possibility had never occurred to us, because after all, New Yorkers are not generally known for their patience and politeness. We had just assumed everyone would interrupt in a matter of minutes, if not seconds. As it turned out, we were quite wrong about that.) The important measure was how long the participants in the rude and the polite conditions waited before interrupting. As we had predicted, those who had seen rude-related words on the first language task were both more likely to interrupt the experimenter (most of them did) and did so faster than those who had seen the polite words. But what surprised us was that most of those in the polite condition never interrupted at all, and just stood there patiently for the entire ten-minute maximum.
Researchers at the University of Florida have taken this rude-priming effect out of the laboratory and into the business school classroom. They showed that in the workplace, the rudeness of others is “contagious.” Like the common cold, it spreads from one person to another. In a negotiation class, the rudeness of a person’s bargaining partner one week caused the recipient of the rudeness to be rude to a different person the next week. The researchers also showed that witnessing a work team leader’s very rude treatment of a team member had the same kind of effect as our “rude” language test: it primed the concept of rudeness in the participants’ minds. This is why reading rude-related words has the same kind of effect on behavior as witnessing actual rudeness in a real-world setting. Both cause the behavior concept (in this case, rudeness) to become more active in your mind, and the more active it is, the more likely you are to behave that way yourself.
The Florida studies showed how the chameleon effect can affect the workday climate for many people. The behavior of your colleagues and coworkers—and your own behavior as well, of course—can spread contagiously throughout the office. The researchers concluded that people may be largely unaware that the source of their own rude and aggressive behaviors is the rude behaviors of others that they witness—and that the phenomenon of negative behavioral contagion may be much larger and more consequential for organizations and society than we realize. Sometimes the contagious virus of antisocial behavior comes not from the behavior of other people, but from the visible consequences of their behavior that they leave behind. I’m talking broken windows, graffiti, litter, signs of disregard and even contempt for one’s city and neighborhood. I’m talking New York City in the 1970s, ’80s, and ’90s.
Ah yes, New York. It was 1995, and in my lab we had just completed the experiment on how rudeness or politeness primes caused people to interrupt or not. As with Lhermitte’s stroke patients, the cues to our participants’ behavior had come in from the environment and influenced how they behaved in the ensuing situation. But isn’t this what happens to us constantly, every day, while out on the streets of our cities, on the back roads of our farmlands, or in the diners of our small towns? The cues as to what other people are doing, how they are acting, are constantly flowing in through our senses. New Yorkers are known as some of the nerviest, brashest citizens in the world, yet when politeness cues had just streamed into their minds, they were capable of the utmost deference and decorum. (Temporarily, anyway; let’s not get too carried away here.)
Deference and decorum were in short supply in New York City during the first fifteen years I lived there. The city had hit an all-time low. The Big Apple was infested with the worms of decline and had become a wasteland of urban abandonment. The U.S. economy was floundering, and the country’s most iconic city was nearly bankrupt. The result was a place that was collapsing both physically and morally. Many landlords, tired of the financial strain of upkeep and management, burned their buildings down to illegally collect the insurance money, and there the ruins remained, haunting husks of displaced lives. Garbage fires burned and the homeless proliferated on the streets. Heroin addiction ravaged communities, and violence and crime were everywhere. There were constant muggings on the subway. Times Square was a neon kingdom for the sex trade, and prostitution spiked in all of the boroughs. Even the Statue of Liberty wasn’t her same old beautiful self, as the water at her feet took on a greasy, iridescent sheen from oil contamination in New York Harbor. Many wondered how such a great city could have descended into such a dismal abyss.
By pure coincidence, we were conducting our rude-polite study at about the same time newly elected mayor Rudy Giuliani was enacting his plan of “enforcing the small stuff.” In line with what is called broken windows theory, the idea publicly espoused by Mayor Giuliani was that if you crack down on the small but visible crimes such as vandalism, littering, and even jaywalking (cops really did start giving out tickets for people crossing big streets, like Fifth Avenue, in the middle of the block), then the larger, more serious crimes will also decrease. If people saw cleaner streets, intact buildings and storefronts, and fewer of their fellow citizens engaging in small-potatoes civil disobedience such as jaywalking, they would have greater respect for each other and for the laws in general. And Giuliani’s plan, as pie-in-the-sky as it seemed to many at the time, was entirely consistent with the emerging psychological research on how external environmental cues can directly impact social behavior. Our mental representations of concepts such as politeness and rudeness, as well as innumerable other behaviors such as aggression and substance abuse, become activated by our direct perception of these forms of social behavior and emotion, and in this way are contagious. And people in New York were seeing a lot of hostility and addiction in the 1970s and ’80s. And garbage.
Lots of garbage. There was garbage in the streets and graffiti covering walls and trains. But could this really and truly have affected how millions of New Yorkers behaved? More to the point, if all the garbage and detritus was cleaned up, would that actually help to decrease the rate of violent crimes? (If you had answered yes to this question in 1995 I would have told you about a certain bridge over the East River that I was willing to part with at a bargain price.)
But wait—maybe it could. Maybe it even did. Consider a study reported in Science magazine in 2008 by a group of Dutch researchers who altered an actual city street so that its walls were either covered with graffiti or painted over and graffiti-free. After this setup, they placed advertising circulars around the handlebars of all the bicycles parked in the racks on that street. (The Dutch are big on riding bikes everywhere, so there were a lot of them.) Then the researchers just waited to see what the bike owners would do with the circulars. Lo and behold, when there was lots of graffiti on the walls, the bike riders tossed more circulars down on the street, creating more litter. When there was no graffiti, there was much less littering. No one saw anyone actually painting graffiti on the walls, so this wasn’t a chameleon effect per se, but the signs or results of people having painted all over the walls were certainly there. The signs of others’ antisocial behavior—the graffiti—primed another form of antisocial behavior in the bike riders, that of littering. It was a kind of Lhermitte effect.
The Dutch researchers then showed the same kind of effect in other ways, again out in the real-world city environment. They put the same circular flyers under the windshield wipers of cars in a parking garage. If shopping carts were scattered around the parked cars, clearly taken away from the nearby grocery store despite the many signs asking people please not to do this, the returning drivers again were more likely to litter, compared to when no shopping carts (no cues to antisocial behavior) were present.
In city dwellers’ collective unconscious, one could say antisocial behavior spreads like a virus. What people saw is what they did themselves. But isn’t this looking at the glass half-empty? One could just as well say it is half-full—that the bike riders who saw the graffiti-free wall littered less, and the drivers in the parking garage with no “taken” shopping carts littered less, in both cases because of the lack of antisocial behavior cues in those conditions of the study. So let’s get back to Giuliani’s grand experiment of the 1990s. How did it turn out?
I happened to be away from the city, on sabbatical leave in southern Germany, for a year in the middle 1990s. When I got back I was amazed by the change that had occurred just in that short time. I was expecting to have the same culture shock I’d experienced coming back from sabbaticals in the past—after becoming accustomed to the low crime and clean streets of small-town Germany, having to readjust to noisy, dangerous New York all over again. But this time the shock was the lack of culture shock. The streets seemed much cleaner, the people even a bit friendlier. The change was especially noticeable to me because I’d been away and hadn’t experienced the citywide behavioral shift gradually, as my apartment building neighbors and psychology department colleagues had, but they had all noticed it, too.
The crime statistics from this period supported my impression. In the mid-1990s, New York City saw a dramatic decrease in serious crimes—assault and murder declined by an astounding two-thirds. There are of course other theories and additional reasons for this dramatic drop, but it is also hard to argue with the positive consequences of a cleaner and more civil daily environment, with less visual evidence of the small crimes the broken windows theory was named after. The Dutch research findings likewise support the idea that New York’s low was so low at least partly because of behavioral cues that “breaking windows” was okay, and that the city’s resurgence in turn resulted from a new culture of cues for positive behavior being instituted instead.
As I’ve mentioned, I no longer live in New York City, or any city for that matter. I live in the farm country of central Connecticut, along with my family, our dogs and cats, and all the other animals that inhabit the area. Quite a change from twenty-plus years in Manhattan and Brooklyn, significantly reducing my exposure to the behavior of other people—in person, that is. Today, however, the Internet and social media reach everywhere, rural and urban areas alike, and new studies are showing that online moods, emotions, and behavior turn out to be just as contagious as offline, “real life” (in-person) behavior—maybe even more so. The unconscious mirroring of what other people do does not turn off just because the behavior we perceive is in digital instead of physical form. In fact, thanks to social media connecting us to one another far more widely than ever before, today there are many more opportunities for contagion effects than there used to be.
Birds in a flock move as one because they perceive each other’s movements and speed, and there is a direct link in their brains between perception and action. We humans are similarly influenced by our peers’ behavior, but unlike birds, we can see and hear what others do indirectly and virtually, in movies and videos and television, and through books and magazines and newspapers. Now these media have become enmeshed in our real lives in transformative new ways, as we are no longer just the passive consumers of images and texts, but constant creators of them as well. The media has become our real life. We can keep track of a very large group of our past and present friends on Facebook, Twitter, Instagram, or Snapchat, and they can keep track of us. And we can also follow the lives, musings, and behavior of celebrities as well. In “following” others we are exposed not only to their behavior and opinions but to their moods and emotions. As a result, the potential of the chameleon effect is in fact far greater today than it was when we first studied it in the 1990s.
Sociologists James Fowler and Nicholas Christakis have conducted several studies of behavior in large social networks. These demonstrate how many different forms of behavior and emotion spread over social connections on the Internet, so that you are often affected indirectly by people you don’t even know. Say, for example, you know Bob, Bob knows Dale, Dale knows Mary, and Mary knows Wayne, but you don’t happen to know either Mary or Wayne. All the same, because of their effect on the people you do know, whether Mary or Wayne is happy, cooperative, depressed, or obese makes it more likely that you will be, too.
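To make the degrees-of-separation idea concrete, here is a toy sketch, purely illustrative and not the researchers’ actual model, in which each social tie passes along only a fraction of a neighbor’s mood change. The chain of names and the 50 percent decay factor are assumptions made up for this example:

```python
# Toy model: a mood shift "travels" along a chain of acquaintances,
# shrinking by a fixed fraction at every additional degree of separation.
# (Illustrative only; the decay factor of 0.5 is an assumption.)

def spread(chain, source_mood, decay=0.5):
    """Return each person's mood shift, given a mood change at the chain's start."""
    shifts = {}
    effect = source_mood
    for person in chain:
        shifts[person] = round(effect, 3)
        effect *= decay  # influence weakens at each link
    return shifts

# Wayne -> Mary -> Dale -> Bob -> You: Wayne's happiness (+1.0) reaches you
chain = ["Wayne", "Mary", "Dale", "Bob", "You"]
print(spread(chain, 1.0))
```

The only point of the sketch is that an effect can reach you through intermediaries while shrinking at each step, which fits the observation that contagion tends to peter out after about three degrees.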
All of these emotions and behaviors have been found to spread and be more likely for a given person if people in their social network express those emotions, engage in those behaviors, or have those same qualities. The more you are in touch with people who are happy, the happier you are; with people who are overweight, the heavier you will tend to be. When people in your network cooperate with others, you are more likely to as well, and when they seem very sad, you become a bit sadder, too. The moods and behaviors of people to whom we are connected by friendship, family, or the same workplace are likely to “infect” us. The contagion is usually at least three people deep—three degrees of virtual separation—so that people you don’t even know are affecting your behavior and emotions, because they know somebody who knows somebody that you know. Of course, it also works in the other direction.
The average person has more than three hundred Facebook friends, so there is a great capacity for our own moods and behaviors to affect a lot of people in return. Researchers at Facebook measured how positive or negative the posts in a given Facebook user’s newsfeed were and showed that the more positive or negative the posts the user read, the more positive or negative the user’s own posts became—up to three days later. James is sad and depressed, and this shows in his Facebook posts; the posts of his friend Mary are affected, but then yours will be, too, because you know Mary, even though you don’t know James, and this can happen up to three days after James’s sad post. Perhaps, then, we should become more mindful of the types of people we expose ourselves to on social media.
In a similar but more controversial study, Facebook researchers deliberately manipulated the positivity or negativity of the newsfeed for nearly 700,000 of their users. They did this not by creating fake posts but by filtering which of the many posts by the users’ Facebook friends were put into their newsfeeds. (One thing I learned from this study was that most of us never see all of our friends’ posts, because that would be an overwhelming number we couldn’t possibly keep up with. Consequently, Facebook’s programming filters all of those posts each day by certain criteria and only puts a subset into what you actually see.) For some users, their newsfeed was deliberately programmed to be a bit more negative than usual, and for others a bit more positive. Then the researchers measured how this change in the mood of the feed affected the recipient’s own mood, as shown by the content and tone of their posts. They found that people indeed made more positive posts themselves if they were exposed to more positive posts of others, and people made more negative posts if they were in the group given more negative newsfeeds. Altogether, this research has shown that all types of behavior, including overeating and being cooperative, being rude or being polite, being a litterbug or not, are just as contagious over social networks as they are in person. Unconscious mimicry doesn’t require physical proximity.
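How does one measure the “positivity” of a post at all? The researchers relied on counting emotion words in the text. Here is a rough sketch of that kind of word-count scoring; the word lists and example posts are invented for illustration and are far cruder than the tools used in the actual study:

```python
# Rough sketch of scoring posts by counting positive vs. negative words.
# The word lists here are illustrative stand-ins, not a real lexicon.

POSITIVE = {"happy", "great", "love", "wonderful", "fun"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "lonely"}

def mood_score(post):
    """(positive words - negative words) per word in the post."""
    words = post.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

feed_a = ["What a wonderful day, I love this city!", "Great fun with friends."]
feed_b = ["Feeling sad and lonely today.", "Awful weather, terrible commute."]

avg = lambda feed: sum(mood_score(p) for p in feed) / len(feed)
print(avg(feed_a) > avg(feed_b))  # the upbeat feed scores higher
```

Once each post has a score, comparing the average scores of users given more positive versus more negative newsfeeds is simple arithmetic, which is essentially how the contagion effect was demonstrated.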
The same principle applies when we read material besides social media, such as novels, in which we lose ourselves in a different world, which we see from the perspective of the story’s protagonist. Researchers at Cornell University had participants read a story in which the female protagonist goes on and stays on a diet to lose weight before a beach vacation in Cancún, and showed that this activated the readers’ own dieting goals—unless in the story the protagonist was said to have achieved the weight-loss goal, in which case the readers’ own dieting goal was not active. So the readers’ own goals became active or not depending on whether the protagonist’s goal was active or not. A second study found this same result again, but also that the more the reader identified with the protagonist, the more she wanted to lose weight. This happened, however, only when the character was successful in losing weight, not when she failed. Apparently, it is true not only that “what you see is what you do” but that “what you read is what you do.”
The social setting we are in at any given moment also signals to us how we are supposed to behave, and these norms unconsciously guide us, effortlessly constraining our behavior to fit in and be appropriate (and not stand out and attract the disapproval of others). In a landmark 1950s sociological study by Roger Barker, drawing on many months of careful observation of the citizens of “Midwest” (which turned out to be Oskaloosa, Kansas), by far the largest determinant of how people behaved was not their individual personalities or character, but where they were at the time—in a church service, at the barbershop, at home, at a park, at a restaurant, on the highway. Everyone is quiet in church and stays put, runs around and is a bit noisy at the park, waits patiently for the meal to arrive at the restaurant, sits less patiently in traffic on the highway. The similarities in behavior across different people in the same setting are far greater than the similarity in behavior of the same individual across different settings. If you keep your eyes open to how your own and others’ behavior clearly changes as you move from setting to setting, you can’t help but observe this powerful influence on human behavior. You’ll see what you do.
A clever demonstration of the unconscious nature of the setting effect comes from another Dutch study, this time at a university, involving the college library. As we all know, you are supposed to be quiet in libraries, since most people are there to read and study. In the experiment, college students were asked to take an envelope to another destination on campus. If the destination was the university library, they were quieter and spoke more softly to others on the way there than if they were heading to a different destination, such as the university cafeteria. The effect of having the destination “library” active in their minds (even though they were not in the library yet, but in busy hallways) was very similar to how “museum” or “doctor’s office” or “garden” influenced the behavior of Lhermitte’s stroke patients. Similarly, the behavioral norms when we are out on the street also influence our chameleonlike tendencies, causing us to do as others are doing.
For many of us, the most common settings in our lives are home and work. We are often a very different person in these two places, because there are sets of behavior appropriate for home but not for the office, and vice versa. The different sets of people we interact with in the two settings have different expectations of us, and we might even have distinctly different personalities in the two places. I know that as a dad at home I am the continual font of many awful jokes that I’d never dream of making at work. (But that’s what dads do.) A 2014 set of studies by the Swiss economist Ernst Fehr and his colleagues examined how these different situated identities, at home versus at work, operate unconsciously to produce quite different behavior within the same person, even immoral behavior in one case and moral in the other. They studied a quintessentially Swiss figure: Swiss bankers.
Fehr and his colleagues did so via an online experiment conducted on the weekend, while these investment bankers were at home, not at their place of business. Their theory was that the bankers had a situated identity at their workplace that was different from their identity at home. For some of the bankers, their workplace identity was primed at the start of the experiment when they answered several questions about their office environment; the other group of bankers was not asked about their workplace. Then all of them played a coin-toss game in which they won twenty dollars for each successful coin toss (for each toss, either heads or tails was designated the winning outcome), with the catch being that they themselves reported whether each toss had been successful or not. No one but them would know if they were being honest. This made cheating to get more money a very easy thing to do. But the researchers could look at the overall percentage of successful coin tosses reported by the two experimental groups and compare it to the 50 percent expected by chance. It was significantly higher for the bankers who had first answered questions about their weekday workplace, suspiciously so, with many more successes than would have been likely to happen by chance alone. The nonprimed group, on the other hand, was actually quite honest in their self-reporting of the coin tosses, describing a success rate much closer to the 50 percent expected by chance. Remember that these participants were all investment bankers, the same type of people in both conditions of the experiment, just randomly assigned to first think about their workplace or not.
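The statistical reasoning behind “suspiciously so” is simple binomial arithmetic: with fair coin tosses, the chance of a group honestly reporting far more than 50 percent successes can be computed directly. The sketch below is mine, not Fehr’s analysis, and the toss counts are invented for illustration:

```python
# Exact binomial tail probability: how likely is a reported success count
# this high if every toss were fair and honestly reported?
# (The numbers 200 and 130 are hypothetical, not the study's data.)
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of k or more successes in n independent tosses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, reported = 200, 130  # 65% reported successes across a group
tail = prob_at_least(reported, n)
print(f"Chance of >= {reported}/{n} successes if everyone is honest: {tail:.6f}")
```

With honest reporting, a group’s success rate should hover near 50 percent; as the sketch shows, a reported rate of 65 percent over a couple of hundred tosses would be vanishingly unlikely to occur by chance alone.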
The moral behavior of the bankers was markedly different depending on which of their personal identities, corresponding to the two main settings of their lives, was currently active. Morally speaking, each was a different person at work than at home. In this way the Swiss bankers were like the Asian-American girls in the Harvard preschool—if you primed one identity, they behaved one way (good at math, honest); if you primed the other identity, the same person behaved in a very different way (bad at math, dishonest). In both studies, these effects occurred without the participants being aware of or intending them. But identities can be primed for the better as well.
We’ve all gone into a grocery store and been offered recipe samples that encourage us to try a new dish or type of food. Dutch psychologist Esther Papies and colleagues went into Dutch grocery stores and handed out recipe flyers to obese shoppers as they entered the store. For some of the shoppers, the flyer contained prime words related to dieting and healthy eating; for other shoppers these words were not included. The researchers then waited until the shoppers had done their shopping and gone through the checkout lines, approached each of them, and asked to take a photograph of their cash register receipts, so they could see how many unhealthy snack foods, like potato chips, the shoppers had actually purchased. The primes in the recipe flyers caused a remarkable drop in the purchase of snack foods, even though very few of the shoppers could remember what was on the flyers they’d looked at before they started shopping, and none believed that the flyers had influenced what they had bought in the store. (Imagine yourself in their situation—would you believe it?) Unaware of the influence or not, the obese customers who received the healthy-eating primes bought significantly fewer snacks.
In their next study, Papies and colleagues moved on to a local butcher shop permeated with the tantalizing smell of grilled chicken. When customers entered the shop, a poster attached to the glass door, visible from the outside, presented a weekly recipe from the butcher that was “good for a slim figure” and low in calories. On two mornings and two afternoons of the four days of the study, this poster was on the door, and the other mornings and afternoons it was not (that was the control condition). Papies and her colleagues observed the number of free meat snacks customers sampled from a tray in the store, after they had been primed with the dieting goal or not. When they were leaving the store one of the researchers asked them some background questions, including whether they were currently dieting and their current height and weight. As in the grocery store study, the poster—the dieting prime—caused restrained eaters (obese and currently on a diet) to eat about half as many snacks in this butcher’s store, compared to restrained eaters when that poster was not on the front door. The poster did not affect how many snack items the nondieters ate in the store.
Obesity is a tremendous health and economic burden today in the United States and much of the developed world, so real-life priming interventions such as these are much needed. Yet a much more powerful and pervasive outside influence on what we do—advertising—does not always have our own best interests at heart. The makers of snack food and other unhealthy choices are trying hard to get you to eat their foods instead of eating healthily. Research proves that their ad campaigns work. Pictures of yummy advertised foods have been shown to directly activate eating-related areas of the brain associated with taste and reward. We showed the power of ads on eating behavior in a study led by Jennifer Harris of the Rudd Center for Food Policy & Obesity. Adults as well as a group of eight-year-old children participated one at a time, and watched a five-minute clip of a television comedy show, Whose Line Is It Anyway? A bowl of Goldfish crackers and a glass of water were set out for them while they watched. We edited the show so that it included food ads or not. After it was over we weighed the bowl of crackers to see how much the participant had eaten. Both the children and adults ate considerably more of the Goldfish when there were food ads in the show than when there were not. Food ads, then, act like unconscious behavioral suggestions and can influence our eating and other consumption, especially if we are not aware of their power over us.
The powerful link between television ads and our behavior was shown recently by a large national survey of more than one thousand young drinkers (they reported having had some alcohol in the past month), ages thirteen to twenty, conducted by researchers at Boston University’s School of Medicine and Public Health. A strong relation was found between the number of alcohol ads these kids had seen on television and how much they themselves drank. Exposure to ads for sixty-one different alcohol brands was measured; these were the brands that advertised on the twenty most popular nonsports television programs watched by underage youth. (And of course there is a tremendous amount of alcohol advertising on sports broadcasts as well.) Underage drinkers who didn’t see any alcohol ads had about fourteen drinks per month on average, but those who saw the average amount of advertising drank about thirty-three drinks per month. And a separate study found that kids aged eleven to fourteen see between two and four of these ads every day, on average. The researchers concluded that the more ads for a given alcohol brand the teenagers were exposed to, the more of that brand they consumed.
Food and drink ads on TV and other media give us the idea or impulse to eat and drink, so we might want to second-guess the real reasons why we find ourselves heading to the refrigerator so often. And we also might want to monitor more closely the kinds of ads our children are being exposed to.
Even well-intentioned public service announcements to get us to stop smoking may have the opposite effect, because these contain cues about smoking. Many people are trying to stop smoking—for good reason, as worldwide more than five million deaths a year are caused by tobacco smoking. Yet attempts to help people quit or at least reduce their smoking often fail, not only because of the highly addictive components of tobacco smoke, but because the very intention to quit activates the same mental pathways and brain networks that are related to the craving for the cigarette. Neuroscientists have revealed this unintended consequence of the intention to stop smoking, through brain imaging research in which the same brain regions were found to become active in both cases—when people were craving a cigarette and when they were focusing on trying to quit smoking.
Dan Wegner first discovered these “ironic” unintended consequences of trying not to do something. Perversely, when we try not to do something, we have to keep in mind the idea of the very thing we do not want to do. This keeps the unwanted behavior active in our minds, perhaps more so than if we were not actively trying to avoid it. Our attempts at suppression work fine as long as we are paying attention and actively trying to suppress the behavior, but if we are distracted, or our attention wavers (as when we are tired), then boom! We actually become more likely than usual to do exactly what we didn’t want to do, because it is so active, accessible, and ready to go in our minds. Wegner and his colleagues showed this in many clever studies, such as telling their participants not to think about a white bear and finding that, with distraction, they were much more likely to think about a white bear than if no one had brought up a white bear in the first place. (Try this for yourself. Tell a friend not to think about a white bear, and see how much more often they do think of one compared to a different friend you’d never mentioned white bears to.)
The same thing happens with well-intentioned antismoking signs and antismoking television public service announcements (PSAs). Again, these are messages telling people not to do something. But in so doing, they remind people of that “something,” which they might not have thought of otherwise. Often these PSAs show people smoking, which can have a “what you see is what you do” effect and increase instead of decrease smoking tendencies among viewers. Cigarette companies can no longer advertise their products, but the public service campaigns that they sponsor, featuring the words smoking and cigarettes and other visual and auditory cues to smoking, have been shown to actually increase smoking intentions and behavior among young people.
Seeking to understand this phenomenon more deeply, our lab experimentally demonstrated the perverse unintended effect of antismoking messages. In another study led by Jennifer Harris, fifty-six regular smokers watched a short segment of a television comedy show. For some of them, the commercial break halfway through the segment included an antismoking PSA (either from Philip Morris’s QuitAssist campaign or from the American Legacy Foundation’s “Truth” campaign); for others, it did not. After the show, all participants were given a five-minute break, and we watched to see how many of them would take the opportunity to go outside and smoke. Significantly more of the smokers who had seen an antismoking PSA went outside to smoke (42 percent for the Philip Morris PSA and 33 percent for the “Truth” PSA) than did smokers who had not seen one (11 percent). By presenting strong cues about cigarettes and smoking behavior, these antismoking messages had the unintended consequence of increasing smoking instead. What we see is what we do, especially when we are passively watching television or surfing the Web and not paying close attention to the messages bombarding us.
* * *
The mimicking nature of our mind isn’t inherently good or bad—it depends on the suggestions we’re receiving from the outside world in the present, like the cues that Lhermitte’s eccentrically exuberant patients picked up on. Our chameleonlike nature makes us more likely to do what other people are currently doing. This effect extends to what we see people doing in advertisements, as well as to our knowledge of what people generally tend to do in standard settings and situations. Some situations induce us to be more polite and peaceful, others to be more rude and hostile. Some imitative behaviors, such as dishonesty, can lead to financial meltdown, as with greedy investment bankers, while others can lead to the renaissance of a city, as when Mayor Giuliani and his fellow New Yorkers “sweated the small stuff.”
But the effect of our behavior on others, and theirs on us, ultimately depends on us. In practical terms, what you do really does influence the behavior of those around you and the general social climate. (This is especially true if you are a boss or a leader of others; they will take their cue for how to behave from how they see you behave.) You really can (and do) “pay it forward” by setting a good example, by performing visible acts of kindness, such as holding the door for others, letting drivers trying to get out of a blocked lane of traffic merge in front of you, putting some change in a homeless person’s proffered cap, or carrying your unwanted advertising flyer over to the corner trash can. Just as with voting, I suspect that many of us don’t bother to do these small things because we don’t think they really matter much. After all, each of us is just one person in a world of billions, one drop of water in a vast ocean. But the impact of just one person, the effect of just one act, multiplies and spreads to influence many other people. A single drop becomes a wave. The reverberations of a single act can be felt for days. Why not set that wave in motion whenever you get the chance?