When we are children, around age five or six or so, our world starts to seem less confusing and intimidating. We’ve started to figure it out, to know right from wrong, and to be able to anticipate what is going to happen next. We feel proud to be from our town, our state, and our country. We know what is important to respect and value, what might be a funny prank to pull on a friend and what might not be so funny, what we can get away with and what we can’t. We don’t really think about any of these things too much; they are just the way things are. What we aren’t aware of at this young age is that our ways of thinking, feeling, and acting are not the only ones possible. We have absolutely no clue that it all could so easily have been very different for us. If we had been born in a different country, with different values and beliefs than our own, then we would have become a very different person.
You can take any human infant at birth to whatever far corner of the world you choose and that child will learn the language and the culture and the ideology of that country just as if he had been born there in the first place. The fact is obvious, but that does not make it any less remarkable. You would have been a very different person in many respects had you been born elsewhere, into a different culture with a different language. These days, in our highly globalized, topsy-turvy world, it’s not unheard of to find, say, a person with Asian ancestry going back thousands of years whose native language is Spanish. Peru, for example, has a large community of people with Japanese heritage. And then there is the strange case of two brothers born to an American father in a country that wasn’t the United States. Naturally, those brothers learned the language of that country perfectly, and they also learned many other things. What they absorbed from their surroundings illustrates that the hidden mind is nurtured by the cultures we live in, ranging from the culture of our own family to the culture of the entire nation.
These two sons of an American father were born and grew up in North Korea.
In 1962, James Dresnok was an American soldier stationed in the Demilitarized Zone, or DMZ, on the border of North and South Korea. The Korean War had ended nine years earlier, and this no-man’s-land dividing the Communist North from the capitalist South was part of the conflict’s legacy. Back home in the United States, Dresnok’s wife had recently left him. His life was in shambles.
One night, perhaps restless and lonely, or maybe just bored, Dresnok snuck off his base with forged leave papers—and got caught. Instead of waiting around for his looming court-martial, he opted for a radical solution that would rewrite the trajectory of his life: he ran across the DMZ and defected to communist North Korea. As Dresnok told a pair of British filmmakers decades later, “On August fifteenth, at noon in broad daylight when everybody was eating lunch, I hit the road. Yes, I was afraid. Am I gonna live or die? And when I stepped into the minefield and I seen it with my own eyes, I started sweating. I crossed over, looking for my new life.”
In his new home, Dresnok married a Romanian woman who lived in North Korea and with whom he had two kids, Ted and James. Although the lives of the Dresnoks are cloaked in mystery for the most part, it does seem that they did relatively well in North Korea, thanks in part to their exceptional status as Americans. Both the Dresnok boys and their father have acted in North Korean movies, often playing American villains. A new twist in their strange family saga occurred in May 2016, however, when the now grown-up Ted and James, slender men in their thirties, appeared in a propaganda video released on the Internet in which they attacked the United States. For what? For being like a villainous country in a movie.
“The U.S. wants to conquer the world, pursuing an anti–North Korean policy, trying to take over Asia,” said Ted. An aspiring diplomat, he was dressed formally in a suit. Sitting next to Ted at the conference table was his brother James, an army captain wearing his olive-drab uniform and North Korean emblems. James echoed these sentiments and praised the North Korean leader Kim Jong-un. The video briefly raised diplomatic speculation about its meaning and made for a juicy news story for a few days.
Many Americans would feel that the Dresnok sons were brainwashed or indoctrinated by the North Korean government. But they didn’t have to be—at least, not any more than you or I had to be brainwashed or indoctrinated to hold our own quite different beliefs. Imagine if their father had not taken the extreme measure of defecting to the North, and instead had returned home and eventually married a woman in the United States. His sons Ted and James would speak English, not Korean (unless he’d married a Korean), and they’d have a very different set of values and ideologies than they do today. So as children growing up in North Korea, they did what we all do: they soaked up the language and culture of the place where they happened to be born and raised.
North Korean ideology stands out because it is so different from our own, but American ideology, too, is different from that of any other country or culture. Yet because we in the United States soak it up unquestioningly when we are very young, our beliefs seem natural and right to us, just as North Korean ideology seems natural and right to Ted and James Dresnok. To much of the rest of the world, however, there are aspects of mainstream, traditional American morality and ethics that seem rather, well, odd. I’m not talking about politics here, or about democracy versus socialism. I am talking about the legacy of the Puritans, one of the first groups to arrive in the New World nearly four hundred years ago, and the great impact it continues to have on American culture today.
The culture we live in is like water to a fish: it is all around us and so constant and commonplace that we hardly even notice it. Longtime scholars of cultural influences on individuals, such as Dov Cohen of the University of Illinois, have sketched out the many ways in which culture permeates our daily lives, operating sotto voce in the background, a ubiquitous and powerful source of implicit influences on our values, choices, opinions, and actions. In any country, culture emerges from a shared historical past, one that we learn about in school and in books, but which we don’t remember firsthand. But we began absorbing our culture before we went to school, when we were very young. Researchers have argued compellingly that the United States’ famous Protestant Ethic isn’t just a favored cultural trope, but a set of values most Americans carry unconsciously. Even four centuries after European settlers landed on Plymouth Rock, our puritanical origins still shape Americans’ behavior regarding sex, money, and work.
The story begins back in the sixteenth century, when Protestants broke away from the Roman Catholic Church in protest over the corruption of the church establishment and its perceived deviation from the values and interdictions of the Bible. In England, the Anglican Church was established as a new Protestant church. However, a subgroup of these English Protestants—the Puritans—felt that the Anglican Church had not gone far enough when it broke away; it had not reformed as much as the Puritans believed it should. So they decided to emigrate to the New World and establish their own, new church based on the stricter values they believed in. Fervent in their religious zeal, they braved the long, dangerous voyage across the ocean to a primitive, uncharted continent, taking a leap of faith if there ever was one. They came to America in order to establish a religious utopia in what is now the United States—and in so doing became one of the first large groups of people to arrive in America in the early 1600s. And because they got here first, they exerted a disproportionate influence on the cultural values of all the people who came to inhabit the United States.
The Puritans gave us two core values, or “ethics.” The main one, known as the Protestant Ethic, is that hard work earns you eternal salvation. If you work hard, you are a good person, and you go to heaven. Conversely, if you don’t work hard, you are not a good person and your “idle hands” will be “the devil’s playground.” The other core value we call the Puritan ethic, or just Puritanism; it holds that promiscuity and overt sexuality are evil. The Puritans used this principle to guide their choices in clothing and language, and to condemn casual sex. And of course a big part of the Puritan legacy is the bedrock Christian belief in God and the Bible.
Remarkably, these religious values and basic ethics regarding work and sex that are still so strongly ingrained in American culture run against the grain of all other modern Western industrialized countries. As a general rule around the world, wealth and democracy produce secular and less traditional societies. Historically, Protestant, democratic, industrialized, and wealthy countries were the first to secularize and remove overt religious influences from their government and culture, and today they are among the least traditional societies in the world. Except for the United States. Despite being a mainly Protestant, democratic, and very wealthy country, the United States is one of the most tradition-oriented countries in the world. In the standard values survey of people all around the world, called, naturally, the World Values Survey, the United States is far above the world average on the survey’s index of traditional values, such as conventional family structures, nationalism, sexual repression, moral absolutism, a clear-cut distinction between good and evil, and a tendency to reject divorce, homosexuality, abortion, euthanasia, and suicide.
While other industrialized Protestant countries have become dramatically less religious and traditional over the past seventy years, the United States is just as religious today as it was then. In the year 2000, 50 percent of Americans rated God’s importance in their life as a maximum 10 on a 1–10 scale, and 60 percent said they attended church at least once a month. In 2003, the same percentage of people attended church once a week as had in March 1939—before World War II. In 1947, nearly all—94 percent—of Americans said they believed in God, and in 2001 that figure was unchanged. Except for Brazil, all other countries surveyed showed a drop in this percentage from 1947 to 2001. Finally, seven out of ten Americans say they believe in the devil, compared to three out of ten British people and two or fewer out of every ten Germans, French people, and Swedes.
Still, what makes America so exceptional in its religiosity and traditional values is not so much these values per se, but rather that it has kept them in the face of such booming economic prosperity. Judging from its levels of economic wealth and development alone, and extrapolating from the pattern of all the other countries in the world, only 5 percent of Americans should see religion as central to their lives. The U.S. cultural heritage is thus so powerful that it runs completely against this worldwide trend. This heritage comes from the Puritan Protestants who fled religious persecution in England—four hundred years ago.
When they were graduate students at Yale, Eric Uhlmann and Andy Poehlman conducted several experiments with me on the unconscious and implicit influences of this Protestant cultural and ideological legacy. We set out to test whether this Puritan Protestant cultural ideology operates unconsciously to influence judgments and behavior of modern-day Americans. Also, given that this ideology is unique to the United States, we needed to show that it did not influence the judgments and behavior of non-Americans. What were the manipulations we used to show this? In several of our studies we followed the lead of researchers in the field of cultural psychology, who have routinely used what are called priming methods in order to demonstrate how cultural ideologies and values operate unconsciously to influence a person’s judgments and behavior. These methods have been around now for more than fifty years. Typically the important information is presented in a disguised or sometimes even subliminal manner, so that if it affects the participant as it is predicted to, the influence is not something she was aware of. In this way the influence is shown to operate unconsciously, not consciously.
For example, in some of the original priming studies in cognitive psychology dating back to the 1950s, study participants were given a list of words to memorize in a first experiment; then, in a second, unrelated experiment they were asked to give the first word that came to mind for each of a second list of words. This is what we call a “free association test.” What the experimenters found, to their surprise at the time, was that the words in the first experiment—for example, stop, butterfly, and rough—were more likely to be given in the second, free association experiment, in which the participants were asked to give the words that first came to mind when they heard the words highway, animal, and wood. This priming effect occurred even for words from the first experiment that a participant had forgotten. The memory location of these words had been primed, or made temporarily more active, through the words’ use in the first experiment so that these same words became more accessible, or ready to be used, or said, or written down, as free associates in the second experiment. And all without the person knowing this effect was happening, and certainly without the person intending it to happen. After all, some people couldn’t even remember these words as having been on the list of words to memorize in the first experiment.
Social psychology started to use this “two unrelated experiments” technique to show how impressions and other judgments about people could be affected by one’s recent experience. For example, if you just saw firefighters rushing into a burning building, or had been reading a history of a major war, your concept of bravery and heroism would likely be primed. Just like those single words in the original priming study, that larger concept of bravery would be more active than usual. So if you then heard a news story about a person, say, trying to sail alone across the Atlantic, you’d be more likely to think of that person as very brave, maybe even heroic—and not as crazy and reckless, perhaps even suicidal, instead.
Priming effects are natural and automatic. Our everyday experiences activate ideas and desires and even ways of thinking about the world. Primes are like reminders, whether we are aware of the reminding or not. We walk through the airport on the way to our gate and the wonderful, intoxicating smell of a Cinnabon wafting by reminds us how good they taste, of how hungry we are, and how much we’d really like one. Our conscious mind was on a completely different matter at the time, that of getting to the gate on time, not thinking about Cinnabons at all. So it was the smell that did all that “priming” work. Let’s say that then, a few days later, we get cut off by one too many drivers on our morning commute, and when we finally get into the office, we find ourselves thinking what a selfish jerk our coworker is because he happens to be printing out a long document on the office printer when we need to use it. As we will see in the next chapter, these common everyday experiences affect us well after they are over and we have moved on to a completely different situation. In the lab, however, researchers have made good use of the basic principles of priming and accessibility (the readiness of a mental concept to be used) to study how one kind of experience can unconsciously shape and influence what a person does or thinks next, without her knowledge or awareness of these effects. Many of the studies of the unconscious effects of one’s culture, even in young children, have used these priming methods.
Now back to our experiment on the Protestant Ethic, in which we employed the priming method. We included not only American participants (for whom we expected to show effects) but also participants from other wealthy Western industrialized countries—Canada, Italy, and Germany—for whom we did not expect to show any effects. Because the Protestant Ethic holds that heaven and the afterlife are the reward for hard work in one’s earthly life, we tested whether Americans did indeed strongly associate the idea of heaven with the idea of working hard, using the standard “two unrelated studies” experimental technique. Our first experiment was described to participants as a language test, and in it they constructed a series of short, four-word sentences out of scrambled-up words. For one group of participants, some of these words had to do with the afterlife. For example, trip dormitory heaven was the, from which the participant could write down “The trip was heaven” or—less likely for a college student, but also grammatically correct—“The dormitory was heaven.” In the control condition, the primes were equally positive words but not related at all to religion (for example, trip dormitory wonderful was the). In this way, we primed or activated the idea of heaven and the afterlife for some participants, without their being aware we were doing so, and we didn’t prime the idea for other participants (in the control group).
We predicted that for Americans, priming the concept of religion and the afterlife should also prime the Protestant work ethic, because the two are so intertwined in American culture (and therefore in the minds of Americans). These “heavenly” words, we hypothesized, should cause the American participants to work harder on the subsequent task—in this case, solving anagrams. But this same priming task should not cause the Germans, Italians, or Canadians to work any harder, because the link between salvation and working hard is not part of the cultures they grew up in. Only if ideas of heaven and the afterlife are strongly but implicitly associated with working hard in one’s mind should our priming of the first influence the second.
And that is what we found. Our U.S. participants primed with religious concepts did work harder and score higher on the anagram task, compared to U.S. participants in the control priming condition (with no exposure to words relating to heaven). And, as we expected, the heaven priming affected only the task performance of the Americans; it did not influence the anagram performance of the participants from the other countries. Finally, when carefully questioned after the study was over, no one in our experiment showed any awareness of the connection between the religious primes in the first task and how hard or well they worked on the anagram task. It was a completely unconscious cultural influence on their behavior.
In our second study, we further established that these cultural influences operate unconsciously. We asked American participants to read a story about two young potato peelers who had just purchased a winning lottery ticket together. After winning the lottery, the first potato peeler retired, while the second continued to work peeling potatoes even though he was now a millionaire. We asked the participants to describe both their intuitive, gut feelings regarding each of the two potato peelers and their more conscious, deliberate judgments of them. The gut feelings were significantly more positive toward the one who continued to peel potatoes even after winning the lottery, compared to gut feelings toward the one who then retired rich and carefree. In contrast, on the deliberative, more thoughtful judgments, the two potato peelers were rated as morally the same. The Protestant Ethic at work—continuing to work after no longer needing to, financially, makes you a better person.
So now, on to the Puritan ethic. In our third study, we tested whether Americans strongly associated the Protestant and Puritan ethics with each other, as we would expect, since these ideas are pillars of the founding American ideology. We predicted that, if they are strongly associated, Americans should have more conservative attitudes about sex after they have been thinking about work! To show that this was an exclusive effect of American culture we chose a group of bicultural Asian-Americans to be our participants. This allowed us first to prime either their Asian identity or their American identity—so that within the same person, different effects of the work-related prime could occur depending on which of their two cultural identities was currently active. In other words, we were switching on different aspects of the early and now-forgotten pasts that had shaped their cultural identities.
For some participants, the Asian aspect of their identity was first primed using a questionnaire with items such as “What is your favorite Asian food?” For the other participants, their American identity was primed instead by asking “What is your favorite American food?” and with related questions about favorite movies, books, musical groups, and so on. Next, all participants completed a scrambled-sentence test, except that for one group of participants some of the words on the test were related to work—such as office, work, job. For the control participants, there were no words related to work on this first “language test.” Then everyone read a story about a high school’s proposal to make the dress code stricter by prohibiting the wearing of revealing clothing at school, and then answered questions about it. We predicted that only when the American part of the Asian-Americans’ identity had first been primed, activating uniquely American cultural values, would the work primes then cause a more conservative, Puritan response to the sex questions. The participants would be more in favor of the stricter dress code. Sure enough, this is what we found. Those who had been assigned to the Asian-identity priming condition showed no effect of the work priming on their responses about the school dress code. The Protestant (work) and Puritan (sex) ethics do not go together in Asian culture. So our opinions about morality, the rightness or wrongness of various social behaviors, are influenced by our cultural ideology, which we absorbed so readily as young children that it has become part of our hidden, unconscious past.
Thus work and sex—the twin Protestant and Puritan ethics—appear to be strongly linked in a uniquely American set of cultural values, one rooted in the country’s distant origins. Today, four hundred years later, we still see a profound effect of the founding ideology of the Puritan Protestants on the moral judgments of twenty-first-century Americans. For the most part, we are unaware and unconscious of these influences. They are the water that (many, not all) American “fish” swim in, and they generate feelings and moral values surprisingly consistent with those of our deeply religious Puritan forefathers and foremothers of the 1600s.
As our experiment on the American values with Asian-American participants showed, we can feel and behave differently depending on which aspect of our personal identity is currently active. Our identities have multiple aspects—mother, musician, teacher, yoga enthusiast, NASCAR fan. Within each of these is stored ingrained, implicit knowledge about appropriate values and behavior, likes and dislikes. Ways of being. Children learn from their culture what it means to be a boy or girl, an Asian-American or an African-American, a child or an elderly person—how you are supposed to act, what you are supposed to be able to do, and what you are not to do. And young children can adopt these cultural beliefs so strongly that they will actually behave differently, at a stunningly young age, depending on which aspect of their identity is primed.
In 2000, I attended the first annual meeting of the Society for Personality and Social Psychology, which has since become the largest conference in my field in the world, attended by thousands of researchers, students, and professors. This yearly event basically consists of symposiums, panels, and lectures in which eager, enthusiastic scientists present their ideas and latest findings, discuss and argue about them a bit, then head straight for the evening reception and cash bar. There was great excitement that year in Nashville at the first-ever convention, and I met dozens of new colleagues, but what stands out most in my memory is a talk, in the grand ballroom of the hotel, by the late Nalini Ambady.
Ambady was a brilliant social psychologist from Kerala, India, who went to graduate school at Harvard and took seminars with the likes of B. F. Skinner. She left us much too soon, succumbing to leukemia in 2013. She was a colleague I greatly respected, and I was not alone in that. The huge ballroom in Nashville was packed to hear her present her latest research, a study she had conducted with her colleague Margaret Shih on young Asian-American girls and boys. Nearly two decades later, their findings are still some of the most compelling demonstrations of just how early in a person’s life cultural influences on their motivation and behavior can begin.
Thanks to the pioneering research of Claude Steele, we have known for some time that reminders that cue or prime a person’s social identity can affect their test and academic performance, usually in a negative way. Merely checking off their racial or ethnic group at the top of a standardized test causes African-Americans to do worse on that test than if they had not checked off that box. Society teaches us that our social group is good or not so good across a whole lot of life domains—for example, that blacks can’t cut it academically, that girls and women can’t do math or science as well as boys and men, or that elderly people are slow and have bad memories. Remember the movie White Men Can’t Jump? Steele called this phenomenon stereotype threat. If you are reminded of your group status before performing a test or task, and the cultural stereotype says that your group is not very good at it, your performance will be affected. You will, consciously or unconsciously, “buy in” to that stereotype. Often this happens when the going gets tough, because when things get hard (such as more advanced math classes for girls) members of the stereotyped group start to attribute the difficulty they are facing to their group’s supposed inability (“I’m having trouble with this because I’m a girl”) and stop trying. Others pick up their effort at these moments, try harder, and so do better.
There is some good news, though. The same effect can also help performance if your group is supposed to be good at the task. This is called stereotype gain. For example, Asian-American teens are stereotyped as nerdy, overachieving, and good at math. That this is a widespread cultural belief is perhaps best illustrated by the infamous 1987 Time magazine cover story with six brainy-looking Asian kids posing together, and the headline “Those Asian-American WHIZ KIDS.”
So what are you supposed to believe about yourself if you happen to be an Asian-American girl? According to American culture, one part of your social identity (Asian) says you should be good at math, while another part (female) says you should be bad at math. Ambady and Shih recognized that the dilemma of Asian-American girls afforded a unique research opportunity to gauge the automatic, unconscious effects of a person’s social identities on their actual behavior and performance. So in their first set of studies, they showed that high-school-age and ten-year-old girls alike did better on standardized, age-appropriate math tests if they were first primed with their Asian identity, so that it was the most active aspect of their identity when they worked on the test, but did worse if they were instead primed beforehand with their female identity. It was disturbing that these effects showed up as early as fourth grade, but the researchers suspected that grade school teachers, from first grade on, had already gotten the message across, through different classroom treatment of boys and girls, that girls were not expected to be as good at math as boys. So, unfortunately, by fourth grade this was apparently already ingrained in the girls’ heads.
In their next study, the one Ambady presented in that packed Nashville ballroom, she and Shih used an even younger group of children: five-year-old Asian-American girls who had not yet started grade school. Cleaner slates, as it were. As before, though, they also had groups of fourth graders and high school students. Their assumption was that the stereotype effects would not be present until fourth grade, because they were being transmitted by the grade school teachers and the culturally biased learning environment. This assumption would be confirmed if the Asian or female primes did not affect how well the kindergartners did on the math test but did affect how well the older girls did.
Ambady and Shih and their team brought the eighty-one Asian-American girls into their lab at Harvard—71 percent had been born in the United States—and randomly divided them into three groups: Asian-identity primed, female-identity primed, and a no-identity-primed control group. The five-year-olds had their Asian identity activated by coloring in a picture of two Asian children using chopsticks to eat rice out of a bowl; a different group of five-year-olds had their female identity activated by coloring in a picture of a girl holding a doll; and the control group just colored in a neutral landscape. The identities of the older girls were primed in the same way they were in Ambady and Shih’s original study. Then all the girls took a standardized math test appropriate to their age group. The identity primes for the five-year-olds would fail, right?
I will never forget the audible gasp from the audience in the crowded ballroom that afternoon when Ambady presented the results of the study. Most of us there had placed so much hope in the educational system as the way to fix these harmful beliefs—harmful not only to girls themselves but to society as a whole, in terms of wasted human capital and underdeveloped, underused abilities and talents. We never expected, and neither had Ambady or Shih, that these cultural beliefs that girls can’t do math would already be entrenched in the heads of five-year-olds, before they’d even started school—so entrenched that subtle priming manipulations could cue that identity and unconsciously affect their performance on a math test.
But they were. The effects of the Asian and the girl coloring book primes on the five-year-old girls were there, just as they were for the older girls. The “girls can’t do math” belief was in the heads of all of them, even the kindergartners. When Ambady put the results up on the overhead projector, it felt like all the air had been sucked out of the room. We in the audience just looked at each other, shaking our heads in disbelief. So much for Plan A: getting to these kids right away in first grade and nipping these false beliefs in the bud.
We now know that, for better or worse—often worse, as we’ve now seen—cultural stereotypes can take root even before kids start school. Yet this is not to say that they can’t be further perpetuated by teachers in the classroom, as the famous 1960s “Pygmalion in the Classroom” studies by Robert Rosenthal and Lenore Jacobson showed. In those studies, classroom teachers were given a false set of standardized test results about their students. High or low test scores were randomly assigned to each child. They were not related at all to the child’s actual abilities (and neither the children nor their parents ever saw or knew about these scores), yet at the end of the year, the students’ grades and test scores corresponded to those false scores. Because only the teachers knew about those scores, and because the scores were unrelated to the child’s actual ability, the only way this could have happened was through the teachers treating their students differently based on their expectations of them.
But the Asian-American five-year-olds showed negative effects of the cultural stereotype that “girls can’t do math” even before they had started school. So how did these deeply embedded stereotypes find their way into the unconscious minds of such small children? One possibility is that their parents were telling them that girls can’t do math, but when I spoke to her recently, Shih strongly discounted this explanation. “These were high-achievement-motivated parents,” she said. “They had high aspirations for their daughters. Some even thought that participating in this study at Harvard would help their girl get into Harvard later on!”
It is certainly the case that American culture, at least, socializes girls quite differently than boys. One defining difference is the greater emphasis on physical attractiveness and appearance for girls than for boys. Early on in the home, getting ready for school in the morning, there is more attention to brushing and even styling a girl’s hair, and to the outfit she is wearing, than there is to a boy’s appearance. And as they get older, the emphasis on physical appearance becomes more obviously about sexual attractiveness; researchers have described how girls and young women are “socialized into a culture that sexually objectifies the female body,” and have pointed to “the greater cultural demands placed on women to meet physical attractiveness ideals.” It is almost as if women in our culture grow up to develop—at a very early age—two distinct self-identities: their body, and their mind. Society seems to say, “It is better to be pretty than smart,” as if these two attributes were somehow mutually exclusive.
The nature of this subtly absorbed, unconscious past suggests that when a female’s body identity is made salient—say, at the beach—her “mind” identity—her intelligence—should suffer. The beach’s emphasis on the body and attractiveness triggers the cultural stereotype that a woman is to be valued and judged according to her physical looks, not her knowledge and intellectual abilities. A now-classic study at the University of Michigan by Barbara Fredrickson and her colleagues showed just this under controlled laboratory conditions. Male and female undergraduates came into the psychology lab, one at a time, for a study on “emotions and consumer behavior.” They were told they would be evaluating three types of consumer products: a unisex fragrance, an article of clothing, and a food item. After the participant rated the fragrance product, he or she went into a dressing room that had a full-length mirror on the wall. They were randomly assigned to be in the swimsuit or the sweater condition. The women tried on a one-piece swimsuit, available in sizes from 4 to 14, or a sweater available in sizes S, M, or L. The men tried on either a pair of swim trunks (four sizes, from S to XL) or a sweater (sizes M, L, and XL). Over headphones they were instructed to look at themselves wearing the item of clothing, and then they completed a set of questionnaires involving how they felt about their body.
After getting dressed again, the participants came out for the next part of the study, which was a challenging math test with twenty questions taken from the GMAT (the test you take when you apply to business schools for an MBA degree). They had fifteen minutes to work on it. The instructions made clear to the participant that this was a test of their mathematical ability. The final part of the study was a taste test of Twix candy bars. The package was unwrapped and the two candy bars were placed in front of the participant on a plate, with a glass of water and a napkin nearby. The participants were told to eat as much as they wanted.
Their answers confirmed that, as you might expect, wearing the swimsuit focused the participants’ identities more on their bodies than did wearing the sweater, and this was true for both men and women. As for eating the candy bars, overall the women ate less than the men, and if trying on the swimsuit had made them feel bad about their bodies, they then ate less of the candy bar than did the other participants. But the big news was about the math test performance. Recall that the participants were randomly assigned to the swimsuit or the sweater condition. Also, the researchers controlled for important factors such as the participants’ overall math ability. Yet women who tried on a swimsuit instead of a sweater then did significantly worse on the math test (an average of 2.5 correct answers versus 4). Focusing on their bodies caused them to display less intelligence. Here’s the kicker: men’s performance on the math test was undisturbed by whether they had tried on the swim trunks instead of the sweater. Priming their body identity didn’t “harm” them in any way.
Just as with our studies of the Protestant and Puritan ethics, these results show that our various cultural beliefs are intertwined, that they are all associated with each other. After all, there is no logical reason why emphasizing physical attractiveness or increasing body consciousness should cause worse performance on a math test, unless both of those beliefs about women were components of the (American) cultural stereotype for women. When that stereotype is made salient, both of those beliefs—that women are supposed to make themselves physically attractive, and that they are worse at math than men—are up and running in women’s minds. Priming one aspect of this cultural identity by having the women try on the swimsuits activated the other aspect. Keep in mind that these were college students, undergraduates at a large Big Ten university, who were successful students with a very strong academic identity compared to other, less high-achieving people. Yet even they succumbed to this damaging cultural belief about women and math, without knowing it.
If these unconscious influences are already present in the minds of preschool children, we can’t entirely blame our school systems. And physical attractiveness biases are likely not the fault of our educational system (or if so, only marginally). So where do these subtle winds blow from? What forces are constructing the hidden past of our minds? Shih said she and Ambady suspected that the girls had learned the stereotype through the mass media and the general culture they had already been exposed to so much in their young lives. There are a lot of developmental questions about children’s understanding of race and gender. For gender, though, it seems clear where some of the influences come from. “Dolls and princesses,” Shih said, noting the toys and models girls are given from a very early age. “Not spaceships.”
Just watch a little television and peruse newsstands for the messages targeted at girls and women in our culture (and many other cultures). On the cartoon and other entertainment channels directed at children, the girls’ toys advertised are often pretty dolls with hair to brush and different outfits to dress them in. Bracelets and necklaces and other forms of body adornment are routinely marketed to girls. In their next research project, Ambady and her colleagues turned to another channel of cultural transmission: racial biases conveyed in the United States via the mass media. They did a careful study of the content of the most popular prime-time U.S. television shows. The study was conducted in 2006 and focused on eleven shows, such as Bones, CSI, Friday Night Lights, and Grey’s Anatomy, all of which had an average viewership in the United States of 9 million people. Importantly, they chose as participants only people who had never watched any of these shows before. All of the TV shows selected had a white and a black character of equal status—meaning that the two characters were equally important to the show’s theme and had equal job status (for example, both were police detectives). From these programs, a total of fifteen white and fifteen black characters were selected, and participants in the study were shown nine silent clips from the show featuring each character.
Now comes the twist: the featured white or black character was then edited out of the scene so that all the participants saw was how the show’s main character, played by actors such as Mark Harmon or David Caruso, reacted to that character. Watching that clip, you would have no idea who the main character was interacting with at that moment. Because the audio had been digitally removed from each clip, the only information participants had was the main character’s nonverbal behavior—their facial expressions, gestures, body language—toward the (off-screen) show character. The researchers wanted to know whether the show’s main character was perceived to behave differently when interacting with a black or a white character on the show. Two hundred sixty-five such clips were presented to each participant in random order. After each clip, participants were asked how much the character (who was visible) liked or disliked the unseen character; they also rated the overall positivity of the interaction between the two. There was high agreement among the participants when they answered these two questions.
The results revealed that the nonverbal behavior of the main character was more positive toward the white characters in the show and more negative toward the black characters. Even though the participants who made these ratings did not know who the main character was talking to at the time, they could still detect in the main character’s facial expression and body posture a more negative attitude toward the black character. Multiply these subtle differences in treatment of white and black characters by the many such interactions the main character has in each show, multiply that by the number of episodes of the show, and multiply that by the number of popular shows on TV. Then multiply all that by the millions and millions of people watching those shows, and you get an idea of how powerful this cultural influence is on viewers, on our positive and negative attitudes toward blacks and whites. The differences were subtle, but not so subtle that they could not be picked up by the participants who viewed them—just as they would be picked up by the millions of viewers, including children, at home watching that episode of their favorite show.
The real question, of course, is whether these more negative attitudes toward the black show characters have an effect on the viewer. We may notice them at some level, but that doesn’t mean they necessarily affect our racial attitudes. For example, as you watch these shows more, do your unconscious attitudes toward blacks become more negative? The news here, unfortunately, is not good.
In their next study, Ambady and colleagues examined the effect of watching these shows on the racial attitudes of viewers. A measure of each show’s relative (subtle, nonverbal) negativity toward blacks was calculated by taking the difference in the main character’s liking and positivity toward the unseen black character versus that toward the white unseen character. (Some shows displayed more of this negativity than others.) Then a new group of fifty-three participants were asked which of the eleven shows they watched regularly, and they were also given the adult version of the Implicit Association Test, or IAT, that uses Good-Bad and White-Black buttons to see how strongly the person unconsciously associates white with good, and black with bad. In this way the researchers could see if the more a person watched prime-time TV shows that had relatively high degrees of racial bias, the more racially biased they themselves became. And yes, this turned out to be the case. The more nonverbal bias in the shows they watched, the more negative the person’s implicit attitudes toward blacks. The actors’ hidden biases were unconsciously absorbed by their viewers.
So there is credible evidence for the cultural transmission of stereotypes and beliefs through the mass media; greater exposure to racial bias on prime-time television shows is linked to greater levels of personal racial bias. Such biases later shape our thoughts and actions before we know it; we aren’t aware of these biases or where they came from. The mass media also conveys cultural stereotypes through the way it presents the news to us. This may be an even more insidious way in which cultural beliefs are transmitted, because we expect the news to be an accurate reporting of the real world. And so, if it inaccurately presents to us negatively biased “news” about different groups in our society, we will tend to believe it is factual—just as young children soak up everything they hear without questioning it.
Before the cable TV and Internet revolution in communications, most people got their news watching the evening broadcasts of the (then) three major networks—CBS, NBC, and ABC—and reading the newspaper and major weekly newsmagazines—Time, Newsweek, and U.S. News & World Report. Even today, tens of millions still watch these programs and read these magazines, or newer outlets with similarly wide reach. In 1996, in the pre-Internet heyday of these news sources, Yale political scientist Martin Gilens conducted a landmark study, the first of its kind, to examine the content of both the major weekly newsmagazines and the three major television network evening news broadcasts. He focused on the visual content that these mainstream mass media outlets presented while the anchor or reporter spoke about the problem of poverty in America—what were the pictures or videos that were selected to be the backdrop to the magazine text or the television narration?
The 1990 U.S. Census showed that African-Americans made up 29 percent of the poor in the United States. So roughly 30 percent of photos of people living in poverty in the United States should have been of African-Americans, right? In reality, in the 182 newsmagazine stories on poverty that Gilens studied, from 1988 to 1992, the accompanying photographs were of blacks 62 percent of the time—twice as frequently as they should have been. Naturally, this gave readers the strong but quite erroneous impression that the majority of poor people in the country were African-American. And Gilens found that the same thing happened in the evening news broadcasts by the three major television networks—fully 65 percent of the people shown in the TV news stories about poverty in the United States were black Americans. Such disproportionate representations affect not only people’s attitudes toward poverty—that is, “most poor people are black”—but also black people’s unconscious beliefs about themselves and their community.
In his report, Martin Gilens reminded us that when the journalist Walter Lippmann first used the term stereotype in its psychological sense in the 1920s, he was referring to the “pictures in our minds” that have more of an influence on our attitudes and behavior than reality does. And because we all rely heavily on the news media for our “pictures in our minds” about the world, is it any wonder that people develop the stereotype and false belief that most of the poor people in the United States are African-American? Now, couple this belief with the Protestant Ethic, which as we’ve seen is still such an important part of the U.S. cultural ideology: Gilens describes one national survey taken during the same time period showing that 70 percent of those responding believed that “America is the land of opportunity where everyone who works hard can get ahead.” If you hold that belief, then you would conclude that poor people just don’t work as hard, or don’t want to work as hard, as other people. Meaning they are lazy, and since most poor people are black (according to what you see in the news all the time), well then, black people must be lazy. This quite potent and unjust cocktail of biases in the cultural and individual consciousness has its origins in the unintentional and unconscious biases of those who control our newsfeeds.
The mass media, both the entertainment and the news sectors, exerts tremendous power over the shaping of cultural beliefs and attitudes. Ambady’s study of negative racial attitudes in the top-rated television programs and Gilens’s study of racial bias in news media coverage of “poverty in America” stories both show this quite clearly. But then the question naturally arises: why is the mass media in the United States portraying blacks in these ways? Is it that the editors and producers in charge are racially biased? In the case of the news stories about poverty, Gilens presents evidence against that explanation, showing that the photo editors who choose the pictorial content and the TV news editors who choose the associated video footage are in fact generally more racially liberal than most Americans; and in the case of the top-rated entertainment shows, it seems unlikely that Mark Harmon and the other actors were intentionally trying to convey their relative dislike for the black characters in their programs. After all, the television programs selected for Ambady’s study were the only top-rated ones that (quite intentionally) included both white and black characters whose roles were of equal status (for example, both detectives, both supervisors) in a deliberate attempt to present the races in an egalitarian manner.
So if the cause was not conscious and deliberate on the part of those in charge, it must have been unconscious and unintended. Gilens ends his study of the mainstream news media by saying that “the consistent pattern of racial misrepresentation (along with the consistently liberal nature of these editors’ conscious beliefs about racial inequality) strongly suggests that unconscious negative images of blacks are at work.” People who work on newsmagazines and in the TV news business are members of the same culture as their readers and viewers; they soaked up the same culture the rest of us did. So did the actors who portray the main characters in the top-rated entertainment programs. And culture exerted an unconscious influence on their choices of photographic and video content for their news stories, and on their nonverbal facial expressions and body postures toward the black characters in their shows. Even though these behaviors and choices are likely running against the consciously held beliefs and values of people in the media, that doesn’t stop their unconscious beliefs from having a very strong impact on the rest of us.
The editors and producers in charge of the content we consume may be just like the rest of us in one respect—that of having soaked up the same cultural biases as we did—but they are quite unlike us in another. They have a very powerful role in determining the “facts” the rest of us unconsciously learn from the media sources we generally (and should be able to) trust. They influence us without our realizing it and they help shape the hidden mind of early childhood. They need to use that power more responsibly than they have in the past, and efforts such as Gilens’s to make them more accountable are very positive developments.
Now that we have seen how cultural beliefs and values embed themselves in the hidden mind, it is useful to think of the early years of our lives as a kind of tunnel. First, in infancy, you see only what enters your narrow tube of attention: your family, your house, and other passing stimuli. This is your entire world. Then, as a toddler, as you begin walking and interacting with objects and people, that tunnel widens and becomes more like a country road. You travel down it, with your senses focused mostly on the road in front of you and the other travelers, but you do notice the landscape streaming by, the occasional building, and other roads that cross yours. This landscape includes more subtle stimuli: the layers of your culture, media, and the attitudes of others, which you absorb without noticing and without questioning. As you develop from a child into a preadolescent and then into a teen, this spatial expansion continues. Your experience becomes more of a busy highway, and you periodically get off that highway to stay in different cities and meet the inhabitants and see the sights: school, friends, trips, more media, and more things you observe and notice. No memories of that original tunnel remain, and most early memories of that country road disappear, too. You take in more and more of your surroundings and you settle into the perceptual driver’s seat of a fully developed adult. By then you have arrived at your destination as a full-fledged, card-carrying representative of your culture—with all its nice features, but all its warts, too.
Our everyday experiences, such as holding a hot cup of coffee, are constantly triggering or priming our deeply ingrained cultural beliefs and values. Americans who encounter words relating to heaven and the afterlife then work harder on a task than otherwise, and become more judgmental about revealing dress and sexual behavior. People with multiple aspects to their identities, even young preschool children, can have very different attitudes and even behave differently depending on which identity is currently up and running in their minds, without having any idea of the effect these cultural identities are having. We soak up these cultural influences like crazy as children and they are all around us, in the television and other media we spend so many hours watching, and in the subtle facial expressions and nonverbal behavior of our parents and older siblings toward members of other social groups. These stereotypes and other beliefs become second nature to us, so ingrained that even well-meaning people with liberal racial attitudes, in positions of great responsibility in the mass media, nonetheless communicate—and thereby perpetuate—those stereotypes to their viewers and readers. The cultural background we inhaled so innocently in our preschool years is there in the background of our adult lives all the time, operating in our minds behind the scenes like the hidden puppet master at our fourth birthday party. In the case of the Dresnok boys, it was powerful enough to turn the sons of an American soldier into sworn enemies of the United States.
“Pay no attention to that man behind the curtain!” exclaimed the Wizard of Oz, but, like Dorothy and her crew, maybe it’s prime time we did.