What social psychology has given to an understanding of human nature is the discovery that forces larger than ourselves determine our mental life and our actions—chief among these forces is the power of the social situation.
—MAHZARIN BANAJI
Evil that arises out of ordinary thinking and is committed by ordinary people is the norm, not the exception. . . . Great evil arises out of ordinary psychological processes that evolve, usually with a progression along the continuum of destruction.
—ERVIN STAUB
Values of independence are phony, really. There is no such thing. There can be no pride in living without oxygen. We’re not made that way. It is nonsense to try and give up symbiosis and become an independent self.
—HEINZ KOHUT
Social psychologists offer ample reasons to believe that the moral lessons we gathered from the studies of the Holocaust are not unique or aberrations but instead reflect what goes on all the time. Social and situational forces, rather than free will decision making and stable personal character, largely determine what people do, what actions people take, whether they harm or help. In the Holocaust these social and situational forces were more extreme and blatant versions of business as usual, and so more visible to the eye. Some early social psychology experiments came about in an effort to understand what had happened in the Holocaust, especially to understand why seemingly normal people became perpetrators of mass murder or engaged in tasks contributory to mass murder. Stanley Milgram and Philip Zimbardo are the two social psychologists whose experimental work, carried out independently of each other but each over a lifetime, has focused on the social and situational forces and conditions that lead people to engage in harming others, in inflicting pain, in doing evil. It turns out that Milgram and Zimbardo were high school classmates at James Monroe High School in the Bronx, both graduating in 1950, and even back then both were fascinated by the power of social situations to shape people's behavior. Milgram, being Jewish, was riveted by the Holocaust and how it could have happened, how Germans could have so readily and obediently murdered Jews. In school, Zimbardo says, he and Milgram would hang out and talk about the successes and failures of their friends and acquaintances. "Not coming from well-to-do homes," he says, "we gravitated toward situational explanations and away from dispositional ones. . . . [We saw that t]he rich and powerful want to take personal credit for their success and to blame the faults of the poor on their defects. But we knew better; it was usually the situation that mattered." Zimbardo comments that it was only after Milgram's death that he discovered they had both been fascinated and even influenced by the television show Candid Camera, which began airing in 1948. Its creator, Allen Funt, would devise scenarios that would challenge the usual perceptions or ways of acting of unsuspecting passersby, whom he would be filming with a hidden camera. Then we TV viewers, just like psychology experimenters observing from behind one-way glass, would watch surprising human behavior from behind our hidden "screens." Zimbardo remarks that Funt was "one of the most creative, intuitive social psychologists on the planet" and exposed "truths about compliance, conformity, the power of signs and symbols, and various forms of mindless obedience." When Zimbardo interviewed Funt for an article in Psychology Today he discovered that Funt had been a student of Kurt Lewin, one of the great founders of experimental social psychology, at Cornell and had been as influenced by Lewin in his own creative way as had Milgram and Zimbardo in their disciplinary way.1
In his famous experiments Stanley Milgram devised ways to test people’s normal obedience to authority and willingness to harm others on that authority. Milgram, a psychologist at Yale, set out to test the obedience of normal adults and whether they would be willing to inflict pain on others when instructed to do so by an appropriate authority for an apparently appropriate purpose. These have become known as tests of blind obedience to authority. Milgram comments about his own motivation that “the impact of the Holocaust on my own psyche energized my interest in obedience and shaped the particular form in which it was examined.”2
Working at Yale University, Milgram in 1961 advertised in the local papers for subjects for his initial experiment, which was characterized as a scientific experiment to study memory and learning. Those accepted would be asked to volunteer for an hour at their own convenience and would be paid for their time. The subjects chosen came to the university laboratory and were met by a researcher in a lab coat who told them that they would be part of an experiment to determine ways to improve learning and memory through the use of punishment. The subject was told that he would be the “teacher,” and there would be a “learner” on the other side of a screen or wall to whom he would say a series of words that the learner was to remember and then recite. If the learner produced the correct words, the teacher was to say “good” or “that’s right,” but if not, the teacher was to press a switch on a device that delivered an electric shock to the learner. Each wrong answer meant that the next shock would be 15 volts higher than the previous one, and there were thirty levels of shock, each marked with a phrase such as “strong shock,” “very strong shock,” and so on, up until “danger, severe shock” at the twenty-fifth level and “XXX” at the highest levels (435 and 450 volts). The subjects were told that the shocks might be painful to the learner but would not cause any real harm. The researcher in the lab coat then had the two participants draw straws to see who would be the learner and who the teacher. This, of course, was rigged: the learner was a confederate of the researcher and would merely pretend to be shocked, since the shocking device did not actually give shocks. At a certain level of shock, the learner was instructed to complain of the pain; at a higher level, the learner was to admit to a heart condition. As the shocks got higher the learner was to shout out in pain and at a very high level say that he couldn’t stand the pain, his heart was hurting, and he wanted to get out of there. After 300 volts the learner was no longer to complain but maintain silence. During the experiment the researcher in the lab coat calmly but insistently explained to the experimental subject that the experiment must go on and that he was to follow the rules. In the initial version of the experiment the lab-coat-wearing researcher remained in the room with the volunteer subject.
Milgram wondered how many of his volunteer subjects would actually go along with it and how far up the scale of pain and harm they might venture. In anticipation of carrying it out, Milgram presented the plan to a group of forty psychiatrists and asked them to predict how far they thought these average American subjects would go in punishing a learner. Most of them predicted that less than 1 percent of the subjects would go all the way to the end of the scale and deliver the highest voltage. Only sadists, they predicted, would go all the way, and most people would quit at about level ten. This prediction turned out to be wildly off base. In fact, about two-thirds of the subjects (65 percent) went all the way up to delivering the maximum shock of 450 volts—beyond the point at which the learner had stopped responding and crying out in pain and had become ominously silent. The most obvious assumption for the subject to make at that point was that the learner was now unconscious. If a subject expressed concern about being responsible for hurting the learner, the researcher reassured the subject that the researcher was taking responsibility; if the volunteer subject expressed concern for the learner and the desire not to continue, the researcher told him to continue, and—astonishingly to everyone concerned, and to us today—a majority continued. Milgram repeated this experiment with nineteen different small variations over the course of a year. For example, he added women; he varied the distance of the researcher from the experimental subject; he introduced confederate peers to the subject who rebelled; he even took the whole experiment out of Yale University and set it up in a run-down office in Bridgeport, Connecticut, to see if subjects would still comply with scientific experimenters in a private experiment without the prestige of Yale behind it.
By introducing small variations, Milgram found that he could get 90 percent of the subjects to comply if the actual pressing of the switch was done not by the subject but by another person (a confederate of the research team). But if he introduced confederates who refused to go along, 90 percent of the experimental subjects would follow their lead and refuse. If the learner was more remote, compliance increased. All in all, Milgram tried the experiment on more than one thousand Americans of all kinds. The results defied expectations—of the psychiatrists and certainly of people themselves—but were very powerful and reproducible. Since then, eight replications of the study have been done in the United States, and nine in European, African, and Asian countries. There was comparable obedience in all the different countries and over different time periods.3
The Milgram experiment, like the Stanford Prison Experiment, came under scrutiny for ethical concerns about human subjects research, since the subjects were put in a situation of significant (although brief) stress. Nevertheless, a slightly modified version of the Milgram experiment was repeated in 2006 by psychologists at Santa Clara University for ABC television’s newsmagazine program Primetime. In this version, the maximum shock was set at 150 (rather than 450) volts, a level that had been a turning point in the original experiments: almost all of the experimental subjects who administered 150 volts went all the way. (Of course, in the 2006 experiment these were pretend shocks, as they were in the Milgram experiment.) The initial reason for conducting the Milgram experiment yet again was the suspicion that times had changed and people were no longer obedient in the way that they had been forty-five years earlier. Perhaps that kind of obedience had been an artifact of a particular era. But that suspicion turned out to be totally wrong, and the results disturbing: the subjects in 2006 were just as obedient in the same percentages and under the same conditions as those nearly half a century earlier. Astonishingly, just as Milgram had discovered, no intimidation or fear, no political conditions of war or threat were needed to induce normal people to harm others and to keep going even amid victims’ cries of pain and pleas for release. Nor was isolation or brainwashing necessary. It didn’t even take Nazi-like conditions to produce Nazi-like behavior in normal people—and within an hour. The results this time around, forty-five years later, were as chilling as Milgram’s.
The Primetime film footage captured a number of the volunteer subjects as they went through the experiment and then reflected on what they did and said during the experiment in interviews with Primetime anchor Chris Cuomo. Interspersed with both the footage of the experiment and the interviews with the subjects are Cuomo’s interviews with social psychologist Jerry Burger. The setup followed the original Milgram design: there was a room with a lab-coat-wearing researcher sitting at a desk behind the subject, who was sitting with his or her back to the researcher and facing a large piece of equipment with a scale of voltage up to 150 volts and switches at each gradation. This electric shock device was in front of a wall, and the “learner,” a sixty-year-old research confederate, sat behind the wall, out of sight. Before the experiment began, the subject was brought by the researcher to the other side of this wall to meet the learner, who mentioned that he had a heart condition and expressed some anxiety about the experiment’s effect on his health. The subject looked on as the learner was strapped into the chair and fitted with electrodes.
The subjects were eighteen men and twenty-two women—a diverse group in terms of ethnicity, age, profession, and socioeconomic class. Sixty-five percent of the men and 73 percent of the women continued to shock the learner up to the maximum 150 volts, the voltage Burger told Cuomo "is something of a point of no return." The typical response of the subjects when the learner first objected to the shocks or moaned in pain at the administration of the first "shock" was to turn back to the researcher, who would then indicate that the subject should continue. As the learner expressed increasing distress and pleaded to be released, the researcher also became more insistent that the subject continue, emphasizing that although the learner might be feeling pain, there would be no danger to him. During the experiment the researcher retained a thoroughly dispassionate and professional demeanor. Subject after subject looked quite concerned and disturbed; a few subjects even became quite agitated, though one or two showed almost no distress. Yet both those who displayed discomfort and those who didn't continued to administer the shocks at the calm insistence of the researcher.
The subjects' comments during the experiment and their self-reflections afterward are telling. At one point Chris, a fifty-year-old grade school teacher, asked about the risk (for herself, presumably) of a potential lawsuit. The experimenter replied, "If anything happens, I am responsible," to which Chris replied, "[That's what] I needed to know." Later the Primetime interviewer, Chris Cuomo, asked, "Why was this what you needed to know? Having this guy in the lab coat sort of divorced you from your decision-making power?" And she responded, "Oh, sure. Just like when I'm told to administer the state [education] tests for hours on end." Cuomo asked, "You're doing your job?" and Chris responded, "Yes, [I'm] doing my job." Similarly, another subject, a thirty-nine-year-old electrician, told Cuomo in the exit interview, "I was just doing my job." The electrician concluded that "having the experimenter right there next to me had a lot to do with it." We may have thought that the echoes of the Nazi era were faint and gone forever, but here they are, and not in a wartime situation involving great fear or military discipline, but in an everyday academic setting with nothing at stake for these subjects: no coercion, no potential reward, no hope of advancement resting in the balance. Yet they do their job even when it is clearly causing distress and potential harm, and in the face of the screams of pain of someone they were initially introduced to and who they have been told has a heart condition. Here is our first telling evidence that the social forces of the Holocaust are not just about genocide but about our everyday lives.
Chris Cuomo asked social psychologist Jerry Burger to explain why he thought the subjects were willing to cause obvious pain in the way they did, both here and in the original Milgram experiments. Burger replied, "The power that [the experimenter] has in this situation is that in part he's an authority figure. We're all trained a little bit to obey authority figures. But also he's the expert in the situation. He's the one who knows the machine." The major finding, according to Burger, was this: "We began to see a pattern. The majority of those who continued to the highest shock refused to take responsibility for the effects on the learner." One subject Cuomo interviewed said, "It's not my responsibility. We're volunteers." Another said, "I'm not in control. I'm just a conduit," while a third remarked of the learner, "He chose to be there to take the shocks. His choice." Cuomo asked Burger if he could predict after a while who would shock to the end and who would not. Burger replied, "It was impossible to tell. I tried to guess. I tried to look for signs, body language, anything to guess who's going to continue and who's going to stop. And this tells me that it's not [that] there are some kind of people who are different from the rest of us. It tells me that probably all of us are capable [of going the whole way]."
There were, of course, some who refused to go on, who walked out. One of these subjects, forty-six-year-old Fred, worked for a software company and was a self-proclaimed nonconformist. Initially Fred did not stop even though the learner screamed out when shocked. To explain that, Fred commented, “It’s two consenting adults deciding to do this. Not until at some point one of us has to say stop.” Cuomo asked, “So you put all of the responsibility about when is the right quitting point on the learner?” Fred: “For quite a lot of this, yes. There is going through my head, ‘How long are we going to do this?’ I am waiting for the other person to say stop. I don’t know where I would just have said stop on my own.” At 150 volts, the learner said he wanted out, and at that point Fred quit despite the urging of the researcher to continue. Then transpired the following exchange between the researcher and Fred:
Fred: He said that’s all. We’re not doing any more of this.
Researcher: I want to remind you that the shocks may be painful but they are not harmful to him.
Fred: It doesn’t matter. I’m not a sadist. He has said no more. He is not agreeing to this. I’m not agreeing to this.
Researcher: The experiment requires that you continue. The next item is . . . [The researcher reads the next series of words]
Fred: The experiment allows me to walk out at any time, and I will walk out if you want to push this.
Researcher: While that is correct, it is absolutely essential that you continue. Okay, remember whether the learner likes it or not—
Fred: The learner doesn’t like it. I’m walking out.
Researcher: We can discontinue with this patient. That’s just fine.
Fred: Okay. When somebody says no, it’s no.
Researcher: Okay.
Fred then walked out. In the exit interview, Fred reflected upon his words and actions: "It was obvious at this point [that it was] wrong to go on. It's not even an intellectual debate." And Fred's parting words pierce to the heart of the matter: "Do you have a brain? Shouldn't you use it, too? Somebody walks up to you and says the blackboard is white and they're wearing a lab coat, do you believe them? No, you've got your own eyes." Another subject who resisted and walked out had a similar comment on his own refusal: "I would have felt that I would have hit the switch that had killed him. If he died, I would feel a deep responsibility." Another subject who refused answered the researcher's comment "You don't have a choice. You must continue" with these words: "No. I do have a choice. I'm not continuing." And another subject responded to a similar statement by the researcher in this challenging way: "You gonna hold me down?" Those who showed some resistance early and those who felt directly responsible for their actions were the ones most likely not to continue to the end and shock at the highest level. Primetime drew the connection to the Nazis obliquely by commenting that the excuse of "just following orders" has never held up in court, and then posed the question of why the military guards at Abu Ghraib abused defenseless prisoners. Here Primetime inserted a clip of Private Lynndie England, sentenced to three years in prison and dishonorably discharged from the military for her crimes at Abu Ghraib. England told the camera: "We didn't feel that we were doing things we weren't supposed to because we were told to do them."
Is the lesson of the Milgram experiment the danger of unreflective obedience to authority? Certainly Milgram took it that way. But there has been discussion since the initial experiments about what they actually document. Certainly they point to the overriding importance of situational processes over individual character and free will decision making. But is it all to be chalked up to obedience? I have already quoted Jerry Burger as pointing to the role of the expert in defining the situation and its rules. And Burger also emphasized the matter of the diffusion or assignment of responsibility to someone else other than the actor. Phil Zimbardo points out that in the Milgram experiments he was always struck by the difference between dissent and disobedience. While many participants dissented and grew upset at giving electrical shocks to the learner, very few were actually disobedient and stopped. This held true in the Stanford Prison Experiment as well, Zimbardo says. Commenting on conversations that he had over the years with Milgram, Zimbardo mentions that he once raised the question whether any of the dissenters ever ran to aid the victim in the other room behind the wall. Milgram said it had never happened. Zimbardo notes that this failure points to an even more chilling conclusion than Milgram’s: “That means that [Milgram] really demonstrated a more fundamental level of obedience that was total—100% of the participants followed the programmed dictates from elementary school authority to stay in your seat until granted permission to leave.”4
Obedience must be a much deeper, more ubiquitous and pervasive phenomenon than it seems. The responses of the subjects perhaps give us the first hint that we do not yet understand what's really happening. For example, none of them asked whether they had permission to stop. Instead they raised the question of their responsibility. In other words, they implicitly understood their actions as part of a collective endeavor that was primarily not their own but to which they were contributing in a tangential way rather than a central one. This tells us something very important about the nature of most action, perhaps all action—namely, that it is tacitly understood as contributing to ongoing projects and institutions and social and cultural worlds not of one's own making. We take where we sit in the given context to which we are contributing to define, implicitly, the level and nature of our responsibility. And we think of responsibility as distributed within the whole and belonging especially to its leaders. We don't think of our actions as morally discrete and as free choices for the most part, but as ongoing expressions and continuations of cultural and societal projects and institutions that began before us and continue after us. Whatever we mean by obedience seems to involve this pervasive kind of embeddedness and contextualization.
In Zimbardo’s analysis of the social and situational forces operative in the Stanford Prison Experiment; in Robert Jay Lifton’s analysis of the Nazi doctors; in Hannah Arendt on the banality of evil; in the now vast corpus in multiple languages of Holocaust memoirs, fiction, and films; in the sizeable literature on the psychology of genocide, of Hitler, and of the Nazi perpetrators; and in the more general social psychology literature we find some disentangling of the threads in the web of obedience. To a selection of these we will now turn. Our aim here is not to come to a definitive or even fairly reasonable explanation for the evil of the Holocaust. Our aim, as it has been all along in this chapter and in the previous one, is to use the Holocaust and both perpetrators and rescuers as examples of the social character, social shaping, of action, bad and good. Here our purpose is to begin to tease out some of the aspects and properties of this social character and determination and in so doing also expose that it pertains to business as usual and not just life in extremity, whether good or bad. The lessons of the Holocaust are our lessons as well.
Social psychologists have observed the general cultural penchant—which infects in their view not only lay beliefs about behavior but also many professional and academic disciplines—to attribute motivational efficacy to individual character traits and free will decision making rather than to group social processes. They call this the fundamental attribution error. We look at individuals as autonomous and as internally motivated—that is, people are motivated by character, reasons, and will, things that drive us consistently whatever the situation turns out to be—and we expect that different individuals will respond differently in the same situation. But we’ve learned that behavior is far more consistent across individuals within a given situation and far more varied between different kinds of situations than we tend to think. People do act consistently, but this consistency occurs across similar situations, not across different situations.5 Furthermore, that consistency is more often due to situational factors than to stable character traits or free will independent decision making.6 The places, social settings, and historical contexts we inhabit: these shape our interpretation of situations and of rewards (and punishments) and drive our behavior.7
The Milgram experiments sparked analyses of and reflections on the nature of obedience that introduced levels of nuance and depth that help explain Milgram's findings. The work of two of Milgram's students, John Sabini and Maury Silver, is particularly perceptive and noteworthy. They introduce into the analysis of obedience questions about the role of hierarchical position and hierarchical institutions, of standards of civility and politeness, and of the presumed legitimacy of institutions and practices (in this case, those of science). They also bring to the analysis further distinctions in the mode or style of obedience and disobedience, raising the question whether certain kinds of disobedience were in some ways actually modes of obedience.
Sabini and Silver once asked Milgram if any of the minority of subjects in the experiments who ultimately refused to continue shocking ever confronted Milgram or his research associates with the morally problematic character of a psychology experiment that caused pain. They learned that, just as Zimbardo had found, none of these subjects tried to rescue or help the learner, nor did any raise a moral qualm. Intrigued by this finding, they hypothesized that there are situations in which intervening with a direct rescuing action implies a moral reproach. Both intervening to save the apparent victim and expressing disapproval would have amounted to a moral critique of the experiment and a reproach of the experimenters, and no one ventured to do that. Even considerable emotional distress on the part of a subject in shocking the learner did not bring about the saving action or a moral confrontation.8 But why? Sabini and Silver suggest that when a rational and seemingly commonsense response is absent, there must be an inhibition in place that precludes it. What is the nature of this inhibition? A clue was that when the researcher was absent from the scene but phoned in his instructions, the rate of obedience fell from 65 percent to 20 percent. One subject made a remark to the researcher that opened a window into what was going on, saying, "I don't mean to be rude, but I think you should look in on [the learner]." Moral reproach delivered to someone's face would be considered rude. Politeness was trumping moral concerns. Civility was trumping morality. When the researcher merely phoned in his instructions, noncompliance felt less risky, since there was no one present to defy or to shame. This is a chilling discovery.
But what do politeness or civility, some form of the rules of etiquette in social situations, involve? What do they guard, and what do they guard against? Sabini and Silver point out that moral reproach is deemed appropriate from a hierarchically higher person to a hierarchically lower one. Someone with greater authority may reproach someone of lesser authority—parents may reproach children, clergy may reproach the congregation from the pulpit, teachers may reproach pupils, bosses may reproach employees. In matters of moral reproach, people defer to the relevant and proper authority: an example that Sabini and Silver propose is stopping someone from smoking on a bus. Most would defer to the driver to enforce the rule. Another consideration here is who has the right and authority to define or label the behavior. We all think harming others is wrong, but is a given situation a case of real harm? And who decides? Here doubt enters in, and self-doubt, according to Sabini and Silver. In novel circumstances, when confronted with an authoritative insider who has expressed no doubt about the situation and defines it counter to common sense, one’s own doubts and self-doubts about one’s counterinterpretation can be paralyzing, they suggest. The inhibition against confronting someone with a moral reproach is matched only by the joy in moral reproach in absentia, namely, gossip. But if the person involved learns of that gossip, it causes acute embarrassment. So perhaps now we have gotten down to the heart of the matter: it is the fear of embarrassment that tyrannizes the social situation, making cowards of all but the best of us—and perhaps sometimes of all of us, if we look at the discouraging evidence from the Milgram and Zimbardo experiments. But there is also the hierarchical dimension: valuation for the most part is a top-down affair.
Of course, there is more. In "On Destroying the Innocent with a Clear Conscience: A Sociopsychology of the Holocaust," a chapter in their book Moralities of Everyday Life, Sabini and Silver try to sort out the components of the obedience to authority that the Nazis induced with such extraordinary effectiveness. After the state-sponsored mob violence of Kristallnacht, the Nazis concluded that whipping up shared rage was to be replaced by the bureaucratization of evil: organized routines and a hierarchy of responsibility. This was the institutionalization of murder within hierarchies responsive to the will of the ultimate authority. Sabini and Silver propose to treat the Holocaust as "the social psychology of individual action within the context of hierarchical institutions." So they suggest that the face of the Holocaust is Eichmann, the colorless bureaucrat, rather than the raging rioter of Kristallnacht.9 In illustration of the point they quote the diary of Johann Paul Kremer, a physician at Auschwitz:
September 6, 1942. Today, Sunday, excellent lunch: tomato soup, half a hen with potatoes and red cabbage . . . In the evening at 8:00 hours outside for a Sonderaktion [“special actions,” i.e., mass murders, including the burning alive of prisoners, especially children, in pits, which was particularly practiced at Auschwitz].
September 9, 1942. . . . Present as doctor at a corporal punishment of eight prisoners and an execution by shooting with small caliber rifles. Got soap flakes and two pieces of soap. . . . In the evening present at a Sonderaktion, fourth time.10
The authors say that what needs to be explained is how murder came to “have the same importance as soap flakes.” It seems to me that the diary itself is not naive but a kind of aggressive enactment and performance of the rationalization of the bureaucratization of evil.
In the Milgram experiment subjects often asked who was responsible for the administering of shocks (and for the potential harm), and they were always reassured by the researcher that he accepted full responsibility; that would inevitably free them to go on shocking. It is this anomaly of the feeling of being let off the hook while still performing actions for which one is held responsible as actions of one’s own that gives us a window into the nature of action in a bureaucratic and hierarchical context. In an organization, section heads are responsible for planning, but workers are responsible only for carrying out the plan. The boss is responsible for the plan and subordinates only for the execution. The internal rules and structure of command within a hierarchical organization obscure and subvert individual moral responsibility. Moreover, the bureaucratization of evil separates the enactment of it from the intent to do it. So large numbers of people can be engaged in harm or even murder when their basic intent is to do a job well for an organization they respect rather than to be murderers. Consequently, they don’t feel like murderers even if murder is what they’re in fact doing—and doing it as the main goal of their routine paperwork, for example, like those who had to register all the Jews in a district and make up comprehensive lists, rather than as a kind of collateral damage, inadvertent harm in the pursuit of a good aim. Milgram’s subjects, even those who went all the way and delivered what they thought were electric shocks that perhaps even rendered the learner unconscious, no doubt saw themselves primarily as contributing to science rather than as torturers. And most of those who shocked to the end exhibited considerable distress, clearly not liking to inflict pain, but perhaps seeing it as an unintended concomitant that they had to put up with. (Here we see the source of the perpetrators’ casting themselves as the victims.) Sabini and Silver suggest that “bureaucracies have a genius for organizing evil” because everyone except the deviser of the plans and projects is let off the hook and feels free of responsibility, their personal inclinations and intentions having been checked at the door. In Nazi Germany the entire society was like one enormous bureaucratic system directed toward the centralized purpose of murdering the Jews.
When a bureaucracy is entirely geared to an immoral aim, the normal relationships that we expect to hold between action and moral values, between temptations and desires, are subverted. Under those circumstances people’s perceptions of right and wrong become corrupted. For example, moral duty, conscience, becomes attached to doing evil and temptation to doing good. In the Milgram experiment, the subject is tempted not to shock the learner but resists temptation and shocks him. Sabini and Silver point out that it is not a desire to please the researcher that is operative here but loyalty to the institution of science and a commitment to carrying out the experiment. So the subject is not obeying the experimenter as such but following what he regards as the legitimate dictates of science. Not even defiant subjects questioned the experimenter’s morality; indeed, they often apologized for ruining the experiment.
The presumption of the legitimacy of institutions is further strengthened by the fact that people are gradually socialized into them. Newcomers are brought along and learn how things are done. Moreover, everyone else around the newcomer seems to accept how things are done and what they mean, so they represent a norm and a claim to legitimacy that cannot be questioned. Here we’re back to the issue of moral reproach and embarrassment. Those who know the ropes are seen to go along, whereas a newcomer has little authority or legitimacy in offering a rebuke or a criticism. Those already there own the system—and clearly this had an effect on the Milgram subjects. In the Stanford Prison Experiment it was another psychologist who came along, looked at the situation, and blew the whistle. Although it was certainly brave of her to do that, she also had the credentials, the professional authority, and the knowledge to be able to assess the legitimacy of a psychological experiment. That happens rarely.
The hierarchical institutional context may have been the most important factor in the social psychology of perpetration during the Holocaust, and in the Milgram and Zimbardo experiments as well. But hierarchy is not the only word on the subject. In what follows I will touch briefly on a couple of the processes operative in extreme situations but focus mainly on those that are found both in normal life and in extremity, for ultimately our aim here is not to explain the Holocaust but to discover what we can, in general, about when, why, and how people do good and commit evil.
In The Lucifer Effect Philip Zimbardo examines the social forces at work in the Stanford Prison Experiment. The first dynamic that he saw operative was the desire for social power, the desire to be part of the in-group and not the out-group. To be part of the in-group gave the guards instant social status and an enhanced sense of self. The other side of the coin was the terror of being left out, the fear of rejection. Zimbardo says that this motive is so powerful it accounts for all kinds of things we see around us, not only cults but also bizarre fraternity rites and even the lifelong climb up the corporate ladder. It is a profound inner motive that ties each of us to the group. The next motive that Zimbardo cites is the one that I have repeatedly called attention to: the social construction of reality, the authoritative interpretation of the situation. This includes Sabini and Silver’s presumed authority of institutions.
Zimbardo remarks that the ancillary players, the "onlookers," those who came in for a particular purpose, "all accepted my framing of the situation, which blinded them to the real picture."11 Zimbardo recasts the "presumed legitimacy" that we grant institutions in cognitive terms, as a matter of how situations are interpreted. We accept the interpretations of situations that the authoritative definers, the higher-ups in the hierarchy, offer, and which institutions and situations embody and bring to life. Just as Milgram's lab-coat-wearing researcher was given the benefit of the doubt, so was Zimbardo himself. Here we have a lesson in the sociology of knowledge or belief: we do not routinely assess and define situations and contexts for ourselves. They come ready to hand.
To make matters worse, coupled with the social nature of our understanding and the social pressures that drive us, we have, he says, “egocentric biases” that blind us to this very fact. So not only are we likely (or inevitably, to a large extent) to be driven by social forces; we are just as likely to deny that they exist and to be convinced that we are independent creatures making personal decisions. And this blindness further intensifies the power of the social forces upon us, making us even more vulnerable to them because we are so sure that nothing of the kind could ever affect us. We overestimate our own power and underestimate that of situations. We fail to see our behavior or that of others within its situational context (the fundamental attribution error). It is the tendency to attribute to individuals, to their character and their “free” decision making, what is actually attributable to the features of situations. We attribute the source of action to people’s interior qualities and autonomous will. We think of the individual as the “sole causal agent” when it is the entire context that produces the actions of individuals.12 We think of our minds as of our own creation and under our control, but what we think and believe are largely socially, situationally, and culturally-historically determined. I am always amused by the irony that many of my students express nearly indistinguishable beliefs about their religious attitudes and convictions but always preface their comments with the caveat that these are their very “personal” beliefs. They never seem to catch the irony that they are all expressing precisely the same sentiments and attitudes. So I have come to realize that what they mean is that they are expressing something about themselves that is usually kept private—we think of religion as a private affair in America—and is not only an intimate part of their identity but an important one as well. But what is salient for us is that the kind of identity and self-understanding the students are expressing as “private” is in fact profoundly socially constructed and determined, for they are all expressing the same position. They are revealing that they are all part of a moment in a particular place with a certain history and a certain present. It has become their inner, mental world, not just the outer world, and has permeated their sense of self. We are all like this. We all think that we mostly originate our thoughts and actions and that others do, too, even though we and they are profoundly embedded in and filled with the environment.
To illustrate the presumed legitimacy of institutions, Zimbardo cites another experiment in obedience, a variation on the Milgram model, conducted in 1972 by psychologists Charles Sheridan and Richard King. In this experiment, college students were told to give shocks to a puppy to train it when it failed to perform a task (a task that was, in fact, impossible). Fifty-four percent of the men and all the women obeyed to the highest level of shock. Zimbardo also cites a hospital field experiment, conducted by psychiatrist Charles Hofling and his colleagues, that tested the "social power of physicians." That experiment had even more chilling results. Nurses who were told, during a call from an anonymous staff physician, to give patients a dose of a drug (actually a placebo) at twice the maximum safe level listed on the label on the bottle complied in twenty-one out of twenty-two cases. Since the order came from a legitimate source, it was obeyed, even when clearly wrong.13
The presumed legitimacy of hierarchical position also plays out in labeling. The effects of authoritative social labeling on educational outcomes have been well documented since the 1940s. This is another case of authoritative social interpretation. "The power of authorities," Zimbardo comments, "is demonstrated not only in the extent to which they can command obedience from followers, but also in the extent to which they can define reality and alter habitual ways of thinking." He cites the famous experiment of a third-grade teacher, Jane Elliott, in Riceville, Iowa, in 1968. Elliott, wishing to teach her homogeneous class of rural white farm kids about prejudice and the importance of tolerance, devised an experiment. She divided the children in her class into two groups, those with blue eyes and those with brown eyes. She labeled the blue-eyed group as superior and then watched what happened. Within a day, the brown-eyed children, labeled inferior, were doing poorly at schoolwork and were becoming sullen and angry. They called themselves "sad," "bad," "stupid," and "mean." The next day, Elliott reversed the labels, designating the brown-eyed kids as superior. Bullying, aggression, and abuse were now directed by the brown-eyed kids at the blue-eyed kids. The social labels were self-fulfilling prophecies, influencing future behavior rather than capturing past behavior.
Experiments in arbitrary authoritative labeling have been shown to induce exactly the quality being tested: for example, in IQ gains and losses, in promoting good behavior in individual children and in groups, and in improving math performance.14 In the IQ labeling experiment, teachers treated the positively labeled children differently from the others, and their subtle expectations affected the students’ performance. In the other cases, the students lived up to the (arbitrary) labels assigned to them.15 The power of labeling is so strong that social psychologists have reached the conclusion that “the consequences of academic disappointment can be manipulated by altering students’ subjective interpretations and attributions.”16
Like labeling, the power of social role and social modeling has been shown to trump character and free and independent decision making in moral action. Philip Zimbardo suggests that Browning’s “ordinary men” were, like the student prison guards in the Stanford Prison Experiment, influenced by social modeling, guilt-induced persuasion, and pressures to conform to the group. Ervin Staub, a Holocaust survivor and a psychologist who has devoted his career to studying genocide, comments, “Being part of a system shapes views, rewards adherence to dominant views, and makes deviation psychologically demanding and difficult.”17 The roles people were assigned induced in them changes of values in keeping with their actions. Intensive interviews with former members of the SS have shown that although those with a tendency to authoritarianism were attracted to the SS, these men committed violent acts only in the Nazi period and neither before nor after. They were violent only in the situation that sanctioned and institutionally supported violence.
The members of the SS have been classified into three categories: (1) those who enjoyed and identified with their positions and roles (often extreme ideologues) and gladly engaged in brutality, (2) those who adjusted to their roles and did their jobs, often changing their views to fit their actions, and (3) those who abhorred and were repulsed by what they were supposed to do and tried to lessen the burden on the victims but did not otherwise protest.18 Philip Zimbardo and Robert Jay Lifton observed exactly the same threefold categorization in the student prison guards and in the Nazi doctors. Individual differences seem to have played a role in the way that those in each particular category approached their assigned roles and in their type of identification with those roles, but in no case did those differences result in a challenge to either the authority or the legitimacy of the system or the persecution. Social forces submerged and overrode individual differences. Robert Jay Lifton coined the term “total situation” to designate conditions of complete immersion in a context in which no alternative point of view or action is allowed. The Milgram experiments, however, can be seen as posing a challenge to Lifton’s analysis, for it didn’t take a totalizing context to produce extraordinary compliance and the willingness to harm.19
Zimbardo suggests that conformity and obedience to authority get us only so far in explaining why people commit evil. There are other important social processes at work, ones we find documented again and again in the literature of the Holocaust: dehumanization, deindividuation, and bystander disengagement. Experiments have shown that being anonymous increases a person’s willingness to harm others. In another experiment designed and carried out by Philip Zimbardo, one in which women college students were to deliver painful electric shocks (of course not real) to other women (research confederates) under the guise of studying the effects of stress on creativity, half the volunteer subjects were randomly assigned to be completely covered by robes and hoods and had only number tags instead of name tags; the other half were randomly assigned to wear the same outfits but had name tags. The women were assigned to shock two other women from behind a one-way glass and to do so in groups of four while Zimbardo administered a creativity test to the victims. Unlike in the Milgram experiments, there was no researcher egging them on, and the subjects could see the confederates whose creativity was allegedly being tested. Four volunteers were tested at a time, but each was put in a separate cubicle, so there were no social conformity pressures—all would give shocks at the same time, so if one refrained, no one would know—a subject could simply slip into being a bystander rather than a perpetrator, and there would be no consequences, as with some of Browning’s ordinary men. The main question raised was how long each would hold down the button to shock, that is, the duration of the shock. In the introduction to the experiment, the subjects had been told that one of the women to be shocked was nice, but the other was labeled “bitchy.” First one research confederate was to be shocked, and then she was replaced by the other. Both were to give convincing performances that they were in pain. Both were to receive a round of twenty shocks.
None of the subjects refrained from delivering shocks. But the subjects who were anonymous, deindividuated, delivered twice as much shock to the victims as did the women subjects with the name tags, and they shocked the two victims equally. The individuated women, on the other hand, shocked the woman labeled “pleasant” for less time than the woman labeled “unpleasant” and even decreased their shocks to the pleasant woman over time. The anonymous women increased the duration of the shocks they administered over the course of the twenty trials, which Zimbardo attributes to an “upward spiraling effect of the emotional arousal . . . an energizing sense of one’s domination and control over others at that moment.”
Many other experiments have corroborated the effects of individuality masking and role anonymity on behavior—in the Stanford Prison Experiment, the guards' use of reflective sunglasses and uniforms illustrates the point. Personal responsibility and accountability were reduced and group immersion enhanced. A sense of frenzy, the pleasure of the feeling of power and dominance, was allowed to emerge as well, so in this kind of situation "aggression becomes its own reward." In a cultural anthropologist's comparative study of different societies and their practices in war, the societies in which the men changed their appearance to become anonymous when going to war were the most violent, with 80 percent of them brutalizing their enemies.20 Deindividuation creates a situation in which immediate social forces and inner primitive emotional states have an even greater hold on the individual than they do under normal circumstances. Responsibility is diffused, and self-reflection, as well as social evaluation as an individual, is muted. As a result, action becomes less reflective and more immediate, and one's usual constraints dissolve. Vulnerability to social cues is heightened. These are some of the conditions found in extreme situations—genocide, massacre and torture, brutalizing war.21 Yet there is little room for comfort when we see how little inducement it took for college students, merely decked out in robes and hoods and given a good excuse, to get carried away and torment similar women in a lab setting. These were hardly the conditions of a My Lai, an Abu Ghraib, or even a Stanford Prison Experiment.
Moreover, although one of the women in the experiment was labeled as "bitchy" or "unpleasant," dehumanization of the victim, which is the nearly ubiquitous condition of all torture and genocide (along with the anonymity of the perpetrators), was absent. We have seen how the Nazi genocide involved not only millennia-old theological anti-Semitism but also a biological ideology of dehumanization that characterized Jews as vermin, as a social cancer, or as a genetic disease that threatened the species. In Southeast Asia, American soldiers referred to innocent Vietnamese civilians as "gooks." In the American South lynching of "niggers" continued into the 1960s. Dehumanizing labeling is commonly deployed between an in-group and an out-group. Humanity and fellow human feeling, empathy, apply only within the group, not between groups. To the out-group are attributed animal-like characteristics. Along with this phenomenon, now referred to as out-group infrahumanization, researchers have identified what they call a self-humanization bias, which is to say that we tend to see the out-group as less than human and attribute full humanity to ourselves alone.22 And social psychologists point out that any out-group can function as a scapegoat as long as the out-group is weak and vulnerable and can't retaliate.23 After my experience teaching and researching the Holocaust, at some point I realized that all it took to be a perpetrator—not a designer of the evil but a participant—was not hatred or rage but simple contempt. We are not aware of how potent merely looking down on others, curtailing our empathy because we don't consider them as fully human as ourselves, can be. Dehumanization is a particularly strong case of the labeling effect. So here again we have a normal kind of social process used to extreme effect. It is perhaps as surprising that the seemingly minimal labeling power the teachers in the IQ experiment were given over arbitrarily selected children had such a profound effect on the children's subsequent performance as it is that dehumanizing labels under genocidal conditions eased the devolution into atrocity.
What is surprising about the social forces that we have been examining is their potency to shape behavior. Small social variables appear to have enormous effects. Such is the case with a social force that one might think of as basically neutral but turns out to have profound effects on how the status quo is maintained and perpetuated: the passivity of bystanders. In the light of the famous Kitty Genovese case in Queens, New York, in which a young woman was stalked and stabbed as people allegedly looked on from their apartment windows and elsewhere but failed to call the police in time, social psychologists initiated a number of experiments to try to explain what had gone wrong. Were all these people merely callous city dwellers, or was there something in the situation that made ordinary people somehow freeze? That was the question raised by social psychologists Bibb Latané of Columbia and John Darley of New York University in a number of studies of this phenomenon. They found that the more people who witness an incident, the less likely it is that anyone will intervene, and they chalked that up to a diffusion of responsibility—everyone thinks someone else will help. In addition to a fear of violence there were also fears of doing the wrong thing, looking stupid, and what Zimbardo terms "an emergent group norm" of passive inaction. Psychological studies have shown that the inaction of bystanders has no significant relationship with their personality characteristics. When few people are present, studies show, New Yorkers and other big-city residents are just as ready as anyone else to intervene in an emergency. Zimbardo suggests that in the case of bystander apathy we are again confronted with the issue of the interpretation of situations: here, too, "we accept others' definitions of the situation and their norms, rather than being willing to take the risk of challenging the norm and opening new channels of behavioral options."24
Lee Ross and Richard E. Nisbett’s The Person and the Situation: Perspectives of Social Psychology tells us that discoveries in empirical social psychology have reached a point where they can now be gathered together and put to use in other areas. There are three central contributions, Ross and Nisbett say, that are germane to broader issues of contemporary social, political, and intellectual concern and should be brought to the attention of people working in those areas. The first is the power and subtlety of how situations drive behavior and the extent to which even small manipulations of a situation overwhelm individual character differences and free decision making. The second they call a refinement of the first: situations are driven by subjective interpretations rather than by objective brute facts about the world. People respond to the construals, to interpretations. The third important discovery of social psychology is that situations are not atomistic or static but are subject to dynamic processes of change and inertia. This is true both within social systems and within the individual’s own cognitive system, so changing one dimension of a system can cause ripples throughout until a new state of stability is reached. Alternatively, at times changing a single element will have little effect because it is overwhelmed by homeostatic processes. So while there is such considerable evidence from experiment after experiment that small situational changes produce large results, there are other cases in which the status quo is very hard to budge.
An example of the latter is a wide-scale intervention done in the 1930s, the Cambridge-Somerville Youth Study. Psychologists tried to improve the outcomes of about 250 working-class boys in Cambridge and Somerville, Massachusetts, who were considered (by teachers, judges, or schools) to be at risk for delinquency and criminality. They entered the program from ages five to thirteen and then continued in it for five years. The interventions were extensive: social workers visited the boys’ homes twice a month and provided assistance, including involvement in family conflicts, in one-third of the cases; academic tutoring was provided in over half of the cases; 40 percent of the boys received medical or psychiatric care; social and recreational programs were provided, and the boys were connected to youth groups, YMCAs, Boy Scout troops, and summer camps. The massive intervention was also a rigorous study with random assignment to intervention or to control groups and follow-up studies continued over the next forty years. Despite the positive feelings of the social workers and the many positive responses of the participants, this multifaceted intervention was a complete failure. There was no difference between the boys in the study and in the control group in terms of juvenile offenses, adult criminality, occupational success, health, mortality, or life satisfaction. In the few areas where a significant difference could be found (for example, in rates of alcoholism, adult crime rates, and professional white-collar status), in fact, it favored the control group. One source of the negative outcome might have been due to the labeling effect: these kids ended up stigmatized and lived up to the stigma. Yet all in all, the results of the study suggest that human psyches are not as vulnerable to family traumas as we tend to think, and that the community is a stronger and ongoing influence on individuals, on their deviance as well as on their success, one that even early sustained intervention is hard put to modify.
The interventions in the Cambridge-Somerville Youth Study, while substantial, were nevertheless directed at improving individuals’ personal and familial lives while the social and economic context of those individuals remained unchanged. This was a massive attempt to shore up individuals within a context rather than to transform the context, and it failed. Broader contextual factors seemed to be far stronger than the shoring up of individuals could overcome. Perhaps what was lacking was a broader public health model rather than an individual and family psychology model. But another mistake seems to lie in what social psychologists call the choice between distal (early) and proximal (late) interventions: there was a bias toward assuming that there was a formative period during which an intervention had to be made in order to improve the future course of these children’s lives. Yet the evidence from Head Start, from the Cambridge-Somerville Youth Study, and from studies of later, contemporaneous interventions is that proximal situational interventions are more powerful and effective than distal ones. The current situation turns out to be more powerful, while the legacy of early learning is less so.
There are several good examples of the success of relatively small (certainly in comparison to the Cambridge-Somerville project) late situational interventions. In the early 1970s, Uri Treisman, a professor of mathematics at the University of California at Berkeley, noticed that African American students in his classes were failing introductory math in high percentages and were thus being precluded from going on in science and medicine. He observed that Asian students, noted for their success in math and science, studied together in groups, while African American students tended to study alone. Studying in groups was more productive because someone in the group was likely to have a solution to a given problem and could help the others. Treisman persuaded a large number of entering African American students to participate in a special honors program in math that featured group study (thus Treisman included positive predictive labeling as well). The African American students who participated earned grades on average equal to those of white and Asian students, and their dropout rates plummeted.
The discovery that small, concurrent interventions in present situations tend to be more powerful in changing people’s behavior than carryovers from early learning has important implications for morals and the teaching of morals. We tend to think that children learn morals early and that, if taught well, they ought to be set for life. But the research seems to indicate otherwise: the current situation, how it is interpreted, and our investment in it can, to a large extent, override the past in determining our social behavior. This helps to answer some of the enduring questions about the perpetrators of the Holocaust: Weren’t they Christians? Didn’t they have a moral education? Didn’t they realize their actions violated basic moral codes and norms? The Nazi situation and its Nazi interpretation, for most, overrode earlier learning and values. And that turns out to be business as usual rather than an aberration. It’s just that the contrast is not usually as stark and as bleak as it was in the Holocaust. So it’s the Holocaust again that has brought to our attention, perhaps as never before, the overwhelming impact and influence everyday situations have upon us. Just as decision making is not independent of context or freely originated, and just as moral character is not stable across situations (as we shall soon see) and hence is not predictive of our behavior, so, too, early learning does not necessarily leave an indelible mark. We are creatures of present social contexts and situations far more than we realize.
It was Kurt Lewin, one of the founders of social psychology, who developed group discussion techniques and democratic procedures as powerful social interventions to change individual behavior. Lewin, a refugee from Hitler’s Germany, focused in his experimental work on the power of the immediate situation. His findings suggested that the peer group is the most important force in inducing or restraining behavior in the social context. In an early study he and colleagues showed that manipulating leadership style in recreation clubs to institute authoritarian or democratic group social environments transformed the relationships of the young men who were members of the clubs.25 When an authoritarian leadership style was introduced, the young men scapegoated each other and passively submitted to authority figures. With a short-term manipulation of the environment, Lewin had quickly produced the array of responses that Adorno and colleagues had argued were due to the “authoritarian personality.” In a slightly later study, nutritionists attempting to shift entrenched eating patterns toward more available organ meats during the wartime shortages of the 1940s had failed to accomplish their goal through lectures, pamphlets, and moral exhortation to eat organ meats to help the war effort. Lewin instead introduced small discussion groups of homemakers with trained leaders; the women talked together about how to implement the program of alternative foods and new recipes, and the discussion was followed by a show of hands of those willing to commit to the change. While the success rate for the nutritionists’ initial strategy of lectures, pamphlets, and moral exhortation was just 3 percent, the group discussions elicited a 30 percent success rate. Other interventions by group discussion and personal commitment in a group setting were also successful, for example, getting rural mothers to give their infants cod liver oil. Ross and Nisbett explain that what Lewin was doing was creating a social reference group for the new norm of behavior and creating a consensus in that group about the behavior. At the same time he was reducing the social pressure of the existing group and its norm, and it worked. Earlier, in the 1930s, Lewin had pioneered participatory industrial management and workgroup decision-making procedures and brought them to Japan, where they were widely instituted in industry—techniques that returned to the United States only in the 1970s, reintroduced as “Japanese” management techniques.
One of the operative components of the small group method was the presence of a leader who serves as an appropriate social model. While social modeling is enhanced by the model’s characteristics—high status, attractiveness, power—even the mere presence of a model has been shown to be powerful in inducing people to engage in a desired behavior. In the Milgram experiments the presence of a researcher demonstrating how to present the series of words and use the equipment had a great effect on the volunteer subjects’ compliance, whereas when the researcher gave his instructions only by phone, compliance dropped sharply. And the presence of research confederates who pretended to be volunteer subjects and refused to shock the learner produced 90 percent noncompliance.
Another telling example of how the behavior of individuals could be changed by intervention in group structure and process rather than by addressing individuals directly or by depending upon early moral or intellectual training is the intervention devised by Elliot Aronson, a professor of social psychology at the University of California at Santa Cruz, to help prevent future massacres like that at Columbine High School in Colorado in 1999, when two disgruntled students went on a shooting rampage, killing twelve students and a teacher and wounding twenty-three other people before committing suicide. The first step for Aronson was to try to figure out what had gone wrong. Why did Columbine happen? Was it a failure of overall morality in the schools, as a congressional bill that would have allowed the posting of the Ten Commandments in the nation’s schools seemed to suggest? Was the answer to beef up moral education in the schools (we saw that the character education movement got a great boost after Columbine)? Was it to get violence out of the media, TV, movies, songs, the Internet? Was it to increase surveillance of kids in schools, identify problem kids early, and remove them from the school setting or subject them to intensive therapy? Was it to make all the kids into spies and informers to point out to the school administration any weird or suspicious behavior or dress? Was it to create more rules and codes of proper behavior, ensuring better obedience and respect for school authority? These were all answers that were proposed in the wake of the Columbine massacre, but Aronson points out that they have little evidence to support them even though they might feel like the right thing to do. These analyses and the answers they lead to, he says, rest on “emotion, wishful thinking, bias, and political expediency.”26
Aronson fully acknowledges that the actions of the students who massacred others were pathological and horrific. But in studying high schools in various parts of the country, Aronson concluded that the main cause of the massacre was a general atmosphere of exclusion that was not unique to Columbine but could be found in many of the nation’s high schools. This prevalent social atmosphere, he says, is toxic. The perpetrators were “reacting in an extreme and pathological manner to a general atmosphere of exclusion,” he says. From his classroom research and from extant social psychology research on American schools, he found that the social atmosphere of most American high schools is competitive and cliquish, something that most students find unpleasant, difficult, distasteful, and humiliating.27 For some students, he says, it is “a living hell.”28 It is an atmosphere in which taunting, rejection, and verbal abuse are prevalent. School shooters seek fame as well as revenge for the humiliation of being relegated to the bottom of the pack. To prevent another Columbine, Aronson proposed to show how the atmosphere in high schools could be transformed from exclusion to inclusion, and from hierarchical rejection to mutual support, through a two-pronged approach: by introducing strategies for resolving conflict and dealing with anger, and by teaching empathy, on the one hand, and by restructuring the classroom itself from competitive learning to cooperative learning, on the other. He devised a cooperative learning system, called the Jigsaw Classroom, that structures learning so that all students together aim at a common goal and everyone stands or falls together. It gets around the standard problem with group learning in a competitive school environment, which is that the motivated, high-achieving kids do all the work for everyone in the group. The effect of the Jigsaw Classroom is also to create a kind of tolerance and even appreciation of differences that moralizing is not capable of producing. Aronson concludes that respect cannot be legislated and that forcing students to obey rules only makes them go through the motions but doesn’t change their attitudes. He has found that what does work is less direct moralizing exhortation, less call for individual moral decision making and virtuous character, and more structural intervention to introduce common goals and common strategies to meet them. He explains, “You don’t get students from diverse backgrounds to appreciate one another by telling them that prejudice and discrimination are bad things. You get them to appreciate one another by placing them in situations where they interact with one another in a structure designed to allow everyone’s basic humanity to shine through.”29
Although the Jigsaw Classroom and other interventions along those lines introduce a win-win situation, the difficulty and radicalism of a wholesale transition to interventions of that kind are not to be minimized. Accepting that kind of systemic intervention as a legitimate and obvious answer to Columbine is hardly painless, involving as it does a major transformation in our moral outlook. What is gained in practical moral outcomes is balanced by what is lost: we lose the right to blame others exclusively for their actions and to merely expect others to do the right thing. We lose the right to inculcate, to lecture, and then to praise and blame. We lose the option of taking ourselves off the hook for the actions of others in contexts and situations for which we bear some responsibility or whose change we could have contributed to. We lose our own innocence or presumed innocence as others lose their complete guilt. That is the price we must pay (and ought to pay) for getting ethics right and, as a result, being able to have an ethically transforming effect on others and even on ourselves.
Luckily, social forces are operative in the inducement and shaping of good behavior as well as bad. In an almost diabolically clever experiment by the social psychologists John Darley and C. D. Batson, Christian seminarians at Princeton Theological Seminary were told to prepare a brief extemporaneous talk to be delivered at a nearby building, where it would be recorded. Half of the students were told to give a talk on the New Testament parable of the Good Samaritan. (This is the story of three passersby who come upon a man on the road who has been held up by robbers, beaten, and left for practically dead. It is the Samaritan, the outcast of post-Exilic Israelite society, who stops to help the man, while two notables pass him by. So the message is roughly that altruistic behavior, rather than religious and social status, is what is to be valued.) In the experiment the seminarians were divided into two groups: those in one were told that they were late and should rush over to the nearby building to give their talk and have it recorded; those in the other were told that it would be a few minutes before those in the other building would be ready to record their talk but that they might as well head over. En route, Darley and Batson had placed a research confederate, a man slumped in a doorway, head down, coughing and groaning. Of the seminarians who were told that they were late, only 10 percent stopped to offer assistance to the suffering man, while 63 percent of those who had been told they had time to spare stopped to help him. The students had also been given a questionnaire before the experiment that asked whether their interest in religion was focused primarily on personal salvation or on helping others (dispositional motivational factors). It turned out that the particular answer a seminarian gave to the questionnaire had no correlation with whether that student stopped to help the suffering man. The determinants of altruism were not moral at all but instead a seemingly neutral feature of the situation (in this case, whether the seminarians were pressed for time). And this conclusion about the pertinence of morally neutral features of situations is borne out by other studies.
After Kitty Genovese’s murder in 1964, Darley and Bibb Latané developed a series of bystander experiments to try to determine when and why people act to help others in distress. In a 1968 study, Columbia University male undergraduates were put in a room to fill out a questionnaire. They were left either alone, with two other subjects, or with two research confederates instructed to remain impassive when an emergency was introduced into the situation. The emergency was a stream of smoke that poured into the room through a wall vent and eventually filled the whole room. Seventy-five percent of the subjects who had been left alone in the room reported the smoke, while only 10 percent of the subjects left in the room with two impassive research confederates reported it; 38 percent of those in three-subject groups intervened. In a 1969 experiment, individuals were left in a room to answer a questionnaire either alone, in the presence of one impassive research confederate, or with another subject also working on the questionnaire. Then, from behind a room divider, came a noise that sounded like the female experimenter taking a bad fall. Again, 70 percent of the lone subjects intervened to offer assistance, but only 7 percent of those seated next to an impassive bystander (the research confederate) did so; of the pairs of subjects, only 40 percent intervened. Amazingly, four dozen follow-up studies produced the same results. Darley and Latané’s conclusions were that the feeling of responsibility is diffused in a group setting, that the interpretation of both the nature of the situation and the appropriate action is deferred to the group, and that the failure of other people to act serves as a confirmation of the appropriateness of nonintervention. So the presence of others inhibits intervention. Ross and Nisbett suggest that “the opinion of the majority carries normative or moral force.”30 Social influence, even of strangers, has a determinative effect on the individuals in the group. And conformity can be to a minority as well as to a majority, as we saw in the Milgram variation in which, when research confederates refused to go along, only 10 percent of the subjects complied. That the presence of a small minority of independent thinkers and actors can embolden the majority to resist conformity and take responsibility suggests where some hope may lie.
Other factors that influence behavior can seem even more incidental. For example, a small elevation in mood leads to enhanced helping behavior. In one study, finding a dime in a telephone booth increased the likelihood that a random person coming out of the booth would help a research confederate who dropped a folder of papers in front of him or her. Only 13 percent of dime finders failed to stop and help pick up the scattered papers, while 96 percent of those who didn’t find a dime failed to help, many even trampling on the papers. The numbers are staggering: in study after study, positive affect has been shown to be related to prosocial behavior. Even pleasant aromas, it turns out, improve prosocial behavior. Subjects near a fragrant bakery or a coffee shop were more willing to make change for a dollar bill than subjects near a neutral-smelling department store.31 All the evidence seems to suggest that neither stable moral character traits and virtues nor free will decision making based on moral principles has much to do with actual ethical behavior, whether under the most extreme conditions and situations, such as the Holocaust, or in the most trivial ones, like those just recounted.
We have just seen the power of the presence or absence of others in situations to modify our behavior quite profoundly. The mere presence of others seems to inhibit actions that we would think are the natural response, such as intervening to help someone in distress; such actions do seem to be our natural response, but only when we’re alone! Now we’ll look at experimental evidence that suggests not only that the presence of others is important but also that their opinions shape our own in unexpected and extreme ways. These findings were some of the earliest in social psychology. Experiments in the 1930s by Muzafer Sherif, a Turkish immigrant to the United States, showed the effect of group influence and social conformity on perceptual judgment. Sherif developed an experiment in which individuals and groups were put in a dark room with a single point of light located at some distance from them. Because the room was entirely dark, they could not tell the dimensions of the room or, therefore, how far the light was from them. The subjects suddenly saw the light move and then disappear; then a new point of light would appear, move, and finally disappear, a sequence that was repeated many times. In fact, the light only appeared to move; this is the perceptual illusion called the autokinetic effect. The virtue of the test was that there was no objective standard that could serve as a reference point. In each instance, the subjects were to estimate how far the light had moved. When subjects were alone in the room, their answers differed widely from person to person, from a few inches to a few feet, and even the same subject on different occasions gave answers different from his or her previous ones. When the research subjects were tested in groups of two and three, however, they quickly converged on a group consensus. Different groups turned out to converge on very different norms. They had substituted a social norm for a personal one. Then, unbeknownst to the subjects, Sherif introduced a research confederate who consistently gave estimates either far greater or far smaller than those the subjects typically made on their own. The subjects immediately adopted the confederate’s judgment. Sherif concluded that even an individual with no particular status or claim to expertise could impose a social norm by displaying consistency and confidence in the face of the others’ uncertainty. Subsequent repetitions and extensions of the experiment supported an even more extreme conclusion: subjects would adhere to the norm established by the arbitrary group after their peers were no longer present, and even a year later they would adhere to the same norm. It had been internalized. Subjects moved from an old experimental group to a new and different one would adhere to the old norm even when the new group established a different norm. And an extension of the experiment in 1961 showed that a norm established in an earlier trial could be transmitted to a new “generation” by introducing one new subject in each repetition of the experiment, so that the final repetition included no subjects from the initial experiment, yet the norm set by the original group was handed down over several generations. Because there was no “there there” in the Sherif experiment, no possibility of a true answer to the question but only the social construal, Sherif concluded that our most basic judgments about the world are socially construed, and that perhaps even perception is socially conditioned.32
Solomon Asch, one of the following generation of social psychologists, began his experiments with the express purpose of challenging the conclusions that Sherif had come to, but his experiments proved Sherif right and extended his conclusions even further. Asch substituted a knowable perception for Sherif’s unknowable one, asking whether experimental subjects would still conform their perceptual judgments to one another and to a confidently introduced false standard when the actual answer was easily and readily available to all. Asch predicted that the Sherif paradigm would fail this test and that the limitations of social conformity in cognitive judgment would become clear. Precisely the opposite turned out to be the case, with the inevitable conclusion that a group would conform not only to a social judgment when the situation was ambiguous but also to a judgment that each member, when tested individually, knew to be false. Like Sherif’s, Asch’s experiment was based on visual perception, but this time the task was to look at varying groups of three lines displayed at the front of a room and compare their lengths with a separate standard line. The lines were clearly drawn and the differences in length obvious and marked. In each iteration seven to nine participants were placed in the room, and each was to give an answer in turn. But in this experiment Asch made all of the participants research confederates except for one naive subject. All the confederates followed a set script. The participants were told not to confer with each other and to make their own judgments. In the initial three iterations all the participants, the confederates as well as the naive experimental subject, gave the obvious true answers. But then in the fourth trial the first confederate, without hesitation, confidently gave a wrong answer. After a look of disbelief and a second check of the lines, the naive subject gave some sign of discomfort. Then all the other research confederates repeated the same wrong answer as the first. The subject’s turn came last, and he or she had to either conform to the unanimous majority’s false judgment or remain independent and give the obviously true answer. In the various repetitions of this experiment, five to twelve trials in which all the research confederates unanimously gave wrong answers were interspersed among a total of ten to eighteen trials. Subjects almost invariably showed great discomfort, and 50 to 80 percent of them in the different studies conformed at least once to the mistaken majority. In over a third of all trials the subject conformed to the false judgment. In slight variations of the experiment, the results revealed that conformity rates did not diminish when the unanimous group of confederates was reduced from eight to three or four.
When the number of confederates was reduced to two, however, little conformity was produced, and when only a single confederate was with a single subject, there was no evidence of social influence at all. In fact, when the unanimity of the confederates was eliminated, when even a single confederate ally was introduced into the group who remained independent and called it as she saw it, the subject also retained independence of judgment. A third of Asch’s subjects never conformed at all, and another third conformed less often than they remained independent. In interviews after the experiment, the subjects acknowledged that despite their personal perceptions they had given conforming judgments because they were unwilling to be lone dissenters, even though they knew the majority of their fellows were mistaken. What was shocking about the outcome was the extent of the subjects’ willingness to defy and relinquish their own clear perceptual reality in order to conform to group opinion. Social psychologists of the 1950s were quick to relate Asch’s findings to the surrounding corporate and middle-class suburban culture as well as to McCarthyism and its loyalty oaths. With such marked conformity in a situation of no coercion, little at stake, and a clear and unambiguous reality, many at the time concluded that conformity must be of much greater proportions in real-world situations where social and political pressures are great and the situation far more ambiguous and open to varying possible interpretations! Further experiments bore out that Asch’s basic findings were not mere experimental anomalies but had far-reaching implications for real-life contexts.
After Asch’s experiments using simple objective perceptual data, social psychologists turned increasingly to experiments focused on conformity in social perceptions and in subjective interpretations and opinions. Studies of cognitive dissonance—here, the tension that arises when one’s opinion differs from that of the group—exposed the group pressures, and the pressures within the individual, to establish or reestablish conformity and consistency of opinion. Ross and Nisbett point out that the dissonance between one’s view and that of the group is “characteristically resolved in favor of the group’s view, often not by simple compromise, but by wholesale adoption of the group’s view and suppression of one’s doubts.”33 People’s beliefs and even pleasures turn out to be highly manipulable and determinable by interventions in the social meaning and interpretation of situations. Further experiments by Asch revealed that social influence governs not only the positive or negative valuation of an object but also the very interpretation and definition of the object. In an experiment in which two groups of peers were told to rank the status of various professions, one group was told beforehand that the profession of politician had been ranked at the top of the list by a prior group, and the other group was told that politician had been ranked at the bottom by a prior group. Not only did the two groups conform to the anonymous ranking (of the imaginary prior group), but in exit interviews the subjects in the first group made clear that politician meant to them “great statesman” while the members of the second group defined politician as “corrupt political hack.”34 This reminds me of Ronald Reagan’s use of the term welfare queens to instill in the public a derogatory feeling toward welfare recipients and push the public to withdraw its support for the policy. Ethical interventions are established at the level of description far more than we realize. Once the description has been set, the actions merely follow. So the basic fight is for the airwaves and for winning the description wars—those who fund the Rush Limbaughs of the world are acutely aware of this. And it is social groups, much more than individuals, who sign on to such opinions. These deep sociological factors creating and maintaining groups underlie the more explicit group adoption of attitudes and beliefs.
All these tendencies point in the direction of the phenomenon called groupthink. Irving L. Janis, in his now famous 1972 book Victims of Groupthink (expanded in 1982 as Groupthink: Psychological Studies of Policy Decisions and Fiascoes), documented a number of historical debacles, including the Bay of Pigs, the escalation of the Vietnam War, and the Watergate cover-up, that he argued were due to groupthink. The term groupthink refers to the social pressures exerted by the leaders of a group and its overall dynamics toward loyal consensus of opinion and the elimination of doubting and dissonant voices. Janis analyzes cases of groupthink to try to tease out what he calls a common “specific pattern of concurrence-seeking behavior” in which the approval of one’s peers in the group becomes more important than solving the problem at hand.35 Again and again he found that a clear and consistently recurring psychological pattern could be identified in the historical facts, one that produced cognitive distortion and corruption through the quashing of critical thinking. Janis noted that for groups that exhibited groupthink, the most important group value was loyalty to the group. Loyalty was embraced and enacted as the group’s highest moral value. This created, Janis says, a kind of softheartedness toward other members of the group but at the same time an extreme harshness toward outsiders. Within the group, softheartedness leads to “softheaded” thinking, whereas toward outsiders and enemies the group exhibits extreme hard-heartedness and a tendency toward dehumanizing definitions, leading to harsh military solutions and even atrocity. At the same time, within the group there is the conviction that wonderful people such as the group members could never act inhumanely and immorally. Moreover, the tendency to seek concurrence also fosters unrealistic optimism about the outcome of the group’s policy decisions, a lack of vigilance, and “sloganistic thinking” about the weak and immoral character of out-groups. The greater the esprit de corps of a policy-making group, according to Janis, the higher the likelihood of groupthink and even of the group’s taking dehumanizing action against out-groups.36 Janis goes on to identify eight main symptoms of groupthink that fall into three main types:
Type I: Overestimations of the group—its power and morality
1. An illusion of invulnerability . . . which . . . encourages taking extreme risks
2. An unquestioned belief in the group’s inherent morality, inclining members to ignore the ethical or moral consequences of their decisions
Type II: Closed-mindedness
3. Collective efforts to rationalize in order to discount warnings . . . that might lead the members to reconsider their assumptions . . .
4. Stereotyped views of enemy leaders as too evil to warrant genuine attempts to negotiate . . .
Type III: Pressures toward uniformity
5. Self-censorship of deviations from the apparent group consensus . . .
6. A shared illusion of unanimity . . . (partly resulting from self-censorship of deviations, augmented by the false assumption that silence means consent)
7. Direct pressure on any member who expresses strong arguments against any of the group’s stereotypes, illusions, or commitments, making clear that this type of dissent is contrary to what is expected of all loyal members
8. The emergence of self-appointed mindguards—members who protect the group [and especially the leader] from adverse information that might shatter their shared complacency about the effectiveness and morality of their decisions37
All eight symptoms clearly fit the Nazi case—as well as the invasion of Iraq, a war that continues as I am writing.
Of course groupthink is not only a problem of governments and their policy making but also a danger in all kinds of groups, from corporations to families. In any of these groups, according to Janis, the structural features and situational factors that give rise to groupthink can be modified so as to avoid its disastrous consequences. In the groupthink context each person decides that his or her misgivings and doubts are not important, not really relevant. But groups can be organized to support dissident voices and to encourage the freedom to speak and even to think outside the box. The fear of humiliation can be lessened by an accepting attitude toward members that is not dependent on loyal consensus, unquestioned authority, or the rejection of out-groups. Janis proposes that the structural feature of a middle level of cohesiveness (rather than extreme cohesiveness or very loose affiliation) is best at damping the dangers of groupthink. This is because at that moderate level there is an accepting attitude toward members, but not to a degree that exacts extreme loyalty and conformity; nor, at the other extreme, does it make members uneasy that their acceptance is so contingent and minimal that they could easily be pushed out. Other structural features that promote groupthink and therefore ought to be avoided include insularity, the lack of a tradition of unbiased and open inquiry, the lack of impartial procedures for getting relevant information, and homogeneity in the backgrounds of the people in the group. The solicitation and encouragement of divergent opinions can even be formalized and institutionalized procedurally by a group. Furthermore, procedures can also be put in place that help a group resist the tendency toward moral self-righteousness, toward casting itself as good and outsiders as stupid and bad. Times of stress, of course, make these tendencies worse.
Nevertheless, a group already given over to the kind of limitless and unchallenged power enabled by groupthink would hardly welcome interventions that would challenge and limit it in these ways. It seems to me that if a society, group, institution, or company has groupthink tendencies, it would take a real disaster and fall of Nazi or Watergate proportions or some other great debacle for its members and especially leaders to be willing to consider putting in place the kinds of democratizing and freedom- and dissent- and diversity-enhancing structural and procedural changes that Janis recommends. Or perhaps it would take a new golden age, a Renaissance, or an Enlightenment. Groupthink is a stellar example of the hijacking, the corruption, of moral judgment. And we need only look around us to become aware of how much that is the rule rather than the exception. Disinterested, independent moral valuation of others, of actions, and of policies—free decision making—is a utopian ideal, and hence hardly a reliable and firm basis upon which to build a society or any of its component institutions.38 As Ross and Nisbett put it,
People actively promote their beliefs and social construals and do not readily tolerate dissent from them. It follows that cultures, which have much more at stake than the informal groups studied by psychologists, would be still more zealous in keeping their members in line with respect to beliefs and values. It is primarily for this reason that cultures can be so starkly and so uniformly different from one another. . . . Isolation is the key to stability of cultural norms.39 [Emphasis added.]
Since we cannot rely on people to be independent (and rational) merely as a feature of human nature (as we have falsely assumed) and think and act for themselves, our best hope for ethics is to nurture pluralism and democratic structures and procedures, putting mechanisms in place that foster differences and protect minority groups with different perspectives and also shield whistle-blowers. In addition, it seems clear from this work that it is vital that different groups (ethnic and religious groups, regional groups, age groups, etc.) in a society interact and not have isolated, protected domains that keep them from challenging each other. They also must engage in shared tasks that bring them together rather than set them into unremitting competition and mutual antipathy and dismissal. Pluralism isn’t just a matter of letting everyone alone; rather, it involves actively bringing people together toward common goals, yet in ways that encourage differences but do not reify them. Janis proposes ways that this can be done in a policy-making group, but surely some of these ideas are more generally applicable as well. Janis’s work, like that in the social interpretation of situations that I have been documenting throughout this chapter, suggests that the point of intervention for ethics is not in a choice of (bad or good) action via a (mythical) free will but instead earlier in the process, in the cognitive interpretation of situations. And even that intellectual intervention is socially and situationally driven; it is not willed but must be enabled through social structures on the group level and the broadest kind of learning and rigorous self-analysis and self-reflection on the personal level, the philosophical education of desire of the kind spelled out in Spinoza’s Ethics. So Spinoza may indeed have got it right, that in the end all virtues depend upon and emerge from a more fundamental intellectual virtue, the openness and honesty that make independence of mind possible. But in its absence, good social and political institutions that foster differences and that promote a humane vision of self and others must pick up the slack. Spinoza envisioned civil religion as performing the role of promoting an ideology of universal humaneness in modern democratic societies, societies that he also envisioned as institutionalizing pluralism.
Another psychological thinker whose concerns and insights emerged in part from the experience of the Holocaust, and who brought them to bear in trying to understand the evil it exhibited, is the psychoanalyst Heinz Kohut. Kohut grew up in an assimilated Jewish family and had already completed his medical degree in neurology when he escaped his native Vienna in 1939 and settled in Chicago, where he founded the self psychology movement within psychoanalysis. In a number of essays Kohut applied his psychoanalytic thinking to social, cultural, and political life, focusing mostly on an analysis of Hitler and his relationship to the German nation.40 Charles Strozier, editor of a volume of Kohut’s essays on history and literature, credits Kohut with “a refreshingly new theory of leadership, one that systematically formulates the relationship between leaders and followers.”41 Kohut conceived the relationship between leader and nation as one between a group self and a leader’s self, in which each side shapes the other’s self-understanding and state of the self. The kinds of internal and intimate interrelational processes that govern individual selves also affect the historical group self and the group leader, Kohut proposed. A group self, he says, is as prone to “narcissistic injury”—that is, to collective humiliation—as an individual self.
It’s so easy to say that the Nazis were beasts and that Germany then regressed to untamed callousness and animal-like passions. The trouble is that Nazi Germany is understandable. There is an empathy bridge, however difficult to maintain. . . . People will go to extraordinary lengths to undo narcissistic injury. These people would rather die than live with shame. One has to study this dispassionately. What else can you do? You have no other choice.42
In an essay from the early 1970s titled “On Courage,” Kohut posed the question of what it is that enables “an exceptional few to oppose the [social] pressures exerted on them and remain faithful to their ideals and to themselves, while all the others, the multitudes, change their ideals and swim with the current.” Kohut’s answer was that each person has a unique nucleus of psychological being, a “nuclear self,” consisting in ideals and aspirations determined early by family, culture, and context but nevertheless modifiable by life experiences. By this nuclear self, Kohut warns us, he does not mean the Cartesian conscious self-substance, “a single self as the central agency of the psyche” from which “all initiative springs and where all experiences end,” the source of (an allegedly) free willed action. Instead, he says, he is speaking of a kind of unification of introjected ideals and capacities. But in most people, in the majority of adults, he says, this nuclear self “ceases to participate in the overt attitude and actions and becomes progressively isolated and is finally repressed and disavowed.” For most of us, then, the self becomes buried in the group and its attitudes, beliefs, and projects. Only rare individuals retain the capacity to experience an ongoing conflict between the nuclear self and the demands of the social context and to resolve that conflict so that the sources of inner creativity lodged in the nuclear self can find expression and contribute to the needs of the larger context. Such individuals are heroic. That is what it takes to have independence of mind and action, according to Kohut. These unusual individuals bring all their psychic energies together toward a totally committed central creative and independent effort. And the rare individuals who are capable of remaining in touch with the nuclear self and bringing it into effective expression in socially meaningful projects have a profound sense of inner peace, Kohut says, a “mental state akin to wisdom.” Kohut gives as examples of this rare capacity for courage the anti-Nazi activists of the White Rose, Hans and Sophie Scholl, whose calm clarity enabled them to go to their deaths in utmost peace. Surprisingly, Kohut does not claim that this rare kind of courage entails an extraordinary degree of mental health. Rather, he says, the capacity or psychic organization that results in this kind of courage and even heroism is to some extent independent of the degree of an individual’s psychopathology. Nevertheless, it is a very specific psychological capacity and inner balance, an integration of the components of the self.
The vastly more common phenomenon is the identification of the members of groups with different kinds of leaders and the subsequent social determination of belief and action. Kohut classifies group identifications into three categories: (1) a relation of group to leader that is like a mystical religious devotion to an omnipotent god whose power the members participate in vicariously; (2) a relation of group to leader in which ideals and values cover up and cover over the main motive, which is again identification with a source of omnipotent power; and (3) groups or movements in which shared ideals, and a leader who stands for them and promotes them, bring people together. The Nazis represent the rise of an appeal to pure omnipotence in a social context in which national prestige had been profoundly wounded (in World War I) and, at the same time, traditional values, ideals, and religious beliefs had undergone significant devaluation and debasement. Kohut holds that Germany had undergone a period in which people had become disillusioned with established religions, the country had declined in prestige, the monarchy had fallen, the aristocracy and officer class had declined, and the working classes had risen in power. Shared ideals had been sorely hurt while at the same time Germany had experienced a devastating and humiliating defeat. Hitler and the German nation shared a narcissistic wound (Hitler’s was personal as well as national) and held a common grandiose fantasy. The leader of this kind of group is the locus not of shared values and ideals but rather of pure ambition; through him the group hopes to become great or to restore its lost greatness. The ideals articulated by the leader or the group under these circumstances are mere covers disguising a raw, grandiose ambition to banish shame. Hitler expressed this grandiosity and destructiveness under just such a thin veil of articulated ideals. The collective hope of this kind of group is that through strength and triumph, vengeance will replace shame and humiliation. The leader and nation engage together in a kind of frenzy. Anyone who questions the leader who symbolizes this hope of restitution of self-esteem and power is cast as a traitor to the nation. Hence the amassing of power over the whole world and the destruction of the Jews, cast (mythically) as the source of the humiliation, became ends in themselves. This alliance of nation and leader for the sake of power itself, in order to overcome psychic humiliation, Kohut regards as the source of the worst atrocities of the twentieth century. He comments, “The most malignant human propensities are mobilized in support of nationalistic narcissistic rage. Nothing satisfies its fury, neither the achievement of limited advantages nor the negotiation of compromises, however favorable—not even victory itself is enough.”43
The Nazis’ goal was, and remained till their total defeat, the total extinction of an enemy. The German nation, Kohut suggests, pursued a vision of total control of the world via a supraindividual, nationally organized vendetta of merciless persecution, genocide, war, and destruction. It was a group regression, the primitivization of an entire society—much like an individual undergoing a kind of breakdown under the psychological pressure of profound humiliation coupled with loss of ideals. That the German nation found a fittingly pathological leader with whose grandiosity they could merge was a fortuitous tragedy. Kohut writes that in a conflict with such a frenzied and destructive national force as the Nazis, one can only hope the winner will be a nation or group of nations mobilized by their ideals and held together by their common love of and identification with those ideals, rather than by a striving for vengeance and overpowering force. For this counterforce to have the power to overcome the frenzy of narcissistic rage, Kohut muses, it may be necessary to carefully engage a religious merger, also a somewhat primitive and regressive tendency yet one that can be placed in the service of ideals. What is important about Kohut’s approach to Nazi evil is that, unlike so many other psychological treatments, his focus is on the pathological condition of the German nation, the German group self, rather than the psychopathology of Hitler alone. For the kind of danger that Hitler represents is always with us somewhere. It is the danger that even a highly civilized nation could be all too ready to embrace a leader who offered them an instantaneous feeling of intense power and pride through merger with an omnipotent figure. There are often Hitlers available, but their success depends in part upon a very particular condition of a national group, a state of acute weakness demanding instant relief.
Whatever its precise details, Kohut’s analysis as a general approach makes us intensely aware that the psychic forces of group belonging and the dynamics between leaders and groups are powerful and mostly irrational; they are not easily or merely rationally dispelled. All the evidence taken together in this chapter should make us quite skeptical about the human capacity for independent action, about relying on individual will free from group and situation to produce and ensure moral behavior. Moreover, evil actions now appear just as likely as good actions to be due to group behavior that relies on institutional authority and hierarchy, authoritative interpretations, and the like. The research is not new, nor is it arcane and hard to understand. It is overwhelming and widely known. Why, then, do people generally, and our society at large, largely ignore it and continue to think about why people are ethical, why they are not, and how to get them to be more ethical in terms of individual autonomy, free will, blame and punishment, praise and reward?
In Chapter 4, I explore the cultural history of the notion of free will, how we come to have it, and why it seems the only plausible explanation for good and evil behavior. The notion of individual free will has such a hold on us that the widely available evidence I have summarized here, that social forces are largely responsible for both moral and immoral action and that it is often the group that is the actor, remains invisible to us. The question I pose in Chapter 4 is: why do we focus almost exclusively on individuals as moral actors capable of (at least some significant degree of) independence from group, context, history, culture, biography, and even, to some extent, biology? This is the problem of free will, the ubiquitous assumption we saw being taught in schools and churches, in religious and moral education from before the founding of the country to its present appearance as character education in our schools, enshrined in law, and everywhere around us. Why do we think we have free will? Where does the belief come from? Can we do without it and still be ethical, and, if so, how? And if free will is a dead end—as it seems to be, and as the evidence from the new brain sciences that I put forth in the second half of this book is increasingly revealing—why have we stuck with it so long? Why have we been so reluctant to relinquish or at least modify the belief that individuals are independent of group, history, and context, as moral actors who originate their actions? Why do we have such a hard time envisioning other possible ways to account for moral motivations, commitments, and responsibility? These problems occupy the next chapter.