The study of group dynamics is important because groups are important. Our relationships with groups pervade almost every aspect of our lives. We are raised in families and educated in schools. Most of us work in a group setting and many of us live in families of our own. Moreover, we all live in a community, belong to one or more ethnic groups, and are categorized into a specific gender. Groups shape our daily life, powerfully influence our identity and self-esteem, and even shape our very perception of the world around us. Consequently, our psychology and the brain that creates it have evolved to promote our participation in groups.
The basic assumption of group dynamics is that there are characteristics of the group that cannot be reduced to the behavior of individual group members. In other words, the group has a personality of its own. We can see how this works when we consider that groups persist even after the individual members change. Individual group members may come and go, but the characteristics of the group continue. In effect, this is what is meant by culture; the norms and values and customs of the group—be it an institution, a society, an ethnic group, a religious group, or a corporation—exist apart from the individual members. This is not to say that groups do not change or that individual members do not matter, but that groups can develop an enduring identity and character that is independent of individual members.
There are various ways to define groups. Groups can be defined as a collection of people who share a common fate, who share a social structure, or who engage in face-to-face interaction. In his 2000 book, Rupert Brown suggests the following definition: a group consists of two or more people who identify themselves as members of the group. Additionally, the group’s existence must be recognized by at least one other person who is not a group member.
Following the almost unimaginable slaughter of WWII, scholars in many fields became preoccupied with the notion of conformity. How could so many otherwise ordinary people participate in the atrocities of the Holocaust? Can the pressure to conform be so powerful that it can account for such extreme behavior? Questions such as these spurred investigation into the behavior of groups. A good deal of research on the structure and behavior of groups has been performed within a branch of psychology known as social psychology. Social psychologists conducted research on group norms, group identity, and conformity. One of the pioneers of group dynamics was a social psychologist named Kurt Lewin (1890-1947).
While social psychologists were interested in the uniformity of groups—that is, the way that group members act in concert—some psychoanalysts became interested in the interactions among group members. From working in clinical settings, these clinicians recognized that group dynamics influenced the ways people relate to each other. Specifically, they noticed the ways that group members formed alliances with each other, split into different factions, and aligned with, and then rebelled against, group leaders. Wilfred Bion (1897-1979) was a pioneer in this movement. Another influential clinician in the group therapy movement is the psychiatrist Irvin Yalom (1931-).
Group identity refers to the recognition of the group as a distinct unit by both members and nonmembers. Group identity is a critical part of a group’s well-being and much of the behavior of a group will serve to promote and maintain group identity. For example, specific rituals, forms of dress, and speech patterns can help distinguish group members from nonmembers and thus promote group identity. We see this with teenage cliques, religious groups, and even military regiments.
This photograph shows Hasidic or ultra-Orthodox Jews. Note the distinctive costume, including black felt or fur hats, long black coats, and long side curls, known as peyos. These outfits mark the boundaries between the in-group and the out-group and play a crucial role in maintaining group identity (Shutterstock).
Individual group members reshape their own individual identity in keeping with their group membership. Our social identity is a function of our group memberships. By identifying with certain groups, we state that we share values, goals, and beliefs with that group. Moreover, our self-esteem is impacted by our status and value within the group as well as the group’s status in relation to other groups. People who belong to low status or devalued groups suffer damage to their self-esteem. Likewise, people can gain self-esteem by joining a group that values them and/or is valued by others.
Group norms refer to the rules and expectations governing the behavior of members of the group. For example, within a corporate setting, individual group members are expected to dress and act professionally. They should not wear overly casual clothes, engage in illegal behavior, drink alcohol, or behave in explicitly sexual or violent ways. They are expected to display a strong work ethic. In contrast, in adolescent street gangs, group members are expected to show unquestioning loyalty, present themselves as tough and ready for violence, and show little deference to conventional authority.
When individual group members violate group norms, the group will act to bring their behavior back into compliance. Consider what would happen to a corporate employee who shows up drunk for work, makes explicitly sexual overtures to his or her colleagues, and then destroys office furniture. The group (in this case the corporation) would act immediately to bring the employee back in line. Either that, or the employee would be expelled.
Group cohesion refers to the degree that group members feel identified with and committed to the group. It is a reflection of the closeness and connection of the group as a whole. Cohesive groups have a strong group identity and tightly adhere to group norms. Early social psychologists believed that group cohesion reflected the degree to which individual members like each other. However, later researchers suggested that cohesion reflected the attachment of group members to the idea of the group rather than their attachment to specific group members. How much do group members agree with the goals and values of the group? How strongly do they feel about them?
A variety of factors promote group cohesion. These include shared goals that have emotional significance to group members, as well as a history of success in reaching those goals. Leadership styles are also important. Effective leaders attend both to the goals of the group and to the social and emotional needs of the group. Finally, opposition to or contrast with an out-group can also promote group cohesion. In some cases, this can be harmless or even beneficial, as with rivalry between sports teams. However, this tendency can also have negative implications as groups can demonize out-groups or escalate inter-group tensions in an effort to promote group cohesion. This can take the form of racism, prejudice, or even wars.
Initiation rites are found in diverse groups across many different cultures. Before being accepted into the group, an initiate must undergo various trials, most involving some degree of discomfort, pain, and humiliation. Such rites serve to increase initiates’ loyalty and obedience to the group and to heighten the boundary between the in-group and the out-group. Social psychologists have theorized about the mechanism behind initiation rites. Some have suggested that cognitive dissonance plays a role. This theory, first presented by Leon Festinger in 1957, states that people tend to rationalize away contradictory thoughts. Therefore, an initiate might reason “if I went through all this trouble to join this group, I must really want to get in.”
Anthropologists have long spoken about initiation rites in pre-modern societies. For example, adolescent boys in the Fulani tribe in West Africa engage in whipping battles with other adolescent boys as a rite of passage into adulthood. Related ordeals for adolescent males are found across different tribes in New Guinea. Many rites in modern Western culture may not be recognized as initiation rites per se although they may well serve that function. For example, boot camp in the marines, hazing in fraternities, and thirty-six-hour work shifts for medical residents may function as initiation rites. Initiates must go through painful and often humiliating trials before being accepted into an exclusive group.
Hazing rituals in fraternities can be seen as an example of a modern initiation rite. Pledges, or new initiates, are expected to undergo various trials which can include drinking massive amounts of water or alcohol. Although hazing practices have led to lawsuits and even deaths on occasion, the continuing popularity of this practice speaks to the psychological importance that initiation rites can have for group identity and cohesion (iStock).
Although we may think we come to our opinions independently, research shows that people are profoundly influenced by group norms at all levels of thought and behavior. People feel tremendous pressure to conform to group norms and feel anxiety when they go against the group. An fMRI study by Gregory Berns showed increased activation in the amygdala when people made non-conforming decisions. The amygdala is a brain region associated with the fear response. Moreover, a large body of social psychology research illustrates how the pressure to conform can influence even our perceptions of physical reality. The presence of four peers stating a consistently wrong opinion can cause subjects to deny even obvious physical facts, for example, to call a color green when it is clearly blue.
Two classic social psychology experiments illustrate the power of group norms. In a pioneering experiment by Muzafer Sherif in 1936, people were exposed to an optical illusion called the autokinetic effect. If you shine a pinprick of light in a completely dark room, the dot of light will appear to move. Sherif asked his subjects to estimate how much the light moved. He first tested people alone and then in groups of two or three. He found that when tested alone, individuals gave very different estimates of the light’s movement. In groups, however, people’s estimates tended to converge to the same answer. A group norm was formed that shaped people’s perception.
In 1956, Solomon Asch published results of another classic experiment. Subjects were recruited to take part in an experiment on visual judgment. They were placed in small groups and shown pictures of several lines. The groups were asked to match a target line with one of three comparison lines. In fact, only one member of the group was an actual subject. The others were part of the experiment, with instructions to unanimously state the wrong answer two-thirds of the time. The real point of the study was to see whether the true study subjects, the “naive” subjects, would answer correctly when their group members gave the wrong answer or whether they would conform to the group norm and agree to the wrong answer. In fact, the naive participants did conform to the wrong answer (either fully or in part) 36 percent of the time. This study was important because it showed that people will alter their response according to group norms even when it is clear that the group is objectively wrong.
Conformity is most pronounced in new or ambiguous situations when people are least confident about their own opinions. When people are more knowledgeable or more secure about their opinions, they are less swayed by groupthink. People are also more conformist when their group identity is new or tentative. This is particularly acute in adolescence, when the adolescent’s entire social identity is new and insecure. Consider the intense pressure to conform that many adolescents feel regarding their choice of clothing, music, interests, and even friends. Most adults would consider these decisions to be either minor or personal, certainly not a target of intense social pressure.
In general, groups are conservative and slow to change. They are homeostatic. In other words, they work to regain their former state in the face of change. Consequently, group members tend to be far more responsive to majority opinions than to minority opinions. This has been shown many times in various adaptations of Solomon Asch’s classic 1956 conformity experiment.
Groups are not entirely closed to new ideas and there are opportunities for minority influence. If the ideas of the majority are not strongly held and do not carry much personal or emotional weight, the group will be less closed to new ideas and minority opinions. Moreover, if the minority group is consistent in their positions, this has more impact than if they are not consistent. Further, majority influence seems to be most powerful in the immediate aftermath of the discussion and in public settings. As there is a social cost to deviance, people tend to conform publicly. However, over time, perhaps when the source of the new ideas has slipped from memory, the new ideas are likely to show more influence. This may explain why totalitarian governments go to so much trouble to stifle dissent. They realize that even unpopular opinions can have considerable impact over time.
Although some uniformity of values, behaviors, and viewpoints is necessary within groups, group members are not all interchangeable. In many groups, there is role differentiation. For example, in a work setting, every employee has a specific set of tasks and responsibilities. Group roles are also differentiated according to status and in most groups there is some degree of a status hierarchy. In other words, some roles have more power and prestige than others.
Wilfred Bion (1897-1979) was one of the pioneers of group psychotherapy. A psychoanalyst in the tradition of Melanie Klein (1882-1960), his main point was that the psychology of groups paralleled that of individuals. Like many before him, he was struck by the primitive quality of many group processes, the regressed and emotionally uncontrolled behavior that groups sometimes demonstrate. Using a very dense theory of emotional life, which may look bizarre to the modern reader, he nonetheless provided valuable insights about group dynamics.
Bion believed that the primitive quality of group psychology corresponded to the most primitive aspects of individual psychology. An immature or more primitive mind thinks only in extremes and in opposites (good/bad, love/hate). A mature mind can understand complexity, can see that people are a mix of good and bad. The world exists in shades of gray; it is not just black and white. In the same way that individuals can regress to more immature modes of thinking, groups too can lose track of shades of gray and jump to extremes.
One of Bion’s most valuable contributions was the concept of group splitting. This refers to the times when groups split into hostile factions that represent different parts of their shared experience. There are many examples of this. The part of the group that wants change can split against the part that wants to stay the same. The followers of one leader split against the followers of a rival. There may be a split between supporters and opponents of a controversial group member. These splits can become very antagonistic and a previously harmonious group can suddenly erupt into civil war. At some point just about everybody will encounter these kinds of splits in their everyday life, perhaps in a work situation, a family conflict, or even within a religious community.
It is important to recognize that these splits reflect a group dynamic and are not simply a product of individual behavior. When people can identify fractures like these as a group process instead of the fault of misbehaving individuals, they can reduce the inevitable blaming and finger pointing and work to restore the group’s cohesion.
Group polarization refers to the tendency of groups to take more extreme positions than individuals do when on their own. This has been attributed to various factors. The role of social pressure is clearly an important factor. Leon Festinger suggested that people compare their own positions with their peers, and then act to avoid deviating from the group norm. This pushes the group as a whole toward more extreme positions. Other researchers, such as Eugene Burnstein and Amiram Vinokur, have suggested that the greater amount of information gathered from group discussions leads groups to be more confident in their opinions than individuals who can only rely on their own knowledge. However, group polarization occurs even when group members do not share information. We can see how the process of group polarization can intensify splitting. Once a fracture has occurred within a group, the two new groups are liable to polarize into extreme positions. This, of course, only strengthens the split.
Prejudice and racism reflect some of the negative aspects of group psychology. As they have caused enormous suffering across history, it is very important to understand how they work. Prejudice and racism both refer to a negative view of one group of people based solely on their membership in that group. Rupert Brown defines prejudice as a derogatory attitude, negative emotion, or discriminatory behavior toward members of an out-group because of membership in that out-group. Racism is a specific form of prejudice, involving prejudicial attitudes or behavior toward members of an ethnic group. The definition of race is somewhat variable, but commonly refers to an ethnic group originating on a specific continent, such as people of African, European, or Asian descent. Social psychologists have long been interested in the phenomenon of prejudice and have contributed much to our understanding of it.
Stereotyping goes hand-in-hand with prejudice. The term stereotype as used in social science was first introduced by the journalist Walter Lippmann in 1922. Previously, the term had been used in the printing business. When we stereotype people, we attribute a series of traits to them based on the one trait that signals their membership in a particular group. Common contemporary stereotypes are that Asians are hardworking and studious, Hispanics are macho, and librarians are introverts. By definition, stereotypes are limiting and disregard each person’s individuality. They also lend themselves to negative and derogatory assumptions. When that happens, the stereotype blends into prejudice. Certainly, prejudice is highly dependent on stereotypes. You are unlikely to say you hate Hmong Cambodians unless you have some set view of what you think they are like.
The tendency to classify our experience into categories is a fundamental and universal aspect of human cognition. We create concepts in order to make sense of the endless complexity we encounter in our environment. This is a necessary part of human thought, allowing us to process information efficiently and quickly. If we did not create categories, our entire life would be a buzzing mass of confusion. In social categorization, we place people into categories.
Social categorization is a critical part of our social life and is evident as early as infancy. Studies have shown that infants can distinguish people according to their gender, age, and degree of familiarity. People also reflexively distinguish members of in-groups (groups of which the subject is a member) from members of out-groups. Furthermore, people tend to minimize differences within groups and to maximize differences between them. Finally, people tend to evaluate out-groups more negatively than in-groups. In this way, social categories easily lend themselves to stereotypes in general, and to negative stereotypes in particular.
Stereotyping creates automatic and unconscious biases that influence decision making. People frequently do not recognize that they are thinking in a stereotypical way, and assume that their views of various out-groups are based on solid information. Likewise, stereotypes are often quite persistent, even in the face of contradictory information. This occurs because people favor information confirming the stereotype and are more likely to ignore or disregard information contradicting the stereotype.
Furthermore, stereotypes are often self-serving and help people in powerful in-groups rationalize their privileged position relative to less privileged out-groups. For centuries women were depicted as emotional and childish and African Americans as lazy and unintelligent. More recently, homosexuals have been portrayed as sexually predatory (and thus a threat to our schools and our military).
Such views justify the exclusion of these out-groups from power and privilege. Stereotypes are not all powerful, however, and can be somewhat flexible. Our tendency to stereotype can also be shaped by cues in the environment. For example, when research subjects were cued to pay attention to gender, they thought in terms of gender stereotypes. This tendency was less evident when gender cues were removed.
The impact of negative stereotyping on the stereotyped is quick, pervasive, and destructive. In laboratory experiments, people can begin to act in concert with their stereotype, even when the stereotype was arbitrarily assigned to them. In real life, when stereotypes are consistently encountered over a lifetime, people can easily internalize the negative messages. In fact, people have to struggle hard not to see themselves through the eyes of the stereotyper, and not to act in the way that they are perceived. A famous study by Kenneth and Mamie Clark in the 1940s showed the tragic impact of racial prejudice on the self-concept of African-American children.
Ultimately, social prejudice is about intergroup relations. People are prejudiced against other people because of their membership in a particular group. Understanding the dynamics of intergroup relations can help shed light on social prejudice.
This is a difficult question and there have been many theories about the causes of inter-group strife. Unfortunately, there is no one theory that totally accounts for the relationships between different groups. One theory holds that social prejudice stems from frustration caused by deprivation of needed resources. Another view suggests that groups devalue each other when their goals are in conflict. Other research points to the nature of group identity in itself as a contributor to intergroup tension. As soon as people self-identify as part of one group, they tend to think more negatively about other groups.
During research conducted in the 1940s, Kenneth and Mamie Clark investigated the impact of racism on African American children’s self-concept. Children were presented with two types of plastic baby dolls. The dolls were identical except for skin color; some were white and some were black. As expected, the children were all able to distinguish the race of the dolls. More importantly, the children tended to prefer the white doll over the black doll, attributing more positive characteristics to the white doll. Furthermore, when asked to draw a self-portrait, many of the children drew themselves with much lighter skin than they actually had. This research was used in the 1954 Supreme Court case Brown v. Board of Education of Topeka, Kansas. In this landmark case, the U.S. Supreme Court declared racial segregation in public schools to be unconstitutional.
Research in the 1940s showed that black children seemed to prefer Caucasian over African American dolls, a finding that helped win a U.S. Supreme Court case that outlawed school segregation (iStock).
Some capacity for favoritism of one’s own group over others appears to be a natural human tendency. In many studies, people attribute more positive traits to their own group than to other groups. This has been demonstrated cross-culturally. In 1976, Marilynn Brewer and Donald Campbell published a survey of thirty tribal groups in East Africa. Their subjects had been asked to rate their own tribe and other tribes on a series of traits. Twenty-seven of the thirty groups rated their own group more positively than any other group.
In-group favoritism or chauvinism can also be created in experimental research. In a series of classic studies published in the 1950s and 1960s, Muzafer and Carolyn Sherif and their colleagues recruited a group of twelve-year-old boys to attend a summer camp. The boys were divided into two teams which were then pitted against each other in competitive games. Following these games, the boys very clearly displayed in-group chauvinism. They consistently rated their team’s performance as superior to that of the other team. Furthermore, 90 percent of the boys identified their best friends from within their own group even though, prior to group assignment, many had best friends in the other group. In some cases, the devaluing of the out-group started immediately after group assignment, even before the competitive games began.
A major theory of intergroup conflict addresses the role of group goals. When the goals of two groups are in conflict with each other, there is likely to be an escalation of tension. Moreover, group chauvinism will kick in, and people will start to evaluate the other group in exaggeratedly negative terms while idealizing their own group. A number of studies show that perceived conflict of interest between groups heightens both a negative view of the out-group and in-group cohesion. As this tension escalates, the groups further polarize, taking more extreme positions against each other. The out-group becomes evil and badly intentioned, while the in-group remains morally decent and justified in their behavior. We can see this in many political situations, such as the current impasse between the Israelis and the Palestinians, who are fighting over the same land. On the other hand, when groups share goals, inter-group tension and group chauvinism decreases.
One important way to decrease intergroup prejudice is to introduce shared goals. In the summer camp experiments mentioned above, when the teams engaged in cooperative efforts toward shared goals, aggression towards the opposite team declined, as did in-group favoritism. Several other studies have shown a similar effect. However, success in meeting shared goals may play an important role. Failure can raise tensions again, especially if there is a prior history of conflict or competition between the groups. It is as if the groups want to blame each other for the failure.
The earliest theory of intergroup aggression, as presented by John Dollard in 1939, focused on the role of deprivation. Groups grew angry because they felt deprived of their basic needs. This aggression was then expressed either toward the perceived source of their deprivation or toward a convenient target, in other words a scapegoat. Later authors, such as Leonard Berkowitz and Ted Robert Gurr, revised this theory. Intergroup aggression did not arise so much out of actual deprivation but more from relative deprivation. In other words, it was not how much people had that mattered, but how much they had relative to a norm or expectation of how much they thought they should have. This notion is consistent with research on the relationship between money and happiness. In either case, our emotional response is less about what we actually have than about what we believe we should have.
So how do we develop norms of what we think we should have? To some degree, our norms are based on past experience. Groups whose conditions have deteriorated over time can become aggressive to other groups. A more powerful source of comparison, however, appears to be other groups. Groups who feel deprived relative to other groups around them are likely to become angry, which can lead to civil unrest. Further, in a 1972 paper by Reeve Vanneman and Thomas Pettigrew, a distinction was made between collective deprivation and egoistic deprivation.
In collective deprivation, people feel that their own group is deprived relative to another group. In egoistic deprivation, people feel deprived as an individual, but not in terms of group membership. Several studies have shown that feelings of collective deprivation are more strongly related to social prejudice. However, double deprivation, the feeling that one is deprived both as an individual and as a member of a group, leads to the highest level of social prejudice.
When scapegoating takes place, a stronger group takes out its aggression on a weaker group. In effect, the group displaces their anger at whatever difficulty they may be experiencing onto a convenient and defenseless target. This process refers to between-group scapegoating. Psychoanalytically oriented group theorists also talk about within-group scapegoating, which may be a somewhat different matter.
Although scapegoating cannot explain all of social prejudice, we can certainly think of examples when it played an important role. One example would be the rise of Hitler and Nazism in Germany that followed the devastation of World War I. Germany faced defeat, followed by extremely punitive conditions imposed by the Treaty of Versailles; this wrecked the German economy and unnecessarily humiliated a once-proud nation. In an attempt to rebuild German pride and group identity, Hitler used the Jews as a scapegoat, blaming them for all of Germany’s troubles. The result was the Holocaust.
In another example of scapegoating, a 1940 study by Carl Hovland and Robert Sears looked at the relationship between the price of cotton and the number of lynchings of African Americans that took place in the American South between 1882 and 1930. A negative or inverse correlation was found: the number of lynchings increased as the economy declined.
Given our diverse and multi-ethnic world, it is of great importance to understand ways to reduce social prejudice. In the 1950s, Gordon Allport introduced the intergroup-contact hypothesis. In this view, intergroup contact under positive conditions can reduce social prejudice. The necessary conditions include cooperation toward shared goals, equal status between groups, and the support of local authorities and cultural norms. Considerable research since then has supported these ideas. In a 2003 review, Stephen Wright and Donald Taylor also noted the effectiveness of identification with a superordinate group. In other words, different groups can come together as part of one overarching group, for example, as part of one community or of a common humanity.
A man marches to protest racism in schools. Research shows that inter-group tensions can be reduced by granting groups equal social status, sharing goals, and getting support from local authorities (iStock).
Positive emotional experiences with members of different groups can also reduce negative stereotypes. Having close friends from different groups is especially effective in this regard. There may be several reasons for this. For one, it is nearly impossible to hold onto a simplistic, negative stereotype of someone you know well. For another, a close relationship promotes identification both with the other person and with the groups they belong to. In other words, your relationships with other people become part of who you are. This is referred to as including the other in the self, a notion introduced by Stephen Wright, Arthur Aron, and colleagues.
Morality involves judgments between right and wrong. Morality is determined both by our reason, that is, by our cognitive analysis, and by our emotional response. Psychology tries to bring a scientific vantage point to the study of morality. Instead of determining what is and what is not moral—the content of moral decisions—psychological research studies the process of moral decisions and judgments. How do people determine what is and what is not morally acceptable?
In short—no. It is not the place of science to make moral decisions. Morality is ultimately about value, about what is right and what is wrong. Psychology is a science and therefore does not deal in the realm of value. That is the role of philosophy or of religion. However, psychology can provide information on how various behaviors affect other people, on the psychological impact of various choices. This information, in turn, can allow individuals and society to make informed moral decisions. Relatedly, we can study the process of moral choices and moral development: how people make moral decisions and what morality means to different people. Hopefully this can also help people enhance their moral decision-making abilities, so that they make more thoughtful, mature, and beneficial moral decisions.
Without morality, group life would not be possible. Morality is what holds groups together. It is the glue. If everybody simply acted out of self-interest with no concern either for other people or for the wellbeing of the group at large, the group would quickly dissolve into a free-for-all. Human beings have evolved with powerful self-protective and self-promoting motivations and drives. A good part of our motivation is entirely selfish; our desires, angers, and fears serve to promote our own individual interests, often at the expense of other people. Sometimes this erupts in horrific exploitation of, or even violence against, others. On the other hand, humans have also evolved as social animals. We are powerfully oriented toward social organization and most of our emotions serve social functions. Along with our self-serving motivations, we have also evolved moral capacities. Some sense that we must balance our own interest with the interests of others is deeply encoded in our genes.
Early students of morality, such as Jean Piaget (1896-1980) and Lawrence Kohlberg (1927-1987), focused on the intellectual aspect of moral development. They studied the role of reason in moral judgments. Carol Gilligan protested this narrow focus and emphasized the importance of compassion and caring. More recent psychologists, such as Steven Pinker, emphasize an evolutionary approach to human morality. In a 2008 article, Pinker noted that some of our most strongly held moral positions may not have any basis in reasoning or compassion. For example, many people react in horror when hearing scenarios in which an adult brother and sister engage in mutually consensual incest, a dog owner eats the family dog after it has died from natural causes, or a homeowner cuts up the American flag to use as a dust rag. (These scenarios were first described by the psychologist Jonathan Haidt.)
From research such as this, psychologists have concluded that evolution has inscribed in us the tendency to react with emotional disgust or horror at certain classes of situations. Such situations have evolutionary significance and involve behaviors that, over time, are destructive to our species. For example, the incest taboo is observed in many animal species and serves to protect variability in the gene pool. An aversion to eating members of our own family (and our pets become part of our family) has obvious benefits for kinship survival. In effect, we have evolved certain moral instincts.
Cross-cultural research has shown consistent themes to moral judgments, even across very different cultures. Jonathan Haidt suggested five general categories of moral concerns. These are harm/care, ingroup/loyalty, authority/respect, purity/sanctity, and fairness/reciprocity. Across cultures, people express disapproval and distress at the thought of harm coming to an innocent person. Betrayal of one’s community is likewise judged negatively. A respect for authority and the value of fair treatment for members of the community also appear to be cultural universals. The purity/sanctity category relates to the emotion of disgust and involves moral judgments about dietary laws, sexual practices, urination, defecation, and other similar issues.
It does not take much life experience to recognize that human beings can disagree passionately about moral choices. Contemporary debates about abortion and gay marriage show that people can hold completely opposite viewpoints with equal degrees of moral conviction. One way of understanding this is to assume that different people and different groups vary in the way they rank the five categories. For example, respect for authority is more highly valued in collectivist cultures than in individualist cultures, which might in turn prioritize fairness. This can explain the intense cultural divide between traditional Islamic cultures and Western European cultures over the 2005 controversy regarding a Danish cartoon about the prophet Mohammed. The Islamic side felt it was a moral transgression for the cartoonist to ridicule the Islamic prophet (authority/respect and ingroup/loyalty), while the Europeans felt the cartoonist had a moral right to free expression (fairness/reciprocity) without fear of violence (harm/care).
Attitudes toward the five categories of moral concerns may also influence political beliefs. In a large Web-based study, Jonathan Haidt and colleagues found that, when liberals and conservatives were compared with each other, liberals valued harm/care and fairness/reciprocity more than conservatives did, and conservatives valued authority/respect, ingroup/loyalty, and purity/sanctity more than liberals did. These differences held even after accounting for the effects of age, gender, education, and income.
Despite scientific evidence of a universal human tendency toward moral judgments, views on morality have changed dramatically across history. Things we now consider to be horribly immoral were not always seen that way. The ancient Greeks based their morality on the concept of honor, which was related to one’s reputation for bravery and strength. In the quest for honor, it was perfectly acceptable to slaughter whole cities. More recently, in fact well into the nineteenth century, slavery was not considered immoral in the United States. Moreover, behavior once seen as highly immoral is no longer seen that way by many people. Pre-marital sex was seen as highly immoral only a few decades ago and now is widely considered acceptable. Likewise, there is far more tolerance of public disagreement with people in authority than there used to be. If we consider these changes in light of Jonathan Haidt’s five moral categories, we can see that within Western society considerations of fairness/reciprocity have grown stronger while considerations of authority/respect and ingroup/loyalty have lessened.
An illustration of the Greek hero Achilles defeating Hector in battle. In ancient times, morality was based on concepts of honor, and killing other people was often seen as an honorable thing to do. That stands in stark contrast to today’s sense of morality (iStock).
The ability to analyze a situation with reason, or cognition, is a critical part of moral judgments. Earlier theorists of moral development, such as Piaget and Kohlberg, emphasized the importance of cognitive development in moral maturity. Two cognitive skills are especially important: the ability to take another person’s perspective (that is, to put yourself in another’s shoes) and the ability to recognize abstract rules that can be generalized across many situations. Similar ideas are reflected in many philosophers’ ideas about morality. For example, Immanuel Kant (1724-1804), a famous German philosopher, introduced the concept of the categorical imperative, which refers to the importance of recognizing universally valid rules of behavior. Certainly the Golden Rule, to “do unto others as you would have them do unto you,” assumes that we can take another person’s perspective. As developmental psychologists have shown, these cognitive abilities develop slowly across childhood and continue to develop across adulthood. Because children have an immature capacity for either abstraction or perspective taking, they are not held to the same moral standards as adults.
If the only way to save five workmen on a trolley track is to divert the trolley onto another track, thus killing just one man on that track, would you divert the trolley? (iStock)
Empathy is also a central part of our moral reactions. Our ability to feel another’s pain, and to imagine our own pain if put in the same situation, underlies our concern for the well-being of others. People who are deficient in empathy, such as psychopaths or some people with autistic traits, can behave in immoral ways. The extent to which either empathy or rational analysis influences our moral decisions depends on the situation. If we have direct, personal contact with the people affected by our decisions, we are much more likely to be influenced by empathy and emotion than if our contact with them is less immediate and the people involved feel more abstract. Research on the “Trolley Problem” has shown us how much circumstances influence whether empathy or reason will dominate our moral decisions.
A series of studies has been done on the “trolley problem,” a moral dilemma first posed by the philosophers Philippa Foot and Judith Jarvis Thomson. The scenario involves a trolley that is hurtling down the track out of control after the trolley driver has become unconscious. If nothing is done, the trolley will hit five workmen on the tracks who don’t see the oncoming trolley. You can save the workmen by throwing a switch that will divert the trolley onto another track. However, there is one workman on the other track. Will you throw the switch, sacrificing one man to save five others?
In these circumstances, most people say yes. From a purely rational standpoint, it makes sense. However, if the only way to save the five workmen is to throw a large man in front of the trolley, most people say they would not do it. When we have close contact with the person we are hurting, our moral decisions are likely to be based more on emotion than on reason alone. Likewise, when people considered these two scenarios during fMRI brain imaging, different parts of the brain lit up for each scenario.
As discussed above, moral judgments involve both emotions and cognition, and the importance of either one will depend on the particular circumstances involved. Joshua Greene, Jonathan Cohen, and colleagues asked people to consider the trolley problem and similar scenarios while undergoing fMRI brain imaging. The authors divided their moral dilemmas into moral-personal and moral-impersonal scenarios.
The trolley scenario that required killing someone directly (i.e., pushing the large man in front of the trolley) is an example of a moral-personal scenario. The trolley problem that did not demand direct, physical contact with the man who would be killed (i.e., pulling the switch) is an example of a moral-impersonal scenario. When people thought about moral-personal scenarios, the medial frontal and anterior cingulate regions lit up. The medial frontal region is associated with the processing of interpersonal relations and, possibly, empathy. The anterior cingulate is associated with processing conflicting messages from different parts of the brain. When people considered the moral-impersonal scenarios, the dorsolateral frontal regions were most active. The dorsolateral frontal region is involved with rational thought and analysis.
This suggests that the farther away we are from the human cost of our actions, the more our moral decisions are based on cold rational analysis, rather than gut emotion. These kinds of rational moral decisions are known as utilitarian judgments and involve a kind of cost/benefit analysis.
Children start showing a rudimentary sense of moral understanding around four years old, during the preschool years. Their initial sense of right and wrong is quite crude, and based mainly on what adults tell them or what behavior has brought punishment. A few years later, when children are about seven, they begin to grasp the importance of universal rules to govern behavior. Initially, they apply rules in simplistic and rigid ways (“Ooh, you said ‘stupid’! You’re not supposed to say ‘stupid’!”). With time, they develop a better understanding of the purpose that rules serve. Nonetheless, some capacity to respond to the feelings of others is evident as early as infancy, and even four-year-olds can distinguish between prohibitions that serve a true moral purpose, such as protecting people from harm, and those that simply express a preference, such as not sitting on the couch.
Kohlberg (1927-1987) was a pioneer in the field of moral development. Influenced by Jean Piaget, he developed a large body of research investigating moral reasoning. Like Piaget, he was interested in intellectual development, in the way that the ability to reason changes across development. Kohlberg relied on a method of vignettes. He wrote up scenarios that involved a moral dilemma and presented them to his research subjects. His best known vignette involves a man named Heinz who broke into a pharmacy to steal a drug in order to save his wife’s life. Based on his research, Kohlberg divided moral development into three levels: pre-conventional, conventional, and post-conventional. Each level contains two stages, for six stages in all.
The first level, pre-conventional morality, is most commonly found in children under ten. In this level, morality is determined by the consequences of the action to the person performing the behavior—whether the individual is punished or rewarded. In the second level, conventional morality, the morality of a behavior is determined by its effect on social relationships. The third and final level is called post-conventional morality. At this level, the person is interested in abstract concepts of justice and a just society. Kohlberg believed that all children go through the same sequence of stages in the same order. A fair amount of research supports this view for the first two levels, but the scientific evidence for the third level is much weaker. Kohlberg was also interested in moral reasoning in adults. Indeed, research has shown that different adults are characterized by different stages of moral development.
Financier Bernie Madoff perpetrated the largest Ponzi scheme in history. Although Madoff was a widely respected member of his community and a devoted family man, he defrauded friends, family, colleagues, and many charities of approximately $50 billion. Did Madoff justify his behavior to himself as he did it? We do not know, but we can imagine he might well have. We do know, however, that people rationalize and justify their moral transgressions all the time. In fact, the very nature of cognition makes it inherently easy to justify behavior that is clearly against our moral code.
Cognition is never isolated from emotion. Emotion slants every thought we have and does so largely outside of consciousness. In other words, emotions bias our interpretation of events; we tend to interpret emotionally significant events in ways that are consistent with our emotions. Moreover, desire colors our thoughts as much as any other emotion. If we want something to be true, we often convince ourselves it is true. That’s why it is generally necessary to have some form of external check on our behavior; few people are up to the task of policing themselves.
A disgraced Bernard Madoff is led away by federal agents after being charged with creating a Ponzi scheme to rip off his customers of billions of dollars. How might an already successful, respected man like Madoff explain his behavior to himself? (AP/Wide World)
Carol Gilligan believed that Kohlberg’s theory was biased by an exclusively masculine viewpoint. She suggested that his emphasis on abstract thought and impersonal laws reflected a typically masculine bias to favor thought over emotion. Gilligan claimed that women are more likely to emphasize emotions and interpersonal relationships than men and, therefore, more likely to score at stage 3 (the first stage in conventional morality). This did not mean that women were less moral than men, only that they made moral judgments in different ways. In effect, women made moral choices “in a different voice”, which was the title of her 1982 book. While Gilligan’s critique raises important points about Kohlberg’s exclusive focus on intellect, she also has been criticized for oversimplifying the female style of moral reasoning. Other research has shown that women are no more likely to score at stage 3 than men. In general, both women and men take issues of justice and empathy into account when making moral decisions.
It is in the workplace where we are perhaps most exposed to the ups-and-downs of group dynamics. Office politics, issues of leadership, productivity, and staff morale: all of these reflect group processes. The field of study that specializes in group dynamics in the workplace is known as organizational psychology.
An organization is a group of people who join together in an organized manner for a common purpose. Although this can refer to any group united in a common purpose (for example, a religious, social, or community organization), in this context organizations refer to those groups that unite for the purpose of paid work.
Organizations vary in two important ways: size and degree of hierarchy. Organizations can be very small (like a five-person start-up company), or enormous (like an international conglomerate with a workforce of thirty thousand people). Small organizations tend to be more informally organized, while larger organizations depend on greater standardization of policies and procedures. Organizations also vary in terms of the degree of hierarchy. In nonhierarchical organizations, there is no power differential between members. An example of this would be a cooperative or a Quaker religious community. In these entirely nonhierarchical organizations, decisions are made by consensus only. The decision is not made until the entire organization comes to agreement.
Hierarchical organizations organize decision making and power vertically. Subordinates report to superiors, who in turn report to their own superiors. This chain continues up the hierarchy until the very top. Strongly hierarchical organizations include the U.S. military and the Catholic Church. Most work organizations fall somewhere between these two extremes. However, the majority of large commercial organizations have a fairly hierarchical structure.
Organizational psychology is the study of human behavior and relationships within work organizations. Organizational psychologists study how the structure of an organization influences company performance, productivity, and morale, as well as worker-worker relationships. While organizational psychologists are interested in individual traits and behaviors, they are also interested in the larger picture—how the group dynamics of organizations impact the organization’s performance. Because large corporations have been the most frequent consumers of this kind of information, most organizational psychology research has been conducted in fairly traditional corporate environments. Nonetheless, the insights of organizational psychology can be fruitfully applied to a large array of work environments.
Classic theories of organizational psychology date back to the late nineteenth century. Writing in an age of massive industrialization, early theorists of organizational structure aimed to replicate the precision of a finely tuned machine. Frederick Winslow Taylor (1856-1915) introduced the notion of scientific management. He believed that the methods of empirical science should be adapted to engineer efficiency in the workplace. His work influenced the development of factory assembly lines. Another pioneer in this arena was Max Weber (1864-1920), a renowned German sociologist. While Taylor focused on the structure of tasks, Weber focused on the authority structure. Weber idealized the precision and control of a hierarchically organized bureaucracy. His aim was to standardize worker behavior and company policies into a completely impersonal, rule-bound system.
Both Weber’s and Taylor’s organizational models treated the workplace as a machine. Workers were cogs in a wheel; their motivation and morale were of little importance to the functioning of the workplace. In fact, Taylor believed that workers had no intrinsic motivation to work. Rather, their performance could only be motivated by carrots (specifically, pay) and sticks (negative consequences for undesirable behavior). Likewise, Weber emphasized the rational and impersonal nature of bureaucratic rules as an antidote to irrational, emotional impulses.
A movement arose in reaction to this dramatically dehumanizing model. The human relations approach recognized that people are motivated by their emotional and social needs as well as by monetary rewards. Organizations that neglect the human element miss out on a huge part of what makes people tick. The surprise results of a famous series of experiments known as the Hawthorne studies gave birth to this movement. Nonetheless, while a focus on emotional experiences of workers succeeded in raising worker morale, studies showed that it had little effect on productivity. A later version of this approach, the neo-human relations school, recognized that managers have to attend both to task performance and to the social-emotional aspects of work life.
In the 1920s and 1930s, a series of studies was carried out in the Hawthorne factory of the Western Electric Company in Chicago. The experiment was conducted from the vantage point of Frederick Winslow Taylor’s theory of scientific management. The experimenters manipulated working conditions in a number of ways to determine what conditions would best enhance productivity. They attended to the temperature and level of humidity in the room, the hours worked, the amount of sleep the workers had, their meals, and various other variables.
After a year or two of this, performance greatly improved. This was at first attributed to the experimental manipulations (e.g., changing the level of light in the room). However, when working conditions were returned to their original state, the improvement continued. The experimenters finally realized that the improvement in worker performance was due less to changes in task conditions than to the human element inherent in the studies. While conducting these studies, the experimenters continually consulted the workers and paid careful attention to almost every detail of their work life. Because of this, workers felt valued and empowered, which greatly enhanced their work performance.
The systems approach draws from Ludwig von Bertalanffy’s 1967 work on general systems theory. In this view, organizations are seen more like living organisms than like machines. A system is a whole made up of interacting parts. Systems are composed of interacting subsystems (e.g., departments, divisions, work groups, teams). It is the relationships between the subsystems that make up the structure of the system. Therefore systems theory is particularly focused on relationships among individuals and groups within the work setting. While the classic organizational theories assumed that all members of the organization shared the same goals, systems approaches recognize that different subsystems can have very different interests and agendas.
Although questions of productivity may be of most interest to upper management, most employees are interested in the daily life of the workplace. Office politics marks an inescapable and sometimes very difficult part of such day-to-day work life. In their 1998 review, Erik Andriessen and Pieter Drenth discuss the multiple parties model, which came of age in the 1970s and was influenced by a Marxist theory of management-worker relations. This approach emphasizes the competition for power that often goes on within an organization.
Power offers numerous privileges, in particular, greater control over one’s life, which is highly correlated with life satisfaction. Power also offers social status, which for many people is an end in itself. There are several avenues to power in the workplace, including control over the distribution of rewards and punishments (authority), professional expertise, or the use of personal charm. Because the pursuit of power is such a potent and frequent motivator, there is often competition between different subsystems over access to power, or even access to the symbols of power.
Consider how different departments or divisions can fight over office space, control over budgets, hiring decisions, and even status symbols such as the corner office. Of course, such competition can occur among smaller units within these subsystems, such as individuals or coalitions of individuals. Likewise, in pursuit of power or in an attempt to maintain power, people frequently build alliances and coalitions. One’s network of alliances is a powerful tool within office politics. However, it is important to recognize that much of this maneuvering is not conscious, and deliberate calculation may play only a small part in this behavior.
We can speculate that the quest for power is heightened in strongly hierarchical systems. When power differentials are more acute, people become more aware of having less power than other people and are more disturbed by their relative lack of power. As we know from a broad range of research, whether people are satisfied or dissatisfied with what they have is strongly impacted by social comparison, by the contrast between what they have and what others around them have.
Leaders are certainly important, and a large body of literature shows how leaders can impact absenteeism, morale, turnover, group productivity, decision making, and even company profits. However, when considering group performance, the qualities of the leader do not tell the whole story. Sometimes groups are largely autonomous, functioning with very little active leadership. Sometimes external factors, such as organizational structure and culture or larger economic conditions, can constrain a leader’s impact.
Earlier research studied the personality traits that contributed to effective leadership. Ultimately this research came up empty-handed. The data were simply too contradictory to lead to any firm conclusions about what kinds of personality traits make for a better leader. What does seem important, however, is the fit between leadership style and the nature of the task. Different kinds of leadership are necessary in different situations.
Napoleon leads his troops in Egypt. Research is not clear about what qualities make a good leader, though charismatic leaders can be effective in times of turmoil (iStock).
A repeated theme since the early days of Max Weber is the notion of the charismatic leader. A charismatic personality is not necessary in most managerial situations, but can come in handy in times of turmoil when workers need to be inspired and reassured about the need for significant changes in their values, goals, and group norms. This kind of leadership has been called transformational leadership.
A task-oriented leader focuses on the most efficient ways to accomplish the goals of the group, whether that is to maximize sales, treat patients in a hospital, or produce the greatest number of widgets. The task-oriented leader does this by clarifying the group’s goals, delineating each worker’s responsibilities, and addressing any barriers to goal completion. A socioemotional leader addresses the overall morale of the group members. This includes consideration of group cohesion and morale, the emotions and needs of individual workers, and within-group relationships. Research shows that task orientation promotes efficiency, whereas a socioemotional focus promotes worker satisfaction. However, worker satisfaction and worker performance are not always related. Therefore, most organizational psychologists conclude that leaders should attend to the demands of the task and to the socioemotional needs of the work group.
One of the tasks of the leader is to create structure. Structure is a critical part of any organization, which needs rules, policies, and clear roles in order to function. However, it is quite difficult to achieve the right balance between structure and flexibility. Too little structure prevents efficient, coordinated effort toward the group’s goals and can result in chaos, corruption, and abuse of power. Too much structure leaves the organization inflexible and poorly adapted to change or to variability in local conditions. Because of this, a parallel underground system can develop outside the rules of the formal structure. This is similar to the black markets that develop in countries whose economies are overly controlled.
The extent to which decision making is shared with subordinates or concentrated at the top of the hierarchy differs across organizations. Thus, organizations can vary from strongly centralized decision-making practices to highly participatory decision-making practices. In participatory decision making, subordinates have much more input into how decisions are made. Research shows that greater participation in decision making improves employees’ satisfaction with the decisions, but does not necessarily translate into better group performance. Therefore, research has investigated when participatory decision making is most useful, and when it is less important. When the workers are highly educated, intelligent, and have considerable expertise in their areas, participatory decision making is more effective. Additionally, when the task at hand is highly complex and knowledge about local conditions is important to the decision, participatory decision making is important. Finally, in times of crisis, when the decisions have very strong impact, participatory decision making is useful.
In a 1973 publication, Victor Vroom and Philip Yetton considered the circumstances in which autocratic versus participatory decision-making styles would be most effective. They believed that there is no one-size-fits-all leadership style, but that leaders have to adapt to different situations. They listed seven characteristics that might influence a decision, such as the amount of information needed to make the decision, the significance of the decision, employee support for the decision, and other related issues. They then constructed a decision tree based on the seven characteristics, resulting in 12 possible situations. For each situation, they listed which of five decision-making styles (AI, AII, CI, CII, or GII) would be appropriate. CII was listed for 9 of the 12 situations, while CI and GII were appropriate for 7 out of 12. Interestingly, the most autocratic styles, AI and AII, were only appropriate for three and five situations, respectively. In sum, this work suggests that consultative styles are appropriate for a broader array of circumstances than are autocratic styles.
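The pruning logic behind a model like this can be sketched in code. The sketch below is a toy illustration only: the yes/no questions and the three elimination rules loosely paraphrase the kinds of rules Vroom and Yetton describe (an information rule, an acceptance rule, a goal-congruence rule), and all function and parameter names are invented here; the published model uses the full seven-question decision tree.

```python
# Toy sketch of Vroom-Yetton-style "feasible set" pruning.
# Illustrative only; not the published decision tree.

# Styles run from fully autocratic (AI) to fully group-based (GII).
STYLES = ["AI", "AII", "CI", "CII", "GII"]

def feasible_styles(leader_has_info, acceptance_critical,
                    autocratic_acceptance_likely, goals_shared):
    """Start with all five styles and eliminate those ruled out
    by the answers to a few diagnostic questions."""
    feasible = set(STYLES)
    if not leader_has_info:
        # A leader lacking key information cannot simply decide alone.
        feasible.discard("AI")
    if acceptance_critical and not autocratic_acceptance_likely:
        # If subordinates must accept the decision but would reject
        # an imposed one, the autocratic styles drop out.
        feasible -= {"AI", "AII"}
    if not goals_shared:
        # A full group decision is risky when subordinates do not
        # share the organization's goals.
        feasible.discard("GII")
    return sorted(feasible)
```

For example, a leader who lacks the needed information, whose decision hinges on acceptance that an autocratic choice would not win, and whose group does not share organizational goals would be left with only the consultative styles CI and CII, which is consistent with the finding above that consultative styles fit the broadest range of situations.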
In his theory of scientific management, Frederick Winslow Taylor took an early behaviorist approach to motivation. He did not believe that employees had any intrinsic motivation to work, rather that people would only work for rewards (such as pay) or punishments (such as fear of being fired). The human relations approach considered the emotional and social needs of employees. Later organizational psychologists recognized that human motivation is complex. Influenced by Abraham Maslow’s hierarchy of needs, various theorists came up with multifaceted theories of employees’ motivational needs. In 1972, Clayton Alderfer proposed a three-part model of worker motivation: existence needs (basic physical needs), relatedness needs (for social connection and support), and growth needs (for realizing their own potential, similar to Maslow’s self-actualization needs). In 1983, Wofford and Srinivasan suggested that worker performance reflected four factors: competence, motivation, role perception, and the limitations determined by the setting. A manager’s job would be to address each issue as it became relevant.
In 1959, Frederick Herzberg (1923-2000) and colleagues published their survey of two hundred mid-level engineers and accountants at a Pennsylvania company. The subjects were asked about the high points and the low points of their work life. For high points, subjects frequently cited moments of accomplishment and recognition, increased challenge, promotion to a higher level of responsibility, and increased autonomy. For low points, subjects complained of problems with the company’s managerial and policy decisions, recognition, salary, and relations with superiors.
Herzberg interpreted these results to mean that causes of job satisfaction were intrinsic to the job (inherent in the work itself), while causes of job dissatisfaction were extrinsic (due to context). He integrated these insights into his two-factor theory of worker motivation, also known as motivation-hygiene theory. Over the years, the study was repeated multiple times in different settings.
A consistent finding was that people attributed positive outcomes to intrinsic or internal causes (self-caused) and negative outcomes to extrinsic or external causes. In other words, we credit ourselves for our successes and blame others for our disappointments. Herzberg’s findings have been highly influential in organizational psychology.
Drawing from the literature on organizational psychology, group dynamics, and family systems, we can put together a list of pointers:
Manager Dos
Identify and support appropriate boundaries in your work group:
Recognize positive behavior:
Make sure employees’ responsibility is proportionate to their control:
Listen to your employees:
Manager Don’ts
Pick favorites:
Avoid setting limits when needed:
Undermine group hierarchy:
Jump to conclusions when something goes wrong:
There is a large body of research examining how different personality types fit different types of jobs. The Strong-Campbell Interest Inventory is a well-known test that aims to match personal interests, personality types, and occupational choice. This and similar tests are used in vocational counseling to help people decide on a career direction. Based on their interests, people are characterized along six personality dimensions: realistic, investigative, artistic, social, enterprising, and conventional (RIASEC). The pattern of test scores is then matched to professions whose members have similar patterns of scores. For example, mechanics and construction workers score high on realistic, biologists and social scientists score high on investigative, and clinical psychologists and high school teachers score high on social. Newer adaptations of this test, such as the Campbell Interest and Skill Survey and the Strong Interest Inventory, have also been developed.
Developed by Isabel Briggs Myers and her mother Katharine Briggs and first published in 1962, the Myers-Briggs Type Indicator has become very popular in occupational settings. Based on Carl Jung’s theory of personality types, the Myers-Briggs classifies people into one of sixteen personality types depending on their scores on four dichotomies (pairs of opposites).
The first dichotomy, extraversion (E) vs. introversion (I), measures the degree to which someone is oriented toward the external, social world or toward their own inner thoughts and reflections. The second dimension, sensing (S) vs. intuition (N), refers to the way people gather information: Do they focus on concrete facts, or do they try to organize information into patterns? The third dimension, thinking (T) vs. feeling (F), relates to the way people make decisions: Do they focus more on facts and principles or on interpersonal concerns? The final dimension, judging (J) vs. perceiving (P), relates to the way that people come to closure: Do they prefer to come to a decision, or do they prefer to keep their options open, continuing to gather new information?
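The four dichotomies combine mechanically into a four-letter type code, which is how the sixteen types arise (2 × 2 × 2 × 2 = 16). A minimal sketch of that mechanism, with invented preference scores standing in for the instrument’s actual item scoring:

```python
# Sketch: how the four Myers-Briggs dichotomies combine into one of
# sixteen four-letter type codes. The 0-to-1 "preference scores" and the
# 0.5 cutoff are invented for illustration; the real instrument derives
# preferences from scored questionnaire items.

DICHOTOMIES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

def type_code(preferences):
    """preferences: four floats in [0, 1]; a value above 0.5 indicates
    the first pole of each pair (E, S, T, J)."""
    letters = []
    for (first, second), score in zip(DICHOTOMIES, preferences):
        letters.append(first if score > 0.5 else second)
    return "".join(letters)

print(type_code([0.8, 0.3, 0.7, 0.9]))  # → "ENTJ"
```

Each of the four independent binary choices doubles the number of possible codes, yielding the sixteen types described below.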
The sixteen personality types are identified by their initials (e.g., ENTJ, INFP, ESFJ) and have been linked to specific occupations. For example, people who score high on extraversion (E) might make good salespeople, while people who score high on sensing (S) might make good mechanics. Although this test has intuitive appeal, it has been criticized as lacking adequate scientific validation. Despite these criticisms, the test remains very popular in many settings.
Many factors contribute to one’s degree of success in a career. Some of these are external, such as opportunity, education, economic conditions, and professional connections. However, there is evidence that certain personality traits also contribute to success. In a 2001 study of 291 Romanian engineers by Marcela Rodica Luca, creativity and self-management were better predictors of success than intelligence. Moreover, intelligence was more closely related to academic success than to professional success. Of course, the sample was already self-selected for a high level of intelligence. Therefore, in jobs demanding high intellectual ability, once a certain level of intelligence is met, additional intelligence may not add much to the mix.
Several studies have addressed success orientation, suggesting that people who want success aim for it, plan how to achieve it, and are willing to work for it. Additionally, based on the many studies that highlight the importance of human relations, good interpersonal skills are clearly important in the workplace. Finally, internal locus of control, the tendency to believe one has the power to affect one’s situation, also contributes to success. Research has shown that when people believe they have control over their life circumstances, they are more likely to take action to reach their goals. In contrast, people with an external locus of control tend to be more passive.
Drawing from the research literature, here is a list of survival tips for employees:
Traditionally, psychology focused on the private life of the individual, but over time the field has broadened its scope. From Wilhelm Wundt’s (1832-1920) studies of perception in the late nineteenth century, psychology has moved to the study of the group, as seen, for example, in social and organizational psychology. More recently, psychology has moved into the public sphere, conducting studies on the personalities of politicians, voting behavior, and even ballot design.
Although there has not been much empirical research into the personality traits of politicians, there has certainly been a lot of commentary on this topic from psychoanalysts and other clinicians. The extensive media coverage of the lives of politicians provides ample opportunity for clinicians to make inferences about their psychological traits. The information in the media shares many similarities with the kind of information that psychotherapists gain from their patients during the course of treatment. Notably, the conclusions that different clinicians draw are quite similar. One of the most common traits that clinicians discuss is narcissism, perhaps because the most active media coverage of politicians follows scandals.
In effect, narcissism refers to a very fragile and unstable sense of self. In order to compensate for their fragile self-esteem, narcissistic people become preoccupied with their self-image and intensely sensitive to perceived shame or humiliation. Typical narcissists have a grandiose sense of self, with an inflated sense of self-importance and an elevated need for attention, status, and recognition. More recent research has focused upon a kind of reverse narcissism, in which people are tormented by poor self-esteem, but harbor grandiose expectations of themselves. The Diagnostic and Statistical Manual, fourth edition (DSM-IV or DSM-IV-TR) lists nine criteria for the diagnosis of narcissistic personality disorder.
A 1998 study suggests that politicians may have a higher level of narcissistic traits than the general population (iStock).
Published in 2000 by the American Psychiatric Association, the DSM-IV Text Revision (DSM-IV-TR) is the latest version of the DSM. In order to meet DSM-IV-TR criteria for Narcissistic Personality Disorder, an individual has to display a pervasive sense of grandiosity, excessive need for attention, and lack of empathy across a broad array of situations. Five of the following nine criteria need to be met:
The Narcissistic Personality Inventory is a self-report questionnaire that assesses a person’s narcissistic personality traits based on criteria from an earlier version of the DSM. Published by Raskin and Hall in 1979, it has become a widely used test of narcissistic traits. In 1984, Robert Emmons divided the total NPI score into four distinct dimensions: leadership/authority, superiority/arrogance, self-absorption/self-admiration, and exploitativeness/entitlement. Emmons found that the first three subscales were correlated with adaptive personality traits, such as self-confidence, extraversion, initiative, and ambition, while the fourth subscale was correlated with measures of psychopathology. This study suggests that narcissistic traits can have both positive and negative implications.
In one of the few studies to empirically investigate narcissistic traits in politicians, Robert Hill and Gregory Yousey administered the NPI to 123 university faculty, 42 politicians (state legislators from four states), 99 clergy (both Protestant ministers and Catholic priests), and 195 librarians. Their 1998 study found a statistically significant difference in total scores, with politicians scoring higher than the other three professional groups. In terms of the four subscales, politicians scored the highest on the leadership/authority subscale, and clergy scored the lowest on the exploitativeness/entitlement subscale.
In other words, politicians did score higher than the other three groups in total narcissism, but the differences seemed mainly due to their high scores on the leadership/authority subscale. Interestingly, although the differences did not reach statistical significance, politicians also had the highest scores on the superiority/arrogance and exploitativeness/entitlement subscales, and professors had the highest scores on self-absorption/self-admiration. Without statistical significance, however, these last differences could be due to chance.
This is an important question. While there is little to no research investigating this question, most clinicians believe that the personality and the job interact with each other. The traits necessary for success in politics have to be there from the beginning. It takes considerable self-confidence, extraversion, and ambition to wage a successful political campaign. But the experience of political power also has very potent psychological effects. The power and public attention can be intoxicating, leading people to feel they are entitled to special treatment and should not be held back by any limits.
After the 2008 presidential election, John Edwards, a serious contender in the Democratic presidential primaries, was revealed to have had an extra-marital affair. In a television interview on ABC News, Edwards attributed his behavior to narcissistic attitudes that had mushroomed during his very high-profile campaign. “In 2006, I made a serious error in judgment and conducted myself in a way that was disloyal to my family and to my core beliefs. I recognized my mistake and I told my wife that I had a liaison with another woman, and I asked for her forgiveness…. In the course of several campaigns, I started to believe that I was special and became increasingly egocentric and narcissistic.” He stated that his experiences on the campaign trail “fed a self-focus, an egotism, a narcissism that leads you to believe you can do whatever you want. You’re invincible. And there will be no consequences.”
(Note: Quotes are from the New York Times, August 8, 2008, and the New York Post, June 19, 2009.)
This dynamic can also hold true for celebrities. Clinicians have further commented that the need for a managed and massaged public image can make politicians feel unaccountable for their private behavior. Their public persona becomes entirely cut off from their authentic private selves. All that matters is the image, not the actual beliefs or behavior. In fact, the psychiatrist Robert Millman has coined the term acquired situational narcissism, referring to the explosive impact of fame, power, and celebrity on narcissistic tendencies.
It seems almost every other week we hear of some new political scandal. Politicians get in trouble for financial shenanigans, abuse of power, and sexual indiscretions. Over and over again, we wonder how such politically astute people can act so recklessly. Don’t they realize they are bound to get caught? As discussed above, politicians may have a higher level of narcissistic traits than is found in the general population and these traits are only strengthened in the seductive spotlight of elected office. The perks of power can create a semidelusional sense of entitlement and invincibility. In addition, when it comes to sexual scandals in particular, the theory of evolution may have something to add.
The list of politicians caught in sex scandals is remarkably long and thoroughly bipartisan in nature. While many other countries expect extra-marital dalliances from their politicians, American political culture is still highly punitive of politicians whose behavior strays from monogamous family values, despite the frequency of such behavior among politicians. Gary Hart was a Democratic candidate in the 1988 presidential primary when a sex scandal caught up with him. Eliot Spitzer was the Democratic governor of New York, brought down by a sex scandal in 2008. Republican John Tower was denied Senate confirmation for a cabinet post in 1989 after revelation of an extra-marital affair. Mark Foley, a Republican congressman from Florida, resigned in 2006 after the exposure of inappropriate contact with adolescent congressional pages.
New York’s Governor Eliot Spitzer resigned in 2008 after a scandal involving prostitution (AP/WideWorld).
According to the theory of sexual selection, men gain an evolutionary advantage from pursuing multiple mates. In many species, males pursue social dominance in order to gain access to a harem of females. In short, these alpha males maximize their evolutionary fitness by seeking out youth, variety, and quantity in their sexual encounters. While such behavior among humans is certainly not universal, neither is it unprecedented, or even that unusual. Thus, there may be inherent contradictions between the personality traits of people who succeed in the competitive and aggressive arena of electoral politics, and the public façade of pious self-control that many politicians feel compelled to adopt.
In a 2006 study, Mark Young and Drew Pinsky administered the Narcissistic Personality Inventory (NPI) to 200 celebrities. They found that celebrities scored significantly higher on the NPI than both the general population and a comparison group of MBA students. They also found that female celebrities scored significantly higher than male celebrities, which is the opposite pattern found in the general population. Further, reality television celebrities produced the highest NPI scores, followed by comedians, actors, and musicians. Interestingly, they found no correlation between NPI scores and years of experience in the entertainment industry. This suggests that the celebrities’ narcissistic tendencies may have predated their entrance into the industry.
One area that has received some attention in the research literature is that of celebrity worship. Several studies have looked at it from an absorption-addiction model, suggesting that extreme forms of celebrity worship may reflect a kind of addiction. Other research has found that mild forms of celebrity worship are quite common and unrelated to psychopathology, while more extreme forms do seem correlated with emotional disturbance.
In a 2003 study, John Maltby, James Houran, and Lynn McCutcheon administered the Celebrity Attitude Scale and a personality measure known as the Revised Eysenck Personality Questionnaire to 219 students and 390 community residents. They found modest but statistically significant associations between different kinds of celebrity worship and different personality traits. People who engaged in celebrity worship for social and entertainment purposes were more likely to score high on extraversion, an adaptive personality trait. People who had an intense and personal investment in celebrity worship scored high on neuroticism, which reflects an anxious and depressive emotional reactivity. Finally, people who scored high on the borderline pathological form of celebrity worship, the most disturbed form, scored high on psychoticism. In Eysenck’s scale, psychoticism is less about psychosis than about aggression, psychopathy, and social alienation.
Because voter turnout is essential to a democracy, psychologists have joined with political scientists to study the factors that motivate people to vote. If you look at it from a classic rationalist view of costs and benefits, you can argue that it doesn’t make much sense to vote. Voting takes time, energy, and even money if you have to miss a day of work to get to the polling place. And any single person is unlikely to feel that his or her vote will change the outcome of an election. Nonetheless, people do vote and their participation in electoral politics remains critical for the survival of the democratic system. Psychologists and their colleagues in other fields have considered the possible motivations for voting. Among other factors, they have suggested the role of habit, social pressure, altruism, and even genetics.
Research into voting records shows that some people vote regularly in every election, while others seem to target their votes to “issue elections,” where there are issues at stake that the voter cares about. Regular voters, or “habitual voters,” are more likely to have lived in the same house over several election cycles, according to Wendy Wood, John Aldrich, and Jacob Montgomery.
Researchers have also looked at the impact of social pressure on voting behavior. Not surprisingly, fear of public exposure can motivate people to get to the polling place. A political scientist named Donald Green mailed letters to about 90,000 Michigan households before the 2006 primary election; an additional 90,000 households received no letters. Four different letters were sent out. The first letter simply reminded people of their civic duty to vote; the second reminded recipients that voting records (whether or not people voted) are publicly available. The third letter included information on recipients’ previous voting behavior, and the fourth listed the past voting behavior of recipients’ neighbors. The fourth letter also stated that the recipients’ own voter turnout would be reported in a follow-up letter sent to their community. Recipients of the fourth letter showed the greatest increase in voter turnout (8.1 percent), followed by recipients of the third letter (4.9 percent) and the second letter (2.5 percent). Recipients who were simply reminded of their civic duty increased their turnout by only 1.9 percent.
James Fowler and Laura Baker have conducted a series of studies on voting behavior in families. They found that the party affiliation of adopted children tended to be similar to that of their adoptive parents and siblings, suggesting that party affiliation is culturally transmitted. When the authors compared the voting behavior of a large sample of identical and fraternal twins, they found that identical twins were more similar than fraternal twins in regard to whether or not they voted, but no more similar in their choice of candidate. In sum, this work suggests that voter turnout is related to genetics, while party affiliation is related to environment.
Other researchers have suggested that altruism plays a role in voter turnout. In an experimental manipulation called “the dictator game,” subjects are given money and told to share it with another person who will not know their name. In a 2007 study by James Fowler and Cindy Kam, people who shared their money were significantly more likely to vote than those who did not. Moreover, Richard Jankowski found that people who agreed with altruistic statements were more likely to have voted in the 1994 elections. Perhaps altruism is related to a sense of social commitment, specifically to a sense of connection to the social group and a feeling of responsibility for its well-being. Additionally, we can speculate that altruism may have some genetic component, perhaps accounting for the apparent genetic influence on voter turnout.
Candidates are very interested in the psychology behind candidate selection. The classic rational tradition would hold that people determine which candidate best represents their interests or their values and then vote accordingly. However, the psychologist Drew Westen has argued that people rely on far more than rational analysis when choosing how to vote. Careful analysis of candidates’ qualifications, voting records, and positions on the issues is time-consuming and difficult, especially for people who do not follow current events. Therefore, people tend to fall back on shortcuts, basing their decisions on personal liking of candidates, identification with candidates, hot-button issues, and simple messages that stimulate strong emotional responses. It is important to realize that much of this emotional information processing can be unconscious. As with many other kinds of choices, people may think their choices are based on rational analyses when they are actually more emotionally driven.
Voting behavior is certainly not a purely intellectual activity; researchers have found that other factors, including social pressure, feelings of altruism, and even genetics can play a role (iStock).
Given the emotional influences on voting behavior, it is hardly surprising that a whole industry of political consultants struggles to figure out how to best package candidates to appeal to the voting public. In short, many campaigns try to influence voter response by shaping voters’ emotional reactions. One powerful way to do this is through associative conditioning. Politicians try to create either negative or positive associations with particular issues or candidates. This can be done through careful use of language, meticulously designed visual images, and intentional use of emotionally significant symbols. For example, the colors of the American flag grace almost every national campaign and, though it may be a cliché, politicians are frequently photographed holding babies. These images prod the voter to associate the candidate with patriotism and support of the family.
The political consultant Frank Luntz specializes in shaping language to influence public opinion. Through language he aims to attract the listeners’ attention, implant ideas in their memory, and stimulate either positive or negative emotional responses. In his 2007 book, Luntz states that the most effective political rhetoric is characterized by repetition, consistency, simple plain language, catchy memorable phrases, and short sentences. The aesthetic quality of the speeches matters, too. A politician’s words should be pleasant to listen to, with a rhythmic flow. While the message should be consistent, some degree of novelty is also important to capture the listener’s attention. Moreover, visual images can often have more power than words. While Luntz has been criticized for promoting style over substance (and in effect, manipulation over communication), he states that content is not entirely irrelevant. The speaker must have credibility; if the politician goes too far beyond what is believable, the audience will be turned off.
In Frank Luntz’s view, a single word can frame an issue, creating either a positive or negative association in a voter’s mind. In a 2005 memo to Republican party members that was widely disseminated in the media, Luntz listed 14 phrases that should never be said. Included below are seven examples from his list. In support of the power of language, Luntz stated that two-thirds of Americans wanted to “personalize” Social Security, while only one-third wanted to “privatize” it. Luntz’s 2005 memo on political rhetoric states:
To Promote a Positive Impression

| Never Say | Instead Say |
|---|---|
| Tax Reform | Tax Simplification |
| Globalization | Free Market Economy |
| Foreign Trade | International Trade |
| Drilling for Oil | Exploring for Energy |

To Promote a Negative Impression

| Never Say | Instead Say |
|---|---|
| Government | Washington |
| Undocumented Workers | Illegal Aliens |
| Estate Tax | Death Tax |
Given its historical interest in perception, cognition, and motor function, there is much that psychology can contribute to the study of ballot design. There are two main problems to consider when designing ballots. For one, ballots should be functional; people should be able to use them with ease. This issue is of particular relevance for elderly voters, who may suffer from cognitive, perceptual, or physical difficulties. In a 2007 study by Tiffany Jastrzembski and Neil Charness, elderly voters were shown electronic voting machines that differed in two ways: votes were entered via touch screen or key pad, and races were presented either one at a time or all at once. This resulted in four different combinations of ballot design. Elderly voters performed best on the combination of touch screen ballots with races presented one at a time. Additionally, ballot design should not favor one candidate over another. For example, in a 1998 study by Joanne Miller and Jon Krosnick, name order was found to significantly affect voter choice in 48 percent of 118 races in the 1992 Ohio state elections. On average, the candidate at the top of the list received 2.5 percent more votes than the candidates listed further down the ballot. While that may not seem like a large margin, it is enough to win an election.