4
ETHICS
In the late 1960s, sociology PhD student Laud Humphreys began collecting data in “tearooms,” which referred to public restrooms that served as “locales for sexual encounters without involvement” (Humphreys 1975:2). Humphreys had been studying the lives of homosexual men and found that one aspect of this subculture not well studied was the behavior that took place in these tearooms. Humphreys found that many men—heterosexual, married, single, or homosexual—often wanted “instant sex” and would seek out these public restrooms (tearooms) in order to engage in this instant, impersonal, and anonymous sex.
Humphreys describes the intricacies of his field research by stating that in order to gather data on this type of private behavior, he had to pass as a “deviant.”1 Openly participating in the illegal activity of public sexual encounters would itself have been unethical, so how could Humphreys infiltrate these public spaces to learn about the behavior associated with this subculture? Humphreys discovered a way to participate as an observer. Because acts of homosexuality were criminalized at the time, men who participated in the “tearoom trade” had a great deal to fear, and their fear created a role for a third party that fit the needs of a social researcher: a lookout. Humphreys became the lookout (or “watchqueen,” as it was called in the subculture), observing hundreds of these “impersonal” encounters within the confines of public restrooms. By choosing to play the role of the watchqueen, Humphreys gained access to tearooms without having to participate in the sexual encounters themselves.
As the watchqueen, Humphreys’s role in the impersonal sex process was to keep an eye on the restroom entrance in order to see who was coming in. If a stranger (a legitimate “straight” man who was simply there to use the restroom facilities) or a law enforcement officer was about to enter, Humphreys’s job was to give a loud cough to notify participants of the possible danger. If the entering man was a “regular” (someone known to the group to be there for the sexual encounter), Humphreys would nod to the other participants, letting them know they were not in danger of being discovered.
Participating in his unique role, Humphreys gained unprecedented access to a subculture that was not well known. In the course of his data collection, Humphreys realized that he had access to more information about the men he was studying. From his vantage point as a lookout, he was able to watch the other men come and go from the restroom. He noticed the wide array of men who participated in the tearooms and began to connect them to the cars they drove. By linking each man to his car, Humphreys was able to learn more about the participants (for example, were they driving an expensive car, a family car, etc.) than he could through his observations in the tearooms alone.
Recognizing that he could match participants to their cars, Humphreys began to record several pieces of information about each participant; he recorded the license plate number on the car, a brief description of the car, and a description of the participant. Humphreys reports that in most cases he would observe the activity of participants in the tearoom, leave the tearoom, and then wait in his car until participants came out, getting into their cars to leave the area. Armed with license plate numbers and descriptions of cars, Humphreys posed as a market researcher and visited local police precincts where “friendly policemen” gave him access to the license registers where he obtained the names and addresses of tearoom participants.
After the first year of his observations, Humphreys was asked to develop a questionnaire for a social health survey for men by a social research center. With the project director’s permission, Humphreys added the names and addresses of 100 tearoom participants to the survey sample in order to go to their homes and interview them about a host of topics. Since formal interviews had been part of his original research design, adding his tearoom participants to the survey sample gave him the opportunity to interview them in a nonthreatening way (since the surveys put them into the context of a random sample of men participating in a legitimate study rather than as sexual “deviants” who he had observed in anonymous sexual encounters). Subsequent to the formal health survey interviews, Humphreys took a sample of 50 of the tearoom participants and 50 of the non-tearoom participants from the health survey sample to do in-depth follow-up interviews so that he could compare his sample of tearoom participants with a “control” group of “nondeviants.”
When Humphreys published his work on the tearooms, it incited a maelstrom of controversy (see Horowitz and Rainwater 1975; Humphreys 1975; von Hoffman 1975; Warwick 1975). Had Humphreys’s field research been ethical? Had Humphreys misrepresented himself by pretending to be part of the tearoom practices? Had he violated the rights of any individuals? Had he committed felonies while participating in criminal acts? These questions raised a larger question: what is permissible when collecting social scientific data?
Humphreys’s Tearoom Trade study illustrated a conundrum within the research process: who should have more freedom, the researcher or the participant? While there are certainly benefits to doing or participating in social research, there are also costs, which include deceiving participants (who are then unaware they are being observed for the sake of social research), invading their privacy, and using research findings in harmful ways (see Warwick 1975). In the case of Tearoom Trade: Impersonal Sex in Public Places, Humphreys deceived his subjects in a number of ways. First, he misrepresented himself as just another participant by serving as a watchqueen when he was truly participating as a researcher. Second, by connecting participants to their license plate numbers and then posing as a market researcher to obtain their names and addresses, Humphreys misrepresented himself. Finally, by adding tearoom participants’ names to the sample for the men’s social health survey in order to go to their homes to interview them, he again misrepresented himself. (This last misrepresentation has implications for the integrity of social research for another reason: adding his participants to the sample impacted the representativeness of the sample, with consequences for the health survey.)
Much of the social scientific community lauded Humphreys for his unique and interesting contribution to the field, noting that his motives were pure and that he had done an admirable job maintaining the confidentiality of his participants while at the same time getting “needed, reliable information about a difficult and painful social problem” (Humphreys citing Horowitz and Rainwater 1975:179). Others, however, were appalled at his tactics for obtaining such detailed information about his participants.
While researchers and the public at large have a right to know about social phenomena, as individuals and groups within society, each of us also has a right to our privacy. Warwick (1975 cited by Humphreys 1975:209) summarizes the consensus on the issues Humphreys’s study evokes: “Social scientists have not only a right but an obligation to study controversial and politically-sensitive subjects, including homosexuality, even if it brings down the wrath of the public and government officials. But this obligation does not carry with it the right to deceive, exploit, or manipulate people.”
Warwick’s concerns and Humphreys’s study coincided with a renewed professional emphasis on ethics. In the early 1970s, professional associations like the American Psychological Association and the American Sociological Association were in the process of debating new codes of ethics. Ethics are the standards of conduct of a given profession or group. When research involves humans, there are several procedures in place to protect the individuals and groups participating. While scientific research systematically investigates questions that contribute to our general knowledge of what people think and do, no scientific goal should ever overshadow the rights an individual has in choosing whether or not to participate in social research.
We need to be clear about what is and is not permissible when conducting research with human beings. Unfortunately, there are myriad examples of human rights abuses committed in the name of scientific research. Accounts of human research atrocities during wartime are sometimes stupefying in their depravity. The extensive human research the Nazis performed in the concentration camps during World War II serves as an example of the inhumanity of some human research. “Nazi ideology was predicated on the concept of racial supremacy” (Bogod 2004:1155), with blacks, gypsies, homosexuals, and Jews (the untermenschen) on the bottom rungs of the racial ladder. Because these groups were considered subhuman, they were seen as legitimate targets for medical experimentation.
The so-called medical studies performed by the Nazis subjected people to a variety of chemical injections (gasoline) and injections of live viruses (including typhus, tetanus, and streptococcus). Experiments also subjected people to submersion in ice water or leaving them strapped to stretchers naked for extended periods of time in below-freezing temperatures (part of a program to help researchers learn how to revive Luftwaffe pilots who were shot down over the North Sea). As part of these experiments children were castrated, limbs were amputated, and men and women faced mass sterilization; women were injected with “caustic substances” and men underwent irradiation (Bogod 2004; see also National Institutes of Health [NIH] 2008; Weindling 2004).
As a counterpart to Nazi experimentation, Japanese military doctors and scientists also conducted a wide range of horrific experiments on humans in a quest to develop an effective biological weapons program from the 1930s until the end of World War II (Kleinman, Nie, and Selden 2010). Like the Nazi ideology that perceived certain groups of people as not fully human, the Japanese saw the Chinese and other subjects of their experimentation as simply “experimental material.” The Japanese referred to their subjects as maruta, meaning “logs of wood” or “lumber” (Kleinman, Nie, and Selden 2010:5). Carried out in occupied China and Southeast Asia (often through the secret Unit 731), the Japanese Imperial Army conducted human experiments including the amputation and reattachment of limbs from one person to another, the vivisection of live people, and injections of diseases including the agents that caused anthrax, cholera, the plague, and typhoid (Kleinman, Nie, and Selden 2010; see also Nie et al. 2010).
While it is easy to point fingers at other nations, one infamous human research study conducted in the United States spanned a 40-year period (1932–72). In 1929 the U.S. Public Health Service (USPHS) conducted studies in the rural South to assess the prevalence of syphilis among blacks in order to develop strategies for mass treatment of the disease. Brandt (1985) argues that social Darwinism had created a new racial ideology in America that considered blacks to be a more primitive people who could not be assimilated into complex societies. As a supposedly more “primitive” people, blacks were seen as prone to disease, specifically venereal diseases like syphilis (Brandt 1985).
Having identified Macon County, Alabama, as having the highest rates of syphilis in the counties surveyed by the USPHS, the Tuskegee Syphilis Experiment began in 1932, seeking to track the development of untreated syphilis in 400 black men. Even when penicillin became available as a treatment for syphilis in the early 1950s, the men in the study were denied treatment (even during World War II when some of the men visited their local draft or health boards, they were denied entrance into the army and treatment for syphilis). In fact, the USPHS worked with doctors, county and state health officials, and draft boards to ensure the continued participation of the Tuskegee Syphilis Experiment subjects. As late as 1969, oversight committees (one from the federally funded Centers for Disease Control and Prevention) met to decide if the study still had relevance and should be continued—still denying treatment to the participants (see Brandt 1985; Jones 1993; NIH 2008).
Only in 1972, when details of the study came out in the popular press, was the experiment halted. By then at least 28 of the men (and likely many more) had died due to complications from syphilis (Brandt 1985). The public outrage generated by the Tuskegee Study resulted in the National Research Act of 1974, as well as basic federal policy on the Protection of Human Research Subjects (Brandt 1985; NIH 2008). While science has great potential to create better lives for humans, it also bears the responsibility of making sure that harm does not come to humans in the name of scientific discovery.
From the “scientific” experiments on those in Nazi Germany and Imperial Japan to the Tuskegee experiment in the United States, research on humans has resulted in a number of truly horrific consequences. In order to guard against these wrongs, we now have federally mandated guidelines in place to protect human subjects. The U.S. Department of Health and Human Services’ Office of Human Research Protections (OHRP) evaluates research, holding research institutions and individual researchers to account for basic human safety, assessing the possible risks to human subjects, the adequacy of protection from risks, and the potential benefits to research participation for subjects. These federal guidelines are instituted through human subjects committees, or institutional review boards (IRBs) located at research institutions. In a university setting, for example, IRBs are composed of panels of faculty members who review proposed research involving human subjects in order to guarantee that subjects’ rights and interests are protected.
The basis for conducting human research comes from a 1979 document, the Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (NIH 2008). The Belmont Report identifies three principles essential to ethical human research: respect for persons, beneficence, and justice (NIH 2008). Respect for persons rests on the principle that people should be treated as “autonomous agents” who are allowed to consider for themselves the potential harms and benefits of a situation, analyze how those risks and benefits relate to their personal values, and then take action based on that analysis. Social research can be intrusive; therefore research subjects have the right to evaluate whether or not they want to participate in research. Research should be voluntary, and researchers have the responsibility to provide potential subjects with adequate information so that the subjects can make an informed decision on whether or not they want to participate.2 This requires researchers to provide informed consent.
Informed consent allows subjects to base their voluntary participation on a clear understanding of what a proposed study will do, any possible risks associated with the study, how their data will be handled (e.g., how will researchers keep their data confidential), and if there are resources available for any adverse effects (psychological or medical). Informed consent should also clearly state that participants have the right to terminate their participation at any time in the course of the research. The challenges encountered within the first principle of the Belmont Report include making sure that participants fully understand both the risks and the potential benefits of participating in a study, and making sure there is no coercion or any undue influence by the researcher to get a subject to participate.
Sample informed consent form (signature block):3

Participant’s name (print) _________________________________
Participant’s signature ___________________________________
Date ______________

Researcher’s name (print) ________________________________
Researcher’s signature ___________________________________
Date ______________
Beneficence, the second principle of the Belmont Report, implies that researchers have an obligation to secure the well-being of research participants. Two general rules of beneficence are to do no harm and to maximize the potential benefits of the research while minimizing its potential harms. Research should never be allowed to cause any harm to participants, whether physical, psychological, social, legal, or economic. Along with providing participants with informed consent, researchers are responsible for apprising participants of any possible risks, that is, the probability that a certain harm can or will occur over the course of the research. The general rule is that no risk encountered during the research should be greater than the risks encountered in daily life. The challenge in applying the second Belmont principle lies in determining when the possible benefits of the research outweigh the possible risks, and vice versa.
The third principle of the Belmont Report speaks to the moral requirements of research; that is, are the procedures fair, and are individuals or groups treated fairly? In effect, are the research and the treatment of its subjects just? For research to satisfy the principle of justice, researchers must make sure that the benefits and risks of the research are fairly distributed among groups, individuals, or societies. One concern related to justice is that deception is sometimes used in the course of a study. Although deception is generally unethical, there are times when slight deception can be used ethically. In some social experiments, for example, in order to observe a subject’s authentic responses, researchers have to redirect the gaze of the subjects.
Let’s suppose that a researcher was concerned that people’s behaviors were unduly influenced by the power of an authority figure. The researcher designs an experiment in which a research scientist asks a subject to participate in an experiment on learning and memory. The subject reads word pairs over an intercom system to a “learner” in another room, where the learner is strapped to a machine that doles out an electric shock whenever he incorrectly answers a prompt from the subject reading the word pairs. In the initial phases of the experiment, the subject administers small amounts of electric shock to the learner in the other room. At some point, however, the subject hears the learner exclaim through the intercom that he wants out of the experiment. The subject then turns to the authority figure (the research scientist overseeing the experiment) to ask if he should stop administering the shock and let the learner out of the experiment. The research scientist prompts the subject to continue with the experiment regardless of the learner’s response. This last move is intended to test the true relationship between authority and obedience. If the researcher had initially introduced the experiment as one testing whether subjects would follow authority, the chances of observing true responses would be quite small. Instead, the researcher redirects the gaze of the subject to focus on an experiment about the impact of memory on learning. If this experiment sounds familiar, it should; it’s the famous obedience experiment conducted by Stanley Milgram, which we’ll revisit in Chapter 14 on experiments.
Whenever deception is used in a study, researchers have an obligation to debrief each participant. A debriefing provides subjects with a full account of the study’s actual goals, so that subjects are fully aware of the true goals of the study in which they participated. By providing a full description of the study, a debriefing corrects misconceptions and reduces the stress, anxiety, or concerns participants may have encountered during the research. While the three principles of respect for persons, beneficence, and justice speak directly to human subjects, there are additional ethical standards in place to regulate the professional practice of social research.
One last concern regarding ethics is how to deal with the paradox that can occur in research. Researchers are sometimes in a unique position to give participants access to some benefit. In one famous educational experiment, researchers sought to study the connection between teacher expectations and student performance (Rosenthal and Jacobson 1968). In creating their research design, the researchers fashioned a fictitious measure (the Harvard Test of Inflected Acquisition) to “identify” students who would be experiencing an educational growth spurt over the course of a school year. Alerting teachers to these students, researchers created a classroom environment where teacher expectations seemed to positively impact student performance. Students identified at the beginning of the school year as “spurters” did show greater intellectual development at the end of the school year (Rosenthal and Jacobson 1968).
The experimental design in this study was intended to test if there was a link between teacher expectations and student performance; the researchers did not know that there would be a link. This illustrates the paradox. The students who were part of the experimental group (the “spurters”) benefited from their random assignment into the experimental group. If you had a child who had been assigned to the control group, would you be upset that he or she didn’t benefit from higher test scores? The general rule for researchers is that study participants cannot be worse off than if they had not participated in the study. The children in the teacher expectation/student performance experiment were not worse off than if they had not been participating in the study at all. Therefore, we consider the study to be ethical.
While the primary directive in doing social research is to protect those who participate, there are a variety of other ethical issues related to doing social research, including discipline-specific professional standards of competence, integrity, scientific responsibility, and social responsibility. As professional sociologists, we are subject to the American Sociological Association’s (ASA) Code of Ethics and Policies and Procedures of the ASA Committee on Professional Ethics. The ASA code of ethics, much like other professional codes of ethics, stipulates that members maintain their competency through education, training, and experience using the appropriate resources (scientific, professional, technical, and administrative resources).
As professional social scientists, we are also charged with being honest, fair, and respectful of others in both our research and our everyday professional activities, adhering to the highest scientific and professional standards. Professional researchers also have a responsibility to their communities: to make the knowledge they gain through research public, for the public good. As the ASA Code of Ethics states, “When undertaking research, [sociologists] strive to advance the science of sociology and to serve the public good” (American Sociological Association [ASA] 1999:5). It would be easy to pretend that these guidelines for ethical research are always clear cut and simple to follow. Social research, however, can be quite complex, as Humphreys’s Tearoom Trade study illustrates. As social scientists we must take to heart the professional ethical directive to strive for the advancement of science and the public good. In research with human subjects, no scientific goal, no matter how lofty, should ever supersede the health and well-being of the people we study.
Notes
1 In 1966, when Humphreys began collecting data on men in tearooms, homosexual acts were illegal and seen as “deviant” behavior.
2 There are provisions in place to protect vulnerable populations who may not be able to make informed decisions due to their age (children) and/or condition (pregnant women, human fetuses, prisoners, people with mental disabilities, and/or people who are economically or educationally disadvantaged).
3 Adapted from the NIH Office of Extramural Research (see NIH 2008).