The Tuskegee (Public Health Service) Syphilis Study
This paper discusses the problem of making transhistorical moral judgments using the case of the so-called* Tuskegee Syphilis Study as an example. Can a later generation validly place moral blame on Public Health Service (PHS) physicians who began the study in 1932? Are past decisions and actions morally relative only to the standards of the times and within the social circumstances in which these standards apply? What reasons count for and against a retrospective moral judgment? Some personal and historical comments are followed by an overview of an approach to these questions.
Not then a reader of the venereal disease literature, I learned of the PHS syphilis study through Jean Heller’s 1972 news story.1 In 1966, Peter Buxton, then a PHS venereal disease investigator, courageously tried to stop it. After six years of PHS resistance to his efforts, he turned to a journalist friend and the news broke to an incredulous public. Ironically, in 1966 the PHS reformed policy to protect human subjects of research. In July 1966, Surgeon General William H. Stewart mandated prior group review of human-subjects research.2 The same month I began a two-year study of the ethics of medical research at the National Institutes of Health’s (NIH) Clinical Center. Thereafter, research proposals to the NIH’s extramural program that involved human beings had to undergo local review of the rights and welfare of subjects, the appropriateness of methods for informed consent, and the relation of risks to benefits.3 How could both events—resistance to the Buxton protest and policy on prior group review—have occurred virtually side by side in the same agency? I later learned from James Jones’s 1981 classic history of the experiment that in 1965 a Detroit physician had written a letter of moral protest to a scientist at the Centers for Disease Control (CDC) who had written an article on thirty years of observation of the subjects.4 The letter went unanswered. As a bioethicist at the NIH’s Clinical Center from 1977 to 1987, I learned personally how a large and complex scientific bureaucracy can house the best and the worst on a moral spectrum.
*I say “so-called” because a more accurate name is “The Public Health Service (PHS) Study of Partially Treated Syphilis in Macon County, Alabama.” The PHS was morally responsible for the study in its entirety. The misnaming of the study is an illustration of how racism works against black persons. Tuskegee University and the townspeople are touched by a legacy of shame each time the name is used. Changing popular usage is a lost cause, but one must protest the usage.
John C. Fletcher is Emeritus Cornfield Professor of Religious Studies and former director of the Center for Biomedical Ethics, University of Virginia.
Printed by permission of John C. Fletcher.
My interest in the PHS syphilis study and its legacy has several sources. Alabama was my home until early adulthood. There I idealistically entered the Episcopal ministry in 1956.* Encouraged by a then-liberal religious tradition to engage in issues of social justice, I worked with others of like mind for a “new South” to emerge out of poverty and segregation. Since southern reformers looked to the federal government as an ally in the 1950s and 1960s, it was a harsh blow to learn that government physicians and agencies did research “on” rather than “with” uninformed black Alabama sharecroppers.**
At the NIH from 1966 to 1968, I must have come close to learning about the experiment. I assembled ten meetings, now called “focus groups,” of leaders from each institute to discuss a question: “What are the most important ethical issues that your institute’s research poses for society in the next five to ten years?” Although the question was future oriented, some participants commented about past research activities that were “beyond the pale” of research ethics in 1968. The NIH received and analyzed spinal fluids and autopsy specimens of the Alabama subjects, but none of the dozens of officials mentioned the study.5
Later, I taught biomedical ethics at the University of Virginia’s School of Medicine for ten years (1987–97). The three PHS officers who led the study until 1943 graduated from this medical school. Dr. Taliaferro Clark had the idea for a brief study of untreated syphilis. The occasion for his idea was the loss of income to the Julius Rosenwald Fund caused by the Great Depression. The fund had no more resources to continue support for a PHS project of syphilis detection and treatment demonstration in six sites in the South, including Macon County, Alabama. At an impasse, Clark salvaged a convenient sample of infected and untreated black subjects to study. He wanted to compare the outcomes of black persons with untreated syphilis with those of whites, because such a study had been done with all-white Norwegians with syphilis in Oslo in the early twentieth century.6
*I found the profession rewarding for its proximity to life’s most important questions and passages but requiring loyalty to a theistic worldview which I cannot accept with intellectual honesty or bring to bear on the most important issues in bioethics. After many years of struggling with these issues and myself, I resigned from the Episcopal ministry in 1990.
**Mortimer Lipsett, director of the Clinical Center, NIH (1976–81), discussed the change in the ethos of clinical research in his lifetime by the use of these prepositions. Dr. Lipsett employed me as his assistant for bioethics in 1977.
Dr. Hugh Smith Cumming (1869–1948), surgeon general of the United States, made the official decision to approve the study. Dr. Raymond Vonderlehr succeeded Dr. Clark when the latter retired in 1933. Vonderlehr’s dedication to continuing the study as director of the PHS’s Division of Venereal Diseases (1933–43) in large part explains the study’s longevity.* He was succeeded by Dr. John R. Heller, a PHS officer (but not a Virginia-trained physician) with extensive field experience in Alabama. I met Dr. Heller at the NIH in the late 1970s. He adamantly denied that any moral wrongdoing had occurred in 1932 or on his watch.7 Dr. Heller became director of the National Cancer Institute in 1948.
I also cochaired the Tuskegee Syphilis Study Legacy Committee with Dr. Vanessa Northington Gamble. With funding from the CDC, the committee met at Tuskegee University in January 1996. I was invited because, beginning in early 1994, I had lobbied the PHS to make an official apology for the study.8 The committee’s mission was to alter the study’s destructive legacy. We discussed the social wound left by the study, which is large and fresh in the collective memory of African Americans who associate the study with conspiracy and genocide aimed at people of color. Further, the committee recognized that this legacy seriously impedes participation of African Americans in AIDS prevention and research.9
We pressed our cause that President Clinton should apologize to the survivors, their families, and the community of Tuskegee, which unfairly bears the shame of the study’s name. To his administration’s credit, President Clinton made an apology at a ceremony held at the White House on 16 May 1997. The apology was certainly apropos, but it was offered at the wrong place. The committee had strongly urged the site of Tuskegee itself to enhance the symbolism of the event, to promote racial healing, and to make it feasible for more family members of the subjects to attend.10 I conclude these comments with two questions: How many black persons, even now, remember an apology for the experiment that was given at the “White House”? How many would remember an apology given by a president who journeyed to Tuskegee?
This four-part paper addresses the question of the validity of transhistorical moral judgments about the PHS syphilis study. Part One describes a method to assess the validity of a transhistorical moral judgment. Part Two uses this method to evaluate the validity of moral judgments made by an official body and others in 1973 that the study was unethical at its inception, due to lack of informed consent to research. This part includes a section pointing to two bodies of moral guidance in 1932 by which Clark and Vonderlehr et al. could have been challenged.
*The key figures and their many colleagues are hereinafter referred to as Clark and Vonderlehr et al.
Part Three argues for a graded approach to moral blame, focusing on PHS researchers and officials in the 1950s and 1960s for their moral blindness to the most objectionable features of the study. Part Four examines two concluding questions: Why did the PHS study endure so long without serious internal moral challenge, and what lessons can be learned from the experiment about protection of human subjects in our own era and in the future?
This paper originated as a response to a question posed to me by the Advisory Committee on Human Radiation Experiments in 1994: “Can we judge the standards and conduct of those who preceded us?”11
We plainly can and do make transhistorical moral judgments. Indeed, we must make such judgments to be loyal to moral norms and to transmit moral evolution to a new generation. The task of transmitting moral evolution also requires accuracy and truthfulness about the history of reform of social practices, which must document the most serious moral lapses and errors. The history of the morality of human experimentation and the role of the PHS syphilis study is a significant case in point. The United States is considered a world leader in innovation in the ethics of research; however, the PHS leaders who began a public process of reform were among those who condoned the syphilis study.
If we fail to judge the past, however measured our judgments, we will lose in our collective memory the harm and suffering caused by older practices. We will lose, too, in our moral evolution the ability to change those harmful practices. Making such judgments is risky; it invites the fallacy of misplaced moralism, caused by imposing present-day judgments onto the past, but it is not necessary to commit this fallacy. To avoid doing so, valid transhistorical judgments require two tasks. The first, with the aid of historical research, is to put ourselves as much as possible in the moral position of those who are under scrutiny. Were they morally culpable in their own time?
To pursue this task, one must examine both the practice of wrongdoing and any movement to abolish it by law or to reform it in social practice. The challenge is to identify the moral norms of the day, as well as any social movements aimed at shifting those norms to abolish certain practices. For example, we now cast moral blame on past practices of slavery, racial segregation, and economic exploitation of child labor, and we object to the conduct of those who defended such institutions and practices by the standards of their times. However, each of these practices and the movements to reform them have long and complex histories which were prefaced by the moral witness of gifted individuals. Slavery required three centuries to abolish in the United States; racial segregation is illegal in its overt forms but still embedded in our social practice; although children are legally protected in this nation from exploitative labor practices, multinational corporations operate today in countries where child labor is cheap and loosely regulated, if at all.
Every reform movement has a moral dimension, and we find reformers appealing to moral ideals that did not prevail at the time of the objectionable practices because the majority of people were loyal to competing moral standards. Abolitionists and reformers in these movements mainly appealed to the supremacy of moral ideals of equality and respect for persons, whether these ideals emanated from religious tradition or a theory of natural rights. The ideals required specification, and the reform of morally discredited practices, in order to penetrate society and win the loyalty of larger numbers.
The second task is one of historical comparison with a moral aim: comparing present to past practices and the standards that undergird them. The moral aim is to view the past as a negative “paradigm case,” to enhance moral education for future decisions and to prevent recurrences.
When the PHS syphilis study was reported in the press, the assistant secretary for health appointed the Tuskegee Syphilis Study Ad Hoc Advisory Panel in 1973 to review the study. The panel concluded that the study was “ethically unjustified in 1932,” that is, at the study’s inception.12 Also, Senator Edward Kennedy stated that the study “was an outrageous and intolerable situation in which this Government never should have been involved.…”13 Are these moral judgments valid?
The panel’s judgment was based on a premise that the subjects had been deprived of informed consent to a study of a disease with a known risk to human life.14 But in 1932 the norm of voluntary informed consent, when based on a concept of respect for the individual’s autonomy that outweighs the beneficence of medical treatment or research, was not a part of the ethos of American researchers. Thus the panel’s judgment is flawed because informed consent was not an intact norm of researchers at the time. It is true that a few exceptional investigators, like William Beaumont and Walter Reed in the nineteenth and early twentieth centuries, sought the voluntary informed consent of their subjects for dangerous studies. But Beaumont and Reed were exceptions to the norm in nontherapeutic research.15
Nevertheless, it is a historical fact that no one protested the PHS study of untreated syphilis at the time on the basis of informed consent. Not until the 1960s was there any organized movement to reform human experimentation motivated by loyalty to any particular moral norm. Moreover, the PHS study was not cited in Henry Beecher’s famous article of 1966 naming twenty-two unethical or marginally ethical studies.16 Beecher may have known of the study but omitted it due to its inception in a much earlier era.
The original study design was a six- to eight-month investigation of the natural history of untreated syphilis, using comprehensive physical examinations and X rays, lumbar puncture, and specimens from autopsies. There was never an intent to seek informed consent for this study. On the contrary, the intent was to disguise the study as treatment to make it acceptable. Dr. Clark’s own words were: “To secure the cooperation of the planters … it was necessary to carry on this study under the guise of a demonstration and provide treatment for those cases uncovered … in need of treatment.”17
Notably, the decision to give some treatment resulted from an objection to the original design. When Dr. Clark presented the plan to Alabama public health officials and local physicians, they insisted that the subjects should receive some treatment. An agreement was reached that subjects who tested positive for syphilis were to receive eight doses of neoarsphenamine and some additional treatment with mercury pills, unless contraindicated on medical grounds.18 All of the men in the study received one or the other drug or both, which were known at the time to be inadequate to treat syphilis.19 The standard of care for treatment of syphilis at the time was a one-year treatment regimen of arsphenamine and bismuth or mercury.20
In Bad Blood, Jones saw the objection as political, arising from concern that the landowners for whom the subjects worked would not cooperate with a plan that lacked treatment.21 However, the physicians and officials may have operated with a degree of medical beneficence. Clark and Vonderlehr and the others were physicians who quickly compromised to give therapy. Also, state public-health officials could have been trying to stay within reach of an Alabama public health law passed in 1927. This and later laws were probably violated by the experiment.22
The compromise in study design had two morally relevant consequences. First, it brought a “fatal flaw” into the original scientific plan to study untreated syphilis.23 Due to partial treatment, all information was basically uninterpretable. Despite the scientific idealism (mixed with concepts of racial medicine) that inspired it, the study has to be viewed as a scientific failure that exploited and then wasted the sacrifice and suffering of the subjects. PHS officials through the years worried about the same issue. Dr. Austin Deibert discussed the contamination of results by partial treatment in 1938.24 Dr. Albert Iskrant, chief statistician of the Division of Venereal Diseases, sounded the same alarm in 1948 and also asked whether the study was in accord with Alabama law.25 His concluding comment about the study was: “Perhaps the most that can be salvaged is a study of inadequately treated syphilis.”26 The PHS took no action in 1948 but in 1951 expended a significant effort under Dr. Sidney Olansky to strengthen and overhaul the structure and efficiency of the experiment. No one proposed to stop it for either scientific or humanitarian reasons.
Second, even partial treatment provides a valid context within which to discuss the issue of informed consent and the ethics of Clark and Vonderlehr et al. as physicians. Did they violate the medical ethics of the time, since some treatment was involved and they failed to obtain consent to it?
Scholars of history and ethics are divided about the prevalence and force of a norm of informed consent in nineteenth- and early to middle twentieth-century medicine. In a fine review of this question, Faden and Beauchamp compare the widely divergent views of Martin Pernick, a social historian, and Jay Katz, a psychiatrist member of the Yale law faculty.27 They write: “Where Katz sees no informed consent, Pernick finds it in abundance.”28 Pernick acknowledged differences between historical practices and the modern concept of “informed consent,” but his study of nineteenth-century materials and cases concluded that “truth-telling and consent-seeking have long been part of an indigenous medical tradition.”29 He found that the concept and practice were prompted by efforts to benefit patients therapeutically, rather than by any theory of individual rights or self-determination. Katz, on the other hand, while accepting the historical existence of such concepts and their role in early-twentieth-century legal cases, wrote of the “history of silence with respect to patient participation in decision making.… When I speak of silence I do not mean to suggest that physicians have not talked to their patients at all.… They have not, except inadvertently, employed words to invite patients’ participation in sharing the burden of making joint decisions.”30 Katz’s main point is that up until the 1970s, physicians did not have meaningful discussions with patients about their choices and alternatives in treatment and research. Faden and Beauchamp concede more to Katz than to Pernick as to the lack of a pervasive practice of informed consent in medicine.
Faden and Beauchamp compared these two views from a perspective on models of ethical justification. They argued that the earlier practices that Pernick found in abundance were defended by a “model of beneficence” that used disclosure and consent seeking to further the aim of therapeutic benefits for the patient. Education and motivation improved patients’ chances for a better response to therapy. Both Pernick and Katz acknowledge the influence of this view. Thus, there is a real historical link between treatment given in the PHS study and a paternalistic practice of informed consent in medicine in the early twentieth century. Disclosure and consent seeking undertaken for the patient’s welfare, but not for the patient’s participation in decision making, were familiar practices in medicine at the time.
The primary aim of Clark and Vonderlehr et al. was not treatment of syphilis. Two ideas, one scientific and one ethical, motivated them. First, Clark saw an “unparalleled opportunity for the study of the effect of untreated syphilis.”31 A theory of a different natural history of syphilis in blacks, compared to whites, was commonly held but unproven among Clark’s physician contemporaries. Dr. Joseph Earle Moore, a well-respected expert on syphilis, gave a favorable peer review of the proposed study. He was convinced that the course of syphilis in blacks was different than in whites. The common view, shared by Moore, was that the disease attacked the cardiovascular functions in blacks and the neurological functions in whites. Moore’s review carried great weight in moving the study forward. A large element of racial bias was thus embedded in the study’s hypothesis. The Norwegian study, published in 1929, showed that cardiovascular complications were common and neurologic damage was rare. As Jones put it, “Anyone who was not predisposed to find differences might have looked at these facts and concluded that the disease was affecting both races in the same way.”32 Nonetheless, at its inception, a reasonable scientific argument could have been made for a six-month to one-year study of untreated syphilis in blacks and whites, along with autopsy follow-up of those who died.
The other imperative was one of medical beneficence. The investigators and other central characters in this unfolding tragedy, like Nurse Eunice Rivers and the physicians at Tuskegee’s John A. Andrew Hospital, believed that bringing the subjects into the orbit of “government medicine,” with its complete physical examinations and detection of co-morbidities like tuberculosis and other problems, was so preferable to the status quo that the attention the subjects would receive more than justified the effort.
However, if research was the overriding goal, Clark and Vonderlehr et al. deceived themselves as scientists by adding therapy. The Oslo study had investigated the natural history of totally untreated syphilis. Now the two studies would not be comparable. In the ethos of the time, Clark and Vonderlehr et al. are more blameworthy at the outset for flawed science than as physicians who failed to seek informed consent. A short-term study of untreated syphilis would likely have refuted the racial hypothesis. Nonetheless, Clark and others lacked the courage to defend a nontherapeutic study to the end. If they had stood this ground and failed, the nation would have been spared the legacy of the study. However, they successfully compromised to start the study, which from the outset was a scientific mistake leading down a true ethical “slippery slope” and resulting in an eventual avalanche of moral problems. When Dr. Vonderlehr and others extended the study, especially into the 1950s, the risks of death due to withholding effective treatment were vastly higher, as were violations of other ascendant norms of research, like informed consent.
My conclusion is that the ad hoc committee inappropriately placed moral blame on Clark and Vonderlehr et al. by using a contemporary ethical understanding of participatory informed consent. Faden and Beauchamp describe this understanding as defensible on the basis of an “autonomy model,” within which respect for the principle of autonomy and the value of self-determination had a higher societal value than the beneficence principle. These scholars find no evidence for any version of this understanding until the 1950s. Also, the record shows no debate about consent whatsoever among Clark and Vonderlehr et al. or any other interested parties at the time.
The history of the gradual ascendancy of an autonomy model over a beneficence model in research is marked by controversy and resistance to the full practice of informed consent from within government agencies charged with regulatory oversight, such as the Food and Drug Administration in the 1960s and the NIH itself.33 This hierarchy of values was still not firmly in place in American medicine and research even in the early 1980s, when a president’s commission affirmed “shared decision making” in health care, a term that clearly reflects Katz’s view of the order of values at stake.34 Indeed, as the concluding part of this paper will show, there are significant weaknesses in the current system to protect human subjects by prior group review and informed consent of subjects or their legal representatives.
James Jones viewed the ethos of research in the United States in the early 1930s as follows:
In medical research, as with medical practice, work was evaluated by peer review. The scientific method provided the yardstick for measuring the validity of investigations, and the assessments of fellow workers determined which researchers received kudos. Results were what counted. Many investigators whose work involved nontherapeutic research on human beings no doubt were enlightened souls who viewed their patients as people and thought in terms of “informed consent” decades before the term was coined, but there was no system of normative ethics on human experimentation during the 1930s that compelled medical researchers to temper their scientific curiosity with respect for the patients’ rights. Here, as in private practice, a formless relativism had settled over the profession, holding that one investigator’s methods of conducting an experiment were about as ethical as another’s.35
It is true that “there was no system of normative ethics on human experimentation during the 1930s.” However, there were two contemporary ethical resources that could have challenged the study. One was specific to human experimentation in Germany and the other was the common morality.
German physicians prompted substantial debate about the ethics of clinical drug trials conducted by a powerful German pharmaceutical industry. In 1930, after bitter debate about exploitation of subjects in drug trials, including prisoners, the Berlin Medical Board appealed for prior group review of research, that is, that there should be “an official regulatory body to which proposals for experiments on man should be submitted.”36 This body would have been similar to a National Human Investigation Board that Jay Katz recommended for the United States in the 1973 ad hoc panel review of the PHS study and on several other occasions.37 This idea has not been implemented except in the form of ad hoc national reviews of specific research as required by federal regulations.
The idea of national prior group review was easily defeated in Germany due to opposition from leading researchers and the drug industry. However, out of this debate came a remarkable set of guidelines on new therapies and human experimentation, released by the German minister of the interior in February 1931.38 One guideline held that it was contrary to medical ethics to take advantage of social distress and deprivation. Clearly it was relevant to the PHS syphilis study but was not applied; in the realm of medical research ethics America was isolated in these years.39 Surgeon General Cumming was doubtless unaware of these guidelines and their relevance to the Alabama project. In much the same way, the Nazi doctors’ cruel research made a mockery of the 1931 guidelines, which, in my view, were more comprehensive and insightful than the Nuremberg Code itself. But the Nuremberg Tribunal did not use them to judge the Nazi experiments. Their legal status in Germany during the 1930s and 1940s was questioned during the trial.40
The 1931 guidelines did not mention prior group review. In 1932, Clark conducted a conventional peer review of the proposed study, selecting his own reviewers from peers in syphilology.41 The earliest practice of prior group review was probably at the NIH’s Clinical Center in 1953. It was a form of partially disinterested peer review in a group deliberately designed for the purpose.42 It was not until after the revelation of the PHS syphilis study that Congress in 1974 required local prior group review by disinterested parties of all federally funded human subjects research. The NIH’s intramural program was not covered by the law until 1993, thus showing how slowly the process of reform of research ethics works in relation to federal agencies. Prior group review is still not legally mandated for privately funded studies involving human subjects. This problem will be discussed further in Part Four.43
Another source of moral challenge could have been from the common morality of the time. There one could find a prima facie moral rule that persons ought to be honest, not deceive others without justification, and be truthful with those who have a moral claim on the truth. How does one access the common morality of past times? Ronald M. Green, a Dartmouth College philosopher, adapts the strategy of the “reasonable person” rule in law to morality. Such a person should have a good grasp of the facts of the case and be well informed as to the prevailing moral standards of that time.44
In 1932, how would a reasonable person have answered this question: do these black men with syphilis, and the planters for whom they work, have a just moral claim to know that the men are being recruited for a study in which the best known treatment for syphilis (a one-year program of treatment with arsenicals and mercury) will be withheld in order to study the natural history of syphilis? One can easily imagine a challenge.
Clark and Vonderlehr et al. labored hard to be deceitful. Their effort is indirect proof of some level of awareness of violating a commonsense standard of honesty. Believing that the subjects could not understand the truth, they carried out a systematic program of planned deceit mainly to secure cooperation. When approaching potential subjects, they did not discuss a medical diagnosis of syphilis but used the local colloquialism of “bad blood.” In doing so, they exploited the economic and social distress of their subjects to facilitate their research.45 They did not disclose the experiment but deliberately disguised research activities as treatment. A dramatic example was the presentation of a required spinal tap performed in John A. Andrew Hospital at Tuskegee, the site of a former treatment center, as a “special treatment.”46 Spinal taps at the time had much greater risk of serious complications of paralysis and blinding headaches than the procedure as we know it today. Dr. Clark discussed deceit in a memo to Dr. Vonderlehr: “I agree with you that the treatment work should continue during the period of spinal fluid testing in order to minimize the amount of attention that will be given to this activity by the people of the community.”47 Clark later explained his beneficent deceit to his consultant, Dr. Joseph Earle Moore of Johns Hopkins: “These negroes [sic] are very ignorant and easily influenced by things that would be of minor significance in a more intelligent group.”48
Would a reasonable person, well-informed and motivated in 1932, have excused Clark and Vonderlehr et al. from the ordinary moral duty of honesty? One can certainly imagine a plausible challenge from this source. The argument here is simply that ordinary moral challenges could have been made in 1932. However, one cannot with historical confidence go on to judge that these ought to have been made or that Clark and Vonderlehr et al. were morally deficient by not considering the likelihood of such a challenge. The system of human experimentation then was virtually closed to the common morality and to external oversight, especially from nonscientists. It is doubtful that the trustees of the Rosenwald Fund or the Milbank Fund, which later supported the study, knew of the deception used in recruitment. The point here is that the lack of a “system of normative ethics on human experimentation” in 1932 does not imply that there were no moral norms at all from which to measure the morality of the motives and actions of Clark and Vonderlehr et al. It is implausible that they would have been held personally or professionally accountable in terms of these norms. At the time there was no specific moral context of accountability for researchers whose activities breached common morality or lagged behind advanced German thought in research ethics. Bodies of ethical guidance have to be created from actual historical experience and reflection on moral error.
The two major core values that compete and conflict in the tragic history of this case are: 1) society’s interests in research to understand disease and to alleviate or prevent human suffering and untimely death, and 2) the protection owed by society and physician-investigators to human subjects of research, whose individual welfare and rights, as well as their autonomous and informed choices to participate in research, deserve the highest respect regardless of their condition or rank in society. The argument thus far blames Clark and Vonderlehr et al. more for disloyalty to the first value than to the second, which was barely visible in American research ethics in the 1930s.
Where does the locus of moral blame, in the name of the second value, truly belong in this infamous case? An accurate transhistorical judgment must be made to transmit a reliable account of moral evolution to future generations. Using the approach demonstrated above on the informed-consent question, it is now appropriate to ask: were the leaders of the PHS study in the post–World War II period until 1966 morally culpable according to contemporary standards?
One looks in vain for moral protests on behalf of the well-being of the subjects before 1965. As described above, there were several protests from within the PHS in the name of the first value, but none for the second. An external protest came in a June 1965 letter from Dr. Irwin Schatz of the Henry Ford Hospital in Detroit. After reading a report on the study, he wrote to the primary author, Donald H. Rockwell:
I am utterly astounded by the fact that physicians allow patients with a potentially fatal disease to remain untreated when effective treatment is available. I assume you feel that the information which is extracted from observations of this untreated group is worth their sacrifice. If this is the case, then I suggest that the United States Public Health Service and those physicians associated with it need to reevaluate their moral judgments in this regard.49
Dr. Anne Yobs, coauthor of the report, received the letter and filed it with this attached comment: “This is the first letter of this type we have received. I do not plan to answer this letter.”50 Dr. Schatz’s lone voice must be heard in a context of moral silence from the thousands of readers of the thirteen published articles (1936–73) about the study.51
The next protest was Peter Buxton’s. He was more informed than Dr. Schatz about the moral features of the whole study. His protest was wide-ranging. His strategy gradually alerted officials to the risks of remaining passive in the face of moral questions about lack of treatment and racial implications. Finally in 1969, a panel of outside experts (all physicians) was assembled by the PHS to review the study. Only one, Dr. Gene Stollerman of the University of Tennessee, raised moral questions about the study and the obligation to treat the subjects maximally.52 His lone view did not prevail and the study continued until it was exposed.
Turning to the evolving ethos of medical research in the period from 1947 to 1966, there is historical evidence of slow but progressive advocacy for the second value. Some major benchmarks of this progress are the Nuremberg Code (1947), the Helsinki Code of the World Medical Association (1964), and the PHS human subjects policy itself (1966). However, the ascendancy of the second value in research practice and of socialization of researchers in loyalty to a new hierarchy of values was painfully slow; it met with deep resistance, and is still controversial in some respects today. The history of the ethics of human experimentation in the twentieth century is an ongoing struggle for a hierarchy of values placing the second above the first value. The evolution of a “system of normative ethics on human experimentation”* has in retrospect taken half a century and is still evolving.
For valid transhistorical judgments, there must be a relevant moral context at the time within which decision makers would have been morally accountable. Due to the slow and gradual change in the ethos over this period, and the resistance to moral insights from within government itself, it is wise to take a graded approach to moral blame for the worst features of the PHS study. In effect, this would mean that Senator Kennedy’s categorical judgment is far more fitting for the period of the 1960s than for the 1940s or 1950s. One can, however, find enough of a real moral context to assign some blame at this earlier time. The main criterion is that judgment must be proportionate to the degree to which loyalty to the second value had penetrated a previously closed system of human experimentation and begun to transcend the first value.
Others have described the tumultuous history of the reform of human experimentation in this period.53 The events of these years dramatically presented progressively stronger claims of the second value, a score of crises and scandals depicted in Table 1, and a government that had to overcome deep resistance among its own scientists to a higher loyalty to the second value.
*This quotation is from Jones’s description above of the ethos of the 1930s.
TABLE 1. Research Ethics Scandals
Thalidomide and FDA | 1962
Jewish hospital cancer study | 1963
Baboon-to-human heart transplant | 1964
Willowbrook hepatitis study | 1965
Beecher article | 1966
“Tea Room Trade” | 1967
Tuskegee (PHS-CDC) study | 1932–72
Fetal research | 1973
During this period the ethos of research did not lack for expressed moral obligations to benefit and heal the sick and to do the least harm possible in each endeavor. The Nuremberg Code in 1947 was a special point of entry of loyalty to the second value. Historically, we know that the moral claims of the code, especially in reference to informed consent, were more influential at the time and throughout the 1950s with researchers in the military than in the PHS or in academic medicine.54 Many American researchers self-righteously viewed the code as promulgated for Nazi physicians but not for Americans. Dr. Heller said in interviews with James Jones that he made no association between the code and the syphilis study.55 However, there were clinician-investigators at the NIH’s Clinical Center who used the Nuremberg Code to shape policy for the institution in 1953 and as they innovated the use of prior group review.56 By 1964, the World Medical Association had adopted a very detailed code for researchers that distinguished between obligations in the contexts of therapeutic and nontherapeutic research.57 The United States was a member of this body. A progressively different ethos from that in 1932 and in 1947 had begun to take shape. After 1964, there was a clear moral context of accountability, very much like a point of no return on a long journey.
Dr. John Heller led the study from 1943 to 1948. Some moral blame must be assigned to him and others for totally ignoring the implications of the Nuremberg Code for the study. Although he and many American researchers viewed the code as only for Nazi barbarians, his colleagues at the NIH were already using the code to shape practices to protect human subjects. It is implausible that a reasonable person at the time could have judged that Dr. Heller ought not to have associated the content of the code with his study.
The next section ranks the most serious moral violations involved in continued support of the study, which began with Dr. Heller and increased in moral blameworthiness along a succession of PHS officials. They failed to recognize or remedy severe moral violations that became progressively more blameworthy by the moral lights of the period.
Adding to Risks of Death and Ill Health. First, the PHS maintained a study for forty years that did great harm to the study’s subjects who had syphilis and shortened their lives. Subjects from ages 25 to 45 when the study began had a 20 percent lower life expectancy than controls of the same age.58 The subjects were also in poorer health and at higher risk of death from other conditions than controls.59
Penicillin was available and effective to treat syphilis by 1943. In fact, the PHS began giving penicillin to patients with syphilis in some clinics across the nation.60 The Alabama subjects were never informed of this development. In 1943, Dr. Heller became director of the Division of Venereal Diseases and could have given penicillin to the subjects and stopped the study. Dr. Heller and subsequent directors of the venereal disease division bear moral responsibility for deliberately shortening lifespans and inflicting remediable human suffering. They were: Dr. Theodore J. Bauer (1948–52), Dr. James K. Shafer (1953–54), Dr. Clarence A. Smith (1954–57), and Dr. William J. Brown (1957–71). All of the surgeons general of this period were also morally responsible.
In 1951, when the PHS syphilis study was being reviewed after Dr. Iskrant’s criticism, Dr. Heller acknowledged that the experiment shortened subjects’ lives but defended it with a scientific duty to extend the study. His statement reveals two breathtaking realities: 1) he viewed the lives and health of socially and economically distressed human beings as expendable for the cause of science, and 2) he was morally blind to the central issue. He claimed:
We have an investment of almost 20 years of Division interest, funds and personnel … as well as a responsibility to the survivors for their care and really to prove [to them] that their willingness to serve, even at risk of shortening life, as experimental subjects [has not been in vain]. And finally a responsibility to add what further we can to the natural history of syphilis.61
Heller elevated science as a moral cause over the welfare of human subjects. He also distorted reality by attributing motives of “service” and altruism to persons whom he knew were uninformed as to their role as subjects. When subjects died, Heller and others did not inform their survivors of the likelihood that death could be attributed to having been enrolled in this study.
As the study endured and became routinized in the bureaucracy, it became an end in itself, and the human beings it used became purely means to this end. The supporters of the study knew full well that subjects were dying sooner and were sicker than controls. The Helsinki Code states: “In the purely scientific application of clinical research carried out on a human being, it is the duty of the doctor to remain the protector of the life and health of that person on whom clinical research is carried out.”62 In the context of accountability to this code and their profession, the second generation of PHS officials must be viewed in the moral position of scientists who consciously condoned increased premature death and ill health to pursue flawed science.
The reflections of philosopher Hans Jonas in 1969 help to frame the major moral wrong done by the study. Contrary to Walsh McDermott, who argued for the moral priority of the first value, Jonas defended the dignity of the individual over the advancement of knowledge. McDermott had said in 1967:
… the hard core of our moral dilemmas will not yield to the approaches of “Declarations” (i.e., Helsinki) or “Regulations” (i.e., the FDA’s 1967 human subjects regulations); for as things stand today such statements must completely ignore the fact that society, too, has rights in human experimentation.63
In response, Jonas wrote that social progress through medical research
is an optional goal, not an unconditional commitment.… Let us also remember that a slower progress in the conquest of disease would not threaten society, grievous as it is to those who have to deplore that their particular disease be not yet conquered, but that society would indeed be threatened by the erosion of those moral values whose loss, possibly caused by too ruthless a pursuit of scientific progress, would make its most dazzling triumphs not worth having.64
The PHS syphilis study was clearly a “ruthless” pursuit of knowledge but no “dazzling” triumph of knowledge. To appreciate the contrast, one should measure the costs in human suffering and death against the fact that no joint article was ever published comparing outcomes of the Oslo study and the PHS syphilis study.
Racial Bias and Unfairness. A second moral violation was of the standard of justice or fairness, through the racial bias involved in the selection of subjects. A just research enterprise distributes the benefits and burdens of research as fairly as possible over a whole population. Even in the face of the 1960s civil rights movement, PHS authorities were unmoved by the unfairness and racial bias involved in selection of subjects. Every subject was a poor black male, which was the main point of Buxton’s protest. The study would never have been done, especially with such deception, in a social context of white persons with syphilis. By 1966 or earlier, this thought certainly ought to have moved someone in the PHS to action.
Moral Inconsistency between PHS Policy and Practice. Leaders of the PHS in the mid-1960s were guilty of a glaring and unfair moral contradiction, as well as hypocrisy. They made PHS policy to protect human subjects of research and were morally blind to violations long since done to the subjects’ health, autonomy, and dignity as persons. From 1966 to 1972, PHS leaders protected the Alabama study from Buxton’s criticism with one hand and with the other effected reforms provoked primarily by loyalty to the second value. Loyalty to this value would clearly condemn the PHS for violating the norm of informed consent and for deception.
Condoning Deception in Research Activities. Fourth, significant moral harm was done to the dignity and autonomy of the subjects by deliberately masking the real purpose of the study to facilitate recruitment. Subjects were also deliberately deceived into believing that tools of research were “treatment.” Moral debate about the justification for deception in experimentation began in earnest in the early to mid-1960s around the Milgram authority experiments65 and Humphreys’ “Tearoom Trade” study of homosexuality.66 A practice of poststudy debriefing subjects about the use of deception gradually became normative in research in this period. This development shows how far loyalty to the second value had progressed.
Philosopher Robert Nozick argued in 1968 that even when infringing on a prima facie moral obligation is justified, the infringement leaves “moral traces”67 and cannot simply be set aside.68 Debriefing of deceived subjects responds to the “moral traces” of infringing on the claims of informed consent to seek knowledge that could not have been otherwise gained. There were hosts of scientists in the PHS at the time aware of the massive degree of deception involved in the syphilis study. Moral blame for the indignities suffered due to this cause must fall on them. No review of this feature of the experiment occurred until the ad hoc panel did so in 1973.
This discussion would be incomplete without addressing two final questions. First, how could the study have endured so long without serious internal moral challenge? Second, what moral lessons from the study are vital to carry into the future?
Moral blindness and racism do not adequately explain why the PHS study endured. Other main causes had to be chronically poor communication and systematic avoidance of ethical issues within particular branches of the PHS. It is also difficult to understand how reform efforts at the NIH did not motivate more attention to the syphilis study in the PHS. These efforts from 1953 to 1977 to innovate in methods of prior group review are described elsewhere.69 As in the wider research community, the norms of the NIH culture permitted wide latitude with regard to informed consent and did not require prior group review of each research project with patients or of a single experiment involving one or a few patients.70 Nonetheless, the insulation of the syphilis study from criticism could not have occurred if communication between branches of the PHS had been better.
There may well have been a contribution to the failure of dialogue from the NIH side, as well, because the NIH was involved scientifically in the syphilis study. In the 1950s and 1960s, the NIH was a relatively new agency where streams from two research cultures and one research bureaucracy met but with apparently little creative or critical interaction. The first was an older prewar research culture marked by a few general moral norms and a large degree of ethical relativism, as noted by Jones. It was this culture that created and supported the PHS syphilis study from 1932 to the 1960s. The second was a post-Nuremberg research culture. It was marked by high commitment to the best science, to informed consent (tinctured heavily with flexibility and the therapeutic privilege), and to new forms of prior peer review of proposed research. The founders of the NIH’s intramural program were largely members of this second culture. A third stream, a research bureaucracy with written ethical requirements on human subjects of research, grew up around the NIH’s extramural grants and contracts program in the 1960s. The 1966 and 1971 PHS-NIH policies mandating local institutional review boards and prior group review applied to grantees and contractors in this program.
More historical research is needed about whether the principals in these three arenas ever discussed ethical issues among themselves. If they did so, it was without much perspective on the implications that strong commitments to post-Nuremberg research ethics within the intramural program had for the extramural program or for earlier research like the PHS syphilis study. Did the right hand (PHS-CDC) know what the left hand (NIH–extramural/intramural) was doing? If great spaces of social distance between these three arenas could be demonstrated, it would help greatly to explain how the syphilis study endured.
In conclusion, what are the main lessons of the experiment for the task of protecting human subjects of research now and in the future? The first is clear, namely, to be vigilant about social and economic vulnerability to research exploitation. Vulnerable groups that come to mind are among the stigmatized and legally vulnerable citizens or strangers in our midst, e.g., substance abusers, illegal aliens, persons with HIV-AIDS, the homeless, or the poor who lack health care of any type.
Also, private-sector research, rather than research conducted by government, may pose greater risks of exploitation because there is an unfinished task in extending the legal protections of informed consent and prior group review to all citizens equally. Current U.S. law and regulations extend only to subjects in certain federally funded or regulated projects. Universalizing the scope of legal protection to research subjects regardless of source of funding, as has now been done by the 21 member countries of the Council of Europe, is a moral imperative for the U.S. Congress.71 A large and unknown number of human subjects are at risk in research projects funded through the private sector. Once the world’s leader in initiatives to protect human subjects, the United States has fallen behind.
A second lesson is that reliance on one main ethical resource, the professional ethics of individual investigators (the major ethical resource of previous eras), did not prevent the PHS study from occurring. The relevance of the lesson is that overreliance on the major resource of prior group review will not be adequate to prevent research projects that, on reflection, should never have been done. We need a plurality of resources to protect human subjects: a strong body of ethical guidance for researchers, effective institutional review boards (IRBs), and enlightened federal and state policies about human experimentation.
Today IRBs have authority to approve, alter, or deny proposed human-subject research projects. However, all is far from well with IRBs. According to a recent investigation of the Office of Inspector General (OIG), “the effectiveness of IRBs is now in jeopardy.”72 The report cites these escalating pressures on IRBs: expansion of managed care leading to pressure to accommodate research sponsors for income, increased commercialization of research, proliferation of multicenter trials, new types of research such as genetic testing, an increased number of proposals, and increased consumer demand for access to research.
According to the report, the main problems of today’s IRBs are: 1) IRBs review too much, too quickly, with too little expertise, 2) IRBs conduct minimal continuing review of approved research, 3) IRBs face conflicts that threaten their independence (e.g., locating IRBs in offices of grants and contracts that bring in research dollars), 4) institutions provide little training for investigators and board members, 5) institutions make little effort to evaluate IRB effectiveness, and 6) there are an alarming number of violations of informed consent and unethical advertisements for subjects. The report warns about the potential for self-serving motives in the emergence of for-profit independent IRBs that contract with pharmaceutical firms and hospitals to review research. Vigilance is especially required on this front. The OIG report makes several important recommendations for reform, which include relaxation of requirements of time-consuming routine review to enable more time to focus on projects with significant risks, federal requirements for education and training of investigators and IRB members, mandatory registration of all IRBs with the government, and insulation of IRBs from conflicts of interest.
A third lesson from the study is about the human potential for moral and institutional blindness. Moral sensitivity can be overwhelmed by excessive loyalty to the welfare of an institution and one’s role within it. Strong and uncritical loyalty to an institution and role impairs independence of observation, judgment, and action, especially the independence needed to prevent or moderate conflicts of loyalty and conflicts of interest. The syphilis study became an institution unto itself, and loyalty to it desensitized PHS officials and scientists to their conflicts of loyalty and conflicts of interest. They were appointed by society to protect the public health, yet they were officially charged with supporting a study that did great harm to subjects and to the public interest.
Some professions are better prepared and trained than others to detect and prevent conflicts of loyalty and conflicts of interest. Physicians and biomedical researchers do not receive the same degree of education and training about such issues as attorneys and behavioral scientists. For this reason, “because physicians are not trained to look for conflicts of interest, they often find themselves enmeshed in them without recognizing the problem.”73 The challenge for moral education about research ethics lies in a critical view of the contemporary consumer- and marketplace-driven research environment. Large research organizations and enterprises, rather than individually funded researchers, have the momentum and resources in today’s environment. If the moral focus ought to be on large organizations, conflicts of loyalty, and conflicts of interest, then there will be a place for the PHS syphilis study in moral education. Discernment can be aided by accurate judgments of how and why the Public Health Service of the United States once abandoned its moral compass in the name of science.
1. Jean Heller, “Syphilis Victims in U.S. Study Went Untreated for Years,” New York Times, 26 July 1972, A1, A8.
2. Surgeon General, PHS, DHEW, “Investigations Involving Human Subjects, Including Clinical Research: Requirements for Review to Insure the Rights and Welfare of Individuals,” PPO 129, Revised Policy, 1 July 1966.
3. Robert Levine, Ethics and Regulation of Clinical Research, 2d ed. (New Haven: Yale University Press, 1986), p. 323.
4. James H. Jones, Bad Blood, 2d ed. (New York: Free Press, 1993), p. 190.
5. Ibid., pp. 124, 149.
6. E. Bruusgaard, “Über das Schicksal der nicht spezifisch behandelten Luetiker” (The Fate of Syphilitics Who Are Not Given Specific Treatment), Archiv für Dermatologie und Syphilis 157 (1929): 309–22.
7. After an interview with Dr. Heller, James Jones wrote: “Had Dr. Heller wished to end the experiment by giving the men penicillin, he could have done so. Yet penicillin presented no more of an ethical issue to Dr. Heller than earlier treatment had. When asked to comment years later (1976), he could not recall a single discussion about giving the subjects penicillin. It was withheld for the same reason that other drugs had been held back since the beginning of the experiment: Treatment would have ended the Tuskegee study. Dr. Heller asserted: ‘The longer the study, the better the ultimate information we would derive.’ ” Jones concludes: “The men’s status did not warrant ethical debate. They were subjects, not patients; clinical material, not sick people.” See Jones, Bad Blood, p. 179.
8. With others, including Dr. Gamble, I was a speaker at a 23 February 1994 symposium on the study and its legacy held at the University of Virginia. The audience saw a film, Bad Blood, produced in 1992, in which Dr. Sidney Olansky, a high-ranking PHS officer involved in the later stages of the study, was interviewed. He completely denied any moral wrongdoing, defending the study in terms of medical beneficence. He showed no moral regret or reflection on the criticism that had been leveled at the study. Dismayed and appalled by the denial and self-righteousness in his face, I set aside my prepared text and gave an impromptu talk about the basic need to face moral errors and why an official apology for the study was appropriate. For more than a year, I campaigned, with the help of others, to persuade PHS officials to apologize. Their decision was that an apology alone, without any other positive plan of action, would be “gratuitous.” The committee’s task was to flesh out a fuller response to the study’s legacy, which included funding from the Department of Health and Human Services for education of research trainees in ethics, assistance with participation of minorities in research activities, and a grant to Tuskegee University to preserve important papers, photographs, and artifacts from the study.
9. See Jones, Bad Blood, especially chapter 14, for his excellent discussion of this point.
10. I declined to attend the ceremony to protest its being held at the White House rather than at Tuskegee, and I still feel strongly that President Clinton was ill-advised in his decision not to travel to Tuskegee. Nonetheless, for the first time, the government did accept its moral responsibility in the president’s apology. A great deal of credit for encouraging this event, and for the collateral program to fund education in bioethics for researchers and to assist minority participation in research, should go to Dr. David Satcher, director of the Centers for Disease Control, and to Dr. Donna Shalala, secretary of the Department of Health and Human Services.
11. I wrote this paper concurrently with the Advisory Committee’s completion of its official report. The report has a systematic ethical framework for making transhistorical moral judgments; see Advisory Committee on Human Radiation Experiments, Final Report (Washington, D.C.: U.S. Government Printing Office, 1995), pp. 196–223.
I believe the discussion that follows is compatible with the committee’s view of the task. For an extensive discussion of the committee’s framework and implementation of it, see Tom L. Beauchamp, “Looking Back and Judging Our Predecessors,” Kennedy Institute of Ethics Journal 6 (1996): 251–70.
12. Tuskegee Syphilis Study Ad Hoc Advisory Panel, “Final Report,” in Ethics in Medicine, ed. Stanley J. Reiser, Arthur J. Dyck, and William J. Curran (Cambridge: MIT Press, 1977), pp. 316–21.
13. Jones, Bad Blood, p. 214.
14. Ibid., p. 211.
15. Albert R. Jonsen, The Birth of Bioethics (New York: Oxford University Press, 1998), pp. 127–28.
16. Henry K. Beecher, “Ethics and Clinical Research,” New England Journal of Medicine 274 (1966): 1354–60.
17. Jones, Bad Blood, p. 100.
18. Ibid., p. 99.
19. Ibid., p. 119.
20. Ibid., p. 207.
21. Ibid., p. 99.
22. Ibid., p. 212.
23. Ibid., p. 131.
24. Ibid., p. 173.
25. Ibid., p. 181.
26. Ibid.
27. Ruth R. Faden and Tom L. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986), pp. 56–60.
28. Faden and Beauchamp, Informed Consent, p. 57.
29. Martin S. Pernick, “The Patient’s Role in Medical Decisionmaking: A Social History of Informed Consent in Medical Therapy,” in the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Making Health Care Decisions (Washington, D.C.: GPO, 1982), 3:3.
30. Jay Katz, The Silent World of Doctor and Patient (New York: Free Press, 1984), pp. 3–4.
31. Jones, Bad Blood, p. 95.
32. Ibid., p. 93.
33. For an excellent discussion of the debate, during passage of the Drug Amendments Act of 1962, over informed consent and the exception in the law (physicians were not required to obtain the consent of subjects if they deemed it “not feasible or, in their professional judgment, contrary to the best interests of such human beings”), see William J. Curran, “Governmental Regulation of the Use of Human Subjects in Medical Research: The Approach of Two Federal Agencies,” Daedalus 98 (1969): 542–94.
34. President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Making Health Care Decisions, vol. 1 (Washington, D.C.: GPO, 1983).
35. Jones, Bad Blood, pp. 98–99.
36. Norman Howard-Jones, “Human Experimentation in Historical and Ethical Perspective,” Social Science & Medicine 16 (1982): 1436. See also Paul M. McNeill, The Ethics and Politics of Human Experimentation (New York: Cambridge University Press, 1993), p. 42.
37. Jay Katz, “Human Experimentation and Human Rights,” St. Louis University Law Journal 38 (1993): 7–54.
38. Paul M. McNeill, The Ethics and Politics of Human Experimentation (New York: Cambridge University Press, 1993), pp. 40–41; see also H. M. Sass, “Reichsrundschreiben 1931: Pre-Nuremberg German Regulations Concerning New Therapy and Human Experimentation,” Journal of Medicine and Philosophy 8 (1983): 99–111.
39. Jay Katz might have cited this guideline in support of his point about exploitation in his minority report. It would have been a fine rhetorical point, i.e., that the danger of exploitation of distressed human subjects in order to carry out research was a living moral idea in Germany at the very time the PHS study was being conceived.
40. See George J. Annas and Michael Grodin, eds., The Nazi Doctors and the Nuremberg Code (New York: Oxford University Press, 1992). There is some dispute about whether the guidelines were legally binding in Germany before and during the Second World War. McNeill argues that they were “legally binding” (Katz, “Human Experimentation,” p. 41). He followed Sass’s claim (see note 38 above) that they were legally binding throughout the Nazi period. At best, there seems to be confusion about the legal status of the guidelines.
41. Jones, Bad Blood, p. 103.
42. The first meeting of the Medical Board of the Clinical Center was on 16 January 1952. No mention of a Clinical Research Committee (CRC) appeared in subsequent minutes until 3 March 1953.
A document describing this committee and the NIH policy on the ethics of clinical research, “Group Consideration of Clinical Research Procedures Deviating from Accepted Medical Practice or Involving Unusual Hazard,” was issued in 1954. The Medical Board established a CRC in the same year. The document begins by noting that primary responsibility for the “formulation and conduct of clinical research and medical care is on the principal investigators designated by each Institute Director, in conformity with standards and principles of legal, ethical, and administrative propriety established by the Director, NIH.” The document then describes a two-level practice of “group consideration,” one by committees of each institute, and a second by the CRC. It states that the role of the CRC is to “serve as an expert body to advise on problems concerning clinical research involving unestablished or potentially hazardous procedures referred to it by the Director, NIH, institute or clinical directors, or the Director of the Clinical Center.” The practice of using this committee is later described in Stuart M. Sessoms, “Guiding Principles in Medical Research Involving Humans,” Hospitals, 1 Jan. 1958, p. 44. Another early document (12 March 1954), “Use of Human Volunteers in Medical Research,” was produced by the surgeon general’s office. The document noted that prior group review was “non-mandatory.”
43. The National Institutes of Health Revitalization Act of 1993, Public Law 103–43, 10 June 1993, Section 492A.
44. Ronald M. Green, personal communication, 21 June 1994.
45. Jay Katz made this point eloquently in his minority report, which is appended to the 1973 panel’s final report, p. 320, and in this volume.
46. Jones, Bad Blood, especially chapter 8.
47. Ibid., p. 123.
48. Ibid.
49. Ibid., p. 190.
50. Ibid.
51. These articles are listed by Jones in ibid., pp. 281–82.
52. Ibid., p. 195.
53. David Rothman, Strangers at the Bedside (New York: Free Press, 1991); Albert Jonsen, The Birth of Bioethics (New York: Oxford University Press, 1998), pp. 125–65.
54. Jonathan D. Moreno, “Reassessing the Influence of the Nuremberg Code on American Medical Ethics,” The Journal of Contemporary Health Law and Policy 13 (1997): 347–60.
55. Jones, Bad Blood, p. 180.
56. Sessoms, “Guiding Principles in Medical Research.”
57. The Helsinki Code, amended as of 1975, is reprinted in Levine, Ethics and Regulation, pp. 427–29.
58. J. R. Heller and P. T. Bruyere, “Untreated Syphilis in the Male Negro: II. Mortality during Twelve Years of Observation,” Journal of Venereal Disease Information 27 (1946): 39.
59. J. K. Shafer et al., “Untreated Syphilis in the Male Negro: A Prospective Study of the Effect on Life Expectancy,” Public Health Reports 69 (1954): 88.
60. Jones, Bad Blood, p. 178.
61. Ibid., p. 182.
62. Levine, Ethics and Regulation, p. 429.
63. Walsh McDermott, “Opening Comments—The Changing Mores of Biomedical Research: A Colloquium on Ethical Dilemmas from Medical Advances,” Annals of Internal Medicine 67, Supp. 7, No. 3, Part II (1967): 39–42. Quote is from p. 42.
64. Hans Jonas, “Philosophical Reflections on Human Experimentation,” Daedalus 98 (Spring 1969): 245.
65. Stanley Milgram, “Issues in the Study of Obedience: A Reply to Baumrind,” American Psychologist 19 (1964): 848–52.
66. Laud Humphreys, Tearoom Trade: Impersonal Sex in Public Places (Chicago: Aldine Publishing Co., 1970).
67. Robert Nozick, “Moral Complications and Moral Structures,” Natural Law Forum 13 (1968): 1–50.
68. For a discussion of the “moral traces” concept in biomedical ethics, see Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 4th ed. (New York: Oxford University Press, 1995), p. 105.
69. J. C. Fletcher and F. G. Miller, “The Promise and Perils of Public Bioethics,” in The Ethics of Research Involving Human Subjects: Facing the Twenty-First Century, ed. H. Y. Vanderpool (Frederick, Md.: University Publishing Group, 1996), pp. 155–84.
70. Advisory Committee on Human Radiation Experiments, “Research Ethics and the Medical Profession,” Journal of the American Medical Association 276 (1996): 403–9.
71. F. William Dommel and Duane Alexander, “The Convention on Human Rights and Biomedicine of the Council of Europe,” Kennedy Institute of Ethics Journal 7 (1997): 259–76.
72. Department of Health and Human Services, Office of Inspector General, Institutional Review Boards: A Time for Reform, June 1998. OEI-01–97–00193.
73. Roy G. Spece, David S. Shimm, and Allen E. Buchanan, Preface to Conflicts of Interest in Clinical Practice and Research (New York: Oxford University Press, 1996).