“I wonder if you and I could not write a very useful paper together, with some such title as THE RIGHT TO DIE … Our efforts can be a truly pioneering endeavor.”
—HENRY BEECHER TO WILLIAM CURRAN, June 1967
“It was a strange business … how do we know these people are dead?” remarked Robert Young, a neurologist who arrived at Massachusetts General Hospital (MGH) in the 1960s. He was recalling encounters with severely comatose and respirator-dependent patients who appeared on hospital floors with accelerating frequency beginning in the mid to late 1950s. At MGH, Young had a neurology fellowship with Robert Schwab. Among other accomplishments in neurology, Schwab developed what he referred to as his “triad,” a set of clinical findings that formed the basis of the criteria found in the Report of the Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death, which appeared in the Journal of the American Medical Association on August 5, 1968.1 During much of the time the Committee deliberated, Schwab was at home convalescing from a heart attack, so Young represented him at Committee meetings.
Young later recalled an odd discomfort when he approached a unique kind of comatose patient. There was something familiar in these bodies yet something foreign as well, a blurring of the signs that had reliably set the line between death and life. Eventually, that line would be restructured altogether. “And the fact is we do live in a time of transition,” German theologian and ethicist Helmut Thielicke noted at a prominent early gathering of the nascent bioethics field, the 1969 Houston Conference on Ethics in Medicine and Technology. Thielicke continued, “Advances in man’s technical and scientific capacity have outstripped, as it were, the development of man himself … What man ‘can do’ is out of step with what man ‘is’… At what point does help cease to be help and begin to cancel itself out? Can it still be called ‘help’ when all that remains of the patient is a physical or mental torso?”2
These, of course, weren’t the first or last patients to provoke hesitation and strange uncertainty over the gap between common, lived experience and the experience of being a medical subject or biological object. Long before the advent of respirators and the idea of brain death, determining the moment of death had long been a source of apprehension and changing social ritual. As historian Martin Pernick notes, “There never was a Golden Age of Hearts and Lungs when defining death was unambiguous and certain.”3 However, the ability to choose to maintain the familiar, though malleable, signs of a moving heart and expanding lungs, and to keep the body “working” with respirators and other medical interventions, brought a new and far-reaching ambiguity to the end of life. Referring to sixteenth-century and seventeenth-century Parisian funerary practices, Vanessa Harding notes the enduring importance of the question: “For how long does the dead human body retain the meanings and values it held in life, once it no longer has an incumbent but is perceived by outsiders only?”4 The appearance of severe coma challenged previous medical consensus and cultural practice as to where to draw this line between incumbency and objectivity; between objects and persons. It further highlighted the constant—and unavoidable—redrawing of that line through medical encounters.
The strangeness of the mechanically active, moving, pink, comatose body moved some in medicine to confidently declare this body irreversibly comatose and, going further, to define that condition as death itself. “Our primary purpose is to define irreversible coma as a new criterion for death,” declared the opening lines of Beecher’s Report.5 But even in this opening sentence, ambiguity undermines the certainty of Beecher’s pronouncement.
A particular kind of coma—a term describing an illness or a treated impairment of the living body—became, at the same time, the sign of the absence of a living body and the end of treatment. Death could be named in some aspects before it existed in others. It could be deliberated over, decided. The appearance of a new world of “death before dying” was a highly significant development in the history of medicine. This new way of approaching death and the body—which began in specialized corners of medical practice and continues to migrate through and reinforce other developments in (at least much of Western) society at large—is central to the story of brain death and to the alternating resistance and accommodation to living in a biomedical culture.
The thirteen Ad Hoc Committee members were distinguished faculty of Harvard University; all but three—theologian Ralph Potter, attorney and legal medicine expert William J. Curran, and historian of science Everett Mendelsohn—were physicians at Harvard Medical School. The name of Henry K. Beecher, then the Henry Isaiah Dorr Professor of Research and Teaching in Anaesthetics and Anaesthesia at Harvard Medical School and Chairman of the Department of Anesthesiology at MGH, appeared with the title “chairman” in small type at the bottom of the first of four published pages. Following his name were listed those of the remaining twelve members, in alphabetical order.
The Report was the result of collective deliberations, significant edits, and the resolution of a few pointed disagreements. However, the Report was clearly Beecher’s, and was set in motion by a presentation Beecher made as Chairman of the Harvard Medical School Standing Committee on Human Studies. “With your permission,” Beecher wrote to then Medical School Dean Robert H. Ebert in a letter dated September 6, 1967, “I should like to call a meeting of the Standing Committee of Human Studies.” He continued:
At this meeting I should like to present, roughly, a 25 or 30 minute discussion of ETHICAL PROBLEMS CREATED BY THE HOPELESSLY UNCONSCIOUS PATIENT. As I am sure you are aware, the developments in resuscitative and supportive therapy have led to many desperate attempts to save the dying patient. Sometimes all that is needed is a decerebrated individual. These individuals are increasing in numbers all over the land and there are a number of problems which should be faced up to.6
Beecher wrote to future Ad Hoc Committee members such as his long-time friend Curran, a professor of health law at Harvard, and neuroscientist Jordi Folch-Pi, to ensure they would attend this presentation.7 Curran had received letters from Beecher just the previous June suggesting that they write a paper entitled “THE RIGHT TO DIE” in order to address the problem of needless care for persons whose “brain has ceased to function.”8 When Beecher wrote Curran, he specifically linked the idea of a right to die—an idea vaguely and variably present in leading medical and law journals—with the idea that the cessation of brain function was the physiologic equivalent of death itself:
In this paper I think we will have to face up to what death really is. The ancient idea that when the respiration stops and the heart stops, death ensues is perfectly true, but is it not also death when the brain has ceased to function, as indicated by the absence of electrical activity? All major hospitals are confronted with situations where at great cost, and really at the expense of salvageable individuals, occasionally decerebrated subjects use up the money probably better spent elsewhere.9
That presentation was quickly followed by an October 20, 1967 letter to transplant surgeon Joseph Murray of the Peter Bent Brigham Hospital (later renamed Brigham and Women’s Hospital in Boston), who had also been in attendance. Beecher underscored “how strongly I agree with you that it would be most desirable for a group at Harvard University to come to some subtle conclusion as to a new definition of death.”10
Beecher used his October presentation, and in particular Murray’s support and interest, to make a case to Ebert for a specific committee to consider “the idea of brain death.”11 Ebert agreed.12 Beecher picked most of the members himself. He tapped Curran and also MGH colleagues Raymond Adams, who was Chairman of Neurology, and neurologist Robert Schwab. Jordi Folch-Pi, whom Beecher had sought out to attend his Human Studies presentation, was included as well, as were famed MGH neurosurgeon William Sweet and Murray, who received the Nobel Prize for performing the first human kidney transplantation. Beecher sought several non-physicians, listing in his own notes historian of science Everett Mendelsohn and sociologist David Riesman, although the latter did not eventually serve on the Committee. Beecher very much wanted to include a theologian. He thought his friend Joseph Fletcher would be a strong asset, but Fletcher was too publicly (and, therefore, controversially) associated with advocacy for a right to euthanasia. Beecher eventually settled on Ralph Potter of the Harvard Divinity School. Ebert suggested rounding out the representation by Harvard hospitals, and psychiatrist Dana L. Farnsworth, physiologist Clifford Barger, and neurologist Derek Denny-Brown were added.
The whole group—Adams, Barger, Curran, Denny-Brown, Farnsworth, Folch-Pi, Mendelsohn, Merrill, Murray, Potter, Schwab, and Sweet—was addressed by Ebert in a January 4, 1968 letter appointing them as members:
At a recent meeting of the Standing Committee on Human Studies, Dr. Henry K. Beecher reviewed some basic material on the ethical problems created by the hopelessly unconscious man. Dr. Beecher’s presentation re-emphasized to me the necessity of giving further consideration to the definition of brain death.13
Notably, Ebert continued this letter, “As you are well aware, many of the ethical problems associated with transplantation and other developing areas of medicine hinge on appropriate definition. With its pioneering interest in organ transplantation, I believe the faculty of Harvard Medical School is better equipped to elucidate this area than any other single group. To this end I ask you to accept appointment to an ad hoc committee.” Throughout the committee’s work from March into June of 1968, several members wrote to Beecher stating strongly that they did not think they were solving a transplantation problem but a very different problem.14 Ebert himself would later back off from the transplantation reference. When the Report of the Ad Hoc Committee was presented to him for the final revision before publication, Ebert requested only one change, indicating that perhaps the text “suggests that you wish to redefine death in order to make viable organs more readily available … Would it not be better to state the problem, and indicate that obsolete criteria for the definition of death can lead to controversy in obtaining organs for transplantation?”15
How do we understand this exchange? Was Ebert—and was the Report itself—expressing a wish to expedite, or instead to police transplantation? How central to the Committee’s purposes was either task? The issue of transplantation surfaces early on in the story of the Committee and dominates historical characterizations of it later. Soon after Beecher made his presentation to his colleagues and his pitch to Ebert, the first human heart transplant was performed by Christiaan Barnard on December 3, 1967, in Cape Town, South Africa. The medical and public press captured a flurry of similar attempts that followed worldwide, but media coverage soon shifted to convey a subsequent wave of failures, opening questions as to the wisdom as well as the eerie transgressions of the procedure. Were heart donors dead if their hearts continued to live? What boundaries distinguished one person’s life from another? What process would legitimize the procedure of donating organs?
The kind of coma that Beecher and his colleagues would count as death attracted increasing interest. As the Committee met, newspapers reported stories about transplant surgeons accused of causing, rather than intervening after, the deaths of comatose patients in order to obtain organs. Clearly, confusion on this score could prove a fatal obstacle to the new and still fragile field of transplantation—confusion that could presumably be resolved by a revised definition of death. Curran wrote to Beecher about three such cases that appeared in Houston hospitals and that raised the question of whether a patient was dead before—or instead killed by—the removal of treatment and/or organs for the purposes of transplantation. Curran worried that surgeons had acted too hastily, demonstrating the possibility of “illegal and unethical conduct” and of patients being “prematurely declared dead.” His remark to Beecher in this letter that “the issues we are talking about are not academic” thus reflected, where transplant was concerned, an interest in policing its use. Other evidence will underscore this point, including the ways that transplant figured into the Committee members’ respective life’s work and input into the Report itself.16
The Committee’s purposes with respect to transplant are important because of the prominent, consequential, but largely inaccurate uses made of these purposes by others, and because this inaccuracy has closed off needed curiosity about how brain death did make sense as a solution to those who first formally defined it. Early critics of the Committee doubted that medicine could follow Beecher in defining the death of a patient while also reliably safeguarding that patient’s “RIGHT[s].” Their accusations of stealth attempts to expand the scale of transplantation procedures underscored the case for changing who determined medical ethics. This sort of characterization persisted in most historical descriptions of the Committee. Historian Tina Stevens’ comments typify this vein of criticism:
More than a medical response to a technologically-induced moral problem, “brain death” was an artifice of legal self-protection … against the possibility that the public would perceive a potential conflict of interest and become alarmed—a conflict between the profession’s responsibility to care for the sick and dying and the demands of medical research to procure organs for transplant.17
Historical characterizations have generally taken this approach, tending to emphasize the Committee’s efforts to expand transplantation practices and the associated inability of its members to grapple with the necessary philosophical and conceptual issues brain death posed. Critics further attribute these perceived tendencies to a lack of ethics expertise, arguing that Committee members were blinded by their immediate and “purely medical” purposes. Prominent voices and scholarship in bioethics have repeatedly toed this line. Portrayals of the Committee as transplantation-focused have carried the weight of a larger agenda, justifying the establishment of a distinct ethical expertise over medicine by bioethics. One influential example, Albert Jonsen’s The Birth of Bioethics, found that the Report inadequately faced important philosophical questions about death in its overreaching and merely medical account. Jonsen contended that the argument for brain death was shallow in its science and ethics; cited no scientific data; and was a tool for grabbing organs for transplant.18 In Rethinking Life and Death, Peter Singer suggested that the Committee was “not being entirely candid” by withholding the central role of transplantation in their deliberations.19 Other prominent scholars of the history of brain death have argued that the Committee in fact hid its primary interest in transplants.20 Few, if any, of these histories, however, engage the work of the Committee in detail.21
For the key authors of the Report, their preoccupation was not with transplantation but with issues such as experimentation, truth-telling, and informed consent; the clinical signs of consciousness and coma; and the definitions and consequences of “hopeless” as well as “extraordinary” care. These are the topics that fill the following chapters of this book.
There were essentially three authors of the Report—Henry Beecher, Robert Schwab, and William Curran—with additional important input from Raymond Adams. The first meeting of the Committee, referred to in surviving minutes and drafts, was held on March 14, 1968. As the key players began to compose the Report, paper drafts and suggested edits circulated through Beecher’s hands. The first full draft that appears in Beecher’s files is dated April 11 of that year. Subsequent versions reviewed here are dated June 3, June 7, and June 13, with a final version dated June 25. The meeting schedule and correspondence in Beecher’s papers show that most of the writing of the Report was done in bursts at the end of April, and then again from the end of May into early June. The serial versions, along with the intervening correspondence and edits, indicate that the writing was primarily the work of Beecher, Schwab, and Curran, with some small refinements to the criteria themselves by Adams. Only weeks after completion, the paper appeared in print.
The April 11 draft outlines both the themes and the division of authorial labor that generally persisted throughout the Committee’s work. Described as “a very preliminary draft of a report,” the accompanying cover memo reported that “the legal section was, of course, written by Bill Curran, and the criteria of irreversible coma by Bob Schwab.”22 It is possible that the remaining documentary evidence leaves out other conversations and arguments that shaped the text. However, interviews with, and recollections of, surviving Committee members confirm what the archival materials suggest: that Beecher essentially orchestrated the Report, pasting together written edits and contributions primarily from himself, Schwab, and Curran.
Beecher began working at MGH in the mid-1930s. At MGH, mentors encouraged him to change his plans to be a surgeon and instead to establish an academic program and laboratory science foundation for anesthesiology. When he assumed the Dorr Chair in 1941, the first endowed chair in anesthesiology in the world, he became an international leader in this field, helping to transform a then peripheral and poorly established medical specialty.23 His research focused on the physiology and effectiveness of anesthetic agents and on the efficacy of medication for pain.
By the time the Report appeared, he had also published widely quoted work on the ethics of human experimentation, work that strongly shaped his thinking about irreversible coma. It is no accident that the Ad Hoc Committee to Examine Brain Death was, literally, an ad hoc committee of the Standing Committee on Human Studies at Harvard, which Beecher also chaired. Beecher wrote parts of his contributions for the Report long before the Ad Hoc Committee existed. He lectured and wrote about the problem of comatose patients, raising questions about the limits of medical intrusiveness and the equitable and efficacious use of medical resources. Specific sentences and themes from these writings appeared in the Report. The Report was his last significant project prior to retirement from MGH in 1969.24
Curran was a dominant figure in the development of the field of legal medicine through the latter twentieth century. He began teaching a series of interdisciplinary seminars on the interactions of law and medicine in the 1950s at Harvard Law School and the Harvard School of Public Health, and then went on to a position directing the Law-Medicine Research Institute at Boston University. He eventually returned to Harvard as a professor in the School of Public Health. After coming across references to Beecher’s work, Curran began their relationship in 1958 with a letter to Beecher noting mutual interests.25 The two engaged in a personal, and frequent, correspondence (Figures 1.1 and 1.2).
On the same day Beecher wrote Curran on the topic of “THE RIGHT TO DIE,” a thirty-four-year-old alcoholic merchant marine, referred to as DM, collapsed and arrived unresponsive at MGH with bleeding on both sides of his brain. DM was unresponsive to pain and had no reflexes or spontaneous movement except for swallowing reflexes. He could breathe spontaneously but insufficiently on his own and required a respirator. Two electroencephalograms (EEGs or “brain wave” tests) obtained twenty-four hours apart were described by Schwab as “flat … with no distinguishable electrico-cortical activity.” Based on his neurological status, DM’s physicians informed his only adult family, his divorced wife, of the “grave prognosis,” and an order was written to “D/C IV. No medications.” Soon thereafter, DM died.26
Figure 1.1 Henry K. Beecher (1904–1976).
SOURCE: Courtesy of Edward Lowenstein.
Figure 1.2 William J. Curran (1925–1996).
SOURCE: Courtesy of the President and Fellows of Harvard College.
DM was typical of the patients Schwab and his EEG laboratory staff encountered in consultation over the prior decade. EEG and physical examination findings were used to assess the prognosis of severe brain injuries and to conclude whether removal of treatment seemed reasonable, if not compelling. DM did not fully meet Schwab’s and the Committee’s eventual criteria for brain death due to the preservation of a reflex in which the back of the throat, as the gateway to the lungs, tightens or “gags” so as to prevent objects, though not air, from passing. He was also not apneic; that is, unable to mechanically initiate and sustain breaths on his own. But while he could generate respirations, these were insufficient to support life on their own. He was not “dead” but, without mechanical support, stood at death’s threshold. It was through experience with hundreds of patients like DM that Schwab identified what seemed, to him and to the colleagues with whom he consulted, to mark the line between legitimate medical action and violation of a corpse.
Schwab closely matched Beecher in the timing of his arrival at and departure from MGH (Figure 1.3).27 Accomplished in many areas—he established new treatments in myasthenia gravis and Parkinson’s disease and was among the first to use Dilantin for epilepsy—Schwab was an early user of EEG. In an obituary written by Young, he was described as having “founded the first clinical laboratory for the routine recording of electroencephalograms … in any part of the world, as far as is known.”28 Beginning at least in the 1940s, Schwab showed an interest in the role of the EEG in determining human death. In a series of studies conducted throughout the 1960s he refined and defended what he called his “triad” of criteria—no movement or breathing, areflexia, and isoelectric, or “flat,” EEG tracing—in order to clarify which symptoms or findings did and did not matter in establishing reliable outcome predictions for patients such as DM. This triad became the core of the Harvard criteria and of how brain death was then determined through much of the world.
Beecher was familiar with Schwab’s work and advocated the use of his criteria before the Committee existed. Schwab had significant concerns about the implications of a more technologically intensive medicine for dignified death and dying, coauthoring, with Sidney Rosoff, a paper on his criteria that was cited in early drafts of the Report. Rosoff was a prominent leader in the US euthanasia movement in the 1950s and 1960s, serving on the boards and as president of both the Euthanasia Society of America and the Hemlock Society.29
These were the primary authors of the Report. Their sections and themes organize the next chapters of this book: justification, law, and criteria. These sections each reflect specific bodies of literature, histories, and cultures of practice. Revisiting these sources underscores how these authors were primarily motivated to address centuries-old questions about how to define the limits of medicine and medical necessity and how to use medical knowledge. Norms around practices like truth-telling and human experimentation were often drawn upon to provide possible answers to these questions. The strange coma, uncertainty, and manipulative power of conditions of death before dying made these questions more challenging and compelling. Especially for Beecher, brain death was a response to that challenge.
Figure 1.3 Robert Schwab (1903–1972).
SOURCE: Courtesy of Massachusetts General Hospital Archives and Special Collections.
The appearance of positive-pressure adjunct lungs in the 1950s signaled a significant departure from decades of experience with the so-called “iron lung” used for polio patients since the 1930s. The iron lung was a tank, a cylinder that fully surrounded a person from the neck down and alternately evacuated and then reintroduced air. Reducing the air pressure within the tank relative to the outside, where the person’s head remained, established a negative pressure gradient between the air accessible to the mouth and nose and that surrounding the chest within the iron lung. This alternating pressure difference allowed the chest walls, and thus the lungs, to expand more easily, enabling the person to inhale. Intermittent positive pressure breathing machines (IPPBs) replaced this technology in the 1950s. IPPBs instead forced air under “positive pressure” directly into the lungs in regular bursts via a tube inserted into the trachea. Generally, these machines also allowed patients who could breathe even weakly on their own either to do so, or to trigger the assistance of positive pressure as they initiated a breath. Design fundamentals have not changed significantly since.
The iron-lung substitute generally supported the breathing of someone fully conscious, for a usually limited period, during which time the poliovirus interfered with brainstem control over breathing or directly paralyzed muscles used for breathing.30 If the patient survived this tenuous period, these viral effects on breathing soon remitted as the illness passed through its acute course. Like most hospitals, MGH relied on these devices as late as a polio epidemic in 1955.31 The new IPPB respirators, however, could sustain more severe, less predictable, and longer-term impairment to breathing and began to be used beyond their initial role in the management of post-surgical, especially post-thoracic surgical, patients at MGH a few years later. Attention to IPPBs was apparently spurred by the crude use of positive pressure systems (manually squeezing a bag to force air into a patient’s lungs through an inserted tube) at Danish hospitals during a polio epidemic there in 1952. But as indications for the use of IPPBs expanded to include a range of serious illness, so did the debilitating conditions with which patients were left after treatment. Hospital care now kept sicker people alive and left them in a more impaired state. Different costs and benefits began to line up against each other. Jack Emerson, who developed a widely used version of the iron lung in the 1930s, recalled how physicians sometimes expressed concern that his new technology interfered with the natural course of illness and extended suffering. But such concerns were not widely shared and did not play a significant role in the use of this technology.32 The conditions of death before dying were only dimly, though at times presciently, perceived in the iron age:
If the paralysis … should be permanent, we could not hope to accomplish anything. We would be forced to use the respirator indefinitely or until someone should turn executioner by stopping the machine, or until an intercurrent infection solved the problem for us.33
Eventually, however, concern about the limits and consequences of respirators did grow alongside their expanded use. The new ambiguity of the body suspended and sustained by the respirator, and the expanded possibilities of intensive care medicine, prompted questions about the appropriateness of care and about physician identity as an intervener and risk manager.
This identity has an ongoing history. The consolidation of orthodox, or allopathic, medicine practiced by licensed “mainstream” physicians came at the end of a busy nineteenth-century flurry of debates among a range of alternative, unlicensed, and variably educated groups of practitioners with competing schools of thought. This outcome is often attributed, especially by physicians, to key discoveries and related therapeutic successes around that time, such as identification of germs as the cause of disease. However, this is too simple an account. Much of the consolidation around the definition of the physician as a sanctioned expert guided by certain kinds of tested knowledge was, in retrospect, attractive for reasons other than the actual power of therapeutic knowledge at the time. This model of what a physician should be was “a cognitive system and a set of social practices”34 that incorporated other social and institutional developments of the time, such as associations between progress and measurement, and the notion of quantification as an indication of objectivity and science.35
An asserted ability to provide discerning management of what historian Martin Pernick once described as a “calculus of suffering”—that is, to provide reliable counsel in navigating difficult trade-offs and decisions—was particularly distinctive to this ascendant orthodoxy.36 Pernick took as the paradigmatic, if not specific, driver of this change the combination of promise and resistance that greeted the heralded—but also very risky—surgical use of general anesthesia in 1846. As a key feature of post-nineteenth-century health care, this recurring theme of describing medical work through a scope of manipulation tempered through a testable calculus is relevant for understanding much of the inherited vocabulary and concerns that mattered to Committee members, especially Beecher. The shifting meaning of the physician as “risk manager” describes a historical narrative that spans both early twentieth-century anxiety about medical heroics and early twenty-first-century debates about the design of health services. However completely or incompletely the “calculus of suffering” narrative explains the emergence of a form of mid- to late nineteenth-century medical practice and knowledge in the United States, it carries enduring relevance. It focuses attention on the capabilities of medical knowledge, as well as the “cognitive system … and practice” within which these capabilities worked. Compared to the prevailing rescue narrative of bioethics, it directs curiosity to possible alignment between the capabilities of medical knowledge and concrete ways to frame and resolve medical dilemmas. It also seems pertinent to the appearance of brain death and the critical response to it. Beecher’s experience of how this physician identity played out in the context of rapidly changing hospital-based medical practices framed his thinking about brain death. That endurance should get our attention.
In an anonymously authored Atlantic Monthly article from 1957, which indicated emerging alarm over medicine’s ethic of preserving life at all costs, a woman grieved the death of her husband:
There is a new way of dying today. It is the slow passage via modern medicine. If you are very ill modern medicine can save you. If you are going to die it can prevent you from doing so for a very long time … As they fight for spiritual release, and are constantly dragged back by modern medicine to try again, does their agony augment … Enter the sickroom and sit with your beloved, and endure the long watch while this incredible battle between spirit and medicine takes place.37
An editorial in the New England Journal of Medicine noted: “This is an article that cannot be summarized. It should be required reading for physicians.” The editorial went on to argue that physicians should examine the effects of new medical technologies:
A decrease in dignity and rapport with the bereaved seems in inverse proportion to the efficacy of the medical sciences to prolong life. Perhaps there is no alternative, for certainly euthanasia is repugnant to every ideal of medical tradition. On the other side of the coin, however, is an approaching specter that looks almost as ghoulish and quite as menacing as euthanasia itself.38
The degree to which these anonymous remarks reflected broader public concern about the intrusiveness of medical technology is hard to gauge. Few articles like the Atlantic Monthly piece appeared over the following decade, and sporadic press attention to new medical technologies reflected a mixture of fascination and celebration, as well as anxiety. A more sustained and effective groundswell of concern did not appear until a flurry of newspaper editorials and special features responded to the publication of the Harvard Report, accelerated again by the Karen Ann Quinlan case when it was filed in 1975. Quinlan was a young, comatose woman whose parents wished to end medical treatment they saw as futile. The 1976 decision in that case by the Supreme Court of New Jersey recognized the legal standing of proxies to act for comatose patients and to authorize the removal of medical treatment. The publicity that surrounded this case mobilized public interest and attention around the value of intrusive, manipulative, and technologically intensive forms of therapy aimed at prolonging life.
The appearance of the Report, then, can be placed at the outset of stronger public attention to these issues, including acceptance and humane treatment of terminal illness and renewed interest in euthanasia. By the 1970s, advocacy for euthanasia had shed the beneficent, often eugenically oriented posture of earlier decades. Advocates instead framed euthanasia as more of a rights-based claim, an assertion of self-ownership over how people died or incorporated serious illness into their lives.39
But prior to these developments such issues were talked about in the medical literature, albeit hesitantly. In the early 1960s, articles began appearing in medical journals about what was often referred to as “the hopeless case,” echoing a critique of medicine launched by Anonymous:
In the last few years, I am sure many senior physicians (and some junior ones as well) have been troubled, as I have been, by certain of the effects of our increasing ability to prolong the life of people. The sulfonamides, the antibiotics, a better understanding of the uses of blood, machines such as artificial pacemakers, and artificial kidneys, the newer breathing apparatuses, radical and improved surgical techniques, our better knowledge of nutrition, etc. all have played a role at one time or another in saving, and hence prolonging the life of many people [but perhaps only prolonging suffering] … Who among us, after such sights can be proud of what we have wrought?40
Quoting with sympathy and agreement the remarks of Anonymous, Frank J. Ayd, a psychiatrist and a friend of Beecher’s, argued in a 1962 lead article in JAMA that treating the terminally ill risked a situation in which “life preserving treatment ceases to be a gift and becomes instead, a scientific weapon for the prolongation of agony.”41 He strongly advocated for and elaborated on Papal endorsement of discontinuing extraordinary treatments:
Since an individual has the right to dispose of his own person and, therefore, to have a voice in his manner of dying, he may not only refuse extraordinary means of prolonging life, but he may also reject means which are ordinary but artificial and which offer no hope of a cure.42
Patient or family authorization of the removal of care was not an uncommon practice. But neither does it appear to have been widespread. Papers and editorials in journals and medical newsletters indicate other activity, mentioning conferences, speeches at local symposia, and medical society meetings with panels of clergy, citizens, and physicians—all hard to enumerate and catalogue but signaling, nevertheless, interest and anxiety within the profession as to the nature of physician responsibility to not treat or to stop treating.
These traces of conversations about medical overreach within the profession highlighted two key considerations in defining the appropriateness of treatment. One was whether a patient’s death occurring after a decision to withhold treatment was due to the inevitable course of disease in the face of passive omission or was, instead, the result of physician action—that is, of active commission. Another was the related difference between treatments considered to be “ordinary” versus those considered to be “extraordinary.” The distinction between an illness that was allowed to meet its natural end as opposed to one that still compelled medical intervention had long been used to guide care of the very sick. The balance of ending treatment without euthanizing—of being clear when one was permitting, and not causing, death—was a preoccupation of much of the 1950s and 1960s literature on hopeless and futile cases, as it had been decades before and as it would remain decades later. Similarly, opinion as to which set of circumstances more convincingly cast an intervention as “ordinary” instead of “extraordinary” had long played a defining role in shaping consensus over what constituted acceptable care. For those with severe coma, much of the work of Schwab and other neurologists would involve describing how it was possible to know which signs described a treatable condition. More than simple prognosis was at stake in this calculus. Attempts to specify what fell outside of the “ordinary” also often drove an understanding of how (and whether) uncertain medical knowledge progressed. After all, how did, or does, medicine progress other than by constantly making the extraordinary, ordinary?
The implications of differentiating between ordinary and extraordinary treatment had been pondered since at least the nineteenth century: administration of oxygen to a patient who was breathing spontaneously (before the advent of respirators) but in a presumed irreversible coma; continued insulin treatment for a diabetic patient with terminal, painful, metastatic cancer; use of caffeine for a lethargic dying patient; the intense pains of necessary amputation; the use of general anesthesia at all. For each of these examples, sources could be cited that considered the treatment in question extraordinary.43
New ambiguity over when death occurred complicated how doctors were to make these distinctions between ordinary and extraordinary care, and between omission and commission of harm. That ambiguity needed to be addressed, and growing unease over when death occurred can be read in medical journals especially in the latter part of the 1960s.44 This context, in which action versus omission and ordinary versus extraordinary care framed a calculus of suffering, helps explain (as will be discussed in more detail) the tension in the Report itself over why this seemingly new coma should not just describe when to end treatment, but how to define death. These new coma patients, supported by respirators and other new life-support technologies, seemed to blur these distinctions. One JAMA article remarked:
I remember when cessation of heartbeat was an observation on which we simply pronounced the patient dead; now this is a medical syndrome known as cardiac arrest, which demands prompt, skilled, and at times heroic treatment … I have seen patients with brain-stem failure, with dilated, fixed pupils, decerebrate rigidity, and cessation of spontaneous respiration, who … were assisted with a mechanical ventilator … I have never seen such a patient begin to breathe spontaneously and survive, and autopsy always shows advanced liquefaction necrosis of the brain … When did the soul leave the body? Is turning off the respirator murder?45
“What and when is death?” read another JAMA editorial, appearing in early May 1968, months before Beecher’s Report.46 The editorial responded to a paper in that issue of JAMA by Martin Halley, a thoracic surgeon and attorney, and William F. Harvey, one of his law school teachers, which pointed to the “serious difficulties sometimes encountered in establishing the end point of human existence, or moment of death, by present medical definitions and with use of available objective standards and current criteria.”47 Medical practices, they argued, were beginning to incorporate a new working understanding of death that the law needed to acknowledge. Life could no longer be reliably understood by the appearance of “vital functions” such as pulse and respirations, unless one meant the vital functions without artifice, without “extraordinary measures”—a definition supported by Pope Pius XII in an address to the International Congress of Anesthesiology, which met in Rome in 1957 and in which Beecher himself took part. While ceding to other experts the question of whether to define death as the cessation of brain function, the Pope had clear opinions about extraordinariness. “In those that are considered to be completely hopeless … it cannot be held that there is an obligation to use resuscitative interventions,” nor was there an obstacle in the way of “letting the doctor remove the artificial apparatus before the blood circulation has come to a complete stop.”48
Halley and Harvey identified a new level of complexity in the oft-used distinction drawn between treatments that were ordinary and those that were extraordinary. Use of “extraordinary” generally meant “uncommon,” “experimental,” or “unnatural” aspects of medical practice, but Halley and Harvey used extraordinary to describe the condition of some patients. What now became strange, or extraordinary, was the notion of death itself. The machinations of the intensive care unit (ICU) obscured and ended unmediated access to “ordinary” death.49 Death before dying was strange business. “The concern was basically that there was a finite event that the patient was seen as alive, and then when you did something it was no longer alive … it wasn’t seen as a withdrawal … That took some time to develop.”50
This developing calculus of suffering applied old tools of omission and ordinariness to these new circumstances. These tools tended to underscore a certain understanding of medical knowledge with which they were aligned. They focused on the consequences, and therefore responsibilities, of physician judgment, and so were linked to attitudes that acknowledged but generally sidelined the patient’s role as arbiter over decisions to pursue or withhold medical interventions. In a widely read article in CA, the American Cancer Society’s informational journal for physicians involved in cancer treatment, entitled, “You are standing at the bedside of a patient dying of untreatable cancer,” Mayo Clinic physician Edward Rynearson advocated for an end to the prolonged suffering caused by prolonged treatment and for listening to patient wishes to guide such decisions.51 His article is of particular interest because of the response to it. Months later, CA editors published forty responses by physicians throughout the United States as well as other countries, including senior cancer practitioners, medical school professors, department chairs, deans, and American Cancer Society leaders. Of these forty, five respondents criticized and thirty-five generally supported Rynearson’s sentiments as to the dangers of overreach. However, only five responses could be interpreted as explicitly supporting reliance on patient choice in drawing that line. Rynearson’s critics and sympathizers alike voiced the perception that “most” physicians, patients, and people at large were of the opinion that physicians should withhold treatment to sufferers for whom treatment only delayed death.
This sort of response recurs throughout the medical literature, in which questions of what counted as “unremediable” suffering, and when treatment “only delayed death,” tended to be answered by falling back on the notion of the skilled physician as risk-calculator rather than by leaving such judgments to the discretion of patient choice, which was perceived as potentially unreliable and ambiguous. Hahnemann Dean Charles Cameron wrote that his school would no longer subscribe to CA and took steps to ensure its students would not even see the issue containing Rynearson’s article:
For ten years, while I was Medical and Scientific Director of the American Cancer Society, I fought for the philosophy of fighting for the life of the cancer patient up to the end and I did this in an effort to overcome the resignation and inertia which seemed to characterize the care of the cancer patient in his last weeks. I do not know who is wise enough to say what cancer is treatable, or when it becomes treatable. Certainly, many cancers which we are treating today with a good deal of vigor and with an immense amount of psychological and physical support to the patient were considered untreatable at the time I was an intern.52
Subsequent generations of physicians, bioethicists, and judges would argue that the patient was by necessity wise enough. But from the 1940s well into the 1960s, physician commentaries on hopelessness, extraordinariness, and withholding treatment linked these themes together primarily through a shared focus on the perceived consequences of physician abdication of responsibility for hard choices. A diminished commitment to treating the severely ill would lead to neglected, “narcotized” victims of therapeutic pessimism, slackened attention to advancing therapies, illusory confidence in the certainty of prognoses, and the abandonment of hope. As a result of these fears, even those who advocated for limited treatment were not committed to full-bore patient autonomy.
This hesitant embrace of patient prerogatives by physicians can be, and certainly has been, criticized as undermining claims physicians might make as appropriate guarantors of medical ethics. But this criticism has often also closed off curiosity about the issues of medical knowledge, purpose, and progress that did preoccupy these physicians and that lay behind much of this hesitation. The challenges of creating and using medical knowledge underlay the longevity and usefulness of tests of ordinariness and omission in medical dilemmas, and shaped beliefs about how and when to include patient prerogatives. Beecher understood these challenges in ways that informed how he considered this new coma to signal not only the end of care, but the end of life.
Before the respirator and the ICU, the treatment of cancer raised questions of when and if treatment of disease was excessive. When was “radical surgery” too radical? The dilemma was that improvement in complex surgical procedures required learning by first doing these procedures poorly, thus potentially blurring the distinction between accepted treatment and experiment. Only continued attempts at risky treatment could inform progress and achieve greater success. This reverse Faustian bargain—hell first, then possibly heaven in return—proved an exceptional calculus of suffering and so, predictably, physicians tended to claim that its hazards demanded exceptional professional integrity, beneficent motive, and technical excellence.
Improvements in surgical technique and postoperative management in the 1940s and 1950s renewed surgical adventurism toward cancer, along with a related tendency to see medical progress as a series of hard choices. Some radical abdominal resection surgeries for cancer showed surgical mortality rates of 30 percent, with a similar probability of achieving only palliation of symptoms in the end. In the context of other options, these outcomes were considered encouraging signs, confirming the wisdom of “a more radical attitude in regard to the surgical treatment of advanced abdominal cancer.”53 There were few perceived alternatives to surgery for cancer at this time. Surgeons could tangibly intervene in a dread disease at a time when radiation or chemotherapeutic treatments were less effective, even for palliation of symptoms. After all, “what are you going to do for a patient who is so uncomfortable that she is a morphine addict?” asked one of the leading gynecologic surgeons of his day, Joe Meigs of MGH.54 Many prominent surgeons—George T. Pack, Alexander Brunschwig, and Jerome Urban at Memorial Sloan-Kettering in New York, and Owen H. Wangensteen at the University of Minnesota, among them—proudly promised ever more extensive surgical resection of cancer.55
In contrast, Harvey B. Stone, one of the more prominent critics of what he called a “newer radicalism of cancer surgery,” argued that it did not necessarily follow that just “because it is possible to do certain extensive operations … it is therefore sensible or desirable to do them.” But at the same time he observed that “we must see human progress equally dependent on the sturdy retention of the proved good, and the adventurous search for the unknown better.”56 His was a characteristic belief that management of the relationship between risky treatment and progress was a defining feature of physicians. That experience of how medicine learned and progressed to improve outcomes was tightly connected to what physicians thought they owed those they treated in terms of disclosure, listening, and discussion of treatment options.
“A major problem in managing the ‘hopeless case’ concerns the imparting of pertinent diagnostic and prognostic information to the patient,” explained one early 1960s commentary.57 The removal of large amounts of intestine, or half of someone’s pelvis, carried significant surgical risk of mortality or disability; the possibility of a cure was uncertain and usually unlikely. Enormous costs mixed with the enormous uncertainty of benefit:
Should all the possibilities in every case be outlined to the patient and then the burden of decision placed on him? Such decisions require perspective that the physician cannot often impart, even to the intelligent and emotionally well-balanced patient. Should the physician then become the personal advisor …? If [so], according to what norms … should he make this decision … [as] his manner of presentation will often be decisive in selection of therapy.58
Whether decision making was imposed or shared, neither choice, in this view, circumvented the ultimate responsibility of the physician to shape the course of treatment. While data that could help guide decisions for cancer surgery accumulated, it remained limited. The uncertainties these decisions presented were substantial and were not resolved by passing responsibility for decisions on to patients, many physicians argued. The physician ultimately, unavoidably, needed to exercise judgment. While paternalism and professional authority are at work here, so is the management of uncertainty that Pernick placed at the center of medical knowledge, purpose, and identity.
A discussion in medical journals throughout the 1950s and into the 1960s on the responsibilities and meanings of sharing “truth” with patients often boiled down more specifically to the question: Should the cancer patient be told? This question was perhaps the most written-about ethical issue during this period, with the possible exception of closely related questions about human experimentation. It was also perhaps one of the first empirically studied medical ethics issues. Donald Oken’s frequently cited 1961 survey appeared at the end of a decade that included several published inquiries in surgery and cancer journals into physician practices and patient attitudes toward truth-telling about cancer diagnoses.59 Despite the fact that an overwhelming majority of patients surveyed during this period expressed expectations of a truthful diagnosis, with a significant minority expressing resistance or uncertainty over the idea,60 the reported behavior of surveyed physicians generally showed more definitive resistance. Oken concluded that physicians relied on pat assumptions to explain their reluctance to fully disclose this diagnosis; often, these assumptions were not substantiated by their own reported experience—for example, the assumption that patients would become suicidal if clearly told they had cancer. The reason Oken offered for this was physicians’ own discomfort and fear of cancer and death. While physicians wished to be told about cancer when it came to their own diagnoses, they were generally fatalistic about the disease and delayed pursuit of its diagnosis when they had symptoms themselves.61
Oken’s findings reflected a growing attention within medicine to the psychological dimensions of cancer treatment and the dynamics of medical decision making.62 During this time, Oken was the Assistant Director of the Institute for Psychosomatic and Psychiatric Research and Training at the Michael Reese Hospital and Medical Center in Chicago, and his work reflected that of a slowly growing group of physicians interested in the study of the psychological well-being of patients facing serious illness. Michael Reese—a storied public hospital, founded by Jewish immigrants on the impoverished south side of Chicago in the latter nineteenth century to serve all those in need of care, which closed in 2009—was home to the Institute, led by Roy Grinker. Grinker was trained in neurology as well as in psychoanalysis, as an analysand of Sigmund Freud. He is credited with developing the “biopsychosocial” integrated model of disease and positioned the Institute, in its heyday, as a leader in efforts to re-describe psychological care in more integrated ways.
The growing presence of psychiatrists in medical hospitals through the middle of the twentieth century was fueled to a large degree by this interest in the psychological components of medical illness. These physicians sought empirical support for the relevance of the psychiatric and emotional dimensions of patient experiences with cancer and other illness, and studied long-assumed beliefs about the emotional impact of the information physicians gave cancer patients about their condition. Before the hospice movement and the emergence of routinely available care alternatives for the terminally ill, clinician investigators began to flesh out the principles of psychologically sensitive care for the dying and seriously ill, and produced a visible public literature that opened up the subject to larger attention.63
Arthur Sutherland, at Memorial Hospital in New York, and Jacob Finesinger, at MGH, used a detailed patient interview format to explore the psychological impact, resilience, and needs of cancer patients.64 This work generally did not offer spirited advocacy for a specific obligation to tell—and an inherent right to hear—the full truth. But it did capture, in significant detail and poignancy, the burdens of disease and its treatment on individuals and their families.
This curiosity about illness experience tended to reinforce but also modify physician-driven prerogatives:
Good doctor–patient communication does not mean that the doctor give extensive lectures of frightening facts and statistics, but rather that he create an atmosphere in which the patient is encouraged to talk constructively about his problems … While the physician should not be evasive, he should certainly be circumspect in his direct statements. The physician should remember that the patient hopes more to be reassured than to be educated in oncology.65
While it was best to get the truth out there, it was generally advised to dole it out in portions for which the patient was deemed ready.66 One frequently quoted advocate of disclosure, the physiologist and clinical researcher Walter C. Alvarez, wrote that “medical lying is wrong, usually futile, and even harmful.” He was, nonetheless, a critic of “lying” in degree, not kind. He explained, for example, that if he found inoperable prostate cancer in an elderly man, especially if that man had concurrent heart disease or hypertension that probabilistically could kill him first, the patient would not be burdened with this information. “[T]he physician should talk or keep silent, depending on the patient’s courage and willingness to hear the truth.”67
Psychiatric advice also emphasized that physicians needed to face their own fears,68 as well as appreciate the psychological strategies used by those facing serious illness, so that they could be more engaged and responsive but still, necessarily, remain the primary decision makers: “Questionnaires and moralistic generalizations about ‘always’ telling or not telling a patient … are of little value in helping a particular patient who is faced with the problem not as an abstract question but as an agonizing immediacy.”69 This seemingly mixed message—to make highly edited disclosures but also assert full commitments to patients—was compelling at the time in the way it called upon the longstanding belief that the expertise of physicians lay in the application of medical knowledge to particular persons, along with the responsibility to weigh interventions against the costs specific to each person: “A policy [with regards to truth telling] implies uniformity and uniformity is a distillate of indolence and insensitivity having no place in the practice of medicine.”70
John Gregory’s groundbreaking eighteenth-century treatise on medical ethics reads almost the same way.71 His discussion of truth-telling sorts through clues as to the impact and imperfections of prognosis, but it also unfolds within another argument. That argument was to convince readers that the way he talked about knowledge and professionalism was a break from prior uses of knowledge anchored more in social position than in expertise and science. In this way his self-description as applying knowledge with patients as a “gentleman” might sound quaint and condescending to our ears but was radical and provocative to his contemporaries. His account captured a larger social change in Britain, where natural knowledge and scientific work reinforced a revised social order.72
Then, and again in the United States centuries later, information was itself considered a potential therapy, or danger. More than just emotional awkwardness or simple professional power was at stake in how physicians disclosed information. Across centuries, then, debate and advice about truth-telling practices reflected differing commitments to the nature of medical knowledge, certainty, and physician role, and were situated within a broader cultural milieu and norms of disclosure.73 Should the cancer patient be told the truth? Articles and books addressing this question from the 1950s through the late 1960s advised doing so. That inclination, however, was heavily qualified by calculations of patient “benefits” and “risks” within a contingent and otherwise overwhelming mix of sickness, hope, and suffering.74
Of course, the extent to which such committed and beneficent relationships were actually practiced and experienced cannot be determined from this literature. There has been limited, careful study of the nature of doctor–patient interactions directly through patient diaries, letters, or medical records, for example. Conceit, condescension, and bias are palpable in this truth-telling literature, to be sure. But the emergence of a vocabulary for incorporating the reality of care for people, and the airing of otherwise previously quiet subjects, is evident as well: detailed descriptions of painfully held beliefs by some cancer patients, such as notions that they caused their cancer by moral or sexual transgression. Other examples of emerging disclosure were found in the experiences of women, especially those with hysterectomies or mastectomies, who faced isolation wrought from their own (and their spouses’) resentments of mutilated sexuality and impaired domestic roles. Still other patients spoke of debilitating depression and anxiety associated with the cancer diagnosis when faced with inadequate support, intervention, and/or information from the physician. Published case studies and anecdotes, the dread and discomfort palpable in both public and medical sources, and references in medical accounts attesting to the variability in patient preference for information perhaps reflected greater ambivalence and fear about truth among patients than surveys suggested.
The public in many ways shared and fueled reaction to cancer as a dread disease, as well.75 Answering the dilemmas of intrusive medicine by simply trying to raise the voices of patients and diminishing those of physicians in later decades did not necessarily solve the deeper challenge that animated prior centuries of physician approaches to truth-telling: discerning which medical evidence matters and how to incorporate and satisfy both the sufferer’s subjectivity and application of objectivity in the medical encounter and in healing practices that rest on biology.
After the 1960s, challenging beneficence as a cornerstone of that discernment and instead centering decision making in patients required, argued critics, cracking open what Jay Katz described in the title of his 1984 book as “the silent world of doctor and patient.”76 That book and that characterization have proved highly influential in shaping a historical picture of prior physician practices with respect to truth-telling and informed consent as a closed world that required the forceful crowbar of other experts to open up and challenge. Two legal cases in particular are cited as the leading edge of such disruption to the silence: Salgo v. Leland Stanford Jr. University Board of Trustees, in 1957, and Natanson v. Kline, in 1960.77 The first earned Katz’s distinction as the first light shed upon that closed world. Yet a reading of both opinions, and of legal writing about them at the time, suggests more complex shifts. Indeed, these and other court rulings well into the 1960s continued to support the idea that appropriate disclosure included physician judgments about what patients were ready to, or needed to, hear. While Salgo pronounced that a physician violated his duty to a patient by withholding facts necessary for “intelligent consent,” that case was described by a contemporary judicial ruling as reflecting an “extreme view.”78 Other jurisdictions soon before and after Salgo or Natanson argued the more sustained point of view (up to this time) that, for example, failure to disclose out of fear of causing excessive duress to patients “cannot be deemed such want of ordinary care as to import liability.”79 Specifically, failure to disclose the risk of laryngeal nerve damage in an operation needed to be understood in the following way:
Doctors frequently tailor the extent of their pre-operative warnings to the particular patient, and with this I can find no fault. Not only is much of the risk of a technical nature beyond the patient’s technical understanding, but the anxiety, apprehension, and fear generated by full disclosure thereof may have a very detrimental effect on some patients.80
Such privileging of physician discretion and attention to emotional reaction was common. Even Natanson, hailed as a groundbreaking precedent in revealing and cracking the code of silence, endorsed this background consensus with respect to truth-telling. Natanson involved a claim for damages for injuries sustained from cobalt radiation treatment. The Supreme Court of Kansas ruled on the narrow issue of whether conditions existed to re-try the case. The Court agreed that there were inadequate jury instructions as to whether any liability attributed to hospital personnel applied as well to their supervisory physician. In the course of reviewing the case, the court commented:
There is probably a privilege, on therapeutic grounds, to withhold the specific diagnosis where the disclosure of cancer or some other dread disease would seriously jeopardize the recovery.81
The court also pointed out that Salgo’s initial instruction that a physician disclose all facts affecting his “rights and interests and of the surgical risk, hazard and danger, if any” was found on appeal to be overly broad, and the reviewing court cautioned of the need to “recognize that each patient presents a separate problem, that the patient’s mental and emotional condition is important and in certain cases may be crucial …”82
So, with respect to physician disclosure and informed consent, degrees of truth-telling, the ideal of physician-as-balancer, and the perceived role of the physician as emotional gatekeeper of information were connected in legal reasoning as well as in medical journals, conferences, and practices. Furthermore, subsequent changes in acceptable practice and legal standards were patchy and inconsistently applied or understood. At the time they were issued, rulings like Salgo and Natanson were not obviously headed in the directions that bioethicist commentators later attributed to them; indeed, some legal scholars contemporary with Beecher and Curran warned that these decisions were mistakenly construed to imply such a direction.83
That is not to question the path since these rulings toward patient participation in receiving and weighing medical information, but rather to emphasize that it was a long path, one that remains a work in progress and still succeeds or fails to the degree that it is in dialogue with the conditions in which medical facts work. Pursuit of participatory, co-created medical knowledge has recurred for as long as there has been writing about healing. The debates over brain death after the Report appeared will illustrate the challenges of moving from participatory form and aspiration to participatory content and realization. Successful participatory and patient-centered care requires, even more, a curiosity to learn and take into account what medical facts can do, how they work, and how they are generated.
In addition to omission, ordinariness, and truth-telling, another key set of understandings that Beecher drew upon, described further in the next chapter, concerned expectations surrounding ethical experimentation. Ethicists and historians usually date the setting of clear rules around human experimentation to the aftermath of WWII, the investigation of Nazi uses of humans in research, and the subsequent Nuremberg Code, which resulted from the prosecution of some of the Nazi medical leadership charged with responsibility. While it is difficult to generalize about the conduct of research in the postwar decades, there was clearly widespread interest in the ethics of human experimentation. Irving Ladimer’s 1963 anthology of papers on human experimentation included excerpts of seventy-three papers written primarily in the previous decade, with an accompanying bibliography exceeding five hundred citations of English-language literature.84
Historical characterizations of debates over experimentation ethics tend to portray the Nuremberg Trial prosecutions and the resulting Code as the start of a line of thinking that replaced physician judgment with patient choice. That line of thinking, and the path it took, were far from straightforward, however. How and why did physicians concerned about experimentation ethics discuss Nuremberg? Why did they, and other non-physician commentators, generally consider research ethics to be best understood, again, as a product of physician integrity, beneficence, and prerogative? Delving into these questions complicates a history that posits the straightforward movement of patient-choice principles out of Nuremberg.
The so-called “Code” of Nuremberg was, more specifically, a section in the published judgment of the trial which took the form of a list of ten characteristics of “Permissible Medical Experiments” culled by the presiding judges from the unspecified consensus of “protagonists of the practice of human experimentation.”85 The American Medical Association (AMA) sent physician and physiologist Andrew C. Ivy as a medical expert to the team prosecuting Nazi physicians. Ivy’s report to the AMA, along with a draft of principles penned by another expert witness, neurologist Leo Alexander, apparently supplied much of the language used by the Nuremberg judges in writing the Code.86 For Ivy these rules were, as the Court’s opinion implied them to be, “well established by custom, social usage and the ethics of medical conduct,”87 and reflected a “common understanding.”88
The opinion responded to the defense assertion that these doctors technically met expectations of proper conduct because, as they argued, the subjects after all gave “consent.” Subjects purportedly consented since they were given the chance, for example, for a reprieve from a death sentence in exchange for research participation. No surprise, then, that the Code elaborated in exhaustive detail that uncoerced informed consent is required from the subject. The detail was tailored to close off creative loopholes—offered at trial by the defense—by thoroughly describing expectations that were “well established by custom.”
The authors of the Code had more immediate objectives than to explicate an ethical theory of informed consent to replace physician virtue and beneficence. From the published proceedings and histories of both Alexander’s and Ivy’s contributions, it seems more plausible to argue that the Court’s motivation was to reinforce those virtues. The Court was reacting to the sheer horror of the brutality of these experiments, conducted by people who were trusted to have known better. The experiments struck those who judged them as, essentially, an abandonment of the kind of relationship and fidelity doctors were expected to owe patients—be they recipients of treatment or research—all in order to serve the needs of a racist state and conform to a vile political culture. As historian of Nazi medicine Robert Proctor has noted, “The [Nazi] doctors … were not morally blind or devoid of the power of moral reflection … the primary failing of Nazi medicine, I would argue, was the failure of physicians to challenge the rotten, substantive core of Nazi values.”89 Yale law professor Robert Burt discouraged investing in Nuremberg the origins of a trans-historical ideal of patient/subject self-determination. The Code was instead a historically specific response. Investing the Nuremberg decision with these other meanings was a later development.90
For decades following WWII, physicians who described the proper conduct of experiments built upon a pre-WWII emphasis on distinguishing between research that was therapeutic (aimed at testing treatment for a patient) and research that was nontherapeutic (primarily aimed at collecting information of scientific interest without a prospect of benefit to that patient). Since the early twentieth century, this therapeutic/nontherapeutic distinction had been a central concept for policing whether an experiment was proper and for determining how medical organizations, such as the AMA and medical school faculties, reconciled a growing commitment to organized research with a physician’s core therapeutic identity.91 Defining most research as therapeutic legitimated it within the usual physician exercise of ordinary care. The traditional therapeutic/nontherapeutic distinction could help guard against a threat highlighted by the Nazi and Nuremberg experiences: loss of the proper balance between pursuit of obligations to patients and pursuit of obligations or benefits to society.
This logic was evident in much of the literature on experimentation ethics in the mid to late twentieth century. The inherent difficulties of achieving adequate informed consent, and of subjects truly grasping and investigators fully specifying the complexities and unknowns of an experiment—after all, by definition experiments involve risks and benefits that cannot be reliably known—were widely discussed. Some argued that if the risks of a nontherapeutic experiment were likely inconsequential, then it could be conducted without consent. Codes of experimentation ethics proliferated through the 1960s, including those of the British Medical Research Council (1963, [1953]), the British Medical Association (1963), the World Medical Association (Helsinki Declaration, 1964), and the American Medical Association (1966).92 All these simplified the more expansive Nuremberg definition of informed consent and explicitly required it only of nontherapeutic, and not therapeutic, research. Actually achieving the level of communication and understanding detailed in this much-discussed first item of the Code was thought by skeptics to be infeasible.93
It is hardly surprising that physicians defended idealized norms of beneficent medical practice as better protection for individuals against the possible dangers of research. But others argued this as well. Philosopher Samuel Stumpf, in a paper described by early bioethicist Albert Jonsen as the “first philosophical contribution to medical ethics,” found consent both morally indecisive (could one be permitted to consent to anything?) and often not feasible.94 The more skepticism a particular writer had about the reliability of consent, the more their ethical standard for acceptable research rested on the degree to which a study mirrored usual physician beneficent responsibilities in ordinary care.95 The more an experiment replicated the uncertainties considered common to routine care, the more usual physician leeway over disclosure in regular practice was considered adequate for research as well.96 The Nazi experience seemed only to reinforce the view that human subjects were best protected by the usual practices of ordinary medicine, policed by the beneficent ethic of that enterprise. This general understanding considered likelihood of abuse of subjects to be greatest in the absence of the guiding goal of benefit. On this score, some physicians voiced concern about Nuremberg’s permissiveness in sacrificing the individual for public gain.97
Parsing expectations of disclosure along degrees of therapeutic versus nontherapeutic purpose reflected an attitude about medical knowledge that included therapeutic “experimenting” within the scope of normal medical practice. That attitude opposed setting researcher and clinician identities farther apart from each other, a dichotomy which, as the Nazi experience also seemed to make clear, had to be avoided.98 The link between the Nazi experience and the need to clarify therapeutic and nontherapeutic responsibilities was frequently made explicit.
The legal literature on human experimentation mirrored much of what physicians were saying in this regard. A prominent Yale Medical Society panel discussion of the Nuremberg Code highlighted this tension:
Every doctor in the course of his daily practice engages in the conduct of experiment with his patients … the kind of clinical investigation which is more in the realm of pure scientific endeavor … where the information obtained is not likely to be of immediate value to the subject of the experiment … [Here is] the nub of our question.99
Ladimer was one of the most prolific commentators on the legal aspects of experimentation in the 1950s and 1960s. He was a faculty member of the Law-Medicine Research Institute at Boston University, founded in 1958 by William Curran, Beecher’s friend and fellow Ad Hoc Committee member. Both Ladimer and Curran acknowledged repeatedly what the clinical literature anxiously observed—that there was no clear statutory or jurisprudential understanding of experiment as a legal activity. “Experiment” had generally referred, since eighteenth-century English common law cases, to the deviation by physicians from accepted treatments. Physicians, this tradition emphasized, deviated from accepted practice “at their peril.” Mid-twentieth-century physicians, especially in the shadow of Nuremberg, argued on the contrary that experiment was not a deviation foreign to standard practice but was inherent in it and required for it.
Ladimer and others tried a different tack to make experiment less legally perilous. The way the common law addressed peril required an update. Experiment, Ladimer argued, should be considered not a deviation from practice but a different kind of practice altogether. It was a structured, separate activity quite different from usual medical practice. He defined it as “a sequence resulting from an active determination to pursue a certain course and to record and interpret the ensuing observations.”100
But attempts like Ladimer’s to specify the differences between normal medical hypothesis testing and “experiment”—based on distinctive features such as institutional setting or the processes of patient recruitment—generally faltered. The therapeutic/nontherapeutic distinction instead persevered to structure how research was conducted. Even Ladimer vacillated between portrayals of research as a distinct practice, and research as a nontherapeutic part of medicine managed within broader norms of medical conduct.101
The randomized clinical trial was put forward as the new gold standard for medical research beginning in the 1940s and 1950s. A randomized trial was presumably a distinct practice. Research in this case was different in kind from clinical care. But during this period, early architects of these research trial designs nonetheless often continued to describe appropriate levels of scrutiny and consent required of research through therapeutic/nontherapeutic distinctions. An ethical trial was one that most closely resembled the real-world ambiguities inherent in making choices about treatment that physicians presumably had experience navigating. Deviation from such usual medical practices in the case of nontherapeutic research meant more scrupulous review, justification, and subject participation as well as opportunities for ease of exit.102 By the late 1950s and early 1960s, experiment for the purpose of finding knowledge alone (nontherapeutic research) was, by wide consensus, subject to certain restrictions: informed consent, convincing relevance of study, competence of researcher, use of animal experimentation first if appropriate, and so forth. Such restrictions were contrasted to those applied to treatments for the patient’s therapeutic benefit.103
By the mid-1960s, legal scholars felt confident that a consensus about the law had moved far enough away from the peril idea. But concern remained with respect to the general absence of statutory or judicial traditions explicitly endorsing human experimentation, which was still generally construed to mean nontherapeutic study. “The law relating to clinical investigation,” reported a 1968 JAMA column, “is largely unsettled. There is little statutory regulation, and many of the important issues have not been decided by the courts.”104 Instead of suggesting research was some new practice, as Ladimer did, Beecher’s friends Paul Freund and Curran concluded it still made sense to take the ubiquity view. Freund and Curran argued in the pages of the New England Journal of Medicine that the best way to understand the status of experimentation in the law was to judge it within the broader expectations of the ethical physician. “In my own writings and those of Marcus Plante, of Michigan Law School,” wrote Curran, “the position is taken that ‘informed consent’ is not a clear concept so developed by the courts that it can now be followed with security by the medical profession in patient-treatment decisions … Even less, then, in clinical investigation.”105 The solution to a world without peril, opined Freund, is the “great traditional safeguard in the field of medical experimentation … the disciplined fidelity of the physician to his patient … First of all, do not do injury.”106
Physician and scientist Louis Lasagna, an authority on research design and often a prominent critic of his profession and of improper research practices, mused, in typical fashion:
One wonders how many of medicine’s greatest advances might have been delayed or prevented by the rigid application of informed consent … if we are concerned with the problem of risk and danger rather than the abstract trampling of human rights, we will need to [focus] the principle of informed consent in a host of other medical situations … if we are to rely on informed consent rather than the good judgment of the trained physician, we shall have to reorganize completely the practice of medicine.107
Lasagna made the connection between practices and values, and grasped the scope of change needed in practices for such a change in values to make sense. The conduct of research would, indeed, be “reorganized” dramatically during and after the 1970s. Research was increasingly defined as a distinct activity through bureaucratic, regulatory, and institutional requirements and routines specific to it. These made Ladimer’s conditions of “difference-in-kind” not arguments, but realities. During the 1960s it was common to hear charges that Food and Drug Administration (FDA) requirements of consent undermined patient safety; that Nuremberg created an impossible ideal and did not address the lessons learned from the Nazi experience as to the potential claim of social interest on pressures to participate in needed research; and that National Institutes of Health and Public Health Service requirements for research review committees actually privileged investigator-driven, not subject-centered, criteria.108 But each of these views gave way, in turn, to a normalization of Institutional Review Board (IRB) routines, to the dominance of informed consent in framing research ethics, to a widening of the scope of research open to IRB committee scrutiny, and to an assumption that FDA practices and randomized, placebo-controlled trials represented a scientific gold standard, after being highly contested as such in the preceding decades. Widespread belief that research (therapeutic or not) was a distinct enterprise, not intrinsic to practice and with separate rules and procedures, first required the sorts of changes that indeed made research a distinct enterprise—for example, the intensive capitalization, bureaucratization, and technocratic management of medicine and medical knowledge.
These developments, in turn, supported the broader distribution, sourcing, and standardization of medical knowledge in the decades following the 1960s.109 Framing research ethics differently worked when the conditions of work and knowledge production changed.
Efforts, especially since the 1970s, to challenge paternalistic decision making and privileged judgment and to open up medical accountability are to be celebrated and strengthened. But my interest in these developments comes not from what they mean ethically but from what they do substantively: how they can potentially make medical knowledge better in terms of the “styles of thought,”110 categories of explanation, points of view, driving questions, and rules for verifying evidence that comprise it. The close interaction seen in these prior decades, and in centuries before, between positions on how medical knowledge is created and the values and expectations guiding the conduct of care deserves our attention.111 It suggests that re-engineering how medical inquiry knows things has to be part and parcel of, if not preliminary to, any deeper change in the moral experience or accountability of care. Medical epistemology is wrapped up with moral position in the context of care.
In that respect I will argue that bioethics was actually not bold enough in driving needed innovation in how to generate and use medical knowledge. In the case of brain death, for example, a focus on bioethical tools of conceptual description and classification of brain states and meanings of personhood as a way to test the criteria took on the surface but not the depth of the problems that ostensibly drove their advocacy and use. Uncertainty, ambivalence, and strangeness persisted. Treating brain death as a problem of ethics analysis did not solve the kinds of challenges of managing medical work and knowledge for which older practices and values around omission, ordinariness, truthfulness, or therapeutic benefit were a response. Those practices remain a resource for curiosity rather than caricature, for a broader dialogue about how to do better in addressing what is hard about generating and using medical facts.
Before picking up these themes further later in this book, I first ask how Beecher took this background of prior medical decision making and experiment to arrive at brain death—how practice and value interrelated for him. Beecher relied on these familiar categories, but he took them on in new ways as well. He did so especially in response to how the brain-dead body erased familiar categories of medical explanation such that it not only compelled withdrawal of treatment but was no longer alive in that context.
1. Robert Young, interview with the author, July 25, 1991; Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death, “A definition of irreversible coma,” Journal of the American Medical Association 205, no. 6 (August 5, 1968): 337–40.
2. Helmut Thielicke, “The doctor as judge of who shall live and who shall die,” in Who Shall Live? Medicine, Technology and Ethics, ed. Kenneth Vaux (Philadelphia: Fortress Press, 1970): 146–94, 148.
3. Martin S. Pernick, “Back from the grave: recurring controversies over defining and diagnosing death in history,” in Death: Beyond Whole-Brain Criteria, ed. Richard M. Zaner (Dordrecht: Kluwer Academic Publishers, 1988), 7–74, 60.
4. Vanessa Harding, “Whose body? A study of attitudes towards the dead body in early modern Paris,” in The Place of the Dead: Death and Remembrance in Late Medieval and Early Modern Europe, eds. Bruce Gordon and Peter Marshall (Cambridge: Cambridge University Press, 2000), 170–87, 171.
5. Ad Hoc Committee, 337.
6. Beecher to Ebert, Box 6, Folder 17, Beecher Papers, The Harvard Medical Library in the Countway Library of Medicine, Boston, MA.
7. Beecher to Folch-Pi, Curran, Box 6, Folder 80, Beecher Papers.
8. Beecher to Curran, June 14, 1967 and June 23, 1967, Box 11, Folder 17, Beecher Papers.
9. Beecher to Curran, June 23, 1967, Box 11, Folder 17, Beecher Papers.
10. Beecher to Murray, Box 6, Folder 21, Beecher Papers.
11. Beecher to Ebert, October 30, 1967, Box 6, Folder 17, Beecher Papers.
12. Ebert to Beecher, November 3, 1967, Box 6, Folder 17, Beecher Papers.
13. Ebert memorandum, Box 6, Folder 17, Beecher Papers. Potter was actually appointed later as Beecher bemoaned to Ebert subsequently the unresolved lack of a theologian. See Beecher to Ebert, January 9, 1968, ibid.
14. Eelco F. M. Wijdicks, “The neurologist and Harvard criteria for brain death,” Neurology 61 (2003): 970. See also Calixto Machado, Brain Death: A Reappraisal (New York: Springer, 2007).
15. Ebert to Beecher, Box 6, Folder 17, Beecher Papers.
16. Transplantation was not a central concern to the Committee. Briefly put: Curran’s contribution to the Report, which began as an explicit response to the issues raised by transplant, was rejected by the Committee, as discussed in Chapter Three. MGH neurosurgeon Sweet failed, by his own account and supported by Committee archives, in his efforts as the only Committee member who purposefully sought to loosen the criteria explicitly in order to enhance transplant, as discussed in Chapter Five. The purposes, motives, and decades-long efforts and interests of Beecher and MGH neurologist Robert Schwab, who devised the essential elements of the criteria, show no indication of an interest in transplant but if anything do underscore an interest in setting limits to excessive uses of medical technology. Their work is discussed respectively later in the next chapter and in Chapters Four and Five. Other recollections and evidence support these interpretations as well. It is striking to me how scholars who are familiar with Ebert’s authorizing letter assume his words were aimed to smooth the path for transplantation. But the plain words simply do not convey that. Given the context that will unfold as summarized above, Ebert was arguably also more concerned about transplant and eager to see more responsible guidelines to police its use.
17. M. L. Tina Stevens, “Redefining death in America, 1968,” Caduceus, Winter 1995, 207–19, 217. Also see Stevens, “The Quinlan case revisited: a history of the cultural politics of medicine and law,” Journal of Health Politics, Policy and Law 21, no. 2 (Summer 1996): 347–66, and Bioethics in America (Baltimore, MD: Johns Hopkins University Press, 2000).
18. Albert Jonsen, The Birth of Bioethics (New York: Oxford University Press, 1998).
19. Peter Singer, Rethinking Life and Death (New York: St. Martin’s Press, 1994), 25.
20. One of the most accomplished scholars of the culture and emergence of brain death repeats this position as well. Margaret Lock, Twice Dead: Organ Transplants and the Reinvention of Death (Berkeley: University of California Press, 2002). This view also formed the core of critiques of the Committee’s work aimed at more general audiences. See Dick Teresi’s typical The Undead (New York: Pantheon Press, 2012).
21. Exceptions include Wijdicks, “The neurologist and Harvard criteria for brain death.” Most efforts to engage this detail, however, misread it. The gradual consolidation of the Beecher archive itself, and limited availability of material from medical records and key actors such as Schwab, also contribute to this. Mita Giacomini, “A change of heart and a change of mind? Technology and the redefinition of death in 1968,” Social Science and Medicine 44, no. 10 (1997): 1465–82. For example, Giacomini quotes—and then medical anthropologist Margaret Lock and others (Scott Henderson more recently, and discussed here in Chapter Six) have subsequently re-quoted—a section of a memorandum that Curran prepared for the Committee, which is the subject of Chapter Three. Curran argued that brain death would be inadequate to overcome possible legal obstacles to transplantation. Although the passage has been interpreted by these writers to demonstrate a transplant preoccupation, when put in context—of the Committee’s rejection of the more expansive approach Curran suggested, recollections of Committee members, and other actions by the Committee and Beecher in drafting the document—the memo instead outlines a direction and a preoccupation that the Report avoided rather than embraced. Martin Pernick—eschewing the presentist tendency to critique the Committee when it is criticized as practical rather than conceptually pure—was able to see the Committee’s interest in transplant as part of a broader view that linked judicious resource use and worthwhile innovation. Martin Pernick, “Brain death in a cultural context: the reconstruction of death, 1967–1981,” in The Definition of Death: Contemporary Controversies, eds. S. J. Younger, et al. (Baltimore, MD: Johns Hopkins University Press, 1999), 3–33.
22. Box 6, Folder 23, Beecher Papers.
23. John Bunker, “Henry K. Beecher,” in The Genesis of Contemporary American Anesthesiology, eds. Perry P. Volpitto and Leroy D. Vandam (Springfield, IL: Charles C. Thomas, 1982), 104–19.
24. Beecher remained active until his death in 1976. He wrote an authoritative history of Harvard Medical School, published and lectured on brain death, and completed, as will be detailed later in this study, a second book on human experimentation.
25. Curran to Beecher, October 27, 1958, Box 11, Folder 16, Beecher Papers.
26. MGH, Medical Records. Medical case records reviewed throughout this book will not be singularly referenced as to source other than noting the date and describing content, as part of maintaining confidentiality of identities and conforming to rules of use and MGH IRB approval for this purpose.
27. Schwab complemented his training in medicine at MGH with exposure to neuropathology at the University of Munich and psychiatry at the Boston Psychopathic Hospital, before returning to MGH in the mid-1930s. He died in 1972. See John S. Barlow, Obituary, Journal of the Neurological Sciences 19 (1973): 257–58.
28. Robert Young, Obituary, “Robert S. Schwab, M.D. 1903–1972,” Archives of Neurology 27 (September 1972): 271–72.
29. Ian Dowbiggin, A Merciful End: The Euthanasia Movement in Modern America (New York: Oxford University Press, 2002).
30. There was significant debate over the value of the respirator for these two generally different circumstances, with efficacy generally considered far greater in the latter case.
31. See Henning Pontoppidan, “The development of respiratory care and the Respiratory Intensive Care Unit (RICU): a written oral history,” in “This Is No Humbug!” Reminiscences of the Department of Anesthesiology at the Massachusetts General Hospital, ed. Richard J. Kitzler (Boston: MGH), 151–77.
32. Jack Emerson, interviews with the author, July 22, 1992 and August 4, 1992.
33. James C. Wilson, American Journal of Diseases of Children 43, no. 6 (June 1932): 1433–54.
34. John Harley Warner, The Therapeutic Perspective: Medical Practice, Knowledge, and Identity in America, 1820–1885 (Cambridge: Harvard University Press, 1986).
35. Joel Howell, Technology in the Hospital: Transforming Patient Care in the Early Twentieth Century (Baltimore, MD: Johns Hopkins University Press, 1995).
36. Martin Pernick, A Calculus of Suffering: Pain, Professionalism, and Anesthesia in Nineteenth Century America (New York: Columbia University Press, 1985).
37. Anonymous, “A way of dying,” The Atlantic Monthly 199, no. 1 (January 1957): 53–55.
38. Editorial, “Life-in-death,” The New England Journal of Medicine 256, no. 16 (April 18, 1957): 760–61.
39. Ian Dowbiggin, A Merciful End: The Euthanasia Movement in Modern America (New York: Oxford University Press, 2002).
40. Perrin H. Long, “On the quantity and quality of life,” Resident Physician 6, no. 4 (April 1960): 69–70.
41. Frank J. Ayd Jr., “The hopeless case: medical and moral considerations,” JAMA 181, no. 13 (September 29, 1962): 1099–1102, 1102.
42. Ayd, p. 1099.
43. Gerald Kelly, “The duty of using artificial means of preserving life,” Theological Studies 11 (1950): 203–20.
44. Examples of other frequently quoted considerations of this question that appeared soon before Beecher sought to establish the Committee include G. Biorck, “On the definition of death,” World Medical Journal 14 (September–October 1967): 137–39; J. Voigt, “The Criteria of Death,” 143–46; “When Are You Really Dead?,” Newsweek, December 18, 1967; Arthur Winter, ed., The Moment of Death (Springfield, IL: Charles C. Thomas, 1967); Frank J. Ayd Jr., “When Is a Person Dead?,” Medical Science 18, no. 33: 33–37.
45. William P. Williamson, “Life or death—whose decision?,” JAMA 197, no. 10 (September 5, 1966): 139–41, 139.
46. Editor, “What and when is death,” JAMA 204, no. 6 (May 6, 1968): 539–40.
47. M. Martin Halley and William F. Harvey, “Medical vs. legal definitions of death,” JAMA 204, no. 6 (May 6, 1968): 423–25, 423. A similar analysis was in M. Martin Halley and William F. Harvey, “On an interdisciplinary solution to the legal-medical definitional dilemma in death,” Indiana Legal Forum 2 (1968–1969): 219–37.
48. Pope Pius XII, “The prolongation of life,” Pope Speaks, November 24, 1957, 393–98, 397.
49. Halley and Harvey, “Medical vs. legal definitions of death,” and “Law-medicine comment: definition of death,” Journal of Kansas Medicine Society 69, no. 6 (June 1968): 280–82; “Law medicine comment: the definitional dilemma of death,” Journal of the Kansas Bar Association 37 (Fall 1968): 179–85.
50. William F. Harvey, interview with the author, January 21, 2003.
51. Edward H. Rynearson, “You are standing at the bedside of a patient dying of untreatable cancer,” CA 9 (June 1959): 85–87.
52. “Symposium on terminal care,” CA (January–February 1960): 12–24, 20.
53. Alexander Brunschwig, “Radical resections of intra-abdominal cancer: summary of results in 100 patients,” Annals of Surgery 122, no. 6 (December 1945): 923–32.
54. Meigs as an audience discussant of a paper by Whipple, in Allen O. Whipple, “Radical surgery in the treatment of cancer,” Annals of Surgery 131, no. 6 (June 1950): 812–23.
55. For a good summary and characterization of these individuals and perceptions of surgical therapeutic optimism, efficacy, and adventurism in this period, see Barron H. Lerner, The Breast Cancer Wars: Hope, Fear, and the Pursuit of a Cure in Twentieth-Century America (New York: Oxford University Press, 2001).
56. Harvey B. Stone, “The limitations of radical surgery in the treatment of cancer,” Surgery, Gynecology and Obstetrics 92, no. 2 (August 1953): 129–34, 133, 134. See also his editorial “Limitations in the treatment of malignant diseases,” 2–3.
57. Eugene G. Laforet, “The ‘hopeless’ case,” Archives of Internal Medicine 112 (September 1963): 314–26, 318.
58. John C. Ford and J. E. Drew, “Advising radical surgery: a problem in medical morality,” JAMA 151, no. 9 (February 28, 1953): 711–16.
59. Donald Oken, “What to tell cancer patients: a study of medical attitudes,” JAMA 175, no. 13 (April 1, 1961): 1120–28; W. T. Fitts and I. S. Ravdin, “What Philadelphia physicians tell patients with cancer,” JAMA (November 7, 1957); W. D. Kelly and S. R. Friesen, “Do cancer patients want to be told?,” Surgery 27 (1950): 822; D. Rennick, “What should physicians tell cancer patients,” New Medical Materia 2 (March 1960): 51–53, reported in Oken, supra.
60. Robert J. Samp and Anthony Curreri, “A questionnaire survey on public cancer education obtained from cancer patients and their families,” Cancer 10 (March–April 1957): 382–84, and “How much to tell?,” Time (November 3, 1961): 60. For similar findings of physicians’ reluctance to tell and patients saying they mostly wanted to hear, almost a decade later, see Group for the Advancement of Psychiatry, Death and Dying: Attitudes of Patient and Doctor, vol. 5, symposium no. 11 (New York: 1965), 591–667.
61. Guy F. Robbins, Mary C. MacDonald, and George T. Pack, “Delay in the diagnosis and treatment of physicians with cancer,” Cancer 6, no. 3 (May 1953): 624–26. See also Walter C. Alvarez, “How early do physicians diagnose cancer of the stomach in themselves? A study of the histories of 41 cases,” JAMA 97, no. 2 (July 11, 1931): 77–83.
62. Nathan S. Kline and Julius Sobin, “The psychological management of cancer cases,” JAMA 146, no. 17 (August 25, 1951): 1547–51, 1547–49.
63. H. Feifel, The Meaning of Death (New York: McGraw-Hill Book Company, Inc., 1959); Robert Fulton, Death and Identity (New York: John Wiley & Sons, Inc., 1965); B. G. Glaser and A. L. Strauss, Awareness of Dying (Chicago: Aldine Publishing Company, 1966); and L. Pearson, ed., Death and Dying (Cleveland: The Press of Case Western Reserve University, 1969).
64. Harley C. Shands, Jacob Finesinger, Stanley Cobb, and Ruth Abrams, “Psychological mechanisms in patients with cancer,” Cancer 4 (1951): 1159–70; Arthur M. Sutherland, Charles E. Orbach, Ruth B. Dyk, and Morton Bard, “I. The psychological impact of cancer and cancer surgery,” Cancer 5 (1952): 857–72; Ruth D. Abrams and Jacob E. Finesinger, “Guilt reactions in patients with cancer,” Cancer 6 (1953): 474–82; Arthur M. Sutherland and Charles E. Orbach, “Psychological impact of cancer and cancer surgery. II. Depressive reactions associated with surgery for cancer,” Cancer 6 (1953): 958–62; Morton Bard and Arthur M. Sutherland, “The psychological impact of cancer and cancer surgery. IV. Adaptation to radical mastectomy,” Cancer 8 (1955): 656–72; Marvin G. Drellich, Irving Bieber, and Arthur Sutherland, “The psychological impact of cancer and cancer surgery. VI. Adaptation to hysterectomy,” Cancer 9 (1956): 1120–26; Arthur M. Sutherland, “Psychological impact of cancer and its therapy,” Medical Clinics of North America 40 (1956): 705–20.
65. Arthur M. Sutherland, “Psychological impact of cancer and its therapy,” 719–20.
66. An attempt, reported in the flagship American journal Cancer, to devise a method by which psychiatrists could predict which patients would manage full disclosure and which would not. Bo Gerle, Gerde Lunden, and Philip Sandblom, “The patient with inoperable cancer from the psychiatric and social standpoints: a study of 101 patients,” Cancer 13, no. 6 (November–December 1960): 1206–17.
67. Walter C. Alvarez, “Care of the dying,” JAMA 150, no. 2 (September 13, 1952): 86–91.
68. John Trawick Jr., “The psychiatrist and the cancer patient,” Diseases of the Nervous System (September 1950): 278–80, 280. “Why then,” this author asked, “do we as physicians when faced with the ‘dread condition’ suddenly reverse our fields, drop all the painstaking technical knowledge which we have so laboriously acquired and regress to a floundering, rejecting and improvised but dishonestly rationalized and sanctimoniously self-justified level of performance?” (p. 279). For another critique of the roots of awkward physician behavior with terminal patients and their own psychological needs, see Edward M. Litin, “Should the cancer patient be told?,” Postgraduate Medicine 28, no. 5 (November 1960): 470–75. Here too is advice to generally tell, with disclosure and venting of fears openly seen as healthy and constructive goals. But again, this advice is balanced by counsel to weigh disclosure against patients’ needs and its expected impact.
69. Paul Chodoff, “A psychiatric approach to the dying patient,” CA (January–February 1960): 29–32, 31.
70. Bernard C. Meyer, “Should the patient know the truth?,” Journal of the Mount Sinai Hospital, New York 20 (March–April 1954): 344–50, 349.
71. John Gregory, Observations on the Duties and Offices of a Physician; and on the Method of Prosecuting Enquiries in Philosophy (London: Printed for W. Strahan and T. Cadell, 1770).
72. Gary S. Belkin, “Moving beyond bioethics—history and the search for medical humanism,” Perspectives in Biology and Medicine 47, no. 3 (Summer 2004): 372–85.
73. Gary S. Belkin, “History and bioethics: the uses of Thomas Percival,” Medical Humanities Review 12, no. 2 (Fall 1998): 39–59. Many medical texts on truth-telling, whether strongly advocating or sharply criticizing withholding information, vetted their views within a set of questions that also characterized the mid-twentieth-century debates on truth-telling: How do physician actions and demeanor impact healing? Such illustrative texts include Worthington Hooker, Physician and Patient (New York: Baker and Scribner, 1849); Richard C. Cabot, “The use of truth and falsehood in medicine: an experimental study,” American Medicine 5 (1903): 344–49; and Joseph Collins, “Should doctors tell the truth?,” Harper’s Monthly Magazine 155 (1927): 320–26.
74. Samuel Standard and Helmuth Nathan, eds., Should the Patient Know the Truth? A Response of Physicians, Clergymen, and Lawyers (New York: Springer Publishing Co., Inc., 1955); “Symposium: what shall we tell the cancer patient?,” Proceedings of the Staff Meetings of the Mayo Clinic 35, no. 10 (May 11, 1960): 239–57; James G. Wilders, “Should the cancer patient be told?,” JAMA 200, no. 8 (May 22, 1967): 157; Bernard C. Meyer, “Truth and the physician,” in Ethical Issues in Medicine: The Role of the Physician in Today’s Society, ed. E. Fuller Torrey (Boston: Little, Brown, and Co., 1968), 161–77.
75. James T. Patterson, The Dread Disease: Cancer and Modern American Culture (Cambridge, MA: Harvard University Press, 1987); George Crile Jr., “A plea against blind fear of cancer,” Life 30, October 31, 1955, 128–42.
76. Jay Katz, The Silent World of Doctor and Patient (New York: The Free Press, 1984).
77. Salgo v. Leland Stanford Jr. University Board of Trustees (1957), and Natanson v. Kline, 350 P.2d 1093 (1960).
78. Watson v. Clutts, 136 S.E.2d 617 (1964), 621.
79. Hunt v. Bradshaw, 242 N.C. 517 (1955), 521.
80. Roberts v. Wood, 206 F. Supp. 579, 583.
81. Natanson v. Kline, 1102.
82. Ibid., 1103.
83. Marcus L. Plant, “An analysis of ‘informed consent,’ ” Fordham Law Review 36 (1968): 639–72.
84. Irving Ladimer and Roger W. Newman, eds., Clinical Investigation in Medicine: Legal, Ethical and Moral Aspects (Boston: Law-Medicine Research Institute, Boston University, 1963).
85. Trials of War Criminals Before the Nuremberg Military Tribunals, vol. 2 (Washington D.C.: U.S. Government Printing Office, 1950), 181–82.
86. For a more detailed review of the contributions of Ivy, Alexander, and their work in the context of discussions by physicians and others in the years leading up to the actual trials, see Paul J. Weindling, “The origins of informed consent: the International Scientific Commission on Medical War Crimes and the Nuremberg Code,” Bulletin of the History of Medicine 75, no. 1 (Spring 2001): 37–71, and Weindling, Nazi Medicine and the Nuremberg Trials: From Medical War Crimes to Informed Consent (Basingstoke and New York: Palgrave Macmillan, 2004). See also Andrew C. Ivy, “Nazi war crimes of a medical nature,” Federation Bulletin 33 (1947): 133–46, reprinted in Ethics in Medicine: Historical Perspectives and Contemporary Concerns, eds. Stanley Joel Reiser, Arthur J. Dyck, and William J. Curran (Cambridge, MA: MIT Press, 1977), 267–72.
87. Andrew C. Ivy, “Report from war crimes of a medical nature committed in Germany and elsewhere on German nationals of occupied countries by the Nazi regime during World War II,” AMA Archives, quoted in Jonsen, Birth of Bioethics, 135.
88. A. C. Ivy, “The history and ethics of the use of human subjects in medical experiments,” Science 108 (July 2, 1948): 1–5, 3.
89. Robert N. Proctor, “Nazi science and Nazi medical ethics: some myths and misconceptions,” Perspectives in Biology and Medicine 43, no. 3 (Spring 2000): 335–46, 343, 344.
90. Robert A. Burt, Death Is That Man Taking Names: Intersections of American Medicine, Law, and Culture (Berkeley: University of California Press, 2002). See esp. 80–86.
91. See Susan Lederer, Subjected to Science: Human Experimentation in America Before the Second World War (Baltimore, MD: Johns Hopkins University Press, 1995).
92. Medical Research Council, “Responsibility in investigation on human subjects” (1963); World Medical Association, “Declaration of Helsinki” (1964); British Medical Association, “Experimental research on human beings” (1963); American Medical Association, “Ethical guidelines for clinical investigation” (1966), in Encyclopedia of Bioethics, vol. 4, ed. Warren T. Reich, “Appendix: Codes and Statements Related to Medical Ethics. Section II Directives for Human Experimentation” (New York: The Free Press, 1978), 1764–81.
93. “The voluntary consent of the human subject is absolutely essential. This means that the person involved should have the legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, overreaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision. This latter element requires that before the acceptance of an affirmative decision by the experimental subject there should be made known to him the nature, duration and purpose of the experiment; the methods and means by which it is to be conducted; all inconveniences and hazards reasonably expected; and the effects upon his health or person which may possibly come from his participation in the experiment.” Quoted in Paul B. Beeson, Philip K. Bondy, Richard C. Donnelly, and John E. Smith, “Panel discussion: moral issues in clinical research,” Yale Journal of Biology and Medicine 36 (June 1964): 455–76, 455.
94. See Samuel E. Stumpf, “Some moral dimensions of medicine,” Annals of Internal Medicine 64, no. 2 (February 1966): 460–70, 468. I contest the “first” status Jonsen assigns to Stumpf. An earlier paper is one on deception in experiments by James P. Scanlan. Scanlan argued that any deception, i.e., incomplete informed consent, violated a Kantian respect for others. And if one turned instead to a utilitarian teleology, any defense of deception still fails, because a utilitarian claim would balance a future benefit against a current harm; since the benefit of the experiment is unknown (that is why there is an experiment), utilitarian claims are inappropriate for experimental ethics on their own terms. A likely rejoinder to this, in the context of writing on this issue, particularly by clinicians in the 1950s and 1960s, would be that (a) then all medical intervention with uncertain outcome is immoral; and (b) reasonably expected goals and purposes, especially in the face of very limited risk, permit a good enough utilitarian calculus. James P. Scanlan, “The morality of deception in experiments,” Bucknell Review 13, no. 1 (March 1965): 17–26. More important, however, is to point out how the “first” historical narrative employed by Jonsen uncritically privileges and defines bioethics as a unique philosopher’s discourse. It ignores and fails to engage centuries, if not millennia, of philosophical and theological commentary on medicine.
95. Mindel C. Sheps, “Problems created in clinical evaluation of drug therapy,” Perspectives in Biology and Medicine 5 (Spring 1962): 308–23. Typical of this sort of approach was a JAMA editorial that illustratively cited a study at Bellevue comparing postoperative infection incidence between patients who did, versus those who did not, receive antibiotics preoperatively. Since real practice faced no clear consensus about the safety of either approach, and use or non-use would not routinely be detailed by the surgeon to the patient, the experimental situation mimicked the real situation and thus no “superfluous” actions, like a uniquely prescribed or reviewed consent, were necessary. Maxwell Finland, “Ethics, consent and controlled clinical trial,” JAMA 198, no. 6 (November 7, 1966): 637–38. Of note, this particular editorial was a critical response to a widely read and discussed criticism of inadequate use of informed consent by Beecher that appeared in the New England Journal of Medicine the same year, and which will be discussed more in the following chapter as part of seeing how Beecher both inherited and revised a longstanding set of practices and justifications for ethical conduct.
96. John J. Lynch, “Symposium on the study of drugs in man. Part III. Human experimentation in medicine: moral aspects,” Clinical Pharmacology & Therapeutics 1 (1960): 396–400, 396.
97. For example, prominent physician and future Chair of Medicine at Yale, Louis G. Welt, expressed the views of many when criticizing Rule 2 of the Code, which read: “The experiment should be such as to yield fruitful results for the good of society unprocurable by other methods or means of study. …” Referring to Rule 2, Welt wrote, “this can be so readily translated into actions wherein the ‘ends justify the means,’ and to a frame of references wherein the inherent rights of the individual are jeopardized in the interests of the society or the state.” See “Reflections on the problems of human experimentation,” Connecticut Medicine 25, no. 2: 75–78, 76–77. See also Michael B. Shimkin, “The problem of experimentation on human beings. I. The research worker’s point of view,” Science 117 (February 27, 1953): 205–7, for an often quoted critique of ever invoking society’s interest in doing nontherapeutic research, but also of how degrees of risk should correlate with degrees of protection.
98. See, for example, the argument by Donald Dietrich in “Legal implications of psychological research with human subjects,” Duke Law Journal (1960): 265–74; William Bennet Bean, “A testament of duty: some strictures on moral responsibilities in clinical research,” Journal of Clinical and Laboratory Medicine 39, no. 3 (1952): 3–9; Otto E. Guttentag, “The problem of experimentation on human beings – II. The physician’s point of view,” Science 117 (February 27, 1953): 207–10, 208.
99. Paul B. Beeson et al., “Panel discussion: moral issues in clinical research,” 455, 457.
100. Michael B. Shimkin, “The problem of experimentation on human beings,” Science 117 (February 27, 1953): 205–7, 205.
101. Irving Ladimer, “Ethical and legal aspects of medical research on human beings,” Journal of Public Law 3 (1955): 467–511. See also Ladimer, “Human experimentation—medicolegal aspects,” New England Journal of Medicine 257, no. 1 (July 4, 1957): 18–24.
102. For example, Austin Bradford Hill, “Medical ethics and controlled trials,” British Medical Journal (April 20, 1963): 1043–49; T. F. Fox, “The ethics of clinical trials,” Medico-Legal Journal 28 (1960): 132–41; Ladimer, “Ethical and legal aspects of medical research on human beings.”
103. Burke W. Shartel and Marcus L. Plant, “Consent to experimental medical procedures: failure to follow standard procedures,” reprinted in Irving Ladimer and Roger W. Newman, Clinical Investigation in Medicine, 223–30; Joseph Stetler and Robert R. Moritz, “Medical professional liability in general,” chap. 19 in Doctor, Patient and the Law, 4th ed., eds. Ladimer and Newman (St. Louis: CV Mosby, 1962), 230–33; Richard P. Bergen, “Consent in clinical investigation,” JAMA 203, no. 7 (February 12, 1968): 281–82, 281.
104. Richard P. Bergen, “Common law and clinical investigation,” JAMA 203, no. 6 (February 5, 1968): 231–32.
105. William J. Curran, “The law and human experimentation,” New England Journal of Medicine 275, no. 6 (August 11, 1966): 323–25, 324.
106. Paul Freund, “Problems in human experimentation,” New England Journal of Medicine 273, no. 13 (September 23, 1965): 687–92, 689.
107. Louis Lasagna, Life, Death and the Doctor (New York: Alfred A. Knopf, 1968), 259, 261.
108. These views were held by, among others, Curran in “The law and human experimentation.”
109. Gary S. Belkin, “Misconceived bioethics: the misconception of the therapeutic misconception,” International Journal of Law and Psychiatry 29 (2006): 75–85.
110. Ludwik Fleck, Genesis and Development of a Scientific Fact, eds. Thaddeus J. Trenn and Robert K. Merton, trans. Fred Bradley and Thaddeus J. Trenn (Chicago: University of Chicago Press, [1935] 1979).
111. Gary S. Belkin, “Moving beyond bioethics.”