Chapter 2
An Awakening
Medicine and Illness in Post–World War Two America

Every semester, when I ask my health sciences students to define what medical ethics means to them, I usually hear the same chorus of responses: treating the patient as a whole person. Advocating for the patient. My nursing students often chime in with the term “non-maleficence”—avoiding doing harm to the patient. They often share examples of when those ethics were challenged without divulging personal details, since patient confidentiality is taken seriously in our classroom, thanks to the Health Insurance Portability and Accountability Act (HIPAA) of 1996. Teenage patients who want a course of treatment different from what their parents want for them, or an elderly patient’s wishes not being respected by the next of kin tasked with difficult decisions, are common examples. Usually, it is when we explore instances of perceived lapses in judgment or ethics that we circle around to the fullest understanding of why the students choose to define ethics as they do.

Perched over Boston’s bustling Huntington Avenue in our fourth-floor classroom at Northeastern University, steps from Harvard teaching hospitals like Brigham and Women’s and Beth Israel Deaconess Medical Center, as well as other renowned institutions like Children’s Hospital, the Dana-Farber Cancer Institute, and the Joslin Diabetes Center, we are all fortunate. When I need to, I can take a left down Huntington Avenue, walking past the take-out restaurants and dive bars with dollar wing specials, past where the E Line street-level trolley cuts through the main thoroughfare, filled with high school students, nurses and medical residents, young mothers with children in strollers, and college students with iPod earbuds. When I walk through Brigham’s main revolving door I am no longer a writer and health sciences writing lecturer; I am a patient with a rare disease who depends on the innovative treatments and technology at hospitals like this. Quite literally, I am crossing Sontag’s threshold from the kingdom of the well into the kingdom of the sick. In many respects, I know this kingdom and its attendant customs and interactions more intimately than I do the realm of the healthy. Appointments and hospital admissions are frequent in my world, and the diagnostic tests, procedures, and treatments I sign consent forms for are a routine part of my life.

My students often make this same walk down Huntington to their respective clinical and co-op placements, field placements the school arranges, minutes from their dorms and apartments and their classrooms and labs. They learn about patient care in some of the most medically advanced and prestigious research hospitals in the world. They too cross into another kingdom, shedding their college student personas and adopting the mindset of the health care apprentice. We each have our roles, and it is easy to forget that not all patients and providers have access like this.

My students’ definitions of medical ethics are on point, but it takes more probing and more discussion before we land on the topics of informed consent and patients’ rights. I think this is partially a good thing; they see these principles at work so regularly in their rotations that the principles are just that: routine. Lists of patients’ rights are posted on walls and in emergency room bays throughout hospitals. Informed consent for procedures big and small often—not always, but often enough—entails a quick overview and the dash of a perfunctory signature. But every now and then, a student will question what we often take at face value.

How informed is consent if, say, the patient doesn’t have a good grasp of English and there isn’t time for a translator, or the resources to provide one? To what extent do patients who have no health insurance and therefore limited ability to seek different care have their basic rights upheld? How helpful or equitable are online resources if they assume a digital literacy and access that some patients don’t have?

Most often, it is when something goes wrong that we stop and think about the potential risks listed on the procedures we agree to undergo, or consider just what it means to be treated with respect and dignity regardless of our origin, religion, or financial status. Because those of us with appropriate access to consent forms and patients’ rights have the luxury to navigate a medical establishment that is at least moderately successful in upholding these basic promises, we don’t have to stop and consider them as much as we might otherwise.

The ethical treatment of patients may depend in part on whether we think our illnesses say more about us than our health. On the surface, if we are just looking at obesity rates, cardiovascular disease, or a decline in physical activity precipitated by a digital lifestyle, it is easy to claim that yes, perhaps they do. If we consider the association between the environment in which we live and the risk of developing certain cancers and other conditions, then that is another layer of probability. However, the question probes at something much deeper than that. If our illnesses reveal strength or weakness in us, then so too does the way we treat the individual patient living with illness.

In the decades just following World War Two and leading up to the social justice movements of the 1960s and ’70s, the track record on many of the concepts most of us now take for granted was egregious. Informed consent was at best an afterthought, at worst deliberately ignored, and medical decision making was too often deeply skewed toward those with power. The 1950s and ’60s marked a turning point in patients’ rights, ethics, and medical decision making. For patients living with chronic and degenerative diseases, the timing of this shift was critical.

Chronic Illness as an Emerging Priority

On the heels of World War Two, America was coming down from the heady throes of wartime patriotism and gaining exposure to increasingly innovative medical technology. The establishment of the independent, nonprofit national Commission on Chronic Illness in May 19491 indicated a growing awareness of the demands of chronic disease. The Commission on Chronic Illness was a joint creation of the American Hospital Association, the American Medical Association, and the American Public Welfare Association,2 and its initial goals included gathering and sharing information on how to deal with the “many-sided problem” of chronic illness; undertaking new studies to help address chronic illness; and formulating local, state, and federal plans for dealing with chronic illness.3 This included plans to dispel society’s belief that chronic illness was a hopeless scenario, create programs that would help patients reclaim a productive space in society, and coordinate disease-specific groups with a more universal program that would more effectively meet the needs of all patients with chronic illness, regardless of diagnosis.4

These goals indicate that when chronic illness was emerging as a necessary part of the postwar medical lexicon, it was seen as a social issue, not just a physical or semantic one. Many of these goals are the same ones patients and public health officials point to today, signaling either that the commission was particularly forward-thinking or that we have yet to mobilize and systematically address the unique needs of the chronically ill the way other movements have mobilized in the past.

Still, the Commission on Chronic Illness was an important concrete step toward recognizing and addressing chronic illness. It defined chronic illness as any impairment characterized by at least one of the following: permanence, residual disability, an origin in irreversible pathological alteration, or a need for extended care or supervision.5 Now, we have many variations of the same theme. Sometimes, the length of time symptoms must persist differs; sometimes, the focus is on ongoing treatment rather than supervision. Rosalind Joffe, who lives with chronic illness herself and works as a life coach helping executives with chronic illness stay employed, offers three important characteristics experts agree are often found in chronic illness: the symptoms are invisible, symptoms and disease progression vary from person to person, and the disease progression and worsening or improvement of symptoms are impossible to predict.6 I’ve always found the “treatable, not curable” mantra a helpful one in discussing chronic illness, since it allows for all those variances in diagnoses, disease course, and outcomes. In some cases, treatment could be as simple as an anti-inflammatory drug to manage mild arthritis or daily thyroid medication to correct an imbalanced thyroid hormone level. At the other end of the spectrum are diseases like cystic fibrosis, where treatment progresses to include organ transplantation (which is a life-extender, not a cure).

To get a sense of just how broad the spectrum of what we could define as chronic illness is, consider sinusitis, a very common chronic condition affecting some thirty-one million patients annually.7 Its frequency, duration, and treatment (because even those who undergo surgery for it are rarely fully cured) technically fit the basic meaning of a chronic illness, a prime example of the utility of substituting “condition” for “illness.” However, sinus congestion is not the ailment we usually associate with chronically ill patients. That this umbrella term reaches far enough to encompass AIDS is a telling shift and adds to that basic premise that chronic illness is treatable, not curable.

More than a straightforward counterpart of acute illness, the very notion of chronic illness is rooted in social and class consciousness. Ours is a society that values youth, physical fitness, and overachievement. By the middle of the twentieth century, this heightened concern with the perceptions of others played out in rigid social conformity, as well as in anxiety about that conformity. Scholars and writers of the time worried that people were living in “slavish compliance to the opinions of others—neighbors, bosses, the corporation, the peer group, the anonymous public.”8 Given the external events of the time—McCarthyism, the Cold War, the space race—it is not hard to see why maintaining the status quo and the cloak of homogeneity would have been appealing to many, and why in the ensuing years, so many would rebel against that same conformity.

As I write this, the term “self-improvement” conjures up images of extreme dieting and aggressive cosmetic surgeries and enhancements more often than it does industriousness or work ethic. In fact, the drive for perfection often spurs the desire for the shortcuts or immediate results our technology-driven culture makes possible. If science can improve on imperfections, shouldn’t we take advantage of its largesse? The middle of the twentieth century ushered in the idea that we must somehow measure up across all the social and professional strata of our lives. News headlines are filled with stories that track stars’ adventures in surgical reconstruction, and daytime television commercials are rife with weight-loss ads and pitches for other enhancement products that offer big rewards with supposedly little risk. This upsurge in enhancement technologies reflects what physician, philosopher, and bioethicist Carl Elliott calls the American obsession with fitting in, countered by the American anxiety over fitting in too well. The very nature of chronic illness—debilitating symptoms, physical side effects of medications, the gradual slowing down as diseases progress—is antithetical to the cult of improvement and enhancement that so permeates pop culture.

Autoimmune diseases, which affect nearly twenty-four million Americans,9 are a prime example of chronic illnesses that defy self-improvement. At their core, autoimmune disorders occur when the body mistakenly begins to attack itself. The concept first took root in 1957, but in The Autoimmune Epidemic, Donna Jackson Nakazawa points out that it wasn’t until the 1970s that it gained widespread acceptance. While heart disease, cancer, and other chronic conditions had been tracked for decades, as late as the 1990s no government or disease-centered organization had collected data on how many Americans lived with the often baffling conditions that make up autoimmune diseases.10 The mid-twentieth-century America in which the notion of autoimmune disease made its debut represents a pivotal period in the evolution of chronic illness. The country had just moved past the frenetic pace of immunization and research that followed World War Two. Patients’ rights and informed consent began to be recognized as important issues, particularly in the emerging field of organ transplantation, and those issues, along with the advent of managed care plans in the 1960s, contributed to a marked change in how medicine and society looked at disease.

With autoimmune diseases, the specific part of the body under attack gives rise to a wide variety of conditions, from the joints and muscles (rheumatoid arthritis and lupus) to the myelin sheath in the central nervous system (multiple sclerosis) to the colon or muscles (Crohn’s and polymyositis). It isn’t so much a question of whether autoimmune disorders are “new” conditions as it is a question of correctly identifying them and tracing the origin of that fateful trigger. Sometimes, something as innocuous as a common, low-grade virus can be the trigger that jump-starts the faulty immune response, and research suggests many of us carry genes that leave us more predisposed to developing autoimmune disease. However, when we look at alarming increases in the number of patients being diagnosed with conditions like lupus, the role of the environment, in particular the chemicals that go into the household products we use, the food we consume, and the technology we employ every day, is of increasing significance.

Nakazawa takes a strong position on this relationship: “During the four or five decades that science lingered at the sidelines … another cultural drama was unfolding in America, the portentous ramifications of which were also slipping under the nation’s radar. Throughout the exact same decades science was dismissing autoimmunity, the wheels of big industry were moving into high gear across the American landscape, augmenting the greatest industrial growth spurt of all time.”11

It is simply not possible to discuss disease in purely scientific language. Culture informs the experience of illness, and living with illness ultimately shapes culture. From the interconnectedness of the way we work and communicate virtually to the way we eat to the products we buy, the innovation that has so drastically changed the course of daily life and culture has an unquestionable impact on health and on the emergence of disease. Technology and science inform culture as well, and cultural mores influence what research we fund, and how we use technology.

If we look at current perspectives and definitions of chronic illness from the Centers for Disease Control and Prevention (CDC), there is a telling change in focus from earlier iterations. In detailing the causes of chronic disease, which current data suggest almost one out of every two Americans lives with, the CDC listed lack of physical activity, poor nutrition, tobacco use, and alcohol consumption as responsible for much of the illness, suffering, and premature death attributed to chronic diseases in 2010.12 Given that heart disease, stroke, and cancer account for more than 50 percent of deaths annually, and each is linked to lifestyle and behaviors, it is not a surprise that these four factors are highlighted.13 Such emphasis implies something more than merely causation. It ascribes agency to patients, whose choices and behaviors are cast as at least somewhat complicit in their illnesses. This parallels older attitudes toward infectious diseases: if patients weren’t living a certain way (in squalor) or acting a certain way (lasciviously), they wouldn’t be sick. At the same time, it separates certain chronic conditions and the patients who live with them from the forward momentum of medical science: we can kill bacteria, we can eradicate diseases through vaccination, we can transplant organs, but the treatment and prevention of many conditions remain the responsibility of the patient.

Questions of correlation and causation depend on data. It was in the post–World War Two period that the methodical collection of statistics on chronic disease began. In the America President Dwight D. Eisenhower inherited in the 1950s, the average lifespan was sixty-nine years. Huge trials were under way to develop a new polio vaccine and new treatments for polio, and heart disease and stroke were more widely recognized as the leading causes of death from noninfectious disease. In 1956, Eisenhower signed the National Health Survey Act, authorizing a continuing survey to gather and maintain accurate statistical information on the type, incidence, and effects of illness and disability in the American population. The resulting National Health Interview Survey (1957) and the National Health Examination Survey (1960) produced data that helped researchers identify and understand risk factors for common chronic diseases like cardiovascular disease and cancer.14 These developments, which occurred right before the Surgeon General’s seminal 1964 report on smoking and lung cancer, revealed a growing awareness of the connection between how we lived and the diseases that affected us most.

The association between smoking (a behavior) and lung cancer (an often preventable disease), and the policies that followed to reduce smoking in the United States, helped shape public health in the twentieth and twenty-first centuries. From seat belt laws to smoke-free public spaces, the idea that government could at least partially intervene in personal decision making and safety issues started to gain real traction in mid-twentieth-century America. An aggressive (and successful) public health campaign to vaccinate against infectious diseases like polio was one thing; moving from infectious, indiscriminate disease to individual choice and behavior was quite another, and backlash against such public health interventions was strident.

Dr. Barry Popkin, a professor in the Department of Nutrition at the University of North Carolina, is on the front lines of the political debate over taxing soda and other sugary beverages, and he sees in the pushback against policies like the beverage tax the same resistance earlier public health measures faced. He points to the lives saved by seat belt laws and to data showing that taxing cigarettes cuts smoking and saves thousands of lives. “It’s the same with fluoridizing water,” he notes, adding that in the 1950s the American Medical Association accused President Eisenhower of participating in a Communist plot to poison America’s drinking water. “There’s not a single public health initiative that hasn’t faced these arguments,” he says.

At the same time these data collection and public health programs took off, the National Institutes of Health (NIH), a now-familiar and influential research organization, gained prominence. Though originally formed in 1930, the NIH did not take on the kind of power we recognize today until 1946. The medical research boom ushered in by World War Two and bolstered by the Committee on Medical Research (CMR) threatened to retreat to prewar levels, a contraction that politicians and scientists alike were loath to see happen. They rallied to increase funding and support of the NIH. Tapping into the patriotic fervor of the time, they argued that medicine was on the cusp of its greatest achievements, from antibiotics to chemotherapy, and that supporting continued research wasn’t just for the well-being of Americans as individuals but was critical to national self-interest, too.15 This combination of advancing technology and patriotic zeal undoubtedly benefited researchers, and in many ways it benefited patients, too. However, the case for national self-interest also created a serious gap when it came to individual patients’ rights.

Not surprisingly, military metaphors were often invoked to champion the cause: the battle against disease was in full effect. War, long considered “good” for medicine, was also the physical catalyst for the increased focus on medical research and innovations. Military language first came into favor in the late 1800s, when bacteria and their biological processes were identified, and it continued to proliferate as researchers began to understand more and more about how diseases worked. By the early 1970s, the application of military terms to the disease process was troubling enough to attract the precision lens of essayist Susan Sontag’s prose. She wrote, “Cancer cells do not simply multiply; they are ‘invasive.’ Cancer cells ‘colonize’ from the original tumor to far sites in the body, first setting up tiny outposts … Treatment also has a military flavor. Radiotherapy uses the metaphors of aerial warfare; patients are ‘bombarded’ with toxic rays. And chemotherapy is chemical warfare, using poisons.”16 The problem is, this scenario leaves little room for those who fight just as hard and do not win.

It is no coincidence that the tenor of the National Cancer Act of 1971, signed into law in December of that year by President Richard M. Nixon, reflected this military attitude. The act, characterized as “bold legislation that mobilized the country’s resources to fight cancer,” aimed to accelerate research through funding.17 Just as experts and politicians were unabashedly enthusiastic that infectious disease would soon be conquered with the new tools at their disposal, those involved in cancer treatment believed we were on the precipice of winning this particular “war,” too. On both fronts, this success-at-all-costs attitude would have profound implications for patients with chronic illness.

Patient Rights, Provider Privilege: Medical Ethics in the 1950s–1970s

Until the mid-1960s, decisions about care and treatment fell within the domain of the individual physician, even if those decisions involved major ethical and social issues.18 Whether it was navigating who should receive then-groundbreaking kidney transplants or what constituted quality of life when it came to end-of-life decision making, these pivotal moments of conflict in modern-day patients’ rights and informed consent took the sacrosanct doctor-patient relationship and transformed it into something larger than itself.

A major call for ethical change came in the form of a blistering report from within the medical establishment itself. In 1966, anesthesiologist and researcher Henry K. Beecher published his famous whistle-blowing article in the New England Journal of Medicine detailing numerous abuses of patients’ rights and patients’ dignity. In doing so, he brought to light the sordid underside of the rigorous clinical trials pushed forth under the guise of patriotism in the 1940s. The examples that constituted Beecher’s list of dishonor included giving live hepatitis virus to mentally disabled patients in state institutions to study the disease’s etiology, and injecting live cancer cells into elderly, senile patients, without disclosing that the cells were cancerous, to see how their immune systems responded.19 Such abuses were all too common in post–World War Two medical research, which social medicine historian David Rothman accurately describes as having “lost its intimate directly therapeutic character.”20 Is the point of research to advance science or to improve the life of the patient? The two do not always coincide. The loss Rothman describes cannot be glossed over, and the therapeutic value of treatments and approaches is something we continue to debate today. So significant was this disclosure of abuse that Beecher was propelled to the muckraker ranks of environmentalist Rachel Carson (author of Silent Spring), antislavery icon Harriet Beecher Stowe (author of Uncle Tom’s Cabin, and not a relative of Henry Beecher), and food safety lightning rod Upton Sinclair (author of The Jungle). In the tradition of these writers, Beecher’s article delivered an indictment of research ethics so damning that it transformed medical decision making.21

While war might have been good for medicine in terms of expediency and efficiency, it was often catastrophic for the subjects of the clinical trials it spurred. The Nuremberg Code (1947) was the first international document to guide research ethics. It was a formal response to the horrific human experiments, torture, and mass murder perpetrated by Nazi doctors during World War Two. It paved the way for voluntary consent, meaning that subjects must be capable of consenting to participate in a trial, that consent must be given without force or coercion, and that the risks and benefits involved in clinical trials must be explained to them in a comprehensive manner. Building on this, the Declaration of Helsinki (1964) was the World Medical Association’s attempt to emphasize the difference between care that provides direct benefit for the patient and research that may or may not offer direct benefit.22

More than any other document, the Patient’s Bill of Rights, officially adopted by the American Hospital Association in 1973, reflected the changing societal attitudes toward the doctor-patient relationship and appropriate standards of practice. Though broad concepts included respectful and compassionate care, the specifics of the document emphasized the patient’s right to privacy and the importance of clarity in explaining facts necessary for informed consent. Patient activists were critical of the inability to enforce these provisions or mete out penalties, and they found the exception that allowed a doctor to withhold the truth about health status when the facts might harm the patient to be both self-serving and disingenuous. Still, disclosure of diagnosis and prognosis is truly a modern phenomenon. Remember that in the early 1960s, most physicians did not tell their patients when they had cancer; in one study, a staggering 90 percent of physicians described withholding the diagnosis as standard practice. As a testament to these wide-scale changes, by the late 1970s most physicians shared their findings with their patients.23

Certainly the lingering shadow of the infamous Tuskegee syphilis study reflects the ongoing question of reasonable informed consent. In the best-known ethical failure in twentieth-century medicine, researchers in the Tuskegee experiment knowingly withheld treatment from hundreds of poor black men in Macon County, Alabama, many of whom had syphilis and were never told so, in order to see how the disease progressed. Informed consent was not present, since crucial facts of the experiment were deliberately kept from the men, who were also largely illiterate. The experiment began in 1932, and researchers from the U.S. Public Health Service told the men they were getting treatment for “bad blood.” In exchange for participation, the men were given free medical exams and free meals. Even after penicillin became the standard treatment for syphilis in 1947, it was kept from the subjects. In short, researchers waited and watched the men die from a treatable disease so they could use the autopsies to better understand the disease process.24

The experiment lasted an intolerable forty years, until an Associated Press news article broke the story on July 25, 1972. Reporter Jean Heller wrote, “For 40 years, the U.S. Public Health Service has conducted a study in which human guinea pigs, not given proper treatment, have died of syphilis and its side effects.” In response, an ad hoc advisory panel was formed at the behest of the Assistant Secretary for Health and Scientific Affairs. It found the experiment “ethically unjustified,” meaning that whatever meager knowledge was gained paled in comparison to the enormous (often lethal) risks borne by the subjects, and in October 1972, the panel advised the study be stopped. It did finally stop one month later, and by 1973, a class action lawsuit was filed on behalf of the men and their families. The settlement, reached in 1974, awarded $10 million and lifetime medical care for participants and, later, for their wives, widows, and children.25 A formal apology for the egregious abuse of power and disrespect for human dignity did not come until President Bill Clinton apologized on behalf of the country in 1997. “What was done cannot be undone,” he said. “But we can end the silence. We can stop turning our heads away. We can look at you in the eye and finally say, on behalf of the American people: what the United States government did was shameful.”26

The lack of informed consent was a major aspect of the morally and ethically unjustified Tuskegee experiment. Similarly, it is hard to argue that orphans, prisoners, and other populations that had served as recruiting grounds for experiments throughout the late nineteenth century and much of the twentieth century could have freely objected to or agreed to participate in experiments—freedom of choice being a hallmark of supposed “voluntary participation.” Widespread coercion undoubtedly took place. The rush to develop vaccinations and more effective treatments for diseases during World War Two blurred the line between medical care that was patient-centered and experimentation that fell short of this criterion. It is tempting to think such distasteful subject matter is in the past, but considering how many millions of patients with chronic illness depend on and participate in research trials to improve and perhaps even save their lives, it is an undeniable part of the present, too. When we factor in the complex issue of informed consent and the use of DNA in clinical trials today, we can start to see just how thorny these ethical questions remain.

Another poignant example of a breach in medical ethics is revealed by author Rebecca Skloot in The Immortal Life of Henrietta Lacks. The riveting narrative traces the cells taken from a poor black mother who died of cervical cancer in 1951. Samples of her tumor were taken without her knowledge. Unlike other human cells that researchers attempted to keep alive in culture, Henrietta’s cells (called HeLa for short) thrived and reproduced a new generation every twenty-four hours. Billions of HeLa cells now live in laboratories around the world, have been used to develop drugs for numerous conditions, and have aided in the understanding of many more. So prolific are the cells and their research results that some people consider them one of the most important medical developments of the past hundred years.27 And yet despite the significance and immortality of Henrietta Lacks’s cells and the unquestionable profits research on them has yielded, scientists never had her consent to use them, and her descendants were never given the chance to benefit from them.

On the heels of Henry K. Beecher’s whistle-blowing article and heated debate over transplantation and end-of-life care, U.S. senators Edward Kennedy and Walter Mondale spearheaded Congress’s creation of a national commission on medical ethics in 1973. This helped solidify a commitment to collective (rather than individual) decision making and cemented the emergence of bioethics as a distinct field.28 With advances in reproductive technology, stem-cell research, and other boundary-pushing developments in medicine we see today, the importance of bioethics is far-reaching. New rules were put into place for researchers working with human subjects to prevent a “self-serving calculus of risks and benefits,” and written documentation came to replace word-of-mouth orders. The ubiquitous medical chart, once a private form of communication primarily for physicians, became a public document that formally recorded conversations between patients and physicians. In 1974, Congress passed the National Research Act, which created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. This commission was tasked with identifying the principles that should govern experiments involving human subjects and then suggesting ways to improve protection of participants.29

Physicians were divided over the various changes and interventions from politicians, public policy experts, ethicists, and legal experts that emerged during this time period. While some of the most powerful disclosures (such as the Beecher article) came from physicians themselves, many also feared the intrusion into the private relationship between doctor and patient and the shifts in power such interventions made possible. When physicians at the Peter Bent Brigham Hospital (now Brigham and Women’s Hospital) in Boston successfully transplanted a kidney from one identical twin to another in 1954, it was a milestone in surgical history as well as in collective decision making. Previously, in the closed relationship between physician and patient, the treating physician was responsible for exhausting every means of treatment for that patient. Quite simply, this model did not give physicians the answers for the complex new set of problems that arose once kidney transplants moved from the experimental to the therapeutic stage. Was it ethical, some wondered, to remove a healthy organ from a healthy donor, a procedure that could be considered “purposeful infliction of harm”? Even with donor consent, was participation truly voluntary, particularly when family members were asked to donate? How should physicians handle the triage and allocation of such a scarce resource? These questions were too complicated for the individual physician to address alone.

Kidney transplantation, which was soon followed by advances in heart transplantation, raised yet another crucial ethical question for physicians, patients, and society at large: How should we define death, especially when one patient’s death could potentially benefit another patient? For those languishing with end-stage illness and chronic disease, as well as the huge number of patients who would go on to receive transplants in coming decades, there was hardly a more important question. As transplantation increased, it became clear that defining death solely by the stopping of the heart would not work, since waiting for cardiac death put other transplantable organs at risk.

However, transplants weren’t the only impetus for a new definition: the process of dying itself was undergoing enormous change. Part of this change began within the physical hospital itself. Intensive care units became increasingly standard in hospitals beginning in the 1950s, and the advanced life-support equipment they used meant that more people died in the hospital than in their bedrooms. By the 1960s, 75 percent of those who were dying were in a hospital or nursing home for at least eight days prior to death. As critic Jill Lepore wrote in the New Yorker in 2009, “For decades now, life expectancy has been rising. But the longer we live the longer we die.”31 In 2009, the median length of time patients spent in hospice—palliative services for those facing terminal illness—prior to death was 21.1 days. This figure includes hospice patients who died in hospitals, nursing homes, rehabilitation centers, and private residences.32 This physical shift in space paralleled an intellectual and moral shift in attitudes toward death and, inevitably, toward disease. Both now involved and were practically indistinguishable from the institution of medicine and all the machines, interventions, and expectations it entailed.

This also meant that while respirators may have kept many patients alive in that their hearts were beating, the substance and quality of that life and the vitality of their brain function were questionable. No case exemplified this struggle more than that of Karen Ann Quinlan—a case that became every bit as important for medical ethics and end-of-life decisions as the revelations about the Tuskegee experiment had been for informed consent and ethical research. The twenty-one-year-old Quinlan mixed drugs and too much alcohol one night in 1975, and friends found her not breathing; her oxygen-deprived brain sustained significant damage, and she was left in a persistent vegetative state with a respirator controlling her breathing. When it was clear to her parents there was no hope of improvement, they asked her physicians to remove her respirator. They refused, saying she did not meet the criteria for brain death. Remember that practical applications of the definition of brain death were primarily for the emerging field of transplantation, not for issues related to quality of life and prolonging life. Just as physicians on their own had lacked a viable framework for the ethics of organ allocation, this conceptual framework offered little guidance to the Quinlans. Next, the state of New Jersey asserted that it would prosecute any physicians who helped the young woman die. Joseph Quinlan, Karen Ann’s father, took his request to court and was first denied; eventually, the New Jersey Supreme Court ruled in his favor, noting that her death would be from natural causes and therefore could not be considered a homicide.33

Though some low-level basic brain function remained, all of Karen Ann’s cognitive and emotional capabilities were wiped out. Lepore reports that, pressed to testify as to her mental abilities, one medical expert characterized the extent of her injuries in the following manner: “The best way I can describe this would be to take the situation of an anencephalic monster. An anencephalic monster is an infant that’s born with no cerebral hemisphere … If you take a child like this, in the dark, and you put a flashlight in back of the head, the light comes out of the pupils. They have no brain. O.K.?”34

No, there was certainly no road map for scenarios like this in 1975, and the Quinlan case brought the bedside discussion of death to the government. It would take Karen Ann Quinlan ten long years to die from infection, but by that point the legal and ethical quagmire had consumed those inside and outside the medical community. Much as the right to life would grip those of us who watched the Terri Schiavo case unfold in 2005, as her parents fought to keep her feeding tube in place, the Quinlan case was emotionally fraught and captured the attention of many, even those on the outside. The difference between the two cases—one of which is cast as the right to die, while the other represents the fight for the right to life—is, of course, historical and social context.

In 1968, Pope Paul VI issued the influential encyclical letter entitled “Of Human Life,” which argued for the sanctity of life beginning at the moment of conception, a central argument used by pro-life factions. In 1973, the United States Supreme Court ruled in Roe v. Wade, further polarizing both sides of the abortion rights debate. Add to this the growing fascination with the institutionalized nature of dying and the not-too-distant shadow of Nazi atrocities, and the fray the Quinlans joined that fateful day in 1975 was rife with agendas, perspectives, and controversy surrounding death. The Quinlan case embodied all these competing forces and beliefs, and its legacy continues in conversations about how to die today. We are still fascinated with death, though the focus now is on the role of government in end-of-life decisions. The kerfuffle over alleged “death panels” that health care reform precipitated in 2010 illustrates this anxiety over who is involved in dying all too well.

The Advent of Medicare

Because experimentation and exploitation so often involved minorities, the burgeoning movement to reform medical experimentation on humans was closely linked with the various civil rights movements of the 1960s and ’70s. Decisions about experimentation were no longer seen as the province solely of physicians, nor were the attendant issues strictly medical ones. Now, they were open to political, academic, legal, and philosophical debates. At the same time legislation was enacted to better protect human subjects, another groundbreaking development for patients (and for the role of government in shaping health care policy) took shape: Medicare. According to the Centers for Medicare and Medicaid Services, President Franklin D. Roosevelt felt that health insurance was too controversial to try to include in his Social Security Act of 1935, which provided unemployment benefits, old-age insurance, and maternal and child health services.35 It wasn’t until 1965 that President Lyndon B. Johnson got Medicare and Medicaid enacted as part of the Social Security Amendments, extending health coverage to almost all Americans over the age of sixty-five and, in the case of Medicaid, providing health services to those who relied on welfare payments.36

In 1972, President Nixon expanded Medicare to include Americans under sixty-five who received Social Security Disability Insurance (SSDI) payments as well as individuals with end-stage renal disease. The way Medicare was set up, however, did not reflect the chronic conditions many of its recipients lived with. At its inception in 1966, its coverage, benefits, and criteria for determining the all-important question of medical necessity were all directed toward the treatment of acute, episodic illness. This model, along with increased payment rates for physicians who treated Medicare patients, would continue virtually unchecked and unchanged for twenty-five years.37 It reflected the biomedical approach to disease, one that emphasizes treatments and cures, but that approach falls apart when it comes to patients with chronic disease, who make up the majority of Medicare beneficiaries.

Coverage of end-stage renal disease marked the first time a specific ongoing disease was singled out for coverage, and concern over the overall costs of health care prompted Nixon to use federal money to bolster the creation of Health Maintenance Organizations (HMOs) as models for cost efficiency.38 For many patients with chronic disease, HMOs would at best come to represent restrictions on treatment and providers rather than improved access to health coverage. If Nixon thought HMOs might help control health care costs, he was wrong. Health care spending and Medicare and Medicaid deficits have plagued virtually all presidents since Nixon’s time, and reached a pivotal point in 2010, when President Barack Obama helped push through the Patient Protection and Affordable Care Act.

It is not surprising that the moves toward collecting more data to advance our understanding of chronic disease, toward the development of informed consent and codified research ethics, and toward expanded health coverage all happened in the decades immediately following World War Two. While the consequences of these changes are numerous, one of the simplest and most wide-reaching was the creation of a social distance between doctor and patient, and then between hospital and community. As David Rothman puts it, “bedside ethics gave way to bioethics.”39 This increased focus on ethics and collective decision making protected patients in new ways, but the social distance also created a gap in trust between patient and physician. Such a gap left plenty of space for the disease activists and patient advocates who emerged during the various social justice campaigns that followed.