Thirty-year-old Melissa McLaughlin remembers in painstaking detail when she first became sick. It was late October 1994, and the then high school sophomore’s crammed schedule reflected an active, passionate teenager: Advanced Placement courses, dance classes and competitions, the dance classes she taught to younger students, babysitting her siblings, volunteering. As a competitive dancer, she depended on her body to keep up with her rigorous schedule, and it always had.
“Everything was normal,” she says.
The weeks building up to the seismic shift that would turn the energetic teenager into a wheelchair-bound young woman living in constant pain were normal, if hectic. She threw a Halloween party for the dance students she taught, and sat for the PSATs. Her last “normal” day was spent with friends painting the walls of a homeless shelter. By the day’s end, the whole group was sweaty, exhausted, and covered in paint.
“All of us were worn out, but I just never got better,” McLaughlin says, describing how she went to sleep that night and woke up with a high fever and extreme fatigue and body aches. For the first few weeks, she slept twenty-two hours a day, and her doctors initially diagnosed her with mononucleosis. At this point she, like her doctors and most people around her, figured that with a few more weeks of rest she would be fine. That was how an acute condition like mono worked: you got it, you lived through it for a few weeks, and then it went away. Case closed. For many of us, this is the trajectory we associate with illness. We are familiar with both ends of the spectrum: the short, acute infections and injuries of everyday life and the terminal cases of cancer, heart disease, or stroke that have a finite end. Chronic illness is somewhere in the middle, confounding and unfamiliar.
Weeks and months went by, and McLaughlin’s improvement was minimal at best. She still slept much of each day, only dragging herself into class for half-days and often fainting when she was there. She could barely walk but tried to attend dance class anyway, only to fall asleep on a pile of mats in the corner. Her blood tests came back abnormal but not definitive, and as the months went on a variety of diagnoses were handed to her: chronic mono, Epstein-Barr virus, chronic fatigue syndrome (CFS), chronic fatigue and immune dysfunction syndrome (CFIDS). For each she was told there were no treatments, and the approach was reactive, treating symptoms rather than causes. If she caught an infection—which was a regular occurrence—she was given antibiotics. If she had severe migraines, physicians prescribed migraine medication. When her extreme fatigue became even more overwhelming, they told her to get more rest. No one could explain how to make her better, and, just as frustratingly, no one could explain to her what had made her sick in the first place.
While the details vary, her physical manifestations and diagnostic roadblocks could stand in for the experience of millions of Americans over the past few decades. The history of CFS/CFIDS and related “nebulous” conditions in this country is a controversial one; even now, consensus over its name is lacking, and although many advocates fight for education and awareness of the disease, detractors remain. In Encounters with the Invisible, CFS patient Dorothy Wall calls the condition “so blatantly unmedicalized, so subjective, another one of those so-called ‘functional illnesses,’ like irritable bowel syndrome, that have always plagued medical practitioners, presenting symptoms with no known cause.”1 From dissent over labels and diagnostic categories to research dollars and clinical trials, the combination of politics, science, and policy is a potent one.
In All in My Head, Paula Kamen focuses on the other phenomenon at play, one as relevant and entrenched in attitudes today as it was in centuries past: if you cannot cure the patient, blaming the patient often follows. The less we understand about the source of a problem or how to alleviate it, “the more psychological, spiritual, and moral meaning it takes on.”2 History reveals the foundation of this current pattern, as we will see, but when physicians adopt this mindset, it is particularly harmful. In comparing nineteenth-century tuberculosis and late-twentieth-century cancer, Susan Sontag wrote, “Any disease that is treated as a mystery and acutely enough feared will be felt to be morally, if not literally, contagious.”3
This applies to chronic illness, too. We do not like being reminded that there are still limits to modern medicine, and that named conditions exist that might not kill us but will not go away. But if we add to this scenario those illnesses that we can’t name or file under International Statistical Classification of Diseases and Related Health Problems (ICD-10) codes on medical billing forms, conditions we can’t put under our high-powered microscopes or see on advanced imaging tests, then the fear—and, often, distaste—grows. Perhaps if the symptoms can be explained away by claiming the patient is just lazy, or is not making appropriate lifestyle changes, then blame can replace the other niggling emotion: Maybe if it can happen to him or her, it could happen to me.
And likely, it will happen to many people at some point in their life. Chronic illness affects nearly 50 percent of the population. By the year 2025, it is estimated that chronic illness will affect some 164 million Americans.4 Some of the most common are heart disease, diabetes, cancer, and asthma, but that list is by no means exhaustive. Arthritis, lupus, multiple sclerosis, Crohn’s disease, colitis, epilepsy, and thousands of other diseases cause ongoing symptoms and are treatable but not curable. Chronic illness is the leading cause of death and disability in this country, with seven out of every ten deaths attributed to chronic diseases. Eighty-one percent of hospital admissions are a result of chronic illness, as are 76 percent of all physician visits. These statistics come with a hefty price tag, too; 75 percent of a staggering $2 trillion in health care costs in 2005 came from chronic diseases.
As the years passed, there was no doubt that Melissa McLaughlin dwelled in the kingdom of the sick. Her conditions weren’t going away, and they caused ongoing quality-of-life problems and disability. Gradually, some of her diagnoses got more specific: the combination of CFIDS and fibromyalgia explained the fatigue and the pain. Postural orthostatic tachycardia syndrome (POTS), a dysregulation of the autonomic nervous system that interferes with heart rate and other functions, explained the fainting and cardiac issues. Neurally mediated hypotension (NMH), an inability to regulate blood pressure often found in patients with CFIDS, and hypogammaglobulinemia, which increases the risk of infection, were also added to the list.
However, if she thought that having so many labels and acronyms attached to her symptoms would significantly ease the skepticism she encountered toward her pain or improve her treatment options, she was mistaken.
“I had a friend once who said that my illnesses were the hippest thing about me. I’m no trendsetter, but I had CFIDS before it was cool, according to him,” she says, aware of the antipathy that still surrounds the diagnosis of CFIDS. “Some doctors still sniff at the CFIDS/FM diagnoses, call them trash-barrel diagnoses. I say, that’s fine, but unless you know what it really is, then you’re not really helping anything, are you?” she asks.
And therein lies one of the most compelling tensions history reveals: the quest to understand the nature of illness from a biological perspective versus the quest to understand illness from a personal perspective.
Emerson Miller, a forty-eight-year-old who is HIV-positive, has had an experience of illness no less challenging than Melissa McLaughlin’s. Where chronic pain conditions and autoimmune disorders often prove difficult to isolate, the test for HIV/AIDS is all too definitive. Fifteen years ago, after Miller had suffered a flu-like illness that left him seriously ill, doctors tested him and found an incredibly high viral load. It was the mid-1990s, and doctors were flush with excitement over the newly approved triple-drug cocktail used to treat HIV/AIDS, though no one knew the long-term effects of the drugs, and back then incorrect dosing was still a significant problem. At the time of his diagnosis, this barely registered.
“I was so ill that I didn’t care, I didn’t care if I had three weeks to live,” he says. It was a very different dynamic from the immediate experience of those patients who may feel okay but choose to get tested because of known risk factors or disclosure of illness from present or past partners. Miller recognizes this distinction acutely; in his job as a patient coordinator and advocate at AIDS Action Committee of Massachusetts in Boston, he witnesses firsthand the process of diagnosis, acceptance, and ongoing treatment of this next generation of HIV patients. The population is far more diverse now than it was in the early days of the “gay plague”—IV drug users, young heterosexual patients, and an increasing number of African-American and Hispanic women are just some of the groups who join gay men as sub-populations with very different needs—but some things remain the same. Despite many permutations, the stigma associated with HIV/AIDS, and more specifically with its means of transmission, has characterized the epidemic since its beginning and remains a predominant theme in living with the disease. It isn’t simply a matter of the prevailing societal norms of the healthy versus the sick that many of us experience. Miller describes an intricate hierarchy of blame, or perhaps judgment, too: in it, gay people rank above drug users, while hemophiliacs and those whose illnesses can’t be attributed to lifestyle are deemed innocent.
I often see a related form of hierarchy when it comes to suffering: patients who are quick to claim that their pain and “battle scars” are worse than those of other patients are an unfortunate reality in waiting rooms, support groups, and Internet forums. Both scenarios create internal divisions that weaken one of the greatest assets patients have in a healthy world: the solidarity of the illness experience.
“There is still a lot of shame about homosexuality. Even at my age … there is still a lot of shame,” Miller says. “In my opinion, everybody is still a patient.”
The dismissal, skepticism, and controversy surrounding Melissa McLaughlin’s diagnosis and the social stigma and internal hierarchy of illness in Emerson Miller’s story are evidence of even greater shifts in the trajectory of modern chronic disease. On the one hand, McLaughlin’s chronic pain experience rings true to the experiences of millions of patients, particularly women, living with conditions as disparate as migraine/chronic daily headache, irritable bowel syndrome, reflex sympathetic dystrophy (RSD, also called complex regional pain syndrome, or CRPS), and many other conditions still tinged with the shadow of psychosomatic illness. In her memoir of chronic daily headache and Western attitudes toward pain and gender, Paula Kamen contemplates her “greater privilege to complain,” and reflects on her grandparents, who, like most everyone else at the time, were too caught up in survival to focus on maladies such as headaches.5 For that generation, infectious disease was a greater threat to mortality, and we will see the consequences of that loss of urgency and immediacy in later chapters.
One of the shifts has been in the doctor-patient relationship, which has certainly gone through drastic changes from the “doctor as God” complex of earlier times. Throughout much of the twentieth century it was still common practice for physicians to disclose a cancer diagnosis to the patient’s family rather than to the patient. We expect more collaboration from our physicians now, and we bring more information to the encounter ourselves. If we have the wherewithal and resources to do so, many of us will shop around and find a better fit rather than settling for a negative relationship with our health care providers.
In his essay “How to Speak Postmodern: Medicine, Illness, and Cultural Change,” David B. Morris writes that a distinguishing characteristic of postmodern illness (“postmodern” here refers roughly to the period following World War Two) is that the narrative surrounding illness now often involves the patient.6 A postmodern view is one that must move beyond the biomedical model of illness that dominated much of the early and mid-twentieth century, a model based on the idea that the body and its ailments can be described in the language of physics and chemistry. That is, disease is something we cure using all the tools at our disposal.7 This model serves acute illness and injury well, but it falls short for patients who live with ongoing disease.
Given the success in immunization, antibiotic development, and understanding of microbial diseases that characterized the decades following World War Two, this biomedical view of medicine is not surprising. However, with today’s technological innovation and the social interaction made possible by Web 2.0, whereby patients are willing not only to swap stories but also to experiment with alternative therapies, Morris offers up what he calls a biocultural model as a better fit. Here, illness exists at the “crossroads” of biology and culture, a confusing landscape where all parties involved are increasingly aware of the limits of the rigid biomedical model.8 This crossroads is an ever-shifting point at which cultural expectations and assumptions about illness meet scientific inquiry and innovation.
McLaughlin’s and Miller’s stories are glimpses of this biocultural crossroads. In the former, we see the skepticism directed at symptoms and conditions that cannot be verified with a blood test or a lab report. In the latter, we see a disease that is easily identified through testing, whose origins we can trace and whose biology is explored in research laboratories and hospitals around the world. Their diagnostic processes are opposites, yet their stories are bound by shared themes and, ultimately, shared experiences.
In tackling these issues, I face challenges of scope and context. Before we can look at the emergence and ongoing adjustments in how we perceive chronic disease, we need to establish a basic historical understanding of disease itself. How have scholars, scientists, and physicians thought of the body? What does it mean to be sick? To have a meaningful conversation about the present, we need some context of our past.
The Hippocratic oath to “do no harm” takes on added complexity when we factor in the extraordinary means and life-prolonging machines available to us now, but Hippocrates (ca. 460–ca. 377 B.C.) was responsible for more than the pledge so many of us recognize today. He was the first healer in antiquity to move away from the notion that illness and disease were caused by supernatural powers, cosmic forces that could destroy populations at will and that required sacrifices, prayers, and cajoling to spare people. Instead, Hippocrates relied on the power of observation and redefined the role of healer as a true clinician, one who paid close attention to patients’ symptoms in order to understand the nature of their diseases. In removing health and illness from the gods and insisting on natural causes and natural cures, Hippocrates caused a profound shift in agency.9 No longer were patients solely at the mercy of mercurial gods, and no longer were their healers merely present to patch wounds or amputate limbs. This rational, observational medicine centered on the patient, not the disease, and Hippocrates and his followers were interested less in the specifics of singular diseases and more in understanding the natural course of an illness.10
The distinction between illness and disease is one that comes up repeatedly in the broader history of chronic illness, with disease being the objective, evidence-based entity and illness being the subjective, lived experience of the patient. In Hippocrates’ time, disease was believed to result from an imbalance of the body’s natural forces, or humors. Blood came from the heart and was warm and wet; black bile came from the spleen and was cold and dry; phlegm came from the brain and was wet and cold; and yellow bile came from the liver and was warm and dry.11 While centuries of discovery and understanding would eventually disprove the notion of the four humors, fundamental aspects of Hippocratic medicine still resonate today: diseases manifest differently from individual to individual, and a patient’s lifestyle and environment play large roles in determining the course of a disease.12 The Hippocratics’ view was actually quite simple: health represented equilibrium, while illness represented an upset to that harmony.13
Influenced by the work of Hippocrates, Plato (ca. 428–ca. 348 B.C.) also believed that a disruption in the body’s natural forces—earth, fire, water, and air—caused disease, and that the physician’s duty was to advance health by harmonizing body and soul.14 The mind-body connection and the phrase “sound mind, sound body” that we see and hear often these days, particularly in the growing popularity of complementary and alternative treatments and relaxation techniques, are evident in the Platonic ideal of medicine. As Plato transmitted to us in The Republic, Socrates (ca. 470–399 B.C.) viewed health in similar terms, maintaining that virtue, beauty, and spiritual health are mutually dependent, unlike disease, ugliness, and weakness.15 Like Hippocrates, Plato involved the patient in his treatment of disease—not relying on divine intervention—and also assigned the patient at least partial responsibility for disease. Platonic healing depended on “the elimination of all evil from body and soul by means of a change in the way of living,” whereby the success or failure of treatment also rests with the patient.16
Since more of his writing survived than that of the ancient Greeks, the Roman physician Galen (A.D. 129–ca. 199) remains the most prolific ancient writer on medical subjects. He employed Hippocratic theory to try to understand the nature of disease, and his work with animal cadavers—coupled with his fame and self-promotion—led to misunderstandings of human anatomy that would circulate for a thousand years.17 Like others of his time, he believed blood was produced in the liver, and his penchant for extreme bloodletting, sometimes to the point that the patient lost consciousness, derived from the mistaken belief that since women bled monthly and appeared to suffer fewer illnesses than men, bloodletting was an effective way to rid the body of disease. The pulse was another favored topic of inquiry for Galen, but lacking a correct physiological understanding of human anatomy and the circulatory system, his books and writings promoted ideas that would not be disproved until the nineteenth century.18 The power of his fame and accessibility perpetuated incorrect information about disease, much as today’s technology and social media make it possible to widely disperse information and research that lack accurate or substantive evidence.
While ancient times were characterized by the desire to understand the nature of disease and tie it to physiological imbalances as well as lifestyle, the Middle Ages, influenced by the spread of Christianity, reflected a spiritual understanding of disease and plague as wrought by sin. This is a marked shift from the more naturalistic and rational practice of classical medicine. Early Christian thought emphasized the split between the body and the soul (not the body and the mind), a divine purpose and plan for everything, and the implicit subordination of medicine to religion. Physicians were not healers in the same sense; they tended to the body, while priests were concerned with the more important matters of the soul.19
Illness and suffering were viewed as punishment or a test of one’s faith, but the early Church also embraced a mission of healing.20 Since the body was created in the image of God and ultimately belonged to Him, He had the power to heal. The resurrection of Jesus Christ and the glory that awaited the faithful at the Final Judgment were the greatest examples of God’s power. The Gospel of Saint Luke, who was himself a physician, points to several miracles in which the healing power of Christ and his disciples triumphs over bodily disease, including restoring sight to the blind and raising the dead back to life. As Europeans struggled to survive and assimilate the great plagues of the first three centuries of the Christian Era, the notion that “illness is a consequence of sin and not a physical malady to be studied and analyzed as the Greeks did” was rooted in Biblical scripture.21
Christianity was just one of many faiths undergoing adaptation as a result of the changing world. It receives more emphasis here because some of the themes popular in the Christian response to suffering and disease are still evident today. Even the word stigma itself, one so heavily associated with current experiences of disease, has its roots in early Christian tradition. A literal meaning conjures up images of the physical markings of crucifixion, but as psychologist Gregory M. Herek notes, more complex definitions include literal or metaphorical marks that imply an individual is “criminal, villainous, or otherwise deserving of social ostracism, infamy, shame, and condemnation.”22
Prior to the Black Death in the fourteenth century, it had been eight hundred years since Europe had last been besieged by major epidemics. The collapse of the Roman Empire meant less travel and commerce with Asia and, therefore, less contact with new diseases and infections. Now, with fourteenth-century towns like Venice and Genoa emerging as centers of trade and travel with more distant lands, the opportunity for disease and epidemics to infiltrate a new population was ripe.23 Increased commerce meant increased urbanization, and poor sanitation and overcrowded conditions meant the population was particularly susceptible to communicable diseases. This relationship between disease and changes in the way people live and work is a constant in the social history of disease.
Bubonic plague, the cause of the Black Death, was thought to infect humans through the fleas carried by rats, though modern experts believe some human-to-human transmission was possible, given how quickly it wreaked havoc. Killing an estimated twenty million people, the Black Death remains Europe’s most catastrophic epidemic, having wiped out a quarter of the population.24 The impact of such devastation on the European psyche is telling. The continent had become “a crucible of pestilences, spawning the obsessions haunting late medieval imaginations: death, decay, and the Devil … the Grim Reaper and the Horsemen of the Apocalypse.”25 Unfortunately, responses to the Black Death are predictable through the lens of history. To the many who believed the plague was the work of divine retribution, acts of self-flagellation, prayer and fasting, and the religious persecution of Jews and others outside the faith seemed appropriate defenses. Roy Porter recounts the horrific fate of thousands of Jews locked in a wooden building and burned alive, one of many instances of retaliation and violence during the Black Death.26 Physicians, powerless to effect any substantive treatment for individual patients, could do little to quell the public health debacle unfolding.
French philosopher Michel Foucault’s Panopticism, published in English translation in 1977, deals graphically with the response to plague, describing the total lockdown enforced—the census, the front doors locked and checked by specially appointed officials, the total submission of medical and policy decisions to the magistrates. Order trumps chaos, power dominates disease. Given the rapid transmission and onset of the infection, and the lack of concrete physiological understanding of it, the extreme situation Foucault depicts in seventeenth-century France is understandable, if unappealing. Twenty-first-century movies like I Am Legend or Contagion tap into similar fears over uncontrollable outbreaks and the fragility of human life in the face of pathogens we cannot fight.
Medieval attitudes toward disease and the body cast women as the “faulty version” of the male, weaker because “menstruation and tearfulness displayed a watery, oozing physicality … Women were leaky vessels … and menstruation was polluting.”27 As patient narrative, research, and history will illustrate, gender remains an incredibly important variable in the chronic illness experience. Partly, this is because more females than males manifest chronic and autoimmune conditions. However, throughout history, deeply ingrained ideas about women as unreliable narrators of their pain and symptoms, as weaker than men, and as histrionic or otherwise “emotional” have had a profound impact on their ability to receive accurate diagnoses and appropriate care.
On the heels of the devastation wrought by the plagues of the Middle Ages, the Renaissance and Enlightenment were periods of progress and advancement. The invention of the printing press and the resulting printed health material made knowledge about the human body and disease (however incomplete) widely available for the first time. The gains in health literacy that printing made possible over time marked a huge shift in the understanding and treatment of diseases.
By the eighteenth century, physicians still couldn’t isolate the cause of infectious disease, so Hippocratic ideas about individual responsibility for illness continued to dominate. American physician Benjamin Rush emphasized the importance of getting the patient’s history directly from the source, and focused on all the daily habits and behaviors that might play a role in the patient’s illness. His interest in the association between chronic disease and lifestyle is significant, as is his division between acute and chronic disease.
“In chronic diseases, enquire their complaints far back and the habits of life … Pay attention to the phraseology of your patients, for the same ideas are frequently conveyed in different words,” Rush counseled his peers.28 With acute illness, the precise daily habits that took place the week preceding the manifestation of symptoms were particularly important. Rush’s emphasis on patient history as a primary diagnostic tool took place in the context of improved standards of living and transportation across Europe and in the United States, which meant a now-predictable rise in diseases associated with indulgence and inactivity. Relying so heavily on patient history and lifestyle was logical, particularly since there was little else physicians could point to in order to assign cause (or blame) for disease. Other popular theories of the time included a focus on environment and external factors like squalid living conditions and dank areas, though those too brought in associations about wealth, status, and worth. Still, as a precursor to more current attitudes toward patients with chronic disease, this link with lifestyle and behavior is a key concept to carry forward.
The greatest dichotomy of this period, however, was that while physicians gained new skills and attained a more elevated status, patients themselves saw little benefit from these developments. Even the early use of the microscope shows an interesting lack of focus on the patient, and a divergence from the idea of medical research as inherently therapeutic: while physicians used microscopes to study tissue, it wasn’t until the nineteenth century’s breakthroughs in bacteriology that microscopes were used in the process of treating patients.
Simply put, the nineteenth century was the century of the germ. Until physicians could see disease under the microscope, the same kind of guesswork that had characterized disease and its treatments since classical times persisted. For example, well into the nineteenth century physicians believed that illness came from miasmas—the gases that seeped out from subway systems, garbage dumps, and open graves.29 The changes wrought by the Industrial Revolution and the emergence of capitalism affected virtually every part of daily life. More people moved to cities and worked in factories, and greater availability of employment, along with children who could contribute economically to their families, meant an increase in population growth. From unsafe working conditions to slums where infectious disease found places to thrive, a now-familiar historical pattern emerged: the technology that yielded improved transportation and innovations in production also paved the way for a new wave of communicable disease and social anxiety.
A fundamental shift in the understanding of disease—and in the way we perceived patients with communicable and other diseases—began with Louis Pasteur’s identification of bacteria and the role of germs in causing infection. Before that, leeches, laxatives, and brandy were among the most common cures of the day.30 By 1881, Pasteur had perfected his vaccination method, though it wouldn’t be until 1954 that a polio vaccine suitable and effective for humans was introduced.31 Nineteenth-century attitudes toward vaccines prevented universal vaccination from happening. As we will see when we explore current perspectives on vaccines and autism, the combination of fear that the government was encroaching on civil liberties and concern over the safety of the procedures that characterized the opposition to vaccines looms heavily in our twenty-first-century consciousness. The difference between society’s perspectives then and now is that in the years between, vaccines have largely eliminated many of the most harmful public health risks, such as polio and smallpox.
Vaccination is an approach to disease prevention so profound that it is in large part responsible for the emergence of chronic illness as a domestic public health and social issue in the twentieth century. Because far fewer people died of or were crippled and incapacitated by infectious disease, they began living long enough to acquire and suffer from chronic conditions. For example, from 1930 to 1980, self-reported illnesses rose by 150 percent, a clear indication that a population that lived longer wasn’t necessarily feeling better—and an idea that figures prominently in the social history of chronic disease.32
Pasteur’s work on germ theory ushered in the burgeoning field of microbiology. Using this theory, Pasteur’s contemporary Robert Koch was able to identify the bacteria that cause cholera and tuberculosis (TB).33 These infections were scourges, particularly in heavily populated urban areas, and brought with them many unfavorable associations and connotations. Perhaps one of the most famous representations of TB appears in Susan Sontag’s extended comparison of it to cancer. While cancer was once associated with a repressed personality and middle-class anxiety, TB was the stuff of excess emotion and poverty. In Illness as Metaphor, Sontag observed that “TB is a disease of time; it speeds up life, highlights it, spiritualizes it … TB is often imagined as a disease of poverty and deprivation—of thin garments, thin bodies, unheated rooms, poor hygiene, inadequate food … There was a notion that TB was a wet disease, a disease of humid and dank cities.”34 This process, in which identifying the origin of a disease changes—or fails to change—perceptions of the patients living with it, is one we still grapple with two centuries later.
In W. Somerset Maugham’s revealing early-twentieth-century short story “Sanatorium,” assumptions about the “typical” TB patient are powerfully laid bare. In describing one of the patients sent to recover from TB in a sanatorium, the author writes, “He was a stocky, broad-shouldered, wiry little fellow, and the last person you would ever have thought would be attacked by T.B … He was a perfectly ordinary man, somewhere between thirty and forty, married, with two children. He lived in a decent suburb. He went up to the City every morning and read the morning paper; he came down from the City every evening and read the evening paper. He had no interests except his business and his family.”35 All the things that make this patient a surprising candidate—he is gainfully employed, stable, married; in short, a respectable man with respectable middle-class tastes and aspirations—are what stand out here. He did not deserve his unlikely affliction.
The public health response to disease outbreak in America also reflected the nation’s emerging evangelical bent. Since disease was thought to be due to poor hygiene and unsanitary conditions, clean living was not just a health issue but a moral one as well. It fell to religious philanthropists to preach against the sins associated with unclean living, from drinking and immoral behavior to the alleged vices of atheism and greed. Such actions further demarcated the healthy—middle- and upper-class religious activists—from the ill, those languishing in slums whose slovenly living conditions and life choices made them culpable in their sickness. Being able to trace infectious disease to its microbial roots was the first step in breaking down such misconceptions.
Other nineteenth-century developments that influenced the experience of chronic illness today include the advent of anesthesia, the beginning movement toward patient advocacy, and the professionalization of nursing. Until the 1840s, physicians had no effective, safe way to lessen the pain of surgery. The introduction of nitrous oxide, chloroform, and ether produced immense relief from the pain of surgical intervention. It also reflected a shift in physicians’ attitudes toward patients and a higher priority on alleviating suffering.
Another advancement in the consideration of the patient can be traced to the nursing profession. Prominent figures like Florence Nightingale and Clara Barton exemplified the holistic approach to patient care that characterizes nursing, and represented a marked departure from the tendency of other medical professionals to focus on singular aspects of a patient’s condition (i.e., the cause or the treatment). Galvanized by the suffering of soldiers, Nightingale was stalwart in her work to improve living and sanitary conditions for her patients. The patient as an individual, entitled to respect and compassion, was a concept made flesh by Nightingale and the cadre of professional nurses she mentored. Likewise, activists like Dorothea Dix and Alice Hamilton worked to make public the deplorable living conditions and inhumane treatment of the mentally ill and the urban poor.36 This indicated a new interest in health-care advocacy, a concept that would wholly redefine the lives of many different types of patients more than a hundred years later, most especially those with chronic diseases.
The world was still in the grip of deadly epidemics, though, as witnessed by the staggering transcontinental death toll of the 1918 influenza pandemic. Updated research suggests that the strain of influenza virus that sprang up during the 1918–19 flu season killed between thirty and fifty million people globally, including an estimated 675,000 Americans. The pandemic killed more people than World War One itself.37
Successes in identifying infectious disease and the post–World War Two development of antibiotic therapy led to the assumption that though infections might still cause temporary discomfort, they were no longer a serious threat to either survival or quality of life.38 Was this a sign of naïveté? Arrogance? Optimism? Or perhaps, a combination of all three? With the benefit of hindsight, the weakness of this position is easy to see: for one, antibiotics only treat certain strains of bacteria, and are not effective in treating the many viruses that still pose a threat to public health. In addition, as we see all too frequently today with infections like methicillin-resistant Staphylococcus aureus (MRSA) and flesh-eating Streptococcus, bacteria evolve into strains resistant to the medications developed to treat them. As a patient with a compromised immune system who is prone to infections, I know firsthand the danger of antibiotic resistance. As a preschooler, I spent several weeks in an isolation room in a hospital, tethered to an IV pole to receive Vancomycin, the drug used to treat staph infections like the one I had spreading from my ears to my brain. Knowing that some staph infections are now resistant to Vancomycin, a powerful “end-of-the-line” treatment for these life-threatening infections, scares me. Similarly, with only a few antibiotic options left that reliably treat my lung infections, resistance is not just a buzz-worthy topic for me; it is a real concern.
For better and worse, twentieth-century experiences with diseases like polio forever altered the way we view medical science’s ability to treat disease. At last, humanity could respond to the infectious epidemics that had wreaked havoc for centuries and do more than merely identify them—we could actually prevent them. Outside the spheres of public health and research, we don’t hear or talk much about polio anymore; its absence from our lexicon is a luxury modern medicine affords us. But for the generation forced to dwell in iron lungs and the legions permanently crippled by polio, its specter was menacing. Many of the illnesses we grapple with today are a product of the way we live and work, just as living and working conditions in the past contributed to the rise of polio. Roy Porter deftly characterized the complex relationship between human progress and disease when he wrote, “Thus to many, from classical poets up to the prophets of modernity, disease has seemed the dark side of development, its Jekyll-and-Hyde double; progress brings pestilences, society sickness.”39
Though polio is ancient in origin, its emergence as a major medical threat in the 1900s can be traced directly to the processes of urbanization. Spread through infected fecal matter, the dominant strains of the polio virus were introduced early on to infants who dwelled in crowded homes with rudimentary plumbing, sanitation, and hygiene. Once more modern forms of sanitation, waste removal, and waste treatment were developed in the 1900s, the immunity that early exposure to the virus had conferred became less common.40 As immunity decreased, the incidence of the more serious manifestation of the disease, paralytic polio, which involves the nervous system, increased. By the 1950s, polio flared most severely during the warm summer months and primarily affected children. Parents fled urban areas, and communities banned the use of public swimming pools.41 The year 1952 brought with it the worst polio epidemic in American history; 58,000 cases were reported, including 3,145 deaths.42
That same year, 1952, Jonas Salk tested the first polio vaccine. He used a killed virus injected into patients to help build up natural immunity, and in 1954 more than one million children were given test vaccinations.43 Since polio was a disease that primarily affected children, the campaign against it was a cause the American public particularly championed. Children are understandably at the top of the illness hierarchy. By the late 1950s, a live virus was used to produce an oral vaccine, which was more popular since it meant patients didn’t need any shots. The World Health Organization (WHO) made fully eradicating polio a worldwide effort in 1985.44
Industrialization and urbanization were responsible for the emergence of diseases like polio, but changes in the way people communicated were responsible for spreading public health goals, too. Disease wasn’t just about scientific theories; it was a social phenomenon. The America that emerged after World War Two was fighting a war in Korea and was consumed with the Cold War and McCarthyism, and a new form of technology brought these events—and, more importantly, the intellectual and emotional basis for them—into the home. Television was an important player in spreading the “gospel of health” and promoting newly focused public health and medical research goals. A well-run state depended on people adopting a preferred public health agenda, and mass communication of health literature allowed that to happen.45 Putting health information in the hands of the general public took it out of the exclusive domain of the doctor in the laboratory or operating room and brought it into the realm of the patient’s narrative and subjective experience.
It is in this context that we reconsider Melissa McLaughlin’s chronic fatigue syndrome and fibromyalgia, or Emerson Miller’s HIV, the latest additions to an ever-widening range of conditions we can treat but cannot cure.
“The fact that you’re just not going to get better seems unbelievable to most people, I guess,” says Melissa McLaughlin. One frustration for her is people who can’t understand that patients cannot control or fix everything. “It’s easier for them to believe that there is something you can control … There must be something you can do that you aren’t doing! Eating raw foods, forcing yourself to exercise, thinking your way out of it, trying the latest drugs that promise a cure in their commercial: something should work, and if you’re not better, then you’re not working hard enough. It’s frustrating, it’s everywhere (even, sometimes, in my own mind), and it’s just wrong. It’s just wrong: I can’t think or eat or exercise my way out of these illnesses, no matter how hard I try.” Even Melissa’s doctors followed suit, urging her to exercise more often even though it made her pain and fatigue much worse.
On the other hand, we have our great fear of HIV, the infectious disease that does not bend to our will. Shame is often embedded in its mode of transmission, and so far its wily ability to mutate has made it impervious to the very same vaccination process that revolutionized modern medical science. Emerson Miller doesn’t believe he will see a cure in the lifetime of current researchers, and, in fact, he worries that the progress we have made may actually have a negative impact on the search for a cure and on vigilance against the spread of HIV.
“I don’t want the sense of urgency to go away,” he says, hoping that the knowledge there is a drug cocktail that can effectively reduce viral load does not mean people will take the disease less seriously, particularly those who may contract the virus through preventable life choices.
The journey from Plato and Socrates to the Enlightenment and industrialization to more modern public health advances is a circuitous one. By the middle of the twentieth century, the ability of scientists, physicians, and public health officials to alter the course of diseases that once devastated the population made it possible for people to change how they thought about illness and disability; no longer were these considered inevitable, immovable components of daily life. This attitude would have strong repercussions for the next generation of patients, the ones touched by the other big medical emergence of the postmodern era: chronic illness. The period immediately after World War Two was a time of what scholar Gerald Grob describes as irresistible progress, a time when it seemed science was on the brink of curing so much of what ailed us.46 With so many concrete victories to point to, the existence of illnesses that would not go away—chronic conditions that were somehow beyond the reach of medical science—would appear that much more unpalatable.