SEX, DRUGS, AND HEPATITIS B
In 1999 young children’s vaccination rates reached “record high levels,” according to the CDC. Ninety-six percent were protected against diphtheria, pertussis, and tetanus; 93 percent were vaccinated against Hib; 91 percent had shots against measles, mumps, and rubella; 90 percent against polio; and 88 percent against chicken pox and hepatitis B.1 The figures represented a substantial increase in immunization rates since 1992, the year before Clinton took office—and the year that saw the lowest vaccination rates since the Carter administration. That year, close to 20 percent of children hadn’t been fully protected against diphtheria, pertussis, tetanus, or measles, and close to 30 percent hadn’t been vaccinated against polio.2 Figures didn’t yet exist for three immunizations still to be added to the childhood schedule: the vaccines against Hib, chicken pox, and hepatitis B.
In fact, in 1999 the vaccine against hepatitis B was a recent addition to the schedule of child immunizations, even though it had been around for close to two decades. When the first hepatitis B vaccine was approved, back in 1981, it was recommended for a narrow segment of the population: drug users, gay men, some health care workers, and select immigrants. By the year 2000, however, forty-seven states had mandated hepatitis B vaccine for all schoolchildren, and federal guidelines recommended immunizing children not before school or preschool, but at birth. In the two intervening decades, hepatitis B infection, like mumps before it, underwent a gradual but dramatic transformation. As in the case of mumps, it was the vaccines against hepatitis B—there were two—that prodded this transformation along.
Hepatitis B, a blood-borne viral infection that attacks the liver, was a foreign obscurity of little direct relevance to most Americans when its first vaccine was unveiled in 1981. In the era of hepatitis B vaccination, however, the infection morphed into an AIDS-like scourge, and then again into a ubiquitous cancer-causing germ. Over the same period of time, the hepatitis B vaccine reflected a rotating set of national hopes and anxieties. It represented the promise of a new era of genetically engineered pharmaceuticals, a bulwark against the lurking threat of infections imported by immigrants, and an impediment to the carelessly assumed hazards of youth sex and fashion trends. Throughout the 1980s and 1990s, these shifting cultural perceptions of the virus and its vaccine continually influenced the evolution of policies regarding who should be vaccinated against hepatitis B, and at what age.
Hepatitis B vaccination policies evolved in the context of reemerging epidemics of vaccine-preventable infections, a national debate over health care, and, of course, Clinton’s push to expand federal support for childhood vaccination. They also evolved against a backdrop of accumulating vaccine scares—which prompted Time magazine to diagnose the country with a national case of vaccine “jitters” at the end of the twentieth century.3 For some critics, the hepatitis B vaccine came to signify all that was wrong with vaccines and related policies—precisely because it was originally recommended for select adults and ultimately administered to every newborn. The story of hepatitis B vaccine in the 1980s and 1990s illustrates just how and why the nation increasingly relied on the immunization of its youngest citizens to produce a healthy adult populace, even as vocal resistance to child vaccination continued to mount.
A DISEASE OF “HOMOSEXUALS AND DRUG ADDICTS”
While hepatitis had long been a concern of health officials, the disease wasn’t a terribly familiar one to most Americans in the decades prior to the first hepatitis B vaccine’s 1981 approval by the Food and Drug Administration. Two distinct forms of hepatitis, A and B—formerly “infectious” and “serum” hepatitis, respectively—had been known since the 1950s. But countless cases of serum hepatitis had gone undetected, in large part because of the vague symptoms it caused. Infection with hepatitis B virus may or may not cause debilitating fatigue, nausea, and loss of appetite. The acute infection may or may not prove fatal, and those who do recover may or may not become chronic carriers. Carriers, in turn, may or may not become victims, decades later, of hepatitis-induced cirrhosis or liver cancer.4 Hepatitis B’s fairly mundane acute symptoms meant that for decades it was conflated with other conditions. In the 1970s, however, armed with a new set of diagnostic tools, researchers enumerated for the first time more than 200,000 new cases in the United States each year and more than 200 million carriers worldwide. Concurrent epidemiological studies identified those at highest risk of the disease. In addition to health care workers, the list included hemophiliacs, prisoners, gay men, injection drug users, sex workers, Native Alaskans, and immigrants from sub-Saharan Africa and Southeast Asia.5
Those new diagnostic tools emerged from the lab of research physician Baruch Blumberg, who in the 1960s identified a protein in the blood of Australian aborigines that he dubbed the Australia antigen. Australia antigen floated freely in the blood of people infected with serum hepatitis, and in 1967 the researchers determined that Australia antigen was in fact a surface protein on the virus that caused the disease. In related work, Blumberg and his colleagues found that monkeys injected with highly purified Australia antigen did not come down with serum hepatitis; the discovery suggested that non-infectious but still protective material could be separated from the virus itself. The basis for a novel vaccine had been found.6
As development of a vaccine against the infection accelerated in the late 1970s, Blumberg and other infectious disease experts predicted that an effective immunization would save hundreds of thousands, if not millions, of lives.7 But news of the potential vaccine, and of hepatitis B itself, rarely reached lay audiences over the course of the 1970s. The disease had isolated moments in the spotlight: In 1974 the host of the television show Today’s Health came down with hepatitis B when a surgical patient’s blood splashed in his eye; he chronicled in detail the disease’s “mean, sneaky malevolence” on TV and in print.8 Two years later, hepatitis B made headlines again when Blumberg shared the Nobel Prize in Medicine for his work on the disease.9 The press also reported on the 1979 outbreak of hepatitis B among youths who shared needles to take the recreational drug methylenedioxyamphetamine.10 Such stories confirmed for the public the picture then emerging from epidemiological studies, which was that the disease posed a risk only to specific subsets of the population, namely, surgeons and drug users.
On the eve of the hepatitis B vaccine’s 1981 introduction, most Americans thus had little reason to worry about the virus. Early press reports on the shot affirmed this notion. In 1980 the CBS Evening News reported that hepatitis B struck developing countries in Asia and Africa in “epidemic proportions,” whereas in the United States it affected mainly “patients on dialysis, medical personnel, and people living in institutions.”11 When news anchor Dan Rather reported on the new vaccine, he announced that it had been approved for “a disease affecting health workers, male homosexuals, and drug addicts.”12 On NBC the evening news anchor told Americans that “hospital workers get it, so do drug addicts, mental patients, homosexuals, and millions of people in Africa and Asia.” The network’s subsequent segment on the vaccine focused largely on the “medical adventure story” of the virus’s discovery, which a reporter recounted over grainy, choppy footage of Aborigines in native attire.13
Historian William Muraskin has argued that popular representations of hepatitis B in this period were deliberately constructed by a medical profession acting in self-interest. Having been identified as a high-risk group themselves, health care workers endeavored to define hepatitis B infection as an issue “private” to their profession and outside the public’s purview. The media, reliant on the medical community for information about the virus, reported what they were told: that gays, injection drug users, and certain immigrants and refugees were at high risk, and that the spread of hepatitis in hospital settings was controlled through the use of disposable gowns, masks, gloves, and other hygienic practices.14 The health care profession’s internal policy—of voluntary testing for carrier status—became the implicit policy toward the population at large, too. The upshot of this policy was twofold: it prevented “hysteria and discrimination of carriers,” but it also hampered public awareness of the extent of the epidemic and the true risk of infection.15
Muraskin’s assessment downplayed the role of bench scientists, epidemiologists, reporters, and other Americans in constructing hepatitis B’s popular image (however faint) in this period. Representations of the virus as one that posed little threat to “average Americans” were common in the late 1970s and the very beginning of the 1980s, but not merely because doctors willed it this way. This portrayal of hepatitis B made sense given the nation’s health priorities at the time. Cancer and heart disease were by far the country’s top killers; heart attacks alone caused 300,000 deaths a year. Hepatitis, meanwhile, appeared way down the list; in 1981 many more people died of homicide and ulcers than died of hepatitis of any type.16 A disease doesn’t have to cause high mortality, of course, to capture national attention, but hepatitis also failed to align with the nation’s other health preoccupations: soaring hospital costs, environmental scares like the meltdown at Three Mile Island, and a relentless “flurry of strange new ailments,” including Legionnaires’ disease, Lyme disease, Reye’s syndrome, and toxic shock syndrome.17 When a new ailment with characteristics similar to hepatitis appeared in 1981, however, hepatitis’s public image underwent a radical reconstruction. In light of AIDS, hepatitis control would take on a new sense of urgency.
TWO NOVEL VACCINES
The hepatitis B vaccine that was approved by the FDA in 1981 was an unusual product in the history of viral vaccine development. The vaccine didn’t contain live, weakened virus (like the Sabin polio vaccine) or killed, denatured virus (like the Salk polio vaccine). Instead, Heptavax B, developed by Merck, contained painstakingly purified versions of the surface antigen that Blumberg had first discovered, harvested from the blood of people infected with hepatitis B. This novel procedure earned Heptavax B the title of the world’s first “subunit” vaccine against a virus—meaning that it stimulated an immune response by using just a part, or subunit, of the virus, and not the virus in its entirety.18
In a display of awe and enthusiasm for scientific discovery, the same news reporters who had downplayed the disease’s risk for average Americans played up, in the next breath, the new vaccine’s development and its novel form. The vaccine, after all, was fairly big news: as news anchor Dan Rather pointed out, it was the “first completely new viral vaccine in ten years”; it was also the “first vaccine ever licensed in the United States that is made directly from human blood.”19 Newsweek called its blood-derived antigen “ingenious,” and magazines from Time to Glamour touted the vaccine as a “medical breakthrough.”20 Fervent reports in popular and scientific journals proclaimed that hepatitis B would soon join such well-known pathogenic villains as smallpox and polio as a problem of the past.21
This enthusiastic rhetoric was soon dampened by yet another medical discovery. The clinical trials that had tested the hepatitis B vaccine’s efficacy in the late 1970s had included only gay men, who had been identified as being at high risk of the infection.22 When, in 1982, the federal Advisory Committee on Immunization Practices (ACIP) issued its customary recommendations on who should receive the new vaccine, the list included those considered to be at highest risk for the disease, including health care workers with frequent blood contact; prisoners; patients and staff of institutions for the “mentally retarded”; hemodialysis patients; injection drug users; immigrants from eastern Asia and sub-Saharan Africa; and sexually active gay men.23 In the flurry of commentaries that followed in the medical literature, several reports highlighted the unique susceptibility of gay men to the infection. An editorial in the Journal of the American Medical Association identified the same list of groups to target with the vaccine, but added that “the highest HBsAG [hepatitis B antigen] prevalence in the United States is among male homosexuals. . . . Frequency of intercourse, the number of sexual partners, and the prevalence of anal intercourse all contribute to this.”24
In fact, gay media outlets had initially reported with pride that hepatitis B–infected gay men were frequent donors of blood for the vaccine. But the plasma-derived vaccine’s approval in late 1981 was quickly followed by emerging reports of a mysterious new illness causing “immune system breakdown” in what some experts estimated as tens of thousands of gay men.25 Within a year, health officials had documented a high rate of hepatitis B infection not just among gay men, but among gay men who were victims of the new illness, which came to be known as AIDS. The announcement spurred fears that the new vaccine was contaminated with the pathogen causing AIDS, increasingly presumed to be a virus.26 In 1982 and 1983, the press reported that gay men and injection drug users were frequent blood donors for the vaccine, and that many health care workers were refusing the vaccine themselves for this very reason.27 (One physician reportedly confided to her daughter: “I know where the vaccine comes from. It comes from the blood of junkies and alcoholics. And who knows what they’ve got.”28)
The CDC moved quickly to address such fears, announcing in 1983 that of 200,000 individuals vaccinated against hepatitis B since 1982, none had come down with AIDS.29 There were, however, cases of AIDS in gay men who had participated in the vaccine trials, and the theory that the hepatitis vaccine (among other vaccines) had played a part in AIDS’s appearance and spread would gain momentum among certain segments of the population as the 1980s progressed.30
In the meantime, however, a new link between hepatitis B and AIDS emerged in mainstream media reports. This new link, a recitation of the similarities between the two viral infections, would persist in popular and scientific discourse for well over a decade. The analogy between AIDS and hepatitis B infection had been drawn early on by epidemiologists working to discover the causative agent of AIDS. Both diseases, scientists noted, appeared to be transmitted sexually and showed a pattern of infection among injection drug users and blood transfusion recipients. When the CDC in 1983 identified the groups at “high risk” of AIDS—gay men with multiple sex partners, injection drug users, Haitian immigrants, and hemophiliacs—these closely mirrored the list of those earlier reported to be at high risk of hepatitis B.31 These parallels were echoed repeatedly by the media. As a 1985 cover story on AIDS in Time magazine pointed out, both diseases were scourges of “drug addicts, blood recipients and gay men,” and scientists were still uncertain as to whether hepatitis B was a “co-agent of AIDS or merely [a] tagalong infection.”32
As AIDS gripped the nation’s attention, interest in hepatitis also picked up. Media coverage pointed out not only how hepatitis B virus was similar to the virus that caused AIDS, but also how it was worse: fifteen times more prevalent in the population, two hundred times more infectious, far more stable in the environment, responsible for far more deaths, and, unlike the AIDS virus, spread by casual contact.33 But the public could take solace, said a top infectious disease expert, in the fact that a vaccine existed to keep this “cousin of AIDS” at bay.34 This message, that the hepatitis B vaccine was a beacon of hope in a time of fear, was oft repeated in the press. There’s no vaccine for AIDS, Gay Community News told readers, but there is one for hepatitis B, which kills five times as many people each year.35 Mademoiselle issued the same message: “There is no AIDS vaccine yet, but there are two new ones against hepatitis B.”36
That second hepatitis B vaccine, widely available by the late 1980s, was a vastly different product from the first, blood-derived vaccine. Recombivax HB, Merck’s genetically engineered hepatitis B vaccine approved in 1986, contained viral proteins not harvested from infected patients in the clinical setting, but manufactured by genetically engineered yeast in the lab. The vaccine was hotly anticipated by medical and public health professionals for its potential to address the high cost and “theoretical disadvantages” of plasma-based vaccines.37 And they weren’t the only ones excited about a vaccine made with recombinant DNA.
As news of Recombivax HB’s impending approval began to leak, press reports hailed it as evidence that genetic engineering would revolutionize the pharmaceutical industry.38 Scientists and the reporters who quoted them called biotech vaccines generally “exciting and imaginative,” and referred to the hepatitis B vaccine specifically as a “pioneering product.”39 Researchers told the New York Times that biotech shots were “cutting edge weapons” that would globally eliminate not only hepatitis B, but also AIDS and malaria.40 The business press breathlessly reported on the race among “tiny” California biotech firms to produce the world’s first genetically engineered vaccine, and when Recombivax, built on biotech firm Chiron Corporation’s technology, was approved, Venture magazine crowned it one of the best entrepreneurial ideas of 1986.41 The approval of Recombivax HB—the first genetically engineered vaccine and the third genetically engineered pharmaceutical to make it to market—was heralded on the front pages of the New York Times, the Los Angeles Times, the Wall Street Journal, and elsewhere for ushering in what FDA commissioner Frank E. Young called a “new era in vaccine production.”42
By the mid-1980s, that era had been long awaited. Scientists and drug company representatives alike emphasized that the new vaccines would be cheaper and would finally allow for the marketing of hepatitis B vaccine in developing countries, where it was much more sorely needed than in the United States.43 They were enthusiastic for another reason, as well. As described in earlier chapters, Sabin’s oral polio vaccine caused cases of vaccine-associated polio; swine flu vaccine caused neurological disease; pertussis vaccine was blamed for brain damage and sometimes death. Such vaccine-safety scares had been piling up and getting lots of attention. And as a result of lawsuits against oral polio vaccine makers, companies were dropping out of the market, leaving the country reliant on just a few manufacturers for its entire vaccine supply. The genetically engineered hepatitis B vaccine was heralded not only because it held the promise of a new generation of vaccines, but also for its potential to put safety concerns to rest. The Wall Street Journal announced that Chiron had brought an end to the days when vaccine development was “an inexact scientific art.”44 Because the new hepatitis vaccine did not contain a whole virus, it “just can’t do any damage, period,” said a microbiologist at the FDA.45 Researchers promised that genetically engineered vaccines eliminated the “risk of actually getting herpes, hepatitis B or influenza from the injection, since the viruses themselves are not present in the formula.”46 FDA commissioner Young echoed these sentiments in a press statement he made upon Recombivax’s approval: “These techniques should be . . . extended to any virus or parasite.” He went on to state that while the plasma-derived vaccine had never posed a risk of AIDS, the new “lab-made vaccine” should further reassure people. He also strongly urged those at high risk of hepatitis B to take advantage of this “new life-saving protection.”47
A PUSH FOR WIDESPREAD VACCINATION
Young’s plea came as public health officials were bemoaning stubbornly low hepatitis B vaccination rates. In the five years since the ACIP had recommended that gay men, injection drug users, health care workers, and select immigrants be vaccinated against the infection, hepatitis B prevalence had not decreased; it had increased, with rates particularly high among young adults.48 Incrementally, federal recommendations evolved in response. Instead of targeting all risk groups, however, the ACIP’s new guidelines targeted only those guaranteed to have an encounter with the health care system—namely, pregnant women and their newborns.49 In 1984 the ACIP had recommended that all “high-risk” pregnant women be screened during prenatal care visits for hepatitis B and, if found positive, that their infants be immunized at birth to prevent them from harboring the virus.50 But the plan had little impact on total hepatitis B prevalence, because high-risk women were difficult to identify and insurers weren’t always willing to cover the cost of screening them. As a result, the ACIP later noted, the United States continued to add another 3,500 chronic hepatitis B carriers—the unimmunized infants of infected mothers—to the population each year.51
To get around the difficulty of identifying high-risk women, in 1988 the ACIP recommended that all pregnant women be tested for hepatitis B, and, if positive, their infants vaccinated within twelve hours of birth to prevent transmission to the next generation.52 But disease incidence remained high. And at the same time, the demographics of the infected seemed to be changing: the proportion of cases among gay men had decreased significantly since 1982, while the proportion among drug users and heterosexuals with no discernible risk factor had increased. When they released these figures, scientists in the CDC’s hepatitis division suggested that the only way to combat the disease would be to immunize all infants, all adolescents, or both.53 The ACIP agreed, and in 1991 it altered its guidelines yet again, this time recommending that all infants be vaccinated against the disease at birth.54
Health officials acknowledged that the new strategy was necessary “because vaccinating persons engaged in high-risk behaviors, life-styles or occupations . . . has not been feasible,” but also because many infected people had “no identifiable source for their infections.”55 Most people became infected as either adolescents or adults, the CDC reported; but as with mumps two decades before, vaccinating the youngest citizens offered the most expedient means of ensuring healthy adult citizens. “We do not feel that targeting adults for vaccination has worked,” a CDC official told the Boston Globe. “This will be the first time,” she went on, “that a vaccine is recommended for children to prevent a disease that primarily occurs in adults.”56 The vaccination of well children to maintain a population of well adults was, of course, by this time a tried-and-tested approach to public health. In this case, however, children were being vaccinated long before they could engage in the types of social activities, like school or play, generally implicated in the spread of contagious disease.
By the early 1990s, the message that just about every American was at risk of hepatitis B came to dominate media reports on the disease. Outlets from the Philadelphia Tribune to Good Housekeeping reported that a third of people who came down with the disease were not in any of the known risk groups.57 Redbook warned readers that hepatitis was “spreading fast,” and the Boston Globe noted that the infection was communicated by sharing gum, food, toothbrushes, and razors, and by body piercing.58 New York magazine—in a feature titled “The Other Plague”—recounted the stories of a young woman who contracted a fatal case by getting her ears pierced, a young man who was infected when mugged at knifepoint, and a woman infected at a nail salon.59 Frequent mentions of the prevalence of asymptomatic carriers heightened the sense of an immediate health threat: in the words of one reporter, anyone could be one of the United States’ 1.5 million “Typhoid Marys,” unwittingly transmitting hepatitis B to people unaware of the risk.60
Health officials at the CDC were meanwhile considering not just revised recommendations to increase hepatitis B vaccination, but a broader program to encourage higher vaccination rates overall. Measles, pertussis, and rubella were all on the rise at the end of the 1980s, and preschoolers’ immunization rates, as noted in the previous chapter, were shamefully low; health officials ascribed both situations to problems with the nation’s health care infrastructure. The Republican White House, however, disagreed. “The facilities are there . . . the vaccines are there,” said President Bush during the height of the 1989–91 measles epidemic; “make sure your child is immunized,” he instructed the nation’s parents.61 As an incentive for the parents deemed most responsible for the rash of epidemics, his administration proposed tying welfare payments to children’s immunization status.62
A partisan dispute erupted in response to the welfare proposal. Administration officials maintained that individual citizens needed to assume more responsibility for their personal health, while left-leaning members of the public health profession accused the White House of “punishing the poor” and spending more on six hours of the Gulf War than it would take to curb measles.63 In the end, the administration’s welfare proposal was rejected. But while the political battle over measles control raged, some vaccine critics began questioning the measles vaccine itself, as well as the evolving guidelines on who should get it and when. The ACIP had not only added a second vaccine dose to the one previously advised for children; it had also added preschoolers, college students, health care personnel, and international travelers to the list of those who should get the shot. The changes were necessary, the committee wrote, to address the two causes of nationwide measles outbreaks: unvaccinated preschoolers and vaccine failure. Among the roughly 17,000 measles cases that had occurred between 1985 and 1988, 42 percent were in vaccinated people; in some school districts, measles outbreaks occurred even though 98 percent of the children were immunized. As noted in chapter 7, scientists had a few explanations for why this might be: vaccine-induced immunity might be fading with time, and some children might be getting vaccinated at too early an age, when maternally inherited measles antibodies could interfere with vaccine response.64
But the committee’s solution—more shots—struck some as confusing if not downright illogical. “Does it make sense to offer booster shots of any sort if a single shot of the vaccine has not been shown to do the job?” asked one mother, to whom the measles vaccine suddenly seemed “too experimental, too ineffective, and too risky.”65 The risks of the measles vaccine, usually given as the combined MMR vaccine, seemed to be proven by the outbreaks themselves. In its revised guidelines, the ACIP described measles as a “severe” disease that caused encephalitis in 1 of every 1,000 cases and death in 1 of every 1,000 cases.66 This rate of complications was dramatically higher than that measured when measles vaccination began in the 1960s, noted some attentive critics, who wondered whether the fallible vaccine was actually responsible for having created this more severe disease. Joanne Hatem, a physician who had herself suffered an adverse reaction to rubella vaccination, noted that while measles had become a more serious disease, the vaccine had its own flaws, including a far-from-perfect rate of protection and its own risk of encephalitis and death. She advised a tempered approach to immunization: the MMR shot at fifteen months, and then only boosters against individual infections as needed.67
As some parents and physicians received the “more shots” mantra with caution, proponents of a more robust immunization infrastructure found an effective ally in newly elected President Clinton.68 In addition to launching Vaccines for Children that spring, Clinton signed a proclamation supporting National Preschool Immunization Week, an annual week of coordinated efforts to fully vaccinate preschoolers with all federally recommended vaccines, including the vaccine against hepatitis B.69 In the context of a national dialogue about the broken health care system, which Clinton kept front and center during his first season in office, the cost-efficiency of vaccination generally, and hepatitis B vaccination in particular, took on new salience. Vaccinating young children against hepatitis B saved more money than efforts to vaccinate any other group could ever save, health economists calculated, simply because it prevented the greatest number of chronic infections.70
The enactment of Vaccines for Children coincided with yet another broadened set of hepatitis B vaccine recommendations by the ACIP. The committee now advised that all unvaccinated eleven- and twelve-year-olds be protected against the virus, as well as all children under age eleven who were either Pacific Islanders or who lived in households with immigrants from countries with high rates of hepatitis B. Health officials were blunt in justifying the widespread vaccination of adolescents. While universal infant vaccination would ultimately obviate the vaccination of adolescents and adults, in the meantime, vaccinating preteens would drive down disease incidence more quickly. Targeting immigrant children was necessary, they argued, because such children continued to experience “high rates” of hepatitis B infection: 2 percent became infected each year, and 2 to 5 percent became chronic carriers of the virus.71 The numbers may not have seemed objectively high, but they did seem excessive to those like Barbara Hahn, a deaf interpreter who, unable to trace the source of her own hepatitis B infection, pinned it on immigrant children. “Recently, the immigration policies have brought an increasing number of foreign students into our school systems, and the incidents [sic] of hepatitis are much higher in other countries. Is that how I got this disease?” she wondered. “Did I get it from a child who ran into me on the playground or from the little girl who was upset and bit me while I was working at the Cincinnati public schools?”72
This refocused attention on the infectious status of immigrants came at a time when concerns about immigrants, the resurgence of infectious diseases, and the costs of health care were at once prominent and intertwined. During his first year in office, as he attempted to overhaul health care generally and access to vaccines in particular, President Clinton entered into a battle with Congress over his campaign-trail promise to overturn a 1987 ban on the immigration of people infected with HIV.73 Heated opposition to Clinton’s plan reflected fears about an impending wave of immigrants from Haiti, which had a large number of people infected with HIV, as well as the nation’s resurgence of tuberculosis, which was frequently attributed to “immigrants and travelers.”74 Arguments against the importation of additional infections frequently cited the burdensome costs of providing health care to the chronically ill. Growing resistance to the prospect of adding immigrants to these ranks was embodied by California’s passage of Proposition 187, which proposed severely limiting illegal immigrants’ access to public services.75
Immigration anxieties framed the context in which federal hepatitis B vaccination recommendations took shape, even if they weren’t directly cited by the state-level hepatitis B vaccination laws that soon followed. As state health boards and legislatures began taking steps to mandate the hepatitis B vaccine for infants, kindergartners, and seventh-graders, many instead attributed these steps directly to the Vaccines for Children Program. Minnesota’s vaccine task force credited the Clinton program for the extra funds and discount pricing that made it feasible to require the hepatitis B vaccine for seventh-graders.76 State health officials in Colorado, Louisiana, Pennsylvania, and elsewhere also credited the administration for making it possible to require the new vaccine for students, hold school-based drives to encourage vaccination, and enforce new mandates—since students were now guaranteed the vaccine, regardless of their ability to pay.77
In addition to new federal funds and guidelines, state-level legislators and health officials had other reasons for requiring hepatitis B vaccination of youths beginning in the early nineties. When states such as Colorado, Idaho, California, and Pennsylvania mandated the vaccine for preteens, health officials and lawmakers cited as justification the growing popularity of tattoos and body piercing.78 Adult attitudes toward teenage body piercing, a trend that exploded in the nineties, often reflected what historian Paula Fass has referred to as the socially constructed perception of “the rocking, highly sexualized teenager.”79 In countless articles and talk shows devoted to the topic of body art in the late nineties, parents and doctors expressed bewilderment and concern over the trend and its hazards: according to the reports, children as young as eleven and twelve, often influenced by sex-symbol celebrities, were getting pierced and tattooed in record numbers. They were also facing skin rashes, swelling, scar tissue, tetanus, HIV, and hepatitis B as a result. “This fad communicates status, fashion-hipness—and unfortunately, disease,” noted Prevention magazine.80 It was also explicitly linked to sex: pop stars like Janet Jackson popularized the idea that because piercing created a “great sensation,” it could be “very sexual.”81 Body art—which was also linked in many reports to an increased risk of smoking, alcohol, and drug use—came to epitomize oversexualized youths at risk of a complex set of dangers and diseases. Fortunately, some commentators noted, at least one of these risks could be prevented with a vaccine.
By this time, the rhetoric that had tightly linked hepatitis B to AIDS less than a decade earlier was beginning to diminish, although an association between the two infections persisted in educational materials urging teens to get vaccinated. A 1994 educational campaign by the National Foundation for Infectious Diseases featured “sexpert” Dr. Ruth, who continued to inform audiences that hepatitis B was “100 times more infectious than HIV.”82 But as the 1990s progressed, characterizations of hepatitis B as a sexually transmitted disease increasingly gave way to characterizations of the virus as a preventable infection linked to cancer. “This is a very safe and effective way to avoid what is a terrible disease that causes cancer and other chronic problems,” said the head of Colorado’s health board regarding shots against hepatitis B.83 Indeed, this particular portrayal of the infection became increasingly prominent as parental resistance to the vaccine began to emerge—as it did when Colorado attempted to mandate the vaccine for school.
REJECTING HEPATITIS B VACCINE
The majority of state laws and regulations mandating hepatitis B vaccination for children went into effect between 1993 and 1998. Their adoption was largely streamlined by federal enthusiasm for universal vaccination and funding support for recommended vaccines, in addition to cultural preoccupations with the lifestyles of body-pierced youths and disease-harboring foreigners. While many of the laws and regulations were uneventfully adopted, a few minor debates did erupt. When Colorado’s health board proposed requiring the shot for kindergartners and seventh-graders in 1996, doctors and health officials were split on the issue. As with mumps nearly three decades before, health and medical experts in the state were neither united nor entirely clear on the urgency of vaccinating children against hepatitis B. Some noted that the American Academy of Pediatrics had advised the immunization of all older children only “where resources permit.” Some pointed out that the disease was unlikely to spread among elementary schoolchildren, “unless you have an infected child who’s a biter or who has a blood spill.”84 Others trotted out arguments about the risk-taking behaviors of teenagers: “We have a lot of children in this community who feel they’re invincible. . . . [T]hey experiment with sex and drugs, then die young because they get chronic hepatitis. . . . Since this is something we can prevent, we should prevent it,” said a county health director.85 But that argument didn’t hold water for everyone. “Just because we CAN vaccinate, does that mean we always should?” asked a Colorado state health department hepatitis B expert. “It’s a worthy public debate.”86
Those in favor of the health board’s proposal prevailed. By the fall of 1997, parents of all of the state’s incoming kindergartners and seventh-graders had to either provide proof of their children’s hepatitis B immunization or sign a form claiming a medical, religious, or personal exemption to the requirement.87 But as the requirement went into effect, popular opposition began to mount. In 1999 Patti Johnson, a member of the Colorado State Board of Education, began a campaign to encourage parents to question the vaccine, citing the small number of hepatitis B cases in young children (279 in 1996) and the large number of purported hepatitis B vaccine-related injuries reported to the federal government (24,776 between 1990 and 1999). Drug companies Merck and SmithKline Beecham hadn’t adequately tested the vaccine for long-term safety in children, she charged, and too few had questioned why this inadequately tested vaccine was being given to children to stop a disease that affects “IV drug users, prostitutes, sexually promiscuous persons, health care workers exposed to blood, and babies born to infected mothers.”88 At the Denver Post, columnist Al Knight repeatedly chimed in on the vaccine’s hazards, informing readers that New Jersey’s governor, Christine Todd Whitman, had vetoed a bill to require hepatitis B vaccines for schoolchildren.89 Whitman had cited the vaccine’s unknown duration of protection in her decision to postpone signing the bill; in Colorado her “refusal” to sign was described as a bold act that questioned the wisdom of vaccinating the young to prevent a disease brought on by adult behavior.90
Vaccine doubts weren’t new to Colorado; the state was, after all, the birthplace of Mothering magazine. By the late 1990s, several of the state’s communities had become renowned for their large numbers of children claiming personal exemptions to vaccine requirements and for the outbreaks of vaccine-preventable diseases, like pertussis, which sometimes ensued.91 Nor were doubts specific to the hepatitis B vaccine new, either in Colorado or elsewhere. Years before state laws requiring the vaccine for children went into effect, the National Vaccine Information Center worried about the lack of studies examining the shot’s long-term effects on children. They also questioned why the vaccine should be given to all infants, most of whom didn’t belong to any of the identified risk groups.92
But what was new in the last few years of the twentieth century was that suddenly concerns about the hepatitis B vaccine acquired new currency and a large audience, not just in Colorado, but across the country. In 1997 the ACIP again endorsed a “birth dose” of hepatitis B vaccine for all infants. The recommendation was part of a new strategy to eliminate transmission of the virus entirely in the United States; it also fell in line with a World Health Organization goal to make infant hepatitis B immunization routine globally before the century’s end.93 But in 1998 the national media reported on France’s decision to halt hepatitis B vaccination because of fears that the shot caused neurological damage, particularly multiple sclerosis.94 And early in 1999, the television news program 20/20 asked whether hepatitis B was “smart preventive medicine or an unnecessary risk.” The report featured several adults, including several health care workers, whose neurological and autoimmune symptoms—resembling multiple sclerosis, arthritis, lupus, and Guillain-Barré syndrome—set in after getting the vaccine. The broadcast also focused in depth on the story of Lyla Rose Belkin, a healthy infant who went to sleep the night she received the vaccine and never woke up.95 In the spring of 1999, stirred by these reports, a House subcommittee held hearings on hepatitis B vaccine-safety concerns. The hearings, chaired by Florida Republican John Mica, were called to address charges that a vaccine now required for nearly all children had been implicated in a range of diseases and disorders. The testimonies demonstrated how dramatically the hepatitis B virus and its vaccine had been reframed by the nation’s shifting social and cultural concerns over the course of the 1990s.
HELPING OR HURTING?
On the morning of the hearings in May 1999, Representative Mica informed those in the chamber that they were assembled to answer four questions: Did the benefits of hepatitis B vaccine outweigh its risks? Were its hazards adequately disclosed to parents? Were the adverse reactions it caused being adequately studied? And what conflicts of interest existed when the CDC considered how and whether to recommend a vaccine?
In the testimony that followed, proponents of the vaccine emphasized the seriousness of hepatitis B infection, its tendency to cause untraceable infections, and the everyday challenges faced by those living with the virus in their bloodstream. People who spoke out against the vaccine—primarily people who had been injured themselves, or whose children had been injured following vaccination—emphasized the low infection risk of most infants, the greed of drug companies, and the dismissal they faced from doctors and other health professionals. When witnesses for each side made reference to hepatitis B itself, they seemed to be discussing two different diseases. To officials from the CDC and members of the American Liver Foundation and Hepatitis Foundation, hepatitis B was a lethal disease that infected 1 in 20 Americans and caused 5,000 deaths each year, many of these from liver cancer. To members of Massachusetts Citizens for Vaccination Choice and Parents Requesting Open Vaccination Education—and to the doctors and parents who had witnessed blindness, deafness, seizures, and other effects following vaccination—hepatitis B was instead a rare sexually transmitted infection that threatened drug addicts and foreigners, and posed no risk to American infants from healthy families.96 Two decades of varied representations of the disease accumulated in the House chamber, a potent illustration of how value-driven perceptions of the disease and its vaccine were destined to make objective answers to Mica’s questions a near impossibility.
By the date of the hearings, forty-two states had adopted laws or regulations requiring the vaccine for school or day care, and thousands of side effects following vaccination had been reported to the Vaccine Adverse Event Reporting System established by the National Childhood Vaccine Injury Act. Mica noted that he had called the hearings in part because of a New Hampshire report indicating that the state had had 3 cases of hepatitis B and 48 adverse reactions to the vaccine in children under ten. At his request, FDA statistician Dr. Susan Ellenberg testified that in the entire country in 1997, 95 children under two years of age contracted hepatitis B, and 43 had died following hepatitis B vaccination. But “the problems are all in the interpretation” of those numbers, said Ellenberg, because the reporting system cast a very wide net. Since anyone could report a reaction or death as being probably caused by a recently received vaccine, none of the reactions or deaths were definitively attributable to vaccines until investigated—and in the case of hepatitis B vaccine, that hadn’t happened yet. CDC scientist Harold Margolis assured Mica that the agency was conducting several ongoing studies. But to Mica, all of the present evidence added up to the fact that when parents were asked to vaccinate their babies against hepatitis B, they did so with insufficient—indeed, nonexistent—knowledge of the true risks of the vaccine.
That the perceived danger of the vaccine had begun to overshadow the perceived danger of the disease is a testament to changing attitudes toward HIV and hepatitis B’s relationship to that infection. In the 1980s and into the early 1990s, AIDS was a horrific and unmanageable specter, and hopes for an AIDS vaccine were projected onto the hepatitis B vaccine, which came to stand for the promise of triumph over insidious blood-borne infections. But by the late 1990s, the spread of AIDS had begun to come under control in the United States, thanks to campaigns that urged the use of condoms and the effectiveness and availability of antiretroviral drugs. The manageability of AIDS tipped the balance—slightly, but perceptibly—between fears of the disease itself and fears of the vaccines that critics now blamed for AIDS and for a host of the nation’s autoimmune diseases. With AIDS under relative control, and with the push for broader hepatitis B immunization requirements, comparisons between the two diseases were no longer convenient for health officials, who increasingly emphasized the fact that anyone—not just drug users and promiscuous individuals—was at risk of hepatitis B. Indeed, in the course of the hearings, only one fleeting mention of the disease’s comparability to HIV was made. At the same time, parents of vaccine-injured children continued to quote from CDC publications stating that the disease was sexually transmitted, asserting that it was, as Lyla Rose Belkin’s father put it, an infection of “junkies, gays, and promiscuous homosexuals.”97
The emphasis on defining the precise nature and probability of the hazards posed by the hepatitis B vaccine was also a product of the skyrocketing emotional value of children in the very last decades of the twentieth century. Historians Paula Fass and Mary Ann Mason have argued that this emotional value began to soar in direct response to the breakdown of marriage in the same period: as divorce and non-traditional living arrangements became increasingly common, the bond between parent and child came to exceed the bond between spouses in emotional importance.98 Spouses, that is, came and went, but children provided a source of emotional gratification that was supposed to last a lifetime. This attitude was evident in the testimony of Marilyn Kirschner, the single mother whose teenage daughter had become incapacitated by seizures, migraines, nausea, and fatigue that grew worse after each of her three hepatitis shots: “This vaccine has ripped out a part of our lives that can’t be replaced,” she said. The parents whose children suffered from hepatitis B itself felt similarly, as Thelma Thiel, chair of the Hepatitis Foundation International, revealed when she spoke of the loss of her “precious” four-year-old son to cirrhosis. This commonality between parents on opposing sides of the issue was well articulated by Barbara Loe Fisher, who testified in favor of more robust vaccine-safety testing. “Whether death or disability is caused by a disease or a vaccine, the pain is the same,” she said. “We are all here because we love our children and we want to protect them from harm.”99
To parents on both sides of the hepatitis B debate, statistical figures concerning the risk of disease or vaccine injury were meaningless when faced with the lived reality of caring for an irreversibly damaged child. For all the commonalities shared by parents living with sick or disabled children, the origins of their plights led to slightly divergent but ideologically similar attitudes toward state involvement in family health matters. Parents of hepatitis B–positive children spoke of the stigma of the disease, the constant fear that their child would pass the infection to others, and their deep desire that parents in their communities would comply with state rules and have their own children immunized. Parents whose children became ill after vaccination, however, saw in those very same rules a state acting in the interest of itself and its corporate allies with little regard for the welfare of individual children and their families. But that didn’t always mean that they wanted less state involvement—like the parents of children infected with hepatitis B, they often wanted more: more oversight, more care, more attention paid to their concerns. Said one mother of a vaccine-injured daughter: “Lindsay, nor anyone [sic], should have to suffer like this because scientific studies weren’t done to determine if the vaccine was safe to give to every child. My daughter shouldn’t have to suffer like this because government officials and drug company executives didn’t do their jobs.”100
In the hearings, scientists and citizens were given equal time and attention by the assembled lawmakers. That lay and scientific testimonies were equivalently valued on Capitol Hill on that day is just one illustration of the degree to which scientific authority had been eroded over the previous quarter century. This erosion had been accomplished in large part through the social movements that had first gotten under way in the 1960s and 1970s, and whose influence on the opinions of vaccine critics was still discernible in 1999. The emphasis that Mica and vaccine-injured witnesses placed on the uncertainty of the vaccine’s long-term safety and protection was made possible by the now-entrenched risk-oriented rhetoric of the environmental movement. The predominantly female patients who recounted their struggles to get male doctors to believe that their symptoms were real and vaccine-related recalled the anti-hegemonic discourse of the women’s movement. The influence of the consumer movement, too, was evident in the pervasive distrust of both government and industry scientists on display. One reportedly vaccine-injured woman, a public health nurse from Indiana who asserted that she was not anti-vaccine, testified that she was troubled to learn that Merck scientists had attended CDC meetings held to assess vaccine safety. “[How] can an employee of a pharmaceutical company that manufactures the vaccine be objective in designing experiments to show fault in a product that generates close to $1 billion in sales for his company?” she asked.101 The accusation was voiced again and again in the debate over hepatitis B vaccine. For by the late 1990s, the newly vaccine-worried and longtime vaccine skeptics alike perceived an abuse of power and violation of trust on the part of industry and government officials engaged in the pursuit of public health.
In the immediate aftermath of the hearings, health officials voiced concern that “antivaccine groups” were gaining ground, getting the media interested, spreading word of vaccine risks on the Internet, and “gaining the ears of state and federal legislatures.”102 One specific fear they voiced was that such groups would successfully repeal hepatitis B requirements in the states where they existed—a logical response to the crisis of faith that seemed to be growing ever more deeply entrenched. But nothing of the sort happened. When New Jersey attempted to mandate the hepatitis B vaccine for its schoolchildren later in 1999, legislators, in response to parental concerns, wrote in a clause permitting parents to exempt their children from the requirement for “personal” reasons. Nervous state health officials, fearing an unenforceable mandate from the legislature, decided to write and adopt their own more restrictive rule while state lawmakers were in recess. The move infuriated vaccine-worried parents.103 “Even if there is only a slim chance that my perfectly healthy infant might die from a Hepatitis B injection, the fact that we are talking about chances at all is appalling. Since when did the New Jersey State Health Department legitimize gambling with lives?” asked New Jersey mother Laura Maschal.104 Despite parental worries and the legislative “skirmish,” however, the hepatitis B requirement went quietly into effect. By 2001 the vaccine was required of all elementary- and middle-school children in the state.
Maschal’s opinion and the venue in which it appeared—the New York Times—were testament to the fact that the hepatitis B vaccine was playing a part in keeping debate about the risks of vaccines and government’s power and ability to manage them in the public arena. But even as this debate continued and grew—as it would in the 2000s—most Americans seemed willing to accept the state in the role of “superparent,” trusting it to determine the best policies for the welfare of their well children.105 By 2002 all but three states—Alabama, Montana, and South Dakota—had adopted laws or regulations requiring the vaccine for children in day care, grade school, or both. That year 88 percent of the nation’s children were vaccinated against hepatitis B. The figure climbed to 92 percent the following year, where it held steady through the early 2010s.106
The federal policy recommending the universal vaccination of children against hepatitis B, and the historical moment in which it was born, represented the apex of the new era of vaccination heralded in the late 1960s. The state-level policies requiring the vaccine for all children were made possible by the consolidation of federal authority made manifest in the Vaccines for Children Program. Federal and state policies, meanwhile, embraced the vaccination of infants, placing significant health-citizenship responsibilities on the shoulders of the nation’s youngest members. Hepatitis B was never considered a “childhood” disease, but the presence of an effective vaccine made it possible for health officials and health care providers to treat it like one. This approach represented a convenient route to a healthy populace; as one pediatrician put it, “at least with infants you can capture them because you know you see them at birth.”107 The vaccination of all children, as opposed to those at highest risk of infection, was also more cost-effective than screening for those at high risk of hepatitis B. Moreover, it conformed to the principles of early universal vaccination against childhood diseases that had been worked out decades before, with the administration of vaccines against rubella and mumps to all children at an early age to ensure a healthy adult population in the future.
Policies dictating the hepatitis B vaccine’s recommended use evolved in a landscape of shifting federal resources and changing scientific and cultural ideas about the disease itself and the risks it posed. But even as these policies reached their apex, doubts about the hepatitis B vaccine specifically and vaccines generally continued to brew. In the decade that followed, vaccine-skeptical discourse would reach a fever pitch in popular media. As a result, when state lawmakers would attempt to pass universal school mandates for another sexually transmitted, oncogenic infection—human papillomavirus (HPV)—in 2006 and 2007, their efforts would meet with public outrage and legislative failure. Looking back, laws mandating the vaccination of all children against hepatitis B appear to have found a temporarily open window. Even as they went into effect, however, the window was already falling shut.108