It says something that at many medical schools, a student’s first contact with a patient involves a cadaver in an anatomy lab. For most medical students, the dissection of a human body, one often shared with a partner or two, is a rite of passage that reveals the physical, emotional, and spiritual reality of medicine while shielding them from the more challenging task of facing a living, breathing human being in an examination room. As much as people talk about the challenge of this process, it is far more difficult to work with a patient who can feel the impact of what a physician may say or do.
It also says something about my medical education, at Case Western Reserve University, that my first encounter with a patient was with a woman expecting a child. All students were introduced to an expectant mother whom they would visit in medical settings and on house calls throughout their education. It was an incredibly powerful affirmation of what medicine is about and spoke volumes about what the school valued and why I went there. Although respect and empathy are the expected norm in all cases, the cadaver model impresses a doctor in training with the facts of anatomy and, let’s face it, death. The Case approach, which sent entering med students to visit women at home, take their vital signs, and guide them through the process of obtaining prenatal care, emphasized life.
My first patient, a woman named Yvonne Anderson,1 lived in a tough neighborhood off Euclid Avenue in downtown Cleveland. She was single and younger than I was, but when we met, I was the nervous one. I was twenty-three years old and feeling acutely the extent to which I didn’t know medicine. But Yvonne sensed that she and I were learning the ropes of the medical system together and that I might at least help by being inside it—something she certainly didn’t feel as an African American first-time mother. Maybe I could at least help Yvonne navigate the system and get the best care for herself and her soon-to-arrive baby.
When I made my first home visit to Yvonne’s, it was an epiphany. I knew medicine would stretch my mind about science. I didn’t know how much it would open my eyes to cultures, the stuff for which there was no section of books in the medical library. I was about as white bread as anyone could be and walked down Yvonne’s street for that first visit with my heart in my throat (medical term). All antennae were activated, as the neighborhood was not one I would have walked through even in Paterson or New York. Shades were drawn on every window in every house, including hers. The doorbell didn’t work, and the lack of response to my knock had me ready to spin around and head for cover. Then the door opened, and Yvonne gave me a stern look and walked me through dark rooms to the kitchen, where I would be expected to perform before her skeptical mother. One ceiling bulb, four mismatched chairs, and a table were the setting for the interrogation. Short-cropped answers from them, too-long, overcomplicated verbiage storms from me, and then we actually had a conversation. I was so relieved when a first laugh seemed to signal détente. By the end, Yvonne was making it clear that it was the two of us who would work together and win over her mom, who knew she would end up taking care of everything anyway. She made clear she knew more about pregnancy and childbirth than I did, and I didn’t mind getting a real-world education from her.

It worked out, and I did indeed learn a ton from the Andersons. They taught me that medicine always exists in a human context. That context shapes what people hear. White and black, side by side, hearing the same words from the obstetrician—we often had different interpretations. I wasn’t naive enough to think this wouldn’t happen, but I never expected how extensive it would be and how much it could affect the quality of care and health itself. There was so much suspicion about doctors’ motives and so little ability to pay for the nonprescription part of medical care that it was eye-opening. It was also hard to grasp, without seeing it up close, the impact of the stress of poverty and the anxieties that come with living in an area so rife with crime that everyone kept the shades drawn to confuse would-be burglars. (It took me a while to figure out that safety was the reason the Anderson home was so dark. In a crime-ridden neighborhood it was best to keep your business, and your possessions, out of view.) My thinking and my way of communicating with patients were forever changed by my experience with Yvonne.
In 1976, when I arrived at Case, much of Cleveland lived with the pressures that Yvonne and her family dealt with every day. Labor issues and foreign competition in steel, cars, and other industries had destroyed so many jobs in the city that it was steadily losing population. In two years, Cleveland would become the first big city to default on its debt since the Great Depression. Then there was the environment. The air and water in Cleveland were notoriously polluted. In 1969, an oil slick on the Cuyahoga River caught fire. A river on fire! First-year medical students were given T-shirts with that image stating, CLEVELAND: YOU’VE GOT TO BE TOUGH. But it was a fabulous place to learn about medicine and people’s lives and how the two collided. My friends and I all volunteered in the funky Cleveland Free Clinic in the evenings, getting a rich supplement to our cultural education. By the time Yvonne delivered her healthy daughter, Annette, and I went along to the pediatrician visits, I felt I could navigate things better for my patients. I was going to class to learn about the body, but it was in the clinical experiences that I learned how to help people.
It was extraordinary to be first connected to hospital medicine through a birth. The word awestruck aptly captures what it was like to be a clueless medical student in the delivery suite, serving half as physician, half as patient advocate. Fortunately, not much was expected, because I was wobbly kneed through the whole experience. In the presence of a miracle, doing more than standing with mouth agape was beyond me. By the end of medical school, I had delivered more than a dozen babies, including twins, and done minor procedures postdelivery. Participating in the joy with parents at a delivery and understanding the remarkable state of pregnancy drew me to the field. But the need and the impact of emerging science were less evident to me in obstetrics, and I eventually turned away.
I was somehow compelled by basic scientific research, especially in the area of molecular biology. No one with even the slightest interest could have missed the fact that by the 1970s we had entered a revolutionary period. Great advances were being made toward understanding the chemical secrets of life itself, and with each new discovery, we were getting closer to designing more rational treatments that would save lives and ease suffering. These advances created their own big questions for politicians, religious leaders, government regulators, and everyday citizens. The city of Cambridge, Massachusetts, had already banned recombinant DNA research by my first year of medical school. All of society would be engaged in issues prompted by the promise and peril of the biology revolution. I could imagine nothing more compelling.
But as exciting as the scientific moment seemed, I wasn’t completely confident in my abilities to participate. In undergraduate classes, I loved organic chemistry, which is the course almost everyone considers the make-or-break subject for people who hope to go into medical school. But organic chemistry is like mathematics, a language. You learn certain fundamental rules for how things relate and you can put them together in creative ways. That nature has done so to make the building blocks of life—proteins, RNA, DNA, lipids, and sugars—captures the imagination. Add the spark of being able to modify things by the same rules, and the language of chemistry starts to become song. But I was just a student, inspired but not equipped to really make the music. I assumed that was only for those born with a determination to be and do chemistry in capital letters. It has a complexity and diversity that can seem almost infinite and was, at the time, opaque and mystifying to me.
The connection to chemistry came back in a wave in the first class in medical school.
That first course, in a subject given the vague title of Metabolism, was supposed to give us a firm grounding in the ways that cells—the fundamental units of life—create, store, and use energy derived from nutrients to assemble and maintain themselves. It was remarkable to me how the reactions and principles of organic chemistry turned into the very basis for life. It was also remarkable how complex it all rapidly became. For a few of my fellow students, who had studied a great deal of science as undergraduates (I was an English major), the subject was so easy that they sat and read the morning New York Times while grunts like me struggled to understand the lectures. I didn’t know whether I was not as bright as my classmates or whether there was something unusually hard about this course, but I truly struggled with it. Also, I felt uncomfortable just saying, “I’m stuck; help me.” For all I knew, I was the only one feeling so lost.
Unlike at some other schools, where competition was overt and the general attitude was “survive it if you can,” we were encouraged to think of each other as colleagues who learned together. In that very first course, I discovered classmates like Bruce Walker—one of my closest friends to this day—who were eager to both give and receive help. Away from the lecture halls, we often met in groups to review lectures that were sometimes filled with daunting medical and scientific details. Together, we solidified our understanding of the material but also considered how it all might relate to the responsibilities we would assume when we started caring for people whose well-being and even continued existence would depend on us.
Even though we were first-year students, far from becoming actual doctors, most of us were late-1960s idealists who wanted to use our talents and opportunities to help others. The people I knew didn’t seem to focus on money or status, and they were strong advocates for doing the right thing for patients. There was an implicit suspicion of the commercial side of medicine. Those I knew took very seriously the commitment to making the world better, to literally alleviate suffering. The anxiety most often voiced was that we might go into the world and make a serious, even fatal mistake with a patient because we missed something crucial in med school. With this in mind, we got together in the evenings and on weekends and tutored each other. Since classes were six days a week, the big break was when the dean of admissions would invite us over on a Sunday afternoon for croquet, cookies, and beer—a spectacular break even when your ball ended up deep in the poison ivy. For a few of us, the itch to get away was too great, and we took a weekend bicycling in Vermont rather than studying biostatistics. To a person, we came to regret that gap in our knowledge, a regret made no better by the solid drenching of Green Mountain rain.
The tradition of student tutoring was well established at Case and kept all but the most ill-focused from falling behind. Once we got into the clinical time of our schooling, the third and fourth years of medical school, we were split into smaller and smaller groups, but the sense that doctoring meant continuous learning and teaching and learning again was preserved. People started distinguishing themselves by clinical specialties and areas of research. In the late 1970s, medicine was becoming more specialized as new disciplines arrived with their own requirements for training, testing, licensure, and standards for practice. Medical oncology was not a formal specialty until 1972, and many physicians eschewed any kind of subspecialty board testing and certification until it became imperative for participating in managed care in the 1980s. This was not an entirely new idea. Physicians in ancient Egypt, believing each part of the body to be a separate entity, specialized along the lines of anatomy. In the modern era, some specialties like radiology emerged out of new technologies. However, the push for specialization came from those who believed that both clinical care and science would improve as minds were focused more intently on limited areas of interest and the amount of information in each burgeoned. Specialty information required specialty-specific knowledge that was tested by specialty-specific exams. This has led to an ever-heavier burden of credentials that physicians must maintain, to the point of now-evident subspecialist revolt. But for medical students, it represented mostly a spectrum of information and activities that allowed them to sort themselves into comfortable spots along it. It also became evident that each specialty seemed to have a culture and an accepted set of behaviors. Personality-type matching with specialties was oddly predictable. The brooding self-reflectors never made the OR home, and the ex-athletes or ex-military types rarely found the intangibles of psychiatry appealing.
I was inspired by all the advances being made in biology and genetic technologies, which were sending waves of excitement through this former English major, if not the medical world more broadly. This work promised to make comprehensible the opaque world of cancer and perhaps the immune system. Cancers of the blood, which reside in the realms of both hematology and oncology, were most compelling for study because you could see the disease evolve at the level of the cell with simple blood samples. What the patient was experiencing was sometimes vague, but the blood was more eloquent, revealing whether fatigue represented poor sleep or regrowing leukemia cells. Other so-called solid tumors were rarely so evident, requiring scans and x-rays and ultimately biopsies. To me, the proximity of patient symptoms to cells in the blood was powerful. It seemed that the basis for the disease had to be closer to comprehension if you could actually visualize what was going on in a blood sample. It was also apparent that cells of the blood were what protected us from invaders of all types: viruses, bacteria, fungi, and maybe cancer. Studying the blood seemed to me like a way to ride the rocket of new discoveries in molecular biology and have them meaningfully turn into therapies I could give to patients.
But before I could get aboard, I had to understand the landscape of the relevant science, which was changing faster than the seasons.
* * *
Most scientists recognize the creation of the first model of a DNA molecule, by James Watson and Francis Crick, as the breakthrough that began the era of molecular biology. (As The New York Times noted on its front page, their so-called double helix image brought us “near the secret of life.”) At the time their work was published, the British Crick and the American Watson were little known outside their specialty. A bookish man, Crick would say his one hobby was “conversation.” A child prodigy who had appeared on the Quiz Kids radio show—sponsored by Alka-Seltzer—Watson had been briefly famous but then immersed himself in academia. He and Crick did their work at the University of Cambridge in a one-story, metal-roofed building affectionately called the Hut. However, there was nothing modest about their achievement, and as James Watson famously noted, Francis Crick was never in a “modest mood.” They announced their discovery to much fanfare and arguably too little attribution to those whose work they had drawn upon, notably Rosalind Franklin.
The double helix established how cell division transmitted genetic content in a way that preserved the information encoded in DNA. It made clear the mechanics of inheritance by showing that DNA has two strands, with one being simply a reverse complement of the other. They defined the rules for how making new DNA strands from existing ones could lead to perfect copies. They unveiled what had baffled thinkers since before science was a discipline. Thanks to the work completed inside the Hut, biology had reached an inflection point comparable to the advance made when physicists split the atom in the New Mexico desert in 1945. Watson and Crick energized biological science and made real the possibility of altering DNA and thereby cells and organisms. Biology would no longer be just a description of the things that are, but a dynamic experimental field that could define how we function and what governs how we change in development and disease. The puzzles inside of puzzles that make up life science looked as if they might be governed by hard rules—more like chemistry or physics than traditional field biology. That meant unleashing not just discovery but creativity: methods to change biology, perhaps to develop new therapies for biologic disease.
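For readers who like to see a rule written out, the base-pairing logic at the heart of that insight can be sketched in a few lines of Python. This is only my own illustration of the A-T and C-G pairing rule and the reverse-complement relationship between the two strands; it is not anything Watson and Crick wrote, and the short sequence used here is made up.

```python
# Illustrative sketch of Watson-Crick base pairing: each base on one strand
# determines its partner on the other (A with T, C with G), and the partner
# strand is read in the opposite direction -- its "reverse complement."
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read in the opposite direction."""
    return "".join(PAIRS[base] for base in reversed(strand))

template = "ATGCCGT"          # a made-up snippet of sequence
partner = reverse_complement(template)
print(partner)                # ACGGCAT

# Taking the reverse complement of the partner regenerates the original,
# which is the sense in which copying strand from strand preserves the
# information and can yield perfect copies.
print(reverse_complement(partner) == template)  # True
```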
* * *
As the world absorbed the news about the double helix, Alick Isaacs was investigating influenza viruses at the National Institute for Medical Research in London, sixty-five miles south of Cambridge. He was motivated in large part by the not-so-distant Spanish flu pandemic of 1918–19, which had killed an estimated fifty million people. In 1957, as a less deadly Asian flu blazed across the world, he was invited to tea by a Swiss virologist named Jean Lindenmann, who had come to the institute for a one-year fellowship. Lindenmann and Isaacs were two very different men. Isaacs was a quintessential Scot and prone to serious bouts of depression. Lindenmann had been raised in Zurich by cosmopolitan parents and was known for an ironic sense of humor. Both men were medical doctors and research scientists, and they shared a fascination with the way that people infected with one strain of a virus seemed to develop immunity against related ones. They were determined to discover how this occurred.
Good research of course depends on carefully designed experiments. Isaacs and Lindenmann devised a classic. First they grew cells from the membranes of chicken eggs in a nutrient solution. Then they added one kind of virus to the cells and, as the virus flourished, added another type. The cells resisted this second virus. Isaacs and Lindenmann then removed all the cells and the viruses, leaving only the culture in which they had been grown. When they added fresh membrane cells to the nutrient mixture and hit them with a third type of virus, something remarkable happened—the new cells resisted that infection. Something produced by that first batch of cells, left behind when they were removed, had interfered with the infectious powers of the viruses. They called this substance, which was a protein, interferon.
Isaacs and Lindenmann were not alone in their scientific interest. At roughly the same time, others working in Tokyo and Boston had also noticed that some process was stopping viral growth in certain circumstances—and they, too, identified the key protein. Taken together, this work showed that interferon functions as both a virus blocker and a signal sender, communicating to nearby cells that an invader is present. The signal stimulates the listening cells to produce their own interferon, which then stops the virus as it arrives. All this communication, going on at the cellular level, was a stunning revelation for those seeking ways to intervene on the body’s behalf against viral threats that cause infectious disease in the short term and cancer later in life. (Viruses are now believed to be a factor in as many as 20 percent of malignancies. Among the ones we now know contribute to the risk of cancer are the Epstein-Barr virus, human papillomaviruses, the hepatitis B and hepatitis C viruses, human herpesvirus 8, HIV, and a related human T lymphotropic virus, HTLV-1.)
Unfortunately, in the years after it was discovered, the only known method of producing human interferon involved processing vast amounts of blood. A single one-hundred-milligram dose of natural interferon required sixty-five thousand pints. The only source was the lab of Finnish researcher Kari Cantell, who had arranged to obtain every ounce of blood donated by the Finnish people so that he could harvest the protein. The expense and technical challenges in producing interferon meant that it was used in very few medical trials. Nevertheless, small studies suggested that it could slow the development of some forms of cancer.
In the 1960s and 1970s, pharmaceutical companies, foundations, and government agencies poured more than $100 million—roughly $1 billion in 2017 dollars—into interferon research. This work established that cells produce many different types of interferon. Among the most important is the interferon made by T lymphocytes, which are essential to the body’s infection-fighting abilities. These T cells are produced in the human thymus, which was long considered a vestigial organ of no real importance.
The history of the thymus and science underscores the challenge in understanding the human body. As recently as 1960, many physicians considered this little mass of tissue to be inconsequential. Some thought it was implicated in sudden infant death syndrome (SIDS), and even though the connection was never proven, they “treated” infants with radiation, damaging the thymus, with the belief that they were preventing SIDS. Then in 1961, an Australian immunologist named Jacques Miller found he could all but disable the immune system of mice by removing the thymus. His experiment and follow-on work showed that through the production of T cells, the gland contributed greatly to immunity. (Much more about these cells in a later chapter.) This understanding opened the possibility that the body could be rallied, perhaps with T cell–stimulating treatment, to better fight disease on its own.
The science that identified interferon and then revealed the creation and action of T cells revived interest in the idea that infectious agents, such as viruses, were implicated in the cancer process. This possibility had been raised in 1910, when a cancer in chickens was linked to a virus by a scientist named Francis Peyton Rous, who worked at the Rockefeller Institute in New York. Any “cause” for cancer offered a chance to investigate the mechanism of disease and explore the possibility of treatments based on the body’s ways of protecting itself. Rous’s discovery prompted a big search for human cancers caused by infectious agents, but years and then decades passed with no new discoveries. The chicken-virus-cancer link came to be regarded as an anomaly.
Anomalies die quiet deaths in science, except when they are confirmed. In the case of the cancer-causing infection, the confirmation finally came from an unlikely source—an Irish missionary physician named Denis Parsons Burkitt, who spent most of his life working in Uganda. (When he arrived, he became only the second surgeon in the entire country.) An unassuming man who had lost an eye in a childhood accident, Burkitt had almost failed at college and later said he had prayed his way through his chemistry courses. He said he considered his work “God’s call” and insisted that he received much in return for his labor. “I gave a spoonful and got back a shovelful,” was how he put it. As a scientist, Burkitt was fascinated by the types of disease maps epidemiologists use to show how illnesses arise in people across a community or region.
In the 1950s and early 1960s, Burkitt and his colleagues traveled thousands of miles to pinpoint the locations of children who developed a type of lymphoma rarely seen elsewhere in the world. (He joked that three doctors visiting hospitals across the continent “enjoyed the safest-ever safari in Africa.”) The common factors in the cases Burkitt studied turned out to be a history of malaria, which weakened the immune system, and infection with the virus that came to be known as the Epstein-Barr. The malaria was spread by a certain type of mosquito, which is common in the region Burkitt studied. The virus, which most commonly causes mononucleosis, is so ubiquitous that 50 percent of all children contract it by age five. After infection, it remains in the body, where it is almost always kept at bay by the immune system. It contributes to cancer only when other factors, like a weakening of the immune system, are present.
What was eventually called Burkitt’s lymphoma was characterized by extremely fast-growing tumors, typically found first in the lymph nodes of the neck or jaw, leading to a rapidly worsening illness, organ failure, and death. Under the microscope, lymph node biopsies exhibit a distinctive pattern that came to be called the “starry sky.” The “sky” is composed of large, densely packed tumor cells. The “stars” are scattered white blood cells that have engulfed the debris of dying tumor cells. Although he became famous for his work on this malignancy, Burkitt would go on to do other important work. He helped establish the link between dietary fiber and the prevention of colon cancer and devised methods for producing cheap artificial limbs to help amputees in poor countries. But his most important legacy, in addition to the lives he saved, was his lymphoma research.
* * *
Although Burkitt, Isaacs, Lindenmann, and Miller had achieved real milestones, science was far from a true understanding of the causes of most cancers, let alone treatments that could be termed “cures.” Nevertheless, people hungered for answers to the questions that naturally arise in the case of cancer, especially “How did I get it?” and “How do I get rid of it?” Too often, the press, attuned to the power of the topic, seized upon reports published in serious journals and simplified them in ways that surely exasperated physicians. In 1962, Life magazine famously announced, on a cover that also featured a big photo of Marilyn Monroe, “New Evidence That Cancer May Be Infectious.” Life was a staple of doctors’ waiting rooms all across America. (I would bet that issue landed in Dr. Baldino’s office in Wyckoff.) By using the word may, the editors preempted critics who knew that the headline was sensationalistic. This left it to doctors to explain that no, cancer wasn’t contagious in the way people might imagine (as in transmitted person to person) and that patients needn’t be quarantined.
In addition to alarming patients and the general public, the idea that cancer was infectious spurred a big government investment in research intended to discover more cancer-causing viruses or bacteria. Looking back on this time, it is surprising to see such a focused effort. Scientific progress doesn’t usually follow the kind of straight-line route that can be directed from some central authority. More typically, research in one area gets unexpected aid from a breakthrough in another. Thus, more than a decade would pass, and enthusiasm would wane, before it became possible to see how we might actually create—or engineer—new models of life that would help us study and intervene in the cancer process.
In 1969, a team at Harvard led by Jonathan Beckwith became the first to isolate a specific gene among the three thousand or so in the bacterium Escherichia coli. Soon came the gene-splicing experiments of biochemist Paul Berg of Stanford, who combined the DNA of two separate types of viruses to create a novel, engineered DNA molecule. (This process was called recombinant DNA methodology.) Berg, who had earned his doctorate at Case, had immersed himself in basic cancer research starting in 1952 and plunged into a lifelong investigation of biology at its most basic levels. He was followed by Fred Sanger, who devised a method for rapidly reading the sequence of long stretches of DNA. These discoveries and others excited scientists but sent shivers through the ranks of theologians, who wondered if humanity would usurp a role previously played only by God.
This concern, that scientists would abuse the power of the new biological science, echoed the fears raised when physicists unlocked the atom in the 1940s. Science and technology had reached a point where people could imagine them producing catastrophes that might affect millions of people and, perhaps, humanity itself. The public’s anxiety, which was reflected in science fiction movies and apocalyptic novels like On the Beach, had been appreciated by Beckwith’s group. When their achievement was announced, a member of the team, medical student Lawrence Eron, said, “The only reason the news was released to the press was to emphasize its negative aspects.” Eron hoped that society would engage in a serious conversation about genetic engineering’s potential to shape human populations for ill, as well as for good.
The discussion got going in earnest in 1974 when a group of biologists, including James Watson, declared they would suspend certain kinds of research out of fear that they would create something dangerous in their labs. (Watson had, along with Crick, received the Nobel Prize in 1962.) In a letter published in the American journal Science and the British journal Nature, the group said it was concerned that gene splicing could produce drug-resistant microorganisms or cancer-causing viruses. “There is serious concern,” wrote the scientists, “that some of these artificial DNA molecules could prove biologically hazardous.”
Noting that much of the gene-splicing work being done involved E. coli, which is found inside every human being and throughout our environment, they said that any unwanted creations based on this bacterium could quickly spread. To avoid this possibility, they asked colleagues to suspend aspects of their work too. In addition to Watson, the signers included David Baltimore of MIT and Paul Berg.
A year later, Berg chaired a landmark meeting at a conference center on the Pacific coast called Asilomar, where migrating monarch butterflies flitted among the Monterey pines. The context for this gathering included the political crisis of the Watergate burglary scandal, which had forced a U.S. president to resign for the first time in history. Combined with the hugely unpopular war in Vietnam, Watergate had left many Americans feeling disillusioned and skeptical of those in authority who held the public trust. Opinion polls showed a steep decline in the public’s trust in all sorts of institutions and authorities. Science and medicine weren’t helped by revelations, in 1972, of a federally funded experiment called the Tuskegee Study, which denied treatment to African American men with syphilis so that the disease could be studied. The fact that government and science had collaborated in the abuse of the men in the study frightened people as they contemplated technologies with the potential to produce both great benefits and vastly more harm than anything mankind had ever before created. Berg was among the worried.
The scientists and physicians at Asilomar were joined by lawyers and journalists, who advised them on everything from liability issues to communications. For some of the scientists, who had never thought about the issue, the fact that laboratories were also work sites covered under labor, health, and safety laws was a disturbing revelation. Few had ever considered the risks they were courting as they worked on what would be, for all intents and purposes, new forms of life. Others, including Norton Zinder of Rockefeller University, were already concerned that many in this new field—his field—lacked the experience in microbiology to keep themselves, their colleagues, and the public safe. Working first in small groups and then in one large body, the conference attendees eventually established six categories of genetic experiments, which they ranked according to hazard, and indicated how each should be regulated. On one end of the scale were experiments involving organisms that freely exchanged genes in nature and were already quite variable in the existing environment. On the other end were projects that posed such a high risk of creating new pathogens that they shouldn’t be conducted at all.
Asilomar, as the gathering came to be known, produced standards that showed that science was listening to public concerns and eager to hold itself accountable. For those engaged in work deemed most dangerous, the group recommended a “high containment” strategy that required air locks for lab entries and filtering of any air leaving the space where experiments would be done. Anyone entering these labs would have to first shower and don protective gear. Upon exiting, they would have to leave these clothes in a secure area and shower a second time. The cost of upgrading labs to meet the standards would be high. Berg’s own facility had just undergone upgrades costing tens of thousands of dollars (big money at the time), and more would be required if he wanted to pursue the kinds of studies he was contemplating.
For skeptics, the results of Asilomar suggested that scientists were beginning to understand why thoughtful laypeople might be afraid of technologies that enabled the creation of entirely new life-forms. For those in government and politics, the guidelines produced at the conference served as a template for codes that would eventually have the force of law. In fact, as Asilomar ended, the National Institutes of Health embarked on a four-year process intended to turn the guidelines into enforceable regulations.
The slow pace of this work was supposed to reassure people that it was being done with care and consideration for a wide diversity of opinion. Of course, some were frustrated with the uncertainty caused by the delay. James Watson, who had so publicly signaled his worries about gene splicing, changed his mind. “Specifically, I was a nut,” he said as he retracted his support for the moratorium on certain experiments. “There is no evidence at all that recombinant DNA poses the slightest danger.”
Watson spoke with the great authority that comes with a Nobel Prize, but his tone—impatient, if not imperious—was a red flag to anyone who viewed the powerful with some skepticism. In the 1970s, both modern progressives and old-fashioned traditionalists could be discovered among the wary, who found something unsettling in the claims of powerful people, be they scientists, political leaders, corporate chiefs, or bureaucrats. No badge of status was enough to reassure some people, especially those who feared that an overconfident, overempowered elite could be dangerous.
The fear and anxiety many people felt about science and technology came into full focus in 1976 in Harvard University’s home city of Cambridge, Massachusetts, when Mayor Alfred Vellucci convened public hearings to review the public health implications of DNA research. (Cambridge is also home to the Massachusetts Institute of Technology, so the prospect of this work taking place within the city limits was quite real.) Vellucci, a silver-haired man who was born in 1915, had long feuded with the university over mundane matters like parking and rowdy students. He had often appealed to Cambridge voters by bashing the university and, consequently, earned the ridicule of the student-run satirical magazine called the Harvard Lampoon. At one point in this long-running town/gown dispute, after Lampoon editors mocked Italian Americans by claiming the Irish discovered the New World, Vellucci proposed turning the building that housed the magazine into a public restroom. He also suggested calling the area around it Yale Square. And he planted a tree in the sidewalk out front to spoil the view from inside. (Over the years, the tree would suffer much abuse, including shorn limbs and poisoning.)
At the city council hearing, where Vellucci appeared in coat and tie, he asked witnesses from the Harvard faculty to refrain from using “your alphabet,” by which he meant scientific jargon, because “most of the people in this world are laypeople, we don’t understand your alphabet.”
The very first person to testify, cell biologist Mark Ptashne of Harvard, struggled with the mayor’s request, sprinkling references to “P1 and P2 laboratories” into his argument. (P1 and P2 labs conducted extremely low-risk work.) When he spoke of the gene-splicing work proposed for Cambridge, he said that “unlike other real risks involved in experimentation, the risks in this case are purely hypothetical.” Ptashne, who was thirty-six, appeared in shirtsleeves and wore long hair and sideburns. He didn’t help his case in the minds of skeptics when he noted that “millions of [E. coli] bacteria cells carrying foreign DNA” had already been “constructed” and that “so far as we know, none of these cells containing foreign DNA has proved itself hazardous.”
Brilliant as he was, Ptashne was no match for the mayor in the political forum of a city council meeting. After the Harvard professor made his statement, Vellucci read off a long list of questions, which he suggested Ptashne write down. (He did.) First the mayor asked for a “100 percent certain guarantee that there is no possible risk” in any experiments envisioned by Harvard scientists. Then he moved on to a series of questions intended to alarm the crowd:
“Do I have E. coli inside my body right now?”
“Does everyone in this room have E. coli inside their bodies?”
“Is it true that in the history of science, mistakes have been known to happen?”
“Do scientists ever exercise poor judgment?”
“Do they ever have accidents?”
“Do you possess enough foresight and wisdom to decide which direction that humankind should take?”
Before Vellucci was finished, he had warned of monsters created in laboratories and stirred up others on the council, who continued the attack. Council member David Clem complained that “there are already more forms of life in Harvard Square than this country can stand” and then asked Ptashne, “What the hell do you think you are going to do if you do produce” a dangerous organism?
Ptashne was exasperated by the questions and would later call the proceedings “an unbelievable joke.” But the Cambridge community of scientists was not monolithic. David Nathan, a physician and biologist at Boston Children’s Hospital, encouraged the politicians to complete their work on safeguarding the public because he was raising children in the city of Cambridge and didn’t see any need for unnecessary risk-taking. The mayor and council ultimately banned recombinant DNA work inside the city limits for nearly a year. During that time, they wrote and approved a number of biological safety ordinances and created a review board to consider all planned recombinant DNA research. Among the members of the board were a Catholic nun, engineers, a physician, and a professor of environmental planning.
Despite the mayor’s inflammatory rhetoric and fears among the faculty that the city might shut down research, something positive grew out of the Cambridge process. The city got ahead of other communities and figured out a way to accommodate science. In a few years, Harvard faculty and commercial research firms would make the city a hub for biotechnology—which would, in turn, become a driver for business growth. In an era when brainpower was quickly replacing heavy industry as a source of economic growth and development, cities, states, and nations that had found a way to regulate science without too much interference had enormous advantages.
When the new technologies were considered in political forums and at public meetings, nonscientists got a chance to question the experts and learn things that made them feel less wary and worried. Unfortunately, very few Americans lived in communities where scientists conducting recombinant DNA research were invited to answer their questions. This gap between public understanding and the facts allowed all sorts of mischief to occur. One of the strangest episodes came two years after the Cambridge hearings, with the publication of a book, labeled nonfiction, that purported to tell the story of an eccentric millionaire who had brought scientists to a private island where they had produced his clone—a little boy—who was happily maturing inside a laboratory enclave.
In His Image: The Cloning of a Man was written by a journalist named David Rorvik, who had contributed articles to The New York Times and other publications. Rorvik knew enough science to make the tale plausible, and he was an excellent storyteller who provoked all the fears and ethical controversies you might imagine would follow a “credible” report on the first human clone. (He said the boy was fourteen months old, that he had seen him, and that he was “alive, healthy, and loved.”)
After he was interviewed by Tom Brokaw on the NBC network’s Today show, where he refused to divulge the name of the man who was cloned, Rorvik’s book flew to the top of bestseller lists across the country. Publishers around the world clamored for the rights to publish translations. With rising sales came rising controversy, including a complaint from a British scientist who objected to his name appearing in the book. A court in London would eventually find in the scientist’s favor and declare the book a hoax. But in the meantime, In His Image created such a stir that a congressional committee conducted a Capitol Hill hearing on the matter.
David Rorvik declined to testify before Congress, saying he was too busy on his international book tour. However, the experts who did speak to the Subcommittee on Health and the Environment of the House Committee on Interstate and Foreign Commerce explained that since no one had ever produced a clone of any mammal, the idea that a wealthy man had built a secret lab, staffed it with the best scientists, and cloned himself was ludicrous. However, they did speak encouragingly of new technologies and genetic research that could produce important insights into and even treatments for some illnesses, including cancer. This work was also likely to reveal important insights into the aging process. Robert G. McKinnell of the University of Minnesota noted experiments involving repeated transplantation of cells from a developing animal embryo to test theories about aging. (McKinnell would publish widely in cell biology and contribute to a textbook called The Biological Basis of Cancer.)
Aging and cancer were linked in the minds of cell biologists because both involve the mechanisms that govern cell growth and survival. In aging, the cells of all animals eventually succumb to the process called senescence, gradually losing the ability to replicate and beginning to cause inflammation. Reduced replication may thwart developing cancer cells, but inflammation pushes in the other direction. When this is combined with a reduced repertoire of immune competence, imbalances in the usual cellular harmony emerge, marked by poor coordination in the immune system’s response to everything from viruses to genetic mutations. The change helps to explain not only the increased diagnosis of cancer in older people but also their greater susceptibility to infectious diseases. Senior citizens who get the flu are more likely to develop pneumonia because they can’t fight off the influenza virus as quickly as the young can.
Senescence is relevant to cancer because malignancies somehow escape its limits on growth. They multiply in an out-of-control fashion that defies the usual mechanisms of aging. As they evade the body’s defenses and replicate willy-nilly, cancer cells inexorably overwhelm and replace normal ones and eventually, like mutinous sailors, gain control of organ systems and the body itself. This is true even for very old people, whose cancers may divide and multiply as if they had drunk from the fountain of youth. Terrifying as it is, cancer’s dynamism suggests that it may have taken a page from and corrupted cellular secrets about development, longevity, and even regeneration. What if the chemical signaling that spurs normal development could reveal abnormalities in cancer and, conversely, lessons from cancer could be harnessed to deliver new, healthy cells to rejuvenate aged hearts or livers? Could this mirror be turned into therapies that help people with cancer or with injuries and wounds?
Just as scientists studied the body’s delinquents—cancer cells—in the context of those that behaved well, they looked to the defender of the body—the immune system—to suggest ways to stop malignancies. By the time I enrolled at Case, biologists understood that between normal cell division and the effects of radiation and other carcinogens, complex living creatures faced innumerable opportunities to develop cancer every day. Seen in this light, the most important question we could ask about cancer was: Why don’t we see more of it than we do? And the logical next question was: Can we mimic how the body controls cancer to help, or even cure, people with the disease?
Two years after the press and Congress went a little crazy over the human cloning hoax, a comparably loud press response greeted news of the first genetically engineered drugs, including human insulin, growth hormone, and synthetic interferon, which was made by a Swiss company called Biogen, whose founders included a Harvard scientist named Walter Gilbert. A brilliant man with wide-ranging interests, Gilbert was Jewish and from Boston, while his mentor was a Pakistani-born Muslim named Mohammad Abdus Salam. Abdus Salam was a giant of theoretical physics who left his homeland to protest religious discrimination. Like Gilbert, he was wide-ranging in his interests, with an expansive imagination and an open mind.
Gilbert’s interferon breakthrough was announced on January 17, 1980. The next day, the stock of the Schering-Plough Corporation, which owned 16 percent of Biogen, surged in value. Investors were no doubt encouraged by a study published in the Soviet Union that noted that a weak interferon spray had protected a small group of test subjects against exposure to cold viruses. Other research centers were exploring interferon in cancer treatment. If interferon could fight everything from the common cold to malignancies, then the sky was the limit for the companies that could produce it and those who backed them.
* * *
Basic science was advancing so quickly that it was hard to keep up with the headlines, and the excitement around this work was impossible to ignore. Everyone, including cancer patients, seemed to believe that cures were right around the corner and that if they just lived long enough, some new treatment would save them. In reality, though, patients were being treated with the same basic interventions—surgery, radiation, and toxic chemotherapy—that had been available for decades. These drugs, which were effective against some cancers, including those of the blood (leukemias) and the lymphatic system (lymphomas), almost always produced terrible side effects, including hair loss, nausea and vomiting, and profound exhaustion. It was not uncommon for people to say that living while undergoing treatment was not much better than death.
I saw the benefits and limitations of all these treatments on the mornings when I went from room to room, and patient to patient, to draw blood for the tests that doctors had ordered. I was good with a needle, which people appreciated, and the work provided much-needed income for a student. It also gave me the chance to improve my bedside manner. Many of the patients were sleeping when I entered their rooms, and they invariably struggled to recognize where they were when they awoke. I was always a little bit sorry about disturbing them and about starting their day with the pain of a needle stick, small as it might be. I learned to discern who among them wanted to chat and who preferred that I perform my duties quickly and leave. As I grew more confident, I took a little pride in the fact that I was better than most at drawing blood from children, and even infants, whose small veins and big fears made this work especially challenging.
As I made my rounds with my needles and tourniquets and glass tubes with stoppers, I came to appreciate the ways that a touch, or a word, benefited patients who were dealing with challenging conditions. And sometimes the best I could do was not touch someone. I learned this lesson with patients who had sickle-cell anemia and often experienced so much pain that they couldn’t bear to be touched. After one intense encounter that reached the point where I was afraid a suffering man was going to punch me out, I started telling these patients in the first moments of an appointment that I knew they were in terrible pain and that they might be hoping that we would just let them get into bed, give them as much pain medication as they wanted, close the door, and turn out the lights. I then explained that I could not do that, but that I would do whatever I could to help them once we got through an exam, which I would do as gently as possible.
I valued this caring element of medicine because one of the things that became quite evident in medical school was that we really didn’t know very much at all about how the body worked, and kindness was sometimes the most powerful treatment at our disposal. Of course, great advances had been made since the age of bloodletting and phrenology, but still, we lacked good medicines to treat a whole host of conditions. We couldn’t answer most of the questions that people asked about why they got sick and whether they were going to get well. Medicine’s limitations became crystal clear to me when I studied immunology, which was an area of great interest to cancer specialists, who dreamed that drugs like interferon might harness the immune response against cancer. When you look at how the body monitors for infections and cellular injuries, develops immunity, fights off infection, or repairs damage, it seems like magic.
Case was the kind of place where med students were permitted and even encouraged to knock on doors and ask questions. I went to see George Bernier, a professor of hematology and oncology and one of the leading physicians in those fields in the country. He stopped what he was doing and just listened as I rattled off a whole bunch of questions.
How does the body know to recruit immune cells to a particular site? You get an infection. Suddenly all the infection-fighting cells that are otherwise circulating in the blood find their way out of the blood vessels, into the precise spot where you have an infection.
How does the immune system know when to switch on and switch off?
How do you shut off the immune response so that you don’t cause collateral damage?
Why do some bodies manage to repair cell damage and others do not?
Why do medicines help some people but fail in others with the same disease?
To each of these questions, Bernier responded with roughly the same answer, which was “These are things we just don’t know.”
Here was a truly great doctor, someone who would soon be named chief of oncology at Case, and he was saying that he didn’t understand the very basic operation of the immune system. He didn’t know why treatments worked in some patients and not in others. And he couldn’t explain how the immune system recognized a threat to the body, responded to it, and then turned off its response when the crisis passed.
To his credit, Bernier wasn’t defensive about how much he didn’t know, and he wasn’t annoyed by all my questions. When I found other students who shared the same curiosity about the immune system, we formed a group and essentially designed our own course of study. We read everything we could get our hands on, met from time to time to discuss what we learned, and met with Bernier himself. Only later, when I learned about how other medical schools worked, did I appreciate our professor’s generosity. He made us feel as if we, the students, were the most important people in the school and that whenever we sought to learn more, we had a right to study and eventually own that material. There was also something special about the students in that small group and my class. We spoke very openly about our experiences as both students and doctors in training.
Sometimes the different roles we played—in my case, blood-drawer, student, caregiver—combined to teach us lessons more profound and lasting than any we could have learned if we’d remained in the library. One instance of this dynamic arose when I met a young man named Carl. He was just a few years older than I was and had suddenly been overcome by exhaustion. His appetite had disappeared, and he had lost some weight. He was plagued by night sweats and had a constant low-grade fever. He had also developed a lump in his neck.
Carl was a bright, friendly guy who was more puzzled by his symptoms than worried. He hoped he had some sort of infection, like the flu, and that it was just taking its time leaving his body. I knew just enough to be more concerned than he was. After talking with him and doing the necessary legwork of admitting him to the hospital, I went to the lab and made slides so I could examine under the microscope the cells taken from his blood. Following the standard protocol, I added stains called hematoxylin, which is violet in color, and eosin, which is rosy red. (Eos was the Greek goddess of dawn.) I put the slide into the clips beneath the lens, switched on the light underneath the slide, put my thumb and forefinger on the wheel that adjusted the focus, and bent over the eyepiece.