six

EFFICIENT AND EFFECTIVE

During the second year of residency, I began to see patients of my own in the clinic. Unlike call, where I saw patients in order of acuity, these patients were scheduled for specific times. Or, to be more accurate, overscheduled, with new patients added throughout the day. To handle the schedule, I learned to be, above all, efficient. Efficiency got me home in time to see our son. Inefficiency kept me in the darkened clinic after everyone else had gone home. That year, Bao challenged my efficiency, persistently dashing my hopes of completing an afternoon clinic before the front desk staff left for the night.

Bao was a forty-three-year-old Vietnamese immigrant who had been adopted at the age of six by a local family. She never really fit in with her adoptive family and eventually exhausted whatever stores of kindness they had. But she had no one else; Bao had never married, never become a mother, never even had a sexual partner. She once told me she felt “as alone as an empty cup.”

When the taste of her loneliness grew too bitter, she would come to our Emergency Department. Once a week, twice a week, thrice a week. Chest pain. Suicidal thoughts. Headache. If the Emergency Department physicians and staff could sit with her long enough to fill her cup, she could return home, but the Emergency Department staff usually triaged their time and rarely had enough of it to give to Bao. Eventually, the Emergency Department physicians took to asking, “Bao, on a scale of 1 to 10, which is worse, your chest pain or your thoughts of suicide?” If she answered “chest pain,” they paged the on-call internist. If she answered “suicide,” it was the on-call psychiatrist’s turn to share Bao’s lonely cup.

After a couple of rounds of these admissions, the internal medicine and psychiatry services tried to preempt Bao’s emergencies. At the beginning of each month, she would have a half-hour appointment with an internal medicine resident. Two weeks later, she would have a half-hour appointment with me, the psychiatry resident. In return, Bao agreed to work with these residents instead of visiting the Emergency Department every other night.

We fell into a rhythm. Bao began each month with the internist. A few days later, I would read his summary of their encounter in our hospital’s electronic medical record. He would assess her somatic complaints, reassure her, and encourage her to discuss her emotions and behaviors with me. When we met, Bao would discuss how she struggled with anxiety and depression. I understood her wavering moods as expressions of her personality structure. She felt empty all the time. Her sense of herself and other people was unstable. The only way to keep people engaged with her was to threaten suicide, and the scars on her arms recorded every failed attempt to keep a friend.

While preparing for one of my monthly meetings, I reviewed the notes from Bao’s most recent visit to the internist and saw that she had requested birth control pills. The internist documented that she did not smoke, was not pregnant, and had never had cancer, a clotting disorder, liver disease, irregular menstrual bleeding, or a stroke. He concluded that Bao understood the risks and benefits and prescribed Ortho Tri-Cyclen.

Reading the note, I became worried. The internist had asked a series of well-informed questions, but not the most important question.

When Bao arrived, I asked the kind of questions shrinks like. “How’s your mood?” “Have you had any unsafe thoughts?” “Any trouble getting to sleep or staying asleep?” Then, I turned to the internist’s report.

“I saw that you visited your internist and requested birth control. I was surprised. You always described yourself as a virgin. Has anything changed?”

“I met a man.”

“Can you tell me about him?”

“My car broke down on the highway. I was on the side of the road, and a police officer helped me call a tow truck. Then he told me he was separated from his wife and he wondered if we could get together sometime. For sex. He told me he liked Asian ladies.”

Bao’s internist had weighed the available evidence to ensure that the medication he prescribed would not harm her. Oral contraceptives, like any medication, can produce adverse effects. For some people a medication is contraindicated, meaning that they should not take it because the risk of an adverse effect outweighs its potential benefits. A good physician knows and discusses the adverse effects and contraindications of a medication with his or her patients. Bao’s internist reviewed the contraindications scrupulously. He did his job.

But he also missed the point of Bao’s request. She was preparing for her first sexual encounter after a lifetime of being so ashamed of her body that she frequently harmed herself. An oral contraceptive would, yes, probably prevent her from becoming pregnant, but it would not prevent the cutting that was sure to follow a sexual encounter with a police officer who chose her for her vulnerability and ethnicity.

In medicine, we train physicians to be technicians who gather, assess, and follow evidence. There is much good in doing so. We all want every physician to know the adverse effects and contraindications of a medication before prescribing it to a patient. We all want every physician to know the available evidence for and against a particular test or treatment. To be certain that each physician knows and acts upon the best evidence, and to ensure reliable and predictable outcomes, many forces in medicine are compelling physicians to follow evidence-based practices. The problem is that you can follow the best practices and, like Bao’s internist, miss what is actually happening in your patient’s life.

.   .   .

When I began medical school, the younger faculty members often drew a distinction between “expert-based” and “evidence-based” medicine. In expert-based medicine, they told us, patients received the care prescribed by an expert, usually a physician, based on his or her clinical experience. In evidence-based medicine, we heard, patients received the care that, after careful consideration of the scientific literature, a physician concluded was best for them. When these faculty members talked, they described evidence-based medicine as a transformative shift from ephemeral opinion to certain evidence.

Many of these faculty members identified themselves as “clinical epidemiologists”—physicians who critically reviewed the available literature using statistical techniques to determine the right course of action for an individual patient. Clinical epidemiology was a relatively new field, pioneered by physicians like David Sackett in the latter half of the twentieth century. Sackett helped transform medicine by first transforming medical education, founding a new medical school at McMaster University in Ontario and doing away with courses or lectures to educate students solely at the bedside and in the library. Instead of listening to their faculty’s expert opinions, Sackett’s students examined a patient, researched the population-based evidence for the treatment of the patient’s condition in the library, and returned to the bedside. Sackett taught generations of McMaster medical students and residents to critically appraise the medical literature so they could use evidence, instead of clinical experience, to determine the best course of treatment for an individual patient.

Gordon Guyatt was one of those McMaster medical students who learned to move between the bedside and the library. When he finished his training, he joined the McMaster faculty as the director of the internal medicine residency and coined a phrase that transmitted the work of his teachers into global medical practice: “evidence-based medicine.” From his relatively remote outpost in Ontario, Guyatt set out to transform medicine.

He found allies among the editorial staff of the Journal of the American Medical Association (JAMA), one of the world’s most influential medical journals. Guyatt popularized the phrase “evidence-based medicine” in a 1992 JAMA article. In that article, Guyatt and his colleagues announced evidence-based medicine as “a new paradigm” and a “new philosophy” for medical education and practice. They wrote that, unlike the usual practice of medicine, evidence-based medicine “deals directly with the uncertainties of clinical medicine and has the potential for transforming the education and practice of the next generation of physicians.”1 Guyatt made good on his rhetoric. Over the next eight years, JAMA published thirty-two articles by Guyatt and his colleagues, an influential endorsement of evidence-based medicine that spurred its rapid incorporation into medical practice and training. A mere decade after Guyatt introduced the phrase, more than twenty-five hundred peer-reviewed evidence-based medicine papers were being published annually.2

The name Guyatt chose was a powerful polemic; who could oppose something that was “evidence-based”? Yet many physicians lamented the rise of evidence-based medicine as “cookbook medicine” that eroded professional autonomy and clinical judgment. In a critical article, two physicians described the declarations in Guyatt’s 1992 article “as analogous to a decision to occupy a territory without the involvement of [the] relevant military.”3 Guyatt and other epidemiologists were announcing the reorientation of medicine around their discipline. They did so with few initial contributions from disciplines informed by the humanities and social sciences, disciplines with wisdom about culture, history, and interpersonal relationships. Wisdom, because of its particularity and variability, was pushed to the periphery by evidence-based medicine in favor of efficiency and effectiveness.

The move from expert judgment to evidence-based medicine participates in a larger social trend—what the historian of science Theodore Porter calls the rise of quantification. In Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Porter observed that quantification is the preferred way to communicate about many things, including health and illness, in contemporary life. Physicians speak to patients about prevalence rates, odds of survival, and remission rates. With administrators, we speak about days of uncompensated care, billing variances, and readmission rates. In both cases, numbers win the argument. Porter believed that communicating with numbers differed fundamentally from other forms of communication because it operated as “a technology of distance.” Quantification does not depend upon intimate knowledge of a person, or a trusting relationship. Indeed, it discourages them. Knowledge and trust are particular and personal. They are hard to standardize. In order to quantify something, you first have to standardize it, to tame its particular and personal aspects. You have to give up the local in favor of the universal.4 So when the advocates of evidence-based medicine called for physicians to bring the library to the bedside, they introduced technologies of distance, encouraging physicians to use the numbers, graphs, and formulas that are the strategies of communication particular to quantification. They encouraged physicians to count something rather than to seek understanding of a patient.

Evidence-based medicine proved popular in part because it reached prominence simultaneously with advances in technology, including the availability of journal articles in electronic databases rather than on the dusty shelves of a library, the escalating computing power of statistics programs that automated complex calculations, and the introduction of electronic medical records that enabled the rapid compilation of data. These technologies allowed researchers to answer questions with greater statistical power and precision. Instead of an article written by a senior physician describing favorite methods for, say, reorienting a baby in the breech position during delivery, an article might summarize all the known studies and quantify the risks and benefits of the different methods for reorienting a baby. Evidence-based medicine allowed physicians to see health and illness at the population level, which proved to be an alluring vision. Evidence-based medicine provided a state-of-the-art way to see much.

By the time I entered training, evidence-based medicine was synonymous with the future. So in my second-year epidemiology course, we learned to critically appraise an article just as Sackett had taught his students at McMaster: how to read forest plots and funnel plots, and how to interpret chi-squares and p values. The course forced us to think in a new way. Instead of beginning with the patient in front of us, we began with data sets. Instead of finishing with the uncertain conclusions of the clinical encounter, we ended with precise outcomes. So many of my medical school classmates found this appealing that a third of them took a year off from medical school to earn master’s degrees in epidemiology. They threw in their lot with the statistical future of medicine.

A few years later, I followed them.

During my psychiatry residency, I was assigned a research mentor who was an expert in schizophrenia. But instead of asking me to gather an Oslerian-style compendium of his expertise, he invited me to co-write a Cochrane review, a statistical, rather than narrative, summary of the effectiveness of a new treatment for schizophrenia.

We were eager to write a Cochrane review because of what it was and what it was not. At the time, many of the clinical trials in medicine were designed, financed, and conducted by pharmaceutical companies. These trials had the power and precision clinical epidemiologists desired—the resulting papers were thick with p values and chi-squares, statistical guarantors of “evidence” as certain as the scriptural citations used by members of a religious community for proof-texting—but they were also riddled with bias. In ways minute and massive, these studies were designed to generate the greatest effects for the manufacturer’s drug. So the authors would compare the company’s drug to an ineffective dose of another drug, limit the trial to the participants most likely to benefit from the treatment, or pay academic leaders to run favorable trials. When one looked closely at these papers, they looked less like scientific reports and more like marketing pamphlets.

And, indeed, they often became marketing pamphlets. Although the papers were published in medical journals, their most telling presentations came at lunches and dinners sponsored by Big Pharma. Pharmaceutical representatives would reprint favorable studies on glossy paper and hand them to physicians along with a roast beef sandwich in a hospital conference room. Or a leading physician would introduce the study in a projected presentation while local physicians ate overpriced meals in the backroom of a steakhouse.

My research mentor eschewed these kinds of studies and the corrupting influence Big Pharma has on medicine. He favored the clean lines of clinical epidemiology, and the cleanest lines in all of clinical epidemiology are those drawn by a Cochrane review.

Every medical school and residency lecture by a clinical epidemiologist included a picture of a pyramid. The bottom of the pyramid was a wide base labeled “expert opinion,” a visual rejoinder to experts that their expertise and opinion were neither exceptional nor refined. A hierarchy ascended the steps of the pyramid: case reports, then case series, cohort studies, clinical trials, critical appraisals of the literature, systematic reviews, meta-analyses, and, at the apex, Cochrane reviews. With each step, the quality of the evidence was said to increase.

In one way, the pyramid undid Osler, whose life’s work was focused on the base—expert opinion, case reports, and case series—in favor of clinical epidemiologists with the statistical knowledge to conduct systematic reviews and meta-analyses. You could be an excellent clinical epidemiologist without having any skill in conducting either a physical examination or an autopsy. If Osler described medical training and practice as a conversation between the bedside and the morgue, Cochrane reviews present them as a conversation between the bedside and the database. In another way, the pyramid was simply an updated version of Osler’s proposal. At each new level of the pyramid, the designated study included ever more patients until, from the apex, the Cochrane review, a physician could see the health not just of individual patients but of whole populations. Seeing much was still the goal.

.   .   .

Cochrane reviews are named after their inspiration, Archibald “Archie” Leman Cochrane, a pioneering British physician and epidemiologist who was present at many of the seminal moments of twentieth-century European history. Born into a wealthy Scottish tweed-making family whose patriarch was killed at the Battle of Gaza, Archie underwent psychoanalysis with disciples of Freud in Berlin and Vienna, served in an ambulance unit in the Spanish Civil War, earned a medical degree, and enlisted in the Royal Army Medical Corps as a medical officer. The Germans took him prisoner during his first military action and, because he had learned German during his course of psychoanalysis, put him in charge of providing medical care for twenty thousand fellow prisoners in the Dulag, a POW transit camp. Cochrane never had a traditional medical practice; from the start of his career, he was asked to manage the health of a population. In the camps, he was struck by how little evidence was available to guide him in selecting for or against a particular intervention. In what would become his habit, Cochrane ran simple clinical trials in which he would divide his patients into separate treatment groups to determine which treatment was most effective. When he was released from the POW camp, Cochrane returned to London to continue his medical training. He studied there with Austin Bradford Hill, a British epidemiologist famed both for identifying the association between cigarette smoking and lung cancer and for conducting the first randomized controlled trial.5 In a randomized controlled trial, participants are randomly assigned to different interventions, a design that reduces the influence of biases or prejudice toward or against a particular intervention. In Hill’s randomized controlled trials, Cochrane believed he had found a way to fix medicine.

The problem facing medicine, Cochrane later wrote, was fundamentally a question of determining which interventions worked and how these interventions should be distributed among a population. Even as a POW physician, he had been surprised at how little his own interventions altered the health of his patients:

Under the best conditions one would have expected an appreciable mortality; there in the Dulag I expected hundreds to die of diphtheria alone in the absence of specific therapy. In point of fact there were only four deaths, of which three were due to gunshot wounds inflicted by the Germans. This excellent result had, of course, nothing to do with the therapy they received or my clinical skill. It demonstrated, on the other hand, very clearly the relative unimportance of therapy in comparison with the recuperative power of the human body. On one occasion, when I was the only doctor there, I asked the German Stabsarzt for more doctors to help me cope with these fantastic problems. He replied, “Nein! Ärzte sind überflüssig.” (“No! Doctors are superfluous.”) I was furious and even wrote a poem about it; later I wondered if he was wise or cruel; he was certainly right.6

From this experience, Cochrane learned to distrust received wisdom, medical dogma, and the necessity of medical treatment.

Cochrane knew, though, that there were times when having access to a physician was essential to health. At those times, he was deeply committed to equality in healthcare. So when the British government founded the National Health Service in 1948 to provide and finance healthcare through tax payments, Cochrane was a strong supporter. In time, though, he feared that its free services were leading patients to seek, and physicians to provide, all the possible treatments available. The solution, Cochrane believed, was for epidemiologists to inform physicians and the National Health Service about which of the possible interventions should be applied to a particular patient—he made the utilitarian argument that healthcare systems should provide only the interventions with the most favorable costs and risk-benefit profiles, the interventions demonstrated to be efficient and effective. In 1971 he delivered a series of lectures on the role randomized controlled trials could play in the National Health Service. Published as the book Effectiveness and Efficiency: Random Reflections on Health Services, the lectures became a cornerstone of clinical epidemiology and evidence-based medicine.

Cochrane’s ideas influenced not only David Sackett and his students at McMaster as they developed evidence-based medicine, but also the British health-services researcher Iain Chalmers. In his book, Cochrane had called for an international registry of randomized trials. Later he called specifically for a “critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials.”7 Chalmers responded to Cochrane’s challenge by collecting all the randomized controlled trials he could find for his own field of study and organizing them into the Oxford Database of Perinatal Trials, an electronic resource for physicians and policy makers.

Chalmers’s work addressed several problems with randomized controlled trials. One trouble with such trials is that their results apply only to the participants of the particular study, not necessarily to the population to which a physician might want to apply them. Another is that two well-conducted randomized controlled trials may reach contradictory conclusions. Chalmers’s solution was to synthesize the findings into a single review, called a systematic review. Systematic reviews are both collections, gathering all the high-quality studies on a particular topic, and syntheses, integrating their findings to answer a particular question. In 1989, Chalmers and his colleagues published A Guide to Effective Care in Pregnancy and Childbirth, a two-volume collection of systematic reviews that collectively proved that many common interventions in obstetrics were not only unhelpful but also dangerous.8 Chalmers dreamed of writing similar reviews for the entirety of medicine, and in 1992 the research and development division of the National Health Service helped him toward that goal by creating a center to conduct and publish similar systematic reviews; they appointed Chalmers as the first director and named the organization after his inspiration, Archie Cochrane.

Today, the Cochrane Collaboration is an international nonprofit organization staffed by thirty thousand volunteer patients, practitioners, and researchers in 120 countries. Together, the members have written six thousand reviews of specific interventions in medicine. Cochrane reviews are like Wikipedia with better math; the work is crowd-sourced, with reviews edited and updated regularly, but the resulting knowledge is appraised and synthesized using biostatistics. They are available worldwide and are free for practitioners and policy makers in low- and middle-income countries. The reviews are self-consciously characterized as the best evidence in contemporary medicine.9

In a sly way, I think they are also a testament to Cochrane’s suspicion that physicians are superfluous to healthcare.

.   .   .

My mentor and I were assigned to write a Cochrane review of a drug to treat schizophrenia, the chronic, often disabling, psychotic disorder. The drug, paliperidone, was new but familiar; it is a metabolite of risperidone, a widely prescribed drug that chiefly works by blocking dopamine receptors in the brain. When risperidone’s patent ran out, the manufacturer’s profits withered, so the company introduced paliperidone to take its place. Paliperidone had a few novel characteristics—its long-acting version has some practical advantages over risperidone’s—and we were eager to evaluate it.

I managed to do the work in the margins of the day, in the clinic when a patient cancelled or from home while our son was sleeping. The review required no funding or regulatory approval, only my time and a laptop capable of running the Cochrane Collaboration’s proprietary software.

Writing the review felt like building a ready-made. Using the Collaboration’s search protocol, I combed the international medical literature, which yielded thousands of pages of journal articles, presentation abstracts, and regulatory filings. My mentor and I read through them, identified additional references, and drew up a list of all the possible studies. Then we graded each study against criteria provided by Cochrane headquarters. If a study was the kind of randomized controlled trial that Archie liked, we included it; if not, we excluded it. A lot of this work seemed like something robots will eventually do, but we kept on, abstracting data from each of the included studies, entering it into a Cochrane-provided program, and watching as the program returned a statistical summary. We interpreted the statistics, wrote a narrative summary, edited it based upon peer review, and then sent our evidence-based, state-of-the-art systematic review off into the world.
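The statistical heart of that summary is simple enough to sketch. What follows is a minimal illustration in Python, not the Collaboration’s actual software: a fixed-effect meta-analysis that pools each trial’s effect estimate using inverse-variance weights. The trial numbers are invented for the example.

```python
import math

# Hypothetical per-trial results as (log odds ratio, standard error);
# the numbers are invented for illustration, not drawn from our review.
trials = [
    (-0.42, 0.21),
    (-0.15, 0.18),
    (-0.30, 0.25),
]

# Fixed-effect, inverse-variance pooling: weight each trial by 1/SE^2,
# so larger, more precise trials count for more in the summary.
weights = [1.0 / se**2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval on the log scale, back-transformed to odds ratios.
low = math.exp(pooled - 1.96 * pooled_se)
high = math.exp(pooled + 1.96 * pooled_se)
print(f"Pooled OR {math.exp(pooled):.2f} (95% CI {low:.2f} to {high:.2f})")
```

Each row of a forest plot corresponds to one entry in that list; the pooled diamond at the bottom of the plot is the weighted average this calculation produces.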

It stood little chance against the marketing pamphlets. Their conclusions were extravagant; ours were measured. They offered physicians glossy, full-color ads with pictures of disheveled men and women on one side and—thanks to the miracle of paliperidone!—clean and confident ones on the other; we issued a several-hundred-page, single-spaced document whose only pictures were forest plots and data tables. Though we found little evidence that paliperidone was superior to its less expensive parent compound, the manufacturer grossed billions of dollars on it. Despite following the best practices and selecting and synthesizing the best possible evidence, I felt as though the experience had been both inefficient and ineffective. The review’s selection criteria limited our analysis to the randomized controlled trials of paliperidone, but the only people conducting randomized controlled trials of paliperidone were researchers funded by the manufacturer, so our review was a carefully conducted analysis of all the marketing pamphlet studies it was meant to displace. In a funny way, our review unintentionally extended the evidence-based imprimatur to paliperidone.

And the studies with which we worked rarely answered the questions practitioners face when deciding whether to use paliperidone for a particular patient. Those studies excluded people with unstable social circumstances or substance-abuse problems and those who historically did not respond to risperidone. Of course, many psychiatric patients fit all three of these criteria. Worse, most of those studies lasted only six weeks—six weeks for a condition we believe is chronic. So if a patient asks me about the long-term effects of a medication I am supposed to recommend that he or she take for the rest of his or her life, the evidence-based answer remains little more than a shrug of the shoulders.

.   .   .

Randomized trials are designed for the kingdom of quantification, a place where we can readily measure standardized outcomes. They are hard to relate to the world of patients like Bao. Bao’s very self was porous, profoundly dependent upon how others—whether an internist, a police officer, or a psychiatrist—related to her. Counting anything connected with her first required imposing a decision about where she began and where she ended.

During my writing of the Cochrane review—it took two years of nights and weekends—I was seeing a lot of Bao. She had had sex with the police officer, and while she had not gotten pregnant, she became suicidal again when he stopped returning her calls. In the months that followed, she frequently returned to the Emergency Department, now always with thoughts or gestures of suicide. No need to call the internist. Instead, I would be paged away from other patients once a week so that I could evaluate her.

I would pull up a chair and ask her what was going on. She would roll up her sleeves to show bright red lines on her forearm or dig into her purse and wordlessly present me with an empty pill bottle like a defiant child. I would measure the depth of the cuts or count the number of pills she had swallowed, and if I thought my calculations indicated a gesture, a bid for attention, I would send her home. If they added up to a serious attempt, a threat on her life, I would admit her upstairs. I had no Cochrane review or evidence-based tool to guide my decision whether to discharge or admit Bao, but I tried to quantify the problem anyway because I had too little clinical judgment. I was trying to see her as a medical problem to be summed and solved: When Bao first walked into her internist’s office requesting an oral contraceptive, her problem was that she had a history of emotional instability and interpersonal conflict that suggested she was headed for an unhealthy sexual encounter.

Or: Bao’s problem was that a law enforcement officer had violated his code of conduct and propositioned Bao because of her ethnicity and vulnerability.

Or: Bao’s problem was that, as Archie Cochrane might have said, physicians were involved in a decision they should not have been.

But in a hospital run according to the logic of evidence-based medicine, Bao’s problem was simply that she wanted to have sex without getting pregnant. So her internist asked questions about the adverse effects that evidence-based reviews have most consistently associated with oral contraceptives.

These kinds of decisions are made daily in the clinic and hospital. A patient presents with a problem, and a physician offers a medical interpretation of the problem. The physician-as-technician then efficiently and effectively selects an evidence-based intervention for the problem, having first made the assumption that the patient needs a medical intervention. Evidence-based medicine guides like Cochrane reviews will never be able to rate how suitable a particular sexual partner might be or whether a patient will become suicidal after having sex with a police officer. Archie Cochrane had asked whether physicians should intervene, an apt question for practitioners in the National Health Service, where all services were funded and provided by the British government. In the National Health Service, a physician who saw Bao and elected to prescribe neither an oral contraceptive nor a course of psychotherapy would still be paid. When the Cochrane Collaboration was created, however, it advanced Archie Cochrane’s ideas into countries with very different financing systems. In these systems, practitioners are paid only if they do something or help someone achieve a measurable outcome. In these systems, a practitioner receives little payment if he or she does not prescribe an intervention of some kind for a patient like Bao. In this different context, Cochrane reviews do not ask if doctors should perform a medical intervention; they ask which medical intervention doctors should perform. The authors of Cochrane reviews know (and repeatedly state) the limits of the knowledge they generate, but those limits are forgotten when we turn that knowledge into scripts for clinical encounters where an intervention is presumed. In a clinic, a doctor must efficiently and effectively intervene in the life of a patient like Bao, even when Archie Cochrane himself might have said “No! Doctors are superfluous.” In medicine, we rarely recall Cochrane’s warning. Instead, we are working to bring his style of epidemiological findings into clinical practice, to bring the bedside ever closer to the dataset, and to introduce ever more technologies of distance between physicians and patients.