What gives medicine its authority is the presumption that it is based on science. A few centuries ago, the major source of intellectual and moral authority in Western culture was religion, which requires that you put your trust in some distant personage like Jesus or Mohammed because large numbers of other people who are thought to be trustworthy already do so. Science was a big improvement in that it does not require trust, which rests on social conformity, but offers a way of verifying things for yourself. I know that any scientific claim I encounter—whether about the moons of Jupiter or the best way to treat a fever—can in principle be tested by repeating the observations made by scientists. No belief is required, only patience and the infinite humility it takes to learn daunting forms of mathematics and biology. If the claim cannot be verified by independent observers, if it is, in other words, “irreproducible,” we are forced to conclude that it is not true.

Since most of us are unlikely to learn enough math to calculate the orbits of Jupiter’s moons, we tend to defer to those who have, at least where planetary moons are concerned. Similarly, modern, educated people are usually intimidated by the mere invocation of science. We want remedies that are “scientifically proven” and can bear up under the phrase “studies show.” So in the medical case, if a person previously unknown to you requested that you undress and submit to his or her probings, you would be unlikely to comply. But if that person could justify the request with decades of experience and peer-reviewed studies showing that this procedure had contributed to the longevity and well-being of large numbers of others—well, then, it might be wise to do what is asked of you. The medical profession won its monopoly over the business of healing by invoking its scientific basis, and maintained that monopoly by diligently patrolling its borders against the alternatives, long described as “pseudoscience.” A little more than a century ago, the matter appeared to be settled when nonphysicians were legally barred from practicing, which in the United States meant outlawing midwifery in favor of obstetrics and marginalizing homeopathy in favor of “allopathic,” aka “regular” or scientific, medicine.

Only slowly did any kind of real détente arise, with the AMA-style doctors gradually softening their invective against alternatives. As late as the 1950s, the American Cancer Society, which is about as medically conventional as a health-related group could be, still had a “Committee on Quackery.” But according to Harvard magazine:

Later that [committee] turned into a committee on “unproven methods of cancer management,” superseded by one on “questionable methods.” The names indicate a gradual acceptance of the unconventional; today the cancer society has a Committee on Complementary and Alternative Medicine (CAM). The evolving vocabulary also reflects a sea change underway throughout medicine. In the last few years, the term “alternative,” suggesting something done instead of conventional medicine, has been giving way to “complementary,” a therapy done along with mainstream treatment. Both words may ultimately be replaced by “integrative medicine”—the use of techniques like acupuncture, massage, herbal treatments, and meditation in regular medical practice.1

This may look like commendable humility on the part of conventional medicine—or it may look like a shameless compromise. But how scientific is conventional “scientific” medicine anyway? By the late twentieth century, mathematically oriented physicians, as well as many patients, were beginning to ask for something more than a doctor’s word for the efficacy of medical interventions and something more tangible than the mere aura of science. They wanted hard evidence, and one familiar procedure after another has since come up short.

In 1974, David M. Eddy, a physician turned mathematician, was asked to give a talk on medical decision making, and chose to focus on diagnostic mammography, since Betty Ford’s and Happy Rockefeller’s breast cancers were very much in the news at the time. Years later, he wrote that he had “planned to write out the decision tree I presumed their physicians had used, fully expecting to find strong evidence, good numbers, and sound reasoning that I could describe to my audience. But to my amazement I found very few numbers, no formal rationale, and blatant errors in reasoning. How could that be?”2

He decided to try to study the decision making behind something that had been around much longer than mammography—a seventy-five-year-old treatment for ocular hypertension (elevated pressure within the eye) that had been used on tens of millions of people. But he could find only eight controlled studies—meaning studies comparing people who had received the treatment to a group of similar people who had not—and these studies were all “very small and poorly designed.” Worse, six of them found that the recipients of treatment ended up worse off than those who had not received it. Eddy moved on to analyze other treatments, only to be warned off by experts who told him there was not enough data to work with.

That clinched it. If there wasn’t sufficient information to develop a decision tree, what in the world were physicians basing their decisions on? I then realized that medical decision making was not built on a bedrock of evidence or formal analysis, but was standing on Jell-O.3

So began what was soon called “evidence-based medicine,” the idea that anything performed on patients should be backed up by statistical evidence. This was a provocative designation, raising the immediate question of what medicine had been based on up until now: anecdotes, habits, hunches? Or was medicine traditionally not “evidence-based” but “eminence-based,” that is, backed by the reputations and institutional standing of the people who practiced it?

Most of the medical screenings that have been pressed on me by one health professional or another fail the evidence-based test. Mammograms, for example: The conventional wisdom, tirelessly repeated by the leading breast cancer advocacy group, the Susan G. Komen Foundation, was that early detection through annual mammography would significantly increase the five-year survival rate from breast cancer.4 But repeated, huge, often international studies showed no significant decline in breast cancer mortality that could be attributed to routine mammographic screening. (Survival statistics can improve simply because earlier detection starts the clock earlier, not because anyone actually lives longer.) True, any woman whose cancer had been detected through screening might claim to have been saved by the intervention, but the likelihood was that the spot in her mammogram would never have developed into full-blown cancer. What screening was finding, and doctors were treating, were often slow-growing or inactive tumor sites—or even noninvasive conditions like the wrongly named “ductal carcinoma in situ,” or DCIS. Treating pre-cancers and non-cancers may seem like a commendable excess of caution, except that the treatments themselves—surgery, chemotherapy, and radiation—entail their own considerable risks. Disturbingly enough, breast biopsies are themselves a risk factor for cancer and can “seed” adjacent tissue with cancer cells.5

The same kinds of concerns apply to screening for prostate cancer, which consists of a blood test for prostate-specific antigen (PSA) plus a digital rectal exam. As with mammography, statistical studies have found no overall decrease in mortality that can be attributed to the PSA screening that has been in place since the late 1980s.6 Here too, the price of overdiagnosis and overtreatment can be high: radiation and hormonal therapies that can lead to incontinence, impotence, and cardiovascular disease.7 In 2011, the U.S. Preventive Services Task Force recommended that men no longer receive PSA tests, and two years later the American Urological Association grudgingly followed suit, limiting its endorsement of PSA screening to men between the ages of fifty-five and sixty-nine.8 As for colonoscopies, they may detect potentially cancerous polyps, but they are excessively costly in the United States—up to $10,000—and have been found to be no more accurate than much cheaper, noninvasive tests such as examination of the feces for traces of blood.9

There is an inherent problem with cancer screening: It has been based on the assumption that a tumor is like a living creature, growing from small to large and, at the same time, from innocent to malignant. Hence the emphasis on “staging” tumors, from zero to four, based on their size and whether there is evidence of any metastasis throughout the body. As it turns out, though, size is not a reliable indicator of the threat level. A small tumor may be highly aggressive, just as a large one may be “indolent,” meaning that a lot of people are being treated for tumors that will likely never pose any problem. One recent study found that almost half the men over sixty-six being treated for prostate cancer are unlikely to live long enough for the disease ever to harm them.10 They will, however, live long enough to suffer from the adverse consequences of their treatment.

The physical part of the annual physical is largely determined by the individual physician and of course the insurance company or other institution underwriting the exam. According to the Canadian Task Force on Preventive Health Care, “It consisted of a head-to-toe physical examination and the use of whatever tests were available: blood count, urine glucose and protein, chest X-ray, and, since the 1950s, ECG, CT scans, and MRIs”11—and I would add an emphasis on “whatever.” In the 1940s and ’50s, when there were more US hospital beds than could be filled by the injured or ill, affluent patients could expect to be hospitalized for their annual exams, the better to anesthetize them for invasive procedures. At the other end of the class spectrum, more or less, pre-induction medical exams for the military were notably sketchy, usually consisting of a hearing and vision test, plus a quick inspection for hemorrhoids and open lesions. In between these extremes, most people had their vital signs taken, their urine and blood worked up, their breasts or testicles palpated, perhaps with a digital rectal exam thrown in. In 2015, the cost of annual physicals was estimated at $10 billion a year.12

Women are supposed to undergo a second, gynecological annual exam, and this one has been well defined since its inception in the 1950s: breast and external genitalia exams, a Pap smear to detect cervical cancer, a vaginal and perhaps rectal exam. These exams are not always voluntarily undertaken; they may be required as a condition of obtaining or renewing a prescription for a contraceptive: Recall the searing scene in Mad Men where Peggy undergoes a gyn exam in order to get birth control pills and the (male) doctor cautions her that just because the pills are expensive, she shouldn’t become “the town pump just to get [her] money’s worth.”13 Many women are traumatized by these exams, which in their detailed attention to breasts and genitalia so closely mimic actual sexual encounters. Out-of-place intimacies, like unwelcome touching by a male coworker, are normally regarded as “sexual harassment,” but the entire gyn exam consists of intimate touching, however disguised as a professional, scientifically justified procedure. And sometimes this can be a pretty thin disguise. A physician attached to an American missionary compound in Bangladesh is alleged to have molested girls, one as young as twelve, by subjecting them to almost daily breast and pelvic exams—procedures that are not normally performed on preteen girls.14

Even under the best, most “professional” circumstances, such exams can be deeply upsetting. According to one woman, writing on a site called For Women’s Eyes Only, pelvic exams are “humiliating, degrading, and painful”:

The first time I had a pap smear done, I was so traumatized, I now have to take prescription Xanax to avoid having panic attacks when I get pap smears done now. And I’m only 24. How many more am I going to have to have for the rest of my life? What am I going to do when I want to have children and every doctor wants to shove his/her fingers and tools inside me?15

Other women strive for a state of psychological dissociation, attempting to share the physician’s view of their body as a passive and unfeeling object, detached from a conscious mind.

One problem, though certainly not the only problem, with these regularly scheduled invasions of privacy is that they do not save lives or reduce the risk of illness. In 2014, the American College of Physicians announced that standard gyn exams were of no value for asymptomatic adult women and were certainly not worth the “discomfort, anxiety, pain and additional medical costs” they entailed.16 As for the annual physical exams offered to both sexes, their evidentiary foundations began to crumble more than forty years ago, to the point where a physician in 2015 could write that they were “basically worthless.” Both types of exams can lead to false positives, followed by unnecessary tests and even surgery, or to a false sense of reassurance, since a condition that was undetectable at the time of the exam could blossom into a potentially fatal cancer within a few months. But such considerations do not seem to have deterred many physicians, like this one, quoted in a New York Times article entitled “Annual Physical Checkup May Be an Empty Ritual”:

Dr. Barron Lerner, an internist and historian of medicine at Columbia University’s College of Physicians and Surgeons, says he asks patients to come in every year and always listens to their heart and lungs, does a rectal exam, checks lymph nodes, palpates their abdomens and examines the breasts of his female patients.

“It’s what I was taught and it’s what patients have been taught to expect,” he said, although he acknowledged he would be hard pressed to give a scientific justification for those procedures.17

None of the above should be construed as an attack on the notion of scientific medicine. True, the medical profession has again and again misused the authority conferred on it by science to justify unnecessary procedures for the sake of profits or simply to gratify physicians’ egos (and, in the worst case, sexual impulses). But medicine’s alliance with science has also brought incalculable benefits, from sterile technique in the operating room to lifesaving pharmaceuticals. The only cure for bad science is more science, which has to include both statistical analysis and some recognition that the patient is not “just a statistic,” but a conscious, intelligent agent, just as the doctor is.

There remains a considerable market for “comprehensive” exams filled with tests and procedures that are no longer recommended, just as there is a luxury market for antique cars and vinyl records. I first encountered this phenomenon in the 1990s, when a wealthy acquaintance, unprompted by any symptoms, took off for a two-day-long medical exam at Johns Hopkins. Other, perhaps even wealthier, people opt for multiday exams combined with “spa services” and “lifestyle coaching” in luxury vacation settings. As of 2008, 22 percent of Fortune 500 companies provided “executive exams” to their top personnel,18 both as a perk and as a way to avoid having a trusted leader die of a heart attack at his or her desk. But an article in the Harvard Business Review entitled “Executive Physicals: What’s the ROI [Return on Investment]?” answers its own question with what amounts to a firm “not much”—and for all the reasons I have given here: the frequency of false positives, the dangers of the tests themselves (such as radiation), and the unlikelihood of finding a problem in a still-treatable stage.19

Mounting insistence on evidence-based medicine—some of it coming from the health insurance industry—led, in the early twenty-first century, to the perception that medicine was going through an “epistemological crisis,” that is, a crisis in its intellectual foundations. In 2006 the noted bioethicist Arthur L. Caplan wrote:

Contemporary medicine is sailing on very rocky seas these days. It is being buffeted by ever-rising costs, doubts about its efficacy, and intrusions on its turf from competitors that range from optometrists, psychologists, chiropractors, midwives, and nurse-anesthetists to the friendly folks at the herb and vitamin store.20

But, he went on, it was the “fervency of the embrace of evidence-based medicine”21 that struck at the profession’s core conceit: the notion that it was derived, at least since the late nineteenth century, from the arduous methods and processes of the hard sciences.

Laboratories and Cadavers

In fact, the relationship between medicine and science has always been tenuous. A hundred and fifty years ago, there was no American medical profession, only a collection of diverse men and women claiming healing skills, some with years of experience, but many with little more than an apprenticeship to go on. Only toward the end of the nineteenth century did it become fashionable for relatively elite college-educated doctors to round off their educations in Germany. There, they were entranced by the universities’ gleaming new medical research labs, with their microscopes, test tubes, and well-scrubbed countertops, which had no analog in the United States. Laboratories are forbidding places to the uninitiated, showing few signs of human occupation except for the occasional stool and making no concessions to decoration. But to a scientist they represented a place where he (and at one time it was almost always “he”) could exercise total control, with no disruptive breezes or temperature changes and hopefully no contaminants. It was the white coat of the laboratory scientist, the chemist, or the bacteriologist that physicians eventually adopted as a uniform suitable for encounters with patients. What it symbolizes is not only cleanliness, but mastery and control.

In a laboratory, the causes of disease could be traced to the cellular level and studied like any other natural phenomenon, hence the proclamation of the famed German researcher Rudolf Virchow that “medical practice is nothing but a minor offshoot of pathological physiology as developed in laboratories of animal experimentation.”22 This is of course an arguable proposition, but it immediately provided legitimation for a wave of professional reform in the United States: Medicine was the business of scientists, or at least of the scientifically trained, and no one should be legally certified to practice it without at least two years (now four) of college education and a thorough background in the laboratory sciences.

But the relevance of the scientific reforms in medical education to the actual practice of medicine remains unclear. For example, no one attains the right to practice medicine without studying organic chemistry in college—a course that pre-med students call the “weed-out course” since it eliminates so many aspirants to the medical profession. But organic chemistry, delightful as it may be from my point of view, has no obvious contribution to make to medicine. A comprehension of electron orbitals is not required for an understanding of the germ theory of disease, nor do you need to appreciate the structure of DNA to study genetic disorders. An obstetrician complained:

The Krebs cycle is a classic example—a biochemical cycle where you have to learn all these enzymes and when you get through you never use it. My sister in med school now tells me the same thing. She can’t understand why she is going through all these detailed analyses of DNA structures and things like that.23

But one likely effect of the scientific reform of medicine was to scare away any critical social scientists. Except for tongue-in-cheek offerings like “Body Ritual Among the Nacirema,” none of the anthropologists and sociologists who took up the study of medical care in the mid- to late twentieth century dared to question the relative efficacy of “primitive” rituals and those of modern scientific medicine. They seem to have assumed that medical procedures, being based on scientific observations and methods, must be of proven value, even when those procedures looked suspiciously like “rituals.” After all, the social sciences also considered themselves to be “sciences,” and were habitually deferential to the medical profession, with its formidable armor of biochemistry and microbiology. No mere social scientist was prepared to weigh in on the question of whether particular medical procedures actually did any good.

Predictably enough, the medical reforms of the early twentieth century narrowed the demographic base of the medical profession. The requirement that medical schools possess laboratories eliminated most schools that had admitted women and African-Americans. Furthermore, at a time when only 5 percent of the population had a college degree, the requirement of at least some college limited medical school admissions to the upper and upper middle classes. No longer could any “crude boy or…jaded clerk,” as one of the leading reformers described the run-of-the-mill doctors of his time,24 expect to gain medical training. Doctors would be recruited from the class of “gentlemen,” which was why even female patients could now entrust them with intimate access to their bodies. For most people, throughout most of the twentieth century, medical care necessarily involved an encounter with a social superior—a white male from a relatively privileged background.

With medicine anchored, at least symbolically, in laboratory science, the practice of medicine changed too. Medicine began to look like an “extractive industry,” as health policy expert Robb Burlage once put it,25 with the doctor’s office serving as a collection site where blood, urine, and bits of tissue were converted into laboratory samples and hence into data. Or the harvest might be images, like X-rays or CT scans, sometimes sent out for analysis, possibly to a distant country where radiologists are paid less. As the focus shifted to tissues and cells, physicians began to seem impatient with the intact human body. They wanted—and according to their training, needed—to reach inside, past the skin, to whatever pathologies lay within. Melvin Konner, an anthropologist turned medical student, described his first experience with surgery:

My fingers had been inside another person’s body, not just in the mouth or vagina or rectum, but beneath the protective surface of the skin, the inviolable film set up by millions of years of evolution, the envelope of ultimate individuality.…For me it had been an unforgettable experience.26

In the laboratory-centered environment, the patient’s words—his or her medical history and reported symptoms—count for less than the objective data that instruments can collect. Recall my difficulty in convincing an internist that I could breathe perfectly well, despite the evidence from his brand-new device. At another time in my life, I had the opposite problem—trying to convince a doctor that my cardiac symptoms were “real,” not psychosomatic (the problem was eventually diagnosed as non-life-threatening and treatable with a beta blocker). When you visit a doctor for the first time, you may be asked to show up half an hour early to give you time to fill out a lengthy history, but many of the questions will be asked again anyway, suggesting that your efforts went unread. Or they may be ignored. Thomas Duncan, the first person to die of Ebola in the United States, told an emergency room nurse that he had just arrived from Liberia, one of the epicenters of the epidemic, but that information never reached the attending physician, who sent Duncan home with instructions to take Tylenol.

It could be argued that the ideal patient says nothing, lies perfectly still, and makes no objection to the most invasive procedures. In fact, the first “patient” a medical student usually encounters is dead—a cadaver donated for dissection—and death is a condition that, as philosopher Jeffrey P. Bishop points out, is almost a prerequisite for scientific study: “After all, life is in flux, and it is difficult to make truth claims about matter in motion, about bodies in flux.”27 The heart is beating, blood is flowing, cells are metabolizing and even rushing about the living tissue. “Thus, life is no foundation upon which to build a true science of medicine,” he continues.28 This may sound like calculated irony, but consider how a “true science,” like biology, actually works. Until very recent advances in microscopy, the study of life at the microscopic level required that you first kill a laboratory animal, remove the tissue you want to study, slice a bit of it very thin, then “fix” it by rendering it thoroughly dead—in fact embalmed—with formaldehyde. Only then is it ready to go on the glass slide that you will see through the microscope, although what you see will be only a distant approximation of living tissue within a living animal, just as a field strewn with dead bodies will give you little idea of the issues that led to war. Similarly, Bishop proposes that the dead body is “epistemologically normative” in medicine, since what goes on in living bodies is too blurry, ever-changing, and confusing for study.

Many physicians and social scientists have questioned the pedagogical value of cadaver dissection. After all, the cadaver is dead and artificially preserved; it is smelly, leathery, and utterly lacking in the “flux” that constitutes life. Some prestigious medical schools have abandoned it altogether, instead teaching anatomy with plastic models and “prosections,” body parts dissected in advance by experts. But for the most part, American medical schools (though not Italian ones) still insist on cadaver dissection, going so far as to defend it as a “rite of passage,” in which even the trauma that some medical students experience can be justified as a vital part of their transformation from initiate into full-fledged physician. Medical schools often attempt to “humanize” the process with little rituals of gratitude to the cadavers’ donors, but dissection remains a violent and transgressive undertaking. As one bioethicist observes:

One of the functions of anatomy lab is to help teach physicians how to violate social norms that operate in every other social situation, a skill that will be necessary in clinical practice. The detachment that allows student physicians to cut up the dead may help practicing clinicians to put their hands and medical instruments in patients’ various bodily orifices or to ask patients to confess their most shameful secrets and expose their nakedness in the most vulnerable positions.29

Every profession requires a certain degree of detachment on the part of its practitioners, but in the case of medicine this affect may mask something darker. Konner, the anthropologist who became a medical student, observes that “the stress of clinical training alienates the doctor from the patient, that in a real sense the patient becomes the enemy.”30 The doctors-in-training, invariably exhausted, blow off steam by talking trash about their patients, who are of course the immediate cause of their woes—patients “blow” their IVs, as Konner mentions, or spike sudden fevers. Then too, the rushed pace of modern clinical practice, in which outpatients may be scheduled ten or fifteen minutes apart, contributes to the kind of resentment a retail worker feels in the face of a customer overload. The doctor’s detachment is not a defense against excessive empathy, but a “downright negative” emotional stance, Konner suggests:

To cut and puncture a person, to take his or her life in your hands, to pound the chest until the ribs break…these and a thousand other things may require something stronger than objectivity. They may actually require a measure of dislike.31

So it is ironic that it is the patient—the thinking, feeling, conscious patient, so long discounted or ignored—whom the medical profession has turned to as an ally against the threat of evidence-based medicine. When the epidemiologist points out the uselessness of a certain procedure, the physician counters that this is what his or her patients want and even demand. An internist in Burlington, North Carolina, reports that when he told a seventy-two-year-old patient that she did not need many of the tests she was expecting in her annual physical, she wrote a letter to the local paper complaining about him as an example of “socialized medicine.”32 What patients want, according to the foes of evidence-based medicine, is above all a highly stylized but human interaction with a doctor. As one physician argues:

Medicine’s theatrical trappings—the operating theaters, the costumes such as the doctor’s white coat and the patient’s Johnny gown, the formalized lines and gestures—all contribute to an aesthetic ritual which gives emotional meaning to doctor-patient contact that transcends the notion of a cure.33

There are arguments that can be made against an overreliance on statistical evidence—that, for example, it may obscure a patient’s unique constellation of problems. As the popular doctor-writer Jerome Groopman puts it, “Statistics cannot substitute for the human being before you; statistics embody averages, not individuals.”34 Another argument often deployed against evidence-based medicine is that it can become a tool of the insurance industry to limit the care that will be reimbursed. We should always err on the side of excessive care, medical liberals insist, rather than promoting a potentially dangerous austerity. So there are reasonable arguments against the uncritical adoption of evidence-based medicine. But the notion that it undermines an interaction that “transcends the notion of a cure” is not one of them.