6    How the Scientific Attitude Transformed Modern Medicine

It is easy to appreciate the difference that the scientific attitude can make in transforming a previously undisciplined field into one of scientific rigor, for we have the example of modern medicine. Prior to the twentieth century, the practice of medicine was based largely on hunches, folk wisdom, and trial and error. Large-scale experiments were unknown and data were difficult to gather. Indeed, even the idea that one needed to test one’s hypotheses against empirical evidence was rare. All of this changed within a relatively short period of time, following the discovery of the germ theory of disease in the 1860s and its translation into clinical practice in the early twentieth century.1

We already saw in chapter 1 how Ignaz Semmelweis’s discovery of the cause of childbed fever in 1846 provides a prime example of what it means to have the scientific attitude. We also saw that he was far ahead of his time and that his ideas were met with unreasoned opposition. The scientific attitude that Semmelweis embraced, however, eventually took hold throughout medicine. At about the same time as Semmelweis’s work, medicine saw the first public demonstration of anesthesia. For the first time, surgeons could take their time doing operations, as they no longer had to wrestle down fully awake patients who were screaming in pain. This did not in itself lower mortality, however: longer operations also meant that wounds were exposed to the air longer and were thus more likely to become infected.2 Only after Pasteur demonstrated that microorganisms were responsible for fermentation and putrefaction, and Koch detailed methods of sterilization, did the germ theory of disease begin to take root. When Lister introduced antiseptic techniques (which kill the germs) and aseptic surgery (which prevents the germs from entering in the first place) in 1867, it was finally possible to keep the cure from being worse than the disease.3

From today’s point of view, it is easy to take these advances for granted and underappreciate how they led to the growth of better quantitative techniques, laboratory analysis, controlled experimentation, and the idea that diagnosis and treatment should be based on evidence rather than intuition. But one should not forget that Western medicine has always fancied itself to be scientific; it is just that the meaning of the term has changed.4 Astrological medicine and bloodletting were once considered cutting edge, based on the highest principles of rationality and experience. One would be hard pressed to find any eighteenth-century doctor—or I imagine even one in the early Greek era—who did not consider his knowledge “scientific.”5 How can one claim, then, that these early physicians and practitioners were so woefully benighted? As we have seen, such things are judged by the members who make up one’s profession and so are relative to the standards of the age—but according to the standards of the early nineteenth century, bloodletting seemed just fine.

My goal in this chapter is not to disparage the beliefs of any particular period, even as practitioners cycled from untested beliefs to outrageous hypotheses, sometimes killing their patients with what we would today recognize as criminal incompetence. Instead, I would like to shine a light on how medicine found its way out of these dark ages and moved toward a time when its practice could be based on careful observation, calculation, experiment, and the flexibility of mind to accept an idea when (and only when) it had been empirically demonstrated. Remember that the scientific attitude requires not just that we care about evidence (for what counts as evidence can change from age to age) but that we are willing to change our theory based on new evidence. It is this last part that is crucial when some area of inquiry is trying to make the jump from pseudoscience to science, from mere opinion to warranted belief. For it was only when this mindset was finally embraced—when physicians stopped thinking that they already had all the answers based on the authority of tradition and began to realize that they could learn from experiment and from the experience of others—that medicine was able to come forward as a science.

The Barbarous Past

In his excellent book The History of Medicine: A Very Short Introduction, William Bynum reminds us that—despite bloodletting, toxic potions, skull drilling, and a host of “cures” that were all too often worse than the disease—Western medicine has always considered itself modern.6

One of the greatest systems of medicine in antiquity came from Hippocrates, whose chief insight (besides the Hippocratic Oath for which he is famous) involved his theory of how the “four humors” could be applied to medicine. As Roy Porter puts it in his masterful work The Greatest Benefit to Mankind:

From Hippocrates in the fifth century BC through to Galen in the second century AD, “humoral medicine” stressed the analogies between the four elements of external nature (fire, water, air and earth) and the four humours or bodily fluids (blood, phlegm, choler or yellow bile and black bile), whose balance determined health.7

The theory was impressive and the explanatory potential intriguing. Since disease was thought to be due to an imbalance in the humors—colds were due to phlegm, vomiting was due to bile, and so on—health could be maintained by keeping them in balance, for instance through the practice of bloodletting.8

Though bloodletting goes back at least to Hippocrates, Galen brought it to a high art, and for more than a thousand years thereafter (right through the nineteenth century) it was seen as a therapeutic treatment. Having written four separate books on the pulse, Galen thought that bloodletting allowed the healer to capitalize on the example of nature, where the removal of excess fluid—such as during menstruation—prevented disease.9 As Porter puts it, “whatever the disorder—even blood loss—Galen judged bleeding proper.”10 Accordingly, he often bled his patients to the point of unconsciousness, which sometimes resulted in their death.

This is not the only ancient medical practice that we would judge barbarous from today’s point of view; there were also skull drilling, leeches, the swallowing of mercury, the application of animal dung, and more. What is shocking, however, is the extent to which such ignorance went unchallenged down through the centuries, with the result that until quite recently in the history of medicine, patients often had just as much to fear from their doctor as they did from whatever disease they were trying to cure.

It is not just that the theories in circulation at the time were wrong—for, as we have seen, many scientific theories will turn out to be mistaken—but that most of these ideas were not even based on any sort of evidence or experiment in the first place. Medicine did not yet have the scientific attitude.

The Dawn of Scientific Medicine

The transition out of this nonempirical phase of medicine was remarkably slow. The Scholastic tradition lingered in medicine long after it had been abandoned by astronomy and physics, with the result that even two hundred years after the scientific revolution had begun in the seventeenth century, medical questions were customarily settled by theory and argument—to the extent that they were settled at all—rather than controlled experiment.11 Both the empirical and clinical practices of medicine remained quite backward until the middle of the nineteenth century. Even to the extent that a fledgling science of medicine began to emerge during the Renaissance, it had more of an effect on knowledge than on health.12 Despite the great breakthroughs in anatomy and physiology in early-modern times (e.g., Harvey’s seventeenth-century work on the circulation of blood), “[medicine’s] achievements proved more impressive on paper than in bedside practice.”13 In fact, even the one unambiguous improvement in medical care in the eighteenth century—the vaccine against smallpox—is seen by some not as the result of science so much as embracing popular folk wisdom.14 This “nonscientific” outlook persisted in medicine straight through the eighteenth century (which was known as the “age of quackery”); it was not until the middle of the nineteenth century that modern medicine truly began to arise.15

In his delightful memoir, The Youngest Science, Lewis Thomas contrasts the kind of scientific medicine practiced today with

the kind of medicine taught and practiced in the early part of the nineteenth century, when anything that happened to pop into the doctor’s mind was tried out for the treatment of illness. The medical literature of those years makes horrifying reading today: paper after learned paper recounts the benefits of bleeding, cupping, violent purging, the raising of blisters by vesicant ointments, the immersion of the body in either ice water or intolerably hot water, endless lists of botanical extracts cooked up and mixed together under the influence of nothing more than pure whim. Most of the remedies in common use were more likely to do harm than good.16

Bloodletting in particular remained popular, owing in part to its enthusiastic support by the prominent physician Benjamin Rush. A holdover from antiquity, bloodletting was thought to have enormous health benefits and constituted one of the “extreme interventions” deemed necessary for healing. In his important book Seeking the Cure, Dr. Ira Rutkow writes that

citizens suffered needlessly as a result of Rush’s egotism and lack of scientific methodologies. In an age when no one understood what it meant to measure blood pressure and body temperature, and physicians were first determining the importance of heart and breathing rates, America’s doctors had no parameters to prevent them from harming patients.17

Yet harm them they did.

Doctors bled some patients sixteen ounces a day up to fourteen days in succession. Present-day blood donors, by comparison, are allowed to give one pint (sixteen ounces) per session, with a minimum of two months between each donation. Nineteenth-century physicians bragged about the totality of their bleeding triumphs as if it were a career-defining statistic. The regard for bloodletting was so deep-seated that even the frequent complications and outright failures to cure did not negate the sway of Rush’s work.18
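
To put these figures in perspective, here is a rough, purely illustrative back-of-the-envelope calculation (written in Python; the estimate of roughly ten pints for an average adult’s total blood volume is my own added assumption, not a figure from the source):

```python
# Figures taken from the passage above, plus one stated assumption.
OUNCES_PER_PINT = 16

bled_per_day_oz = 16         # "sixteen ounces a day"
days_in_succession = 14      # "up to fourteen days in succession"
total_bled_pints = bled_per_day_oz * days_in_succession / OUNCES_PER_PINT

donor_pints_per_session = 1  # modern limit cited above: one pint per session
sessions_per_year = 6        # at most one session every two months
donor_pints_per_year = donor_pints_per_session * sessions_per_year

adult_blood_volume_pints = 10  # rough assumption, not from the source

print(f"Bled over two weeks:            {total_bled_pints:.0f} pints")  # 14 pints
print(f"Modern donor, entire year:      {donor_pints_per_year} pints")  # 6 pints
print(f"Approximate adult blood volume: {adult_blood_volume_pints} pints")
```

On these figures, two weeks of such treatment would drain more blood than an average adult body contains at any one time, which is possible only because the body partially replenishes lost volume between bleedings; either way, the contrast with today’s donation limits is stark.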

Fittingly, Rush himself died in 1813 as a result of treatment by bloodletting for his typhus fever.19 Yet this was not the only horrific practice of the time.

Medicine was performed literally in the dark. Electricity was newfangled and unpopular. Almost every act a doctor performed—invasive examinations, elaborate surgeries, complicated births—had to be done by sun or lamplight. Basics of modern medicine, such as the infectiousness of diseases, were still under heavy dispute. Causes of even common diseases were confusing to doctors. Benjamin Rush thought yellow fever came from bad coffee. Tetanus was widely thought to be a reflex irritation. Appendicitis was called peritonitis, and its victims were simply left to die. The role that doctors—and their unwashed hands and tools—played in the spread of disease was not understood. “The grim spectre of sepsis” was ever present. It was absolutely expected that wounds would eventually fester with pus, so much so that classifications of pus were developed. Medicine was not standardized, so accidental poisoning was common. Even “professionally” made drugs were often bulky and nauseating. Bleeding the ill was still a widespread practice, and frighteningly large doses of purgatives were given by even the most conservative men. To treat a fever with a cold bath would have been “regarded as murder.” There was no anesthesia—neither general nor local. Alcohol was commonly used when it came to enduring painful treatments and pure opium [was] sometimes available too. If you came to a doctor for a compound fracture, you had only a fifty percent chance of survival. Surgery on brains and lungs was attempted only in accident cases. Bleeding during operations was often outrageously profuse, but, as comfortingly described by one doctor, “not unusually fatal.”20

It is important to contrast what was occurring in the United States (which was fairly late to the scientific party in medicine) with progress that was already underway in Europe. In the early nineteenth century, Paris in particular was a center for many advances in medical understanding and practice, owing perhaps to the revolutionary outlook that had seen hospitals move out of the hands of the church to become nationalized.21 In Paris, a more empirical outlook prevailed. Autopsies were used to corroborate bedside diagnoses. Doctors appreciated the benefits of palpation, percussion, and auscultation (listening), and René Laennec invented the stethoscope. A more naturalistic outlook in general brought medical students to learn at their patients’ bedsides in hospitals. Following these developments, the rise of scientific laboratories in Germany and the embrace of the microscope by Rudolf Virchow and others led to further advances in basic science.22 All of this activity attracted medical students from around the world—particularly the United States—to come to France and Germany for a more scientifically based medical education. Even so, the benefits of this for direct medical practice were slow to arrive on either side of the Atlantic. As we have seen, Semmelweis was one of the earliest physicians to try to bring a more scientific attitude to patient care in Europe, at roughly the same time the miracle of anesthesia was being demonstrated in Boston. Yet even these advances met with resistance.23

The real breakthrough arrived in the 1860s with the discovery of the germ theory of disease. Pasteur’s early work was on fermentation and the vexed question of “spontaneous generation.” In his experiments, Pasteur sought to show that life could not arise out of mere matter. But why, then, would a flask of broth that had been left open to the air “go bad” and produce organisms?24

Pasteur devised an elegant sequence of experiments. He passed air through a plug of gun-cotton inserted into a glass tube open to the atmosphere outside his laboratory. The gun-cotton was then dissolved and microscopic organisms identical to those present in fermenting liquids were found in the sediment. Evidently the air contained the relevant organisms.25

In later experiments, Pasteur demonstrated that these organisms could be killed by heat. It took a few more years for Pasteur to complete further experiments involving differently shaped flasks placed in different locations, but by February 1878 he was ready to make the definitive case for the germ theory of infection before the French Academy of Medicine, following this with a paper arguing that microorganisms were responsible for disease.26 The science of bacteriology had been born.

Lister’s work on antisepsis—which aimed at preventing these microorganisms from infecting the wounds caused during surgery—grew directly out of Pasteur’s success.27 Lister had been an early adopter of Pasteur’s ideas, which were still resisted and misunderstood throughout the 1870s.28 Indeed, when US president James Garfield was felled by an assassin’s bullet in 1881, he died more than two months later not, many felt, as a result of the bullet that was still lodged in his body, but of the probing of the wound site with dirty fingers and instruments by some of the most prominent physicians of the time. At trial, Garfield’s assassin even tried to defend himself by saying that the president had died not of being shot but of medical malpractice.29

Robert Koch’s laboratory work in Germany during the 1880s took things to the next step. Skeptics had always asked about germs, “Where are the little beasts?”—not accepting the reality of something they could not see.30 Koch finally answered this with his microscopic work, through which he was able not only to establish the physical basis for the germ theory of disease but even to identify the microorganisms responsible for specific diseases.31 This led to the “golden years” of bacteriology (1879–1900), when “the micro-organisms responsible for major diseases were being discovered at the phenomenal rate of one a year.”32

All of this success, though, may now compel us to ask a skeptical question. If this was such a golden age of discovery in bacteriology—one that provided such a stunning demonstration of the power of careful empirical work and experiment in medicine—why did it not have a more immediate effect on patient care? To be sure, there were some timely benefits (for instance, Pasteur’s work on rabies and anthrax), but one nonetheless feels a lack of proportion between the good science being done and its effect on treatment. If we see here the beginning of respect for empirical evidence in medical research, why was there such a lag before its fruits were realized in clinical practice?

As of the late nineteenth century the few medicines that were effective included mercury for syphilis and ringworm, digitalis to strengthen the heart, amyl nitrate to dilate the arteries in angina, quinine for malaria, colchicum for gout—and little else. Blood-letting, sweating, purging, vomiting and other ways of expelling bad humours had a hold upon the popular imagination and reflected medical confidence in such matters. Blood-letting gradually lost favour, but it was hardly superseded by anything better.33

Perhaps part of the reason was that during this period experiment and practice tended to be done by different people. A social bifurcation within the medical community enforced an unwritten rule whereby researchers didn’t practice and practitioners didn’t do research. As Bynum says in The Western Medical Tradition:

“Science” had different meanings for different factions within the wider medical community. “Clinical science” and “experimental medicine” sometimes had little to say to each other. These two pursuits were practised increasingly by separate professional groups. Those who produced knowledge were not necessarily those who used it.34

Thus the scientific revolution that had occurred in other fields (physics, chemistry, astronomy) two hundred years earlier—along with consequent debates about methodology and the proper relationship between theory and practice—did not have much of an effect on medicine.35 Even after the “bacteriological revolution,” when the knowledge base of medicine began to improve, the lag in clinical practice was profound.

Debates about the methodology of natural philosophy [science] touched medicine only obliquely. Medicine continued to rely on its own canonical texts, and had its own procedures and sites for pursuing knowledge: the bedside and the anatomy theatre. Most doctors swore by the tacit knowledge at their fingers’ ends.36

For all its progress, medicine was not yet a science. Even if all of this hard-won new understanding was now available to those who cared to pursue it, a systemic problem remained: how to bridge the gap between knowledge and healing among those who were practicing medicine, and how to ensure that this new knowledge was put into the hands of the students who were the profession’s future.37

Medicine at this point faced an organizational problem, not just for the creation but for the transmission of new knowledge. Instead of a well-oiled machine for getting scientific breakthroughs into the hands of those who could make best use of them, medicine at this time was a “contested field occupied by rivals.”38 Although some of the leading lights in medicine had already embraced the scientific attitude, the field as a whole still awaited its revolution.

The Long Transition to Clinical Practice

For whatever reason—ideological resistance, ignorance, poor education, lack of professional standards, distance between experimenters and practitioners—it was a long transition to the use of the science of medicine in clinical practice. Indeed, some of the horror stories of untested medical treatments and remedies from the nineteenth century persisted right through the early part of the twentieth century. In the United States, there continued to be poor physician education and a lack of professional standards by which practitioners could be held accountable for their sometimes-shoddy practices.

At the dawn of the twentieth century, physicians still did not know enough to cure most diseases, even if they were now better at identifying them. With the acceptance of the germ theory of disease, physicians and surgeons were now perhaps not as likely to kill their patients through misguided interventions as they had been in earlier centuries, but there was still not much that the scientific breakthroughs of the 1860s and 1870s could do for direct patient care. Lewis Thomas writes that

explanation was the real business of medicine. What the ill patient and his family wanted most was to know the name of the illness, and then, if possible, what had caused it, and finally, most important of all, how it was likely to turn out. For all its facade as a learned profession, [medicine] was in real life a profoundly ignorant profession. I can recall only three or four patients for whom the diagnosis resulted in the possibility of doing something to change the course of the illness. For most of the infectious diseases on the wards of the Boston City Hospital in 1937, there was nothing to be done beyond bed rest and good nursing care.39

James Gleick paints a similarly grim portrait of medical practice in this era:

Twentieth-century medicine was struggling for the scientific footing that physics began to achieve in the seventeenth century. Its practitioners wielded the authority granted to healers throughout human history; they spoke a specialized language and wore the mantle of professional schools and societies; but their knowledge was a pastiche of folk wisdom and quasi-scientific fads. Few medical researchers understood the rudiments of controlled statistical experimentation. Authorities argued for or against particular therapies roughly the way theologians argued for or against their theories, by employing a combination of personal experience, abstract reason, and aesthetic judgment.40

But change was coming, and a good deal of it was social. With advances in anesthesia and antisepsis, joined by the bacteriological discoveries of Pasteur and Koch, by the early twentieth century clinical medicine was ripe to emerge from its unenlightened past. As word of these discoveries spread, the diversity of treatments and practices in clinical medicine suddenly became an embarrassment. Of course, Kuhn has taught us that with any paradigm shift one of the most powerful forces at work is that the holdouts die and take their old ideas with them, while younger practitioners embrace the new ones. Surely some of this occurred (Charles Meigs, one of the staunchest resisters of Semmelweis’s theory of childbed fever and of anesthesia, died in 1869), but social forces were likely even more influential.

Modern medicine is not just a science but also a social institution. And it is important to realize that even before it was truly scientific, social forces shaped how medicine was viewed and practiced, and had a hand in turning it into the science that it is today. Entire books have been written on the social history of medicine; here I will have the chance to tell only part of the story.

In The Social Transformation of American Medicine, Paul Starr argues that the democratic principles of early America conflicted with the idea that medical knowledge was somehow privileged and that the practice of medicine should be left to an elite.41 In its earliest days, “all manner of people took up medicine in the colonies and appropriated the title of doctor.”42 As medical schools began to open, and those who had received more formal medical training sought to distance themselves from the “quacks” by forming medical societies and pursuing licensing, one might think that this would have been welcomed by a populace eager for better medical care. But this did not occur. Folk medicine and lay healing continued to be popular, as many viewed the professionalization movement in American medicine as an attempt to grab power and authority. As Starr writes,

Popular resistance to professional medicine has sometimes been portrayed as hostility to science and modernity. But given what we now know about the objective ineffectiveness of early nineteenth-century therapeutics, popular skepticism was hardly unreasonable. Moreover, by the nineteenth century in America, popular belief reflected an extreme form of rationalism that demanded science be democratic.43

In fact, to the extent that medical licensing standards had already begun to be established in early America, by the time of Andrew Jackson’s presidency in the 1830s there was an organized effort to have them repealed on the grounds that they created “licensed monopolies.”44 Incredibly, this led to the abandonment of medical licensing standards in the United States for the next fifty years.45

To anyone who cares about the science of medicine—not to mention the effect that all of this might have had on patient care—this is a sorry result. Of course, one understands that, given the shoddy medical practices of the time, there was deep suspicion over whether “professional” physicians had any better training or knew anything more about medical care than lay practitioners did.46 Still, the demand that scientific knowledge be “democratic” can hardly be thought of as a promising development for better patient care, when the scientifically based medical discoveries of Europe were still waiting to take root in clinical practice. The result was that by the end of the nineteenth century—just as a revolution in basic medical knowledge was taking place in Europe—the state of US medical education and practice was shameful.

Medical “diploma mills,” which were run for profit by local physicians but offered no actual instruction in the basic sciences or hands-on training in patient care, were rampant.47 The restoration of medical licensing in the 1870s and 1880s led to a little more professional accountability, as the idea that one’s diploma was the only license needed to practice medicine came under scrutiny. Eventually, the requirements stiffened.

One major landmark was an 1877 law passed by Illinois, which empowered a state board of medical examiners to reject diplomas from disreputable schools. Under the law, all doctors had to register. Those with degrees from approved schools were licensed, while others had to be examined. Of 3,600 nongraduates practicing in Illinois in 1877, 1,400 were reported to have left the state within a year. Within a decade, three thousand practitioners were said to have been put out of business.48

In Europe, the standards of medical education were higher. German medical schools in particular were affiliated with actual universities (as few were in America). Eventually, this led to the emulation of a more rigorous model of medical education in the United States with the founding of Johns Hopkins Hospital in 1889, and of its affiliated medical school four years later, which offered instruction at all levels, including internships and residencies.49 The medical schools at Harvard, Hopkins, Penn, Michigan, Chicago, and a few others in the United States that were affiliated with universities were widely respected.50 But these accounted for only a fraction of medical education in late nineteenth-century America.

In 1908, a medical layperson named Abraham Flexner set off on a quest—under the aegis of the Carnegie Foundation and the Council on Medical Education of the AMA—to visit all 148 American medical schools then in existence. What he found was appalling.

Touted laboratories were nowhere to be found, or consisted of a few vagrant test tubes squirreled away in a cigar box; corpses reeked because of the failure to use disinfectant in the dissecting rooms. Libraries had no books; alleged faculty members were busily occupied in private practice. Purported requirements for admission were waived for anyone who would pay the fees.51

In one particularly colorful example, Flexner visited a medical school in Des Moines, Iowa, where the dean rushed him through his visit. Flexner had seen the words “anatomy” and “physiology” stenciled on doors, but they were all locked and the dean told him that he did not have the keys. Flexner concluded his visit, then doubled back and paid a janitor to open the doors, only to find that every room was identical, containing nothing but desks, chairs, and a small blackboard.52

When Flexner’s famous report came out in 1910, it was an indictment of the vast majority of medical education in the United States. Johns Hopkins was held up as the gold standard, and even other reputable schools were encouraged to follow its model, but virtually all of the commercial schools were found to be inadequate. Among other reforms, Flexner argued that medical education needed to be rooted in the natural sciences, that reputable medical schools should be affiliated with a university, and that schools needed adequate scientific facilities. He further recommended that students have at least two years of college before starting medical training, that medical faculty be full time, and that the number of medical schools be reduced.53

The effect was immediate and profound.

By 1915 the number of [medical] schools had fallen from 131 to 95, and the number of graduates from 5,440 to 3,536. In five years, the schools requiring at least one year of college work grew from thirty-five to eighty-three. Licensing boards demanding college work increased from eight to eighteen. In 1912 a number of boards formed a voluntary association, the Federation of State Medical Boards, which accepted the AMA’s rating of medical schools as authoritative. The AMA Council effectively became a national accrediting agency for medical schools, as an increasing number of states adopted its judgments of unacceptable institutions. [By 1922] the number of medical schools had fallen to 81, and its graduates to 2,529. Even though no legislative body ever set up either the Federation of State Medical Boards or the AMA Council on Medical Education, their decisions came to have the force of law. This was an extraordinary achievement for the organized profession.54

As they took over licensing requirements, state medical boards began to have much more power, not only in their oversight of medical education but in sanctioning practitioners who were already in the field. With the creation of the FSMB, there was now a mechanism in place not only to bring more evidence-based medicine into the training of new physicians, but also to hold existing physicians accountable for their sometimes-shoddy practices. In 1921, the American College of Surgeons released its minimum standards of care and there was a push for hospitals to become accredited.55 This did not mean that every state suddenly had the legal teeth to root out questionable practitioners (as occurred in Illinois as early as 1877). But it did mean at least that the most egregious practices (and practitioners) were now under scrutiny. Even if there was still not a lot that most physicians could do to cure their patients, they could be ostracized for engaging in practices that harmed them. In some states, the names of bad physicians were even reported in state medical bulletins.

Within just a few years of the Flexner Report, the scientific revolution in medicine that had started in Europe in the 1860s had finally come to the United States. Although most of this change was social and professional rather than methodological or empirical, the net effect was profound: by excluding physicians without adequate training and cracking down on practices that were no longer acceptable, a set of social changes that may have begun in self-interest and the protection of professional standing ended up furthering the sort of group scrutiny of individual practice that is the hallmark of the scientific attitude.

Of course, it may still have been true, as Lewis Thomas’s previously cited portrait makes clear, that even at the best hospitals in Boston in the 1930s there was little that most doctors could do for their patients beyond offering homeopathic drugs (which were placebos) or surgery, and otherwise waiting for the disease to run its natural course. Even if they were no longer bleeding, purging, blistering, cupping (and killing) their patients—or probing them with dirty fingers and instruments—there were as yet few direct medical interventions that could be offered to heal them. Yet this nonetheless represented substantial progress over an earlier era. Medicine had finally embraced the beginnings of the professional oversight of individual practice that was necessary for it to come forward as a science. Medical knowledge was beginning to be based on empirical evidence, and the introduction of standards of care promised at least that clinical practice would make a good-faith effort to live up to this (or at least not undermine it). Medicine was no longer based on mere hunches and anecdotes. Bad practices and ineffective treatments could be scrutinized and discarded. By raising its professional standards, medicine had at last begun to live up to its scientific promise.

This is the beginning of the scientific attitude in medicine. One could make the case that the reliance on empirical evidence and its influence on theory went all the way back to Semmelweis or even Galen.56 An even better case could probably be made for Pasteur. As in any field, there were giants throughout the early history of medicine, and these tended to be those who embraced the idea of learning from empirical evidence. But the point about the importance of a community ethos still stands, for if an entire field is to become a science, the scientific attitude has to be embraced by more than just a few isolated individuals, no matter how great. One can point to individual examples of the scientific attitude in early medicine, but it was not until those values were widespread in the profession—at least in part because of social changes in the medical profession itself—that one can say that medicine truly became a science.

The Fruits of Science

After the professional reforms of the early twentieth century, medicine came into its own. With the discovery of penicillin in 1928, and its eventual development into a usable drug, physicians were finally able to make some real clinical progress, based on the fruits of scientific research.57

Then [in 1937] came the explosive news of sulfanilamide, and the start of the real revolution in medicine. We knew that other molecular variations of sulfanilamide were on their way from industry, and we heard about the possibility of penicillin and other antibiotics; we became convinced overnight that nothing lay beyond reach for the future.58

Alexander Fleming was a Scottish bacteriologist working in London just after the end of the First World War. During the war he had been working on wounds and their resistance to infection; years later, in the summer of 1928, he accidentally left a Petri dish full of staphylococcus out on the bench while he went away on vacation. When he got back he found that some mold, which had grown in the dish, appeared to have killed off all of the staph around it.59 After a few experiments, he did not find the result to be clinically promising, yet he nonetheless published a paper on his finding. Ten years later, this result was rediscovered by Howard Florey and Ernst Chain, who came across Fleming’s original paper, isolated penicillin, and performed the crucial experiment on mice; the drug saw its first clinical use in 1941.60

In his book The Rise and Fall of Modern Medicine, James Le Fanu goes on to catalog the cornucopia of medical discovery and innovation that followed: cortisone (1949), streptomycin (1950), open heart surgery (1955), the polio vaccine (also 1955), kidney transplantation (1963), and so on.61 With the development of chemotherapy (1971), in vitro fertilization (1978), and angioplasty (1979), we are a long way from Lewis Thomas’s time, when the primary job of the physician was to diagnose and simply attend to the patient because nothing much could be done as the illness took its course. Clinical medicine could finally enjoy the benefits of all that basic science.

But it is now time to consider a skeptical question: to what extent can all these clinical discoveries be attributed to science (let alone the scientific attitude)? Le Fanu raises this provocative question by noting that a number of the “definitive” moments in medical history during the twentieth century had little in common. As he notes, “the discovery of penicillin was not the product of scientific reasoning but rather an accident.”62 Yet even if this is true of penicillin, one would still need to be convinced that the other discoveries were not directly attributable to scientific inquiry.

Le Fanu writes, “The paths to scientific discovery are so diverse and depend so much on luck and serendipity that any generalisation necessarily appears suspect.”63 Here he explores, though he did not invent, the idea that some of the medical breakthroughs of the twentieth century may be thought of not as the direct fruit of scientific research, but instead as “gifts of nature.” Selman Waksman, winner of the Nobel Prize in medicine for his discovery of streptomycin (and the person who coined the term antibiotic),64 argued—after receiving his prize—that antibiotics were a “purely fortuitous phenomenon.” And he was not just being humble. But, as Le Fanu notes, this view was so heretical that many believed it must be wrong.65

Can one make a case for the idea that the breakthroughs of modern medicine were due not to “good science” but rather to “good fortune”? This view strains credulity and, in any case, it is based on the wrong view of science. If one views science as a methodological enterprise, in which one must follow a certain number of steps in a certain way and scientific discovery comes out at the other end, then perhaps it is arguable whether science is responsible for the discoveries of clinical medicine. Fleming, at least, followed no discernible method. Yet based on the account of science that I am defending in this book, I think it is clear that both the series of breakthroughs in the late nineteenth century and the transition to the fruits of clinical science that started in the early twentieth century were due to the scientific attitude.

For one thing, it is simply too easy to say that penicillin was discovered by accident. While it is true that a number of chance events took place (nine cold days in a row during a London summer, the fact that Fleming’s lab was directly above one in which another researcher was working on fungus, the fact that Fleming left a Petri dish out while he went on vacation), this does not mean that just any person who saw what Fleming saw in the Petri dish would have made the discovery. Perhaps we do not need to attribute the discovery to Fleming’s particular genius, but we do not need to attribute it to accident either. No less a giant of medicine than Louis Pasteur once observed that “chance favors the prepared mind.” Accidents and random events do occur in the lab, but one has to be in the proper mental state to receive them, and then probe things a little more deeply, or the benefit is lost. Having the scientific curiosity to learn from empirical evidence (even as the result of an accident), and then to change one’s beliefs on the basis of what one has learned, is what it means to have the scientific attitude. Nature may provide the “fruits,” but it is our attitude that allows us to recognize and understand them.

When Fleming saw that there were certain areas in the Petri dish where staphylococcus would not grow—because of (it seemed) contamination from outside spores—he did not simply throw it in the trash and start again. He tried to get to the bottom of things. Even though, as noted, he did not push the idea of clinical applications (for fear that anything powerful enough to kill staph would also kill the patient), he did write a paper on his discovery, which was later found by Florey and Chain, who set to work to identify the biochemical mechanisms behind it.66 Group scrutiny of individual ideas is what led to the discovery.

Finally, in a classic experiment, Chain and Florey demonstrated that penicillin could cure infections in mice: ten mice infected with the bacterium streptococcus were divided into two groups, with five to be given penicillin and five to receive a placebo. The “placebo” mice died; the “penicillin” mice survived.67

While it is easy to entertain students with the story that penicillin was discovered by accident, it most assuredly was not an accident that this discovery was then developed into a powerful drug capable of saving millions of lives. That depended on the tenacity and open-mindedness of hundreds of researchers who asked the right critical questions and followed through with experiments that could test their ideas. Indeed, one might view the modern era’s expectation that the effectiveness of every medical treatment should be tested through double-blind randomized clinical trials as one of the most effective practical fruits of those medical researchers who first adopted the scientific attitude. Scientific discovery is born not merely from observing accidents but, even where accidents occur, from testing them to see whether they hold up.
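
To see why even the tiny controlled comparison described above carries real evidential weight, here is a minimal illustrative sketch (in Python, and not taken from the source) of the reasoning behind a simple exact test: if penicillin did nothing, how likely is it that all five survivors would just happen to be the five treated mice?

```python
from math import comb

# Illustrative numbers from the mouse experiment described above:
# 10 infected mice, 5 treated with penicillin (all survived),
# 5 given a placebo (all died).
treated, controls = 5, 5
total_mice = treated + controls
total_survivors = 5  # all of them turned out to be in the treated group

# If the treatment had no effect, any 5 of the 10 mice would be equally
# likely to end up as the survivors. The chance that the survivors are
# exactly the 5 treated mice is 1 / C(10, 5).
p_one_sided = comb(treated, 5) * comb(controls, 0) / comb(total_mice, total_survivors)
p_two_sided = 2 * p_one_sided  # also count the equally extreme reverse split

print(f"one-sided p = {p_one_sided:.4f}")  # about 0.0040
print(f"two-sided p = {p_two_sided:.4f}")  # about 0.0079
```

Even with only ten animals, so clean a split is very unlikely to arise by chance alone (well under one percent), which is part of what made the result so persuasive; the same logic, scaled up and combined with randomization and blinding, underlies the modern clinical trial.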

What changed in medicine during the eighty years (1860–1940) from Pasteur to penicillin? As Porter notes, during this time “one of the ancient dreams of medicine came true. Reliable knowledge was finally attained of what caused major sickness, on the basis of which both preventions and cures were developed.”68 This happened not simply because medical research underwent a scientific revolution. As we have seen, the breakthroughs of Pasteur, Koch, Lister, and others were all either resisted, misunderstood, mishandled, or ignored for far too long to give us confidence that, once the knowledge was in place, practitioners would simply have put it to use. What was also required were the social forces that transformed this knowledge into clinical practice, forces that, I have argued, were embedded in a change in attitude about how medical education and the practice of medicine should be organized. Once physicians started to think of themselves as a profession rather than a band of individual practitioners, things began to happen. They read one another’s work. They scrutinized one another’s practices. Breakthroughs and discoveries could still be resisted, but for the first time those who resisted them faced professional disapproval from their peers and, eventually, from the public that they served. As a growing majority of practitioners embraced the scientific attitude, the scrutiny of individual ideas became more common and scientific medicine was born.

Conclusion

Within medicine we can see how the employment of an empirical attitude toward evidence, coupled with the acceptance of this standard by a group who then used it to critique the work of their peers, was responsible for transforming a field previously based on superstition and ideology into a modern science. This provides a good example for the social sciences and other fields that now wish to come forward as sciences. The scientific attitude did not work just for physics and astronomy (and medicine) in the past. It is still working. We can have a modern scientific revolution in previously unscientific fields, if we just employ the scientific attitude.

Yet we still must face the problem of those who reject the scientific attitude outright: those who do not seem to understand that ideologically based beliefs like intelligent design are not scientific, and those who indulge in denialism about well-warranted theories like global warming based on a fundamental misunderstanding of how science works. We will deal with those problems in chapter 8. First, however, we must face an even lower low. In the present chapter, we have seen the scientific attitude at its best. In chapter 7 we will explore it at its worst.

Notes