4

The House of Pain


Moreau straps a living animal to a table. He cuts its body and manipulates the tissues into new arrangements. He immobilizes the animal’s head and ties its jaws open, then uses long needles to penetrate its brain through the top of its pharynx, the shared part of the nasal and oral cavity. The animal is permitted to heal to some extent, then he operates again for further manipulation, until its healing results in a form much like our own. The animal fully experiences the pain every time. It can do nothing but scream.

Moreau’s project is an abomination, yes. What does that mean? What if he had used anesthetics, and used them well, so the procedure was free of pain? Just as wrong? Less wrong? Not wrong at all?

The agony is an issue all its own. Prendick never approves of the pain. After Moreau is killed, and after Prendick and Montgomery recover his body, the first thing they do is kill every subject currently in the surgery. Prendick goes through many changes and shifts of attitude about the humanity of the project’s created people, but not about what they suffered during their transformations, which always disgusts him. Yet he cannot say why, nor can he refute Moreau in their debate in Chapter 14. Significantly, until Moreau dies, he does nothing to restrict or stop the surgeries, tacitly accepting the project.

Why? Does Prendick have a solid moral argument against Moreau? He’s not a helpless captive, and Moreau has no research intentions toward him. During their conversation, Prendick is armed, and their entire debate is conducted under the freedom Moreau has granted him, literally, to shoot if Moreau cannot justify his position. I know plenty of people who would unhesitatingly put two bullets in Moreau’s chest and one in his head at the end of that conversation. Why Prendick doesn’t do it requires some reflection, both on the history at the time of writing and on the issues that persist today.

Most of the films drastically alter this dynamic, making the hero explicitly into Moreau’s prisoner, in turn making their debate or discussion unequal, as the scientist could always respond vindictively to being refuted. Typically it’s expressed by the hero discovering that his room is barred and locked in a far less ambiguous fashion than the way Prendick’s room is secured in the novel, and in some cases, literally penning him into a cell when the scientist has experimental designs upon him.

No Pain, No Gain

Until the 1790s, the word on medical pain relief is simple: nothing worth the name existed. If you were to receive surgery, you simply—well, you got to participate. Surgery on humans was in transition from surface features and wound repair to opening and closing the abdomen. That transition lasted decades, roughly from the first consistently successful ovariotomies in the early 1800s until the first excision of the gall bladder and the first abdominal appendectomy in the 1880s. I find it impossible to contemplate the horror of the experience, and I shudder at the recovery rate and the survivors’ morbidity rate, given that sterile technique, although suggested as early as the 1790s, wasn’t widely adopted until a century later. Consider that the American physician James Marion Sims developed the surgical correction of vesico-vaginal fistula in the 1840s and 1850s, but his first success required thirty-three anesthesia-free operations on the same three patients.1 Bear in mind as well that shock was not studied until the 1890s, and often was disastrously treated with stimulants, if at all.

That makes three sets of techniques in uneven development: shock treatment, sterile technique (without antibiotics, this is confined to assiduous washing-up and wound cleaning), and anesthesia. These were also debated and developed in the context of population and policy problems, including the toll of rapidly developing technological imperial warfare on soldiers, the terrifying incidence of puerperal fever in maternity wards, and epidemics of typhus, cholera, and typhoid fever in London alone, as well as the epidemics among displaced and crowded people throughout the colonies.

The most relevant feature to Moreau’s work is pain alleviation, including general anesthetics. You can forget all that business about berries and herbs, and about alcohol, which is best understood as a sedative—it doesn’t dull pain so much as make you less able to struggle free while people saw off your leg, and the necessary quantities are as likely to produce uncontrolled and dangerous vomiting as quiescence. Other powerful inhibitors such as hashish, mandrake, and opium bring a high chance of fatal overdose in the quantities necessary for surgery.

In the second half of the nineteenth century, inhaled anesthetics were at least available. Both nitrous oxide and ether (technically, diethyl ether) had been isolated earlier, but seem to have been used mainly as party drugs. Their use as anesthetics didn’t begin until the 1840s, and nitrous oxide turned out to be effective mainly for dental work. The first reliable general anesthetics were ether and chloroform, the latter discovered in the 1830s.

At first, a patient’s chance of dying may have increased rather than decreased. The problem is simply that all general anesthetics are by definition poison: having so many nerves inhibited is flatly not good for you, in more ways than I have room to list, and—like alcohol—both of these are crudely powerful and individually unpredictable. The dose of chloroform required to keep a patient anesthetized is very close to the lethal level. Ether is a little bit safer in terms of overdose, but it’s more damaging to the patient’s lungs; furthermore, its wearing off entails such nausea and so much vomiting that dehydration or disruption of one’s sutures—that is, massive internal hemorrhage—is a mortal threat.

Until technology could provide a more precise way of monitoring the dose throughout the procedure, and of monitoring the patient’s vital signs, any operation using general anesthesia was a race between the surgical techniques and the patient’s death due to the anesthesia or shock, and later, to postoperative hemorrhage. Controlled inhalation techniques combining nitrous oxide and ether were developed in 1876, but even then, anesthetized surgery was automatically life-threatening.

For people who could afford constant attendance, and especially for people whom the physicians would be terrified to lose, nitrous oxide and ether might be relatively safely available—emphasis on “relatively.” Queen Victoria famously delivered Prince Leopold under chloroform in 1853, quite early in this history, affording alternate-history buffs the opportunity to speculate about what might have happened with the British Empire if her dosage or response had been a bit off. But for low-income patients being treated rapidly and in great numbers, without the necessary technology and training, or the infrastructure to manage supply and expense, anesthesia was both less available and much more dangerous.

Our notion that both pain relief and safety should be maximized developed much more recently than I find comfortable to contemplate. It wasn’t until Henry K. Beecher published Physiology of Anesthesia in 1938 that modern substances and techniques began to be used, and I still recall from my childhood in the 1960s and 1970s that people who underwent major surgery either died or returned dangerously underweight and physiologically ten years older.

This technological history provides the context for pain relief in nonhuman animal subjects and patients as well. Consider William Harvey’s famous experiments on circulation way back in the 1620s, which accurately identified the double circuit in the movement of mammalian blood, heart-body-heart-lungs, currently taught in health and biology classes everywhere, all the time. You may have encountered it in Schoolhouse Rock or The Magic School Bus. But modern education has more or less forgotten that this work relied upon horrific agony for the many animals he used, and the list of historical experiments underlying many such basic points of health and biology includes countless examples. Anesthetics simply didn’t exist.

The 1850s and 1860s were the transitional period, when medical practice was becoming a profession with a much wider audience and more precise curriculum, and experimentation and general knowledge of normal function were immediate policy concerns. General anesthesia was possible if variable and unreliable, investigation included lab work on healthy bodies as well as clinical observations, and the culture of research had neither a moral nor a legal precedent for considering the subjects’ experience to be important. Most, although not all, researchers using live animal subjects considered anesthesia little more than a likely way to bias the results or to lose the subject, and hence the data, halfway through. That’s where the physiological researchers were coming from in stating so strongly that seeking scientific knowledge automatically must accept inflicting pain on experimental subjects, or “no pain, no gain.” It’s not true in principle, but it was the historical, logistic case.

The case study must certainly be the French physiologist Claude Bernard, both in general and specifically as presented in the chapter addressing experimental subjects’ pain in his book An Introduction to the Study of Experimental Medicine,2 published in 1865:

The science of life is a superb and dazzlingly lighted hall which may be reached only by passing through a long and ghastly kitchen. (p. 15)

[and]

A physiologist is not a man of fashion, he is a man of science, absorbed by the scientific idea which he pursues: he no longer hears the cry of animals, he no longer sees the blood that flows, he sees only his idea and perceives only organisms concealing problems which he intends to solve. (p. 103)

If René Descartes’ statement that animals were soulless, clockwork mechanisms incapable of pain was ever taken seriously by scientists, which I doubt, it wasn’t commonly invoked during the heyday of vivisection. Bernard and contemporary researchers did not claim that animals feel no pain, and should at least be understood as not hiding behind denial.

Here’s an example. During this period, glucose (blood sugar to you and me) could be reliably assessed by a technique called copper reduction. By 1848, physiologists had recorded the primary insight of digestive physiology: when you eat carbohydrates, glucose is soon found throughout your circulatory system; it’s pretty clear that we get it from what we ate. The question people wanted to address was, is that it? More technically, do we synthesize glucose in addition to consuming it?

Actually I’ll cheat and give you the answer first (Figure 4.1). Your stomach and intestines have carbohydrates in them, because you ate food. They get digested into the littlest carbohydrates in there (that’s what digestion means, hammering big chemicals into little ones), small enough to be absorbed into your blood, mostly as fructose and glucose. All these blood vessels at the intestines are collected into the hepatic portal vein, delivering the blood to the liver. Then the hepatic vein delivers it from the liver to the general circulation of the body, specifically the vena cava, just before it gets to the heart. Or to put it differently, blood at the heart has only the amount of glucose in it that the liver permits. The liver is the gatekeeper for the glucose, because instead of just passing it on with the blood, it grabs it and stores it in a form called glycogen; the blood that goes into the hepatic vein gets glucose that’s resynthesized from the glycogen. Everything you’ve heard about “blood sugar” is a function of what the liver is doing with its glycogen/glucose conversions.


Figure 4.1 Hepatic portal system.
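
If a toy model helps, here is a minimal sketch, in Python, of the gatekeeper idea as I’ve just described it—every quantity and function name in it is invented for illustration, not a physiological measurement:

```python
# Toy model of the liver as glucose gatekeeper. All numbers are invented
# for illustration; nothing here is a physiological measurement.

def liver_pass(portal_glucose, glycogen_store, target_output=5.0):
    """Blood arrives via the hepatic portal vein carrying portal_glucose.
    The liver banks incoming glucose as glycogen, then releases
    resynthesized glucose into the hepatic vein at its own rate."""
    glycogen_store += portal_glucose               # incoming sugar stored as glycogen
    released = min(target_output, glycogen_store)  # the liver decides the output
    glycogen_store -= released
    return released, glycogen_store

store = 0.0
# A meal delivers a burst of glucose into the portal vein; fasting delivers none.
for portal in [20.0, 10.0, 0.0, 0.0, 0.0]:
    out, store = liver_pass(portal, store)
    print(f"portal in: {portal:5.1f}  hepatic vein out: {out:4.1f}  glycogen: {store:5.1f}")
```

However large the meal’s burst into the portal vein, the hepatic vein’s output holds steady—blood at the heart gets only the glucose the liver permits.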

To figure this out, a whole lot of preexisting assumptions had to get overturned. People already knew that the liver produces bile (a digestive facilitator for fats), and the prevailing dogma, incorrect as it turned out, was “one organ, one function.” More subtle is the idea that the sugar did have to get consumed, as we cannot make it de novo from simpler compounds (as plants can), but we store it in a fashion that means synthesized sugar is what we actually use in our tissues. Conceiving of body systems so that such ideas can even be addressed, and then taking up the questions of how such things are chemically regulated, represented a whole new universe of thought. The credit for spotting these assumptions and setting up a more functional model for investigating blood sugar acquisition and regulation belongs to Bernard, from work reported in his Leçons sur le diabète et la glycogenèse animale (1877), delivered as part of his course at the Collège de France.

Bernard does not report what animal he initially used, but I’m assuming it was a rabbit; the laboratory rat and mouse had not yet found their place in science. The first step was to withhold food from the animal. Then he restrained it, opened the abdomen, pushed the intestines aside, partly folded up the liver, and withdrew blood from the vessels going into the liver from the intestines (hepatic portal vein) and also from the vessel (hepatic vein) going from the liver into the vena cava (the big vein going back to the heart). There’s glucose in both places.

More animals were needed for the first investigation of cause. This time, he tied off the hepatic portal vein and found that glucose was still present in the hepatic vein; in other words, the liver was producing glucose in some way. For this to be assessed, the animal stays that way, its abdomen opened, for hours. Bernard repeated this across a broad comparative array of vertebrates, which I presume means cats, dogs, and livestock, and found the same throughout: liver tissue full of glucose, and nowhere else except as supplied by the liver. Please imagine a veritable parade through the laboratory of many species of strapped-down, muzzled animals under no anesthetic, with their abdomens open, their intestines reflected, and their livers inverted for access to the blood vessels.

So the liver is more than a mere speed bump between the intestines and the vena cava; it’s somehow delivering glucose on its own. But we haven’t yet answered the question of whether the liver stores the glucose it receives from the intestines, or makes it from scratch independently from digestion.

An ordinary if unpredictable detail of scientific work is the semi-accidental discovery. In this case, Bernard examined the contents of a liver that had been left sitting out all night, and he discovered more glucose present than his previous assay had shown. This suggested that the liver might be making the stuff, so his next experiment was to open the animal and inject water into the portal vein entering the liver until the blood leaving via the hepatic vein was “washed out” (i.e., had no glucose in it), and then assess that same vein to see whether glucose reappeared. He found that it did, strongly supporting the surprising notion that the liver is indeed making glucose, and he coined the name “glycogen” for the precursor substance. Note here again that the animal is lying restrained, alive, gutted, and in pain even longer this time—at an informed guess, at least twelve hours, to permit multiple timed blood draws from the two veins.

This conclusion revolutionized the study of the body. First, it demolished the long-standing doctrine that a given organ has a single function—here, the liver not only produces bile, it makes and delivers glucose to the primary circulation of the body. Second, it changed how questions were asked about organs, in that they not only “do” something, they trade off and vary the rates among many functions, in a reactive and interactive way, connected by a system of signals.

Bernard’s next question is precisely the right one, based on this new perspective on bodies: What prompts the liver’s production of glucose into the hepatic vein? This is subtler than it looks, because instead of assuming that organs do things because they “know” or “should,” he’s looking for a systemic interaction among body parts—signaling based on current conditions, what today we call regulation. It’s also striking that he turned to the nervous system as the probable regulatory agent. He focused on the vagus nerve, as it was already understood to have powerful inhibitory effects (it regulates your heart rate this way, for instance), and he cut its branches that go to the liver, finding that this did indeed result in less glucose being produced. However, stimulating the nerve in this location did not increase it, failing to provide the positive control that would lead to a strong conclusion.

To address the nerve effect more directly required a second technique besides opening the abdomen: to go into the brain through the soft palate. You know what the soft palate is, right? Put your fingertip on the roof of your mouth behind the front teeth, then slide it backward toward your throat—when you hit the squishy part, that’s it. Bernard incised the soft palate and used a needle to stimulate the underside of the fourth brain ventricle, where the vagus nerve begins. I trust you are imagining a fully awake animal restrained with its jaws at maximum gape, such that a person can reach its brain tissue through its soft palate, in addition to the abdominal intervention. He found that stimulating the vagus makes blood glucose increase dramatically but briefly, and that a more radical intervention, cutting the spinal cord to block the (stimulatory) splanchnic nerve, does the opposite, leading him to suggest that the opposed effects of these nerves regulated the liver’s release of glucose to the body.

This turns out to be incorrect, as interfering with these nerves also brings adrenaline into the picture, one of several hormones prompting an increase in blood sugar from the liver. That’s why his results included the confusing effect from either stimulating or cutting the vagus’s terminal branches at the liver, as well as the similar detail that cutting the vagus at its origin had the same effect as stimulating it. Specific and immediate conclusions are not the currency of science, though. Since the results included both valid conclusions and an ambiguous effect, new debates ensued that opened up the whole idea of hormonal regulation occurring simultaneously with neural regulation. Bernard’s work set up an understanding of whole-body regulation to an extent never before imagined in the study of physiology.

If you’re looking for moral judgments from me, I’ll give you one. The person who conducted that protocol was a monster. Not only do I wish he had been prevented from doing it in the first place, but also that, having done it, he’d been punished. In fact, I would like to dig him up and gibbet his body at—let’s see, what would be my local equivalent of London’s Blackwall Point—I suppose out at that construction site by the interchange that never seems to get finished.

A lot of people back then thought similarly, as Bernard was well known to activists and was regarded as little better than Satan himself. Although he lived and worked in France, his work was so widely followed and so influential on physiology and medicine that the Royal College on Gower Street could well be considered a branch of his lab, and his blunt acknowledgment of animal use made his name the embodiment of vivisection in activist terms. You can also see him in the physician Edward Berdoe’s frightening and popular novel, St. Bernard’s: The Romance of a Medical Student (1887), published under the pen name “Aesculapius Scalpel,” followed by a second volume, Dying Scientifically: The Key to St. Bernard’s (1888), with annotations explaining the sources for the fictional events. There’s no way the fictional horror hospital’s name is a coincidence.

Bernard’s outlook wasn’t unanimous among physiologists, and according to Applied Ethics in Animal Research (editors John Gluck, Tony DiPasquale, and F. Barbara Orlans), he did eventually adopt anesthetics for his experiments, but scientific advocacy for anesthetics tended to get muddled or drowned out. Marshall Hall was in some ways his English counterpart and was criticized as a vivisectionist, but he did propose self-regulatory principles, which acknowledged that the subjects were sentient (his term). The head of the Royal College at the time, J. B. Sanderson, spoke in favor of regulation, but his 1873 Handbook for the Physiological Laboratory did not include anesthesia and became notorious. During the hearings of the Royal Commission, George Hoggan, an English physician, criticized cruelty and waste in Bernard’s laboratory in a letter to the Morning Post.

Most of the physiologists’ testimony heard by the commission invoked his successes and practices as the height of scientific achievement, and not dishonestly. If Bernard’s work was analytically unsound, he might be written off as a grotesque historical curiosity. However, it was not only sound, but foundational. An Introduction to the Study of Experimental Medicine set the intellectual bar for scientific technique and reasoning as high as it has ever been relative to contemporary practice. He championed and demonstrated the scientific method, including such important concepts as the relationship of conclusions to certainty, and the double-blind experiment. In addition to the liver processing of blood glucose to glycogen, his results include the discovery of homeostasis (he called it the milieu intérieur), pancreatic digestive enzymes, vaso-motor nerves, the beginnings of understanding the endocrine system, the coordination of processes across multiple organs, and more. He may have been personally responsible for the most effective quantum leap in knowledge of vertebrate body functions in history; probably every chapter in a standard textbook of undergraduate training in physiology can trace its contents to one or more of his experiments.3

I have taught literally thousands of students either straight-up physiology or anatomy with multiple physiological tie-ins, and I’ve taught just as many about hypothesis testing and experimental reasoning in exquisite detail. Without knowing it, I quoted Bernard up, down, left, right, inside, and outside.

Why didn’t I know? For some topics, biology texts border on the hagiographic when picturing and describing the foundational work of historical scientists. In teaching physiology and statistics, I never saw one picture or one mention of Bernard by name. Granted, the physiology texts tend not to showcase the historical personalities and experiments much anyway; it’s not as if tons of other scientists were mentioned. But statistics and core biology texts often do, and Bernard’s explanations of hypothesis testing may be found practically word for word in both. He isn’t in any of those. I can at least speculate that our teaching is too squeamish to admit the details of its experimental history: that although “no pain, no gain” isn’t true in principle, it was applied as true during this significant period of discovery.

That squeamishness is not such a good thing: it tacitly supports the value that “the pain was worth it,” privileging human gain as an entitlement; and it obscures what researchers do differently today, inviting rock-throwing from those who see only the kid versus the guinea pig in the windows.

Moreau 1, Prendick 0

In Chapter 14 of the novel, Prendick is finally fully informed of Moreau’s work, and the two of them settle down for one of the great intellectual cage-matches in literature. Thankfully, it’s actually not about sentiment for dogs matched against sentiment for babies. Instead, the two positions are exquisitely precisely chosen, pitting the reform-sympathetic intellectual against the physician, and simultaneously pitting the naturalist-scientist against the physician-scientist, regarding the value of their work (Figure 4.2).


Figure 4.2 Views of science and policy during the nineteenth century.

Moreau is the fully grounded late-stage medical physiologist, aware of his subculture’s continuing victories in the debates in and out of science, and also mining evolutionary theory toward his ends. Overall, he is rather cleanly constructed from several historical sources. Certainly he owes a great deal to Bernard, but there was in fact a French neurologist named Jacques-Joseph Moreau whose ideas factor into the fictional character’s techniques. The character is made British, despite his name, and his former career is quite rightly and most sensitively located on Gower Street, presumably part of the Royal College.

Prendick is more complex. He is not, as he might have been, a fiery everyman reformer with a street Protestant’s intuitive faith, nor a hard-nosed but decent engineer as in several of the films. He is instead a student of Huxley’s who references contemporary evolutionary ideas casually and accurately, with a default sympathy toward research, but also overlapping with social reform, especially temperance. His reflections in Chapter 8 place him right on the fence concerning live animal research. Think of him as exactly that educated person who is confronted by Huxley’s Evolution and Ethics, as it throws his rather sunny view of evolution and human achievement into shadow.

The discussion proceeds through five phases:

1. They bounce back and forth between pain and knowledge, illustrating how the two are historically confounded.

2. They address the goal of achieving humanity, mainly explaining its feasibility.

3. They debate the morality of pain as such, with the subset topic of religion.

4. Moreau describes his early experimentation.

5. They revisit the goal of achieving humanity with an inadvertent self-revealing comment from Moreau.

To them, the discussion hinges on a slippery concept: materialism. By the time of the novel, this word has changed a lot from the days of Abernethy’s invective. The topic is less about vivifying forces and more about evolutionary origins—“man coming from monkeys” to use the popular phrase—but as an accusation, the term retains its core content of failing to exalt life and humans, and rejecting morality. Arguably its fear factor was increased by this shift to a less abstract topic than chemistry.

When Prendick objects to Moreau’s characterizing him as a materialist, it’s because he thinks Moreau is referring to a famous exchange of essays, beginning with the phrase “I am no materialist,” in Huxley’s On the Physical Basis of Life (1868). In the purely technical philosophical terms as used by Lawrence, this essay is as material as the concept can get; however, Huxley is objecting to a new use of terminology and its core ideas, called materialistic philosophy, or the new philosophy, or positivist philosophy, as espoused by the French philosopher Auguste Comte. Huxley hated materialistic philosophy so much that I don’t think he presented another lecture for the rest of his life without taking at least a moment to criticize confounding the observations of physical life with moral directives or social obligations.4

However, and more relevant here, the topic of materialism had recently received further development in a further exchange of ideas, beginning with Huxley’s coinage of the term “agnosticism” in 1869. His point of view is today called positive agnosticism, holding that metaphysical questions (including “is there a God?” among others) are unknowable, irrelevant, and ultimately not very interesting. It’s arguably the boldest of all such views, nothing like the popular notion of agnosticism as merely sitting on the fence between faith and atheism.

The religious scholar W. S. Lilly objected to this idea with some force in his 1886 Materialism and Morality, stating a common viewpoint among theologians: that faith in metaphysical principles, both in intention and in their presence in physical reality, was required for a person even to have a moral compass, let alone lay claim to one. He argued that a person who views the natural world, including humans, as composed solely, or even largely, of physical and understandable processes, has nothing to restrain him from unregulated, sociopathic conduct. In other words, to be ethical at all, one must necessarily embrace everyday reality as full of miraculous qualities and guided events.

Huxley’s rejoinder the next year, Science and Morals, had not yet arrived at the grimness of his 1893 Evolution and Ethics; he replied that the mind is organic, and indeed deterministic in chemical terms, but that science, the product of the mind, can in fact yield morality and ultimately happiness.

Therefore, against Moreau, Prendick is asserting his opposition to Lilly’s accusation, nigh perfectly quoting Huxley, although as it turns out, he’s misunderstood Moreau’s more sophisticated use of the term. It’s an important misstep. Huxley was famously successful at puncturing his establishment religious opponents like Lilly, both with logic and by knowing their texts better than they did. I have no doubt that Prendick would have been equally handy in a similar debate. However, he is completely unprepared for matching ideas against Moreau, who mops the floor with him. I’ll pull their debate topics into a more schematic order to show how.

Morality and Knowledge

Moreau expertly refutes the twin accusations of insanity and hubris. Regarding his motivation, he rightly claims the mantle of plain ordinary science:

. . . I went on with this research just the way it led me. That is the only way I ever heard of research going. I asked a question, devised some method of getting an answer and got—a fresh question. Was this possible, or was that possible? You cannot imagine what this means to an investigator, what an intellectual passion grows in him. (The Island of Doctor Moreau, pp. 55–56)

That’s not insane, nor is it intrinsically cruel or unjust. I know every detail of it myself. His question of plasticity even matches my own research topic, and is not at all distant from what we now call Evo Devo (evolutionary developmental biology), or the variations in development that lie at the heart of species variations. This is what he’s seen of how the body works, and this is what he seeks to know about it, and as long as we’re talking about the “plasticity of the shape,” then he’s on track. (He veers off that track later in the discussion, though.)

He also scotches the image of the God-defiant, would-be-God scientist effortlessly, and firmly establishes his intellectual bona fides by referencing his grounding in faith:

. . . I am a religious man, Prendick, as every sane man must be. It may be I fancy I have seen more of the ways of this world’s Maker than you—for I have sought His laws in my way, all my life, while you, I understand, have been collecting butterflies. (Ibid., p. 55)

This might be a little odd for modern audiences, who are strongly influenced by twentieth-century American political rhetoric, so I need to emphasize the metaphysical importance of knowledge to most nineteenth-century researchers. They generally felt that studying Nature’s utterly physical mechanisms is effectively worshipping God (the Maker), and were therefore religious in an approximately Deist way straight out of the Vestiges. This language may sound a bit strange today, but it was explicitly accepted as a valid scientific outlook in its time, and it might apply to more modern scientists than you think.

Morality and Agony

Granted, Moreau’s clarity of thought may not be immediately obvious at the “don’t try this at home” moment, when he flicks open a knife and stabs himself in the thigh.

“. . . Why, even on this earth, even among living things, what pain is there?”

He drew a little penknife as he spoke, from his pocket, opened the smaller blade and moved his chair so that I could see his thigh. Then, choosing the place deliberately, he drove the blade into his leg and withdrew it.

“No doubt you have seen that before. It does not hurt a pin-prick. But what does it show? The capacity for pain is not needed in the muscle, and it is not placed there; it is but little needed in the skin, and only here and there over the thigh is a spot capable of feeling pain.” (The Island of Doctor Moreau, p. 55)

Is this even possible? It sure is! Well, even the smaller blade of a penknife is probably too big, but yeah, you can do this. It’s a standard exercise in undergraduate physiology labs: the students use felt pens to mark off squares in a grid on their forearms, then use pins to poke the individual squares on one another, without the poked person looking, recording which ones register pain and which don’t. They’re routinely surprised by how much of the surface area doesn’t feel it. Pain is a local physiological phenomenon, and receptors for it are not distributed continuously throughout the skin, although Moreau exaggerates the receptors’ scarcity, and his act of actually driving the blade into his leg must be tagged as artistic license.
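
If you’d like the logic of that exercise laid out mechanically, here’s a minimal sketch in Python, assuming—purely for illustration—that pain receptors are sparse scattered points rather than a continuous sheet; the grid size and receptor count are invented:

```python
# Simulated forearm-grid exercise. The grid size and receptor count are
# hypothetical; real receptor densities vary by body site.
import random

random.seed(1)                 # fixed seed so the run is repeatable
GRID = 10                      # a 10 x 10 grid of felt-pen squares
N_RECEPTORS = 25               # invented receptor count for this patch of skin

# Scatter the receptors across the squares at random.
receptors = {(random.randrange(GRID), random.randrange(GRID))
             for _ in range(N_RECEPTORS)}

# "Poke" every square: it registers pain only if a receptor lies within it.
felt = sum((x, y) in receptors for x in range(GRID) for y in range(GRID))
print(f"{felt} of {GRID * GRID} squares registered pain")
```

Most squares register nothing, which is the students’ surprise—and Moreau’s demonstration in miniature.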

It is possible to misread this as Moreau demonstrating the triviality of pain by ignoring it, but that is not his point at all. He’s demonstrating that pain is absent when specific nerves are avoided, to show that pain is not a signal from the heavens that some act or inflicted experience is immoral.5

Moreau drills on from there to Bernard’s point, where he departs from my (and I imagine many people’s) sympathy:

. . . The thing before you is no longer an animal, a fellow-creature, but a problem. Sympathetic pain—all I know of it I remember as a thing I used to suffer from years ago. (Ibid., p. 56)

Appalling as this is to me on its face, it is philosophically sound for its specific point. Moreau is saying that inflicting pain is simply and only an individual choice, a matter of whether you will or will not do it, and that its worth is a matter of the goal toward which it’s directed. It’s not even “the ends justify the means,” but rather, “if this is the end, then these must be our means,” which is effectively Bernard’s position to the letter: if you’re going to study creatures, and you say the goal is worth it, then at this time in history, you inflict the pain. In that context, it’s morally insoluble—you can’t determine the right thing through debate, you simply individually have to choose what you will do, and Bernard chose to seek his Golden Age in his ghastly kitchen. Before you get snooty about that, consider that culturally and historically, we all did it with him.

The Superiority of Applied Research

Perhaps it takes training in ecology and systematics to feel Moreau’s other, damning hit fully, but there is no historical buffer in this case. He exerts the rather crude but extremely culturally effective pride of place felt by some medical physiologists, who consider themselves at the apex of scientific thought and societal benefit, in contrast to the waffly “nature special” romance of other biologists. Armed as well with authority over patients and moral certainty regarding health benefits, they find natural history to be trivial. To revisit the earlier quote:

. . . I am a religious man, Prendick, as every sane man must be. It may be I fancy I have seen more of the ways of this world’s Maker than you—for I have sought His laws in my way, all my life, while you, I understand, have been collecting butterflies. (The Island of Doctor Moreau, p. 55)

It’s taken directly from Bernard, who decried natural history as “passive science,” distinctly inferior to experimental work that directly contributed to medical practice.

The outlook is by no means rare today and, if delivered to a colleague in such tone and terms, might prompt a return punch in the nose—largely because the modern naturalist would be most stung by the fact that outside listeners would frequently agree. Despite its extreme relevance to human life and society, ecological work struggles for a fraction of comparative recognition. This is a long-standing subcultural rift, and it was intellectually just as unfair back then—for example, to Wallace’s groundbreaking biogeographical work—but Moreau wields his upper hand in it effortlessly.

The Opponent’s Key Weakness

Moreau includes evolution in his scientific view, but again characteristically for his subculture, he cherry-picks the concept of humans as animals for purposes of comparisons and proxy research while ignoring its other implications about humans.6 He also cites its text most specifically to Prendick in the sense that “even your evolution” agrees with him—and in doing so, he exploits one of the weakest elements of the theory as it stood: its progressive interpretation as a refiner of rough systems based on utility.

I never yet heard of a useless thing that was not ground out of existence by evolution sooner or later. (The Island of Doctor Moreau, p. 55)

The role of use or disuse in evolution came from Lamarck’s work, and it became an ongoing motif as the ideas developed further, even when it fit poorly. Whenever Darwin writes of utility, or appearance or disappearance of a feature based on use or disuse, his text struggles terribly, and the theory during the following century didn’t do much better.

Evolution, or specifically natural selection, does not occur “for the benefit of the species” in any way, and reduced or lost organs aren’t a matter of “need” but are instead a case-by-case phenomenon depending on local variables. Consider as many species with a radical ecological shift in their history as you like, with some reduction in form and function in some body part. You’ll find that plenty of these organs remain very similar to their prior form with minimal use in the new lifestyle, like sea otters’ hind legs; others are present but with much of their capacity gone, like the blind eyes of some cave fish; and many are gone or very nearly gone, like the hind limbs of whales.

The reason for this diversity of outcomes is that each case presents its own interplay among (i) the energy it costs to make that organ, (ii) the degree to which its reduction is even genetically available, and (iii) whether that cost interferes with reproductive effort in the new environment. If the developmental cost (i) is high relative to the energy budget, if the genetic variation (ii) is present, and if the reproductive cost (iii) is high, then selection kicks in, reducing that organ’s presence in the body. But that applies only insofar as the math holds among the three things. If the selection isn’t strong, the organ remains despite its potential to be reduced; if the selection is as strong as you like but the genetic variation isn’t there, the organ remains; if the selection occurs as I described but one or another cost is alleviated before the organ is gone, it partially remains; and so on—any situation that doesn’t include all three components results in the organ remaining or partially remaining. Theoretically, the entire range from presence to absence should be observed when you look across species, which is confirmed in reality. There is no role for getting rid of an unneeded thing in the engineering sense, or finding some ideal body type “for” that particular new environment, or any goal-directed concept at all.7
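
To make that all-three-or-nothing logic explicit, here’s a bare truth table in Python—the three components reduced to yes/no switches purely for illustration, where real cases are quantitative and continuous:

```python
# Schematic truth table of the three components named above. Real cases
# involve degrees, not switches; this only shows the all-or-nothing logic.
from itertools import product

def organ_fate(dev_cost_high, variation_present, repro_cost_high):
    """Selection reduces the organ only when all three components line up."""
    if dev_cost_high and variation_present and repro_cost_high:
        return "reduced or lost"
    return "remains, wholly or partly"

for combo in product([False, True], repeat=3):
    print(combo, "->", organ_fate(*combo))
```

Only one of the eight combinations reduces the organ; every other combination leaves it wholly or partly in place, which is exactly the spread we observe across species.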

Prendick can hardly be expected to reply in this way, though, as evolutionary theory didn’t begin to address this problem until the late 1970s. Moreau is strategically using this point as it stood then, such that challenging it as a Darwinian would be undermining one’s own position, and doing so quickly enough to keep Prendick from retorting that Moreau should choose between invoking Darwinian thought and dismissing it as so much butterfly watching. I rather admire Moreau’s expertise at delivering this precise flaw in current evolutionary theory as a rhetorical tactic.

The Shared Foundation of Exceptionalism

By now you might be getting the idea that I think Moreau is right. I don’t, and the reason is easy. To refute Moreau, I don’t need either unshakable faith in my own morality or the benefit of a century’s science. All I need is not to hold an exceptional view of humans, and to call him out on that very point, when he says:

Each time I dip a living creature into the bath of burning pain, I say: this time I will burn out all the animal, this time I will make a rational creature of my own. (The Island of Doctor Moreau, p. 59)

Wait, what happened to our abstract interest in the plasticity of shape? What’s this about getting rid of animality to result in a new creature that we know is in fact an animal? “Come off it, Moreau! These are creatures like yourself and this is torture, as they feel it and as you’d feel it too. I don’t see you avoiding pain-receptors with your scalpel—are you claiming that experiencing the pain is itself accomplishing an experimental goal?” And even more so, “Come off it, Moreau! You think you’re making something special, but humans are nothing special—we’re already a species of animal, which is why your re-shaping works. This has nothing to do with ‘burning out’ anything. Your whole goal is a mystical phantasm, and you call yourself a scientist.” But to make these arguments means tossing one’s own exalted status as a human out the window, for good.

The trouble with a material worldview, specifically a non-exceptionalist one, is that you have to go all the way—you can’t claim cosmic and natural justification for your own position either. Supporting your position becomes much harder than merely grabbing the brass ring of such justification before your opponent does. In this case, Prendick can’t refute Moreau about pain unless he equates the creatures’ pain to humans’, not merely as like it but as actually the very same thing, not intrinsically worth less consideration than human pain. He can’t refute Moreau about science’s worth, and what kind of science, unless he puts aside the entitled assumption that science is supposed to benefit humanity uniquely. Therefore, jarringly, and only because Prendick shares the assumed special status of humanity, he gives in, more in despair than in defeat. He doesn’t quite know why he’s lost.

Exceptionalism lurks throughout late nineteenth-century evolutionary theory, again and again touching upon the flat observation that everything about humans was and would be a subset of animal variations, and just as often shying away because the cultural narrative demands that humans must transcend animals in some key way. Moreau utilizes it for armor in the debate. He strategically does not mention Darwin’s research on animal emotions and expressions, or thoughts on human origins, which are profoundly non-exceptionalist. It leaves him open to counterargument on his vulnerable point, his complete lack of science on the single crucial issue: What is, in material observation, a human person?

As an intellectual, Prendick could combat superstition or institutional privilege, but unless he flatly refutes the presumption of God’s favor toward humans as evident in His works, he cannot challenge Moreau’s position.8 It’s an exquisite tension into which Moreau has maneuvered him. If he’d stuck with identifying humans as animals, period, de-nobilizing human aspirations, he’d have won the debate. But as a reformer-idealist of his time, he cannot. Therefore, absent anti-intellectual, anti-abomination wrath, all Prendick has going for him is a softer heart.

Prendick intellectually and morally fails to refute Moreau in words, so from this point forward in the story, refuting Moreau must be left to action. But it’s not Prendick’s action, as I’ll discuss in Chapter 6.

Pain Is Real

The novel’s depiction of the historical debate was borne out for at least half a century, and its legacy is still with us. It’d be nice to say that researchers immediately switched to pain alleviation as soon as the technology was available, but they didn’t. My candidate for the most egregious work would be the interventions upon and general treatment of uncountable rhesus macaque monkeys until the late 1970s, horrors that Bernard himself might have fled from.

One Life with Labs

One can read all the journal articles and books there are, but if my students are any indication, the popular discourse has been trapped in amber, with the older dichotomy still on tap to recapitulate the Brown Dog controversy in painful detail at any time. The two windows are the default throughout the larger culture, and all the intellectual musing in the world doesn’t change a cultural code this strong. In this construction, the scientific use of animals is especially painful and especially wrong. The discourse doesn’t budge in its view that both scientists’ psychology and their experimental techniques are by definition uniquely and distinctly suited to the mistreatment of animals. “So what if you have all these rules, you’re torturing them anyway because that’s what you do.” Some sectors of animal care activism have retained the word “vivisectionist” to express this point; nor is this outlook restricted to them.

The Moreau films consistently present the image—coded as normative—that no scientist is even remotely trustworthy regarding the treatment of animals. Island of Lost Souls is the primary example, but there’s no matching the scene in The Island of Doctor Moreau (1977) in which Moreau, presented with a subject whose behavior fails to match his performance standard, whips him mercilessly for no imaginable reason.

Movies in general blatantly depict scientists casually and habitually torturing animals, as in The Secret of NIMH, 12 Monkeys, and 28 Days Later, and the list goes on. Existing legislation and standards are either ignored or dismissed. Films are particularly misleading about repeated use, depicting animals as living miserably for years in laboratories, subjected to experiment after experiment—a phenomenon I have never seen, nor would it be tolerated by any scientist I’ve ever met.

For context, the amended Cruelty to Animals Act of 1876 in the United Kingdom was emulated by a number of European nations within a few years, and was revised and given stronger content after a Second Royal Commission’s verdict in 1912. In the United States, the Animal Welfare Institute was founded in 1951 to find common ground between researchers and activists, and in 1959 William Russell and Rex Burch published The Principles of Humane Experimental Technique, fairly regarded as the long-overdue rebuttal to Bernard’s infamous position statement. Not only were the ethics brought forward as a valid component of science—as a human activity—but also the crucial point that pain compromises the scientific results much more than anesthesia does. These points informed the first US Animal Care Panel in 1960, the first Guide for the Care and Use of Laboratory Animals in 1963, and the founding of the independent Association for Assessment and Accreditation of Laboratory Animal Care International (AAALAC) in 1965.

Federal legislation began with events similar to the fictional trigger that resulted in Moreau’s defrocking, including photographic exposés of wretched and emaciated dogs in facilities that sold them to research laboratories. Articles in Sports Illustrated (1965) and Life (1966) sparked an unprecedented, tremendous lobbying effort, which resulted in the first version of the Animal Welfare Act being passed in 1966, soon followed by the Horse Protection Act (1970) and the Marine Mammal Protection Act (1972). For the kind of research most relevant here, typically funded by the National Institutes of Health (NIH) and the Centers for Disease Control (CDC), the governing body is the Office of Laboratory Animal Welfare (OLAW), which oversees inspections. Little or none of this legislation criminalizes the scientific mistreatment of animals; instead, the animals’ care lives or dies in the operations of each Institutional Animal Care and Use Committee (IACUC), an in-house committee that reviews all the projects using animals in a given institution (like an academic department) and is empowered to close the research, temporarily or permanently, based on its findings. In addition to the in-house members, it has to include at least one veterinarian and one member from outside the institution.

During the early period of legislation, it’s impossible to generalize about the actual change in scientific practices—here and there, yes, in different ways, sometimes not at all, as with the rhesus monkeys in the Department of Defense. The big changes may have been generational, as younger scientists phased into leadership positions.

Both the Act and the Guide have been revised several times, including the publication of US Government Principles for the Utilization and Care of Vertebrate Animals Used in Testing, Research, and Training in 1985, providing standards for the mandated IACUCs, historically in parallel with the United Kingdom’s revision of the Cruelty to Animals Act in the Animals (Scientific Procedures) Act of 1986.9

My research experiences track the reform transition to the letter. At that moment in the mid-1980s, as the national research culture shook into new form with the presence of IACUC, I became an apprentice to the world of Rattus norvegicus. I witnessed the local decision to apply the Act’s standards to rodents despite their not being officially included, and was relieved to find that all the lab animals used in an experiment were killed quickly, that the surgical interventions were anesthetized, and that the animals were monitored for normal function. However, the trip one took to get to our rodent lab space, through facilities housing animals for multiple other projects, revealed that the macaques still lived in bare metal cages—a practice soon to be discontinued and regarded by my generation of scientists with loathing.

My research experience also includes museum collections, which are immense deposits of body parts, most of the animals having been killed toward that end, or, in that subdiscipline’s own euphemism, “collected.” This work is arcane even to other biologists, but it’s crucial, literally grounding our knowledge of life’s diversity in geography, ecology, and verifiable specimens. Without it, all other biological investigation loses its relevance—even our basic knowledge of how many species exist of any given type of living creature. The ethics issues in this work are also profound, especially since they often must be applied in a complicated web of different nations’ laws, and require a lot of decisions about death and pain on the fly.

Within science, a wide range of independent dialogues had begun, indirectly informed by the activist world, but focused on science specifically—not, for instance, the meat industry, product testing, or performing animals. The discussions multiplied and started jumping disciplinary boundaries to produce new symposia and debates. One early paper (1989) by Strachan Donnelley, “Speculative Philosophy, the Troubled Middle, and the Ethics of Animal Experimentation,” described scientists as a “troubled middle between human welfarists and animal rightists,” which I believe was the first time the two-windows problem (not by that name) was identified. Some scientists knew they were not brutal torturers of animals in the name of human benefit, or if they were, they wanted to stop and do it a different way—but they also knew that simply switching windows was no solution at all. Donnelley suggested that such scientists address the problem themselves without necessarily buying into what he described as the “ethical three-ring circus” of animal activism ideologies.

A few years later, the journal in which he published, the Hastings Center Report, would publish the proceedings of the Sundowner Conference, held in 1996, which yielded a set of principles similar to the Belmont Report concerning research on humans in 1978–1979. These and other results from this period have produced a significant library with a voice of its own, appearing in book form in the early 2000s.

This drive in internal reform was paralleled by external activism, and by the late 1980s, the leading activist writers, topics, and orientations had fallen into identifiable lines of thought, as follows:

Utilitarian: most identified with Peter Singer (Animal Liberation), most applied toward animal welfare, originally introduced with the view that the current level of inflicted pain is not worth the benefit, associated with improving treatment while reducing, not necessarily eliminating, use.

Rights: most identified with Tom Regan (The Case for Animal Rights), mostly applied toward reducing use as such.

Combination: most identified with Richard Ryder (Victims of Science), including the widely adopted term “speciesism”; here the most relevant concept is his term “painism,” combining the utilitarian view that pain must be reduced with the rights view that animal use is questionable—or outright wrong—even in the absence of pain.

This construction doesn’t mesh neatly with the internal scientific reform that was under way. Conservation work, for instance, had established that sentimentalism about single individuals in captivity or about single species didn’t result in effective policy, so scientists killed and collected animals in the field toward the goal of understanding and preserving the entire habitat—often successfully. Physiological and behavioral work in laboratories underwent many different changes based on institutions and on the specific animal, and my own research experiences fell right into the transition between imposed standards and internalized reform.

I discovered the limitations in ethical research and teaching the hard way: losing half of the mice I’d field-collected to mishandling at the post office, objecting to the treatment of turtles in the physiology class, and losing one animal in my vole colony, after over a year of successful care, to an aggressive cage-mate. I canceled that project in full, for which a certain sentence in the novel seems resonant:

Then we went into the laboratory and put an end to all we found living there. (The Island of Doctor Moreau, p. 82)

My eventual work made use of existing museum collections rather than live-animal work. I’d found a pretty solid spot for my own ethics. Bluntly, I was an untroubled killer, either directly or indirectly, but not a willing agent of pain, insofar as research standards were able to ascertain. It also seemed right to be honest about it, and I resisted using the standard lab euphemism of “sacrifice,” which I still think is a weird thing to say. My line was drawn at collateral damage. Deaths that I planned and conditions I monitored were one thing, but conditions that yielded even a little accidental death and agony along the way were another. To do otherwise would be “to shoot and cry about it,” a position I refused to take.

All scientists I knew who worked with live animals did the same thing: they self-regulated, finding a creature, a topic, and techniques inside their personal limits, and those were typically already inside the larger institutional limits. The problem, however, is that this also caused them to self-censor, resulting in no ongoing dialogue and no sense of community identity regarding animal care. Anything beyond one’s personal processing was left to comparatively sterile concerns like “enough to keep my job,” and that context quickly develops cracks. That’s why the biggest problems weren’t in the research interventions but in the more general, day-to-day husbandry. The period of my own research with live animals corresponded with this escalation in dialogue, although the institutional improvements weren’t yet in place. Although we graduate students were generally animal lovers already, trained by animal care staff, and technically overseen by advisors, in the moment we were often on our own in terms of engagement with the regulations or ethics—especially those like me who did original work rather than filling an existing slot in an advisor’s big research project. We didn’t need ethical consciousness-raising; we needed robust policy.

In 2001, the IACUC at the Children’s Memorial Research Center (CMRC) invited me to be their outside member, bringing me back into contact with live animal work after about ten years’ absence. The research topics at the center were right at the cutting edge of some of the most challenging topics in applied biology: stem cells, cancer, neural development, regeneration, and gene regulation, mostly associated with natal problems or inherited conditions. If you wanted to know what biotechnology could do, or might do, or was under argument for what it was doing, then this was the place.10

Nothing I’d seen compared to an animal-use endeavor like this one. The animals there were almost all rats and mice. The total rodent housing capacity was about 20,000, if they were mice (figure half that for rats), but at no time was this capacity approached.

For a while, rabbits were used as training animals for classes on infant surgery, and some of the facilities were prepared for larger animals like pigs, but no larger animals were used during that time. The rats and mice were mainly housed in a centralized colony, meaning banks of plexiglass cages, which supplied the animals to individual researchers; its staff monitored the animals whether they were in a lab or in the main colony, and its supervisor was both a veterinarian and a committee member. This differs from my experience at universities, where a given lab conducts its own animal husbandry, and it made the committee's job a lot easier.

I hadn't seen live rodent work for almost ten years, and the changes in practice were stunning. The National Institutes of Health, the biggest funder of biological research in the United States, was now accredited by AAALAC, which meant that its funding had to follow mandates much more stringent than those of the Animal Welfare Act. Although a motion to extend the Act's coverage to lab-bred rodents and birds failed in 2002, treating these animals according to the Act's definition of "animal" was already standard practice anyway (not that it would have hurt to see it made official).

In Responsible Conduct of Research (2003), biochemist Adil E. Shamoo and philosopher David B. Resnik provide the detailed history, and it was a long road, longer than it should have been in my view, with some bad moves along the way. But the tipping point seems to have arrived. The number of animals used in research fell by a full 50% from the 1970s to the 2000s, according to both the US Department of Agriculture and the Humane Society. The Great Ape Project reported in 2005 that 3,100 nonhuman primates were kept in research facilities in the United States, with 2,180 used in research, a small fraction of the figures from even a decade before. Education in the history and ethics of animal care is now mandated content for graduate studies in most research programs.

The required in-house committee overseeing animal research, the IACUC, is now expected to enforce three mandates:

Animal care applies not only to animals’ direct use in experiments but also to their husbandry, or living conditions, which means that it applies at all times to any and every animal in the scientist’s or institution’s control.

All animals used in any research project must be killed at the project’s end. Not only does this account for possible trauma or effects that being in the study might induce, it ensures that no animal is used in more than one study.

Stress and misery are considered equivalent to outright pain, and are subject to every rule concerning pain.

Per project, the rubric is explicit: a given study must minimize the number of animals used, prevent or minimize pain and stress, and address an understandable, relevant question. The five categories of pain, A through E, are worth a good look because A, the “least,” is never assigned—by definition, being subject to human control already puts the animal into B.

No project could begin at all without committee review. We reviewed each one at its initiation and on a following schedule, usually going over five or six projects per monthly meeting. The committee also oversaw the animal housing in the general colony and in each of the labs, when the latter had any, including the numbers present and how many were being used, and including a walk-through of the whole facility every six months. Not to put too fine a point on it: if the work, funded as it might be, didn't pass the committee, it would be suspended or shut down.

If you think that a committee composed mostly of researchers from the facility meant going easy on colleagues, think again. We were strongly influenced by Shamoo and Resnik's book and did not consider the work to be mere compliance. The standards were set as follows:

Most broadly by the Animal Welfare Act

Sharply limited by the standards of a third-party agency in the later years

When applicable, limited by the standards of a granting agency

Refined by our own research into the techniques, especially the veterinarians' monitoring of the latest knowledge

Tailored to the specific techniques and practices of a given researcher, as known to us directly.

The CMRC consistently passed external inspections, and while I was there the institution applied for accreditation with AAALAC, receiving it in 2008. We paid special attention to pain categories; to endpoints, that is, when and how subjects are scheduled to die; to standards for monitoring animals for evidence of misery or stress; to the research literature, to assess relevance and redundancy; and to the latest information on pain-alleviating substances.

My part often involved critiquing a project's experimental design to see whether it really addressed the stated question, so that more animals would not suddenly be requested halfway through the project, and so that no category supplied with animals would turn out to be unnecessary. A given researcher and I were often able to design away from multiple interventions per animal, which, as it happened, was the substantive point in the famous Brown Dog incident, concerning Bayliss's 1903 demonstration and the subsequent trial. Sharpening the criteria for experimental design meant arriving at the most honest animal numbers possible.

I stress this because utility is the weakest requirement in the IACUC mandate and in the entire discussion of research animal welfare. I don't care how glowing one's proposal for funding is; no one ever knows whether a whole sector of research will bring "utility" or "benefit," or when, or even what that is, let alone for a single project. Demanding an immediate benefit militates against basic research, the wellspring of ideas in science, and therefore considerably reduces potential research benefits. Instead of effectively forcing people to lie about how their work would usher in undreamed-of benefits, we required that the experimental design make maximum sense: that a study have an embedded "yes" or "no," or genuinely forge into the unknown.

The committee did sometimes take punitive action. If the animal care in a researcher's lab didn't follow the standards to the letter, for instance if the animals' cages weren't being labeled completely, we'd cite the researcher with the explicit threat of losing access to animals unless it was corrected. Most of these instances arose from a technician's momentary carelessness, and the researcher fixed it immediately. Some projects were simply denied, usually those from external researchers or add-ins that hadn't originated at the CMRC, for example a proposal to work with chimpanzees, which offered no reason we could see for such a subject.

In three cases that I can remember, we halted a project in progress and closed access to the animals; the most serious infraction involved a cage of mice left on a car seat for a while, and although they came to no harm, it was considered a startling lapse. Not once did I observe a case comparable to the treatment of lab animals prior to the Animal Welfare Act, nor did any researcher ever dismiss animal care as a viable issue for his or her work.

I can't speak to the larger topic of research as a whole, because there isn't any such whole. Mistreatment of animals was not a myth in the 1980s, as exposed by activist films such as Unnecessary Fuss, and I can't claim that my little life history in science represents the most powerful social force in the gains made. I do know that I'd aged into a time and position in which we scientists collectively meant it, and we considered care to be a work in progress, one that needed progress, rather than a matter of abiding by perceived industry standards. These values didn't arise under direct pressure from activism but from our collective experiences. I think the activism had its effect more subtly than its major advocates perhaps wanted, meaning that we'd all grown up with it and thought about it for ourselves. Several of the researchers and administrators had been active in animal welfare and stayed in research, in this case very ambitious research, rather than condemning it. We regarded pain, stress, and misery as a constant danger because this is animal use, period, and we considered ourselves, not just as a committee but as part of the community, to be moral agents, or rather, to do our best as moral agents. In that, Strachan Donnelley had been right on the mark.

Many activists for the reform of animal care are to be credited for their understanding of this history, especially Peter Singer. I also cite Rod Preece, who, although hardly supportive of animal research, does not demonize it, and who clarifies in his Animal Ethics that no human culture is especially kind to animals, despite a tendency to make such myths about cultures outside one's own. Since the 1970s, some branches of animal welfare activism have found a positive interlocking path with the generational changes in scientific culture.

I want to focus on the two-windows problem, though: not only is it still present in branches of animal activism, it's also, as I said earlier, powerfully entrenched in the larger culture. The Island of Doctor Moreau shows the way, right in the breaking point between Moreau and Prendick: human exceptionalism. Much activist discourse is still stuck in it: the idea that some creatures are afforded privilege against suffering, with ourselves as the gold standard, and that the question is which others are permitted this privilege, as judged and implemented by ourselves. The relevant texts are startlingly frank about "lower" animals and about eligibility based on their similarity to us, or on how much they can suffer compared with our assumed maximum capacity.

Other branches of activism seem disconnected from biological information and from current positions on nonhuman behavior. After over 300 pages of multiple authors' point-counterpoint debate in Animal Rights, including all manner of abstractions, Martha Nussbaum's conclusions concede that some research is allowable, but hold that animal rights demand the preservation of ecological habitats and dignified deaths for animals used in experiments. I'm not certain which scientists she has consulted, but possibly not very many: habitat preservation and painless euthanasia already receive unanimous scientific support, and have for decades.

Look at Moreau, because in this he's exactly right. He's not a villain in a "don't meddle" story, as he's not trying to emulate or outdo evolution, not trying to be anyone (or Anyone) he's not, not seeking fame, and not whining about being laughed at "back in the academy." But more important, he's not the vivisectionist imagined in the utility/pain dichotomy either, because he's not bragging about the inconceivable benefits his work is about to bring. He simply wants to know; the problem is not a utilitarian one, but a matter of how much pain he's willing to inflict. Moreau is completely aware that his subjects suffer; telling him so in the hope that he'll suddenly recant is futile. What's painism when you're willing to do it? What are designated rights to the powerless, or to some subsector of the designated rights-bearers from whom they're withheld?

We're supposed to be exalted Man, and we can't arrive at an ethic for inflicting and managing suffering? The ethics discourse keeps running up and down the arrows between the levels of the Man/Beast dichotomy. Moreau's and Prendick's debate is right on target in exposing human exceptionalism as the culprit, the real intellectual agent behind the two windows. Once humans are neither masters of the lowly beasts nor magnanimous uplift-agents for (some) beasts to be included, the three-ring circus simply disappears.

I was technically part of Donnelley's "troubled middle," with the minor correction that I, like many of my age group, was not precisely troubled, which would imply that we were confused. We knew exactly where our limits were; we'd found them ourselves at various crises within our work, and even at costs to our careers. When we could, we took the helm. It was a matter of pure practice running ahead of formal policy, rather than legislating policy and then forcing compliance. It also bypassed the debate about some incontrovertible "ethical good" in isolation. We knew the issue is not pain traded off against utility. It is instead the interplay of knowledge, care (or mercy, as Matthew Scully calls it in Dominion), and death, or more accurately, killing. The nearest thing to a formal voice is Nikola Biller-Andorno's observation that animals' capacity to be harmed or, more generally, misused is a matter of no controversy among scientists. Our question instead is whether and when we can harm them, and framed that way, and acknowledging the role of activism as pressure, I think the question produced more reform than in any other period of scientific history.

Readings

Readable summaries of medical developments during the nineteenth century include Michael T. Kennedy, A Brief History of Disease, Science and Medicine (2009); Roy Porter, Blood and Guts: A Short History of Medicine (2004); and, focusing on animal subjects, Hilda Kean, Animal Rights: Political and Social Change in Britain since 1800 (1998). Extensive biographical information about Claude Bernard and many of his original papers are available at the Bernard archives (claude-bernard.co.uk), and his An Introduction to the Study of Experimental Medicine (1865) is an education in itself.

The difficult and politically charged concepts of utility, progress, and biological processes during the nineteenth century are explained in Robert J. Richards, The Meaning of Evolution (1993).

Current and past standards for animal care in research in the United States are all available through the Public Health Service, including the Guide for the Care and Use of Laboratory Animals (8th edition, 2011) and its associated standards for IACUC work. All these should be read in the context of William Russell and Rex Burch, The Principles of Humane Experimental Technique (1959), and Strachan Donnelley's "Speculative Philosophy, the Troubled Middle, and the Ethics of Animal Experimentation," Hastings Center Report 19(2), 1989.

Books like Deborah Blum's The Monkey Wars (1995); Bryan Norton, Michael Hutchins, Elizabeth Stevens, and Terry Maple (editors), Ethics on the Ark (1996); F. Barbara Orlans, In the Name of Science: Issues in Responsible Animal Experimentation (1996); and Phillip Iannaccone and D. G. Scarpelli (editors), Biological Aspects of Disease: Contributions from Animal Models (1997) capture the diversity of views held by the incoming generations in science in the 1990s.

The many works over many years by Peter Singer and Rod Preece are a library of their own, all well known and available for investigation, and they overlap with the internal ethics of modern science more than one might expect. My topic does not include direct-action groups like the Animal Liberation Front or the Animal Rights Militia, or specific social organizing like People for the Ethical Treatment of Animals, but a broader discussion of policy certainly would.

It’s useful to compare Lawson Tait’s departure from research, and his Last Speech on Vivisection, delivered to the London Anti-Vivisection Society in 1899, with the similar experience of Richard Ryder, whose works include Speciesism (1974), Victims of Science (1975), Animal Revolution (1989), and Painism (2001).

Cass R. Sunstein and Martha C. Nussbaum (editors), Animal Rights: Current Debates and New Directions (2005), features point-counterpoint essays by many prominent writers in animal welfare and animal rights activism.

The newer library of animal care is large, but good starting points include John P. Gluck, Tony DiPasquale, and F. Barbara Orlans (editors), Applied Ethics in Animal Research (2002); Adil E. Shamoo and David B. Resnik, Responsible Conduct of Research (2003); and Nell Kriesberg's Research Ethics Modules (http://www.ncsu.edu/grad/preparing-future-leaders/rcr/modules/index.php) for North Carolina State University, especially "Animals, Science, and Society." For an understandably less conciliatory view, see Dario Ringach, "The Use of Nonhuman Animals in Biomedical Research," The American Journal of the Medical Sciences 342(4), 2011.

Notes

1. The women Sims operated on were slaves, which blurs the distinctions among human patients, animal patients, captive animals, human prisoners, and experimental subjects.

2. Claude Bernard, An Introduction to the Study of Experimental Medicine, 1865, translated by Henry Copley Greene (1927), published by Dover Publications, 1957.

3. Bernard argued against the use of summary measurements, which seems strange today, but his era of research preceded methods for assessing variation. In the absence of the distinction between standard deviation and standard error, his criticism was valid. Modern inferential statistics combine such methods with the rest of Bernard's points.

4. Auguste Comte is a key figure in the development of today's political and ethical vocabulary, not least in adopting evolutionary terms into progressivism, such that policies and persons are still spoken of today as "evolved." He also coined the terms "secular humanism" and "altruism," both of which carry the implication that virtuous behavior is really human, not animal. As I understand these matters, David Hume and Arthur Schopenhauer had demolished this position before Comte popularized it.

5. In "Claude Bernard and An Introduction to the Study of Experimental Medicine: 'Physical Vitalism,' Dialectic, and Epistemology," Journal of the History of Medicine and Allied Sciences 62(4): 495–528, 2007, Sebastian Normandin suggests that Bernard's research was vitalistic. I am not so sure. The initial chapters of his Introduction use "vital" as a synonym for "alive," and in citing its causes as strictly physical-chemical, Bernard seems to be in the classic Lawrence mode. But in Part II, he claims that "the vital idea" directs embryonic development, while preserving (material) determinism for the means. Further passages slip back and forth between the two meanings. I bring this up because the question of Moreau's materialism or vitalism is subtle. His one fixed idea, the rational man born from a bath of pain, is indeed more like a medieval alchemist's than a modern scientist's, practicing a transmutation of the flesh to accomplish a spiritual goal, but that idea stands in sharp contrast to all his other views. If he's a vitalist, then he's a later sort, not so much about magic forces but using rationality as his distinguisher, a presumed freedom from urges and especially from fear.

6. Bernard does not mention natural selection or Darwin, but he does acknowledge evolution, like every nineteenth-century scientist and scholar of whom I'm aware. Hugh LaFollette and Niall Shanks are mistaken in stating that he did not ("Animal Experimentation: The Legacy of Claude Bernard," International Studies in the Philosophy of Science 8(3): 195–210, 1994), as well as in claiming that he misread comparative studies; his logic is perfectly sound.

7. Vestigial organs also cause difficulty in the modern discussion of evolution, because one might ask, if evolution discards unneeded things, why not discard them entirely? The topic also raises impossible questions about whether an existing organ can be called useless insofar as it's part of a functioning body. The better argument in favor of evolution is to consider all organs in terms of homology, regardless of their current operations and activities, so that less-active organs become useful road maps of historical change, with no reference to how "useful" they are.

8. The dividing line between sentimental anthropomorphism and a recognition of similar experiences and suffering is difficult to draw given our current vocabulary. Sherryl Vint's Animal Alterity (2013) and Nicole Anderson's many journal articles provide some new terms that can help, especially when discussing pain and quality-of-life standards in research based on the animals' own experience.

9. Animal welfare and rights activism is often tied to other contemporary issues, such as environmentalism and opposition to the Vietnam War in the 1970s. Members of ALF and ELF today claim historical solidarity with activist groups like Weather and Earth First!, with a characteristically difficult overlap with activism on science-based issues like global warming and stem cell research.

10. When I joined the committee, the institution was named the Children's Memorial Research Center, then the Ann and Robert H. Lurie Children's Hospital of Chicago, then the Children's Memorial Institute of Research and Education, and most recently the Stanley Manne Children's Research Institute. It is affiliated with Northwestern University but is technically freestanding, based on funding from its governing agency, the Lurie Foundation. All researchers there also hold faculty positions at Northwestern, and it follows university rather than private standards for training.