BEFORE I DEMONSTRATE the value of the concept of telos to moving discussion of animal ethics forward, there are a number of points surrounding the concept that merit additional discussion. In particular, I described this book as being based in common sense. Yet our talk about differing and incompatible metaphysical views of the world that cannot be decided empirically and yet seem to be solidly grounded is indeed far removed from common sense. After all, it is a highly credible commonsense question to ask, Is the world the way our ordinary experience tells us it is, or is it what science tells us it is? And in a less commonsense vein, we might ask, Are there various “natural kinds” differing qualitatively in an irreducible way from each other, the way Aristotle claims and the way Descartes denies, or is it true that if we could see from a perspective of godlike accuracy, we would see a homogeneous reality differing in quantitative characteristics forming the illusion of qualitative differences? To use a homey analogy, are pancakes and coffee cake the same thing because they are made of the same ingredients (Bisquick, eggs, and water), or are they as different as our gustatory experiences tell us they are?
My inclination is to take a pragmatist position on the question. Even in ordinary life there are very different answers to a given question, depending upon the way in which the question is asked, and all the answers may be true, but some may be totally inappropriate to the context of inquiry. Suppose, for example, I get a telephone call from a personnel officer of a large company to whom my name has been given as a reference by a former student, Ms. X, applying for a job. The company representative begins our conversation by asking me what I know of Ms. X. I respond by enthusiastically declaring that “she has the best forehand smash I have ever seen in any student.” My statement may be absolutely true, but it is totally—and laughably—irrelevant given the context of our discussion. The interviewer may then respond by saying that he has no interest in her tennis prowess but rather in her ability to interact with clients. If I then proceed to say that “she will be fine in that kind of interaction as long as the client is not allergic to that odoriferous perfume she usually wears,” there is little doubt that the interviewer will quickly thank me for my time and terminate the discussion, even though what I said may be true. It is not the truth or falsity of my assertion that is an issue—it is rather its relevance to the current context.
Similarly, in everyday experience we rarely ask about matters to which an answer from mathematical physics is the only plausible response. If you and I are discussing your lovely flower garden and I ask you how you manage to achieve such a wonderfully balanced array of colors, you would appear to be insane if you were to answer me in terms of the physics of color vision. Whether or not the world is “really” just a matter of homogeneous atoms randomly colliding in the void, that perspective on reality is rarely relevant to ordinary discourse. Insofar as our ordinary experience, which we cannot but treat as real, manifests differences between hot and cold, ugly and beautiful, good and bad, the reductionistic language of physics is inappropriate to ordinary conversation.
Nonetheless, the preceding point has been studiedly ignored by reductionistic scientists from Democritus to Descartes who are inclined to dismiss from their version of reality much of what we as human beings must deal with in our lives. It is precisely in keeping with this point that scientists have developed what I call the scientific ideology, or the common sense of science, for it is to science what ordinary common sense is to daily life.
What is an ideology? In simple terms, an ideology is a set of fundamental beliefs, commitments, value judgments, and principles that determine the way someone embracing those beliefs looks at the world, understands the world, and is directed to behave toward others in the world.
When we refer to a set of beliefs as an ideology, we usually mean that for the person or group entertaining those beliefs, nothing counts as a good reason for revising those beliefs and, correlatively, that raising questions critical of those beliefs is excluded dogmatically by the person with the belief system. (As twentieth-century political philosopher David Braybrooke has stated it, “Ideologies distort as much by omitting to question as by affirming answers” [1996, 126].)
The term is most famously, perhaps, associated with Karl Marx, who described capitalist ideology (or free-market ideology) as involving the unshakable beliefs that the laws of the competitive market are natural, universal, and impersonal; that private property in ownership of means of production is natural, permanent, and necessary; that workers are paid all they can be paid; and that surplus value should accrue to those who own the means of production.
Though most famously associated with the Marxist critique of capitalism, we all encounter ideologies on a regular basis. Most commonly, perhaps, we meet people infused with religious ideologies, such as biblical fundamentalism, who profess to believe literally in the Bible as the word of God. I have often countered such people by asking them if they have read the Bible in Hebrew and Greek, for surely God did not speak in antiquity in English. Further, I point out, if they have not read the original language, they are relying on interpretations rather than literal meaning since all translation is interpretation, interpretation that may be wrong. To illustrate this point, I ask them to name some of the Ten Commandments. Invariably, they say, “Thou shalt not kill.” I then point out that the Hebrew in fact does not say, “Thou shalt not kill,” it says, “Thou shalt not murder.” This should be enough to convince them that they in fact do not believe the Bible literally, if only because they cannot read it literally. Does it do so? Of course not. They have endless ploys to avoid admitting that they cannot possibly believe it literally, for example, “The translators were divinely inspired,” and so forth.
We of course are steeped in political ideology in grade school and high school, for example, on issues of “human equality.” Ask the average college student (as I have done many times), What is the basis for professing equality, when people are clearly unequal in brains, talent, wealth, athletic ability, and so forth? Few will deny this, but most will continue to insist on “equality” without any notion that “equality” in our belief system refers to a way we believe we ought to treat people, not to a factual claim. If they do see equality as an “ought” claim, almost none can then provide a defense of why we believe we ought to treat people equally if in fact they are not equal. And so on. But virtually never will any student renounce the belief in equality.
Ideologies are attractive to people; they give pat answers to difficult questions. It is far easier to give an ingrained response than to think through each new situation. Militant Muslim ideology, for example, sees Western culture as inherently evil and corruptive of Islam and the United States as “the Great Satan” and the fountainhead of Western culture, which in turn is aimed at destroying Islamic purity. The United States is thus automatically wrong in any dispute, and any measures are justified against the United States in the ultimate battle against defilement.
What is wrong with ideology, of course, is precisely that it truncates thought, providing simple answers and, as Braybrooke indicated in the statement quoted earlier, cutting off certain key questions. Intellectual subtlety and the powerful tool of reason in making distinctions are totally lost to gross oversimplification. Counterexamples are ignored. I recall working in a warehouse where the preponderance of blue-collar employees were strongly possessed of racist ideology, particularly antiblack ideology. It was universally believed that blacks are lazy, unintelligent, sneaky, and crooked. One day I had an inspiration. There was in fact one African American (Joe) who worked in the warehouse and was well liked. I raised this counterexample with some of the white workers. “Surely,” I said, “this case refutes your claim about all black people.” “Not at all,” they said. “Joe is different—he hangs around with us!”
But it is not only that ideology constricts thought. It can also create monsters out of ordinary people by overriding common sense and common decency. We can see this manifested plainly throughout the history of the twentieth century. The recent experiences of Eastern Europe and Africa make manifest that ideologically based hatreds, whose origins have been obscured by the passage of time, may, like anthrax spores, reemerge as virulent and lethal as ever, unweakened by years of dormancy.
As we have seen, ideologies operate in many different areas—religious, political, sociological, economic, ethnic. Thus it is not surprising that an ideology would emerge with regard to science, which is, after all, the dominant way of knowing about the world in Western societies since the Renaissance. Indeed, knowing has had a special place in the world since antiquity. Among the pre-Socratics, or physikoi (physicists), one sometimes needed to subordinate one’s life unquestioningly to the precepts of a society of knowers, as was the case with the Pythagoreans. And the very first line of Aristotle’s Metaphysics—by which term he meant “first philosophy”—is “All men by nature desire to know.” Thus the very telos of humanity, the “humanness” of humans, consists in exercising the cognitive functions that separate humans from all creation. Inevitably, the great knowers, such as Aristotle, Francis Bacon, Isaac Newton, and Albert Einstein, felt it necessary to articulate what separated legitimate empirical knowledge from spurious knowledge, and to guard jealously the methodology used to distinguish between them against encroachment by false pretenders to knowledge.
Thus the ideology underlying modern (i.e., postmedieval) science has grown and evolved along with science itself. And a major—perhaps the major—component of that ideology is a strong positivistic tendency, still regnant today, of believing that real science must be based in experience since the tribunal of experience is the objective, universal judge of what is really happening in the world.
If one were to ask most working scientists what separates science from religion, speculative metaphysics, and shamanistic worldviews, they would unhesitatingly reply that it is an emphasis on validating all claims through sense experience, observation, or experimental manipulation. This component of scientific ideology tracks directly back to Isaac Newton, who proclaimed that he did not “feign hypotheses” (“hypotheses non fingo”) but operated directly from experience. (The fact that Newton in fact did operate with nonobservable notions such as gravity, or, more generally, action at a distance, did not stop him from ideological proclamations affirming that one should not do so.) The Royal Society members apparently took him literally: they went around gathering data for their commonplace books and fully expected major scientific breakthroughs to emerge therefrom. (This idea of truth revealing itself through data gathering is prominent in Bacon.)
The insistence on experience as the bedrock for science continues from Newton to the twentieth century, where it reaches its most philosophical articulation in the reductive movement known as logical positivism, a movement that was designed to excise the unverifiable from science and, in some of its forms, to formally axiomatize science so that its derivation from observations was transparent. A classic and profound example of the purpose of the excisive dimension of positivism can be found in Einstein’s rejection of Newton’s concepts of absolute space and time, on the grounds that such talk was untestable. Other examples of positivist targets were Henri Bergson’s (and some biologists’) talk of life force (élan vital) as separating the living from the nonliving, and the embryologist Hans Driesch’s postulation of “entelechies” to explain regeneration in starfish.
The positivist demand for empirical verification of all meaningful claims has been a mainstay of scientific ideology from the time of Einstein to the present. Insofar as scientists think at all in philosophical terms about what they are doing, they embrace the simple, but to them satisfying, positivism I have described. Through it, one can clearly, in good conscience, dismiss religious claims, metaphysical claims, and other speculative assertions not merely as false, and irrelevant to science, but as meaningless. Only what can be in principle verified (or falsified) empirically is meaningful. “In principle” means “someday,” given technological progress. Thus, though the statement “There are intelligent inhabitants on Mars” could not in fact be verified or falsified in 1940, it was still meaningful since people could see how it could be verified, that is, by perfecting rocket ships and going to Mars to look. Such a statement stands in sharp contrast to the statement “There are intelligent beings in heaven” because, however our technology is perfected, we cannot visit heaven, it not being a physical place. (Ironically, the emphasis on empirical verification clashes with the belief that the world is really what physics tells us.)
What does all this have to do with ethics? Quite a bit, it turns out. Ethics is not part of the furniture of the scientific universe. You cannot, in principle, test the proposition that “killing is wrong.” It can be neither verified nor falsified. So, empirically and scientifically, ethical judgments are meaningless. From this, it was concluded that ethics is outside of the scope of science, as are all judgments regarding values rather than facts. The concept that I in fact learned in my science courses in the 1960s and that has persisted to the present is that science is “value-free” in general and “ethics-free” in particular. This denial in particular of the relevance of ethics to science is explicitly stated in science textbooks.
In addition to being explicitly affirmed, this component of the scientific ideology has also been implicitly taught in countless ways. For example, student moral compunctions about killing or hurting an animal, whether in secondary school, college, graduate school, or professional school, were never seriously addressed until the mid- to late 1980s, when the legal system began to entertain conscientious objections. One colleague of mine, who was in graduate school in the late 1950s studying experimental psychology, tells of being taught to “euthanize” rats after experiments by swinging them around and dashing their heads on the edge of a bench to break their necks. When he objected to this practice, he was darkly told, “Perhaps you are not suited to be a psychologist.” In 1980, when I began to teach in a veterinary school, I learned that the first laboratory exercise required of the students, in the third week of their first year, was to feed cream to a cat and then, using ketamine (which is not an effective analgesic for visceral pain but instead serves to restrain the animal), do exploratory abdominal surgery ostensibly to see the transport of the cream through the intestinal villi. When I asked the teacher the point of this horrifying experience (the animals cried out and showed other signs of pain), he told me that it was designed to “teach the students that they are in veterinary school, and needed to be tough, and that if they were ‘soft,’ to ‘get the hell out early.’”
As late as the mid-1980s, most veterinary and human medical schools required that the students participate in bleeding out a dog until it died of hemorrhagic shock. Although Colorado State University’s veterinary school abolished the lab in the early 1980s for ethical reasons, the department head who abolished it was defending the practice ten years later, after moving to another university, and explained to me that if he did not, his faculty would force him out. As late as the mid-1990s, a medical school official told the veterinary dean at my institution that his faculty was “firmly convinced” that one could not “be a good physician unless one first killed a dog.” In the autobiographical book Gentle Vengeance, which deals with an older student going through Harvard Medical School, the author remarks in passing that the only purpose he and his peers could see to the dog labs was to assure the students’ divestiture of any shred of compassion that might have survived their premedical studies.
Veterinary surgery teaching well into the 1980s was also designed to suppress compassionate and moral impulses. In most veterinary schools, animals (most often dogs) were operated on repeatedly, from a minimum of eight successive surgeries over two weeks to over twenty times at some institutions. This was done to save money on animals, and the ethical dimensions of the practice were never discussed, nor did the students dare raise them.
At one veterinary school, a senior class provided each student with a dog, and the student was required to do a whole semester of surgery on the animal. One student anesthetized the animal, beat on it randomly with a sledge hammer, and spent the semester repairing the damage. He received an A.
The point is that these labs in part taught students not to raise ethical questions and that ordinary ethical concerns were to be shunted aside and ignored in a scientific or medical context. So the explicit denial of ethics in science was buttressed and taught implicitly in practice. If one did raise ethical questions, they were met with threats or a curt “This is not a matter of ethics but of scientific necessity,” a point that was repeated when discussing questionable research on human beings.
Even at the height of concern about animal use in the 1980s, scientific journals and conferences did not rationally engage the ethical issues occasioned by animal research. It was as if such issues, however much a matter of social concern, were invisible to scientists, which in a real sense they in fact were. One striking example is provided by a speech given by James Wyngaarden, the director of the National Institutes of Health, in 1989. The NIH director is arguably the chief biomedical scientist in the United States and certainly is a symbol of the research establishment. Wyngaarden, an alumnus of Michigan State University, was speaking to a student group at his alma mater and was asked about ethical issues occasioned by genetic engineering. His response was astonishing to laypeople, though perfectly understandable given what we have discussed about scientific ideology, or the common sense of science. He opined that, while new areas of science are always controversial, “science should not be hampered by ethical considerations” (Michigan State University, 1989). Probably no other single incident shows as clearly the denial of ethics in science. When I read the unattributed quotation to my students and ask them to guess its author, they invariably respond, “Adolf Hitler.”
Nor is this sort of response restricted to biomedicine. Some years ago, PBS ran a documentary special on the Manhattan Project, which developed the atomic bomb. Scientists on the project were asked about the ethical dimensions of their work. They replied that the ethics was not their business; society makes ethical decisions, scientists simply provide technical expertise regarding the implementation of those decisions. In fact, every time I am interviewed by a reporter on ethical issues in science, my raising the “science is value-free” component of scientific ideology elicits a shock of recognition. “Oh yeah,” they say, “scientists always say that when we ask them about controversial issues like weapons development.”
It is therefore not surprising that when scientists are drawn into social discussions of ethical issues they are every bit as emotional as their untutored opponents. This is because their ideology dictates that these issues are nothing but emotional, that the notion of rational ethics is an oxymoron, and that he who generates the most effective emotional response “wins.”
Just how extraordinarily incapable scientists are of responding to rational ethical argument was driven home to me when I ran a long session on animal ethics and legislation at a 1982 national meeting of the American Association for the Advancement of Laboratory Animal Science (AAALAS), where I carefully laid out the arguments for legislating protections for research animals. Though the audience of laboratory-animal veterinarians expressed great frustration that researchers did not listen to them, particularly in human medical schools, and that their expertise, if attended to, would make for better animal care and better science, they steadfastly refused to support their own legislative empowerment since they opposed the importation of ethics into science!
As irrational as that was, it paled in comparison to what occurred after my session. Reporters converged on the president of the AAALAS, asking him to comment on my demand for legislated protection for animals. “Oh, that is clearly wrong,” he said. “Why?” they queried. “Because God said we could do whatever we wish with animals,” he affirmed. The reporters then turned to me and asked me to respond. Amazed that the head of a scientific organization could so invoke the Deity with a straight face (imagine the head of the American Physical Society responding to budget cuts in the funding of physics by saying, “God said we must fund physics”), I poked fun at his reply. “I doubt he is correct,” I answered. “He comes from Kansas State University.” “So what?” said the reporters. “Simple,” I replied. “If God chose to reveal his will at a veterinary school, it certainly would not be at Kansas State! It would be at Colorado State, which is God’s country!”
What are we to say of the aspect of scientific ideology that denies the relevance of values in general and ethics in particular to science? As I hope the astute reader has begun to realize, as a human activity, embedded in a context of culture, and addressed to real human problems, science cannot possibly be value-free, or even ethics-free. As soon as scientists affirm that controlled experiments are a better source of knowledge than anecdotes, that double-blind clinical trials provide better proof of hypotheses than asking the Magic 8 Ball, or, for that matter, that science is a better route to knowledge of reality than mysticism, we encounter value judgments as presuppositional to science. To be sure, they are not all ethical value judgments but rather epistemic (“pertaining to knowing”) ones, but they are still enough to show that science does depend on value judgments. So choice of scientific method or approach represents a matter of value. Scientists often forget this obvious point; as one scientist said to me, “We don’t make value judgments in science; all we care about is knowledge.”
In fact, reflection on the epistemic basis of science quickly leads to the conclusion that this basis includes moral judgments as well. Most biomedical scientists will affirm that contemporary biomedicine is logically (or at least practically) dependent on the use of—sometimes the invasive use of—animals as the only way to find out about fundamental biological processes. Every time one uses an animal in an invasive way, however, one is making an implicit moral decision, namely, that the information gained in such use morally outweighs the pain, suffering, distress, or death imposed on such an animal to gain the knowledge or that it is morally correct to garner such knowledge despite the harm done to animals. Obviously, most scientists would acquiesce to that claim, but that is irrelevant to the fact that it is still a moral claim.
Exactly the same point holds regarding the use of human beings in research. Clearly, unwanted children or disenfranchised and marginalized humans are far better (i.e., higher-fidelity) experimental models for the rest of us than are the typical animal models, usually rodents. We do not, however, allow unrestricted invasive use of humans despite their scientific superiority. Thus another moral judgment is presuppositional to biomedical science.
I was once arguing with a scientist colleague about the presence of moral judgments in science. He was arguing their absence. I invoked the argument that, if science were ethics-free, we would always use the highest-fidelity model in our researches, thus deploying unwanted children rather than rats. In the ensuing silence, I asked him again: “Why not use the children?” “Because they won’t let us!” he snapped.
In any case, many other valuational and ethical judgments appear in science, not just those involved in methodology. Which subjects and problems scientists are funded to pursue—AIDS, nonpolluting energy sources, alcoholism, but not the tensile strength of blonde hair or the intelligence of frogs—depends on social value judgments, including ethical ones. (Engaging in scientific research today depends on funding from federal agencies or private enterprise.) The once popular scientific subjects of race and the measurement of an alleged biological property called IQ are now forbidden for ethical reasons, as are myriad other subjects inimical to current social-ethical dogmas and trends.
Even experimental design in science is constrained by ethical value judgments. The statistical design of an experiment testing the safety of a human drug will invariably deploy far greater statistical stringency than a similar experiment testing the safety of an animal drug used for precisely the same disease in animals, for ethical reasons of valuing harm to people as a much greater moral concern than harm to animals.
The root paired concepts of biomedical science—health and disease—can also be readily shown to contain irrevocably valuational components. Physicians are convinced that the judgment that something is diseased or sick is as much a matter of fact as is the judgment that the organism is bigger or smaller than a breadbox. Diseases are repeatable entities to be scientifically discovered—physicians are scientists. This scientific stance has been repeatedly noted in its nonsubtle manifestations; anyone who has been in a hospital is aware of the tendency of physicians to see patients as instances of a disease rather than as unique individuals—science after all deals with the repeatable and law-like aspects of things, not with individuals qua individuals. This tendency to remove individuality is a chronic complaint of patients—it is demeaning to be treated as an instance of something. Indeed, it is less often noticed that this tendency is medically pernicious as well. When it comes to dispensing pain medication, for example, it has been shown that pain tolerance thresholds (i.e., the maximum pain a person can tolerate) differ dramatically across individuals and that thresholds can be modulated by a variety of factors, not the least of which is surely rapport with the physician, or the sense that the physician cares about the patient’s pain. Among physician authors, only Oliver Sacks, in Awakenings, has stressed the extraordinary degree to which a disease varies with the individuals, in all their complexity, in whom that disease is manifested or instantiated.
This much ordinary common sense (but not the common sense of science) recognizes. The more subtle sense in which scientism in its emphasis on fact versus value—with only the former term entering into the medical situation—misses the mark is in its understanding of the very nature of disease. For the concept of a disease, of a physical (or mental) condition in need of fixing, is inextricably bound up with valuational presuppositions. Consider the obvious fact that the concept of disease is a concept that, like good and bad, light and dark, acquires its meaning by contrast with its complement, in this case, the concept of health. One cannot have a concept of disease without at least implicit reference to the concept of health (that is, okay and not in need of fixing). Yet the concept of health clearly makes tacit or explicit reference to an ideal for the person or other organism; a healthy person is one who is functioning as we believe people should. This ideal is clearly valuational; most of us do not feel that people are healthy if they are in constant pain, even though they can eat, sleep, reproduce, and so forth. That is because our ideal for a human life is really an ideal for a good human life—in all its complexity.
Health is not merely what is statistically normal in a population (statistical normalcy can entail being diseased); nor is it purely a biological matter. The World Health Organization captures this idea in its famous definition of health as “a complete state of mental, physical, and social well-being.” In other words, health is not just of the body. Indeed, the valuational dimension is both explicit and not well defined, for what is “well-being” save a value notion to be made explicit in a sociocultural context?
Heedless of this point, and wedded to the notion that disease is discovered by reference to facts, not in part decided by reference to values, physicians make decisions that they think are discoveries. When physicians announce that obesity is the number one disease in the United States, and this “discovery” makes the cover of Time magazine, few people, physicians or otherwise, analyze the deep structure of that statement. Are fat people really sick people? Why? Presumably the physicians who make this claim are thinking of something like this: fat people tend to get sick more often—flat feet, strokes, bad backs, heart conditions. But, one might say, is something that makes you sick itself a sickness? Boxing may lead to sinus problems and Parkinson’s disease—that does not make it in itself a disease. Not all or even most things that cause disease are diseases.
Perhaps the physicians are thinking that obesity shortens life, as actuarial tables indicate, and that is why it should be considered a disease. In addition to being vulnerable to the previous objection, this claim raises a more subtle problem. Even if obesity does shorten life, does it follow that it ought to be corrected? Physicians, as is well known, see their mission (their primary value) as preserving life. Others, nonphysicians, however, may value quality of life over quantity. Thus, even if I am informed—nay, guaranteed—that I will live 3.2 months longer if I lose forty-five pounds, it is perfectly reasonable for me to say that I would rather live 3.2 months less and continue to pig out. In other words, to define obesity as a disease is to presuppose a highly debatable valuational judgment.
Similar arguments can be advanced vis-à-vis alcoholism or gambling or child abuse as diseases. The fact that there may be (or are) physiological mechanisms in some people predisposing them to addiction does not in and of itself license the assertion that alcoholics (or gamblers) are sick. There are presumably physiological mechanisms underlying all human actions—flying off the handle, for example. Shall physicians then appropriate the management of temperament as their purview? (They have, in fact.) More to the point, shall we view people quick to anger as diseased—Doberman’s syndrome?
Perhaps. Perhaps people would be happier if the categories of badness and weakness were replaced with medical categories. Physicians often argue that when alcoholism or gambling are viewed as sickness, that is, something that happens to you that you cannot help, rather than as something wrong that you do, the alcoholic or gambler is more likely to seek help, knowing he or she will not be blamed. I, personally, am not ready to abandon moral categories for medical ones, as some psychiatrists have suggested. And, as Kant said, we must act as if we are free and responsible for our actions, whatever the ultimate metaphysical status of freedom and determinism. I do not believe that one is compelled to drink by one’s physiological substratum, though one may be more tempted than another with a different substratum.
I recall one occasion where one of my freshman students came to my office hours, visibly upset and indeed on the verge of tears. When I asked him what the problem was, he told me that he had visited a physician for a routine checkup. In the course of taking his history, the physician determined that both his parents were alcoholics and told the student that he could not go to venues where alcohol was consumed, and specifically cited student parties. The student was very upset because this advice dealt a serious blow to his social life, and he asked my opinion. I told him that I thought he could go to parties and other such events, provided he remained very aware of the need not to drink. “Take a glass of ginger ale or Coke, consume it slowly so no one attends to what you’re drinking. Take no alcohol and act normally. Keep me posted on your progress.” Three years later, when he was about to graduate, he came back to see me and thanked me warmly for the advice. Following it, he had been able to enjoy a normal social life and had not become an alcoholic even in the presence of alcohol. He had been scrupulously careful not to consume alcohol, and was in fact never tempted to do so.
Be that as it may, the key point is that physicians are not discovering in nature that conditions like obesity or alcoholism are diseases, though they think they are. They are, in fact, promulgating values as facts and using their authority as experts in medicine to insulate their value judgments from social debate. This occurs because they do not see that facts and values blend here. They are not ill intentioned, but they are muddled, as is society in general. And to rectify this, we must discuss, in a democratic fashion, which values will underlie what we count as health and disease, not simply accept value judgments from authorities who are not even cognizant of their existence, let alone conceptually prepared to defend them. At the very least, if we cannot engender a social consensus, we should articulate these for ourselves.
In 1988, the Environmental Protection Agency rejected scientifically sound toxicological data on moral grounds because the experiments that generated it had been performed by the Nazis on human beings against their will; the agency feared that using the data would legitimate such experimentation. This decision was made despite the fact that other well-established areas of science, such as research on hypothermia and on human reactions to high altitude beginning in the 1940s, had been based on and derived from Nazi experiments, and despite the fact that declining to use the data essentially entailed that much invasive animal research would need to be done to replace it.
Consider a revolution that I have examined in considerable detail in another book: the replacement of psychology as the science of consciousness by behaviorism, which saw psychology as the science of overt behavior and ignored internal mental states. What facts could force such a change? After all, we are all familiar with the existence of our subjective experiences. Few people were impressed by the denial of consciousness advanced by behaviorism's founder, John B. Watson (he came perilously close to saying, "We don't have thoughts, we only think we do"). Rather, people were moved by his valuational claim that studying behavior is more valuable because we can learn to control it.
Clearly, then, the component of scientific ideology that affirms that science is value-free and ethics-free is incorrect. We can also see that the more fundamental claim—that science rests only on facts and includes only what is testable—is also badly wrong. How, for example, can one scientifically prove (that is, empirically test) that only the verifiable may be admitted into science? How can we reconcile the claim that science reveals truth about a public, objective, intersubjective world with the claim that access to that world is only through inherently private perceptions? How can we know that others perceive as we do, or, indeed, perceive at all, since we cannot verify the claim that there are other subjects? How can science postulate an event at the beginning of the universe (the Big Bang) that is by definition nonrepeatable, nontestable, and a singularity? How can we know scientifically that there is reality independent of perception? How can we know scientifically that the world was not created three seconds ago, complete with fossils and us with all our memories? How can we verify any judgments about history? How can we show that we know things best when we reduce them to mathematical physics rather than when we stay at the level of sensory qualities? Answers to the above questions are not verified scientifically. In fact, such answers are presuppositional to scientific activity.
I have in fact alluded to another component of scientific ideology that worked synergistically with the denial of values to remove animal ethics from the purview of science. This is the claim that we cannot legitimately speak of thoughts, feelings, and other mental states in science since we cannot deal with these things objectively, not having access to the thoughts and consciousness of others. As I have explained elsewhere, this denial allowed scientists to negate the reality of animal pain, distress, and fear, while at the same time using animals as models to study pain. In a previous book, The Unheeded Cry, I demonstrated that this viewpoint was adopted in the early twentieth century by behavioral psychologists despite the fact that the dominant approach to biology was Darwinian, and Darwin himself, and most of his followers, eloquently affirmed that if morphological and physiological traits are phylogenetically continuous, so too are mental ones. I showed in that book, I hope, that the removal of thought and feeling from legitimate science was not a matter of new data that refuted old attempts to study animal mind, nor was it a result of someone’s finding a conceptual flaw in that old approach (as Einstein did with Newton’s views of absolute space and time). In fact, the shift to studying behavior rather than mind was effected by valuational rhetoric, namely, that if we study behavior rather than thought, we can learn to shape it and modify it—to extract behavioral technology from science, as it were. Anyway, the rhetoric continued, real sciences like physics deal with observables (a claim not always true—consider particle physics), and if we want to be real scientists, we need to lose subjectivity. So despite the ideological belief that science only changes by empirical or logical falsification, we have shown that, at least in psychology, a major change in what counted as scientific legitimacy was driven by values.
Another component of scientific ideology that follows closely upon our discussion of values is the ubiquitous belief that we best understand any phenomenon when we have understood it at the level of physics and chemistry, ideally physics. It is this component of scientific ideology that led a very prominent colleague of mine in physiology who works on fascinating issues in animal evolution at the phenotypic level to affirm in one of my classes, “Science has passed me by. . . . My work is archaic. . . . All real science now operates at the molecular level.”
This reductionistic approach further removes scientists from consideration of ethics. If what is "really real" and "really true" is what is described by physics, it is that much easier to treat ethical questions arising at the level of organisms as being as "unreal" or "untrue" as the level at which they arise. The language of physics is, after all, mathematics; yet ethical questions seem inexpressible in mathematical terms. The belief that expressing things mathematically, as physics does, is getting closer to the truth leads in fact to a kind of "mathematics envy" among the less quantitative areas of science, and sometimes to pseudo-mathematical obfuscation being deployed in fields like sociology or psychology to make those fields appear closer to the reductionistic ideal. In the end, of course, as I pointed out regarding the scientific revolution, a commitment to reductionism represents a value judgment, not the discovery of new facts. No empirical facts force the rejection of qualitative work in favor of quantitative work, and Aristotle, for one, explicitly rejected any such move.
The final elements of scientific ideology worth mentioning are the beliefs that science should be ahistorical and aphilosophical. If the history of science is simply a matter of “truer” theories replacing false or partially false ones, after all, why study a history of superseded error? How things come to be accepted, rejected, or perpetuated is ultimately seen as not being a scientific question. Thus many scientists lack a grasp of the way in which cultural factors, values, and even ethics shape the acceptance and rejection of whole fields of study (for example, consciousness, as we have already discussed, eugenics, intelligence, race, psychiatry as a medical discipline, and so on). To take one very interesting example, it has been argued that quantum physics in its current form would never have been possible without the cultural context prevailing in Germany between 1918 and 1927.
Historian Paul Forman (1971) has argued that a major impetus for both the development and the acceptance of quantum theory was a desire on the part of German physicists to adjust their values and their science in the light of the crescendo of indeterminism, existentialism, neoromanticism, irrationalism, and free-will affirmation that pervaded the intellectual life of Weimar and that was hostile to deterministic, rationalistic physics. Thus quantum physicists were enabled to shake the powerful ideology of rationalistic, deterministic, positivistic late nineteenth- and early twentieth-century science, with its insistence on causality, order, and predictability, as a result of the powerful social and cultural ambience in German society that militated in favor of a world in which freedom, randomness, and disorder were operative and that valued such chaos both epistemically and morally.
The rejection of philosophical self-examination is also built into scientific ideology and into scientific practice. Since philosophy is not an empirical discipline, it is excluded by definition by scientific ideology from the ken of science. Further, historically, philosophy, like theology, competed with science, at least in the area of speculative metaphysics, so the few historically minded scientists approached it with suspicion, which spread to others. In any case, scientists do not have time for “navel-gazing” or “pontificating,” as they often characterize philosophy—they are too busy doing science to reflect much on it. As one scientist said to me, “When I win a Nobel Prize, then I will write philosophy, because then everyone will want to read it, whether it makes sense or not!” Clearly, then, reflection on science and ethics must also await a Nobel Prize.
The reader will note that many of my examples of scientific ideology have been drawn from earlier decades. This is because scientific ideology once went largely unexamined: the public mind was unconcerned with what scientists believed—scientists believed many strange things—so scientists thought little of making public statements that revealed various aspects of their thinking, as when the scientific community looked askance at the work of Jane Goodall, or failed to respect and share the public's concern about ethical issues raised by science, in areas from genetic engineering to animal research. With the public now more aware of what scientists think, scientists have become more guarded, for example, regarding the denial of animal consciousness. Blatant disregard for social and ethical concerns put scientific freedom at risk, and science became more circumspect, even though the basic ideology remained unchanged.
An excellent example is provided by the fact that ethical issues are still neither taught nor discussed in science courses. However, in 1990 the NIH mandated the teaching of what the government considers research ethics, instruction in “the responsible conduct of research,” concerning regulatory compliance, wherein a series of “thou shalt not” policies are taught like a religious catechism in a few days, with no discussion, by people with no ethics background whatever. (In one case, a chemistry professor at my university attended a two-day seminar in teaching ethics and earned a “certificate” designating her as a “qualified ethics instructor.” She had never had a philosophy class, but what the hell . . . it was just ethics! I asked an artist friend of mine to make a plaque for me certifying that I had had more than two days of chemistry instruction and thus was qualified to teach quantum chemistry.)
Such contempt for ethics teaching pervades virtually all such classes. As it happens, I have taught a number of science classes at an upper level as well as having taught ethics for more than thirty years, and I can unequivocally affirm that it is much harder to teach ethics properly. Recently, I had numerous students in my graduate course Science and Ethics who had taken the “regulatory compliance” class and, in writing, pronounced the difference between the two courses as “night and day.” As one student wrote, “Regulatory compliance taught us a list of what to do and what not to do—you taught us why!”
Now that we understand ideology and the way it operates in the common sense of science, we can return to discussing the way multiple metaphysics can exist in the same individual. Consider an analogy: Imagine a person who requires glasses to see normally at his workplace. The only problem is that, due to the peculiarity of his optical correction, these glasses do not allow him to see anything colored purple as purple; instead, he sees it as gray. Eventually, he may forget that the work world contains purple items. Similarly, if one is sufficiently indoctrinated with scientific ideology, and has grown accustomed to working with peers who share that ideology, awareness of ethical issues in science simply does not come up in one’s thinking or conversation. If one interacts with nonscientists who do see ethical issues in science, they can simply be dismissed as lacking a scientific perspective, in the same way that an educated person dismisses scientifically naive people who see Australians as living “upside down.”
One important distinction must be drawn here. Some ideologies typically pervade every aspect of one’s life. Racism is this sort of ideology. Scientific ideology tends to be restricted to the context of one’s scientific activity. For example, John B. Watson was ideologically committed to denying consciousness to other beings, human or animal. One can, however, be morally certain that when Watson went home to his wife and she began a conversation about some newsworthy issue by saying, “Do you know what I think, John?” he of course did not respond by saying, “I doubt that you think at all.” In other words, one’s ideological commitments are checked by the pressures of ordinary life. The pressure of donning and doffing ideology creates a condition in people that psychologists call compartmentalization. When I was an undergraduate at the City College of New York, I was aware of a group called the Society of Orthodox Jewish Scientists. Fundamentalist scientists exist everywhere and are quite capable of believing that the world is five thousand years old, while also fully accepting an age of billions of years in their scientist moments. Many scientists, Descartes included, treat animals with doting and affection in their ordinary lives (he raised spaniels and gave them as gifts), while seeing them as machines incapable of pain in the course of their scientific activities. In the same way, many scientists espouse politically liberal causes in their ordinary lives, while concurrently denying any sense to ethical judgments. Of paramount importance is the fact that coexisting ideologies rarely confront one another, hence the pervasiveness of compartmentalization.
As important as the denial of ethics is to scientific ideology, equally important is the denial of consciousness, thought, feeling, or any other mode of awareness, and of evidence of mental activity. In fact, as we saw earlier, the absence of mind in animals, such that nothing matters to them, would constitute a morally relevant difference removing them from the scope of moral concern. Recall that Descartes conveniently made the attribution of consciousness dependent on possession of language. As we shall see, there are indeed certain aspects of mind that do require language. Without language one cannot think in negative terms (there are no jaguars in the library), fictional terms (Superman grew up in Smallville), counterfactual terms (if it doesn't rain we can meet in the park), or futural terms (I want to visit Ireland next summer). But one can certainly experience the negative thoughts and feelings associated with violation of telos, as well as the pleasures and satisfactions that come with meeting the demands of telos. The important philosophical question that arises is this: It appears that attributing mental states to animals requires that we utilize anthropomorphic inferences, and all sorts of scholars, particularly scientists in the biological and psychological sciences, see anthropomorphism as a cardinal sin. Can one construct an argument that makes anthropomorphism legitimate? Before embarking on that question, we need to remind ourselves that telos violations can be so egregious that one need not be Jane Goodall to be cognizant of them—witness sow stalls, giraffe and orca enclosures, veal crates, and so on—which is why ordinary people not grounded in ethology are so shocked by them.