Philosophy and Ethics
IT IS QUITE patent that college students are often drawn to studying philosophy, or indeed to majoring in philosophy, by something one can call the “Wow! factor.” How many eighteen-year-olds can fail to be charmed by Zeno’s proof that motion is impossible? Or by McTaggart’s demonstration that time is unreal? Or by Hempel’s paradox showing that a piece of white chalk serves as evidence for the claim that all ravens are black? All such dazzling arguments elicit a “Wow!” when first encountered. For students who have lived a sheltered life, Hume’s attack on the religious Argument from Design, which purports to prove the existence of an intelligent designer, can be similarly mind-expanding.
College students tend to be attracted either to intellectual approaches that radically challenge their understanding of reality, such as the paradoxes mentioned above, or else to those at the other extreme that promise to fix and reform the world, such as Marxism. A famous quip attributed to Bertrand Russell well characterizes the latter tendency: “If you are not a radical at age twenty, you are a knave. And if you are a radical at age forty, you are a fool.” Twentieth-century Anglo-American analytic philosophy was adamant that philosophy had no role to play in real-world matters. Ludwig Wittgenstein, the patron saint of analytic and ordinary-language philosophy, set the tone for analytic approaches to ethics when he remarked that one can take an inventory of all the facts in the world and not find it a fact that “killing is wrong.” Thus mainstream British and American philosophers did not embrace real-world issues, engaging instead in “meta-ethics,” the study of the nature of ethical judgments. The dominant approach to ethics was “emotivism,” which claimed that ethical judgments, though they seem to make claims about reality, are really just expressions of emotion, such as revulsion or disgust. The claim that killing is wrong is thus to be analyzed as analogous to exclaiming, “Killing, yuck!” On this view, one cannot rationally argue about ethics. When we appear to argue about an ethical issue, such as the illegitimacy of capital punishment, we are really arguing about such facts as whether capital punishment deters other killers. Needless to say, such an approach to ethics is very unlikely to have the Wow! factor for students seeking to change the world, or even to generate intellectual excitement for them.
In my own case, both in my undergraduate studies and in graduate school, I could develop no enthusiasm for ethics. To a smart young person, the meta-ethics regnant during my schooling came as no surprise: of course ethical terms do not refer to, nor can they be explained by, natural, empirical properties. And the contrary notion, that ethical terms refer to abstract entities that “subsist” in a Platonic realm of Forms, was at least equally implausible. Only much later in my career could I see the power of arguments for postulating abstract entities. It also became clear to me that the various theories intended to explain our ethical intuitions as deployed in daily life lacked plausibility. Consider utilitarianism, which defines the ethical notions of “good” and “bad” in terms of those actions that produce the greatest pleasure, and the least pain, for the greatest number: it is difficult to see how pleasures and pains could be added up in any consistent way. How, for example, does one compare physical pain with psychological distress in a manner that lends itself to a consistently applicable felicific calculus? How does one weigh the pain of a back injury in football against the risks of continuing to play, or against the loss of a potentially lucrative career? Obvious problems arose as well with Kantian ethics. How could anyone buy into the idea that there may never have been a truly moral act, or the related idea that if one derives even an iota of pleasure from an action, it ceases to be moral? According to Immanuel Kant, for an action to have any moral worth, it must be performed strictly out of acknowledgment of and respect for the moral law. Thus, if one contributes to charity, say, by buying Christmas presents for destitute children, and consequently derives pleasure or satisfaction from contemplating the children’s joy, the action cannot be viewed as strictly moral, for it is possible that the satisfaction one derives is the major reason for the action.
In short, ethics did not elicit a “Wow!” from me. In part, this is because we all make mundane ethical judgments all the time. Very rarely is an ethical decision as dramatic as the ethical problems depicted in the media: two patients, only enough medicine for one; who lives and who dies? Even in medicine, such decisions are extremely rare. (“Thank God!” say my physician friends.) And when genuine dilemmas do arise, rarely is one forced into a philosophical examination of one’s fundamental assumptions in the way that Zeno’s paradoxes force conceptual reflection about space, time, and motion. Rather, the agent will attempt to find some morally relevant characteristic that sways the resolution one way or the other. (In the medical example above, one patient may be a mass murderer of children, a fact that would help break the deadlock over who receives the medicine.)
Thus, for many young philosophers trained in the 1950s, 1960s, and 1970s, ethics seemed rather dull: it failed to produce shocking conclusions and was explicitly taught as having nothing to do with changing the world. While some hushed talk of “applied ethics” could be heard by the 1970s, it was spoken of derisively and never pursued in the “good” philosophy departments. All of this was quite odd, given the social turmoil of the period. When I began to work on animal ethics in the mid-1970s, I received a number of letters from former classmates at Columbia University warning me not to get sidetracked from “real philosophy.” I must confess to partaking in this snobby and snotty attitude myself for quite a long time. And with the chutzpah shared by the very bright and the very dumb, analytic philosophers dismissed the history of philosophy as irrelevant to a sound philosophical education because it was, after all, a history of errors. That rankled, both because the history of philosophy was my specialty and because even Aristotle, arguably one of the two greatest philosophers in history, began all his inquiries by examining what his predecessors had said about the subject in question, even though he had virtually no predecessors.