Those are my principles, and if you don’t like them . . . well, I have others.
—GROUCHO MARX
MEANING IN LIFE IS A COMPLEX MATTER THAT DEPENDS on circumstances, friends, family, career, and accidents of birth (as in both our genetic makeup and when and where we happen to be born). But, unquestionably, morality plays a crucial role in how we see ourselves and others, which is why we have spent some time here first investigating how the human brain makes moral decisions (Chapters 2 and 3), and then gaining some hint as to why we are moral beings to begin with (Chapter 4). That’s all well and good, you might say, but what am I supposed to do about it? Assuming that moral instincts come from evolution, that our brains conduct moral reasoning by a complex combination of logical reasoning and emotional input, and even that gods have nothing to contribute to the issue regardless of whether they exist or not (we’ll get to that in Chapter 18), how do we then come up with a reasonable understanding of morality upon which to build the foundations of a meaningful life? Enter the handy-dandy morality menu!
We’ll proceed in two steps. First, we will answer what is sometimes referred to as the “metaethical” question (the question we temporarily set aside in Chapter 2): if there is no absolute source of morality (like a god), how do we avoid sliding into “anything goes” moral relativism? (The good news is, we don’t!) In the second step, we will take a look at the three major ethical systems competing for your vote: deontology (rule-based ethics), consequentialism (outcome-based ethics), and virtue ethics (character-based ethics). Finally, I will suggest that, up to a point, we can in fact mix and match the insights from these three major views of ethics to build a custom-fitting, yet not arbitrary, ethical view for ourselves.
Metaethics is that branch of philosophy that asks the fundamental question: how do we justify ethical reasoning at all? We shall see toward the end of the book that the most common answer—that morality is a god-given gift to humanity—simply won’t do, for very solid philosophical reasons. A good number of people take this to be an admission that ethics is therefore a matter of taste: I prefer dark chocolate and you go for the milk variety (really?); I prefer that young girls’ genitals not be mutilated, and you don’t mind it so much. Who’s to say what’s right and what’s wrong? I cannot make an argument that preferring milk over dark chocolate is irrational (though it does seem very strange to me), but I can make a very substantial argument that—political correctness about other cultural norms notwithstanding—some actions are just wrong, across the board of human experience.
Discussions of metaethics can get really complex, since philosophers tackle the problem by taking a variety of approaches. Still, what is needed for our purposes here is to understand the peculiar role played by “facts”—our empirical observations about human behavior (and hence science)—when it comes to issues of “values”—our ethical choices. In the discussion on the relationship between science and philosophy at the beginning of the book, we encountered David Hume’s is/ought gap (commonly referred to as the naturalistic fallacy): the idea that one cannot simply move from a matter of fact (what is) to a matter of value (what ought to be). There are plenty of natural things that are simply not good for us—poisonous mushrooms come to mind, for instance. Similarly in ethics, just because it is natural (that is, instinctive), say, for people to distrust outsiders, it doesn’t follow that we should treat immigrants any differently from the way we treat native-born citizens.
Recently a spate of claims has been made by scientists, and particularly neurobiologists, that research into the structure of the brain can solve all sorts of philosophical problems, from the issue of free will (see Chapter 9) to the question with which we are presently occupied: morality. Perhaps the most comprehensive scientific attack on moral philosophy is the one mounted by author Sam Harris in his book The Moral Landscape: How Science Can Determine Human Values. Harris dismisses the entire philosophical literature on ethics in a footnote at the beginning of his book, on the grounds that “every appearance of terms like ‘metaethics,’ ‘deontology,’ . . . directly increases the amount of boredom in the universe.” That assertion, needless to say, is simply not serious scholarship (not to mention that one could level the exact same “criticism” at every appearance of terms like “fMRI,” “parietal lobe,” “axon,” and so forth—you get the drift). Moreover, again in a footnote, Harris helps himself to such a broad definition of science that one can make a reasonable argument that he is actually talking about scientia (which, as we saw at the beginning of the book, includes both science and philosophy and is what I call sci-phi), since he does “not intend to make a hard distinction between ‘science’ and other intellectual contexts in which we discuss ‘facts.’” But then he isn’t talking about “science” in the sense actually practiced by scientists and understood by most of the public, which is the sense in which his title claims that science can determine our values. Harris is playing a game of bait-and-switch with his readers.
The more substantial criticism of Harris’s endeavor (and of similar projects), however, is that he simply does not seem to acknowledge or understand the distinction between facts and values. This is on full comic display in a passage where he reports on his own neurobiological research on the fact/value distinction:
First, belief appears to be largely mediated by the MPFC [medial prefrontal cortex], which seems to already constitute an anatomical bridge between reasoning and value. Second, the MPFC appears to be similarly engaged, irrespective of a belief’s content. This finding of content-independence challenges the fact/value distinction very directly: for if, from the point of view of the brain, believing “the sun is a star” is importantly similar to believing “cruelty is wrong,” how can we say that scientific and ethical judgments have nothing in common?
This is nonsense on stilts. To begin with, nobody has ever claimed that scientific and ethical judgments have nothing in common, from the point of view of the brain. More importantly, it most certainly does not follow from this that facts and values are the same sort of thing, only that the brain deals with both in the same areas. (By similar reasoning, since the same areas of the brain respond to having sex and thinking about having sex, it would follow that the two experiences are one and the same. Another stunning discovery of modern science.)
Finally, Harris’s entire project is predicated on the idea that science is the best way to judge the consequences of our actions and to channel them in the direction of increasing human well-being, by which he means an increase in happiness and a decrease in pain. But this project is incredibly philosophically loaded, as Harris is taking for granted a particular consequentialist approach to ethics (utilitarianism), which is far from the only contender in the field of moral philosophy (as we’ll see in a minute). Needless to say, he does this with neither a scientific (because it is not possible) nor a philosophical (because it is boring) defense of his assumptions.
My critique of Harris and other neuro-enthusiasts aside, empirical facts are most certainly not irrelevant to ethical judgment. Morality is a human attribute, and it makes no sense to think about it without regard to what sort of animals we are. In an important sense, ethics is about human well-being (I will leave aside the issue of animal rights, for the simple reason that animals would not have “rights” unless there were human beings capable of thinking about such things as rights), so we want to know empirical facts about what increases or decreases our well-being whenever we engage in ethical reasoning. (Incidentally, the very concept of well-being is in fact open to much discussion, both philosophically and in terms of social science research. We’ll take a close look at the idea in the last chapter of the book.) Let me put this point another way: morality makes sense only within the context of a group of social animals capable of reflecting on what they are doing and why. A lion that kills another male’s cubs when he takes over a harem is not acting immorally, he is just being a lion. Similarly, even a human being could not possibly commit immoral acts if he or she were, say, permanently stranded on a deserted island, because against whom would such acts be committed?
It should be clear from our discussion of the evolution of morality in the last chapter that evolution is where humanity got the foundational building blocks for a moral sense. We can philosophize all we like about what is right and what is wrong, but unless we have a strong natural instinct that makes us care about perceived injustice, all such discussions are literally academic and do not affect our lives. Evolution has equipped us with only a very basic moral instinct that is tailored to work in the situations that historically affected our survival—that is, within small bands of individuals who had to stick together and often defend themselves from aggression not just by other species but by members of Homo sapiens who belonged to different tribes. The idea of morality as a system of thought rather than an animal’s unconscious assessment of certain social situations relies on the unique ability of humans (so far as we know) to reflect on what we do and why we do it. That, of course, is where philosophy comes into play to show us how to build on our instincts and broaden our conception of what it means for something to be right or wrong.
Consider again the example I mentioned earlier: humans’ strong innate feeling that members of their in-group ought to be treated with respect because survival depends on them. We eventually realized that there is no reason why this conditional reciprocity (the “tit-for-tat” strategy we encountered in the last chapter) should not be extended to every other human on the planet, simply on the grounds that human beings are fundamentally the same everywhere, not just in our own neighborhood. What is interesting in making this move is that the underlying reason for our attitude changes in an important manner: natural selection presumably endowed us with a tendency to engage in tit-for-tat with our neighbors and fellow in-group members because such behavior is (demonstrably, by mathematical modeling, as it turns out) the best evolutionary strategy to maximize our own well-being. Recall that tit-for-tat boils down to being nice to your fellow humans unless they are being nasty to you, in which case you retaliate. This is the same behavior that can be observed to this day in our closest evolutionary cousins, the bonobos. But when we enlarge the conditional reciprocity circle to the entire human race, we do so because by reflective reasoning we see that it is the right thing to do. It is not that what other humans do thousands of miles away actually affects us (at least not most of the time); it is that we realize that we would not want, say, torture, mutilation, killings, famine, and so on, to happen to us, and there is no rational defense of the idea that we are somehow more deserving than anyone else on the planet.
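To make the logic concrete, here is a minimal sketch (my illustration, not something drawn from the modeling literature itself) of tit-for-tat in the iterated prisoner’s dilemma, the standard model behind the mathematical results just mentioned. The payoff numbers are the conventional values used in Axelrod-style tournaments, stipulated here purely for illustration.

```python
# Minimal iterated prisoner's dilemma, illustrating why tit-for-tat
# ("cooperate first, then mirror the opponent's last move") performs well.
# Payoff values are the standard Axelrod-tournament assumptions.

PAYOFF = {  # (my move, their move) -> my score; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A relentlessly 'nasty' strategy, for comparison."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each player's record of the *opponent's* moves
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))    # stable cooperation: (600, 600)
    print(play(tit_for_tat, always_defect))  # retaliation limits losses: (199, 204)
```

Run against a copy of itself, tit-for-tat settles into sustained cooperation; run against a relentless defector, it loses only the first round and then retaliates, which is exactly the “nice unless they are nasty to you” behavior described above.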
What we have, then, to simplify a bit, is a two-step process to answer the metaethical question. In the first place, we would not even be talking about morality if it were not for the contingent fact that we are a species of large-brained social animals. Here science tells us the most plausible story of how the rudiments of moral thinking (or more precisely, feeling) came about. Second, our ability to communicate with each other and to critically reflect on our own actions generated the concept that we ought to extend the reach of our moral system to (at least) all other members of our species (and arguably beyond). This is philosophy’s contribution to the problem.
And speaking of philosophy, it is finally time to turn to a brief examination of the three major schools of thought vying for the title of moral system of choice, beginning with deontology. The basic idea, as we saw in Chapter 2, is familiar to anyone who subscribes to a religious system of morals: there are rules to follow unquestioningly, and they are to be followed because they spell out what is the right thing to do. The Ten Commandments of the Judeo-Christian tradition are a classic example of a deontological (duty-based) moral system. But wait, did I not just say that gods have nothing to do with morality? Yes, but modern deontological theories are based on philosophical analysis, not on theology. By far the most influential of such systems is the one developed by Immanuel Kant. A deeply religious person, Kant was raised in a Pietist household where his parents taught him to take the Bible literally. But Kant the philosopher, eventually realizing that ethics needs a rational foundation independent of any particular religious tradition, set out to provide that foundation. (Curiously, Kant is also the philosopher who arguably provided us with the best arguments against the existence of God, presented in his Critique of Pure Reason.)
In one of his most influential books, Groundwork of the Metaphysics of Morals, Kant formulated several versions of his now-famous “categorical imperative,” the foundation for his deontological system. One version reads, “Act only according to that maxim by which you can at the same time will that it would become a universal law,” and another goes: “Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end.” The first version of the imperative should sound very familiar: it is a close cousin of “the Golden Rule” espoused by many religions worldwide (not only Judeo-Christianity but also Buddhism, Confucianism, Hinduism, and Taoism, to mention just a few). The second version of the imperative is a bit more philosophically subtle and wider-ranging than the Golden Rule: Kant is saying that to act morally is to think of others as beings endowed with exactly the same intrinsic value we hold ourselves to have, and never as tools to achieve another goal.
Although this imperative sounds reasonable and certainly noble, it is actually very difficult to practice. For instance, suppose that I do something nice for my friend, or imagine that I give money to a charity to help the victims of a calamity. Naturally, I will feel good about doing such things, and indeed, I may even pat myself on the back for being such a nice guy. According to Kant, however, if I did so I would not be acting morally. My behavior would not be immoral, but since I derived pleasure from it, a pure Kantian could argue that I acted kindly in order to feel better, and hence that I used other people as a means to an end of my own.
Perhaps you can begin to see why Kant developed a reputation for being a strict moralist with little understanding of, or sympathy for, human nature. But of course in order to accept a deontological system of morality we do not have to be as joyless as the famous sage of Königsberg (Kant’s native city, where he spent his entire life). We can agree that other people have the same intrinsic value as we do, and yet we can still feel okay about our natural tendency toward self-congratulation for a deed well done. Moreover, we can agree that the categorical imperative is a generally good rule regardless of Kant’s own stricter interpretation of morality.
Deontological systems do run into some interesting problems, however, outside of Kant’s idiosyncratic brand of moralism. Indeed, we encountered one major issue for deontology when we examined the trolley dilemma. We saw that most people agree that it is justified to kill one person to save five by flipping a switch that causes the runaway trolley to change course. But in so doing, obviously we do treat the unfortunate victim as a means to an end (saving the other five). Regardless of how ethically praiseworthy that end is, we are in direct violation of the categorical imperative. This sort of problem arises for deontologists because deontology tends to be concerned with intentions (as opposed to consequences) and with universal rules, while sometimes good moral intentions are best served by not following universal rules (so-called situational ethics). For instance, Kant had a serious problem with the idea of lying and argued (correctly) that, if universalized, lying would be an unqualified disaster for society. Yet it is easy to come up with instances where lying is not only acceptable but the right thing to do—as when, in the classic example, a Nazi officer knocks at your door, asking whether you are hiding a Jewish fugitive, and you lie to him, saying that you are not. You can see how the idea of universal imperatives, as attractive as it is initially, quickly leads to significant moral dilemmas.
One way to resolve these problems, as we also saw in discussing the trolley dilemmas, is to turn to a second major school of thought in ethics: utilitarianism. Utilitarianism was originally developed by two influential British philosophers, Jeremy Bentham and John Stuart Mill. Their cardinal concept is the so-called principle of utility: an ethical action is one that furthers the greatest happiness for the greatest number. Subsequent philosophers have elaborated many variations of utilitarianism, producing, for instance, a “negative” version that essentially states that morality is about reducing suffering (as opposed to increasing happiness). This is the principle that Princeton’s Peter Singer extends to other animals as well as human beings, thus providing a serious philosophical underpinning for the animal welfare movement.
This approach to ethics is often referred to in the modern literature as “consequentialism” because it can be thought of as a way to evaluate moral choices in terms of the consequences of our actions (as opposed to the intentions underlying them); this is a radically different approach from deontology, as we have just seen. Consequentialism seems to be better able than deontology to handle situations like the classic trolley dilemma: we ought to flip the switch because the consequences of our action (saving five lives while sacrificing one) increase overall happiness (or at least decrease overall pain) over what would happen if we did nothing (one person survives but five die).
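For readers who like to see the criterion stated mechanically, here is a toy sketch of that consequentialist bookkeeping applied to the trolley case. It is my illustration, not a serious moral calculus: the numeric “utilities” are stipulated assumptions (one unit of lost well-being per death), and real utilitarian theories disagree about whether well-being can be quantified this way at all.

```python
# Toy consequentialist calculus for the trolley case: score each available
# action by its aggregate effect on well-being and pick the maximum.
# The numbers are stipulated purely for illustration (one unit of
# disutility per death), not a claim about how well-being is measured.

def total_utility(outcome):
    """Sum the (stipulated) well-being changes for everyone affected."""
    return sum(outcome.values())

actions = {
    "do nothing":  {"five on main track": -5, "one on side track": 0},
    "flip switch": {"five on main track": 0,  "one on side track": -1},
}

best = max(actions, key=lambda a: total_utility(actions[a]))
print(best)  # -> "flip switch": -1 beats -5 on this crude accounting
```

The point is only to expose the decision rule: maximize the aggregate outcome, in contrast to deontology’s injunction to follow the universalizable rule regardless of outcome.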
Still, consequentialism itself can be criticized on various grounds. There are two common objections to the idea, both of which present serious problems for utilitarians. The first is that it is not at all clear how far we need to foresee the consequences of our actions—or indeed, whether we are even capable of doing so. Suppose, for instance, that I live in a country where a horrible dictator is oppressing his people, and I decide that the only way out is a revolution. I manage to organize a clandestine operation that, truly motivated by the best ethical intentions, attempts a coup d’état. Alas, the revolution fails, and as a result, not only do thousands of my comrades die, but the dictator clamps down even more on my fellow citizens’ liberties, thus enormously decreasing everyone’s happiness and increasing their pain. Even though I did my best to do the right thing, the whole enterprise ended up being a disaster. A strict consequentialist could then argue that my decision to start the revolution was immoral. (You might have noticed by now that philosophers engage with some gusto in producing disturbing thought experiments. I wonder what a psychologist would make of that.)
The logical answer to this objection is that we are morally responsible only for the consequences we can reasonably foresee. That sounds pretty straightforward, until we realize that what we can foresee in part depends on how much we are capable of giving our actions (and their consequences) some reflection, as well as on how much reliable information we have concerning potentially very complex chains of events. For instance, let’s modify the revolution example: suppose that I was very careless in the whole enterprise and started the revolution out of pure youthful enthusiasm, dramatically overestimating the likelihood of success. Now, of course, I do carry more moral responsibility for the pain and suffering that ensued as a result of my actions. But then again, we often think of young and idealistic people as morally praiseworthy because they at least try to improve things. What actually happens is often beyond anyone’s direct control and comes down to what philosopher Thomas Nagel calls “moral luck.” This line of reasoning does not necessarily pose a fatal objection to consequentialism, but it highlights how difficult it can be to make a decision based on an evaluation of its possible range of consequences.
A second classical objection to utilitarianism is represented by the so-called transplant problem, a more malicious version of the trolley dilemma, if you will. Imagine that you are a surgeon at a local hospital and five injured people are brought into the emergency room. Each has a lethal lesion in one vital organ—the liver, heart, kidneys, pancreas, or lungs. (The only other vital organ in humans is the brain, and for now at least we don’t have the medical know-how to do a brain transplant—not to mention that such a procedure, if possible, would raise a panoply of philosophical questions in its own right!) If you are a consequentialist doctor, it would seem that you should seriously consider forcing a perfectly healthy person who happens to be standing nearby to undergo an operation so that you can transplant his vital organs and save the five lives. This would certainly increase the general degree of happiness, or decrease the general amount of pain, but most of us would unhesitatingly say that anyone who acted that way would be a monster who should be prosecuted to the fullest extent of the law. It seems, then, that there is something flawed about the principle of utility (though, predictably, there are some reasonable counter-objections that utilitarians can mount here).
If neither deontology nor consequentialism quite cuts it, is there perhaps a third option? As it happens, there is, and we encountered it already in Chapter 1: so-called virtue ethics. It was first proposed by Aristotle, and in an updated version it’s the third major modern contender in the field of ethical theories. The first thing we need to understand about virtue ethics is what Aristotle meant by “virtue.” To possess a virtue is to be a certain kind of person, to have a character trait that is morally valuable. For instance, a virtuous person would be honest, and that honesty would reflect not just a natural propensity but also the person’s thoughtful recognition that honesty and truth are to be valued; in other words, one is not virtuous in this sense by accident, but because one works at it. The second crucial aspect of virtue ethics is that, unlike deontology and consequentialism, it does not directly address the question of “what is the right thing to do?” but rather deals with the more fundamental issue of “how are we to live?”
As we have seen, for Aristotle a virtuous person can overcome akrasia (weakness of the will) in order to achieve eudaimonia (to flourish). According to virtue ethics, then, human beings need to steer themselves in the direction of virtuous behavior both because that is the right thing to do and because the very point of life is to live it in a eudaimonic way. Interestingly, however, Aristotle also thought that you have to be at least a bit lucky in order to be able to live a eudaimonic life: if you happen to suffer from an extremely crippling disease or live in dire external circumstances, you may not be able to flourish and your very character will be affected by such conditions. While many ethicists (Nagel being a notable exception) find the idea that luck has anything to do with morality rather bizarre, I think Aristotle got it exactly right in this case, demonstrating his deep understanding of the complexity of the human condition.
Just as in the other two cases, there are of course reasonable criticisms of virtue ethics. Perhaps the chief one is that the idea sounds good in general but is short on specific guidance about how to conduct our lives. There surely are several different ways for a human being to flourish, and Aristotle’s suggestion that we steer a middle course among extremes in order to be virtuous may be hard to apply because “the middle” is a large space. Consider, for instance, the example of courage, one of the Aristotelian virtues: too little of it makes a person a coward, while too much makes that person reckless. But this notion is unlikely to help us when we are trying to figure out whether we should risk our life in a specific circumstance.
Then again, virtue ethicists turn this objection into an advantage: we have seen with both deontology and consequentialism that they run into trouble precisely because they attempt to codify behaviors too rigidly, either according to a particular set of rules (deontology) or by following a simple overarching criterion (consequentialism). Real life is too complex for that, and ethical decisions are indeed difficult to make, which is why virtue ethics’ emphasis on character rather than actions or intentions may ultimately be more realistic.
Another objection often raised against virtue ethics is that the whole concept of akrasia is absurd because it does not make sense to say that I may decide to do something against my will (like not going to the gym when I really want to). This objection stems from an overly rationalistic view of humanity, however, and completely misses the psychological dimension of what it means to be human. If you truly do not know what it is like to have to battle the weakness of your will, you are a very peculiar human being indeed (and may be in for a shock when you read Chapters 9 and 10).
As is often the case in both science and philosophy, there is no ultimate answer to the question of how we should conduct our life. But that doesn’t mean that there is no answer, or that all answers are about the same. In particular—and despite the possible protestations of professional philosophers—we can put together a reasonable view of the ethical and meaningful life by combining elements of all three major moral theories, and the specific combination does not have to be the same for everybody. For instance, I am particularly sympathetic to virtue ethics because I find the idea of flourishing as a lifelong project attractive, and because I easily recognize my own akratic tendencies. But I am also aware of the power of Kant’s categorical imperative, which I interpret just a bit less strictly than that august philosopher. And finally, I also find much value in the consequentialist emphasis on our personal responsibility to make informed decisions because morality has a lot to do with the ramifications of our actions.
A professional moral philosopher will probably object to this project of constructing a morality menu on the grounds that some of the ethical concepts we have examined entail mutual contradictions, so that it is not possible to arrive at a coherent system of thought by combining the best of virtue ethics, deontology, and consequentialism. I am tempted to respond as the American poet Walt Whitman (1819–1892) famously did (in Song of Myself): “Do I contradict myself? Very well then I contradict myself, I am large, I contain multitudes.” There is some wisdom in Whitman’s retort. Although we will see in Chapters 14 and 15 how philosophers engage in a very useful type of reflective exercise to increase the internal coherence of their beliefs, there is something to be said for the possibility that human affairs are simply too messy to be interpreted through a rigid, formal, logical approach. Just as in applied science—in medical research, for instance—we sometimes have to make decisions that cannot wait for more data to be accumulated or for the perfect experimental protocol to be devised, so in practical philosophy we may have to live with the best that can be done instead of seeking an unattainable Platonic ideal. Nevertheless, in order to live an ethical and meaningful life we have to think about what we are doing and why, and such thinking is greatly helped by considering what the greatest philosophers of all time have had to say about the human condition. It is then still up to each and every one of us to decide what to make of it all.