I was driving around in the desert of the southwestern United States in the early ’90s with a few people who would end up being notable scientists. We were all pre-PhD research assistants working at Los Alamos National Laboratory. On a particularly long stretch of road, I got to thinking about morality. I thought about how sometimes I did things I knew were wrong. My clichéd revelation in the desert was that this was unjustified. I resolved then to never do anything immoral again.
I announced it to the car. My buddies laughed at me.
But I actually was different from that day on. It wasn’t that I was a menace before that day or anything, it’s just that I became much more intolerant of minor ethical violations that I had let slide before. I felt like I was really a better person.
But about fifteen years after that I learned more about what it really means to be good. It’s more than just not being bad.
So let’s get down to brass tacks. What is the best ethical theory?
As we have seen, we want a theory that matches pretty well with what we intuitively believe is right and wrong, maybe the moral intuitions we hold most dear, or are most sure of, but one that irons out some of the inconsistencies. One of the glaring inconsistencies we talked about was our moral intuition’s insensitivity to number. But when you’re going to try to optimize your morality, and be the best person you can be, this is something that cannot be ignored. In more ethical decisions than you might think, magnitudes matter.
When many of us think of morality, we tend to think of it in terms of simple rules: Don’t steal. Keep your promises. Be nice to the people around you. Don’t betray your friends. If only one rule applies, there’s no problem: you don’t have to deal with magnitudes. The problem with any moral rule set is that you will run into situations where the rules conflict.
Suppose your friend Andy has been spending too much money on going to the spa. It’s causing some tension with his wife, Lorraine, who is worried about the household finances. Andy can’t resist, though, and tells you over coffee that he’s going to spend the day at the spa and is going to leave his phone at home so Lorraine can’t reach him and find out where he is. “Promise me you won’t tell her where I went,” Andy asks you, and you promise.
Later, Lorraine calls you, frantic, because Andy needs a liver transplant, and a liver is available, but nobody can reach him. “Do you know where he is?”
Here we have conflicting morals: keep your promises, don’t betray your friend’s (Andy’s) trust, don’t betray your other friend’s (Lorraine’s) trust, help your friends when you can, don’t lie. How do you navigate situations like this? In this case, the answer is pretty obvious. Andy needing a new liver is a life-and-death situation, and the promise you made about not telling his wife where he went pales in comparison to it. The promise just isn’t as important as his opportunity to get a liver. Why is this decision so easy?
You come up with an answer by comparing magnitudes. How important is helping your friend in this instance? He needs a new liver; that’s pretty serious. How serious is this promise, and how much is at stake? You might not explicitly think about numbers when you make this judgment, but you can bet that your mind is comparing magnitudes of some kind. That is, your mind is using the informational equivalent of numbers, even if you’re unaware of what they are and experience them only as a strength of feeling or conviction.
In this case, I suspect most of us would break the promise to help Andy with his liver, because the liver transplant is so important, and hiding yet another spa trip from Lorraine is relatively unimportant. When values conflict, people weigh the importance of different factors.1
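Here is a minimal sketch of that comparison in Python, with made-up numbers just to show the shape of the reasoning; the particular values and their scale are arbitrary, and the point is only that once the stakes are expressed as magnitudes, the conflict resolves itself.

```python
# A toy model of resolving conflicting moral rules by comparing magnitudes.
# The stakes below are invented for illustration; the scale is arbitrary.

stakes = {
    "keep the promise (hide the spa trip)": 2,        # minor embarrassment, some marital tension
    "break the promise (Andy gets the liver)": 1000,  # life-and-death
}

# Choose whichever option carries the greater moral weight.
best_option = max(stakes, key=stakes.get)
print(f"Do this: {best_option}")
# -> Do this: break the promise (Andy gets the liver)
```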
As I mentioned at the beginning, this book is for optimizers. I’ve talked about how to optimize your productivity and your happiness. Now I’m going to tell you what science has to say about not merely how to be a good person, but how to be the most good person you can be.
Being as good as you can be means thinking big, and magnitudes matter even more when we think big. Suppose you are considering where to put a homeless shelter. There are two candidate locations. You have lots of information to inform this decision, including costs to your organization, the proximity of the location to other services street people might need, and the impacts of the shelter on property values. You might have moral rules, like helping people when you can, and not letting people get hurt, but how do you use these rules to come up with a decision?
Well, putting a shelter in either location will cause harm to the property owners in the area by lowering their property values. Also, putting the homeless shelter here rather than there will help some street people and not others. So it appears that placing the shelter in either location is technically immoral, if you’re thinking purely in terms of rules, as it’s harming people no matter what you do. And if you try to avoid doing anything bad by doing nothing, then you don’t help anyone at all, which (depending on your rule set) also breaks a moral rule.
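One way to reason in magnitudes rather than rules is to score each candidate location on the factors you care about and compare the totals. The sketch below is a toy version of that idea; the factors and numbers are all made up for illustration, not taken from any real siting decision.

```python
# Toy net-benefit comparison for two hypothetical shelter locations.
# Positive numbers are benefits, negative numbers are harms.
# All values are invented for illustration; the scale is arbitrary.

locations = {
    "Location A": {
        "people sheltered per night": +80,
        "proximity to other services": +30,
        "cost to the organization": -25,
        "hit to nearby property values": -20,
    },
    "Location B": {
        "people sheltered per night": +60,
        "proximity to other services": +10,
        "cost to the organization": -15,
        "hit to nearby property values": -10,
    },
}

for name, factors in locations.items():
    print(f"{name}: net benefit = {sum(factors.values())}")

best = max(locations, key=lambda name: sum(locations[name].values()))
print(f"Choose {best}")
```

Notice that “do nothing” could be scored as a third option with a net benefit of zero, which makes the problem with inaction explicit: it harms no one, but it helps no one either.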
When you’re thinking about how you act in your day-to-day life, it’s easier to think about rules that are or are not broken, rather than magnitudes of good and bad. This isn’t all bad. The “user interface” of morality has practical importance. If mores are complex, and hard to understand and remember, compliance will suffer. We use rules of thumb to guide us, because figuring out what’s right and wrong can be complicated.
This book tries to go one level deeper. In particular, when you think about what’s the right thing to do when large numbers of people are affected, you more or less have to think in terms of magnitudes to find the moral way forward.2
Perhaps a more striking example is the moral justification of having a police force and justice system that is allowed to use violence or restriction of freedom. When police detain someone, or physically harm them, or a justice system imprisons someone, society is harming some individuals (the suspects and criminals) with the intention of helping society. Presumably, incarcerating dangerous people helps the other people in society more than it hurts the criminals.3 In other words, you can’t even justify a police force with moral rules that don’t involve some kind of nuance regarding magnitude of helping and harming. People who claim that morality has nothing to do with magnitude are usually just trying to ignore the unexamined magnitudes in their heads that influence their moral judgments. We want these magnitudes out in the open so we can critique or change them.
Okay, when we want to maximize what is good or right, and minimize what is bad or wrong, how can we get more specific about what we need to maximize and minimize?
We’ll start with what we’re most certain is right and wrong, and what most people agree is morally relevant: helping people is good and hurting people is bad. I say this is the most certain because every culture has a moral like this (even if they define “people” as only being members of their in-group). Every well-thought-out moral theory includes something like this. Just about every person (with the possible exception of persons with psychopathy) has this moral intuition.4 When we try to step back from our cultural mores and get skeptical of any value that differs from culture to culture, what is left? What moral value would someone have to be stupid or crazy to deny? For a lot of people there’s only one answer: that things like happiness, pleasure, life satisfaction, utility, conscious good feelings, subjective well-being, preference fulfillment, etc., are good, and that harm, misery, pain, suffering, preference frustration, etc., are bad. Can we justify this moral stance? Most people cannot, but also feel they don’t need to. That helping others is good and hurting them is bad feels self-evident.5
I’ve been sloppy about what care and harm mean, so I’ll take a moment to be clearer. On this view, there is an inherent good in things like happiness, pleasure, life satisfaction, good feelings, not being hurt, and so on. Similarly, there is an inherent bad in feelings of misery, suffering, harm, preference frustration, pain, and so on. Technically, I mean positively and negatively valenced conscious mental states, and I’ll refer to them simply as “good feelings” and “bad feelings.” But know that they are shorthands for complex phenomena.
Can all morality boil down to good and bad feelings? This is probably the most important question in ethics.6 Some say no, there are other things that we care about that don’t have to be justified in terms of good and bad feelings. For them, these other moral things (perhaps knowledge, or beauty, or reparation) are also self-evident, and do not need to be justified in terms of care and harm.
Are they right? Perhaps. But acknowledging the existence of these other things is much easier than figuring out what those other things actually are. As we have seen, people disagree about many of their moral intuitions. People throughout history have held different opinions about the rightness and wrongness of abortion, infanticide, euthanasia, slavery, chastity, caste systems, cannibalism, eating meat, how many wives someone can have, the importance of religion, the importance of etiquette, and whether wearing a hat is respectful or disrespectful to religion.7 If Elizabeth thinks that scolding other people’s children is inherently wrong, and not just instrumentally wrong, and you disagree, there’s not much to talk about. Because the moral is supposed to be self-evident, it can’t have, or doesn’t need, justification in terms of anything more fundamental, such as helping and hurting.8 With a zillion different cultures, with so many fundamental disagreements about rights and wrongs, and a zillion people in those cultures, each with their own idiosyncratic moral preferences, how can we be confident about what additional self-evident rights and wrongs should be endorsed?
I’m going to focus on good and bad feelings in this book, treating them as the most important moral factor, which most can agree on, even if they think there are other important factors besides. There are several reasons for this.
First, we have a high certainty that good and bad feelings are important moral ideas, and we have a high uncertainty about any of the others, which seem to be more subject to individual and cultural differences. It is telling that when diverse cultures come together to hammer out some kind of ethical agreement, their common ground ends up being weighing harms against benefits.9 It is, perhaps, the one ethical thing that just about everybody can agree on.
Second, many moral ideas we have can be justified by appeal to good and bad feelings. Why is it wrong to lie? Because it tends to hurt people. Why is it wrong to kill, or steal, or slap somebody? Because of the bad feelings that are likely to result. Someone might argue that slavery is bad (or good) based on how much people are helped or hurt. Similarly with helping: it’s good to give to the needy, to help out a friend, to give a gift, to comfort someone when they’re crying, because those actions help people. This trait is not shared by many other candidate values: not everything can be justified in terms of, say, beauty or knowledge.
Third, as we’ve learned in the section on moral psychology, all of us have the ability to think in this way. In this sense, we all agree with the concept of weighing the bad against the good, and all use it sometimes. Although we also all have the ability to think in terms of rules, our disagreement on what those rules should be (see reason one) makes this moot. There is little profit in agreeing that we should reason in terms of moral rules if we cannot agree on what those rules actually are.
Fourth, rule-based ethical theories don’t tend to have much to say about maximizing goodness. I could write a book about how to minimize your chances of lying, stealing, and murdering, but that won’t do anything for the people who die of preventable disease or the animals living on factory farms, and it won’t prevent the catastrophic effects of climate change. Popular versions of rule-based ethics do not have explicit guidelines for decision-making on a grand scale. They are simply not up to the task, which is why large organizations tend to use some form of cost-benefit analysis, weighing the good against the bad, rather than relying on rule-following without consideration of magnitudes. Can you imagine trying to make decisions for an entire country using only a few simple moral rules? However, if you are a die-hard deontologist, I recommend reading Peter Unger’s Living High and Letting Die after you finish this book to see why deontology will bring you to many of the same conclusions I reach here.
Fifth, I’m personally convinced by the philosophical arguments in its favor. When you understand how our moral psychology works (which, if you’re reading this book front to back, you do), it looks like many of the objections to weighing benefits and harms stem from intuitions from the old brain, designed by evolution to propagate our genes rather than to do the right thing or make the world a better place.10 To my eyes, some of the “advantages” of rule-based ethics sound downright selfish.11 Do you really want to trust a blind process that maximizes nothing but gene propagation as an arbiter on questions of morality? As for this reason, there are people smarter than I am on both sides of the issue, so feel free to take it with a grain of salt.
But what I really want to communicate in this book is a way of thinking about ethics. When you’re trying to optimize your goodness, you don’t really have a choice but to think in terms of magnitudes. If you want to use more than care and harm, or you interpret care and harm in a way different from good or bad feelings, be my guest. But I encourage you to think in terms of magnitudes, not just good and bad as an all-or-nothing affair. That is, put numbers and weights to all you think is ethically relevant, and reason from there.
Can Science Tell Us Anything About Morality?
Thinking in magnitudes is not the only thing I want to encourage. Science and rationality are relevant, too, when it comes to right and wrong. I’ve talked about productivity, what makes people happy, and what people think is right and wrong, all through a scientific lens whenever possible. Now I’m going to talk about what’s actually right and wrong. Can science even speak to that?
It’s been said that science is, ultimately, without value. That science can tell us what is, but not what should be. That it is things like philosophy and religion that describe value. Suppose Pat is sick, and Chris has medicine. Science can tell us what is likely to happen if Chris gives Pat the medicine, but what scientific experiment could you possibly run to show that Chris should give it?
The reason for this skepticism is that in ethics, if you keep asking why, why, why, you end up with some assumptions about what is right and wrong. Assumptions that can be supported with neither science nor argument. You have to start somewhere. These assumptions, sometimes called faiths, are thought to be antithetical to the scientific enterprise, which is based on evidence and reason.
However, anybody who thinks that science does not also depend on assumptions is seriously mistaken. I’ll talk about a scientific experiment in a very simple form to make this point. Suppose you have some new medicine. We’ll call it “curitall.” You want to know if it will cure a sickness, which we’ll call the heebie-jeebies. You get a bunch of people with the heebie-jeebies and give some of them the medicine and others a placebo. Suppose that the overwhelming result is that the patients who took curitall got better much faster than the placebo group. We conclude that curitall effectively treats the heebie-jeebies.
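To make the logic of that comparison concrete, here is a toy version in Python. The recovery times are invented, and real trials involve randomization, blinding, and proper statistics; treat this purely as a sketch of the reasoning.

```python
# Toy comparison of recovery times in a hypothetical curitall trial.
# All numbers are invented for illustration.
from statistics import mean

# Days until recovery for each patient (hypothetical data).
curitall_group = [3, 4, 2, 5, 3, 4, 3, 2]
placebo_group = [7, 8, 6, 9, 7, 8, 10, 7]

print(f"Average recovery with curitall: {mean(curitall_group):.1f} days")
print(f"Average recovery with placebo:  {mean(placebo_group):.1f} days")

# The inductive leap: we assume unobserved patients will respond
# the way the observed ones did. That assumption is exactly what
# the next few paragraphs put under the microscope.
if mean(curitall_group) < mean(placebo_group):
    print("In this sample, the curitall group recovered faster.")
```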
But how do we reach this conclusion? Why are we justified in thinking that, just because curitall seemed to cure the heebie-jeebies in this experiment, anyone else with it will react similarly? What do the trials in the experiment have to do with unobserved events, perhaps far away, perhaps in the past or future, on a completely distinct group of people?
We are justified because we believe that unobserved events will be similar to observed events. All of experimental science is based on this belief. Without it, we cannot generalize from what happened in the experiment to anything else, ever. We all believe that the way things work over here will work the same over there, and that the way things work now, or in the past, will work that way in the future, and worked that way even further in the past. Without this belief, experimental conclusions make no sense.
Is this belief justified? Or is it just an assumption? You might think, well, science has been going on for hundreds of years, using this belief, and it really works. When Galileo did his experiments with gravity, showing that two objects of different weights fall at the same rate, people who tried it later got the same result. There are hundreds, thousands of examples of how what worked in the experiment worked later. Isn’t that evidence that this belief, that unobserved events will work the same way as observed events, is justified?
There’s a problem with this, though: the argument is circular, which means that to reach the conclusion, you have to assume the conclusion is true already. How? Because to say that unobserved experiments will resemble observed experiments because they have in the past requires us to assume that the future will resemble the past, which is what we’re trying to find evidence for in the first place. If you don’t accept this to begin with, the argument falls apart. This was articulated by my man David Hume back in the eighteenth century, and it’s called “the problem of induction.”
What’s the way out of this? We just assume it’s true, and don’t try to justify it. And what is meant by assumption here is that we believe it without any rational argument or evidence. Sound familiar?
When we talk about ethics, we end up at core beliefs, or faiths, that are not justified by science or reason. They might be “don’t cause unnecessary suffering” or “strive to make the world more like what God wants it to be.” When we talk about science, we have assumptions and faiths, too: induction, the fundamentals of logic, and so on.
I’m not going to argue here that science is just another religion—it’s not, because one of the great things it has over religion is that it tries to keep these faiths to a minimum, with the principle of parsimony, or Occam’s razor. I want to argue that to the extent that we can use evidence and reason in science, we can also use them in ethics. Just because the foundations of ethics and science are ultimately assumptions doesn’t mean we can’t use logic, reason, and evidence to determine the answers to many—most—questions.
If we agree on some set of assumptions, such as that suffering is bad and that induction is okay, we can then throw the whole arsenal of reason, evidence, science, economics, and analytic philosophy at interesting problems that are relevant to how to live our lives and run the world.
When we ask about how right or wrong an action is, it’s almost always a scientific question that reason and evidence can speak to.
Let’s take a look at where we are so far. We’re going to consider how to be as good as you can be, where being good means maximizing good feelings and minimizing bad feelings in the world. Further, we’re going to use science and reason to try to figure out what are the best things to do.
The Limits of Empathy
As discussed in the moral psychology section, judgments based on emotions tend to be more deontological in nature (based on rights and rules), and judgments based on reasoning tend to be more utilitarian in nature. Psychopaths with low anxiety, who have social and emotional deficits, tend to be more utilitarian when judging others’ actions, and people with disorders that reduce awareness of their own emotions do, too. People who say they often rely on their gut tend to make decisions that are less utilitarian. When you ask someone to make moral judgments while doing another task at the same time (putting their frontal areas under cognitive load), utilitarian judgments are affected, but deontological ones are not.12
If we want to look at a utilitarian approach to bettering the world, then this suggests that moral judgments based on emotion are going to generally be worse than those made through reasoning. This makes the popular idea that empathy is the moral cure-all a very shaky position. If we want to be as good as we can be, we need to use thinking, not emotion.