11 Deciding among the Groups—Rights versus Welfare

11.1 Introduction

In chapter 9, I argued that a world with life extension has more net welfare than a world without it. In chapter 10, I argued that maximizing welfare in that way violates the rights of the Have-nots and Will-nots not to be harmed, though it violates no other rights. In other words, maximizing welfare conflicts with respecting their right against harm. How should we resolve that conflict?

In general, we should not maximize welfare at the expense of rights. However, in some circumstances, welfare can outweigh rights. I will propose a test to guide our moral judgment about when those circumstances exist: a gain in welfare is big enough to outweigh rights violations when we would be willing to risk that harm on a chance of getting that benefit if we were making that choice for ourselves. If the welfare of Haves outweighs the rights of Have-nots and Will-nots, then justice requires Promotion. If the welfare of Haves does not outweigh the rights that will be violated under Promotion, then justice requires Inhibition. In this chapter, I’ll argue that the welfare of Haves does outweigh the rights of the other two groups, so justice requires Promotion. This is the crux chapter in my long argument in favor of developing life extension.

11.2 How to weigh rights against welfare

As we saw in section 10.2, when rights conflict with maximizing welfare, we usually think we should respect the right even though that means producing less welfare. To reuse an earlier example, we should not kill one person, violating his right against harm, in order to use his organs to save several others who will die without those organs, thereby maximizing welfare (the welfare gained by saving their lives exceeds the welfare lost by ending his life). Ronald Dworkin characterizes this aspect of rights by describing rights as trump cards that can outweigh other moral objectives that are not rights, like maximizing net welfare or making society more equal in some respect, just as a trump card beats any card from another suit in euchre.1 When increasing welfare or making things more equal can be achieved only by violating some right, then we may not increase welfare or make things more equal in that way or to that extent.

And as noted earlier, rights protect interests. Sometimes the interest in question is the right-holder’s own welfare. This means that when a right protecting the holder’s welfare conflicts with maximizing the welfare of others, we have a conflict between one person’s welfare and the welfare of others. Thus, when a right that protects welfare conflicts with maximizing welfare, what happens is this: the right gives the right-holder’s welfare greater weight than the welfare it conflicts with. For example, in the organ case, the healthy person’s right against harm gives his welfare greater weight than the combined welfare of the five people his organs can save. If we were merely weighing welfare loss against welfare gain, then rights that protect welfare interests would be empty—they would protect nothing. When we give someone’s welfare the protection of a right, as we do when we say there is a right against harm, we are giving that person’s welfare greater weight—stronger protection—than the amount of welfare itself would call for if we were merely weighing net welfare. We are giving that person a trump card.

However, we don’t think that we should always respect a right when it conflicts with maximizing welfare. There are exceptional cases where most of us would say we should violate some right in order to maximize welfare. Let’s revisit the case discussed in section 10.2: Someone has a genetic mutation that makes her white blood cells extremely effective at fighting infections, giving her an abnormally strong immune system—far better than anyone has ever had before. We need to insert a needle to draw a small blood sample so we can clone her white blood cells and develop a serum that gives a similar boost in immunity to everyone else on earth, thereby saving hundreds of millions of lives (now and in the future). She hates needles. She refuses. She reminds us that she has a right against harm. Draw your own conclusions, but my judgment is that she should be forced to give another blood sample, violating her right against harm and her right to self-determination, thereby giving the rest of the human race a spectacular boost in welfare. The harm of being poked with a needle for a couple of minutes is trivial compared to the staggering amount of harm her blood will help us prevent.

So we’re not absolutists about rights; we don’t think rights trump welfare in all circumstances. If the burden the right-holder suffers when his right is violated is sufficiently small and the welfare at stake is sufficiently great, violating that right is the right thing to do. Notice that this is consistent with what we believe about positive duties. A positive duty is a duty to actively do something for someone (rather than merely a duty to refrain from doing something to them). Some philosophers say you have a positive duty when the cost you suffer by helping is sufficiently small compared to the amount of welfare at stake for the person you help (just as welfare might outweigh a right when the welfare is great enough relative to the right).

This is also consistent with what most philosophers say about redistribution to benefit those who are worse off: redistribution from those who are better off to those who are worse off can be justified by improving the condition of those who are worse off even if doing so reduces the total amount of good in that society, but it cannot be justified by merely marginal improvements for some at a dramatic cost to those who are better off.2 For example, many of us might say that justice requires reducing the assets and income of the top third of society by 20 percent in order to improve the lot of the bottom third by 10 percent, but few of us would say that justice requires taking half the top third’s wealth in order to improve the bottom third by merely 1 percent.

Because the right against harm gives the right-holder’s welfare extra weight, that right can be outweighed by an amount of welfare large enough to exceed the extra, weighted amount of welfare the right protects.3 To put this another way, we tend to think that morality can require harming someone to get a greater amount of welfare if the gain in welfare is very, very large and the reduction in welfare for the right-holder is much, much smaller.

How do we tell when a given increase in welfare outweighs a right against harm? Philosophers speak of “weighing” or “balancing” duties, rights, and welfare in these situations, but the process of weighing or balancing is not an algorithm—talking this way is just a way of saying that you must use your judgment about which action is right when you must choose between protecting welfare and respecting a right. Consider the scenario where we must decide whether to take a blood sample by force from a woman who hates needles. We consider the situation and reach a judgment about what is right to do. This is what we mean by “weighing.”

But what do we really do when we engage in such weighing? There is a large philosophical literature on this, but the short answer goes something like this: Gather as much information as you can, make sure you don’t have a personal interest in the outcome one way or the other (this isn’t always possible, of course), mull things over for a long time, consult with others whose judgment you respect, and then use your best judgment and see which answer “feels” right to you. In the end, that’s all you can do.

However, that merely tells us what the experience of forming such judgments is like. We want to know more: What makes such a judgment correct or incorrect? There are two questions here, and it’s important not to confuse them. First, there’s an epistemic question about how we can know that we’ve reached the correct judgment. That it “feels” right isn’t a guarantee that we came to the right decision. Unfortunately, there is no way to be certain about this. Be as careful as you can: think long and hard, consult people whose judgment you respect, try to set aside your personal interest, and get as fully informed as possible; that’s the best you can do.

However, the second question is not an epistemic question about how we know our judgment is correct; it’s a metaphysical question about what makes a judgment correct. In other words, it’s a question about what kinds of facts or states of affairs a correct judgment detects. Suppose I’m watching a horse race, and I think I see horse A come across the finish line a nose ahead of horse B. My judgment is correct if some part of horse A’s body extended farther forward than any part of horse B’s body. What plays the role of the horse’s nose in judgments about when welfare outweighs a right? One way to approach this question is to ask what rule or principle such judgments seem to fit. It may be that there is some pattern to our judgments about when welfare outweighs a right. If so, perhaps we can state that pattern as a principle. We might be following some principle when we form such judgments even if we’re not aware of the principle and can’t articulate it. What principle might we be following when we decide whether some benefit to one person or group is large enough to justify some harm to another person or group?

I think this is the principle that best represents the judgments we make in such cases:

The gambling test: A gain in welfare is big enough to outweigh rights violations when we would be willing to risk that harm on a chance of getting that benefit if we were making that choice for ourselves and we didn’t know our identity or position in society—and hence didn’t know whether we would be a beneficiary or a victim of that harm.

When you don’t know your identity or position in society, you’re under a partial veil of ignorance—a sort of imaginary amnesia about your own identity. Obviously this never happens in real life, but that’s all right; the test concerns what you would decide if that veil of ignorance were real.

The veil of ignorance falls within a long tradition in ethics, appearing explicitly or implicitly in Hume’s “judicious spectator,” Kant’s categorical imperative, Adam Smith’s “impartial spectator,” Rousseau’s general will, Sidgwick’s “point of view of the universe,” and Harsanyi’s impartial observer. Approaches that use something like a veil of ignorance are said to model sound moral reasoning by suppressing any awareness of one’s own interest and of morally irrelevant factors, such as race or gender. That’s the argument for treating judgments we make under a veil of ignorance as credible: if we would choose some arrangement under the veil of ignorance, then that arrangement is just, for it’s consistent with impartiality concerning morally irrelevant factors like race or gender. The best-known version of a veil of ignorance test comes from John Rawls, who used it to determine which social structures are just. In Rawls’s version, people are choosing principles of justice to govern the basic structure of society. They don’t know their identity, race or ethnicity, gender, age, talents and endowments, or position in society. Unlike Rawls’s version, the one I am using is a partial veil of ignorance, for in the gambling test, people do have some information: they know the percentages of people who will get access to life extension and who will not. In other words, they know the odds of being a Have or a Have-not. However, they don’t know whether they are Haves or Have-nots, nor do they know which generation they are in.4

Notice that the gambling test does not specify the rule of choice it would be rational to use under those conditions. In other words, the test doesn’t give you a rule to follow when deciding whether you would take some gamble or other. There are several such rules, and some of them are more conservative than others. At one end of the spectrum of risk aversion, we might follow a rule that tells us to make the choice with the greatest expected utility (e.g., welfare). At the other end of the spectrum, the “maximin” rule tells us to make the choice whose worst possible outcome is better than the worst possible outcome of any other choice. These are only two of many possible rules of choice. One approach to the gambling test is to first decide which rule is the most rational rule of choice under the conditions of the gambling test, with its partial veil of ignorance, and then apply the rule and deduce some decision from it.
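For readers who want those two rules stated more precisely, here is the standard decision-theoretic formulation; this is a gloss I am adding for clarity, not something the gambling test itself presupposes. Suppose an option $a$ can lead to outcomes $o_1(a), \dots, o_n(a)$ with probabilities $p_1(a), \dots, p_n(a)$, and let $u$ measure the welfare of an outcome. Then:

\[
\text{Expected-utility rule: choose the } a \text{ that maximizes } \sum_{i=1}^{n} p_i(a)\, u\!\left(o_i(a)\right).
\]

\[
\text{Maximin rule: choose the } a \text{ that maximizes } \min_{i}\, u\!\left(o_i(a)\right), \text{ ignoring the probabilities.}
\]

The maximin rule attends only to each option’s worst possible outcome, which is what makes it maximally risk averse; the expected-utility rule weights every outcome by its probability.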

However, any argument for a particular rule of choice under the conditions of the gambling test will have to include showing that the rule is consistent with our judgments about what is rational in a given scenario. If, for example, the maximin rule turns out to generate results that don’t seem rational to us, then we have reason to think the maximin rule is the wrong rule of choice for that scenario, regardless of a priori arguments that it is the most rational rule of choice. In other words, if the rule conflicts with our judgments about which choices are rational, then we should go with our judgments and set aside that rule. Consistency with such judgments is the best argument for a rule of choice. I believe the correct rule is the one that matches our judgments about when to take the gamble, and to discern that rule, we need those judgments first. Moreover, once we have the judgments, we don’t really need to identify the rule. (It would be interesting to do so, but not necessary for our purposes.) Besides, such rules are most useful when we need to make our moral judgments quickly. Big questions like the ones addressed in this chapter deserve more time and reflection. It’s better to approach them directly rather than letting a rule guide our judgment.

However, we do need an argument that the gambling test itself is appropriate for determining when a given amount of welfare outweighs some set of rights violations. There are three such arguments. We just saw the first one: the gambling test, like other versions of the impartial spectator/veil of ignorance tradition, ensures the kind of impartiality that is part of justice and models moral reasoning to some extent. Therefore, judgments reached under the veil of ignorance are valid moral judgments.

Second, when we impose a policy that treats one group worse as the price of increasing the welfare of another group, we are treating both groups as if they took a gamble and this was the result. This is just if and only if it is a gamble they would take (or would if they were ideally rational, choosing under suitably idealized conditions). This argument for the gambling test is analogous to one way of understanding social contract arguments: we can assess the justice or injustice of a social arrangement by asking whether it is an arrangement people would agree to if they had a choice. Similarly, we can ask whether imposing a policy with different implications for different groups is just by asking whether people would agree to that arrangement before knowing which group they belong to. The gambling test models that choice.

The third argument is that the gambling test fits our judgments about cases where we conclude that welfare has outweighed some rights violation, such as the needle/immune system case—or at least, I hope to show this. If I succeed, then I will have provided further support for the gambling test. I’ll try to do this in sections 11.3 and 11.4, where I apply the test to Have-nots and Will-nots.

Now imagine a society where everyone has roughly the same level of welfare as everyone else. Suppose that you’re a member of that society, and you’re deliberating about a policy that would give 1 percent of the population five times as much welfare as they had before but harm the other 99 percent by reducing their welfare by 5 percent per person. (These figures are arbitrary; I use them here just for illustration.) Because you are under a partial veil of ignorance, you know the percentages of the two groups and the amount of harm or benefit they receive, but you don’t know your own identity or position in society, so you must assume that your odds of being among those who benefit are equal to their percentage of the population: 1 percent.

Now for the crucial step. To determine whether that increase in welfare outweighs the rights of the other 99 percent not to be harmed, ask this question: Would you (or better yet, some hypothetical rational, reasonable person), under that veil of ignorance, be willing to risk a 99 percent chance of having your own welfare reduced by 5 percent on a 1 percent chance of having your welfare increased five times over? If you would take that gamble, then the net welfare outweighs the right against harm in that situation. If you would not, then it does not.
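Just to make the numbers concrete (the gambling test asks for a judgment, not a calculation, so this is illustration only), here is the arithmetic under the risk-neutral expected-utility rule, with each person’s baseline lifetime welfare normalized to 1:

\[
0.01 \times 5 + 0.99 \times 0.95 = 0.05 + 0.9405 \approx 0.99 < 1 .
\]

On that rule the gamble comes out just barely worse than the status quo, which coheres with the next paragraph’s sense that this example is a close call; anyone using a more risk-averse rule would reject it even more readily.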

This particular example may be a close call. I probably would not take that gamble, but I can imagine some people taking it. People might disagree about whether the gamble is worthwhile. Of course, we might say that all hypothetical rational, reasonable people will make the same choice, but if they don’t, then perhaps this is simply a case where the truth is somewhat vague or indeterminate—we are in a gray area where welfare might or might not outweigh some rights. I won’t try to settle that question here; if moral truth is indeterminate in that way, then this is a problem for any moral analysis and not just mine. If it’s not, then my discussion is no more troubled by this than any other approach to balancing rights against welfare.

How dramatic a rights violation does this analysis permit? Could we, for example, justify slavery using the gambling test? Let’s imagine a case: Suppose that if we enslave 1 human out of every 100, the other 99 people gain a huge increase in welfare. On those odds, people might indeed take that gamble. However, is this a realistic scenario? How would enslaving 1 percent of the population produce such a dramatic gain for the other 99 percent? All we thereby achieve is getting one person’s labor for free and making that person do things he or she doesn’t want to do. That typically doesn’t produce a huge gain in welfare for 99 other people. Slavery doesn’t generate dramatic gains in welfare for others unless the percentage of slaves is much higher. However, once the percentage of slaves is much higher, the odds of becoming one are much higher too, and the gamble looks a lot less attractive.

We can imagine outlandish cases where severe, dramatic injustice to one person results in vast amounts of welfare for everyone else, but such cases don’t really support the objection that the gambling test is wrong because, under outlandish conditions, it justifies slavery. As R. M. Hare argues when he defends utilitarianism against the objection that it justifies slavery, the wrongness of slavery, like the wrongness of anything else, must be shown in the world as it actually is.5 Our moral principles, including those that express rights, are devised for the kinds of situations and human nature we actually encounter. When we consider outlandish cases like this, we can decide that under those outlandish conditions, slavery really would be justified and conclude that it seems wrong only because we are used to more normal cases where it really is wrong. In that event, the slavery cases that pass the gambling test don’t support an objection to that test, for the moral objection would not survive in that outlandish world—we merely mistakenly think it would. Consider, after all, the range of rights violations many people already consider acceptable: unintended but foreseeable civilian casualties from military bombing raids in a just war, quarantining people with a lethally contagious disease, or the level of risk we accept in product safety. People have a right not to be killed, and we realize that some people will be killed if we allow private automobiles to be used instead of forcing everyone to use mass transit. We tolerate those deaths because we realize that the odds of any one of us being killed in a car crash are pretty low, and the convenience and benefit of having private automobiles are thought to be great enough to justify those deaths. And is slavery really worse than death? Fortunately, plausible real-world cases where enslaving a tiny minority benefits everyone else substantially are extremely scarce, and life extension will not make them more common.

11.3 Weighing Have-not rights against welfare

Now let’s use the gambling test to see whether the welfare of Haves is likely to outweigh the rights of Have-nots and Will-nots. We will consider the Have-nots in this section and the Will-nots in the next section. The right in question is the right against harm. As we saw in previous chapters, the possible harms to Have-nots include distress, increased death burdens, Malthusian harms, entrenched elders and opportunity drought, a slower rate of innovation and social change, and challenges to existing arrangements for pensions and elder care. However, as we saw in section 9.5, if life extension is possible at all, it’s likely that at some point in the future, the world will be rich enough and the technology cheap enough for everyone to have access to life extension. Presumably that will remain the case from that point on, more or less indefinitely. Thus, in the vast majority of future generations, everyone who wants life extension will be able to get it. In effect, everyone will be a Have. (Again, I’ll consider Will-nots in the next section.) Therefore, the Have-nots are a very small percentage of the entire future human race. Finally, under the partial veil of ignorance used in the gambling test, the parties don’t know what generation they belong to—that is, where they exist in time. They know that they live in some century anywhere from the advent of life extension into the far, far future, but they don’t know when.

So suppose you must choose whether to live in a world where life extension is developed at some time or other, and you don’t know which century you’ll live in. Because the vast majority of generations in the future will consist of nothing but Haves, there’s a very high probability that you’ll be a Have. When we ask whether justice permits us to violate the rights of some minority in order to increase or protect the welfare of the majority, it doesn’t matter morally whether the two groups exist at the same time. We might, for example, have to decide whether to impose some harm on future generations in order to benefit those living now, or vice versa. Even though the generations don’t exist at the same time, the weighing of rights and welfare works the same way. All that matters is the size and importance of the interests involved. In this case, the Have-nots are in the majority in earlier generations but in the minority over the entire future span we are considering. In order to decide whether the welfare of future generations outweighs the interests of Have-nots in the near future, we must consider all those generations, and that requires a thought-experiment where the gambler does not know which generation she belongs to.

For purposes of this thought-experiment, let’s suppose that Haves make up 99 percent of the entire future human race and Have-nots 1 percent. (If we consider the entire future history of the human race, the percentage of Have-nots would probably be even lower, but 1 percent keeps the math simple.) Let’s further suppose that I was wrong in chapters 2 and 3, and extended life has some serious drawbacks, so that each Have suffers a 25 percent reduction in welfare in each year of extended life, just to give our critics the benefit of the doubt once again. Finally, let’s stick with the assumption that each Have-not suffers a 50 percent reduction in welfare. You are choosing whether to risk a 1 percent chance of cutting your lifetime welfare in half in hopes of a 99 percent chance of multiplying your lifetime welfare roughly 8.8 times (1,000 years of extended life, reduced by 25 percent and then divided by a normal 85-year life span).
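To spell out the arithmetic behind those figures (these are just the illustrative assumptions stated above, with an 85-year life at full welfare normalized to 1):

\[
\text{Have: } \frac{1{,}000 \times 0.75}{85} \approx 8.8, \qquad \text{Have-not: } 1 \times 0.5 = 0.5 .
\]

So the gamble offers a 99 percent chance of roughly 8.8 times your baseline lifetime welfare against a 1 percent chance of half of it; on the risk-neutral expected-utility rule that comes to about $0.99 \times 8.8 + 0.01 \times 0.5 \approx 8.7$ times the baseline, though the gambling test leaves it to your judgment whether to be more risk averse than that.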

Would a rational agent take that gamble? Most of us would, I suspect.6 If you’re not sure, then consider this case for comparison: Suppose you’re suffering from a fatal disease that can be cured with a drug that speeds up the dying process in 1 percent of those who take it but restores normal life span in the rest. In other words, it doesn’t lengthen life beyond a normal life span; it restores a normal life span. If you have that disease, would you turn down that drug, thereby forgoing the rest of your life, just to avoid a 1 percent chance of dying from that disease even faster? Very few of us would.

Now alter the case: imagine that the drug does not restore your normal life span, but—as an unavoidable and unintended side effect—increases your life span by several times. Would the fact that the drug lengthens your life be a reason to be more fearful of that 1 percent chance of speeding up your disease than you are when merely restoring normal life span is the alternative? Surely not. (Quite the opposite, in fact.) You might reject this if you consider extended life so undesirable that given a choice between a life span shorter than normal and a life span longer than normal, you would choose a shorter-than-normal life span. However, if my arguments about the desirability of extended life are sound, very few people would make that choice.

Once we take the long view, it appears that the welfare gained by Haves really does outweigh the rights violations of Have-nots, even if we assume that Forced Choice is not imposed. Of course, if we assume that Forced Choice is imposed, then the Malthusian consequences are prevented and the total harm is substantially reduced. In that case, the gamble looks even better.

Objection: If the invention of life extension is inevitable and Prohibition is impossible, why does it matter whether we follow Promotion or Inhibition?

The gambles considered so far involved two alternatives. One alternative was some version of world B, where life extension is developed. The other alternative was world A, where life extension is never developed at all. However, if I’m right and life extension is possible, it will be developed sooner or later. This fact has a very important consequence, for it means that our choice is not between a world with life extension and a world without it but between Promotion, where life extension is developed sooner, and Inhibition, where life extension is developed later.

This fact gives rise to an objection. The objection begins with the fact that there are three time periods to consider. First, there is the period before life extension is developed, a period free of any injustice related to the development of life extension.

Second, there is the period when life extension has recently been developed, and the Have-nots greatly outnumber the Haves. Let’s arbitrarily suppose that period lasts for 50 years (it doesn’t matter exactly how long it is). Let’s also suppose that life extension technology is in its infancy; it confers only 50 extra years of life. Finally, let’s suppose that only 5 percent of the human race can get life extension and that the harms caused by making life extension available to that 5 percent reduce everyone else’s welfare by 30 percent.

Would you run a 95 percent risk of reducing your welfare by 30 percent in exchange for a 5 percent chance of increasing your welfare by the equivalent of 50 extra years of quality life? Probably not. At best, it’s a close call for many of us. We can tweak the figures back and forth, but unless the harm to Have-nots is trivial, this looks like a bad gamble. That harm may be trivial, but I’ll give critics of life extension the benefit of the doubt and assume that the harms are significant.
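Again purely for illustration, and on two assumptions of my own (the same normalization of an 85-year life at full welfare to 1, and the supposition that the 50 extra years come at full welfare), the risk-neutral arithmetic is:

\[
0.05 \times \frac{135}{85} + 0.95 \times 0.7 \approx 0.08 + 0.67 \approx 0.74 < 1 .
\]

On those numbers even a risk-neutral chooser rejects the gamble, which is why the second period, considered on its own, looks like a bad bet.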

But there’s a third period to consider: the vast expanse of time after the first 50 years, when everyone is a Have. Under Promotion, the first period is shorter and the second period is still 50 years, but that 50-year period begins sooner and ends sooner. Under Inhibition, the first period is longer and the second period (the same 50 years) begins later and ends later. All else being equal, the degree of injustice and the number of people affected during the second period is the same under both policies, so neither involves any more injustice than the other. The difference, in short, is that the 50-year period occurs earlier under Promotion and later under Inhibition. (The same is true if we suppose that it takes longer than 50 years to make life extension available to everyone who wants it; the second period is longer, but that makes no difference when we choose between the policies.)

The objection, then, is this: If the development of life extension is inevitable, it doesn’t matter morally which policy we follow. There’s nothing wrong with Inhibition and, in particular, nothing wrong with Inhibition through neglect—the funding policy we have right now. Of course, this doesn’t mean that Inhibition is more just than Promotion, but it does mean that Inhibition is no less just than Promotion. The objection shows that it doesn’t matter whether we follow Promotion or Inhibition.

However, this objection fails for two reasons. The first reason is that the objection assumes the number of years in which everyone is a Have is infinite. If the human race were going to last forever, then starting later would not result in fewer Haves receiving life extension; it would merely start that infinite series later on. If the third period (when everyone has access to life extension) is infinitely long, then both futures contain the same amount of welfare (an infinite amount). However, it’s not likely that the future of the human race is infinite. It’s much more likely that our species will come to an end at some point, however many thousands or millions of millennia from now. (Science tells us that eventually the universe will either collapse back into a dense ball resembling the one that turned into the Big Bang or expand forever and suffer a “heat death,” where all temperatures fall to an extremely low level. Either way, we can’t avoid extinction forever.) If the future of the human race is not infinitely long, then starting life extension sooner provides life extension to Haves that much sooner and thereby provides it to that many more Haves. Even if we can’t know for sure that the human race will come to an end one day, it is far more likely that it will and that probability justifies Promotion. Promotion provides extended life to a larger number of Haves, and it does not harm a larger number of Have-nots; therefore, Promotion is more justified than Inhibition.

The second reason the objection fails is that it shows, at most, that there is no reason to prefer Promotion over Inhibition—or vice versa. It doesn’t show that there’s a reason to prefer Inhibition or to oppose life extension generally. Therefore, even if the first argument (which assumes that the future of the human race is not infinitely long) seems like a very strange argument to rest on, at least it’s an argument that favors Promotion over Inhibition. There’s no countervailing reason to favor Inhibition.

11.4 Weighing Will-not rights against welfare

I have argued that two long-range trends will gradually make it possible to provide life extension to everyone: the falling cost of life extension and rising levels of prosperity. In time, there may be no Have-nots. However, this is irrelevant to the Will-nots, who have access to life extension but don’t want it. Making life extension available to more people will not reduce the percentage of Will-nots over time. Taking the long view may not be relevant here. What does justice require when it comes to the rights of Will-nots?

The answer depends on the variables. Suppose, for example, that half the population are Haves and half are Will-nots. You don’t know whether, in the extended world, you’re a Have or a Will-not. In other words, you’re under a veil of ignorance that prevents you from knowing your values or preferences, so you don’t know whether you would want extended life—that’s why you don’t know whether you’re a Have or a Will-not. Suppose further that the harms to Will-nots reduce their welfare by 25 percent.

You’re deciding whether to risk a 50 percent chance of suffering a 25 percent reduction in lifetime welfare in exchange for a 50 percent chance of multiplying your lifetime welfare by roughly 12.5 times (1,000 years divided by 80). I’m not sure whether I would take that gamble; it’s a close call. I don’t have any confident sense of how many people would. If a hypothetical rational person would not take that gamble, then the rights of Will-nots outweigh the welfare gained by the Haves.

Now suppose the harm to Will-nots drops to 10 percent or less and the odds of being a Will-not drop to 10 percent. You’re considering whether to take a 10 percent chance of having your net welfare reduced by 10 percent in exchange for a 90 percent chance of multiplying your net welfare by roughly 12.5 times. That looks like a much more attractive gamble. If a hypothetical rational person would take that gamble, then the welfare of Haves outweighs the rights of Will-nots.
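For comparison only (normalizing an 80-year life at full welfare to 1, as the 12.5 figure assumes), the risk-neutral arithmetic for the two gambles is:

\[
0.5 \times 12.5 + 0.5 \times 0.75 \approx 6.6, \qquad 0.9 \times 12.5 + 0.1 \times 0.9 \approx 11.3 .
\]

Both figures exceed 1, so a purely risk-neutral chooser would take either gamble. The difference is that in the second case the downside is both milder and far less likely, which is why it can look attractive even to someone who weighs worst-case outcomes far more heavily than the expected-utility rule does, while the first case remains a close call for the risk averse.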

The problem is that we don’t know how much harm the Will-nots will really suffer, though I argued in chapter 4 that it’s very little. We also don’t know what the actual percentage of Will-nots will be, but the two surveys discussed in section 1.9 provide some clues. The Pew Research Center’s Religion and Public Life project surveyed 2,012 American adults about life extension; 56 percent said they would not want life extension if it were available to them.7 Two Australian universities surveyed 605 adults about life extension; only 35.4 percent said they would take life extension pills if they could.8 These two surveys suggest that the percentage of Will-nots could be half the population or even more. However, there are two reasons to take those survey results with a grain of salt.

First, the survey respondents knew very little about life extension. In the Pew survey, more than half the respondents had never heard of life extension before taking the survey, 30 percent had heard a little about it, and only 7 percent had heard or read a lot about it. In my teaching and talks about life extension, I find that many people think life extension involves keeping elderly people alive longer, as we do in hospitals now. They often don’t understand that life extension involves slowing or halting aging, not end-of-life care of the conventional kind. Perhaps many of the survey respondents mistakenly thought that extending human life means keeping people alive in an infirm, elderly condition far longer than we do now. It may take time for people to adjust to the idea of extended life and become sufficiently informed to form stable views about it.

Second, these surveys revealed some possible inconsistencies in attitudes. In the Pew survey, 68 percent of respondents thought most people would want life extension, even though slightly more than half of all respondents said they didn’t want it themselves. The Australian respondents were similar: 65.1 percent said they favored research into life extension technologies even though only about a third said they wanted it for themselves. This tension suggests that these attitudes are merely proto-attitudes—attitudes that aren’t yet carefully considered or stable. They may later shift in one direction or another.

Setting aside qualms about the two surveys, there are two reasons to be skeptical when people currently say they would not want extended life even if it were offered to them. First, it’s one thing to declare that you don’t want extended life the first time you hear about it, before you’ve thought about it, and when the prospect is highly remote. It’s quite another thing to turn it down when it’s a real, concrete possibility. Life extension seems like science fiction right now, and it’s hard to seriously consider revising one’s sense of the future on the strength of something that speculative. Because the prospect seems so far-fetched, people may affirm their existing attitudes simply because that’s easier and because there’s no reason to go to the trouble of changing their expectations.

Second, people who say they don’t want extended life may be expressing adaptive preferences. (See section 3.9.) An adaptive preference happens when you adapt what you want to what you can get. (I can’t reach the grapes, but I didn’t want those grapes anyway; they’re probably sour.) Throughout human history, we’ve never expected to avoid aging or live longer than a few decades. We may have adapted our desires about aging and life span to constraints we can’t escape. Such adaptation makes the anticipation of death easier to bear. Moreover, it’s not plausible that most adults alive today will be able to take advantage of life extension even if it arrives during their lifetime. They’re either not young enough or not rich enough—or both. If you have reason to think future generations will get this and you won’t, then you have reason to want to believe that it’s not worth having. However, future Will-nots are not in that position, and desires concerning death might change when the constraints do.

I’ll plant my flag here, for this is the best we can do with the information we now have: So far as Will-nots are concerned, Promotion is more just than Inhibition, because once people become familiar with life extension and have had time to think about it, very few people will turn down extended life if they can get it. The survival instinct is too strong and the aches and pains of aging too severe for any other choice to make sense to most people. (I am leaning, of course, on my arguments in chapters 2 and 3 that extended life is attractive and good to have.) Moreover, if they have doubts, the fact that life extension is a reversible choice makes it easy to experiment with extended life. Anyone who isn’t sure can try it for a while and later go off their life extension meds if they change their mind. In light of all this, I’ll hazard this prediction: In time, turning down life extension will become as eccentric a choice as choosing lifelong celibacy. It will be an intelligible choice but a rare one.

That line of argument leads to this objection: It seems that in the end, my argument for Promotion rests on a slender and speculative foundation—a set of reasons to think there will not be very many Will-nots. Is that really a sufficient basis for such a momentous policy decision? Perhaps we should hold off until we have more information.

My reply is this: This objection is a sword with two edges, for we also lack sufficient information to be sure that there will be lots of Will-nots. Neither Promotion nor alternatives to Promotion have strong factual support, but such support is simply not available until after life extension is available. If we do nothing in hopes of gaining more information, then (a) we will still learn very little until after life extension is available anyway, and (b) we are effectively making a policy choice in favor of Inhibition by neglect. Not choosing a policy is a policy choice, and it’s no better supported than any other choice.

11.5 Two versions of Promotion

We’ve been discussing a version of Promotion where the Haves can get life extension as soon as it’s available. We might call that Promotion with Immediate Access. There’s another policy to consider: Promotion with Delayed Access, where we develop life extension as quickly as possible, but no one can get it until the technique is affordable enough to be provided to everyone. The rich will just have to wait until society can manage to offer it to the entire population.

Promotion with Delayed Access does not produce as much welfare as Promotion with Immediate Access, since the Haves cannot get life extension as quickly, but unlike Promotion with Immediate Access, it doesn’t harm any Have-nots, for it’s not available to anyone until the Have-nots can have it too (and thereby become Haves). In short, Promotion with Immediate Access produces more welfare for the Haves at the expense of some rights violations for the Have-nots, while Promotion with Delayed Access avoids those rights violations at the cost of producing less welfare for Haves. Which policy is more just? That depends on whether the increased welfare available under Promotion with Immediate Access is great enough to outweigh the rights of Have-nots. If it is, then Promotion with Immediate Access is more just. If it’s not, then Promotion with Delayed Access is more just.

However, I won’t explore scenarios where we choose between these two policies, for even if Promotion with Delayed Access is more just, I have two practical concerns that tip my verdict from Promotion with Delayed Access back to Promotion with Immediate Access. First, I don’t think we can take Promotion with Delayed Access as a serious option. If life extension is developed, it will leak out and get used regardless of attempts to prohibit its use until everyone is rich enough to afford it. Promotion with Delayed Access requires temporary Prohibition, and Prohibition is not feasible. If Promotion with Delayed Access is not possible, it can’t be the more just policy—or, at least, attempting to institute it can’t be the right thing to do. (You can’t have a moral duty to do something you can’t do.) Remember, we’re doing applied ethics; we want to know what is the right thing to do under the circumstances we find ourselves in, not what is right under ideal circumstances that are highly unlikely to occur.

Second, Promotion with Immediate Access is likely to stimulate the development of life extension faster. The Haves have more money and more political influence; their support will tend to accelerate life extension research, and they’re more likely to support it if they think they’ll get it. (Look at the money that Google and several Silicon Valley billionaires are pouring into life extension research even now; would they do this if their leaders believed they wouldn’t get life extension immediately after its development?) Moreover, the firms that develop it can anticipate bigger and quicker profits if they’re allowed to sell it immediately, and that will motivate them too.

Conclusion: so far as the rights of Have-nots are concerned, Promotion with Immediate Access is the most justified life extension research policy.

11.6 Conclusions

  1. EE. Usually, we shouldn’t maximize welfare at the expense of rights, but welfare can outweigh rights when the conditions of the gambling test are met. The gambling test says that a gain in welfare is big enough to outweigh rights violations when we would be willing to risk that harm on a chance of getting that benefit if we were making that choice for ourselves and we didn’t know our identity or position in society. (Section 11.2)
  2. FF. The gambling test tells us that if we take all future generations into account, the welfare gained by Haves outweighs the rights violations of Have-nots. (Section 11.3)
  3. GG. Even if the development of life extension is inevitable, it matters morally which policy we follow. Promotion is more just than Inhibition even if life extension will eventually be developed under Inhibition too. (Section 11.3)
  4. HH. It’s likely that there will be so few Will-nots that the welfare gained by Haves outweighs the rights violations of Will-nots. (Section 11.4)
  5. II. There are two versions of Promotion: Promotion with Delayed Access, where no one gets life extension until it can be made available to everyone, and Promotion with Immediate Access, where the Haves can get it as soon as it is developed. However, even if Promotion with Delayed Access is more just in principle, Promotion with Immediate Access is the morally preferable policy, for Delayed Access is not feasible and Immediate Access will speed the development of life extension. (Section 11.5)