
If reliable intelligence determined that a large-scale terrorist attack in your city were highly likely during the next few weeks, and it also pointed to a particular suspect, would you support the preventive detention of that suspect for a period of time—say, a month—until it could be determined whether he or she was, in fact, planning the attack?

What type and level of interrogation would you authorize to elicit the information necessary to prevent the attack?

If arrest were not feasible because the suspected terrorist was hiding in an enemy country, would you support his targeted assassination if that were the only way to stop the attack?

What if the attack could be prevented only by a military strike against a terrorist base in a foreign country?

What if a full-scale invasion were required?

If the feared attack involved a weaponized virus, such as smallpox, whose deadly impact could be substantially reduced by massive inoculation, which would, however, kill 150 to 200 of those inoculated, would you support compulsory inoculation?

If an article demonstrating how to manufacture and weaponize smallpox were about to be published, would you support the preventive censorship of that article?

These are the kinds of tragic choices that every democratic society is now—or soon will be—facing. Yet we have not even begun to develop an agreed-upon jurisprudence or morality governing such drastic preventive and preemptive actions by our government. Instead, we are simply taking such actions on an ad hoc basis as we face the ongoing threats.


The democratic world is experiencing a fundamental shift in its approach to controlling harmful conduct. We are moving away from our traditional reliance on deterrent and reactive approaches and toward more preventive and proactive ones. This conceptual shift in emphasis from a theory of deterrence to a theory of prevention carries enormous implications—implications that are not being sufficiently considered—for civil liberties, human rights, criminal justice, national security, foreign policy, and international law, and for the actions a society may take to control dangerous human behavior, ranging from targeted killings of terrorists, to preemptive attacks against nuclear and other weapons of mass destruction, to preventive warfare, to proactive crime prevention techniques (stings, informers, wiretaps), to psychiatric or chemical methods of preventing sexual predation, to racial, ethnic, or other forms of profiling, to inoculation or quarantine for infectious diseases (whether transmitted “naturally” or by “weaponization”), to prior restraints on dangerous or offensive speech, to the use of torture (or other extraordinary measures) as a means of gathering intelligence deemed necessary to prevent imminent acts of terrorism.

Although the seeds of this change were planted long ago and have blossomed gradually over the years, it was the terrorist attack against the United States on September 11, 2001, that expedited and, in the minds of many, legitimated this important development. Following that attack, the former attorney general John Ashcroft described the “number one priority” of the Justice Department as “prevention.”1 The prevention of future crimes, especially terrorism, is now regarded as even “more important than prosecution” for past crimes, according to the Justice Department.2 In his confirmation hearings of January 5, 2005, Attorney General Alberto Gonzales reiterated that the administration’s “top priority is to prevent terror attacks.”3 The tactics that have been employed as part of this preventive approach include tighter border controls, profiling, preventive detention, the gathering of preventive intelligence through rough interrogation and more expansive surveillance, targeting of potential terrorists for assassination, preemptive attacks on terrorist bases, and full-scale preventive war. We are doing all this and more without a firm basis in law, jurisprudence, or morality, though there certainly are historical precedents—many questionable—for preventive actions.

From the beginning of recorded history, prophets have attempted to foresee harmful occurrences, such as flood, famine, pestilence, earthquake, volcanic eruption, tsunami, and war. Attempting to predict crime—to determine who is likely to become a criminal—has also captured the imagination of humankind for centuries. From the Bible’s “stubborn and rebellious son,” identifiable by his gluttony and drunkenness,4 to nineteenth-century criminologist Cesare Lombroso’s “born criminal and criminaloid,” identifiable by the shape of his cranium,5 to Sheldon and Eleanor Glueck’s three-year-old delinquent, identifiable by a composite score derived from familial relationships,6 “experts” have claimed the ability to spot the mark of the potential criminal before he or she has committed serious crimes. Though the results have not generally met with scientific approval, it is still widely believed—by many police officers, judges, psychiatrists, lawyers, and members of the general public—that there are ways of distinguishing real criminals from the rest of us, even before they commit any crimes.

In the 1920s and 1930s eugenicists not only in Nazi Germany but in the United States, Great Britain, and other Western nations believed that they could prevent criminal behavior in particular, and the weakening of particular races or of humankind in general, through sterilization and other eugenic measures.7 Even before the Holocaust, the German government forcibly sterilized four hundred thousand men and women, nearly 1 percent of Germans of childbearing age, believing that “[i]t is better to sterilize too many rather than too few.”8 The legislation authorizing sterilization was called the Law for the Prevention of Genetically Diseased Offspring.9 Although this “science” was justly discredited following the Holocaust, as recently as the 1970s it was suggested that the presence of the XYY karyotype in a man might be associated with, and consequently predictive of, certain kinds of violent crime.10 The mapping of the human genome has stimulated contemporary genetic research into the predictability of violence and other harms. Racial, ethnic, religious, and other “profiling” is now thought by some to hold promise in the effort to identify potential criminals, especially terrorists.

Historically, the widespread use of early intervention to preempt serious threats to the state and its rulers has been associated with tyrannical regimes. Hitler and Stalin excelled at killing their enemies before they could rise up against them. But preventive approaches have been championed by progressive forces as well.

Over the past several decades, especially in Europe, the so-called precautionary principle has become “a staple of regulatory policy.”11 It postulates that one should “[a]void steps that will create a risk of harm. Until safety is established, be cautious; do not require unambiguous evidence. In a catchphrase: Better safe than sorry.”12

The New York Times Magazine listed the precautionary principle as among the most “important ideas” of 2001.13 This principle, which originated in Germany and grew out of efforts to prevent environmental and other “natural” disasters, has now moved beyond these concerns, which have traditionally been raised by the left. According to Professor Cass Sunstein, the precautionary principle has now “entered into debates about how to handle terrorism, about ‘preemptive war,’ and about the relationship between liberty and security. In defending the 2003 war in Iraq, President George W. Bush invoked a kind of Precautionary Principle, arguing that action was justified in the face of uncertainty. ‘If we wait for threats to fully materialize, we will have waited too long.’ He also said, ‘I believe it is essential that when we see a threat, we deal with those threats before they become imminent.’”14

Professor Sunstein points to an interesting paradox in the different attitudes in Europe and the United States: “[T]he United States appears comparatively unconcerned about the risks associated with global warming and genetic modification of foods; in those contexts, Europeans favor precautions, whereas Americans seem to require something akin to proof of danger. To be sure, the matter is quite different in the context of threats to national security. For the war in Iraq, the United States (and England) followed a kind of Precautionary Principle, whereas other nations (most notably France and Germany) wanted clearer proof of danger.”15*

This observation can be generalized beyond Europe and the United States and beyond the contemporary scene: All people in all eras have favored some preventive or precautionary measures, while opposing others. The differences over which preventive measures are favored and which opposed depend on many social, political, religious, and cultural factors. As I shall argue throughout this book, it is meaningless to declare support for, or opposition to, prevention or precaution as a general principle because so much properly depends on the values at stake—on the content of the costs and benefits and on the substance of what is being regulated.

One can of course sympathize with efforts to predict and prevent at least some harms before they occur, rather than wait until the victim lies dead. Indeed, Lewis Carroll put in the Queen’s mouth an argument for preventive confinement of predicted criminals that Alice found difficult to refute. The Queen says:

“[T]here’s the King’s Messenger. He’s in prison now, being punished; and the trial doesn’t even begin till next Wednesday; and of course the crime comes last of all.”

“Suppose he never commits the crime?” said Alice.

“That would be all the better, wouldn’t it?” the Queen said. . . .

Alice felt there was no denying that. “Of course that would be all the better,” she said: “But it wouldn’t be all the better his being punished.”

“You’re wrong . . .” said the Queen. “Were you ever punished?”

“Only for faults,” said Alice.

“And you were all the better for it, I know!” the Queen said triumphantly.

“Yes, but then I had done the things I was punished for,” said Alice: “that makes all the difference.”

“But if you hadn’t done them,” the Queen said, “that would have been even better still; better, and better, and better!” Her voice went higher with each “better,” till it got quite to a squeak. . . .

Alice [thought], “There’s a mistake here somewhere—”16

There are numerous mistakes and perils to liberty implicit in this kind of thinking, and they are not being sufficiently debated today.

Part of the reason for our neglect of the issues surrounding prevention is the mistaken assumption that any form of preventive detention would be alien to our traditions. Lord Justice Denning, one of the most prominent common law jurists of the twentieth century, purported to summarize the irreconcilability of preventive punishment with democratic principles: “It would be contrary to all principle for a man to be punished, not for what he has already done, but for what he may hereafter do.”17 It may be contrary to all principle, but as we shall see, it is certainly not contrary to all practice.

The shift from responding to past events to preventing future harms is part of one of the most significant but unnoticed trends in the world today. It challenges our traditional reliance on a model of human behavior that presupposes a rational person capable of being deterred by the threat of punishment. The classic theory of deterrence postulates a calculating evildoer who can evaluate the cost-benefits of proposed actions and will act—and forbear from acting—on the basis of these calculations. It also presupposes society’s ability (and willingness) to withstand the blows we seek to deter and to use the visible punishment of those who inflict them as a threat capable of deterring future harms. These assumptions are now being widely questioned as the threat of weapons of mass destruction in the hands of suicide terrorists becomes more realistic and as our ability to deter such harms by classic rational cost-benefit threats and promises becomes less realistic.

Among the most frightening sources of danger today are religious zealots whose actions are motivated as much by “otherworldly” costs and benefits as by the sorts of punishments and rewards that we are capable of threatening or offering. The paradigm is the suicide terrorist, such as the ones who attacked us on September 11. We have no morally acceptable way of deterring those willing to die for their cause, who are promised rewards in the world to come.18 Recall the serene looks on the faces of the suicide terrorists as they were videotaped passing through airport security in the final hours of their lives. It is not that they are incapable of making “rational” cost-benefit calculations, by their own lights. But these calculations involve benefits that we cannot confer (eternity in paradise) and costs (death) that to them are outweighed by the expected benefits. They are in some respects like “insane” criminals who believe that God or the devil told them to do it. Because they are not deterrable, the argument for taking preventive measures against them becomes more compelling. Blackstone made this point in the context of “madmen”: “[A]s they are not answerable for their actions, they should not be permitted the liberty of acting. . . .”19 Nations whose leaders genuinely—as opposed to tactically—believe that their mission has been ordained by God (such as some in today’s Iran) may also be more difficult to deter than those who base their calculations on earthly costs and benefits (such as today’s North Korea or Cuba).20

The New York Times, in its lead editorial on September 10, 2002, a year after the 9/11 attacks and six months before the invasion of Iraq, recognized the distinction between the theory of deterrence and the theory of prevention (or preemption or first strike) in the context of the war on terrorism: “The suddenness and ferocity of last September’s terror attacks tore the United States free from the foreign-policy moorings that had served the nation well for more than five decades, including the central notion that American military power could by its very existence restrain the aggressive impulses of the nation’s enemies. In its place, the Bush administration has substituted a more belligerent first-strike strategy that envisions Washington’s attacking potential foes before they hit us. That may be appropriate in dealing with terror groups, but on the eve of the anniversary of Sept. 11 there is still an important place in American policy for the doctrine of deterrence.”21

The Times went on to define the term: “Deterrence is diplomatic parlance for a brutally simple idea: that an attack on the United States or one of its close allies will lead to a devastating military retaliation against the country responsible. It emerged as the centerpiece of American foreign policy in the early years of the cold war.”22

According to the Times, this approach has the advantage of inducing “responsible behavior by enemies as a matter of their own self-interest. . . . Aggression becomes unattractive if the price is devastation at home and possible removal from power.”23 The Times then argued that while preemption may be appropriate against terrorists, it was a far more questionable strategy in dealing with Iraq: “In the wake of Sept. 11, President George W. Bush has made a convincing case that international terrorist organizations, which have no permanent home territory and little to lose, cannot reliably be checked by the threat of retaliation and must be stopped before they strike. Whether Saddam Hussein falls into that category is a question that the country will be debating in the days ahead.”24

The debate predicted by the Times did occur, but only in the relatively narrow context of the Iraq War. In this book, I shall broaden it beyond any specific war, and even beyond the issue of war itself, to the wide range of harms that may not be subject to (or may not be thought to be subject to) the strategy of deterrence.

The classic theory of deterrence contemplates the state’s absorbing the first harm, apprehending its perpetrator, and then punishing him publicly and proportionally, so as to show potential future harmdoers that it does not pay to commit the harm. In the classic situation, the harm may be a single murder or robbery that, tragic as it may be to its victim and his family, the society is able to absorb. In the current situation the harm may be a terrorist attack with thousands of victims or even an attack with weapons of mass destruction capable of killing tens of thousands. National leaders capable of preventing such mass attacks will be tempted to take preemptive action, as some strategists apparently were during the early days of the Cold War. Again according to the New York Times, “During the Truman administration, some strategists suggested attacking the Soviet Union while it was still militarily weak to prevent the rise of a nuclear-armed Communist superpower. Wiser heads prevailed, and for the next 40 years America’s reliance on a strategy of deterrence preserved an uneasy but durable peace.”25

With the benefit of hindsight, that decision was clearly correct, but if the Soviet Union had in fact subsequently used its nuclear arsenal against us, our failure to take preventive action when we more safely could have would have been criticized, just as Britain’s failure to prevent the German arms buildup in the years prior to World War II has been criticized. One of the great difficulties of evaluating the comparative advantages and disadvantages of deterrence versus preemption is that once we have taken preemptive action, it is almost never possible to know whether deterrence would have worked as well or better. Moreover, at the time the decision has to be made—whether to wait and see if deterrence will work or to act preemptively now—the available information will likely be probabilistic and uncertain. It is also difficult to know with precision the nature and degree of the harm that may have been prevented. For example, if a preemptive attack on the German war machine had succeeded in preventing World War II, we would never know the enormity of the evil it prevented. All that history would remember would be an unprovoked aggression by Britain on a weak Germany.26

The conundrum, writ large, may involve war and peace. Writ small, it may involve the decision whether to incarcerate preventively an individual who is thought to pose a high degree of likelihood that he will kill, rape, assault, or engage in an act of terrorism. At an intermediate level, it may involve the decision to quarantine dozens, or hundreds, of people, some of whom may be carrying a transmittable virus like SARS or avian flu. At yet another level, it may raise the question of whether to impose prior restraint on a magazine, newspaper, television network, or Internet provider planning to publish information that may pose an imminent danger to the safety of our troops, our spies, or potential victims of aggression.27 Since the introduction of the Internet, which, unlike responsible media outlets, has no “publisher” who can be held accountable after the fact, there has been more consideration of before-the-fact censorship.28

At every level, preventive decisions must be based on uncertain predictions, which will rarely be anywhere close to 100 percent accurate. We must be prepared to accept some false positives (predictions of harms that would not have occurred) in order to prevent some predicted harms from causing irreparable damage. The policy decisions that must be made involve the acceptable ratios of false to true positives in differing contexts.
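To see how quickly false positives can swamp true positives when the predicted harm is rare, consider the following back-of-the-envelope sketch. The numbers, the function, and its parameters are all hypothetical—invented for illustration rather than drawn from any study cited here—but the base-rate arithmetic they illustrate confronts every preventive policy.

```python
# Hypothetical illustration of base-rate arithmetic for a preventive screen.
# All figures are invented for the example, not taken from the text.

def screening_outcomes(population, base_rate, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives) produced by a predictive screen."""
    actual_threats = population * base_rate
    harmless = population - actual_threats
    true_positives = actual_threats * sensitivity        # real threats correctly flagged
    false_positives = harmless * false_positive_rate     # innocents wrongly flagged
    return true_positives, false_positives

# Suppose 1 person in 100,000 poses the feared harm, and the predictor is
# (optimistically) 99 percent sensitive and 99 percent specific.
tp, fp = screening_outcomes(population=1_000_000, base_rate=1e-5,
                            sensitivity=0.99, false_positive_rate=0.01)
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}")
# Prints roughly 10 true positives and 10,000 false positives: about a
# thousand innocents flagged for every genuine threat detected.
```

On these assumed numbers, even a predictor far more accurate than anything we now possess would flag about a thousand innocents for every actual threat it catches; the policy question is whether, and in which contexts, such a ratio is tolerable.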

Over the millennia we have constructed a carefully balanced jurisprudence or moral philosophy of after-the-fact reaction to harms, especially crimes. We have even come to accept a widely agreed upon calculus: Better ten guilty go free than even one innocent be wrongly convicted.29 Should a similar calculus govern preventive decisions? If so, how should it be articulated? Is it better for ten possibly preventable terrorist attacks to occur than for one possibly innocent suspect to be preventively detained? Should the answer depend on the nature of the predicted harm? The conditions and duration of detention? The past record of the detainee? The substantive criteria employed in the preventive decision? The ratio of true positives to false positives and false negatives? These are the sorts of questions we shall have to confront as we shift toward more preventive approaches—whether it be to terrorism, crime in general, the spread of contagious diseases, or preventive warfare.

Decisions to act preemptively generally require a complex and dynamic assessment of multiple factors.30 These factors include at least the following:

 1. The nature of the harm feared31

 2. The likelihood that the harm will occur in the absence of preemption32

 3. The source of the harm—deliberate conduct or natural occurrence?

 4. The possibility that the contemplated preemption will fail

 5. The costs of a successful preemption33

 6. The costs of a failed preemption

 7. The nature and quality of the information on which these decisions are based

 8. The ratio of successful preemptions to unsuccessful ones

 9. The legality, morality, and potential political consequences of the preemptive steps

10. The incentivizing of others to act preemptively

11. The revocability or irrevocability of the harms caused by the feared event

12. The revocability or irrevocability of the harms caused by contemplated preemption

13. Many other factors, including the inevitability of unanticipated outcomes (the law of unintended consequences)

In light of the complexity, dynamism, and uncertainty of these and other factors that must go into rationally making any preemptive decision, it would be difficult to construct a general formula with which specific decisions could be quantified, evaluated, and tested. At the simplest level, any such formula would begin by asking whether the seriousness of the contemplated harm,34 discounted by the unlikelihood that it would occur in the absence of preemption, is greater than the harms that would be caused by a successful preemption, adjusted for the likelihood (and costs) of a failed preemption. This simple formula can be made more complex by the inclusion of other factors, such as the appropriate burdens of action and inaction, the legal and moral status of the intervention, and the likelihood of long-term, unintended consequences. Any formula will necessarily mask subtlety, nuance, and indeterminacy. But a formula that even comes close to approximating reality will help clarify the relationship among the factors that either explicitly or implicitly should be considered by any rational decision maker responsible for taking preemptive actions.
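That simple starting point can be written out as an inequality—offered here only as an illustrative sketch, with symbols introduced for convenience rather than drawn from any established doctrine: act preemptively only if

\[
P_h \cdot H \;>\; P_s \cdot C_s + (1 - P_s) \cdot C_f ,
\]

where \(P_h\) is the probability that the feared harm occurs absent preemption, \(H\) its seriousness, \(P_s\) the probability that the preemptive action succeeds, \(C_s\) the costs incurred even by a successful preemption, and \(C_f\) the typically greater costs of a failed one. The additional factors listed above—burdens of action and inaction, legal and moral status, unintended long-term consequences—would enter as further terms or discounts, which is precisely why any such formula can only approximate the judgment being made.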

There have been several legal contexts in which judges have tried to construct gross formulas for analyzing decisions with predictive implications. In the First Amendment context, Judge Learned Hand reformulated the “clear and present danger” exception to protected free speech in the following terms: “In each case [the judge] must ask whether the gravity of the ‘evil,’ discounted by its improbability, justifies such invasion of free speech as is necessary to avoid the danger. . . . We can never forecast with certainty; all prophecy is a guess, but the reliability of a guess decreases with the length of the future which it seeks to penetrate. In application of such a standard courts may strike a wrong balance; they may tolerate ‘incitements’ which they should forbid; they may repress utterances they should allow; but that is a responsibility that they cannot avoid.”35
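In the same rough notation used above—my own shorthand, not Hand’s—his test can be rendered: suppress the speech only if

\[
G \cdot P \;>\; I ,
\]

where \(G\) is the gravity of the feared evil, \(P\) the probability that the speech will actually bring it about, and \(I\) the magnitude of the invasion of free speech necessary to avoid the danger.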

That formulation was used (misused) by the Supreme Court to sustain the conviction of the leaders of the American Communist Party in 1951, despite the minuscule likelihood that this weak and unpopular party could actually succeed in overthrowing our government “by force or violence.” The “clear and present danger” test was made more speech-friendly by the Supreme Court in its 1969 Brandenburg decision, which required that the danger be likely and imminent.36 This is the current view of the First Amendment.

In deciding whether to issue an injunction, courts likewise balance predicted future harms and likely outcomes. Justice Stephen Breyer summarized “the heart of this test” as “whether the harm caused plaintiff without the injunction, in light of the plaintiff’s likelihood of eventual success on the merits, outweighs the harm the injunction will cause defendants.”37
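Breyer’s balancing test can be sketched the same way—again my shorthand, not the Court’s: issue the injunction only if

\[
P_{\text{success}} \cdot H_{\text{plaintiff}} \;>\; H_{\text{defendant}} ,
\]

where \(P_{\text{success}}\) is the plaintiff’s likelihood of eventually prevailing on the merits, \(H_{\text{plaintiff}}\) the harm the plaintiff would suffer without the injunction, and \(H_{\text{defendant}}\) the harm the injunction would cause the defendant.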

These rather simple formulas do not even begin to capture the subtleties and difficulties of balancing the claims of prevention against those of freedom. Consider the following preemptive decisions, all of which potentially involve life and death choices.

The most far-reaching may be whether a democratic nation, committed to humane values, should go to war before it is attacked in order to prevent an anticipated attack or to gain a military advantage in what it regards as an inevitable or highly likely war.38 We can call this the preemptive or preventive war decision.39 Related to that exercise in anticipatory self-defense is the decision to engage in military action to prevent genocide or ethnic cleansing of others within a given country. This can be called the humanitarian intervention decision.

Another may be whether to vaccinate all, most, or some people against a contagious germ that can be, or has been, weaponized under circumstances in which a small, but not insignificant, number of those vaccinated may die or become seriously ill from the vaccination. The decision becomes especially difficult when it is only possible, but not probable, that an attack using the weaponized germ will occur. We shall call this the preventive inoculation decision.

Yet another may involve the decision to try to identify and confine (or otherwise incapacitate) potentially dangerous individuals (rapists, killers, terrorists, child molesters)40 or groups (Japanese-Americans, Arab-Americans, those who fit certain “profiles,” or others). We shall call this the preventive detention decision (or Minority Report approach, based on the motion picture about a futuristic law enforcement system that relies on predicting and preventing crime).

A particularly difficult decision may be whether a government should try to prevent certain kinds of speech (or other expression) that are thought to incite, provoke, facilitate, or otherwise cause (or contribute significantly to) serious harms, ranging from genocide to rape to the killing of spies to an overthrow of a government. The causative mechanisms may vary: In some situations the mechanism is informational (revealing the names of spies, the locations of planned military attacks, the instructions for making a nuclear weapon, the names and locations of ethnic enemies, as in Rwanda), in other situations it may be emotional (incitements, degrading the intended victims, issuing religious decrees), while in still others it may contain a combination of elements. We shall call this the censorship or prior restraint decision.

The question is whether decisions as diverse as the above share enough elements so that there is some benefit in trying to construct a common decision-making formula. Such a formula, with appropriate variations, may help clarify the balancing judgments that must be made before preemptive or preventive action is deemed warranted. Even in the absence of a single formula, comparative discussion of these different but related predictive decisions may contribute to clarification of the policies at stake in each type of decision.

Human beings make both predictive and retrospective decisions every day. Routine predictive decisions include weather reports, college admission decisions, stock purchases, vacation plans, bets on sporting events, and voting choices. Routine retrospective decisions include trial verdicts, historical reconstructions, and punishing children for misbehavior that they deny. Many decisions are of course a mixture of retrospective and prospective elements. These include sentencing, marriage proposals, issuing protective orders against feared abusers, and denying bail to criminal defendants.

In theory we should be no worse at predicting at least some types of future events than we are at reconstructing some past ones, because the accuracy of our predictions (at least our short-term, visible ones, such as weather, stocks, sports, and college performance) is easily tested by simple observations of future events as they unfold, whereas past reconstructions (such as whether a particular crime, tort, or historical event actually occurred) are often not retrievable or observable.41

Yet in practice we seem better (or we believe we are better) at reconstructing the past than at predicting the future, perhaps because we fail to learn from our predictive mistakes. It has been argued that prediction is more difficult than reconstruction, because predictive decisions are inherently probabilistic (e.g., how likely is it that it will rain tomorrow?), whereas retrospective decisions are either right or wrong (Booth either assassinated Lincoln or didn’t). But this is really a matter of how the issue is put. Predictive decisions are also either right or wrong: It will either rain tomorrow or not. And the question of whether a past act did or did not occur can also be stated in probabilistic terms; jurors are asked to decide “beyond a reasonable doubt” or by “a preponderance of the evidence” whether a disputed past event occurred. In either instance, the target event either occurred (or will occur) or it did not (or will not), but in the absence of full information, we cannot be sure and must state our level of certainty in probabilistic terms—e.g., it seems highly likely (or 90 percent likely) that X occurred; it seems highly likely (or 90 percent likely) that X will occur.42 Our ability to predict the future as well as to reconstruct the past has almost certainly improved with developments in science (predictive computer modeling, DNA, etc.), but we are still far from any level of accuracy that eliminates the problems of false positives and false negatives and thus the moral challenge of assigning proper weights to these inevitable errors.

In real life, as distinguished from controlled experiments, most important decisions involve both predictive and retrospective judgments, often in combination. Consider the sentencing decision a judge must make with regard to a convicted defendant. Although the jury has already decided (beyond a reasonable doubt) that the defendant almost certainly committed the specified crime with which he was charged, the judge will generally also consider other uncharged past crimes (his record) as well as the likelihood that he will recidivate.43 Or consider the decision to hire a lawyer. When a potential client interviews one, he wants to know the lawyer’s past record (which is more complex than simply the ratio of wins to losses because the difficulty of the cases is relevant). He also wants to assess the lawyer’s current status—has he become too busy or too old for a long, complex case?—as well as his likely future performance when the case comes before the court. Similarly, with regard to a potential pinch hitter at a crucial stage of a baseball game, the manager looks at past performance—batting average, on-base percentage, success against a particular pitcher—and then makes a prospective judgment: How is he likely to do in this specific situation?

Or consider the much more serious, even monumental decision to go to war against Saddam Hussein. The primary considerations were future-looking: What was the likelihood that he would use weapons of mass destruction against the United States, his own people, or one of our allies? What was the likelihood he would sell or otherwise transfer such weapons to terrorist groups? Those future-looking probabilistic judgments had to be based on past and present-looking assessments: What was the probability that he currently had weapons of mass destruction? Did he have them in the past, and if so, what did he do with them? Did he use them against his own people and against Iran? Similarly a decision to target a ticking bomb terrorist for arrest, assassination, or other form of incapacitation will inevitably be based on a combination of past and future-looking probabilistic judgments: What is the likelihood that he has engaged in terrorist activities in the recent past? Has anything changed to make it less likely that he will persist in these activities? Is there current, reliable intelligence about his plans or activities for the future? How many people will he likely kill if he is not incapacitated? How many people (and of what status—other terrorists, supporters of terrorists, innocent bystanders) will likely be killed or injured in the effort to incapacitate him? Will killing him cause others to resort to more terrorism? Or might it deter others?

Asking the broad question of whether preemption is good or bad is as meaningless as asking whether deterrence is good or bad. Preemption is a mechanism of social control that is sometimes good and sometimes bad, depending on many factors. Just as deterrence can be used for bad purposes or in bad ways (in parts of the South in the 1930s any black person who acted “inappropriately” toward a white woman could be lynched), so too preemption can be used for good purposes and in good ways (planting informers within the Ku Klux Klan to learn about and prevent anticipated lynchings). There is, however, something understandably unsettling about giving the government broad powers to intervene in the lives of its citizens before a harm has occurred in an effort to prevent anticipated harm, rather than respond once it has occurred.

Requiring a past harm as a precondition to the exercise of certain governmental powers serves as an important check on the abuse of such powers. But this check, like most checks, comes with a price tag. The failure to act preemptively may cost a society dearly, sometimes even catastrophically. For example, when the United Nations Charter was originally drafted in the wake of World War II, it demanded that an actual “armed attack occur” before a nation could respond militarily. Now, in the face of potential weapons of mass destruction in the hands of terrorists or rogue nations, that charter is being more widely interpreted to permit preemptive self-defense “beyond an actual attack to an imminently threatened one.”44 But acting preemptively also comes with a price tag, often measured in lost liberties and other even more subtle and ineffable values. That is perhaps why deterrence, rather than preemption, is the norm—the default position—in most democracies for the exercise of most extraordinary governmental powers, such as waging war, confining dangerous people, requiring citizens to submit to medical procedures, and restraining speech. More and more, however, this presumption against preemptive actions is being overcome by the dangers of inaction, of not acting preemptively. The stakes have increased for both taking and not taking preemptive steps, as we live in a world of increasing physical dangers and increasing dangers to our liberty. Hence the need for thoughtful consideration of the values at stake whenever an important preemptive action is contemplated.

Because the debate throughout the world has become politicized, it has too often focused on the yes-no questions of whether preemption is a good policy, rather than on the more nuanced issues discussed above. Even for those adamantly opposed to all preemption—or to all preemption of a particular sort, such as preemptive war—the reality is that preemptive actions of different types and degrees are becoming routine throughout the world. These actions are being taken without the careful, rational consideration that carries with it the prospect that this important, if controversial, mechanism of social control can be cabined in a way that maximizes its utility, while minimizing its potential for misuse and abuse. Precise quantification of many of the factors that are relevant to predictive decisions probably exceeds our current capacity. Some may indeed go beyond our inherent ability to quantify. A profound observation, made centuries ago and included in the Jewish prayer service, cautions that there are certain things that cannot be measured or quantified, such as helping the poor and doing acts of loving-kindness.45 Despite this caution, it may still be true that thinking about the costs and benefits of an important mechanism of social control in a roughly quantified manner can be a helpful heuristic.

The elusiveness of any quest for a precise formula capable of quantifying the elements that should govern preemptive decisions must not discourage efforts at constructing a meaningful jurisprudence of preemption. After all, we still lack a precise formula for evaluating retrospective decisions. We have been struggling with efforts to quantify punitive decisions since biblical times, when Abraham argued with God about how many false positives would be acceptable in an effort to punish the sinners of Sodom:

Will you really sweep away the innocent along with the guilty?

Perhaps there are fifty innocent within the city,

will you really sweep it away? . . .

Heaven forbid for you!

The judge of all the earth—will he not do what is just?

YHWH said:

If I find in Sodom fifty innocent within the city,

I will bear with the whole place for their sake.

Avraham spoke up, and said: . . .

Perhaps there will be found there only forty!

He said:

I will not do it, for the sake of the forty.

But he said: . . .

Perhaps there will be found there only thirty!

He said:

I will not do it, if I find there thirty.

But he said: . . .

Perhaps there will be found there only twenty!

He said:

I will not bring ruin, for the sake of the twenty.

But he said:

Pray let my Lord not be upset that I speak further just this one

time:

Perhaps there will be found there only ten!

He said:

I will not bring ruin, for the sake of the ten.

YHWH went, as soon as he had finished speaking to Avraham,

and Avraham returned to his place.46

In other words, fifty false positives—innocents punished along with the guilty—would be too many. So would forty, thirty, twenty, even ten! But Abraham seems to concede that fewer than ten would not be unjust, since he stops his argument and returns to his place after God agrees that he “will not bring ruin, for the sake of the ten.” Even this powerful story does not contain sufficient data on which to base a formula, since we do not know how many false negatives (sinners who deserve punishment) are being spared “for the sake” of the ten, or how many future crimes that could have been prevented would now occur. Despite the incompleteness of the data, this biblical account—perhaps the first recorded effort to quantify important moral judgments—almost certainly served as the basis for the formula later articulated by Maimonides,47 Blackstone, and others that it is better for ten guilty defendants (some have put the number at a hundred, others at a thousand) to go free (to become false negatives) than for one innocent to be wrongly condemned (to become a false positive). That primitive formula is about the best we have come up with in the thousands of years we have been seeking to balance the rights of innocent defendants against the power of the state to punish guilty defendants for their past crimes.

We apply or at least claim to apply the identical formula to suspected murderers, pickpockets, corporate criminals, and drunken drivers. (There are some historical exceptions, such as treason, which our Constitution made especially difficult to prosecute, and rape, which historically was difficult to prosecute because of sexist distrust of alleged victims, a phenomenon that has been undergoing significant change during the past several decades.) A rational, calibrated system might well vary the number depending on the values at stake. The U.S. Constitution contains no specific reference to the maxim that it is better for ten guilty to go free than for one innocent to be convicted, but the Supreme Court has repeatedly invoked it as part of the requirement of proof beyond a reasonable doubt. Although the maxim was first articulated in the context of the death penalty, which was the routine punishment for all serious felonies, over time it came to be applied to imprisonment as well. Many Americans (and many jurors) probably do not prefer to see ten murderers go free in order to prevent the false imprisonment of even one wrongly accused defendant. Nonetheless, the maxim has become enshrined among the principles that distinguish nations governed by the rule of law from nations governed by the passion of persons. The maxim emerged of course from a criminal justice system that dealt with crime as a retail, rather than a wholesale, phenomenon. The guilty murderer who might go free as a result of its application was not likely to engage in future mass murders. The cost of applying the maxim could be measured in individual deaths, terrible as any preventable murder might be. Now, with the advent of terrorists using weapons of mass destruction, the calculus may have to change. It remains true, in my view, that it is better for ten guilty criminals (even murderers) to go free (and perhaps recidivate on a retail basis) than for even one innocent person to be wrongfully convicted. But it does not necessarily follow from this salutary principle that it is also better for ten potential mass terrorists to go free (and perhaps recidivate on a wholesale basis) than for even one innocent suspect to be detained for a limited period of time, sufficient to determine that he is not a potential terrorist. It was reported, for example, on November 14, 2005, that a year prior to the suicide bombings of the three American-owned hotels in Amman, Jordan, a man with the same name as one of the suicide bombers had been detained and quickly released by American forces in Iraq. If this was indeed the same man, and if the killing of more than fifty innocent civilians could have been prevented by continuing to detain him (a doubtful conclusion in light of the ready availability of suicide bombers), then it would be fair to question the decision to release him (at least with the benefit of 20/20 hindsight).

These are the sorts of issues that must now be faced squarely as we shift from a primarily deterrent focus to a significantly preemptive approach, especially in the war against terrorism.

The terms “preemption,” “preemptive war,” and “preemptive actions” have recently come into common usage largely as the result of the Bush administration’s policies in Iraq and Afghanistan and with regard to global terrorism. But the phenomena are not themselves new. Preemptive and preventive wars have been fought over the centuries, as I shall show in Chapter 2.48 Other preemptive actions, short of outright war, have been taken since the beginning of recorded history. For example, the killing of heirs to the thrones of deposed leaders to prevent their later ascendancy has been common since before biblical times. And preemption as a concept has always been an important mechanism of social control, though often called by other names, such as prevention, prior restraint, anticipatory action, Vorsorgeprinzip (the precautionary principle), and predictive decision making.

One early, if incomplete, effort at constructing a jurisprudence of preemption was undertaken by the twelfth-century Jewish scholar Maimonides. He began with the biblical rule, “If a thief is caught breaking in and is struck so that he dies, the defender is not guilty of bloodshed, but if it happens after sunrise, he is guilty of bloodshed.”49 This rule was explained as presuming that a nighttime thief anticipated confronting and killing the homeowner, whereas a daytime thief intended merely to steal property from a vacant house. The Talmud generalized this interpretation into a rule of anticipatory self-defense: “If a man comes to kill you, you kill him first.”50

Maimonides expanded this rule into an even more general obligation: “Whenever one pursues another to kill him, every Jew is commanded to save the attacked from the attacker, even at the cost of the attacker’s life.”51 This obligation was called din rodef, or the law regarding the pursuer. It was a law of last resort applicable only if the danger was imminent and there was no reasonable alternative way to prevent the pursuer from killing.52 Beyond these broad constraints, however, there was little in the way of a detailed jurisprudence governing the permissible use of anticipatory self-defense. There was no specification of the number of witnesses required to invoke din rodef or of the level of certainty that the pursuer intended to kill, nor was there any requirement that a warning be given to the pursuer before lethal force could be directed at him. This contrasts sharply with the laws regulating after-the-fact imposition of the death penalty for such completed crimes as murder. Two witnesses and advance warning were obligatory, and a high standard of proof was required. This difference may be understandable, since din rodef is an emergency measure, a law of necessity. Preemptive action, to be effective, must be immediate. This is true, however, of all measures of self-defense. Yet a careful jurisprudence of self-defense has evolved over time. The same cannot be said for anticipatory or preemptive self-defense, for either the individual (the micro level) or the state (the macro level).

The dangers of a generalized doctrine of preventive killing, unregulated by specific jurisprudential constraints, are demonstrated by how the doctrine of din rodef was exploited by some extremist right-wing rabbis in Israel to “justify” the assassination of then Prime Minister Yitzchak Rabin, who was trying to reach a compromise peace with the Palestinians. The man convicted of murdering Rabin claimed in court that “Halakhah (rabbinic law) states that if a Jew hands over his people or his land to the enemy he must be killed at once.”53 Several rabbis supported that perverse interpretation of din rodef.54

Although most rabbis across the spectrum of Judaism rejected that expansive view, the fact that anyone could believe it shows the need for more than a broad statement permitting the killing of those who pose any danger. The same is true today, as advocates of preventive or preemptive self-defense cite general principles to justify all manner of anticipatory intervention. The time has come to develop jurisprudential constraints on the kinds of preventive and preemptive steps that are now being employed and considered by governments throughout the world.

For more than forty years, I have focused much of my own scholarly writing, teaching, and thinking on these ideas and concepts. I have taught courses on the prediction and prevention of harmful conduct since the 1960s. I have written numerous articles on preventive confinement, predictability, and related subjects from the time I started teaching at Harvard.55 My interests have been far broader than the recent policies of the Bush administration. They have included the confinement of predicted sex offenders, delinquents, criminals, and other allegedly dangerous individuals.56 They have also included the preemptive confinement of entire ethnic groups, such as the Japanese-Americans who lived on the West Coast during World War II.57 I have written about predictive tests designed to identify future delinquents, criminals, deviants, and even unethical lawyers.58 I have criticized experts, particularly psychiatrists and judges, who were too certain of the correctness of their predictive decisions.59 I have complained about the unwillingness of our legal system to develop a jurisprudence of preemption in the face of the reality that we are employing this mechanism quite extensively.60 But I have never confronted the broad issue as a general societal problem. To my knowledge, no one has. The movement away from deterrence and toward prevention is an important and largely unnoticed trend that potentially affects the lives of many people.

The current focus on preemption, resulting from the political debate over particular preemptive policies and actions, affords an opportunity to evaluate preemption in its varying contexts and manifestations. In this book, I suggest how a democratic society might begin to construct a jurisprudence, a philosophy, of preemption. To put it another way, I shall try to articulate the factors that should go into any process of striking the proper balances between the virtues and vices of early intervention, especially when the intervention involves the use of force, power, compulsion, censorship, incarceration, and death—and especially when the failure to intervene may also involve comparable threats, dangers, and harms. I shall not deal with all these issues, but rather with those that are the most urgent and difficult. When the stakes are high and the available options tend to be “tragic choices” or “choices of evils,” the need for thoughtful, calibrated, and nuanced analysis becomes even more compelling. The question, Are you for or against preemption? (or “precaution” or “prevention”) thus becomes exposed as a meaningless polemic, as empty of content as the question, Are you for or against deterrence? or, Are you for or against punishment? The costs and benefits of each category of preemptive or preventive decision—indeed of each specific decision—must be carefully weighed and openly debated. I hope to stimulate such debate with this book.

But before we address the more general and specific contemporary issues—both macro, such as preventive war, and micro, such as preventive detention and profiling—we should briefly look to the Anglo-American history of prevention and preemption. We shall see that prevention in general, and preventive confinement of the dangerous in particular, have deep roots and are not as “unprecedented” as some would have us believe. It is to that fascinating, though largely unacknowledged, double-edged history we now turn.


* Following the London subway bombings, signs were posted in train stations reading, GUILTY UNTIL PROVEN INNOCENT—TREAT ABANDONED BAGS WITH SUSPICION.