3

War and Responsibility

AT THE INTERNATIONAL War Crimes Tribunal organized by Bertrand Russell in 1966 to try the US government for its crimes in Vietnam, an international group of philosophers, activists, lawyers, and commentators found the United States guilty of the crimes of aggressive war and crimes against peace and humanity.1 Jean-Paul Sartre accused the American government of the recently named ultimate crime, genocide.2 Their invocation of the Nuremberg Principles, used to try Nazi leaders and officials at the International Military Tribunals twenty years earlier, fell on deaf ears. In 1966, the Vietnam War was seen, as Daniel Ellsberg later wrote, as a “problem” or a “stalemate,” not yet as a “crime.”3 By 1969, this had changed. Revelations about the My Lai Massacre put the issue of war crimes at the center of public debate.4 Russell and Sartre looked more like prophets than cranks. Stuart Hampshire, then chair of Princeton’s Philosophy Department, chided “hardheaded liberals” for deriding what they saw as Russell’s “simple-minded radicalism”; where their theories had failed to yield accurate predictions, Russell’s had succeeded.5 The war was not just a prudential or strategic error. It was a moral crisis.

It was this moral crisis that galvanized moral and political philosophers into action as they began to address the international and interpersonal problems of life and death that the continuation of war made unavoidable. Revulsion at the “value-free” social scientific realism of expertise, seen as the reigning ideology of government, was widespread. So was anger at the conduits of that expertise—the “American statesmen responsible,” Thomas Nagel wrote, “for the more murderous aspects” of policy, in a war “perpetrated,” in Michael Walzer’s terms, “by professionals and experts.”6 In philosophy journals as much as at antiwar protests, calls for an alternative “new morality” abounded as many tried to find ways to hold to account those responsible for the failure of the old.7 The new philosophy of public affairs was part of that challenge. In the founding statement of purpose of Philosophy and Public Affairs in 1971, the editors stated that philosophers should “bring their distinctive methods to bear on problems that concern everyone” to show that “philosophical examination” of “issues of public concern” could “contribute to their clarification and to their resolution.”8

After 1968 and into the early 1970s, as the antiwar movement came to encompass liberals as well as the leftists who had long seen the war as a product of Cold War ideology and neocolonialism, many philosophers took up the problem of morality and justice in war. They synthesized law, ethics, and political philosophy in books titled War and Morality, War and Moral Responsibility, and Moral Argument and the War in Vietnam.9 First, they looked for an ethical basis for the rules of war and turned to theological resources, like just war theory. They also looked to international law and the precedents of the trials of the post–Second World War period. These promised moral constraints on state action abroad similar to what constitutionalism provided at home. Rawls, Walzer, Nagel, and other SELF philosophers tried to carve out a space for a moral theory to judge the limits of war. They positioned their ideas between pacifism and forms of moral absolutism that forbid all violence, on the one hand, and utilitarianism and consequentialism, increasingly sullied by association with the realism of foreign policy intellectuals, on the other.10 They wanted a new set of moral rules—to show how to assess the actions of those who broke them, to condemn those who justified murderous ends by claims of necessity, and to answer the question of who was responsible. It took Walzer a decade to finalize his theory and publish his Just and Unjust Wars (1977), and Rawls would soon set aside the subject of the international realm altogether until the 1990s. But it was out of their attempts to navigate antiwar protests and the moral limits of war that late twentieth-century liberal theories of war and international morality emerged. It was also here that the approaches to “applied ethics” and “public morality” that later dominated liberal philosophy had their origins.11

The war prompted changes in ideas about political philosophy’s scope. With attention fixed not on American institutions but on the terrain of the international, the moral rules were stretched beyond the bounded realm of Rawls’s basic structure. The two realms were, however, treated separately. The international realm had little philosophical relation to the distributive. While the politics of the war was deeply intertwined with questions of welfare and prosperity—and, in the mechanism of the draft, with problems of inequality and citizenship—the philosophical account of war was tied to moral action, not to distributive justice or political economy. The normative and institutionalist turn among moral and political philosophers was well under way, and it would accelerate after the publication of Rawls’s theory in 1971.12 Yet in discussions of war, ideas that had roots in the study of ethics, philosophical psychology, and the philosophy of action, and were concerned with agency, intention, choice, and responsibility, were brought into the terrain of political action and moral conflict. The problem of distributing goods within a community of shared moral values was severed from that of moral responsibility; the action in question was individual-focused and interpersonally justified, detached from political or distributional conflict between groups or interests. This division of philosophical labor was justified by the distinction between ideal and non-ideal theory.13 In subsequent years, critics would argue that this conceptual logic was used to justify the unjustifiable neglect of various political realities that could not easily be accommodated within the structures of ideal theories of justice, particularly persistent forms of class, racial, and gender domination.14 In late 1960s America, such neglect was the result of philosophers focusing on the war without, rather than that within. At this time of domestic disorder, the move to international theory provided a kind of escape valve. Ideas of conflict were externalized, beyond the distributive realm, to the international. The vision of a society founded on moral consensus was thus maintained.

Conflict was also individualized and moved to the terrain of individual ethical decisions. The trajectory begun in debates about civil disobedience continued in those about war. By the start of the 1970s, the challenge of ascribing individual moral responsibility had displaced that of holding citizens and states politically responsible for war and its conduct. The problem of civil disobedience had been concerned with defining when citizens were justified in breaking the rules. The problem of war became how to define when those who waged it were justified in ignoring them. To explore this, philosophers looked not to conflicts between agents but to the internal conflict experienced by agents faced with the “moral dilemma” of choosing between moral principles as they decide on a course of action. Ethical choices in war became a test case for a new vision of applied ethics in which general principles could be established, agreed on, recognized, and applied to particular cases in order to understand what kinds of actions were morally permissible. Applied ethics soon also encompassed medicine, law, and business. But the fact that moral thinking about war, with all its extremities and appeals to necessity, was one of the first test cases for applied ethics had a wider impact on the development of moral and political philosophy. Its focus on dramatic and extraordinary moral choices was soon imported into other realms of inquiry. Crucially, it was imported back into the realm of domestic politics, which was turned into a case of its own: public morality.15

In these debates of the late 1960s and early 1970s about individual agency and moral principles, liberal philosophers continued to respond rapidly to political events. They also continued the ambitious search for a general moral theory that Rawls and his postwar contemporaries began. The challenge of finding general principles to cover many possible cases would lead philosophers into complex territory. It took a particularly thorny form when it came to politics. As such, the Vietnam-era attempts to theorize politics with the tools of philosophy were especially generative: from these debates emerged a distinctive liberal philosophical view of political action as a series of choices in which moral principles clashed, or were confronted by the claims of necessity. Thinking about what was permissible according to general principles and rules also entailed thinking about the limits of morality—the point where morality ran up against other kinds of claims. Here the messy world of Weberian politics and “dirty hands” entered into the study of ethics. The relationship of private and public morals was opened up to philosophical debate.

This had unintended consequences. The concern with dirty hands, necessity, and guilt served to blunt the force of alternative contemporary proposals for dealing with wartime responsibility. One of the consequences of the turn to war taking place on the terrain of individual agency and against the backdrop of Rawls’s ascendant institutional distributive theory was the neglect of institutional, corporate agents—the army, the bureaucracy, the state. Political philosophers worried about their students, the draft, and militarism, but said less about the military and the corporate universities they worked for. As they sharpened their ability to deal with moral dilemmas, their capacity to diagnose these institutional agents was blunted. Longer-term institutional changes, notably the transformation of the army, were largely ignored.16 Instead, philosophers exchanged ideas about collective responsibility for a focus on the responsibility and punishment of individual leaders. In these debates, the philosophical view of moral wrongs and the delineation of ethical constraints on war became more sophisticated. But understood against the backdrop of the public discourse on war and responsibility, their arguments, in the end, were often less demanding than the antiwar moment required. In the realm of war, the proliferation of a philosophy of moral rules and limits acted as a license as much as a constraint. Political philosophers began to use moral principles as a political weapon. But the appeal to those principles also signaled a kind of retreat.


At the start of the 1960s, the Nuremberg Principles and the London Charter, which had set guidelines for constraining war and defining war crimes after the Second World War, had faded from political view.17 When the trial of Adolf Eichmann captured international attention, Hannah Arendt focused her account of it more on conscience and the nature of evil than on international legal rules and their infraction.18 For Judith Shklar, this lack of interest in Nuremberg reflected a liberal blind spot. Legalist ideology, which obfuscated the political and non-neutral nature of law, prevented liberals from confronting the fact that the trials had been a tool for coping with the past, not a legal precedent or set of guidelines for the future.19 Shklar thought that was in some sense for the good: adopting strict Nuremberg definitions of “aggressive war” would serve, just as the older categories of just war theory had, to legitimate the kinds of war left out of the category as “‘defensive’ in purpose, respectable, and even morally desirable.” Yet she did not see this as an immediate prospect: “The distance between this outworn conception of the morality of war and the present actualities of warfare seems too great to make the survival of the theory of the ‘just war’ likely.”20 Shklar was right that the Nuremberg Principles and just war theory could be used to legitimate rather than restrain war.21 But she was wrong that they would not survive. They were being revived as she wrote.

The search for an ethics of war began not within philosophy but in the streets and in the courts. As conscientious objectors and draft refusers looked to justify their opposition to the Vietnam War in particular, rather than to all wars, appeals to the Nuremberg Principles and just war theory proliferated.22 In United States v. Mitchell (1966), David Henry Mitchell used the Nuremberg Principles to justify refusing induction as a means of disassociating himself from America’s guilt of war crimes under international law.23 Though he lost his case and appeal, Mitchell’s defense was repeated in subsequent cases.24 The Nuremberg precedent became the standard fare of antiwar petitions like “Individuals against the Crime of Silence” and “A Call to Resist Illegitimate Authority,” signed by more than twenty thousand people, including Herbert Marcuse, Susan Sontag, and Paul Goodman.25 At the same time, just war theory, reinvigorated as a way of coping with nuclear war, underwent a second revival among Catholic antiwar activists and Protestant theologians.26 Traditionally an alternative to pacifism and the crusade designed to guide statesmen, it offered a framework for judging the ends and means of war: the jus ad bellum (which treats the justice of war); the jus in bello (the justice of the conduct of war); and a set of requirements a war would have to meet to be designated “just.”27 Alongside the Nuremberg Principles, it provided a basis for the moral right to refuse service where a war was unjust, and for avoiding complicity in violations of the jus in bello. It also potentially provided a legal right to refuse criminal culpability and participation in criminal activity.28

As the antiwar movement radicalized, debates over the usefulness of these resources became a proxy for debates about the legitimacy of the protests. Commentators warned against campaigners’ “imprecise” use of Nuremberg and just war principles. The veteran socialist campaigner Michael Harrington—antiwar, critical of the government’s “dangerously anti-libertarian logic” when it came to draft protests, but skeptical of antiwar militancy—dismissed the Nuremberg analogy as vague.29 John Courtney Murray, who played a key role in Vatican II’s Declaration on Religious Freedom and served as a member of the committee reviewing Selective Service classifications, urged caution in the use of just war principles: if just war theory were legally recognized as a defense of selective conscientious objection, citizens would have to be “cultivated” to exercise the discretion demanded by the theory and to prevent the problem of “erroneous conscience” arising.30 Paul Ramsey, a Protestant ethicist at Princeton University (whom Rawls knew and whose book he reviewed), had argued in 1961 for a legal category of “just-war objection,” but now condemned the “legalist-pacifist version of the just-war doctrine” deployed in the call to the March on Washington for Peace in Vietnam and by Students for a Democratic Society (SDS).31 His The Just War (1968) saw just war theory as a set of criteria for realist statecraft to place limits on modern warfare—not nuclear war but the counterinsurgency warfare that nuclear peace enabled.32 It supplied justifications for the architects of the Vietnam War, not its opponents.

Yet many also recognized the broader uses of these bodies of theory—as well as the laws of war and international criminal law—to legitimate the constraint of state action. They had the potential to create a direct relationship between international principles and individuals, bypassing the state and the obligations of citizens. Just war theory, Ramsey wrote, introduced into state politics “the transcendent claims of the person and of humanity” as they had been fixed in “international juridical order.”33 International law had long had appeal for those—both liberals and early neoliberals—looking to constrain or encase the state, but the cosmopolitan dream of international lawyers had faded in the postwar years.34 Now lawyers began to treat the war in Vietnam as a breach of international law, returning to a view of international politics as an arena of principle. Richard Falk, at the forefront of protests by lawyers against the Vietnam War and also involved with the Society for Philosophy and Public Affairs, saw in the Nuremberg Principles “guidelines for citizens’ conscience and a shield that can be used in the domestic legal system to interpose obligations under international law between the government and members of the society.”35

To some philosophers, however, the basis of appeals to theological doctrine or legal precedent seemed thin. The laws of war were weak and nonbinding. Some argued that what was needed was a body of moral theory to which political actors could appeal.36 Rawls again attempted to provide it. In 1968, at a Harvard antiwar rally, he spoke alongside Noam Chomsky, Rogers Albritton, and antidraft organizers and sketched a justificatory apparatus for the defense of selective conscientious objection. This would go beyond “merely religious” or purely “moral” theories like pacifism. What justified SCO for Rawls was neither an appeal to conscience nor merely the appeal to the sense of justice of the majority that he had used to explain civil disobedience—the “political principles conceiving the common good.” What mattered was the breach of the principles governing the waging and conduct of war. Where Ramsey had used just war theory to defend the government, Rawls used it to support the antiwar movement. The injustice of a war generated the right, and in some instances the duty, to refuse service.37

Rawls did not publish on international ethics until old age. But prompted by these debates, he formulated a theory of war in a course titled “Moral Problems: Nations and War” that he taught in the spring of 1969.38 Across a semester of biweekly lectures, Rawls surveyed theories of war and generated his own unpublished account of the limits of war. For Rawls, the morality of war was not an extension of the morality of institutions—the principles of justice. It was an extension of morality in general to the international realm where a set of moral rules and principles, independent of state institutions, applied to international actors and individuals. These would be chosen in an original position by the representatives of states—behind a veil of ignorance like that of the domestic theory—who would agree, in their national interest, to constrain war, as part of their natural duty to preserve their just institutions. The principles would include the laws of nations and of war and peace, familiar from standard international legal doctrine, as well as traditional prohibitions on conduct—“the natural duties that protect human life.” Here Rawls relied not on Nuremberg, or on more recent theories of international law, but on J. L. Brierly’s classic textbook The Law of Nations (1928) for an old-fashioned account of international law that turned on the principles of sovereign equality and the duty to uphold treaties and avoided the political complexities of decolonization or international organizations.39 Like just war theory, Rawls’s account tried to move beyond the poles of moral absolutism (a set of moral rules and prohibitions that could not be overridden) and reason of state (which justified a great deal in the name of state stability). It also followed the distinction between judgments about the war and its conduct.40

For Michael Walzer, it was the problem of war crimes, rather than draft refusal, that first led him to just war theory. Figures as different as Bayard Rustin and Dwight Eisenhower, he wrote in a 1967 article in Dissent, were offering the same theory of moral judgment in wartime. Whether pacifists or militarists, they focused on the justice or injustice of a war and excluded the idea that whatever the justice of the cause, there were moral limits to be placed on its conduct.41 But some acts could never be justified by the claim of “military necessity” or by the aims of war. The constraints on means that fell under the jus in bello—the protection of noncombatants, for example, and the ethical treatment of prisoners—should bear no relation to the ends of war. A series of absolute ethical distinctions had to be drawn. With American search-and-destroy missions in southern Vietnam targeting noncombatants, the line between soldiers and citizens that designated civilian immunity was a priority. By 1971, Walzer claimed that acts that eradicated the soldier-civilian distinction, like the firebombing of cities, were almost impossible to justify.42

For Walzer, this reorientation toward the conduct of war was a step toward the kind of moral theory of limits he associated with the account of limited, alienated citizenship in the liberal state. In his first book, The Revolution of the Saints (1965), he had differentiated a Catholic idea of limited morality, visible in just war theory with its ethical constraints, from a Protestant all-encompassing morality, seen in sectarian devotional forms of citizenship and the idea of a crusade.43 On this view, if all are crusading “saints” under God, there is little distinction between combatants and noncombatants; the same was true in a democratic state in which all citizens serve equally in the military to protect their community. Just war theory, with its stringent adherence to the combatant-noncombatant distinction, challenged this thicker idea of citizenship.44 Yet, as Walzer argued elsewhere, the private lives of citizens—the fact, in his terms, that most people, most of the time, do not want and cannot afford to be involved in politics—meant that this thicker idea did not hold in practice.45 Just war theory, on this view, was more realistic in an era when not all citizens served.

Walzer’s move to define limits and distinguish categories of combatants anticipated a major shift in debates about the war that took place after 1969, when war crimes became the focus of politics and theory alike.46 Clergy and Laymen Concerned about Vietnam published In the Name of America, a collection of evidence of American war crimes.47 Seymour Hersh published his exposé of crimes in Vietnam. His revelations about the My Lai Massacre transformed the debate. There was no longer doubt that war crimes had been committed. The question was what should be done. What did the breach of moral conduct that this represented mean for those directly, or indirectly, responsible? Soon after Rawls treated the problem of refusal and the justice of the war, these problems were submerged by concerns with the justice of its conduct.48 The resources of international law, the Nuremberg Principles, and just war theory, which had enabled justifications for draft refusal on grounds of the injustice of the war itself and the crime of aggressive war, were repurposed to explore its criminal conduct.49

This reorientation toward conduct and the focus on noncombatants had a number of implications. As particular areas of war were delineated, others were set aside. One broader consequence of the shift to the conduct of war was the philosophical sidelining of the political and ideological issues around the war. Russell’s International War Crimes Tribunal and the parts of the later antiwar movement bolstered by the international anti-imperialist left had linked US war crimes to geopolitical questions about the Cold War, decolonization, and neocolonialism.50 Some to the left of the philosophical mainstream, like Walzer, Sidney Morgenbesser, Kai Nielsen, and other members of the New York branch of the Society for Philosophy and Public Affairs, continued to comment on the politics of war.51 Walzer himself would develop an expansive just war theory that included both crimes of aggressive war and crimes of conduct in war; he would use it both to justify opposition to the war in Vietnam and to designate Israel’s position in the Arab-Israeli war of 1967 as just.52 But the focus on conduct and the absolute moral limits to war abstracted from these questions. Most philosophers looked to conduct rather than to politics, and to the actions of individual soldiers and leaders rather than citizens. Just war theory made distinctions that the morality of crusading citizens masked—between soldier and citizen, and between the ethical means of war and its political ends. Once moral and political theorists like Walzer began to make distinctions like these, others went further—in precisely the way Shklar anticipated. Walzer’s focus on conduct and noncombatant immunity was a sign of things to come.


As more philosophers associated with SELF and the Society for Philosophy and Public Affairs turned to theorize war, their dissatisfaction with the inadequacies of existing frameworks became clear. While lawyers like Nuremberg prosecutor Telford Taylor affirmed the capacity of the laws of war to address war crimes, some philosophers saw the laws as “morally unattractive” and by themselves insufficient to constrain war or the state.53 The legal philosopher Richard Wasserstrom argued that because they were “not a rational, coherent scheme of rules and principles,” their silences legitimated acts left out of the legal code. Unless a soldier was ordered to do one of the few proscribed acts, there was no “readily applicable general principle to which he can appeal for guidance.”54 Moreover, international law had failed to change existing assumptions about war, which dismissed morality. The realist position, derived from General Sherman’s “declaration”—which implied that, because “war is hell,” anything goes—was still taken seriously in both the theory and practice of international relations. So was the argument that without positive international law and the machinery to enforce it, the morality embodied by those laws was irrelevant.55 For SELF philosopher Marshall Cohen, the laws of war failed to recognize that morality came first. A “more rigorous” conception of the morality of war that corresponded “more convincingly with fundamental principles” was required to provide the constraints that realist accounts of international relations did not.56 Moral principles could not be “subordinated” to other interests.57 But what should those principles be?

When Rawls and Walzer suggested versions of just war theory, they were looking for a set of moral rules that could go further than international law. They sought to carve out a space between moral absolutism and pacifism, on the one hand, and utilitarianism and realism, on the other. Others now also sought a moral theory to navigate these poles of moral absolutism and utilitarianism. At the turn of the 1970s, philosophers were, in general, challenging utilitarianism. Rawls advocated justice over utility. Other attacks on utilitarianism, like the British philosopher Bernard Williams’s, were increasingly influential, particularly in discussions of moral responsibility.58 When Williams attacked consequentialist theories, he argued that they cut out the idea that “each of us is specially responsible for what he does, rather than for what other people do.”59 In the wake of the Vietnam War, the reputation of utilitarianism worsened. Its critics often fused it with a species of realism, on account of its ideological association with the state and its war-making capacities and calculations. By allowing that all means could be justified in the name of greater utility, it justified murderous means. A political critique of utilitarianism thus supplemented the philosophical one. “Bad moral philosophy . . . under the influence of bad social science,” Stuart Hampshire wrote retrospectively, was complicit with the wrongs of war. Its “computational morality”—its reliance on forms of cost-benefit analysis where incommensurable goods were traded off against each other—was an “obstruction” to reform.60 The affinity between utilitarianism and bureaucratic ideology was, Alasdair MacIntyre later argued, not “just a matter of resemblance.”61 Utilitarianism created a false confidence among policymakers who believed the “non-propositional and unprogrammed elements in morality” could be dismissed or controlled.62 New Left anti-statism here joined with liberal critiques of bureaucracy, militarism, and paternalism to put utilitarianism under pressure. For those who cared about conduct, a theory of war could not be utilitarian: its evasion of responsibility and defense of means-end reasoning ruled it out.

In the search for alternatives, the modern approach to applied ethics was born. Many began to bring to bear tools from linguistic analysis and philosophy of action to posit moral rules, delimit what was permissible, and decide who should be protected in war (irrespective of its ends or politics) and who could be held responsible for it. Philosophers now used abstract, personal, and interpersonal modes of justification to explore the most concrete of existential questions of life, death, and killing—questions that would remain at the core of applied ethics.63

One of the most influential arguments had been put forward over a decade earlier: “the doctrine of double effect.” This old idea had been reintroduced into linguistic philosophy by Elizabeth Anscombe. It had a long afterlife in ethics. By the Vietnam era, Anscombe was known for her blistering critique of modern moral philosophy.64 She had led Oxford’s internal rebellion against the noncognitivist theories of the likes of A. J. Ayer that denied the truth or falsity of moral statements. She was also known for her opposition, alongside Philippa Foot, to the university’s decision in 1956 to award President Truman an honorary degree. Anscombe had condemned unlimited objectives in war. She argued that the distinction of “legitimate” killing from murder had “barbarous” consequences, as did the doctrine of collective responsibility (which had been used to legitimate civilian deaths in the atomic bombing of Japan but was nonetheless still defended in a “lugubriously elevated moral tone”). Exploring the common claim made about Truman’s intentions—that he had not aimed to kill innocents, but to end the war—she argued that the choice to kill innocents to achieve one’s end was always murder. Truman’s was not a borderline case. Anscombe thought that neither emotivism (which reduced morality to the expression of attitudes) nor consequentialism (which justified means by pointing to ends) succeeded in maintaining the prohibition on murder in war.65 An absolutist moral theory was needed—not pacifism, but one that would nonetheless demand firm rules to prohibit the murder of innocents.

Anscombe turned to the doctrine of double effect. This stated that it is sometimes permissible to bring about as a merely foreseen side effect a harmful event that would be impermissible to bring about intentionally. In Intention (1957), Anscombe described intentions as dependent on actions and circumstances. She stressed that action should not be understood in the terms of the natural sciences but in those of ordinary social life—the motives, intentions, desires, and explanations of agents themselves, their non-observational or practical knowledge.66 After Donald Davidson’s intervention in philosophical debates about agency in 1963, this account of intention was dismissed, alongside other “anti-causalist” explanations for action, as ordinary language philosophers swapped piecemeal analysis for Davidson’s general theory of meaning.67 But Anscombe’s ideas nonetheless became influential in ethics—particularly her concern with why some actions can be intentional under some descriptions but not others, and how others can be unintentional, even if they are understood as intentional when described as such. This insight was picked up in the doctrine of double effect. Double effect offered a way to delineate what counted as moral conduct and to challenge the casual utilitarianism of war policy without collapsing into pacifism.

Part of double effect’s appeal was its wide application, beyond conduct in war. For some, these applications were not wide enough. In “The Problem of Abortion and the Doctrine of Double Effect” (1967), Philippa Foot pointed to its limitations in the context of abortion.68 Like Anscombe, Foot attacked subjectivist and utilitarian theories that equated the badness of failing to prevent an evil outcome with perpetrating it. She wanted finer ways of delineating what was morally permissible. Foot began with a thought experiment. A runaway tram is barreling along a railway track and reaches a fork: if the tram goes one way, it will kill five men who are working on one track; if the driver diverts it to go the other way, it will kill one man on the other. What should the driver do?

Thought experiments like this would become widespread, circulating beyond their origins in Oxford analytical ethics. So would “moral dilemma” situations, in which moral agents had to choose between actions with both good and bad results. This particular one was reintroduced in modified form by Judith Jarvis Thomson as the well-known “trolley problem.” (Thomson’s decision-maker was a bystander rather than the driver.) The point of such experiments was to reveal intuitions not by observing ordinary linguistic usage but by analyzing extraordinary situations. Intuitions were checked against general principles that justified or fit other cases, in order to revise them.69 In light of her thought experiment, Foot argued that double effect did not fit our intuitions. If it did not apply in multiple situations, it could not be a useful moral rule. She suggested a simpler distinction: between “doing” and “allowing.” This had even wider applications, most influentially in the field of biomedical ethics—then rapidly expanding under the aegis of new research institutions like the Hastings Institute of Society, Ethics, and the Life Sciences—in which it was redrawn as the distinction between “killing” and “letting die.”70 It could also be deployed to delineate permissible conduct in war: while doing harm was always morally impermissible, allowing harm was not.


Throughout the coming years, moral and political philosophers increasingly used thought experiments and the model of intuition-testing through hypothetical cases to arrive at principles. More immediately, these distinctions had significant consequences for debates about responsibility, action, and decision-making. For the SELF philosophers, these ideas were initially tied to discussions of the war. In the attempt to find moral rules to cope with war, double effect and its alternatives provided the circuitous route to an answer. At a meeting in 1968, Thomas Nagel led a discussion, which began from Foot’s article, on the problem of intention, double effect, and war. It was the first of many discussions of double effect: Charles Fried together with Gilbert Harman introduced a meeting on the topic six years later, and debates about double effect and noncombatant immunity continued in the pages of Philosophy and Public Affairs in subsequent years.71

In response to Nagel, Rawls argued that insofar as double effect provided a non-utilitarian decision rule (and thus an alternative to both intuitionism and utilitarianism), it was compelling. In Anscombe’s rendering, Rawls wrote, it derived from Wittgenstein’s “attack on mental acts as special experiences or private acts.” It provided a way of maintaining a strong prohibition on murder in war that stopped short of pacifism.72 It made moral rules flexible, without lapsing into utilitarianism. Decisions were not resolved by a balancing of goods and evils. “It makes [absolutism] less restrictive; adjusts it to the demands of real life,” Rawls wrote. “It forbids absolutely only certain means and chosen ends, and not foreseeable though unintended consequences. By suitably choosing our means and ends, we can live within its constraints.” And yet it was simply a “local small scale restriction with no apparent intuitive basis.” Only sometimes would it lead to “the correct conclusion.”73 Such objections were relatively mild. When Anscombe, who opposed contraception, used her typology of acts and intentions to argue that the rhythm method did not count as contraception, Bernard Williams and Michael Tanner accused her of cavalier absurdity: “Like sophists throughout the ages, she combines a commonsense bluffness against other people’s distinctions, with the most sensitive indulgence to the niceties of her own.”74 Double effect could be used to legitimate intuitions and to license as much as to constrain.

Like Anscombe, Rawls wanted to decide what in warfare should be regarded as “morally impossible.”75 He was less concerned than some of his contemporaries with the moral dilemmas of individual agents facing tough decisions. His theory was designed to provide preemptive solutions to such dilemmas, to limit the tyranny of having too much choice. But in war, where the principles of institutional justice did not easily apply, such dilemmas required different solutions. Because so much of Rawls’s account of war flowed from the broader moral theory he had begun to call his “theory of right”—the part of his theory that dealt with relations between persons and included the natural duties that bound individuals independently from institutional connections—international morality was determined less by institutional principles than by humanitarian duties to avoid harm, to aid the needy, and so on.76 What happened when these duties conflicted—for instance, when a soldier might need to kill a man to save others?

Rawls dealt briskly with such dilemmas. He wrote that his priority rules—chosen in the original position to give an order to moral principles—would serve the same purpose as double effect, to provide absolutist limits to the calculus of good and evil, but with more success.77 There was no need to resort to arguments from utility or necessity in situations of uncertainty or conflict. Decisions could be made by reference to the rules. Yet in war, a situation of noncompliance, the principles needed extra support. Wanting to connect these ideas to his contract theory, Rawls insisted that all principles of war needed to be those chosen in the original position, and he extended to war his concerns there with individuals and stability. But he also introduced additional principles of stability to solve conflicts in “non-ideal” conditions of partial compliance. These included a system of punishments and penalties and a “principle of individual responsibility”—drawn from criminal and international law—aimed at ensuring that those who broke the rules would pay their due. It was this principle, according to Rawls, that protected noncombatants. But not everything followed from these basic principles. Rawls also provided considerable detail on the more substantive rules of warfare that he claimed would be agreed to in the original position. There would be a ban on weapon manufacturing and the use of those weapons that “necessarily violated . . . the strictures and aims of the just causes of war.”78 “The means of ordinary warfare must not,” he wrote, “involve an attack on the ‘normal life of the country, its persons and insts [sic].’”79 There were strict constraints on the waging and conduct of war, though Rawls allowed for exceptions: “humane interventions” might sometimes be permitted, in violation of the principle that all wars should be wars of self-defense.

The only immovable moral limit on conduct derived from the principle of individual responsibility and its protection of noncombatants. This was that “genocide is always wrong.”80 Rawls was likely taking aim here at his former teacher, Ramsey. In defending the United States against charges of genocide and exploring the justice of counterguerrilla warfare, Ramsey suggested that “insurgents” bore responsibility for enlarging “the area of civilian death and damage that is legitimately collateral.” This argument seemed to make extremely large numbers of collateral deaths permissible.81 Rawls addressed this directly, writing that it was necessary for his social contract theory to “explain the absolute prohibition concerning genocide.” This strikingly minimal prohibition aside, Rawls thought the general “rejection of absolutism had been correct.”82 He did not say whether US acts in Vietnam amounted to genocide. Given his account of intention and his definition of genocide as the deliberate “destruction” of a people, “in the sense of a population with a distinct culture,” he probably did not think so.

Here Rawls was trying to find a way of assigning responsibility for wrongs. The tendency of recent philosophy—and Rawls’s own—had been toward deflationary views of responsibility. J. L. Austin’s influential “Plea for Excuses” was at core about how people account for and explain their responsibility for their actions. In distinguishing between “accidents” and “mistakes,” Austin pointed to the difference between acts that go wrong because of circumstances outside the agent’s control and acts that go wrong through the agent’s own error.83 Much linguistic philosophy following Austin and also Peter Strawson implicitly rested on a conception of agents as vulnerable to contingency, flux, and fortune.84 Such ideas also fit with the kind of political arguments that were increasingly deployed in institutional debates about welfare states and social insurance schemes, in which personal responsibility and desert were downgraded.85 In Rawls’s theory, these concerns played out in his rejection of desert as an institutional basis for distribution. Yet his use of a principle of individual responsibility in his ethics of war (and elsewhere, his conventional account of retributive justice) indicated that he was quite content to assign to different social realms and practices—of distribution and retribution—different notions of responsibility.86 When it came to his account of moral persons, moral feelings, or natural and reactive attitudes, Rawls, like Strawson, saw blameworthiness as crucial to moral responsibility and personhood.87 Despite his discomfort with notions of merit and desert, Rawls suggested that it was this principle of individual moral responsibility that ultimately grounded the rights and wrongs of war, in the absence of a stricter legal and moral code.

Utilitarianism might let agents off the hook, but a strict absolutism was too punitive or unsustainable. Yet anything less than absolutism might risk being too permissive, like the laws of war. In general, Rawls did not share the philosophical urge of some of his colleagues to make the rules as simple or as general as possible. By contrast, when Nagel turned to problems of war, he tried to find an absolutist alternative that, like Foot’s, was simpler than double effect and that did not require Rawls’s complex apparatus. Absolutism, he argued in “War and Massacre” (1972), forbids doing certain things to people. It does not forbid bringing about certain results. In war, there exist absolute prohibitions, acts that cannot be done morally—acts that, if done, no argument or justification can make “all right.”88 For Nagel, it was possible to extract those prohibitions from our everyday moral principles—the rules we accept in everyday life. Absolutism about murder had “a foundation in principles governing all one’s relations to other persons, whether aggressive or amiable,” Nagel wrote. “These principles, and that absolutism, apply to warfare as well, with the result that certain measures are impossible no matter what the consequences.”89 “If there are special principles governing the manner in which we should treat people,” Nagel went on, “that will require special attention to the particular persons toward whom the act is directed, rather than just to its total effect.”90 Eliminating all weighing of consequences from political thinking was impossible, so absolutism was not a substitute for utilitarian reasoning, but a limit on it.

Yet absolutism and utilitarianism involved two different ways of viewing the world. Absolutism was associated

with a view of oneself as a small being interacting with others in a large world. The justifications it requires are primarily interpersonal. Utilitarianism is associated with a view of oneself as a benevolent bureaucrat distributing such benefits as one can control to countless other beings, with whom one may have various relations or none. The justifications it requires are primarily administrative.91

For Nagel, what you could do to someone was circumscribed by what you could justify. This commitment—to horizontal, interpersonal justifications and to the rejection of executive decisions that neglected individual persons for the sake of general welfare or a possible future—applied even in war. Was it possible to justify to a victim what was being done to them? The impermissibility of murder and murder in war was derivable from this general requirement of interpersonal justifiability. Adherence to it buttressed the protected immunity of noncombatants and discredited the utility calculus as a guide to action when dealing with the “problem of means and ends.”92 Nagel’s alternative to utilitarianism, like Rawls’s, was an interpersonal ethics. But where Rawls was ultimately concerned with the institutional contexts of interpersonal relations, Nagel focused on the universal moral rules arising from them. Where the young Rawls took aim at the institutions of the administrative state, Nagel extended the attack to utilitarian reasoning in emergency politics. In doing so, he challenged the idea that there is a form of politics, like the politics of war, where anything goes and emergency ends justify all kinds of means. What was true of war was true of the rest of social and political life. No elaborate theory or specific principles of war were needed; simple rules, built from fundamental moral principles, could provide a moral limit to action.

Such confidence in the capacity and flexibility of moral theory, and in the applicability of general principles to particular situations, was a staple of the new philosophy. Nagel here joined Rawls in taking an interpersonal rather than institutional approach to war. Yet Nagel was ready to acknowledge the need for absolutist moral rules, even if they could not accommodate human failings. Even in war, some means were never justified. That, for Nagel, was how absolutism retained its force even when moral rules were violated. The rules remained in place, even when they were ignored. The trouble with absolutism was that real-life agents break the moral rules, as even self-declared absolutists like Nagel knew well. “We have always known that the world is a bad place,” he wrote in the aftermath of the My Lai revelations. “It appears that it may be an evil place as well.”93

The commitment to finding moral rules and principles of war persisted among philosophers. But the approach to moral questions it initiated was not without critics. The response of the British political philosopher Brian Barry was particularly damning. Barry, who studied with H.L.A. Hart at Oxford and whose less systematic approach to philosophy was typical of the Oxford style, would become, following the publication of his Political Argument in 1965 and his founding of the British Journal of Political Science in 1971, one of the most influential political philosophers in Britain.94 For Barry, these “moral absolutists” had tried to find a middle ground between theories that let all actors off the hook and theories that held them responsible for all the consequences of their acts. Double effect was a casuistical symbol of the tortured results, an attempt to limit moral liability that had gone too far. It introduced too much uncertainty, justified too much, and, in its willingness to let people off the hook on account of human failings, let perpetrators get away with their crimes. Barry later dismissed these wartime debates as examples of philosophical “simple-mindedness.”95 In any case, these debates failed to get at the question that preoccupied many after My Lai: given that the rules had been broken, how could those responsible be made to pay?


These philosophical debates about intention and individual responsibility took place at a moment when many moral, legal, and political thinkers, both outside and within the field of academic philosophy, were debating rival views of responsibility in war. In 1970, congressional representatives invited Daniel Ellsberg and Senator George McGovern to join Hannah Arendt, Hans Morgenthau, and other politicians, political scientists, and lawyers like Richard Falk and Telford Taylor at a conference titled “War and National Responsibility.”96 The attendees debated what approach to responsibility to take. Some worried the talk of leaders’ responsibility was a distraction. Morgenthau argued that it was merely “psychologically convenient” to assuage guilt by trying a few individual leaders.97 Others argued that the “overlegalization” of responsibility obfuscated the political nature of responsibility; the law diverted “attention from our aggregate responsibilities as citizens.”98 For the journalist Jonathan Schell the question was whether responsibility should be assigned to individuals, to mankind, or more simply to “ourselves.”99 But how? It seemed, to many, to be impossible. It was not clear that American citizens could be understood as a collective, or as a corporate agent that could act and be morally responsible (and, potentially, be punished). The distinction between trying ourselves and imposing criminal liability on leaders was, Taylor wrote, “interesting conceptually but not very realistic.”100

These discussions about the relative merits and possibilities of individual and collective responsibility had recent precedents, particularly in debates in the aftermath of the Second World War about whether it was possible for the nation, or the citizenry in general, to be held responsible. In 1945, Karl Jaspers had argued that the German citizenry was politically guilty, but with differing degrees of responsibility.101 Dwight Macdonald attacked the notion of collective responsibility in his memoirs. He thought speaking of the collective responsibility of “the German people” for state violence against the Jews required an implausible organic conception of the state. By contrast, speaking of the responsibility of the “entire white community” of the American South for violence against African Americans during Jim Crow was more viable: that community acted deliberately and often against the state, albeit with its complicity.102 Arendt, meanwhile, argued that while collective guilt was impossible, collective responsibility was not. That was the definition of political responsibility: responsibility for the political world that citizens make together.103 Addressing different concerns, C. Wright Mills argued that “the power elite” as a group could be held responsible for the unequal distribution of power in America. Elites could not rescind political responsibility by blaming luck, fortune, or providence. Moreover, those with far less power to control the shape of the social system nonetheless had a collective responsibility to hold elites accountable.104

In contrast to these arguments, philosophical debate about responsibility in the 1960s had focused largely on moral and legal, rather than political, responsibility and had taken a different direction. In his attack on “legal moralism,” Hart extended to legal terrain the skepticism among linguistic philosophers about ascribing moral responsibility, insisting that moral responsibility was narrower than legal responsibility. Hart, with Tony Honoré, had published an influential account of causality and causal attribution in law.105 In response to a wave of retributivist defenses of punishment—part of the broader backlash against utilitarianism and its deflationary view of responsibility—Hart provided an alternative that combined aspects of utilitarian defenses of punishment (their forward-looking nature, which rendered the suffering of the punished justifiable by its deterrent effect) with a defense of the principle of personal responsibility (which retained as morally relevant the distinction between innocence and guilt).106 The law, Hart wrote, can hold a person responsible for things done by accident. Under the cover of strict liability, that person can be responsible for things done by others. Morally, however, they could not be held responsible for things they could not have avoided doing “or for the things done by others over whom they had no control.” For Hart, that way of assigning responsibility conflicted with what it meant to have a morality and the commonly accepted features of ordinary morality itself. For instance, someone could be said to have moral responsibility if we could blame them for doing something wrong. But if we think they could not have done otherwise, we do not blame them. They cannot be described as morally responsible. A legal system ought not to make people liable for what they did by mistake.107 Punishment and responsibility should track moral blameworthiness. The bar for that should be set high.108

Different arguments were made to similar ends by the philosopher Joel Feinberg, who argued that it was conceptually implausible to find every German (or American) guilty for acts done by every person acting under the authority of the state. He described moral responsibility as hinging on fault rather than merely liability. With law, by contrast, a person could be liable without being at fault, and liability could be transferred to other individuals or collectives, or across generations.109 Arendt, though critical of Feinberg’s abstraction from politics, agreed that guilt, unlike liability, cannot be transferred, yet she argued that liability need not imply blame in the same way guilt does. Guilt and blameworthiness were tied to particular agents and were thus much harder to establish. Only those directly at fault could be held responsible.110

These arguments were not the place to find a robust way of ascribing responsibility for war crimes. The debate over how much people could be held responsible for their own choices when they might not control the circumstances of those choices continued, with the grounds for responsibility becoming increasingly attenuated.111 Meanwhile, some sought to show the implausibility of the idea that collectivities or social systems could bear moral responsibility. They acknowledged that in ordinary language we might hold collectivities responsible, but that did not make the members of a collectivity also, or equally, responsible. Others saw a difference between organized groups—an “armed forces unit acting under command”—and random collections of individuals, but insisted that any case for group responsibility had to begin from the fact that “we can assign responsibility only to persons.”112 “Persons” now meant individuals. There was little philosophical enthusiasm for doctrines of corporate responsibility.113

So philosophers concerned with war looked to international law instead. The laws of war offered distinctions for delineating responsibility, but legal philosophers continued to find them wanting. The Nuremberg Principles shifted the bearer of responsibility from the nation to the individual, but collective responsibility was still part of the principles. Article 6 of the Nuremberg Charter established a principle of vicarious liability, whereby any member of a conspiracy could be held liable for the acts committed by others. Article 10 derived responsibility from group membership: if a group was criminal, membership counted as an offense.114 The charter imposed significant liability on those who accepted conscription—potentially, responsibility for waging war and, through the doctrine of conspiratorial responsibility, for war crimes. The trials had narrowed the scope for group responsibility. Their principle of individual responsibility implicated citizens less, but it still allowed room for the idea that even if accepting conscription did not make soldiers liable, it nonetheless placed them in a causal chain in relation to war criminality.115

Many draft refusers had made this case. With hindsight, Richard Wasserstrom wrote that the ideas of group criminality and conspiratorial liability did not generate sufficient fear of punishment to be effective.116 Moreover, the laws of war failed by creating misleading hierarchies of responsibility, especially in their distinctions between different kinds of killing. The bombing of cities, Wasserstrom argued, was not morally different from other forms of killing civilians. Punishing those who did the latter while rewarding those who ordered the former was unjustifiable. What was crucial was distinguishing among different categories, rather than ontological kinds, of agents—soldiers and citizens, combatants and noncombatants, volunteers and conscripts, military and civilian leaders, and munitions workers and those in combat.117 Yet even these distinctions did not make assigning responsibility easy. It was hard to apply an ordinary mens rea requirement to a typical combat soldier, and in those cases, responsibility rested on the tenuous Nuremberg definition of “moral choice”: could soldiers defend their action in the field by appeal to “superior orders and duress”? Did they realistically have a “moral choice” to disobey orders that entailed war crimes?118 Clear-cut cases like My Lai aside, it was hard to show full culpability. Because soldiers were habituated to obedience, the plea of superior orders still covered their actions.119

With leaders, things were different. In the early 1970s, philosophical as well as political attention shifted to the responsibility of leaders, for whom it was far easier to satisfy the mens rea requirement of legal culpability.120 Precisely how the burden of responsibility would fall was more complex. As Shklar had pointed out, Nuremberg was in this respect unique, since so many individuals had been so obviously culpable.121 Unlike soldiers, leaders were not subject to formal military discipline. They had more discretion, more time, and more power to reflect and make real choices. Many thought accountability, particularly for My Lai, should go all the way to the top. How to determine it? Wasserstrom argued that bad motives were not an essential requirement for responsibility. It was appropriate to hold leaders responsible both for acts ordered that they knew violated laws of war and also for harder cases. “Actual knowledge” was not required, and strict liability was unattractive. The appropriate test for culpability was “what the leaders ought to have known or foreseen” about their actions.122 This was in effect a reversal of the double effect framework and a claim about its tendentiousness. Good intentions were beside the point. What mattered was foresight. Hart made a similar point: at least from the point of view of criminal law, the known side effects of an action were morally indistinguishable from the action’s intended effects.123

Given this, some began to argue that individuals whose legal culpability could be proven should be tried in an international or domestic court of law. Where there was sufficient evidence, there was no excuse that could get them off the hook. Soon, after the release of the Pentagon Papers in 1971, Noam Chomsky would argue that the evidence was more than sufficient.124 The law, and the legalization of discourse about war, had political intent. Richard Falk had long called attention to the role of law as a political instrument in the antiwar movement.125 He recognized the practical limitations of the call for trials given the restricted political force of international criminal courts and law. At the “War and National Responsibility” conference, Falk suggested alternative mechanisms to investigate the actions of leaders—international commissions or a more active domestic judiciary.126 Other proposals abounded. Speakers called for a committee of jurists; additional legal principles; schemes for domestic legal courts, tribunals, commissions, and forums for returning soldiers to provide evidence (and confess); or new bodies of law that covered environmental assaults, including the category of “ecocide,” first proposed as a way of indicting the destruction of the Vietnamese lived environment.127

Yet many continued to doubt the power of international law to hold even individuals responsible and pointed to the unintended consequences of this legal discourse. Wasserstrom suggested the overlegalization of responsibility neglected moral responsibility, encouraging conscripts to avoid complicity, not in order to be moral, but to avoid punishment.128 Falk, by contrast, organizing with the antiwar movement, insisted that it had important political consequences of its own. Legal responsibility could be used to create politically responsible citizens. Falk consistently tied law to politics, the jus in bello to the politics of the war. He distinguished two “orientations toward crime,” juridical and political: first, the “indictment model,” a conception of crime based on the “plausibility of indictment and prosecution of individual perpetrators”; and second, the “responsibility model,” based on “the community’s obligation to repudiate certain forms of governmental behavior and the consequent responsibility of individuals and groups to resist policies involving this behavior.”129 Instead of emphasizing the juridical nature of war crimes trials—Falk and other lawyers saw them as legally implausible—he explained the tribunal’s mission by reference to the responsibility model.130

Nuremberg might have been an example of a real indictment model of war crimes, he suggested, but in the context of Vietnam its function was political. It would connect individuals to the international, challenge state sovereignty, and build “transnational solidarity with every victim of governmental crime.” The best outcome of invoking Nuremberg was not holding individual leaders to account. Rather, legal principles had an educative function, encouraging citizen action as a form of responsibility. They should “educate the public” about what it meant to depart from “moral and legal standards.” Behind the responsibility model lay the conviction that “individuals of conscience are the most reliable check upon the war criminality of governments.”131 Shklar had described Nuremberg in similar terms. But she had seen its political function as unique to postwar Germany. Falk expanded its lessons: exposing Vietnam policies as criminal was necessary to prevent their repetition. The call for the redress of past wrongs was not a question of legal responsibility and culpability. Legal principles were put to use in the name of future-oriented political mobilization.132

There was one very real way for American citizens to take responsibility for and share the burdens of war, but it was being removed from democratic politics altogether. By 1973, conscription was over. The all-volunteer army was in place.133 With the end of conscription, the most obvious institutional mechanism for putting the idea of collective political responsibility for war into practice disappeared.134 The idea of citizen responsibility had practical force in the context of a conscripted army: citizens who fight can have a substantial causal impact on a war by withdrawing their labor. This practical basis for responsibility was threatened by the new professionalized military and what the military sociologist Charles Moskos characterized as the move toward a “split-level garrison state.” Harold Lasswell’s famous dystopia of the “garrison state” had described a society where civil order was militarized and the distinction between citizens and military personnel obliterated. It had to be updated, Moskos argued at the “War and National Responsibility” conference. The armed forces were now isolated, “more distinct and segmented from civilian society.” As a result, overseas interventions had “fewer political repercussions at home.” The danger to American democracy was not “the specter of overt military control of national policy, but the more subtle one of a military isolated from the general citizenry, allowing for greater international irresponsibility by its civilian leaders.” Only when the consequences of such irresponsibility were felt throughout society could military policy be democratically constrained.135

Antimilitarism, this implied, had distracted liberal and New Left intellectuals, who focused too much on the moral critique of expertise—the prioritization of liberty and personhood as a challenge to bureaucracy and administration. They ignored the fact that it was actually the democratic control of institutions and, in turn, democratic accountability and collective responsibility that were under threat.136 The practical mechanisms by which the burden could fall on “ourselves”—by which the power elite could be held accountable—were disappearing.

In any case, few philosophers had pointed to the draft as a concrete instantiation of collective political responsibility or a way of holding those in power to account. Rawls was an exception. With civil libertarianism ascendant, he nonetheless acknowledged that the institutional changes to the military, wrought by anti-statist campaigns of both left and right, might have consequences for responsibility. Rawls’s worries about militarism led not to an all-out opposition to the draft, but to a concession of its importance. In an unpublished essay on military recruitment schemes, he described the “professional and market military” advocated by the libertarian right as inflexible, expensive, and potentially an officer corps in service to specific group and class interests. It would provide an army always ready for “neo-imperialist adventures abroad,” without the potential constraint on war that political opposition to the draft allowed in the case of unjust wars.137 Like the standing armies that once preoccupied Renaissance thinkers, Rawls thought professional militaries had tyrannical implications for liberty. Citizen armies, by contrast, were more cautious and would provide a check on aggressive international politics, if citizens had the right to refuse service. As an institutional mechanism that gave substance to the idea of citizen responsibility, such armies prevented the slide from republic to empire.138

Rawls stopped short of the diagnosis that his postwar anxieties about the state might have entailed and did not pursue a critique of the militarization of the state or the depoliticization of the military. America was showing itself to be still more warfare state than welfare state. Spending on strategic bombing abroad and the so-called “war on disorder” at home outstripped spending for social democratic ends.139 In light of these wars, Rawls might well have deployed his theory of justice to judge the moral limits of such a state. But his institutional theory was not put to work against the war, nor against state institutions. The concern with concrete legal and institutional mechanisms stopped here. Amid the end of conscription and the libertarian mood of American liberalism, few took up the idea that the draft could be the mechanism for making sense of what it meant for “ourselves” to bear the burden. In the early years of the 1970s, just as the philosophy of public affairs became overwhelmingly institutional and distributional in focus, the distribution of the responsibility of leaders for war was dealt with separately, as part of the turn to applied ethics, law, and individual conduct.


After Nuremberg, it had not been inconceivable that leaders would be made to pay for their wrongs. During Vietnam, there was less optimism. As the war crimes revelations faded from view, it became likely that civilian and military leaders would go unpunished. What was to be done? In debates about the responsibility of leaders, philosophers of public affairs now established a framework for interrogating “public morality.”

The political trouble with responsibility was not only that it was hard to distribute. Even locating the responsible parties proved difficult. Was it possible to find “discrete criminals” in the faceless bureaucracy of the American state?140 Opponents of war crimes trials and “extralegal judgment” appealed to this difficulty. As the legal philosopher Sanford Levinson noted, they invoked the idea that government was a “Kafkaesque world of institutions without actors, a mad kind of world where individual activities (though not ‘decisions’) culminate in a world that no one desires and for which no one is responsible.” Organizational complexity minimized the possibility for finding legal culpability. By contrast, those who called for extralegal trials of individual leaders took a view of politics based on great individuals, great events, and great decisions, in which responsibility could easily be located and trials should appeal to morality as much as law. The latter group, Levinson pointed out, were too willing to find individuals guilty. The former were too unwilling. In the face of the refusal to take seriously the charges of war criminality, Levinson thought it better to find some individuals responsible than none at all. It was more productive to focus on individual responsibility than on corporate guilt, which he saw as a vague notion that enabled an evasion of responsibility and rarely entailed corporate punishment. Though skeptical of extralegal proceedings, he suggested that a blanket rejection of attempts to enforce the law, in the face of the state’s refusal to do so, brought to the fore questions about the legitimacy of norm-enforcing institutions themselves.141

Faced with the choice Levinson diagnosed, many philosophers, increasingly frustrated by the government’s lack of accountability for war, focused on individual moral responsibility, setting aside the questions of bureaucratic, corporate, and dispersed responsibility. The same names were mentioned repeatedly as the architects of war to be held responsible: McGeorge Bundy, Henry Kissinger, Robert McNamara.142 To discuss the crimes of such leaders, philosophers used the conceptual frameworks they had developed earlier. Philosophical attempts to navigate between utilitarianism and absolutism to find simple rules of war here met with debates about the relationship of law, morality, and punishment, which had proliferated thanks to the concern with civil disobedience. Philosophers who had debated what infringements on citizens were justified in emergencies now asked the inverse question of what leaders could justifiably do in equivalent emergencies. The ethics of dissent was redeployed to deal with the morality of powerful agents; to explore not what should happen to individuals who break the rules, but how breaking the rules might be justified.

This practical focus made the puzzle of the relationship of law and morals thornier. If the legal and the moral were one and the same, then civil disobedients were criminals and those committing war crimes had to face the force of the law. If they were different, the civil disobedient could be a moral agent. Could the war criminal? This did not pose a problem for those like Wasserstrom, who saw the laws of war as senseless.143 But others examined the traditional justification for war criminality: that “military necessity” justified the breaking of a rule in a given situation. Some political philosophers developed a particular approach to discussing the morality of political action, one that extended the views of intention, agency, and moral decision-making by appeal to moral principles into the political realm. This consolidated the move from applied ethics to a new field of inquiry termed “public morality.” It was also here that some of the earliest critics of the application of general ethical principles to particular political cases established an alternative vision of moral action and decision-making—the appeal to the dilemma of “dirty hands,” in which the demands of ethics and politics clashed.

When Nagel presented his “War and Massacre” at a Philosophy and Public Affairs symposium in 1971, the debates about moral absolutism, responsibility for war crimes, and the fate of political leaders were brought together. He argued that certain moral rules can never justifiably be broken and that even if they were, the rules remained. The two utilitarian philosophers who responded to his paper, Richard Brandt and Richard Hare, retorted that Nagel’s absolutism was not without exception. He unwittingly conceded to utilitarianism, they said.144 His theory unraveled, making war crimes justifiable in exceptional circumstances, which rendered the category of the “war criminal” nonsensical in those cases. If a bad or violent act was morally necessary, was it still a crime? Yet Nagel had suggested that ethical dilemmas could not be dissolved as easily as utilitarians claimed. It was not the case that where absolutism failed, utilitarianism stepped in. Utilitarianism would allow the decision of the agent to do wrong to become “all right,” providing them with a justification for acting. Nagel rejected this. The clash of principles could lead into “moral blind alleys,” where no course of action could be justified. Necessity could clash with absolutist principles, pushing up against their limits. Politics could require us to do great wrong. Sometimes, Nagel argued, “these two forms of moral intuition are not capable of being brought together into a single, coherent moral system . . . the world can present us with situations in which there is no honorable or moral course for a man to take, no course free of guilt and responsibility for evil.” To some problems, there were no perfect solutions.145

This framework had antecedents in the philosopher Bernard Williams’s critique of the intuitionist view of ethical conflict as a clash of obligations. On that view, when an agent chooses between obligations, one of these obligations, in the end, takes priority and is understood as the right choice. The “ought” that is not acted upon is eliminated and ceases to have a hold on the agent. Williams argued instead that moral conflict was more like a conflict of desires: after a choice is made, the desire is not eliminated but persists, as a moral residue. Moral conflicts were not solved without “remainder” once an agent had done what they ought, but instead had a legacy in the form of regret.146 As Robert Nozick put it, where certain principles are justifiably “outweighed” or “overridden” by others and someone is wronged by the act that issues from a decision, they may be owed reparation or at least explanation.147 Nagel’s moral impasse had structural similarities. Though he and Williams disagreed about much—Williams would later write of Nagel that he was more “transfixed” by the problems he set “at the expense of possible solutions”—Nagel, like Williams, disputed demanding notions of responsibility and choice. He was skeptical of the idea that it was possible to have a conflict of obligations and principles where nothing was left over after a choice was made or decision taken.148 Something remains.

In “Political Action: The Problem of Dirty Hands” (1973), Michael Walzer adapted a version of this in his account of political agency. The “dirty hands” scenario was one in which doing an act was morally wrong, but it was nonetheless right to do it. That was the challenge: not that an agent gets their hands dirty, but that doing so is morally right—even if, or precisely because, the agent feels guilty at having overridden the rules. Responding to Nagel, Walzer focused not on what was permissible according to moral rules, but on the dilemmas facing agents who break them. He was interested in the dissonance faced by an agent who makes a hard ethical choice, and in what happens to them when departing from the rules might be not only legitimate but a duty. What, Walzer asked, was the psychological experience of the political actor who faces “two courses of action both of which it would be wrong for him to undertake?” Nagel had made clear that when the moral rules were broken, it did not “become all right.” For Walzer, sometimes it was both necessary and justifiable to override moral constraints. Yet even in such cases, there would be a residue. Drawing on Austin, Nozick, Williams, and others, Walzer joined the critique of act-utilitarianism. Act-utilitarians saw each act as justifiable only by its consequences and dissolved the dirty hands puzzle by making wrongs nonsensical. For Walzer, this was not right: when good men do bad things, there is a remainder, in the form of guilt.149

At its simplest, the problem of dirty hands was a meditation on the relationship of ethics and politics and the nature of political responsibility. In this rendering of Walzer’s argument, the threshold at which necessity claims and utilitarian reasoning entered politics was low. His examples included a scenario where a rebel leader must be tortured to find the location of a bomb, but also that of a local politician in a morally messy situation. Later, he suggested that true dirty hands situations arise only in situations of catastrophe, where extreme necessity trumps moral rules in the final calculation. This was a “utilitarianism of extremity,” the point at which politics trumped morality, where the “moral politician” on the national stage can override the rules of war in order to secure the community’s survival.150 Walzer presented three models for understanding “necessary” wrongs: the Machiavellian actor, whose evil deeds are justified only by the consequences; the Weberian “suffering servant,” whose wrongdoings are remedied by guilt; and Camus’s “just assassins,” who are punished for their wrongdoings by death and have their sins expiated. His preferred model was the last. Moral actors ought to accept punishment or do penance for violating a set of rules, as a way of acknowledging responsibility.151

In this way, civil disobedience and dirty hands cases were similar, though the analogy had limits: “In most cases of civil disobedience the laws of the state are broken for moral reasons, and the state provides the punishment. In most cases of dirty hands moral rules are broken for reasons of state, and no one provides the punishment.”152 Here Walzer was likely adapting the view of the dutiful civil rights protester who suffers punishment willingly. In practice, he argued, the moral rules constraining leaders were not enforced. All we have for the agents who break them are “the priest and the confessional.” Punishment was not a realistic possibility. Walzer did not look to Nuremberg or make the case for tribunals. In this respect, the call to accept the “reality” of dirty hands was a deradicalizing move. We need, Walzer wrote, to find a way of “paying the price ourselves.”153 After the war’s end, he suggested that citizens could be held responsible for an unjust war if they did not do enough to oppose it. Yet the moral burdens of war fell differently among citizens. The most responsible were those in positions of knowledge—intellectuals, who had the time and resources to oppose war and who, “like their leaders, and unlike those fellow citizens doing the fighting . . . are in no immediate danger.”154 By then, Walzer had given up on the prospect of punishment. Identifying guilty individuals, he suggested, was more politically important and plausible than punishing them. With no penalties forthcoming, we should be content with the right kind of guilt. He followed Levinson, who suggested that it was better for individuals to be held to account by the public documenting of their crimes. In place of punishment, shame and stigma would have to do.155

In Walzer’s dirty hands dilemma, his ambivalent relationship to the new liberal philosophy was visible. Contemporary critics placed Walzer both in the camp of the new “moral absolutism” and among the new “theorists of casuistry”—who responded to the philosophical foregrounding of moral rules and principles by turning to history, cases, and experience to challenge forms of idealization and abstraction.156 Walzer would further develop his methodological critique of analytical political philosophy in Just and Unjust Wars, its substance captured by its subtitle: A Moral Argument with Historical Illustrations.157 Philosophy, he suggested, had done well by turning to morality. But its focus on abstract rules and hypothetical choice situations neglected the everyday “talk” of citizens arguing about war.158 It was not necessary to theorize moral rules from scratch or put morality back into politics. It was already there, in the experiences of communities and in the habits, actions, and social psychologies of citizens. Walzer framed these arguments as realist and democratic challenges to Rawls and his followers. Compared to rigid thought experiments and trolley problems, the dirty hands dilemma was meant to be more like real life. Given the backdrop of Rawls’s early Wittgensteinian communitarianism, or the description of democratic community invoked by Dworkin or Cohen in their defenses of civil disobedience, the differences between Walzer and Rawls could be overdrawn. Some distance could, however, be put between the institutional practices of communities that Rawls envisioned and the absolutist moral rules and principles of action invoked elsewhere. Walzer capitalized on that distance to make his case.

Yet the problem of dirty hands also became a template for debates about public morality among political philosophers, who asked whether moral principles could apply to both public and private behavior, and what kinds of actions that were privately unacceptable might be necessary in politics.159 Political agency was increasingly portrayed as a dilemma of choice between principles, as the methods of applied ethics were used to explore forms of public decision-making, from torture to nuclear war.160 The appeal to morality, which as part of the defense of civil liberties against a mistrusted state apparatus had posited a source of authority beyond the state and its laws, now provided the philosophical tools for constraining unlimited war. But, as part of a new vision of public morality, it also became a form of political evasion. When philosophers examined the morality of powerful agents, they analyzed dirty hands, moral dilemmas, and tragic choices in isolation from their institutional and ideological contexts, and from non-electoral mechanisms of accountability and scrutiny; they focused instead on extralegal determinations of guilt and psychologized political problems of responsibility. Amid the ascent of other areas of distributional ethics, the distribution of responsibility, power, and agency remained an area of relative neglect.161 Just at the moment when philosophers were becoming most engaged in politics, their theories took a depoliticizing turn. Public morality and public ethics were understood as a sphere dedicated to powerful individual agents in moments of dramatic choice—an extension of the applied ethics that originated in the linguistic analysis of intention, agency, and responsibility.

The debates about agency thus helped carve out the boundaries of the new liberal political philosophy. Paradoxically, they helped consolidate its institutionalist focus by further delineating it. In the aftermath of the dirty hands debates, some philosophers were concerned to explicitly distinguish different sources of interpersonal, private, public, and political morality. Were the acts of individuals within institutions justified by the institutional principles that covered them?162 They turned to the question of whether principles of private, individual morality constrained the public, whether one was derivable from the other, and whether personal principles constrained the application of political principles. For Ronald Dworkin, the permissibility of public acts was constrained by reference to a core “liberal public morality,” a political principle.163 For Nagel, public and private morality had different sources. Impersonal consequentialist considerations applied to institutions, and public morality did not derive from private morality—“the state,” after all, “has no personal life.”164 The moral constraints on public institutions applied to the actors within them, yet the public point of view did not always justify overruling the private. There were limits on what officials could do in office even for institutional interests. Sometimes an agent’s responsibility would be absorbed by the moral defects of an institution, but the “strongest constraints of individual morality will continue to limit what can be publicly justified even by extremely powerful consequentialist reasons.”165

The distinction between public and private spheres of action came to overlay the distinction between institutional and interpersonal morality. Rawls had not conceptualized a private realm. But he distinguished the institutional and interpersonal and strongly associated the personal with the family.166 With legal and constitutional debates about privacy politically charged, and Rawls turning in Kantian mode to stress publicity, philosophers reasserted the liberal dichotomy of public and private.167 Yet the contrast of public and private action did not always map onto the distinction between institutional and individual morality.168 As feminist critics of liberal philosophy like Carole Pateman later argued, it also involved a conflation of the private with the family and the domestic realm. Moreover, though the new liberal philosophy was concerned with the public and the institutional, the concept of the political was rarely to be found.169

Many of these ideas about war, responsibility, and agency among liberal philosophers would become canonical, in part because philosophical attention was turning elsewhere. The long 1960s had been an eventful decade, and the new philosophers of public affairs had reacted by focusing on political action and legal debate. But liberals were exhausted from the decade’s disorders—from the prolonged and unaccountable war, the threat posed by economic downturn, and George McGovern’s defeat by Nixon in 1972. When Rawls’s theory of justice was published, it was taken to embody a postwar liberalism that to many philosophers was still worth saving. It promised the possibility of agreement at a time when consensus was hard to come by. Philosophical attempts to grapple with war and responsibility would soon be submerged by the turn to institutions. Liberal egalitarianism was about to arrive.