Chapter Eight

“There’s Only ‘Ethics’…”

In 1958, in the infancy of the U.S. space program, a twenty-eight-year-old aerospace engineer named Ted Gordon was called into the office of his superior, J. L. (Jack) Bromberg. As chief project engineer for the Douglas Aircraft Company’s Thor program, Bromberg was responsible for overseeing the flight tests for the air force’s new sixty-two-foot-tall intermediate-range ballistic missile.

Bromberg had in mind no idle conversation. On October 4, 1957, the Soviets had launched Sputnik I, the first earth satellite. They followed in less than a month with Sputnik II, this time with a live dog in the payload. With those launches, the Soviet Union had leapfrogged into apparent technological supremacy in the race into space—and shamed the American research and educational establishment into a wholesale reexamination of its goals and outcomes. More important, however, the Soviets had sent tremors throughout the U.S. military and political establishments. Soviet premier Nikita Khrushchev’s threat that the grandchildren of Americans would live under communism was taken with chilling seriousness. The Soviets were known to have an atomic bomb, making them a major threat both to the Western nations and to the newly created third-world countries that had recently spun loose from decades of European colonialism. The Soviet alliance with China, cemented eight years earlier, meant that communism now stretched from the Adriatic to the Bering Strait and down to the China Sea. The Korean War, which cost the lives of thirty-four thousand American military personnel, had already demonstrated the expansionist intentions of the Communists. Now they seemed intent on expanding into space as well. In that turbulent hour, Western military power was inextricably linked to U.S. economic supremacy, which in turn depended on technological success. And the showpiece of U.S. technology was the budding space program.

By 1958, the nation was supporting two parallel space efforts. One, headed by Dr. Wernher von Braun, focused on the Redstone missile developed by the army’s Ballistic Missile Agency—the group that, on January 31, 1958, put the first U.S. satellite into orbit from Cape Canaveral, Florida. The other, centered on the Thor, was an air force enterprise. It was here that Bromberg and his young protégé found themselves in stiff competition not only with the Soviets but with the army. So far, the record was not good. Bromberg, whom Gordon once described as “a hard-driving taskmaster who accepts no alibis, no excuses, no reasons why a task is not completed on time,” was nicknamed “Thorhead” by his team. As head of the program, he had overseen the first three launches of the Thor Intermediate Range Ballistic Missile (IRBM). To his dismay, every one of them failed for technical reasons, some crashing on the launchpad. In a mixture of desperation and hope, he turned to young Gordon with an unprecedented opportunity: to write the countdown for the upcoming launches, and to become test conductor for the Thor program at Cape Canaveral.

Writing the countdown, as Gordon recounts it, was no simple task. Each launch of a Thor required that some three hundred thousand parts all function perfectly at the proper time. Thousands of parameters needed to be taken into account, involving hundreds of precisely calibrated actions by scores of engineers and mechanics. Each one had to be exactly right, and in just the proper sequence. Missiles, as Gordon recalled in his book First Into Outer Space, are “vastly complex, and this complexity is occasionally their undoing. Each system must work, each component part must function when commanded. A missile flight seldom fails because of a basic inadequacy in design; usually a two-bit part fails, a seemingly insignificant link in a basically strong chain.”

So all the two-bit parts were tested and retested—right up to the final ignition. Among the last tests to be performed, with less than a half hour to go, was one involving a switch located on the back end of the missile as it sat upright on its pad. The switch, set to operate the moment the missile lifted off, locked open all the valves that allowed fuel to reach the engines. It was secured by a pin preventing it from opening prematurely, a pin that was to be removed by hand in the final minutes of the countdown. But before the pin was removed, the switch was to be checked electrically to make sure it was not faulty.

Gordon, the retired chairman of the Futures Group and one of the world’s leading futurists, recalls that day vividly. A multistage Thor was on the launchpad. He was in the cramped concrete blockhouse when, to his horror, he saw the control panel light up, signaling that the first-stage main oxygen and fuel valves had suddenly opened. That meant that kerosene and liquid oxygen were being fed into the rocket’s combustion chamber, where they formed a volatile and extremely explosive mixture. The mixture, having nowhere else to go, gushed out of the chamber and dropped to the steel exhaust-deflector plate on the launchpad many feet below. The mechanics and engineers working on the pad scattered in every direction. They knew what could happen: They had already seen rockets on the launchpad engulfed in orange balls of fire, and melting down into white-hot wreckage. They were literally running for their lives, fleeing the holocaust that an accidental spark or electrical shock could ignite.

The spark never came. The oxygen warmed up and evaporated, and the potential inferno reduced itself to an oily but essentially harmless mess. Later, Gordon and his colleagues pieced together what had happened. The mechanic in charge of testing the fuel switch, it seems, had simply reversed the procedure specified in the countdown. He pulled the pin first, and then tried to do the test. That test tripped the switch, sending up the message that the rocket had left the pad, opening wide the valves for the launch and showering the pad with tens of thousands of pounds of fuel and oxygen.

What to do now? The launch was scrubbed, but the rocket could be refueled and the launch rescheduled without great difficulty. No lives had been lost; no one had even been injured. Yet the potential for disaster had been enormous. Had there been a spark, the explosion would have been catastrophic—not only in lives lost but in damage done to the morale, public approval, and congressional support for the budding space program. Had there been a spark, there might very possibly have been no more Thor or Jupiter launches, no continued enthusiasm for the National Aeronautics and Space Administration (NASA), and no landing of U.S. astronaut Neil Armstrong on the moon in 1969. In the absence of that spark, one question remained: What to do about the mechanic in whose hands the whole future of the nation’s space program might have been placed?

As the launch team sat around the blockhouse, Gordon recalls, smoking their ever-present cigars and waiting a half hour for the liquid oxygen to boil off the pad, opinions flew thick and fast. Fire him immediately, someone said. Suspend him with stiff penalties, said someone else. Transfer him to something less dangerous, said another. Above all, they agreed, just get him out of here. There was apparently no dilemma involved. He had done wrong. The only question was how, not whether, to punish him.

To all of which Gordon listened, until the time came for him to deliver his opinion—an opinion that created a dilemma where none had seemed to exist.

“There’s one thing we know about him for sure,” Gordon recalls saying. “He’ll never, ever, as long as he lives, do anything like that again. Maybe he’ll become our most reliable mechanic. Maybe we shouldn’t fire him. Maybe we should promote him!”

In the end, Gordon recalls, that’s what happened. The Thor tests went forward as planned, the U.S. space program flourished, and it was not a Soviet but an American crew that reached the moon. And never again did that mechanic blow a countdown.

Nine Checkpoints for Ethical Decision-Making

The American poet e. e. cummings, writing about the sculptor Gaston Lachaise in the 1920s, coined a phrase that in many ways describes not only aesthetic but ethical processes. Lachaise, he wrote, exhibited “intelligence functioning at intuitional velocity.”

It’s an apt characterization for ethical decision-making. As Gordon’s dilemma suggests, ethical issues can arise when we least suspect them. Just when everything seems to be progressing with utmost order, something can come along and deliver potentially deadly blows before we’ve even begun to grasp their significance. To grapple with them requires rational acts of the mind. But the mind is often called upon to operate without a full understanding of causation, with only a hint of possible consequences, and with little room for reflection. If sound decisions are to be made, the intelligence truly must function at “intuitional velocity.”

The Prisoner’s Dilemma

Why be moral? One of the most common answers comes from the rationalist perspective. It argues that reason, assessing arguments for and against morality, will inevitably conclude that immorality is irrational.

But morality is a complex topic—nowhere more so than in tough conundrums concerning one’s own future. One such conundrum, rooted in a 1950 lecture on game theory given by distinguished Princeton mathematician Albert W. Tucker, seeks to show that reason and morality may not coincide.

The hypothetical case, widely cited in decision-making literature, centers on two prisoners accused of conspiring against the state. They are hauled before a judge, who points out that each has two alternatives: to confess, or not to confess. The judge, in a plea-bargaining arrangement, lays out the deal:

  • If one confesses and the other does not, the confessor will be released and the nonconfessor will be imprisoned for ten years.
  • If both confess, both go to prison for five years.
  • If neither confesses, both go to prison for a year.
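
Laid out as a simple grid of prison terms—the first figure in each cell the sentence of the prisoner choosing the row, the second the sentence of his partner—the judge’s deal looks like this:

                      Partner stays silent      Partner confesses
  Stay silent         1 year / 1 year           10 years / 0 years
  Confess             0 years / 10 years        5 years / 5 years

The paragraphs that follow trace what each prisoner, studying such a grid, is tempted to do.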

Suppose they agree not to confess; each would then serve a single year. But why wouldn’t one of them break the agreement and confess—setting himself free, though plunging the other into ten years’ incarceration? Since that temptation is strong on both sides, why wouldn’t the other confess as well—thereby reducing what would have been a ten-year sentence to five? In other words, would it not be in the interests of each to confess if he had the slightest suspicion that the other was going to do so?

But suppose they could confess in secret. They may have made a solemn pact with one another never to confess. But isn’t it still to the rational advantage of each to confess? And if so, must not each prisoner strongly suspect that the other will do so?

The dilemma? On one hand, it is right to keep one’s word. On the other, it is right to maximize one’s freedom from imprisonment. Furthermore, it is only if both adhere to the promises made in their don’t-confess agreement that each can get off with a single year in prison.

Yet if rationality is the only basis for ethical choice—and that’s a very large “if”—neither will adhere to those promises. By behaving unethically and breaking promises, they enter a crapshoot that might set them free but might make things worse. Yet to behave ethically and keep their word guarantees that there will be some minimal punishment all around.

Tough choice? Indeed—if rationality is the only standard. It is on this question of sacrificing self-interest—even enduring some “irrational” hardship for the sake of the general good—that so much of ethics turns.

That does not mean, however, that there is no logical and sequential process for ethical decision-making. True, we may not be aware that a pattern exists. But that does not mean there is no pattern. We’re surrounded, in fact, by evidence of highly patterned processes that happen at intuitional speed. The jazz pianist improvising in an up-tempo riff is not mentally saying, “Now I must move to a D-seventh chord, since I’ve just come from an A-minor-seventh.” But that’s exactly where she’s moving. Neither does the pitcher, whirling to fire to first base and stop a would-be steal, consciously calculate angle, distance, speed, and torque before hurling the ball. Yet all those parameters have been factored into his throw. Developing real skill at jazz or baseball—or ethics—requires that intelligence fuse with intuition, that the processes be internalized, and that decisions be made quickly, authoritatively, and naturally. For musician, athlete, and moral thinker, making good decisions usually requires a patient investment in process—and plenty of practice.

So far, this book has provided opportunities for such practice by laying out a number of examples, along with a good deal of commentary on ethical thinking. It’s time now to wrap these threads together into a coherent process—not necessarily as a checklist to be applied in the heat of the moment, but as a guide to the underlying structure of ethical decision-making. The following nine steps, or checkpoints, suggest an orderly sequence for dealing with the admittedly disorderly and sometimes downright confusing domain of ethical issues.

  1. Recognize that there is a moral issue. This step is vitally important for two reasons. First, it requires us to identify issues needing attention, rather than to brush past them without another look. Second, it requires us to sift genuinely moral questions from those that merely involve manners and social conventions—or that take us into realms of conflicting values that are not so much moral as economic, technological, or aesthetic. This recognition is not always easy. Nor is it without danger. Too much diligence here can turn us into self-righteous hypermoralists sensing sin at every turn. Yet too little can lead us into an apathy or a cynicism that breezily dismisses even the most compelling ethical challenge.
  2. Determine the actor. If this is a moral issue, whose is it? Is it mine? The operative distinction here is not whether I am or am not involved. In matters of ethics, we’re all involved. Why? Because we all live within a context of community, and communities depend on ethical interrelations. Reminding us that “no man is an island, entire of itself,” John Donne instructed us never to “send to know for whom the bell tolls; it tolls for thee.” So the question is not whether I am involved but whether I am responsible—whether I am morally obligated and empowered to do anything in the face of the moral issues raised. Warning: In some formulations of ethical decision-making, this determination of actors includes a determination of stakeholders. The problem with stakeholder analysis, however, is that the very assumption that there are “stakes” in a dilemma implies an outcome-oriented mode of thinking. Those who venture into such analysis are typically so predisposed to an ends-based utilitarianism that they overlook other ethical principles. That severely limits their options. Rule-based thinkers, after all, couldn’t care less about “stakes,” since what’s at issue is obedience to a fundamental principle so universal that it operates equally for everyone. Both Kantians and utilitarians, however, need to know the actor.
  3. Gather the relevant facts. Good decision-making requires good reporting. That is especially true in making ethical decisions. Not to know the way events have unfolded, what finally happened, what else might have happened, who said what to whom and when they said it, who may have suppressed information, or who was culpably ignorant or innocently unaware—not to know these things leaves crucial voids in the understanding. Why? Because ethics does not happen in a theoretical vacuum but in the push and pull of real experience, where details determine motives and character is reflected in context. Also important to fact-gathering is an assessment of future potential: Robert Frost, in his famous decision-making poem about the two roads that “diverged in a yellow wood,” notes that before deciding which way to go he “looked down one as far as I could” until it disappeared “in the undergrowth.” Part of fact-gathering involves just that kind of peering as far as possible into the future.
  4. Test for right-versus-wrong issues. Does the case at hand involve wrongdoing? Here various tests apply.
    • The legal test asks whether lawbreaking is involved. If the answer is an obvious yes, the issue is one of obedience to the enforceable laws of the land as opposed to the unenforceable canons of a moral code. The choice, in that case, is not between two right actions but between right and wrong—a legal rather than moral matter.
    • The regulations test may kick in even when the law is silent. Are there clearly understood and widely shared codes of conduct within a profession—a journalist’s need to protect sources, a real-estate agent’s obligation to recognize that a potential client “belongs” to a colleague who made a prior contact? While you can’t be arrested for violating these codes, you can quickly find yourself dishonored within your professional guild.
    • The stench test, relying on moral intuition, is a gut-level determination. Does this course of action have about it an indefinable odor of corruption that makes you (and perhaps others) recoil and look askance? The stench test really asks whether this action goes against the grain of your moral principles—even though you can’t quite put your finger on the problem. For many people, it’s a common and surprisingly reliable indicator of right-versus-wrong issues.
    • The front-page test asks, “How would you feel if what you are about to do showed up tomorrow morning on the front pages of the nation’s newspapers?” What would be your response, in other words, if what you took to be a private matter were suddenly to become entirely public? If such a consequence makes you uncomfortable, you are probably in right-versus-wrong territory.
    • The Mom test asks, “If I were my mother, would I do this?” The focus here is not only on your mother, of course, but on any moral exemplar who cares deeply about you and means a lot to you. If putting yourself in that person’s shoes makes you uneasy, think again about what you’re on the verge of doing: It could well be wrong.

    It may be worth noting here that the latter three tests align themselves with our three decision-making principles. The stench test is at bottom a form of rule-based reasoning, asking not about consequences but about visceral principles. The front-page test, by contrast, is a form of ends-based reasoning that looks to outcomes: Only if people know what I’m doing (it seems to assume) will there be any consequences, and consequences are what matter. The Mom test, requiring care-based reasoning, is a form of the Golden Rule that asks you to put yourself in the shoes of another—in this case, a person of high moral stature—to determine the rightness or wrongness of an action.

    If an issue fails one or more of these tests, there’s no point going on to the following steps. Since you’re dealing with a right-versus-wrong issue, any further elaboration of the process will probably amount to little more than an effort to justify an unconscionable act.

  5. Test for right-versus-right paradigms. If the issue at hand passes the right-wrong tests, the next question is, What sort of dilemma is this? Try analyzing it in terms of the four dilemma paradigms: truth versus loyalty, self versus community, short-term versus long-term, and justice versus mercy. The point of identifying the paradigm, remember, is not simply to classify the issue but to bring sharply into focus the fact that it is indeed a genuine dilemma in that it pits two deeply held core values against each other.
  6. Apply the resolution principles. Once the choice between the two sides is clearly articulated, the three resolution principles can be brought to bear: the ends-based or utilitarian principle; the rule-based or Kantian principle; and the care-based principle based on the Golden Rule. The goal, remember, is not to arrive at a resolution based on a three-to-nothing or two-against-one vote. Instead, it is to locate the line of reasoning that seems most persuasive and relevant to the issue at hand.
  7. Investigate the “trilemma” options. This step, listed here for convenience, can kick into action at any point throughout this process. Is there, it asks, a third way through this dilemma? Sometimes that middle ground will be the result of a compromise between the two rights, partaking of each side’s expansiveness and surrendering a little of each side’s rigidity. Sometimes, however, it will be an unforeseen and highly creative course of action that comes to light in the heat of the struggle for resolution.
  8. Make the decision. This step, surprisingly, is sometimes overlooked. Perhaps that’s because the intellectual wrestling required in the previous steps can seem exhausting, leaving little energy for the final decision. Or perhaps it’s that a quasi-academic mind-set comes into play, confusing analysis with resolution and failing to move from the theoretical to the practical. Whatever the reason, one thing is clear: At this point in the process, there’s little to do but decide. That requires moral courage—an attribute essential to leadership and one that, along with reason, distinguishes humanity most sharply from the animal world. Little wonder, then, that the exercise of ethical decision-making is often seen as the highest fulfillment of the human condition.
  9. Revisit and reflect on the decision. When the tumult and shouting have died and the case is more or less closed, go back over the decision-making process and seek its lessons. This sort of feedback loop builds expertise, helps adjust the moral compass, and provides new examples for moral discourse and discussion.

    When we test Ted Gordon’s dilemma against these nine checkpoints, what do we find?

First, Gordon knew there was a moral issue here. Indeed, his colleagues were shouting it out to him from all sides: Something terribly wrong had been done (however innocently) and restitution had to be made. There was no way, apparently, that he could walk away from the decision-making process.

Second, it was apparent that Gordon was the actor: He was in charge, and the issue truly was his.

Third, the facts were apparent by the time Gordon gathered his team to discuss the fate of the mechanic. Notice, however, how vital those facts were to the decision. Suppose the mechanic had blown the countdown because his foot had slipped just as he was pulling the pin—the result of a blob of grease left behind by another careless team member. Suppose the countdown itself had been wrongly written or inaccurately typed. Suppose the fuel had started to gush before he had pulled the pin—evidence of a failure elsewhere in the system. Any such mitigating factor would have significantly altered the discussion in the blockhouse. And, of course, suppose there had been a spark. Suppose lives had been lost, the rocket destroyed, the nation’s space program aborted. Could Gordon have argued as easily that the mechanic should be retained?

Fourth, there was plenty of wrong done here. But Gordon himself was not facing a right-versus-wrong choice. There was no legal requirement that the mechanic be disciplined, nor any inviolable professional stricture. Nor was there any notable stench, any fear of publicity, or any concern about Mom. This was, for Gordon, a right-versus-right issue.

Fifth, the paradigm that most seems to fit here is justice versus mercy. The voices surrounding Gordon were howling for justice—and in many ways they were right. He chose mercy—also right.

Sixth, from what we know of the reasoning at the time, Gordon seems to have placed strongest emphasis on the ends-based resolution rule. What mattered to him were the consequences. How would this employee behave in the future? Gordon thought he could tell, and he based his decision on that assessment, which in the end proved right. He did not, apparently, reason that he would not want to be fired if he had made such a mistake (a care-based approach), nor that the potential danger was so great that discipline should be enforced regardless of the fact that no explosion occurred (a rule-based approach).

Seventh, this outcome reflects a decision based on one side of the dilemma—clearly in support of mercy—rather than a “trilemma” compromise down the middle. It could have been the latter, of course: Gordon might well have agreed to keep the employee on, but to penalize him severely while doing so.

Eighth, a decision was actually taken. The discussion in the blockhouse was no idle chatter. It led to action.

Ninth, through the years the decision has given grounds for reflection—so much so that, thirty-five years later, it surfaced in one of our seminars as a key experience in Gordon’s own personal history. We don’t know, from the narrative given here, the extent to which it stood out in the lives of the others involved at Cape Canaveral or found itself woven into the culture of space-program lore as a point of commentary. It is probably fair to say, however, that it has been the subject of some revisiting by at least one person other than Gordon: the mechanic who, the next day, still had his job on the launch team.

Public and Private Ethics: Distinctions Without Differences

The nine steps listed here, then, clearly apply to dilemmas raised in aerospace engineering. But are they relevant elsewhere? Indeed they are—to medicine, to education, to journalism, and to every other field where one right value collides with another.

The list of possible actors and potential dilemmas, in fact, is as endless as human inventiveness and as relevant as tomorrow’s headlines. Yet in each case the dilemmas lend themselves to the same process of discussion and analysis. Why? Because at bottom there is no such thing as “aerospace engineering ethics” that can be distinguished in any significant way from “medical ethics,” “education ethics,” “journalism ethics,” or ethics in any other field.

This point may not be as obvious as it should be. A great deal has been made of the different flavors of professional ethics. We’re tempted to think that each discipline, profession, and avocation has its own set of moral principles, its own unique ways of thinking about ethical dilemmas, its own patented resolutions. True, each specialty has issues unique to its field. There’s as little reason for real estate agents to think hard about the ethics of cloning, for example, as there is for genetic researchers to be concerned about making upmarket homes available to minorities. But when you strip away the specifics and penetrate to the core values underlying these dilemmas, the resulting ethical structures lend themselves to just the sort of analysis and resolution developed here.

That fact is important for two reasons. First, it helps deflate a subtle form of ethical relativism that insists that all ethics flows out of, and is bounded by, the situational specifics of a particular case. Such a view starts by saying, “Different professions, different ethics.” Carried to its extreme, it insists that you and I have divergent ethical standards simply because we are individuals—“Different people, different ethics.” Such a thesis, refusing to acknowledge any common ground of shared values, guts the potential for building consensus on any basis but fear, ignorance, or malice. Second, the fact that there’s only “ethics” removes a divide. It helps dispel the notion that public ethics is fundamentally different from private ethics, and that the way an individual behaves and makes decisions in one of those arenas has no real relevance to what he or she does in the other.

This public/private discussion has a long history. Socrates insisted that the two realms were essentially separate. For him, only the individual who remains in private life can remain fully principled, since public life demands compromises that make true morality impossible. Following a similar line, the American writer Henry David Thoreau argued that since one’s private conscience could be the only reputable guide to behavior, any involvement with public life would inevitably erode the moral sense. “The only obligation which I have a right to assume is to do at any time what I think right,” he wrote—a view that naturally led him to scorn any public or political “obligation” that would compel him to act against his conscience. Even the English philosopher Thomas Hobbes, taking an opposing road, arrived at much the same destination. He argued that individuals in public positions who allowed private morality to influence their decision-making were in fact doing a disservice to the political sphere. Such an individual, Hobbes felt, had agreed to be subject to a public morality; to pursue the interests of a private set of values would be to violate that agreement. For Socrates, Thoreau, and Hobbes, the old distinction between public and private morality was a very real one.

Recently, however, this distinction has frayed. Some of the shredding comes from feminist philosophers, who point out that this view compounds the problem of abuse against women, especially rape and violence occurring in the home. It was not until late in the twentieth century that such abuse ceased to be written off as a private matter of no real relevance to the public world—a view that, of course, depends on the recognition that public and private are distinct. In the world of business, too, the public/private split can seem artificial. “There’s only ‘ethics,’” says James K. Baker, longtime chairman of Arvin Meritor and former president of the U.S. Chamber of Commerce. “What you do over here is no different from what you do over there. Let’s not think that you’ve got to adhere to one standard at home and another standard at work. There’s only one thing.”

The distinction also comes to grief on the shoals of common sense. Few people these days are under the illusion that an employee who is unethical in personal financial matters is likely to be thoroughly principled at work, or that a corporate executive can be a cad in family matters and a paragon of virtue in the office. As in business, so in political life—a point that became obvious during the 1988 presidential campaign in the United States when candidate Gary Hart, a married man, asserted that his dalliances with Miami model Donna Rice were irrelevant to his public life and should be of no concern to the public. The public and the press—particularly the Miami Herald, which broke the story—thought otherwise. That attitude marked a clear shift from the 1960s, when the extramarital affairs of President John F. Kennedy, well known to reporters, were scrupulously suppressed by news editors. These days, few politicians find cover behind the public-private distinction, as President William J. Clinton discovered when his relationship with Monica Lewinsky ultimately led to his impeachment. Ethics is increasingly seen to be woven into a seamless whole, as the public grows more insistent that one’s public and private lives must fuse into a morally consistent entity.

Moral consistency, in fact, is an effective test for ethical action—a point illustrated in our seminars by Bill, a recently retired senior executive for a major manufacturing corporation in the United States. During the peak years of the Vietnam War, he found himself working for a company that, among its many products, supplied materiel for the armed forces. Because it was a good job in an area of work he very much enjoyed, he did not spend much time dwelling on the military aspects of the corporation.

But in the 1968 presidential campaign, he found himself attracted to Eugene McCarthy, the senator from Minnesota who was seeking the Democratic nomination on a strong antiwar platform. When a friend asked Bill if he would volunteer an evening’s time at the local campaign headquarters, he agreed. One thing led to another, however, and before long, he found himself cast as the leading spokesman for the campaign in his community, quoted in the newspapers and clearly identified with McCarthy’s positions.

One day, a few months before the Democratic convention in Chicago, his boss called Bill into his office. The topic for discussion: Bill’s political activities. The corporation, with a staunchly conservative bent and a long tradition of support for the nation’s military, was uncomfortable. It was awkward, Bill’s boss said, having one of its senior people take such an outspoken role against what appeared to be the interests of the corporation and, in its view, of the nation. Might Bill want to consider scaling back the level of his political activities?

Sobered, Bill talked it over with his wife. The dilemma was clear: Should he stick with his political activities or with his job? In fairness, he realized, the corporation had not made the dilemma quite so explicit, since there was no threat to fire or demote him. There was just an expression of discomfort—although, Bill felt, the possible consequences could be read between the lines. As they discussed the situation at home, the usual issues arose: the children’s schooling, the mortgage on the house, the difficulty in finding a comparable job. One side argued strongly for the freedom of political expression—the right, guaranteed by all that the nation stood for, to express dissent openly and honestly without fear of reprisal. The other side argued strongly for corporate allegiance—the need, felt by any organization, for a sense of unity and common purpose around an agreed-upon set of objectives. Furthermore, one side argued for the unfettered individual conscience, while the other argued for the compromises that produced a salary and helped make family life pleasant and affordable.

Given the nine steps set forth above, it’s clear that Bill was aware of the moral issue (step 1). It’s also clear that he was the actor (step 2). Did he know enough (step 3) to make the decision? He thought he did, especially since he felt he could see through the conversation with his boss to a deeper but unspoken threat. Nor was he facing a right-versus-wrong issue (step 4), since there was nothing inherently bad about his political activities. The most appropriate dilemma paradigms (step 5), as he explained during our seminar, were truth versus loyalty (his need to stand up for what he thought was “true” about the nation’s role in the war versus his allegiance to his corporation) and self versus community (his own need to retain his job and support his family versus the corporation’s need for unity and commitment).

The resolution (step 6) involved some deep thinking—which, even though Bill was not then using the exact terms we’ve used here, probably followed similar lines of reasoning. Ends-based, utilitarian thinking might well have argued that the “greatest number” here was not himself and his own political conscience but his family—in which case he would have abandoned the campaign. Or it could have argued that the corporation was the greatest number—also leading him to give up McCarthy’s quest. But it also might have argued that the nation as a whole—the largest number of all—superseded either his family or the corporation, leading him to stay with his campaign. The rule-based approach, by contrast, would have looked for overarching maxims, which could have ranged all the way from “Don’t bite the hand that feeds you” (urging him to align himself with the corporation) to “Follow your conscience” (demanding that he stay with the campaign). Under the care-based reasoning of the Golden Rule, Bill might have leaned toward the views of his boss by putting himself in the corporation’s shoes. But he could also have asked, “What kind of ‘others’ do I wish to live with in this society? Do I most want others to get involved in the activities of a civil society? Or do I most want them to mind their own business and leave public affairs to others?” Under that logic, he would probably have chosen the campaign over the corporation.

Was there (step 7) a trilemma option here? Bill didn’t see one. So his decision (step 8) was simply to persist in his political work—knowing that, as the convention approached, he would be even more visible than before.

In the end, Bill chose to continue working with the campaign. As he revisited his decision (step 9) several decades later, he told us he felt it had been the right one. He heard nothing further from the boss. McCarthy lost his bid for the nomination. And Bill remained happily employed by that corporation until he moved to another firm eight years later.

Bill’s dilemma may not strike some readers as a genuine moral problem. It may seem right from the outset that the only ethical resolution would be to do what he ultimately did—phrased as “taking a stand for principle” or “following the dictates of conscience” or “doing what you’ve got to do.” After all, if this had been a novel or a film, it would probably have been presented as the morally courageous lead character, Bill, standing up to the big, bad, faceless corporation. We’re so accustomed to seeing that stereotype, in fact, that we tend to overlook some of the important step-three pieces of information. Bill liked his job. He liked his colleagues. He approved of most of what the corporation did. He believed in paying his bills, providing security for his family, and contributing to his community as a prosperous, taxpaying citizen. Only as we cut through the stereotype and let these facts speak do we see this as a real dilemma.

Yet the very fact that we cheer on Bill in his quest for individual expression suggests an important point. All of us place a high value on this thing called moral consistency. We expect that the “right” resolution, here, will be one that aligns Bill’s life as a public citizen with his life as an employee and a family man. We expect his values to be the same in each sphere. Had he chosen not to pursue the campaign, we would have been tempted to say he “buckled under pressure.” Had he done that, the word hypocrisy might have come to mind to describe Bill’s willingness to hold one set of personal views (against the war) while working for a firm that publicly (as a military supplier) espoused another. What we applaud in Bill’s choice is moral consistency—a clear congruence between the actions and the values.

Nowhere, apparently, do we applaud that congruence more loudly than when the consistency bridges the apparent gap between public and private ethics. The truly consistent individual, the one who generally wins our highest praise as an exemplar of virtue, is the one whose actions in public and in private are morally identical. In theory, we may appreciate the distinction between public and private ethics. In practice, we not only merge the two but tend to hold in some suspicion those who don’t. That’s not to say that those around us—or even we ourselves—always act up to this level of moral consistency. It’s simply to say that we intuitively seem to recognize that, when public and private ethics diverge, something is morally amiss.

Condoms, Communists, and Conservation: Three Public Issues

So far, we’ve considered dilemmas that have arisen largely in the private realm. Can this process help us make sense of dilemmas that lie squarely in the public sphere—those that affect the entire nation or the entire world, but in which we may be “actors” only by virtue of being citizens? Yes. To see how, consider three contemporary issues that have far-reaching ramifications: the condoms-in-the-schools controversy, the post-Communist world order, and the environment-versus-development tension.

Ethics, AIDS, and Safe Sex

When U.S. basketball superstar Magic Johnson announced on November 7, 1991, that he was HIV positive, he recharged the debate over “safe sex.” Earlier that year, New York City had adopted a plan to distribute condoms to students to help prevent the spread of AIDS; the ensuing political surf eventually swept away the city’s schools chancellor, Joseph A. Fernandez, voted out of office by the city’s board of education two years later. So when in 1993 the New Haven, Connecticut, public schools voted to distribute condoms to fifth-graders, the debate intensified. With AIDS in that city a leading cause of death among men and women aged twenty-five to forty-four, some school board members felt they had to act. Not surprisingly, they couched their actions in moral terms. “If there is anything that you can do to prevent [such deaths], then it is your moral obligation to do so,” school board chair Patricia McCann-Vissepo told the New York Times after the vote. The opposition, too, took a moral stand. Board member Arthur J. Bosley Jr., who voted against the policy, worried that condom distribution “can and will send a message [to students] that we are sanctioning their [sexual] activities.”

Like all significant ethical debates, this one features two core values in opposition. On one side stands respect for life, emphasizing that you don’t kill and that you help prevent others from being killed. Those who hold to this value seek to protect even those who, ignorantly or willfully, pay no heed to the deadly danger of AIDS.

On the other side stands respect for sexual continence, emphasizing that you don’t indulge promiscuity and that you encourage others not to do so. Those who cleave to this position argue that continence is vital to society, since it is a cornerstone of the marriages that build stable families and communities.

The first value argues that, since AIDS is so often fatal, the highest good is to prevent death at all costs. If that requires supplying teenagers with condoms, so be it. In this view, the right to avoid death takes precedence over all other rights: Even if it could be shown that supplying condoms destroys family values and wrecks the social fabric, the right to live would still have priority. Here three arguments suggest themselves:

  • Finality. Death is, of course, terminal. Surrender the right to life, and all other rights become meaningless, since the individual will not live to see them put into practice. What does it matter that we arrive at a good society if so many must be killed to create it?
  • Centrality. Respect for life is one of the most widely discussed of human values. It underlies such powerful issues as abortion and euthanasia. Extended to other species, it lies at the center of debates over the environment and some forms of vegetarianism. Taken to its extreme, it informs some brands of pacifism. And, of course, it surfaces in discussions of crime, violence, and capital punishment. Surely (so the argument goes) a value so close to the heart of what it means to be human should take precedence over other values.
  • Compassion. The highest form of compassion resides in affirming another’s right to exist. That may require that we help protect others—especially the young—from the subtle influences of self-deceit, sensuality, and self-denigration that cause them to indulge in promiscuous sexuality without realizing its implications. At least (so the argument goes) keep them alive until they are mature enough to take responsibility for their own actions.

On the other side stand those who argue that, to prevent the growth of a sex-on-impulse society, teenagers ought to be discouraged from premarital sex. If a society can be kept alive only by indulging its wanton sensual impulses, what sort of life is that? Surely self-discipline is also at the core of what it means to be human—since self-restraint naturally leads one away from killing, while merely refusing to let others kill themselves does not necessarily teach self-restraint. Here, too, three arguments arise:

  • Chastity. Often assumed to mean simply sexual abstinence, chastity has a primary and more useful dictionary definition: “freedom from unlawful sexual activity.” Though popularly seen as outmoded, the concept is central to any serious consideration of sexuality. Why? Because if chastity were irrelevant, the assumption would have to be that sexual activity is an unqualified good and should be practiced without restraint. Were such license given rein, it would destroy commitment, affection, and trust—qualities that characterize the most intimate human relationships.
  • Childrearing. With the advent of birth-control devices, sexual relations are increasingly separated from their natural consequences—pregnancy and childbearing. But love, sex, and the perpetuation of humankind are tightly bound together. To pretend otherwise is to fragment an immensely powerful social compact. It is also to subject those raised under the free-condom regime to one of two things after marriage: a jarring adjustment to a new and unfamiliar life of fidelity, childrearing, and the greater good of the community; or the perpetuation of early habits of promiscuous sexual activity, with its well-documented damage to marriage, family, and society.
  • The broader community. When sexuality focuses on immediate personal gratification—as it often does among teenagers—the larger context gets lost. The community depends on the transmission to the young of stable, long-term values. Doing “whatever turns you on” is the attitude that promotes drug addiction, alcoholism, and greed—hardly an adequate ethical standard for sustainable communities. Distributing condoms fuels the conviction that it’s okay to follow your instincts and that there will be no adverse consequences of doing so.

There are powerful arguments, then, behind both the don’t-kill and the don’t-commit-adultery principles. For some, saving lives is worth any possible offense against sexual mores. For others, satisfying the desires of some teenagers is hardly worth the long-term debasement of family and community.

The paradigm? Long-term versus short-term fits well. If the issue is the short-term saving of lives, preventing AIDS is essential; if it’s the far-reaching good of society, chastity is vastly superior to condoms. Also relevant here are the claims of the self against the needs of the community, where self argues for lifesaving condoms in the schools and community for family-saving efforts to inhibit the increase of sexual license.

How might an ends-based thinker judge this one? Here, the greatest number might well seem to be the teenagers who are at risk of dying—in which case, hand out the condoms. One could argue, on the other hand, that society as a whole is the greater number. In this view, taking measures that reduce AIDS without promoting sexual activity will be far better for far more people in the future: What does it matter if we save a few teenagers and destroy society in the process? The problem with this view, in part, lies in public perception. Today’s teenagers have names and faces. Unlike the anonymous multitudes of future generations, these people can be counted, and the benefit of a condom policy can be measured in real numbers. Utilitarianism, then, builds a strong case for the condom policy.

By contrast, a rule-based approach asks, “What rule would I like to see universalized in the behavior of everyone else from now on?” Does the distribution of condoms set up a rule—“Sexual indulgence is okay at any age”—that creates a sustainable society? Most of us, handed a choice between an impulsive, sex-on-demand world and a community of liberty hedged with self-restraint, would choose the latter. Is such a community worth dying for—or, more precisely, letting the young die for? In the past, some have thought so: Patriotism has always taken its energy from those for whom some values (like freedom, independence, and self-government) took precedence over their own right to live. The rule-based thinker, here, may come down on the side of a rule that says, “Don’t commit adultery” despite consequences that include the death of some unwary teenagers. In practice, such thinkers may feel strongly that such a rule is especially appropriate for fifth-graders—more so, perhaps, than for older teenagers.

Care-based thinkers, facing these issues, will extend themselves into the consciences of the teenage population. Yes, they may say, if I were a teenager I would want to be shown the virtues of abstinence, the power of true affection, and the joy of sexuality in the context of deep and constant love. But I would also want to be protected against the raging tides of my own libido and the consequences of uncontrolled passion. Keep me alive, and I may become good; let me die, and I’ll never get to goodness. Thus the Golden Rule may come down on the side of condom distribution.

Here, as in so many public issues, society is longing for a trilemma resolution. Is there a way to quench teenage sexuality so that it never needs to get to the free-condom level? Or is there a way to remove the danger of AIDS so that teenagers, however much they lack responsibility, will survive? To date, modern medicine is shuffling toward the latter without much success. More hopeful, many feel, are programs now being developed in the schools that instill the virtues of abstinence and self-discipline. For the time being, however, debate continues over this genuine dilemma.

The Post-Communist World Order

Much ink has been spilled dissecting the military, political, and economic consequences of the collapse of communism. That’s fitting: Along with the rise of international terrorism, it’s the major global development of our time. When the Berlin Wall was breached in November 1989, it signaled the end of a Cold War between the superpowers that had been the single most powerful political determinant in the world since the 1950s. With the end of communism came the consolidation of America’s sway in the world. By at least three measures of greatness America remains superior. One is the ability to project military force across the globe, in which the United States stands head and shoulders above any other single nation. Another is the development of breakthrough technologies, where it still holds the lead, although against increasingly vigorous competition.

The third measure, less widely understood, has to do with the ethical ideals that undergird the civil society. The United States remains the only major nation founded not on border tiffs or ethnic rivalries but on a set of ideals about freedom, equal rights, and the individual’s relation to the state. True, these ideals sometimes seem badly tattered in a society that permits escalating levels of homelessness, poverty, violence, and addiction. Yet they still exert a powerful field of moral magnetism around the world. Every country whose citizens long for the tremendously infectious idea of democracy calibrates that yearning against the American standard. Some, seeing only the America of reality television and Paris Hilton, turn away in revulsion. But many more, seeing the America of The Federalist Papers and Horatio Alger, strive for emulation. The fact remains that the values of civil society are nowhere more understandable, appealing, and exportable than in the United States.

That fact, and the nation’s curious position as the lone world-class power, has profound ethical implications for America’s future. Why? Three trends seem relevant here.

The first concerns security and military power. In the Cold War era, defense efforts by the member nations of the North Atlantic Treaty Organization (NATO) had essentially one goal: to prevent the spread of communism. Western governments, having made the moral choice in favor of democracy, knew which side to take when regional conflicts erupted: They looked to see where the Communists were, and chose the other party. To be sure, that stand produced as strange a gaggle of authoritarian bedfellows as democracy ever had, including the Shah of Iran, Philippine strongman Ferdinand Marcos, and Nicaraguan dictator Anastasio Somoza Debayle. But the presence of the overarching moral imperative—to keep communism at bay—relegated any qualms about abusive dictators, rightly or wrongly, to the ethical backseat.

Exit communism, and ethics leaps to the front. Without the Cold War’s rigid categories, the West must now sort out the morality of regional conflicts on a case-by-case basis. Is it right to send back illegal immigrants who seek a better life for themselves in the States? Should we try to stop the killing in Darfur? Should we recognize the authority of the United Nations when its resolutions conflict with our interests? Should we intervene now that North Korea possesses nuclear weapons? Needed: a framework for ethical analysis to replace instinctual anticommunism.

The second trend concerns the ethical vacuum surrounding the citizens of formerly Communist nations. If ethics is obedience to the unenforceable, and if law is obedience to the enforceable, then seventy years of communist influence saw a deliberate, concerted effort to replace ethics with law. In a state where everything is regulated, nothing of consequence need be left to individual discretion—at least in theory. Conceptually, then, there is little need for ethics. In practice, of course, many citizens in the formerly Communist countries survived with their ethics intact. Sadly lacking, however, is the public tradition of ethical behavior—the habit of right actions taken not out of fear of punishment or promise of reward but simply because it’s the right thing to do. That ethical vacuum mightily complicates the efforts of Western businesses and governments to build trading relations in many of these countries. Reports of corruption, personal greed, and every-man-for-himself-ism, brought back by Western visitors, continue to defy imagination. Question: Can the West’s most important export—democracy—take root without an ethical base?

The third trend arising from the collapse of communism is a reconsideration of American individualism. Since the days of Emersonian self-reliance and the frontier mentality, Americans have tilted toward the rights of the individual over those of the group—despite de Tocqueville’s warning that excessive individualism would finally destroy the “public virtues” of the nation. In the face of communism, that tilt became a profound list—not only from fear of being called “pinko” or “fellow traveler,” but out of concern that the forces of Big Brother were poised to crush out all individuality. These days, Westerners have less to fear from a Communitarian balancing of the rights of the self with the needs of the community. Under the threat of terrorism, in fact, the scale may be tipping toward an excessive concern for community security at the expense of individual privacy—a tension sharply illustrated by the U.S. prison camps at Guantánamo Bay, Cuba.

These trends raise a profound dilemma for the United States. On one hand, it is right to extend a helping hand to the post-Soviet world, however much it may be mired in the leftover amorality of communism. It is right for several reasons. Our self-interest dictates that we expand trade, to benefit ourselves while we help others. Our commitment to democracy urges us to spread its values widely in an effort to make the world a safer, fairer, and more peaceable place for its citizens. And our compassion requires that these citizens, as fellow sojourners on a shrinking globe, be provided with a full complement of human rights. Nor is such a helping hand unprecedented. The Marshall Plan, through which American money rebuilt Europe after World War II, helped create the global prosperity we enjoy today.

On the other hand, it is right to prevent ourselves from falling into foreign relationships that sap our vitality and confine our own ability to grow. The nation’s founders rejoiced that an ocean lay between them and the European courts—for the simple reason that, imagining that they were self-sufficient, they thought they had no need for artfully crafted treaties and the entanglements of alliances. Isolationism has a long and noble history in the United States.

These two views are both right—and apparently exclusive of each other. On two points, however, there is no disagreement. First, this is a moral (rather than merely legal) question. Second, the United States, as the remaining superpower, is unavoidably the actor. The paradigm? Self versus community, where “self” is a single nation and “community” is the world. Also relevant here is justice versus mercy, where the latter calls for helping others while the former reminds us to be fair to our own citizenry first.

What do the resolution principles tell us? So far, most of the discussion surrounding approaches to the post-9/11 world has been ends-based. Consequences loom large. What is right is often deduced as a kind of back-formation: Determine where you want to come out, decide how to get there, and declare that to be the “right” thing to do.

A rule-based approach, setting aside such consequentialism, insists on following fundamental maxims. The most applicable global “rules,” perhaps, are those contained in the United Nations’ Universal Declaration of Human Rights (1948), the Helsinki Final Act (1975), and the European Union Charter of Fundamental Rights (2000). These agreements combine a number of political rights pertaining to the security of the person with a number of economic and social rights designed to meet basic human needs. Those who cleave to these precepts make adherence to human rights a litmus test for governments: In 1977, President Jimmy Carter spoke for this view when he declared in a speech to the UN General Assembly that “no member of the United Nations can claim that mistreatment of its own citizens is solely its own business.” That view, expanded, lies behind the uneasy agreement of UN member states to intervene in Rwanda, Sudan, the Congo, and other nations where governments have appeared to be slaughtering their own citizens. Though often scorned by pragmatists, this rule-based approach has had results. At least in the restructuring of the post-Communist world order during the late 1980s, writes Harlan Cleveland, these rights played a key role. “No government,” says Cleveland, “not even the totalitarian Soviets or military dictators or even the long dug-in South African authorities, seemed able to ignore entirely the ultimate enforcer that the U.S. Declaration of Independence calls ‘the general opinion of mankind.’”

Our third resolution principle, using the care-based approach, asks us to extend ourselves into the minds and hearts of the post-Soviet citizenry. So we try to answer such questions as “If we were the Russians, what would we want to have done to us?” To do so, we must first grasp the concept of otherness and learn to feel what it is like to be “the Russians.” Here the dimension of cultural understanding comes into play. We often find we understand this otherness better through art than diplomacy, literature than politics, feature writing than news reporting, movies than statistics, music than lectures. The care-based approach begins with empathy, with feeling the life of another from the inside out, and with understanding the currents and desires of that life in its own context. As global communication improves, the potential for care-based resolutions increases: As more and more Westerners see Russian films, and as they travel in the post-Soviet world, the human face comes more sharply into focus. Result: The care-based approach may well argue for significant economic aid to Russia—although, if the otherness we identify is that of America’s homeless and unemployed, we might well oppose such aid.

The bottom line? Moving closer to center stage may be a set of rule-based convictions—fired by the success of such human-rights campaigns as those of Amnesty International, Freedom House, and the Helsinki Watch organizations—and the care-based principles that naturally flourish whenever humans get close enough to one another’s cultures to feel compassion.

Conservation Versus Consumption

One of the major ethical issues of the global future pits environmentalists against developers. That’s nothing new. Eight years before the official closing of the American frontier in 1890, Norwegian playwright Henrik Ibsen wrote An Enemy of the People, a polemic exploring the ethical issues surrounding a financially profitable but contaminated and unhealthy swimming-bath in a small Norwegian town. The dilemma facing Dr. Stockmann and his fellow townspeople was stark: shut the baths to control disease, or keep them open to maintain the town’s lifestyle. Within another few years, America would be plunged into its first major preservation-versus-exploitation debate in the controversy over the Hetch Hetchy dam in Yosemite National Park. In 1930, the American poet Hart Crane captured the essence of such issues in a telling image:

The last bear, shot drinking in the Dakotas

Loped under wires that span the mountain stream….

Since then, environmental issues have rolled forward in a kaleidoscope of events: Rachel Carson’s Silent Spring in 1962, the Endangered Species Act in 1973, the two hundred million people in 140 countries turning out for Earth Day in 1990—and the ongoing debate over global climate change that coalesced into a framework treaty at the Earth Summit in Rio de Janeiro, Brazil, in 1992; gained binding targets with the Kyoto Protocol, adopted in Japan in 1997 and in force since 2005; and is scheduled for reconsideration at the UN Climate Change Conference in Copenhagen in 2009. At the heart of each lies the same core dilemma: how to protect the natural environment while permitting human development.

If that issue is serious today, it will be crucial tomorrow. The reason: global population growth. While often seen as a problem in and of itself, population growth would in fact be irrelevant were it not for its impact on the environment. If the biosphere were infinitely expandable—if, as in the past, new populations could simply move onward into uninhabited lands so vast that a human presence made hardly a dent—population growth would hardly matter. The problem is quite otherwise. Rapidly rising populations are confronting a finite and oddly fragile environment. As George D. Moffett pointed out in 1994 in Critical Masses: The Global Population Challenge, the impact can be spelled out in a litany of familiar statistics still valid today:

  • Population growth now adds some nine thousand to eleven thousand people to the globe every hour—the equivalent of a new Dallas or Detroit every four days, a new Germany every eight months, a new Africa and Latin America combined every ten years.
  • More than 90 percent of this growth will take place in the one hundred or so nations of the developing world that are least able to provide for these new individuals.
  • Most of this growth will occur in urban areas. Many cities in third-world countries are doubling in size every twelve to fifteen years.
  • This growth is unprecedented. It took us hundreds of thousands of years to reach, by the early 1800s, our first billion people. Now, at 6.7 billion, we add a new billion each decade, heading toward a total of between 9 billion and 20 billion in the next century.

Yet the very pressure that gives such cogency to environmental concerns also fires the need for development. Are we willing to let all these new people starve and freeze in the dark? Will we deny them access to the same resources that have sustained us? Will we promulgate regulations and ideals that enshrine nature’s rights at the expense of human rights? Of course we must control future population growth—but what do we do in the here and now with all those who have already been born?

Even if you live in the relative comfort of North America, with its low population density and immense tracts of preserved land, these issues shape your future. Sometimes they cause us real anguish, as in the case of the beluga, or white whale, whose common name recalls Herman Melville’s Moby-Dick even though Melville’s whale was a sperm whale. An endangered species in the Gulf of St. Lawrence, belugas eat so much fish in those toxin-laden waters that their bodies are considered to be “hazardous waste” when beached along the shore. Sometimes, however, such issues simply provoke sighs over man’s inanity—as when a paper mill in rural Maine, clearing the grating on its water-intake pipe of rocks lodged there during raging spring torrents, was solemnly ordered by state officials not to return the perfectly clean rocks to the riverbed but to truck them thirty miles to a soon-to-be-overloaded landfill.

Whatever the case, these issues feature two opposing core values. On one hand stands the value of preserving nature from the onslaughts of man. Humans are broadly adaptable, able to live on arctic ice floes and in equatorial deserts, in dense cities or deserted mountains. Most other species are not so flexible: Wipe out their habitat, and they disappear. Some of the most vigorous and scrappy animals are acutely sensitive to habitat changes. Burrowing owls, for example, can endure all kinds of predators and still come out on top. But they depend on abandoned prairie dog holes for their nests. Wipe out prairie dogs, as western ranchers have been doing, and even the hardiest of burrowing owls can’t survive.

To save habitats, then, is to save species. Why does saving species matter? One reason is their beauty and the lessons they teach. Another has to do with effective management of natural lands: Most species have an important niche in the habitat as food for something or as consumer of something else. Still another is for science, allowing us to take genes from certain kinds of wild maize and merge them into commercial corn to produce vigorous and disease-resistant strains. Finally, of course, is the sheer right of a species to exist—or, to put it another way, the unconscionable human pride that thinks it has the right to destroy forever another form of life.

Such preservation, at bottom, is one of the deepest symbols of our humanity. No other species is gifted with such capacity for rational foresight and long-range planning. To defer immediate gratification for the sake of offspring we will never see is an intensely human act: To plant oaks beside your house on the frontier, knowing that a century later they will shade your great-grandchildren, is to show conscious respect for an environmental future in ways no other species can. Conservation, then, is not simply a luxury that we can overlook if we choose. It is part and parcel of our very humanity.

On the other hand stands an equally valid core value concerning human development. Among the most fundamental duties that humans have to one another are those that guarantee safety, warmth, food, shelter, and the right to propagate. The faces of the world’s children, peering through our television screens from refugee camps or third-world slums, cry out for policies that could put even a few scraps of food into their mouths. Such help could conceivably come in the short term, of course, through a straightforward redistribution of current wealth: If rich countries simply taxed themselves to death, some of these children would be fed. But the best long-term help comes through the development of economic opportunities.

Such development depends on education, religious approval, willingness to work, family structures that recognize the needs and rights of women, and many other intangibles. But it also depends on creating something of value that someone else needs and wants to buy. That usually requires raw materials and energy—the very things nature has always provided. To be sure, there are environmentally “clean” service-oriented jobs in information technology, insurance, advertising, tourism, communications, and other areas. But even those depend upon the prosperity generated somewhere in the world through a manufacturing base, which almost always involves some exploitation of natural resources. To refuse that exploitation, then, is to condemn the world’s poor to continued poverty—a condemnation that seems all the more inequitable when promoted by those in the developed world who already enjoy significant prosperity.

These two sides, clashing together, produce the environment-versus-development dilemma. It seems to fit three paradigms:

  • It is right to honor the short-term demands for survival by developing economic paths out of poverty. Yet it is right to respect the long-term demands for survival by assuring a sustainable environment.
  • The rights of the individual require us to supply food, clothing, and shelter despite the hardship on the environment. The rights of the community require that our common environmental heritage be protected despite the hardship on the individual. (This paradigm, however, can be put the other way: It is right for me as an individual to have access to unblemished wilderness tracts, though it may be right that, in order for my community to survive, everyone has access to the resources on that tract.)
  • The greatest justice will be served by saving the environment out of fairness to those yet to be born, while the greatest mercy will be to provide for those who are suffering today.

How do our resolution principles help us? Ends-based thinkers, brooding upon consequences, lay out sober prophecies of future doom and gloom—on both sides of the issue. Global warming vies for our attention with prognostications of future job losses and welfare increases. To the ends-based thinker, a close study of such figures, and the methodologies behind them, is essential: How else will we know what “the greatest good” will be? Not surprisingly, then, the policy-makers’ well-known penchant for utilitarianism plunges modern society into endless rounds of expert testimony, scientific debate, and statistical saber-rattling—the assumption being that whoever gets it intellectually right will also have captured the moral high ground.

Rule-based thinkers look on all this with wry detachment. The moral sense, to them, has little to do with such arcane debates. What rule, they ask instead, should be universalized? If it is to save species at all costs, then that must be done regardless of consequences. If, on the other hand, it is to honor every individual’s basic human dignity by supplying food and shelter, that must take precedence, no matter what happens. What gives these thinkers the shudders is the spectacle of moral inconsistency, a waffling set of policies that change every few years depending on scientific fashion or public whim. Get the rule right, they argue, and carry it out in full trust that it will produce the highest sense of goodness.

The care-based thinker may well dismiss both these views—the first for its cold disregard of suffering, the second for its rigid demand for consistency. What, they ask, would I want to have done to me? Living in a Dhaka slum, I would want a meal, an education, a job, a sense of hope—not a lecture on saving the whales. Living in a Los Angeles suburb, however, I would want a set of policies that would compel my entire community—myself included—to support alternatives to the gasoline-powered cars whose exhausts once engulfed me in smog. Placing my highest emphasis on caring for others—and observing that there are more slum-dwellers than suburbanites—I might finally come down more in favor of supporting the former than the latter.

This dilemma also gives us a clear look at another part of the resolution process: locating the trilemma options. Among the most encouraging signs of progress has been the growth of coalitions that involve both environmentalists and developers. From a past filled with the strident animosities of stark opposition, we seem to be moving toward a greater recognition of the fact that like all true dilemmas, this one has a lot of right on both sides. The trilemma goal—saving the environment while at the same time providing economic development—is being met in some areas. Already supermarkets are offering reusable fabric bags as an alternative to plastic ones. The once-ubiquitous water bottle is becoming increasingly unpopular as it becomes clear that not enough people are recycling. Hybrid cars, compact fluorescent light bulbs, and four-minute showers are looking more attractive and affordable as energy prices go up. Ecotourism is on the rise, helping travelers visit unspoiled areas without damaging them. In these and other ways, a resolution process as old as Aristotle’s Golden Mean is on the twenty-first century’s agenda.

More Public Issues

The discussion of these three public issues—involving AIDS, the new world order, and the environment—is meant to help us bring the lens of ethics to bear on problems of a national and international scope. These are not, by any means, the only right-versus-right dilemmas needing ethical analysis and resolution. Dozens of other global issues cry out for attention, including:

If ethics is as valid in a public as in a private arena, these issues ought to be amenable to thoughtful analysis from an ethical perspective. That’s not to say they won’t also benefit from more familiar forms of analysis through economic, technological, historical, or political lenses. They will. Subjected to ethical scrutiny, however, they yield up a different kind of understanding. Through that scrutiny, we come closer to answering the question that, more than any other, seems to be commanding public attention as we move into the twenty-first century: Of all the things we could do, what’s the right thing to do?