VIETNAM HAD LEFT DEEP and tormenting scars across the body politic. It was not like the century’s earlier wars that had ended with most Americans feeling victorious and the wars’ original opponents at least acquiescent. After World War II those who had rallied against Nazism knew they had helped put down a monstrous and dangerous tyranny. After Korea, those who had supported the final settlement were satisfied to have restored the regional balance of power first upset by North Koreans attacking south and then by Americans attacking too far north.
After Vietnam, all felt defeated—the hawks who had pressed for a “victory,” the doves who had wanted out, the veterans who returned to a sullen, ungrateful republic, the allies who had been enlisted in a hopeless cause, the Saigon leaders who felt deceived and betrayed, the anticommunist South Vietnamese who faced an anguished choice of fleeing to parts unknown or living under communist rule. Vietnam had displayed the ultimate strategic failure and moral bankruptcy of the “middle way,” of “bipartisan” foreign policy making, of day-to-day, step-by-step escalation and de-escalation. An undeclared war for ill-defined goals had ended with Americans frustrated, embittered, and divided.
“It was the guerrilla war to end all guerrilla wars until it somehow became simply a war to be ended,” wrote Max Frankel in The New York Times a few days after Nixon announced the cease-fire of January 1973. “It was the proxy war to contain international Communism until it somehow became the central embarrassment to an era of Communist-capitalist détente. It was devised by a generation that wanted no more Munichs, meaning betrayals by appeasement, and it spawned a generation that wants no more salvations by intervention, no more Vietnams.” Few could have guessed that the war would come to an even more tragic end two and a half years later, or that already it had helped to trigger the chain of events that would bring about the collapse of the Nixon presidency long before hostilities ended.
On Sunday morning, September 8, 1974—thirty days after he took office—President Ford granted a “full, free, and absolute pardon unto Richard Nixon” for all federal offenses “he has committed or may have committed” or had helped commit as President. At once a fire storm of outraged telephone calls and telegrams broke upon the White House. Press and television editorialists thundered. But no one could do anything; the presidential power to pardon Presidents—or anyone else—was absolute and irreversible. Appearing on national television with the pardon in front of him, Ford stated that the former President would be excessively penalized in undergoing a protracted trial, “our people would again be polarized in their opinions,” and the “credibility of our free institutions of government would again be challenged at home and abroad.” He then signed the document in full view of the cameras.
Some Americans believed that nothing would prove the credibility of free institutions more dramatically, or set a better example for dictatorships abroad, than the willingness to put a President on trial. Many simply suspected a Ford-Nixon deal. It was rumored that aides in the Ford White House knew of a call from Nixon to the new President: if Ford refused to grant him a full pardon, Nixon would announce publicly that Ford had promised the pardon in exchange for the presidency. Ford boldly appeared before the House Judiciary Subcommittee on Criminal Justice to declare, “There was no deal, period”; but the investigative reporting of Seymour M. Hersh suggested that Ford did have an outright arrangement with the man who had made him Vice President and then President. Another possibility was that such carefully protected multi-channel negotiations were conducted between the two through intermediaries that the parleys aroused hopes that melded into expectations that led to understandings that emerged as clear promises of a pardon, all conducted with the winks and nods, whispers and silences, gestures and mumbles that constitute the language of brokering politicians. Or perhaps Ford acted, as he claimed, in behalf of what he considered “the greatest good of all the people of the United States whose servant I am.”
In a very different fashion, nevertheless, Nixon went on trial anyway. Ford’s demonstration of presidential power and the debate over its cause and justice, the trials of high Watergate figures in the following months, the voluminous memoirs of Watergate heroes and villains in the following years, and Nixon’s disbarment from practicing law in New York State had the more important effect of putting not merely Nixon but his whole Administration on trial, and even more, of exposing the most extraordinary and pervasive abuse of power in high places. There emerged a frightening portrait of an Administration conducting a political war of attempted extermination against its political enemies at home even while it was waging a military struggle in Southeast Asia. Viewing street demonstrators and student protesters not as legitimate political opponents but as threats to national security and subverters of the national interest, the White House developed a siege mentality.
This was not the first time an Administration had hunkered down into a psychology of besiegement; the previous Administration had exhibited similar signs and strains. But ultimately there was a difference between Lyndon Johnson and Richard Nixon. Until the end LBJ’s instinct had been to move out to people, to consult Republicans like Dirksen, to include critics like George Ball. It was better, he liked to tell intimates, to have such people “inside the tent pissing out than outside the tent pissing in.” And in the end Johnson was willing to quit voluntarily. Nixon’s instinct was for exclusion—to suspect anybody and everybody, ignore them, fire them, exile them. And in the end he in effect was forced out of office.
But before that end the Nixon White House had abused power with awesome ingenuity. They had set up an extensive “enemies list” that ranged from political opponents like Jane Fonda, Shirley Chisholm, and Edmund Muskie to the heads of eastern universities and foundations, along with media figures, actors, even athletes, and included a mistake or two—non-enemy Professor Hans Morgenthau made the list because he was confused with enemy Robert Morgenthau, U.S. Attorney in New York City. They conducted a private investigation of Senator Edward Kennedy’s 1969 automobile accident at Chappaquiddick in which a woman drowned. They tapped their foes and one another with wild abandon. They tried to subvert the IRS, the CIA, the FBI for political purposes. Though the so-called Huston Plan, which outlined a sinister program of surveillance of American citizens and proposed the use of “surreptitious entry”— burglary—for intelligence-gathering, was blocked by a nervous J. Edgar Hoover, it revealed the illegal lengths to which the Administration was willing to go in its war against political enemies. Parts of the plan were later implemented, and it was the inspiration of the “Plumbers” unit that burglarized the office of Daniel Ellsberg’s California psychiatrist and of the team that broke into the Democratic National Committee offices in Washington’s Watergate complex.
Not only the political war plans but the planners told much about the Nixon White House. Some of the inmates were of the order of Charles Colson, who liked to call himself a “flag-waving, kick-’em-in-the-nuts, anti-press, anti-liberal Nixon fanatic” and “the chief ass-kicker around the White House.” Others were younger men like White House counsel John Dean, attractive, clean-cut, affable, flexible—ever so flexible. And willing—ever so willing. When Nixon asserted to Dean on September 15, 1972, that the White House had not used the FBI or the Justice Department against its enemies, but that “things are going to change now. And they are either going to do it right or go,” Dean exclaimed, “What an exciting prospect!”
The more that Watergate unfolded in the trials and memoirs of participants, in the brilliant reporting of Washington Post correspondents and of Hersh and J. Anthony Lukas and others, the more it appeared to be a morality tale, complete with villains and saints, winners and sinners, and a Greek chorus of Washington boosters and critics.
The Polo Lounge, the Beverly Hills Hotel, Los Angeles, June 17, 1972. Jeb Stuart Magruder, deputy director of the Committee to Reelect the President— called CRP by its friends, CREEP by its foes—was breakfasting with aides when the phone call came from Washington. It was G. Gordon Liddy, insisting that Magruder drive ten miles to a “secure phone.” “I haven’t got time,” Magruder replied impatiently. “What’s so important?” Liddy said, “Our security chief was arrested in the Democratic headquarters in the Watergate last night.” Magruder: “What?”
Magruder knew what. If the men who had tried to plant listening devices in the Democratic National Committee could, through CRP security chief James McCord, be linked to Liddy, counsel to CRP, they could be linked to himself, to his boss, CRP director John Mitchell, and therefore to his boss, the President of the United States. Magruder and his assistants hurried to Mitchell’s suite in the hotel. They had only one thought at this point, Magruder remembered later: How could they get McCord out of jail? Some way must be found. “After all, we were the government; until very recently John Mitchell had been Attorney General of the United States.” The break-in was not just hard-nosed politics; it was a crime that could destroy them all. With White House power behind them, it seemed inconceivable that they could not fix the problem. The decision for a cover-up was immediate and automatic; no one suggested anything else.
For Jeb Stuart Magruder, like many others caught in the Watergate web, this was a moment of truth, a point of passage. Magruder was no hard-boiled, cynical politico who had fought his way up from the precincts. A Staten Island high school and Williams College graduate, he had worked for IBM and other big corporations, run two small cosmetics companies in Chicago, managed southern California for Nixon in 1968 and accepted with alacrity a White House post as deputy director of communications the next year. Considered the perfect PR man, he was to go through much of the same anguish, the same passage from arrogance to humiliation, in the following nightmarish months of exposure as others in the White House, but he later related his experience more reflectively and revealingly than his colleagues.
During the “siege” days of 1970, Magruder recalled, the White House existed in “a state of permanent crisis.” Now, after the Watergate break-in, the spacious mansion turned into a Hobbesian world of all against all, a Shakespearean stage of suspicious, frightened men shaken from their pinnacle and clawing for survival. To cover up the burglary White House chiefs and operatives destroyed their own documents, pried open and emptied the safes of others, pressured the CIA to pressure the FBI to limit its investigation. They arranged hush money for the burglars, though, as John Dean noted, “no one wanted to handle this dirty work. Everyone avoided the problem like leprosy.” The White House “thought Mitchell should ‘take care’ of the payments because he had approved the Liddy plan” to burglarize the DNC, while the former Attorney General blamed the White House for sending him Liddy and pressing him for intelligence. Finally a “fund-raiser” was found in the President’s personal attorney, Herbert Kalmbach, who over the next two months gathered $220,000 in $100 bills—soon to be known as CREEP calling cards. Now the frightened men in the White House began to jettison not only records but themselves and one another. Kalmbach quit while under FBI investigation.
In October 1972, Washington Post reporters Bob Woodward and Carl Bernstein, after months of patient sleuthing and with the guidance of a well-informed source—“Deep Throat”—whose identity only Woodward knew, tied the Watergate break-in to “a massive campaign of political spying and sabotage conducted on behalf of President Nixon’s re-election and directed by” White House and CRP officials. During the January 1973 trial of the burglars, Judge John J. Sirica, dissatisfied with the efforts of Attorney General Richard Kleindienst’s prosecutors, questioned defense witnesses from the bench. Late in March, McCord, whom “the government” had failed to spring and who feared a severe sentence if he refused to cooperate, charged that others besides the burglars had been involved and that perjury had been committed at the trial. The President was following every move.
The Oval Office, February 28, 1973. John Dean was once again reporting to the President. The two discussed ways to obstruct the select committee the Senate had established under Democrat Sam Ervin of North Carolina, and Dean assured Nixon that, despite the setbacks, the cover-up was still viable:
DEAN: We have come a long road on this thing now. I had thought it was an impossible task to hold together until after the election until things started falling out, but we have made it this far and I am convinced we are going to make it the whole road and put this thing in the funny pages of the history books rather than anything serious because actually—
NIXON: It will be somewhat serious but the main thing, of course, is also the isolation of the President.
DEAN: Absolutely! Totally true!
But by March 13, the scenario Dean presented to the President was less optimistic:
DEAN: There is a certain domino situation here. If some things start going, a lot of other things are going to start going, and there can be a lot of problems if everything starts falling. So there are dangers, Mr. President.… There is a reason for not everyone going up and testifying.
And on March 21, against Nixon’s enthusiasm for continued hush-money payments—“You could get a million dollars. You could get it in cash. I know where it could be gotten”—Dean warned:
DEAN: I think that there is no doubt about the seriousness of the problem we’ve got. We have a cancer within, close to the Presidency, that is growing. It is growing daily. It’s compounded, growing geometrically now, because it compounds itself.… Basically, it is because (1) we are being blackmailed; (2) People are going to start perjuring themselves very quickly that have not had to perjure themselves to protect other people in the line. And there is no assurance—
NIXON: That that won’t bust?
DEAN: That that won’t bust.
Dean by this time was wondering who “would have to fall on his sword for the President.” Himself? “Yes, I thought. Then, no. There had to be another way.” But if he refused to fall, he might be pushed. Dean’s other way was to beat his co-conspirators in the White House to the federal prosecutors and the Ervin committee and cut a deal.
Senate caucus room, hearings before the Watergate Committee, testimony of John Dean, June 25-29, 1973. Ponderously, inexorably, the two rival branches of the federal government were wheeling up their artillery against the abuse of presidential power. The judicial branch had demonstrated its power in Judge Sirica’s court; there was talk of impeachment in the House, though it had yet to initiate such proceedings; the Senate select committee hearings had opened on May 17.
The counter-tactic that Dean and fellow Nixon aides John Ehrlichman and H. R. Haldeman, in consultation with the President, had devised was for the White House to take “a public posture of full cooperation,” as Dean recalled, while privately trying to “restrain the investigation and make it as difficult as possible to get information and witnesses.” But with White House and CRP officials—Magruder and Dean himself among them—now jumping ship, the dark and criminal underside of the Nixon White House was being exposed to the Ervin committee and the full glare of television lights. On June 25, Dean began his testimony:
DEAN: To one who was in the White House and became somewhat familiar with its interworkings, the Watergate matter was an inevitable outgrowth of a climate of excessive concern over the political impact of demonstrators, excessive concern over leaks, an insatiable appetite for political intelligence, all coupled with a do-it-yourself White House staff, regardless of the law.
Dean’s reading of his 245-page opening statement took up the entire first day of his testimony. The next day, Georgia Democrat Herman Talmadge questioned him:
TALMADGE: Mr. Dean, you realize, of course, that you have made very strong charges against the President of the United States that involves him in criminal offenses, do you not?
DEAN: Yes sir, I do.
But Dean kept his finger coolly pointed at the President. Later that day, Joseph Montoya, Democrat of New Mexico, questioned him:
MONTOYA: Now, on April 17, 1973, the President said this: “I condemn any attempts to cover up in this case, no matter who is involved.” Do you believe he was telling the truth on that date?
DEAN: No, Sir.
MONTOYA: Will you state why?
DEAN: Well, because by that time, he knew the full implications of the case and Mr. Haldeman and Mr. Ehrlichman were certainly still on the staff and there was considerable resistance to their departure from the staff.
And on June 28, Tennessee Republican Howard Baker asked Dean the question he asked almost every witness—the question of the summer:
BAKER: What did the President know and when did he know it, about the cover-up?
DEAN: I would have to start back from personal knowledge, and that would be when I had a meeting on Sept. 15 [1972] when we discussed what was very clear to me in terms of cover-up. We discussed in terms of delaying lawsuits, compliments to me on my efforts to that point. Discussed timing and trials, because we didn’t want them to occur before the election.
As the White House launched a massive counterattack—it was the word, it argued, of a self-acknowledged leader of the cover-up fighting and bargaining to save his skin against that of the President of the United States— John Dean’s credibility became a chief topic of discussion. During the former counsel’s testimony, a man just outside the caucus room assembled an impromptu jury of twelve fellow spectators to pass judgment on Dean’s veracity and, implicitly, on Nixon’s guilt. The vote was unanimous in Dean’s favor. Two other spectators called out, “Make it fourteen.”
Day after day that summer, the Ervin committee elicited the damning testimony: that the “enemies list” was designed for the harassment of its targets through the IRS and other means; that an attempt was made to forge State Department cables in order to implicate President Kennedy in the assassination of Vietnamese President Diem; that Nixon had tape-recorded his conversations in the White House and his hideaway office in the Executive Office Building; that Ehrlichman deemed the burglary of the office of Daniel Ellsberg’s psychiatrist to be within the constitutional powers of the President.
More charges and revelations emerged from the committee and other investigations: that the President had taken fraudulent income-tax deductions; that he had used sizable government funds to improve his estates in Key Biscayne, Florida, and at San Clemente, California; that the financier and manufacturer Howard Hughes had made large secret donations of cash, supposedly for campaign purposes but apparently spent on private expenses by Nixon, his family, and his friends. The plea-bargained resignation in October 1973 of Spiro Agnew—charged with federal income-tax evasion on payoffs from construction company executives that he had accepted while governor of Maryland and even as Vice President—added to the portrait of a pervasive corruption surpassing even that of Grant’s and Harding’s Administrations.
Inch by inch Nixon fell back, fighting all the way, making public explanations that were soon proven false or declared “inoperative” by the White House itself, throwing his closest associates out of his careening sleigh as the wolves relentlessly closed in. Shortly before the Ervin committee began its hearings he shoved Haldeman and Ehrlichman out of the White House with garlands of praise and replaced Kleindienst as Attorney General with Elliot L. Richardson, who chose his old Harvard law professor, Archibald Cox, as special prosecutor. The struggle now was over presidential tapes that were believed relevant to the investigation. When Nixon balked at releasing the tapes either to Cox or to the Ervin committee, both subpoenaed him for this crucial evidence. Ordered by Judge Sirica to turn the tapes over to the court, the President proposed a compromise arrangement so egregiously self-protective that Cox turned it down.
Then the “Saturday Night Massacre”—Nixon commanded Richardson and then Deputy Attorney General William D. Ruckelshaus to sack Cox for defying a presidential order to give up his pursuit of the tapes through the courts. Both refused—Richardson resigned, Ruckelshaus was dismissed. Solicitor General Robert H. Bork, of the Yale law faculty, was hurriedly driven to the White House and designated Acting Attorney General, and promptly fired Cox. The outburst of public outrage once again drove Nixon back on the defensive. He agreed to hand the tapes over to Sirica, then reversed his “abolition” of the special prosecutor’s office and chose for Cox’s replacement a man whom Nixon expected to be more pliable, a conservative Texas Democrat, corporate lawyer, and reputed law-and-order man named Leon Jaworski. Nixon aroused more public suspicion when he handed over to Sirica a crucial tape with an eighteen-minute gap, which the court’s panel of experts found had been caused by repeated, probably deliberate erasures.
Leon Jaworski did indeed turn out to be a law-and-order man. When the President, citing the doctrine of executive privilege, refused to turn over to him additional tapes involving conversations with his aides, Jaworski argued first before Judge Sirica, who upheld him, and then, when Nixon took the case to the Court of Appeals, went to the Supreme Court for “immediate settlement.” On July 24, 1974, the Court rendered its decision in United States of America v. Richard Nixon, President of the United States. Chief Justice Warren E. Burger, after reading a brief tribute to his recently deceased predecessor, Earl Warren, summarized the Court’s unanimous finding. The President’s claim to executive privilege, the opinion held, “to withhold evidence that is demonstrably relevant in a criminal trial would cut deeply into the guarantee of due process of law and gravely impair the basic function of the courts.” The privilege could not “prevail over the fundamental demands of due process of law in the fair administration of justice.” When the news reached Nixon at San Clemente, according to Anthony Lukas, “the President exploded, cursing the man he had named chief justice,” and reserving a few choice expletives for Harry A. Blackmun and Lewis F. Powell, Jr., his other appointees. At first Nixon seriously considered challenging the Court, but he feared adding to the likely impeachment charges, and the Court’s unanimity made it impossible to claim that the decision was insufficiently definitive. In all the months of slow Chinese torture that Nixon suffered, it was probably the news from the High Bench that gave him the most sudden, piercing pain.
The Judiciary Committee, Room 2141, Rayburn House Office Building, 7:45 p.m., July 24, 1974. Over a hundred reporters looked on and about 40 million Americans watched on television as Chairman Peter Rodino rapped his gavel on the table. Solemnly he reminded the members of their responsibilities. “Make no mistake about it. This is a turning point, whatever we decide. Our judgment is not concerned with an individual but with a system of constitutional government. It has been the history and the good fortune of the United States, ever since the Founding Fathers, that each generation of citizens and their officials have been, within tolerable limits, faithful custodians of the Constitution and the rule of law.” But the minds of his fellow committee members were very much on one individual—the President of the United States. There had been doubts that this unwieldy committee of thirty-eight members, many of them highly partisan, and polarized between “Democratic Firebrands” and “Republican Diehards,” could handle the tough, risky task of impeachment.
Impeachment! During much of 1973 few even in the media had dared mention the word; it smacked of the impeachment and trial 105 years earlier of Andrew Johnson, an episode ill regarded in most recent histories. For months the committee and its big staff had been sorting through White House tapes and other records that the President, dragging his feet at every stage, had turned over to the committee or the courts. Day by day the specter of impeachment became more real. The Republican minority on the committee were an especially anguished lot. Many were personally as well as politically loyal to Nixon, who had done them many favors, including trips into their districts to give their campaigns the White House blessing. This was the case with Hamilton Fish, Jr., who to boot was the fourth consecutive Hamilton Fish to serve as a Republican member of Congress. His father, famed as a target of FDR’s jibes at “Martin, Barton, and Fish,” was still active, at eighty-five, in backing Nixon and demanding to know whether there could be “fair and impartial justice among the left-winged Democrats on the House Judiciary Committee who received large campaign contributions from organized labor.”
But the younger Fish was slowly moving toward impeachment after the Saturday Night Massacre, which had socked him “right in the gut,” and after reading tape transcripts. Some Republicans were outraged by the unending stream of revelations; others held out for “our President,” while Rodino sought to “mass” the committee in order to stave off accusations of blind partisanship. The committee rose to the occasion, with some noble utterances during its deliberations.
Barbara Jordan, Texas Democrat, woman, black: “Earlier today we heard the beginning of the Preamble to the Constitution of the United States. ‘We, the people … ’ It is a very eloquent beginning. But when that document was completed on the seventeenth of September in 1787 I was not included in that, ‘We, the people.’ I felt somehow for many years that George Washington and Alexander Hamilton just left me out by mistake. But through the process of amendment, interpretation, and court decision I have finally been included in ‘We, the people.’ Today, I am an inquisitor. My faith in the Constitution is whole, it is complete, it is total. I am not going to sit here and be an idle spectator to the diminution, the subversion, the destruction of the Constitution.”
M. Caldwell Butler, Virginia Republican: “For years we Republicans have campaigned against corruption and misconduct.… But Watergate is our shame. Those things have happened in our house and it is our responsibility to do what we can to clear it up.” He announced that he was inclining toward impeachment. “But there will be no joy in it for me.”
On July 27, 1974, the committee voted 27-11 to recommend impeachment on the ground that the President had “engaged personally and through his subordinates and agents, in a course of conduct or plan designed to delay, impede, and obstruct the investigation” of the Watergate burglary. Two days later the committee voted, 28-10, an article charging that Nixon’s conduct had violated the constitutional rights of citizens and impaired the proper administration of justice. The votes reflected a precarious coalition of committee Democrats and Republicans; the majority of Republicans still stood by their President.
But on August 5, Richard Nixon, in obedience to the Supreme Court decision, released the transcripts of three conversations which showed beyond any doubt that six days after the Watergate break-in—on June 23, 1972—he was at the center of the conspiracy to cover up that crime, obstructing justice by plotting to block the FBI investigation. “I was sick. I was shocked,” a middle-level White House official told a journalist. “He had lied to me, to all of us. I think my first thought, before that sank in, was of those Republicans on the Judiciary Committee … those men who had risked their careers to defend him.”
Now one of those men—Charles Wiggins—said, “I have reached the painful conclusion that the President of the United States should resign.” If he did not, “I am prepared to conclude that the magnificent career of public service of Richard Nixon must be terminated involuntarily.”
The White House, August 7-9, 1974. In the final days the two Nixons—the shrewd, confident calculator and the narcissist hovering between dreams of omnipotence and feelings of insecurity—emerged in the Watergate crucible. Even now he was a cold head counter, yet he appeared to be swinging erratically between holding out to the bitter end and throwing it all up. Senators Barry Goldwater and Hugh Scott and House Minority Leader John J. Rhodes arrived at the White House on August 7 to brief the President on the situation in Congress. After some small talk:
SCOTT: We’ve asked Barry to be our spokesman.
NIXON: Go ahead, Barry.
GOLDWATER: Mr. President, this isn’t pleasant, but you want to know the situation and it isn’t good.
NIXON: Pretty bad, huh? … How many would you say would be with me—a half dozen?
GOLDWATER: More than that, maybe sixteen to eighteen.… We’ve discussed the thing a lot and just about all of the guys have spoken up and there aren’t many who would support you if it comes to that.
I took kind of a nose count today, and I couldn’t find more than four very firm votes, and those would be from older Southerners. Some are very worried about what’s been going on, and are undecided, and I’m one of them.
NIXON: John, I know how you feel, what you’ve said, I respect it, but what’s your estimate?
RHODES: About the same, Mr. President.
NIXON: Well, that’s about the way I thought it was. I’ve got a very difficult decision to make, but I want you to know I’m going to make it for the best interests of the country.… I’m not interested in pensions. I’m not interested in pardons or amnesty. I’m going to make this decision for the best interests of the country.
SCOTT: Mr. President, we are all very saddened, but we have to tell you the facts.
NIXON: Never mind. There’ll be no tears. I haven’t cried since Eisenhower died. My family has been fine. I’m going to be all right.… Do I have any other options?
There were no options. After a bit more small talk his visitors left. The next night the President addressed the nation on television. He was calm, restrained. “As we look to the future, the first essential is to begin healing the wounds of this Nation, to put the bitterness and divisions of the recent past behind us and to rediscover those shared ideals that lie at the heart of our strength and unity as a great and as a free people.” There was no admission of guilt, no word about the lies he had told or the laws he had broken or the trust he had violated; he would say only that “some of my judgments were wrong”—but he announced his resignation, effective the next day, August 9, at noon.
In the last hours the President vacillated between mourning and brief bouts of euphoria, between weeping and laughter. In his departing speech to cabinet and staff on the morning of August 9, he talked about the White House—“this house has a great heart”—and about his father, “a streetcar motorman first”—and about his mother—“my mother was a saint.” Then an admonition from this man whose great hatreds had contributed to his fall: “… always remember, others may hate you, but those who hate you don’t win unless you hate them, and then you destroy yourself.”
Finally the scene, etched on the memory of America, of Nixon and his wife and daughters, their eyes brimming, walking out to the waiting helicopter. There in the door he turned to the crowd, and waved, a contorted smile on his face. From Andrews Air Force Base he and Mrs. Nixon flew west on the Spirit of ’76 to their home in San Clemente. Somewhere over central Missouri the presidency of Richard Nixon came to its end.
What did the President know? When did he know it? And when would the American people know what and when the President knew? These continued to be the critical questions facing Americans during most of the months of Watergate. The stunning answer came on August 5, 1974, when Nixon released the three damning tapes of June 23, 1972. At last the “smoking gun” lay before the people.
Knowing what had happened evoked the more compelling and intractable questions: Why? How could it have happened? Was Watergate due to one man, Richard Nixon, and his flaws of character? If so, why had he been joined in criminal and immoral acts by another thirty or forty men, not all of whom were close to him? Was Watergate, then, a product of the political institutions in which these men operated—of the “imperial presidency,” a hostile and biased media, the whole political and constitutional system? But what had shaped these institutions—psychological forces within the political elite, a corruption of the American national character, economic and social tendencies inherent in an individualistic, dog-eat-dog culture?
Nixon’s apologists defended him as a victim rather than a villain—as the legatee of dishonorable precedents set by previous Presidents, as the butt of a vindictive press, as simply acquiescing, in son-in-law David Eisenhower’s words, “in the non-prosecution of aides who covered up a little operation into the opposition’s political headquarters,” a long-established practice “that no one took that seriously.” John Kenneth Galbraith had predicted at the time of Nixon’s resignation that someone would also advance the argument that “there’s a little bit of Richard Nixon in all of us.” Galbraith added, “I say the hell there is!”
A more persuasive explanation of Watergate put the whole episode in a political and institutional context. The “swelling of the presidency,” wrote presidential scholar Thomas E. Cronin, had produced around the President a coterie of dozens of assistants, hundreds of presidential advisers, and “thousands of members of an institutional amalgam called the Executive Office of the President.” This presidential establishment had become a “powerful inner sanctum of government, isolated from traditional, constitutional checks and balances.” George E. Reedy, former press secretary to LBJ, saw beneath the President “a mass of intrigue, posturing, strutting, cringing, and pious ‘commitment’ to irrelevant windbaggery”—a “perfect setting for the conspiracy of mediocrity.” John Dean remembered the “blind ambition” that had infected him and others in the White House.
Jeb Magruder wrote that the President’s mounting insecurities and passions over Vietnam and the antiwar protests led to Watergate, for Presidents set the tone of their Administrations. But, Magruder continued, it was not enough to blame the atmosphere Nixon created. “No one forced me or the others to break the law,” he said. “We could have objected to what was happening or resigned in protest. Instead, we convinced ourselves that wrong was right, and plunged ahead.”
It was the sting of the media that drove Nixon to dangerous and desperate retaliatory tactics, some of his supporters contended. In fact, each side—all sides—exaggerated the extent to which the media supported their adversaries and, even more, the actual influence wielded by the media. The Watergate “battle of public opinion” was more like a vast guerrilla war in which a variety of political and media generals and colonels fought for advantage in the murk. Nixon repeatedly used television to reach the viewers over the heads of the press, but his credibility was suspect. Investigative reporters burrowed away, looking for fame as well as facts. Polls were used to influence public opinion as well as to test it. A polling organization friendly to the White House asked its sample: “Which action do you yourself feel is the more morally reprehensible— which is worse—the drowning of Mary Jo Kopechne at Chappaquiddick or the bugging of the Democratic National Committee?”
The nature of the public-opinion battle, moreover, changed during the two-year struggle. People tended to react to the early revelations as Democrats and Republicans, or as Nixon admirers and haters. The press, too, tended to divide along lines of party or presidential preference in 1972, when about seven of every ten newspapers endorsed Nixon and about one out of every twenty McGovern. At first the pro-Nixon newspapers tended to play down or ignore the Watergate burglary; then they faltered and shifted in the face of the avalanche of evidence of wrongdoing. All in all, two close students of Watergate public opinion concluded, grass-roots political attitudes played less an active than a reactive role in Watergate; they amounted not so much to a powerful, cohesive force pressing for a certain action as to a melting away of Nixon’s old constituency, especially after the smoking-gun revelation, which left him without his “base.”
In the end it was not in the “tribunal” of public opinion that the issue was settled but in the more formal tribunals of the American political and constitutional system. If Nixon hoped he could save himself eventually as he had done with his “Checkers” appeal in 1952, he was underestimating the legislative branch. A Congress that for months had been stupefied and almost immobilized by shocking revelations roused itself to concerted action, voting in committee to impeach. Even so, the system might well not have worked except for investigative reporters who refused to quit, a remarkable series of blunders by Nixon, stretching from his original taping of the White House to his failure to destroy the tapes, and other “chance” or aberrant developments. In some respects the constitutional system thwarted rather than facilitated action. The separation of powers and checks and balances, political scientist Larry Berman concluded, had not stopped “the espionage, the plumbers, the dirty tricks, the cover-up, the secret bombing of Cambodia, all the culmination of presidential imperialism.” It was the capacity of key persons—journalists, legislators, judges, even men in the White House circle—to rise in the face of doubt and suspicion and to defy the royal court itself that made all the difference by 1974.
Ultimately, Watergate became a test of moral leadership—a test that the White House dramatically failed to meet. There was no sense of embarrassment or shame, Magruder said later, “as we planned the cover-up. If anything, there was a certain self-righteousness to our deliberations. We had persuaded ourselves that what we had done, although technically illegal, was not wrong or even unusual.” Their foes were making a mountain out of a molehill. “We were not covering up a burglary, we were safeguarding world peace.” Besides, hadn’t previous presidencies—especially the JFK White House—prepared the way for “hardball politics”? Kennedy men had stolen the election of 1960 from their boss, some in the White House still believed. They recalled the old story that members of Kennedy’s White House loved to tell about the JFK staff man who was complaining of the misdeeds of the “rascals” in the enemy camp. When another staff man remarked mildly that there were rascals in their own camp as well, the first man said, “Ah, but they are our rascals!”
It fell to men like Archibald Cox and Elliot Richardson to exercise moral leadership in the crucible, but a desperate President could always find a complaisant man to carry out his orders. It was hard to accept the truth that the President was a liar, Elizabeth Drew reflected as she read through the smoking-gun tapes. It seemed impossible that a President was “capable of looking at us in utter sincerity from the other side of the television camera and telling us multiple, explicit, barefaced lies.” She felt torn between the “idea that people must be able to have some confidence in their leaders and the idea that in this day of image manipulation a certain skepticism may serve them well.” Here was a fundamental question about moral leadership in the American democracy. Perhaps a New Yorker cartoon of the time hinted at the nature of the problem, if not the answer. It showed one man telling another man in a bar: “Look, Nixon’s no dope; if the people really wanted moral leadership, he’d give them moral leadership.”
After Judge Sirica sentenced Jeb Magruder to a term of ten months to four years in prison, the former White House aide had ample time to analyze “why Watergate.” He blamed excessive power in the White House, Nixon’s “instinct to overreact in political combat,” and the lawbreaking by White House staff “out of a combination of ambition, loyalty, and partisan passion.” Analyzing his own wrongdoing, he ascribed it in some degree to the failure of his college professors to teach morality. Thus at Williams, political scientist Frederick L. Schuman had advocated a “tough power-politics approach to international diplomacy,” chaplain William Sloane Coffin was too rigid and confrontational a moralist, and James MacGregor Burns, while ideological, did not teach enough morality in his politics courses. He reflected with some bitterness that Coffin had defended draft-card burning and other illegal activities, in turn provoking illegality in the White House, even though “two wrongs do not make a right.”
Other Watergate wrongdoers also could reflect on the fortunes of political war. More than a score went to jail. Convicted of conspiracy, perjury, and obstruction of justice, Haldeman served eighteen months in prison; later he became vice president of a real estate development firm in Los Angeles. Ehrlichman, convicted of the same three crimes, emerged from eighteen months in jail fifty-seven, bearded, affable, and ready to embark on a career of writing about his Washington days. John Mitchell, convicted of conspiracy, obstruction of justice, and lying under oath, served nineteen months before being paroled, only to go through more years of ill health, disbarment, and separation from his wife, Martha, who later died of cancer. Charles Colson and Jeb Magruder, after serving seven months, turned successfully to Christian ministerial activities. G. Gordon Liddy, convicted both of the break-in and of conspiring to raid the office of Daniel Ellsberg’s psychiatrist, served the longest Watergate prison sentence—fifty-two months—because he refused to talk, even under subpoena; he later became a celebrity lecturer and security consultant. A dozen other Watergaters served brief sentences, then turned to business, law, writing, and lecturing. All the key Watergate participants save Mitchell wrote works of fiction or fact about the episode; many of these were best-sellers.
And the chief co-conspirator? Ten years after the break-in, Richard Nixon had published his own best-selling memoir, moved from California to Manhattan to the New Jersey hinterland, continued to receive federal pensions and free office space and clerical help like all former Presidents, and earned over $3 million from his writings. He was lionized at home and abroad, defended his actions in television interviews, wrote on foreign policy and foreign leaders, advised President Reagan, and was attacked by Haldeman and others for further distorting the record. Nixon told a CBS interviewer that he “never” thought about Watergate. “I’m not going to spend my time just looking back and wringing my hands about something I can’t do anything about.”
Nixon’s slow and careful reentry into public life only intensified the anger of those who believed he had deserved a conviction rather than a pardon. Here was the old-boy network in spades, now deciding the leadership of the nation. Here was the pinnacle of pardons, the Everest of exculpation, something for the book. It was the same old story—the big guys take care of themselves while the little guys get it in the neck.
Nixon’s white-collar crimes happened to coincide with a dramatic rise in public consciousness of lawbreaking by persons in corporations, government, and other organizations. The FBI in the 1970s found that the frequency of such crimes as bribery, kickbacks, payoffs, and embezzlement was growing at an alarming rate. The costs of white-collar crime, the Bureau estimated, eventually added a markup of about 15 percent to the price of goods and services for consumers. The FBI was now assigning over a fifth of its agents to the investigation of such crimes.
The moral ambiguity of the Nixon pardon was matched by confusion over the very nature of white-collar crime. Some Americans believed that, however heinous Nixon’s misdeeds were, the pardon was justified because no other punishment could compare with the humiliation and mortification an American President must endure in quitting office in the face of looming impeachment. And so with white-collar people. A bank teller, an insurance clerk, a postal employee, a politician living in a tight-knit community and known to everyone, some Americans believed, faced far more embarrassment on conviction than thieves working some distance from their communities.
Many investigators of white-collar crime had little patience with such popular distinctions. In a social economy undergoing rapid organizational and technological changes during the 1960s and 1970s, they faced trying problems in even defining, measuring, and understanding white-collar crime before tackling the questions of deterrence and punishment. Two seasoned investigators virtually threw up their hands over the task of definition, finally settling for “an intuitively satisfying understanding” encompassing a broad range of offenses. A widely accepted working definition was “illegal acts committed by nonphysical means and by concealment or guile, to obtain money” and other personal or business advantages. Even so, differences over definition were so wide that estimates of the incidence of white-collar crime also varied widely. If the FBI was finding an alarming growth, was this because the Bureau was broadening its definition of white-collar crime, or because its expanded white-collar unit was uncovering more offenses, or because there really was an increase in white-collar crime?
Even more daunting was the assignment of responsibility, which in turn involved the imposing of penalties for crimes committed. Since the vast majority of these misdeeds occurred in organizations, who should be held responsible—the leaders, the subleaders, the rank and file, the whole organization? The President, all or part of the presidential staff, the whole presidency, indeed the whole executive branch? A corporation’s top executives, its middle managers, the local managers perhaps “just following orders,” the whole corporation, the capitalist system?
Responsibility might, on the other hand, be so widely diffused in a corporation that individual liability would be impossible to determine, or it might be hidden in its interstices. Or it might be both expedited by electronic technology and cloaked within it. With business operations increasingly computerized and computers serving as “vaults” for corporate assets, computer-related crime—difficult to detect when perpetrated skillfully, with higher per-incident losses than other white-collar crimes—became a “universal and uniform threat.”
Most of the problems centered in large corporations. This was not new; two hundred years ago the Lord Chancellor of England had asked, “Did you ever expect a corporation to have a conscience, when it has no soul to be damned, and no body to be kicked?” and he was rumored to have added in a stage whisper, “By God, it ought to have both.” Corporations also posed some of the most dramatic issues of responsibility. Thus in the 1970s the Ford Motor Company built Pintos with the metal fuel tank located behind the rear axle. Although a safety device would have cost only eleven dollars per car, Ford did not remedy the situation, despite a rash of rear-end collisions causing fiery deaths and injuries. After three young women were burned to death, an Indiana jury in 1980 acquitted Ford of reckless homicide charges. The Firestone Tire and Rubber Company, the first American firm to supply steel-belted radial tires in large quantities to automakers, acknowledged a large number of accidents associated with the tires. In 1978, pressed by the “feds,” Firestone agreed to recall over seven million radial tires. Although many asbestos manufacturers knew of the danger of asbestosis and pneumoconiosis among their workers, few companies moved to rectify the situation.
Given the murky distribution of power in large corporations, to whom should punishment be meted out for misdeeds? And how large should the penalties be? Judges and juries faced dilemmas. If they levied moderate fines on executives—who in any event were usually protected against such liability so long as they had acted in “good faith”—these costs could be absorbed by the corporation and community in various ways and hence could not serve as much of a deterrent. If the court whacked the corporation with a huge fine, perhaps in the millions, the burden might fall on the innocent—the great majority of stockholders, the white-collar employees who might be denied a pay raise, and even the workers who might lose jobs if the fine was large enough to force the firm to scale down its operations or even submit to bankruptcy. An individual misdeed thus might be converted into a community crisis.
Most Americans were far more concerned about “street crime” than about white-collar crime, except when the latter had a physical impact, such as corporations leaving former employees gasping for breath on a hospital bed or customers incinerated in gasoline explosions. The estimated number of all offenses—street and white-collar—known to police almost quadrupled between 1960 and 1983, from 3.4 million to 12.1 million. Even when adjusted for a population increase from 179.3 million in 1960 to 234 million in 1983, the rise was still startling—from 1,887 per 100,000 persons to 5,159. Violent crimes such as murder, forcible rape, robbery, and aggravated assault quadrupled, while property crimes such as burglary, larceny-theft, and motor vehicle theft tripled. Thus there was a marked increase in crimes of an especially ugly and devastating nature.
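The population adjustment behind these figures is simple arithmetic: a rate per 100,000 persons is the number of offenses divided by the population, multiplied by 100,000. A minimal sketch of the computation, using the rounded totals quoted above (the cited rates of 1,887 and 5,159 were calculated from unrounded official counts, so the results here differ slightly):

```python
# Rate per 100,000 residents = offenses / population * 100,000.
# Inputs are the rounded totals quoted in the text; the official rates
# cited (1,887 for 1960 and 5,159 for 1983) used unrounded counts,
# so these results are only approximate.

def rate_per_100k(offenses: int, population: int) -> float:
    """Offenses per 100,000 residents."""
    return offenses / population * 100_000

print(round(rate_per_100k(3_400_000, 179_300_000)))   # ~1,896 (1960)
print(round(rate_per_100k(12_100_000, 234_000_000)))  # ~5,171 (1983)
```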
An alarmed public watched the rising crime rate—and experienced it. In the early 1980s Americans, regardless of race, sex, education, income, size of city, or party membership, responded “more” to the Gallup poll question “Is there more crime in this area than there was a year ago, or less?” People felt less safe at home at night, more fearful of walking alone at night within a mile of their homes, widely concerned that their property might be vandalized, their home burglarized, that they might be robbed in the street and injured in the process.
The polling returns showed some anomalies. The fear was disproportionate to the actual numbers of victims. In 1982 fewer than one half of 1 percent of the national sample had been injured by a burglar at home, but nearly a third allowed that they worried at least a “good amount” about its happening. At the same time, 51 percent of the sample answered “too much” to the question “Do you think television news gives too much attention to stories about crime, not enough attention, or what?” while 29 percent answered “about right” and 18 percent “not enough.”
The intensity of the national debate over the “cause and cure of crime” rose even faster than the crime rate. The centerpiece of the debate was the issue of poverty and crime. Since some supporters of LBJ’s “War on Poverty” had touted it as a fundamental solution to the problem of crime arising from material poverty, it was easy for critics of that war, including anti-Great Society Republicans and the ideological right as well as skeptical scholars, to point out that crime had risen along with affluence. “Liberals first denied that crime was rising,” James Q. Wilson wrote. “Then, when the facts became undeniable, they blamed it on social programs that, through lack of funds and will, had not yet produced enough gains and on police departments that, out of prejudice or ignorance, were brutal and unresponsive. It was not made clear, of course, just why more affluence would reduce crime when some affluence had seemingly increased it, or why criminals would be more fearful of gentle cops than of tough ones.”
Others preferred to probe the “root causes” of crime. As a result of the “baby boom” following World War II, it was noted, the segment of the population from fifteen to twenty-four years old grew by over a million persons a year during the 1960s. This was said to be a highly crime-prone group that may have caused roughly half of the crime rise. Still others found the source of crime, especially white-collar crime, in the structure of economic and political power within American corporate capitalism. Still others contended that a growing underclass of the poor and alienated had been left in the slums; that the superabundance of expensive consumer goods for the middle classes, including costly cameras, color televisions, and the like, had boosted both the opportunity and the temptation of crime; that drugs in the 1960s and thereafter, like alcohol in the 1920s, had created an addictive subculture dependent on crime; that the root of the problem was not poverty in the measurable monetary sense but a culture of ignorance, apathy, degradation, mental and physical illness, low motivation, and damaged self-esteem that had little connection with rising affluence. Most likely the “root cause” lay in the reinforcing interaction of all or most of these factors.
There was little debate over the impact of rising crime on the criminal justice system. The number of criminal cases filed in all federal district courts rose from 28,000 in 1960 to almost 36,000 in 1983, in federal appeals courts from 623 to 4,790. Criminal trials completed soared from 3,500 to over 6,600 between the same years. These criminal cases in the federal courts had to compete for personnel and funds with an almost fivefold jump in federal civil filings, and all these with an explosion of criminal and civil cases in state and local courts. This enormously stepped-up caseload fell on a system of criminal justice with often ancient features—sheriffs and police, bailiffs and bail vendors, parole and probation officers, arraignments, charges, hearings, trials, postponements, continuances, depositions, grand and petit juries, sentences, appeals, revocations, clemency and pardons—a system that had hardly changed in essential form from the days of Dickens’s Bleak House. It was as if a great mass of cars, buses, and trucks had suddenly overwhelmed an ancient network of roads, ferries, canals, horse trails, and tollhouses.
Delay running to months and years was only the first in a series of stinging indictments brought against the whole criminal justice system. Plea bargaining had become so extensive, it was charged, that the court proceeding had often become but a charade in which pleas were recorded before a judge, satisfying the demands of justice while mocking them. Drawn disproportionately from the lower-middle and middle classes, juries often were not competent to rule on street crime. The failings of the system, wrote a Harvard law professor and former prosecutor, were not “isolable, incidental features of a generally sound process” but characteristic and intrinsic features. The “revolution in criminal procedure” that people liked to talk about, he said, was more a matter of just spinning wheels.
As Americans neared the bicentennial of the establishment of the federal judiciary in 1789, the intellectual disarray of the national deliberation over the state of the criminal justice system overshadowed the institutional disorder of the system itself. Lacking were close analyses of the relationship of immediate expedient means, short-run ends, and ultimate goals of criminal justice; careful ordering of policy priorities linked to fundamental values such as liberty and equality; a consideration, both imaginative and empirical, of the wider psychological, legal, and political ramifications of the social pathology of the culture of poverty; a capacity intellectually to transcend expediency and everyday coping in dealing with problems rising under the criminal justice system. Perhaps the most poignant, if not most serious, reflection of this intellectual disarray was a hardly concealed retreat from theory in favor of specific policies aimed at particular problems. Policy analysis probed not the root cause of a problem but what policy measures and tools might produce “at reasonable cost a desired alteration”—typically a reduction in specified forms of crime. There was one great advantage to “incapacitation” (incarceration) as a crime control strategy, James Wilson wrote—“it does not require us to make any assumptions about human nature.”
In the wake of Watergate, the mushrooming of street crime, and horrendous insults by corporations and by government to people’s health and their natural environment, there still was no grand debate over American values and the principles and workings of the criminal justice system. Gone were the days when French and American revolutionists had fought to protect the legal rights of individuals, regardless of social rank or class, against the establishment thinkers seeking to defend the ancient prerogatives of state and church. Gone even were the times, fifty years back, when “legal realists,” often of New Deal persuasion, had jousted with sacrosanct legal principles that, embodied in judicial findings, could be applied “dispassionately” to current problems. Aside from a few intellectual ventures—notably the critical legal studies movement at Harvard and a handful of other law schools—the debate of the 1980s took the typically American form Tocqueville had noted a century and a half before: grandiose rhetoric about vague but compelling principles like “equal justice under law” and numerous small devices for tinkering with the system, with no firm analytical linkage between values, ordered priorities, and specific measures.
The clouds of rhetoric obscuring the ideological battle did not, however, fully cloak the trench warfare over specific principles and policies. In general, conservatives favored deterrence theories and practices, notably incarceration; in general, liberals supported reformation ideas and measures, notably rehabilitation. These conflicting principles affected choices made and policies pursued across the vast range of the criminal justice system—availability of public defenders, sentencing, parole and probation, indeed the whole gamut of Fourth, Fifth, and Sixth Amendment liberties. In the absence of intellectual clarity, however, these issues were typically settled on the basis of the short run, the expedient, the “practical,” of the “facts” of each case rather than an overarching intellectual framework. Thus, plea bargaining was used by prosecutors and defense attorneys alike as a means to obtain quick, acceptable settlements. Rarely did the question arise as to the greater stakes involved. “If the punishment imposed is usually a ‘normal price’ for the crime and the defendant’s benefit from his bargain is less than he hoped,” Lloyd Weinreb observed, “nevertheless he is institutionally encouraged to believe that he is trading some of his freedom in order not to be deprived of more.” And the easy trading of freedom—freedom from confinement—hardly accorded with this supreme American value.
By and large during the 1970s the hardheaded men, the practical people, the “tough-minded” pragmatists were in charge of the American criminal justice system. They were not stick-in-the-muds—they called for many a reform to energize the system: more police, more judges, more and bigger jails, capital punishment, “flat-time sentencing,” severer sentences, various reorganizations of the court and penal systems. Their ultimate recourse in practicality was something no one loved in principle—the jail. For incarceration evaded all the tough intellectual issues and put the lawbreaker in a controlled situation.
But the inmates had their practicality too. “The warden might control his subordinates,” a student of criminal justice noted, “but together they did not completely control the prison. An inmate subculture with a clearly defined power structure and differentiated roles exercised considerable power of its own. A trade-off between staff and inmates developed: the inmates accepted the general routine of prison life (‘did time’), and in return the staff overlooked systematic violations of prison rules (contraband money, bootleg alcohol and drugs, pervasive homosexuality, including gang rapes, and random violence). Prison violence was frighteningly routine.” On their own turf, someone gibed, the inmates were conducting their own form of plea bargaining.
Practical, hardheaded men were in charge at upstate New York’s huge Attica prison in 1971: a warden who had worked his way up from the rank of prison guard and won a reputation as a disciplinarian; a seasoned corrections commissioner who would negotiate with inmates up to a point; a worldly governor, Nelson Rockefeller, who preferred to leave “local” crises in the hands of experienced professionals at the scene. Violence had swept New York prisons in recent months and these men knew that Attica, overcrowded with 2,200 inmates, was seething with unrest; they seemed less aware of subtler forces, such as the reaction of Attica’s militant blacks to the killing of their hero, George Jackson, during an alleged breakout effort at San Quentin prison in California. A sudden fracas at Attica opened the floodgates of hatred between the inmates, most of them black, and the all-white guard unit. Hundreds of inmates, sweeping through cellblocks, beat up some of the guards, seized forty hostages, set fire to the chapel and the school. Shortly they formed a governing body and issued a set of demands.
During four taut and anguished days the inmates and the local authorities negotiated, while Rockefeller stuck to his Albany office and a group of observers, including the journalist Tom Wicker and the civil rights attorney William Kunstler, served as go-betweens. They could not break the deadlock. Suddenly, after inmates replied “Negative - negative!” to what amounted to an ultimatum from the authorities, state police and prison guards armed with rifles and shotguns moved in behind clouds of tear gas. Nine minutes later forty-three persons, including ten hostages, lay dead or dying. Cornered, cowed, stripped naked, surviving inmates were crowded back through tunnels to their cells.
A few months later, in a poetry workshop for Attica inmates, Clarence Phillips wrote:
What makes a man free?
Brass keys, a new court
Decision, a paper signed
By the old jail keeper? …
What makes a man free?
Unchained mind-power and
Control of self— Freedom now!
Freedom now! Freedom now!
The man who succeeded Gerald Ford on January 20, 1977, had swept onto the American political scene like a gust of fresh air during the presidential primaries of the previous winter. A proud “outsider,” an ex-governor from Georgia, Jimmy Carter had bested nationally known Democratic pros like Henry M. Jackson, Sargent Shriver, and Morris Udall in the preconvention battles. Then he had narrowly defeated President Ford—the first time a White House incumbent had been beaten since Hoover in 1932—with an assist from the Watergate albatross hanging over the GOP.
By inauguration day Carter had acquired a lustrous media image. After years of mendacity and mediocrity in government, now appeared this man of religious conviction and high ethical standards. After years of drift and deadlock in government, a leader of proved competence—competence at running a business and a state, a submarine and a tractor, and those tough primary campaigns, a demanding, clearheaded man—seemed to have stepped forward. Even his appearance—his bluff, open face creased by a wide smile, his hair style that looked both stylish and rustic, his quick, buoyant ways—set him off from the gray, sedate men in high office.
To be sure, there was an air of mystery behind the sunny façade. For a man with relatively brief political experience he showed an astonishing flair for capturing nationwide media attention. As an upward striver who, in the judgment of political scientist Betty Glad, had “proved to be a good, but not extraordinary, governor,” he seemed to be aiming a bit prematurely for the top job. A proud and self-confessed “idealist” speaking out in round biblical terms, he said also that he was an “idealist without illusions”—which, as in the case of John Kennedy, appeared to leave him plenty of leeway. He had a knack of seeming both above politics and very canny in political maneuver and combat. He appeared religious but not pious, compassionate but not sentimental, moral but not moralistic.
If some in the media were put off by his southern Baptist ways—his joyful hymn singing and hand holding in church, his appeals for more love and compassion, his southern accent that seemed to grow thicker the nearer he was to home—many Americans were happy that he was from the Deep South, the first President to have roots in that region for over a century. Surely he would bring fresh regional and cultural perspectives to bear on big government in Washington. People looked to him to transcend the racial conflicts that had wounded blacks, the South, and the nation. And “Solid Southerners” were pleased that at last there was a President who spoke without an accent.
And now, on inaugural day, he demonstrated afresh his human touch and media appeal when he bounded out of his limousine on the way to the White House and walked down Pennsylvania Avenue hand in hand with the new First Lady and daughter Amy. In office he promptly pardoned Vietnam draft violators—a happy contrast with Ford’s full pardon of Richard Nixon. When Carter fulfilled a campaign promise to keep close to the people by conducting a presidential “town meeting” in a small Massachusetts community, he brought the government home from its Washington remoteness, and incidentally offered a harvest of photo opportunities.
Carter’s deepest concern was for human rights. This commitment had quickened in his own recent immersion in the struggle for civil rights in the South. He had been slow to enlist in that struggle, he freely admitted, but by the end of his governorship, “I had gained the trust and political support of some of the great civil rights leaders in my region of the country. To me, the political and social transformation of the Southland was a powerful demonstration of how moral principles should and could be applied effectively to the legal structure of our society.” He well knew the view that Presidents had to choose between Wilsonian idealism and Niebuhrian realism, or between morality and power, but he rejected the dichotomy. “The demonstration of American idealism” was a practical and realistic “foundation for the exertion of American power and influence.”
Carter was publicly pledged to the campaign for human rights. “Ours was the first nation to dedicate itself clearly to basic moral and philosophical principles,” he had said in accepting his nomination for President. In his inaugural address he proclaimed that people around the world “are craving, and now demanding, their place in the sun—not just for the benefit of their own physical condition, but for basic human rights.” His Secretary of State, Cyrus Vance, and his national security aide, Zbigniew Brzezinski—both members of the eastern foreign policy establishment— stood with him in his dedication to a “principled yet pragmatic defense of basic human rights,” as Vance summarized it.
How apply this noble principle? The President could draw from a broad array of human rights—the heritage of civil and political liberties, such as freedom of thought, religion, speech, and press, forged over the centuries; the right to participate in government, a right much broadened in the Western world during the nineteenth century; personal protection rights against arbitrary arrest and imprisonment, inhuman treatment or punishment, degradation or torture, denial of a proper trial; or a battery of newfound freedoms, such as rights to food, shelter, health care, education. Many of these rights were embodied in the United Nations charter, the Universal Declaration of Human Rights that in 1948 virtually all nations had approved at least in broad terms, and in the Helsinki agreements.
A week after Carter’s inauguration the State Department warned Moscow that any effort to silence the noted physicist and dissident Andrei D. Sakharov would be a violation of “accepted international standards in the field of human rights.” The Soviet ambassador, Anatoly Dobrynin, promptly telephoned Secretary Vance to protest interference in Soviet internal affairs. Undeterred, the Administration appeared to launch a campaign, expressing concern over the arrests of dissidents Aleksandr Ginzburg, Anatoly Shcharansky, and Yury Orlov, receiving a dissident in exile at the White House, planning substantial boosts in funding for Radio Free Europe and Radio Liberty and in broadcasts to Russia by the Voice of America. In mid-March 1977 the President, in an address to the United Nations General Assembly, once again stated his intention to press for human rights globally.
At the same time Carter was determined to pursue SALT II negotiations, despite warnings that the two efforts would collide. Vance, who had been urging quiet diplomacy as opposed to public pressure on human rights, journeyed to Moscow in late March hoping at least to pick up on negotiations as Ford and Kissinger had left them in Vladivostok, and if possible to move ahead with a much more ambitious and comprehensive plan of the President’s. The Soviets, who had responded to the campaign for the dissidents with more arrests, were in no mood to bargain. After cataloguing alleged human rights violations in the United States and attacking the new SALT proposals as harmful to Soviet security, Brezhnev sent Vance home empty-handed.
Nothing more clearly reflected the fundamental ambiguity and the later shift in the Carter foreign relations approach than its rapidly evolving human rights policy. What began as Vance’s ambivalent “principled yet pragmatic” posture became increasingly an attack on specific human rights violations that the Kremlin on its side chose to interpret as an onslaught against Soviet society. The American “‘defense of freedom,’” said Pravda, was part of the “very same designs to undermine the socialist system that our people have been compelled to counter in one or another form ever since 1917.” When Carter contended that he was upholding an aspiration rather than attacking any nation, the Russians tended to suspect Brzezinski’s motives rather than the President’s moralisms. The human rights files in the Carter presidential library make clear what happened: the President’s early Utopian tributes to human rights encouraged Soviet, Polish, and other dissidents and their American supporters to put more pressure on the Administration, especially through sympathizers like the national security aide. At the same time that Moscow was condemning American human rights violations at home, black leaders were complaining to the White House that the Administration was retreating on its promises to minorities.
Nearer home Carter applied his foreign policy of “reason and morality” with considerable success during his first year in the White House. He and his wife had a long-standing interest in Latin America, had traveled there, and knew some Spanish. He saw in Latin America, according to Gaddis Smith, a “special opportunity to apply the philosophy of repentance and reform—admitting past mistakes, making the region a showcase for the human-rights policy.” Of past mistakes there had been plenty—years of intervention, occupation, and domination. FDR’s Good Neighbor policy had brought little surcease. The CIA-managed coup in Guatemala in 1954, John Kennedy’s abortive invasion of Cuba, LBJ’s intervention in the Dominican Republic, the efforts of the Nixon Administration and the CIA to undermine the duly elected leftist President of Chile, Salvador Allende, and their contribution to his eventual overthrow and death—all these and more still rankled in the memories of Latin American leaders, liberal and radical alike.
No act of Yankee imperialism was more bitterly recalled than the imposition on the Republic of Panama in 1903 by the United States and its Panamanian “puppets” of a treaty defining a strip of land ten miles wide connecting the two oceans while cutting Panama in two and giving the northern colossus near-sovereignty in perpetuity over the canal and the surrounding area. Following a bloody fracas between Panamanians and troops of the United States in 1964, negotiations had been dragging along under Johnson, Nixon, and Ford pointing toward the renegotiation of the 1903 treaty. Carter decided to move quickly. He was well aware of the virulent opposition to a settlement by nationalists in both countries. “We bought it, we paid for it, it’s ours and we’re going to keep it,” had been one of Ronald Reagan’s favorite punch lines in the 1976 presidential primaries. Backed up by Defense Secretary Harold Brown’s view that the canal could best be kept in operation by “a cooperative effort with a friendly Panama” rather than by an “American garrison amid hostile surroundings,” Carter and Vance negotiated two treaties with Panama, one repealing the 1903 treaty and providing for mixed Panamanian-United States operation of the canal until December 31, 1999, the other defining the rights of the United States to defend the canal following Panama’s assumption of control on that date.
The White House then threw itself into the battle for Senate ratification, using the time-honored tools of exhortation, bargaining, and arm twisting. The opposition counterattacked with its traditional devices of delay and diversion. Only a mighty effort to mobilize every scrap of his influence enabled the President in the spring of 1978 to win acceptance of the treaties, and then by only the thinnest of margins and at considerable loss of political capital. So virulent was the opposition that, as Carter glumly noted in his memoirs, a number of senators “plus one President” were defeated for reelection in part because of their support of the treaties.
As usual the Middle East confronted Washington with the most intractable problems of all. How defend Israel’s security without antagonizing the Arab states? How persuade the Israelis to be more conciliatory toward the Arabs? How find a humane solution to the plight of the Palestinians, whether inhabitants of the occupied Gaza Strip and West Bank or holders of a precarious Israeli citizenship? How strengthen friendly Arab states militarily enough to steel their resistance to Soviet power but not embolden them also to threaten Israel? Carter approached these problems not only with the traditional top-priority commitment of Washington to Israel based on domestic political and national security considerations, but also with a deep moral concern. He believed “that the Jews who had survived the Holocaust deserved their own nation,” and that this homeland for the Jews was “compatible with the teachings of the Bible, hence ordained by God.”
For sixteen months Carter and Vance conducted an intensive, often desperate search for peace in the Middle East. It was their good fortune that Egypt was ruled by the remarkably farsighted President Anwar el-Sadat, with whom Carter established cordial personal relations, and that Israel came to be headed by a tough negotiator, Menachem Begin, who had enough standing with Israeli hard-liners to risk agreement with the Egyptians. In his own efforts in Washington and in the Middle East, Carter proved himself a resourceful and indefatigable mediator. Often his hopes flagged, particularly after Israeli troops invaded Lebanon in March 1978 in retaliation for a terrorist assault that cost the lives of thirty-five Israelis, all but two of them civilians. To maintain credibility with the Arabs he supported a UN condemnation of the invasion and demand that Israel withdraw its forces. The reaction of American Jews was so sharp, Carter wrote later, that “we had to postpone two major Democratic fund-raising banquets in New York and Los Angeles because so many party members had cancelled their reservations to attend.”
Caught between implacable forces, Carter resolved in July 1978 that it “would be best, win or lose, to go all out” to obtain a peace agreement. He persuaded Sadat and Begin to attend together a September meeting at Camp David. For thirteen days the President and his aides conducted with the two leaders a kind of footpath diplomacy between the cabins. The upshot was Sadat and Begin’s agreement to two sets of guidelines: a framework for an Egyptian-Israeli peace providing that the Sinai would be turned over to Egypt by stages while protecting certain Israeli interests there; and a separate framework for “Peace in the Middle East,” providing for a five-year period during which a self-governing authority under Egypt, Israel, and, it was hoped, Jordan, would replace the existing Israeli military government in the West Bank and Gaza, while the three nations negotiated the final status of the territories.
At perhaps the high point of his presidency Carter declared to a joint session of Congress, with Begin and Sadat present: “Today we are privileged to see the chance for one of the sometimes rare, bright moments in human history.” But nothing important ever came easy for Jimmy Carter. When Begin and Sadat were unable to agree on final peace arrangements before the planned deadline of mid-December 1978, the President decided as “an act of desperation” to fly to Cairo and Jerusalem for personal diplomacy. Once again he demonstrated his flair for mediation, gaining agreement from both sides on the remaining thorny issues, with the aid of inducements and guarantees from the United States. Amid much pomp and circumstance, Sadat and Begin signed the final agreement on the White House lawn late in March 1979.
Wrote Carter in his diary, “I resolved to do everything possible to get out of the negotiating business!”
Over all these efforts abroad there fell—at least in American eyes—the shadow of the Kremlin. No matter how much the White House denounced violations of human rights outside the Kremlin’s orbit the issue always came back to Soviet repression of dissidents. A major disturbance could not erupt in a newly emerging African nation without suspicion in the White House that Moscow plotters were afoot. The Administration began its peacekeeping effort in the Middle East in cooperation with the Soviet Union, only to turn away from it out of fear that Moscow was interested less in peace than in extending its own influence in the region. The more Washington pursued its rapprochement with Peking, the more it encountered hostility in Moscow. The Administration suspected that the Russians were bolstering their military strength in Cuba. Even the Panama settlement, which seemed far outside the Soviet sphere of influence, was almost fatally jeopardized by those Americans who feared that the strategically vital canal would under Panamanian control prove vulnerable to Soviet political or military threat.
The view from Moscow was clouded by its perception of an ever more threatening America. Washington was seeking to exclude Soviet influence in the Middle East—a strategic area in Russia’s own back yard. The Americans were trying not only to make friends with the Chinese but to arm them against the Soviet Union, and thereby encircle it. Washington was trying to block the Soviet Union, as the mother communist nation, from exercising its right and duty to help both stabilize and strengthen “national liberation” movements in the fledgling nations. Above all was the matter of arms—the Soviet Union was on the verge of achieving some kind of nuclear parity with the United States, at which point the Carter Administration undertook a big new arms program that could result only in a spiraling arms race.
Both sets of perceptions were misconceptions. Washington was more interested in restoring triangular diplomacy with China than in exacerbating the Sino-Soviet rupture. The Russians were more interested in stability in the Middle East than in military advantage. Each side saw itself as defensive, peace-loving, cooperative, the other as offensive, aggressive, destructive. Looking at Moscow, Washington remembered the brutal invasion of Hungary in 1956, the shipping of missiles to Cuba in 1962, the suppression of Czechoslovakia in 1968. Looking at Washington, Moscow recalled the attack north of the 38th parallel into North Korea in 1950, the occupation of the Dominican Republic in 1965, the bombing of North Vietnam and invasion of Cambodia in the 1960s and 1970s.
Mutual suspicion and hostility of the two superpowers touched every part of the globe—even the smallest and weakest nations. The tiny Yemens were a prime example. South Yemen, with its major naval facilities at Aden, the former British port, accepted aid from Moscow and gave it access to the port. North Yemen, fearful of the Yemenis to the south, wanted American military aid. When the Soviets began to give heavy aid to Ethiopia in support of its dispute with Somalia, Brzezinski saw a new Soviet threat to the Middle East. The canny North Yemenis, seeing their chance to pay off the southerners and gain more aid from Washington, sent alarmist reports of a looming invasion from the south. Alert to this mortal peril, Washington sent American arms and advisers to North Yemen and dispatched the carrier Constellation to the Arabian Sea “to demonstrate our concern for the security of the Arabian Peninsula.” In the end several Arab states mediated the scrap between the Yemenis—and North Yemen made an arms deal with the Russians twice the size of the American deal. The fight between the Yemenis, scholars later concluded, had not been plotted by Moscow. “The United States,” according to historian Gaddis Smith, “was responding, not to a reality, but to imaginary possibilities based on the assumption of a sinister Soviet grand design.”
Nor was Washington plotting in most of these situations. It was a classic case of confusion rather than conspiracy. At the center of the confusion was the President himself. He continued to be convinced, during his first year in office, that he could crusade against human rights violations in Russia and at the same time effectively pursue détente with Moscow. During his second year he was still talking détente and SALT II but emphasizing also the need to strengthen United States forces in Europe to meet the “excessive Soviet buildup” there. By mid-1978, Carter’s ambivalence was so serious that Vance formally requested a review of relations with the Soviets, noting “two differing views” of the relationship. The emphasis, Vance said, had been on balancing cooperation against competition; was the emphasis now merely on competition? When Carter at Annapolis in June reaffirmed détente but now spoke a language of confrontation, the press complained about “two different speeches,” the “ambiguous message,” and general “bafflement.” Moscow, however, viewed the speech solely as a challenge.
Carter was now enveloped in a widening division, especially between Vance and Brzezinski. The Secretary of State, who had built his reputation largely on high-level negotiations during the 1960s, spurned ideology in favor of détente through persistent—and if necessary severe—diplomacy. The national security adviser, son of a prewar Polish diplomat, had taken a hard line toward Moscow since the 1950s. In 1962, during the Cuban missile crisis, from his Columbia University post he had telegraphed the Kennedy White House a warning against “any further delay in bombing missile sites.” Under Carter the two men repeatedly disagreed over policy toward the Soviet Union—most notably the extent to which the “China card” should be played against Moscow. And they insistently denied the disagreement—until it came time for their memoirs. Vance remembered the national security adviser as afflicted with “visceral anti-Sovietism.” Brzezinski evaluated the Secretary of State “as a member of both the legal profession and the once-dominant Wasp elite,” operating according to “values and rules” that were of “declining relevance” to both American and global politics.
The President saw the two men as balancing each other’s strengths and weaknesses, but instead of moving steadily between them, he followed a zigzag path. His aim was still a summit meeting with the Russians for a climactic effort to achieve a second SALT agreement. During 1978, however, playing the “China card” in a manner tantamount to playing with fire, he allowed Brzezinski to journey to China, where the security adviser urged the not unwilling Chinese to step up their diplomatic and political moves against Moscow. The morbidly distrustful Russians suspected that the Yankees might sell “defensive weapons” to the Chinese.
The summit was further delayed while Carter, amid intense publicity, received and entertained Deng Xiaoping at the White House at the end of January 1979, only a few weeks after Washington broke formal diplomatic relations with Taiwan and established full relations with China. Carter and Deng got along famously, signing agreements for scientific and technological cooperation. The Chinese leader even confided to the President his tentative plans to make a punitive strike into Vietnam because of Hanoi’s hostility to Peking. Carter tried weakly to discourage this, but the Chinese attacked within three weeks of Deng’s visit to Washington.
On the eve of flying off in June 1979 for the summit the President announced his decision to develop the MX missile. By the time he and Brezhnev met in Vienna, much of the will in both camps for comprehensive peacemaking had slackened. Brezhnev, old and ailing, seemed to have lost his energy and grasp of issues. The two men signed a package of agreements, elaborately and cautiously negotiated over a period of many months, providing for limitations in land-based missiles, submarine-based MIRVed missiles, bombers equipped with multiple missiles, and other arms. SALT II was still a respectable step forward—if the step could be taken. Following the Soviet invasion of Afghanistan in the final days of 1979, however, the President asked the Senate early in the new year to defer action. This delay, and Reagan’s condemnation of the treaty during the 1980 campaign, killed SALT II’s chances—the most profound disappointment of his presidency, Carter said later.
Historians will long debate the causes of the malaise that afflicted Jimmy Carter’s presidency about the time he began the last third of his term. Was it largely a personal failure of leadership on the part of Carter and his inner circle at a crucial point in his Administration? Or was the loss of momentum and direction during 1979 more the result of factors that plague every President—intractable foreign and domestic problems, a divided party, a fragmented Congress, a hostile press, limited political resources? Or was it a matter of sheer bad luck—a series of unpredictable events that overwhelmed the Administration?
In his disarmingly frank way, Carter himself admitted a failure of personal leadership. In midsummer of 1979, he removed his government to Camp David and summoned over a hundred Americans—political, business, labor, academic, and religious leaders—for long consultations. He then emerged to declare, in an eagerly anticipated television speech, that the nation was caught in a crisis of confidence, a condition of paralysis and stagnation to which his detached, managerial style of leadership had contributed. At its center, he said, was the energy crisis, whose solution could “rekindle our sense of unity, our confidence in the future.” At a specially convened cabinet meeting two days after the speech, he stated, according to a participant, “My government is not leading the country. The people have lost confidence in me, in the Congress, in themselves, and in this nation.” A week before his 1980 election defeat he graded himself on CBS’s 60 Minutes, giving his presidency a B or a C plus on foreign policy, C on overall domestic policy, A on energy, C on the economy, and “maybe a B” on leadership. For a President, B and C are failing grades.
Carter’s shifts toward the middle ground in domestic policy and confrontation in Soviet relations, along with his loss of popularity at home, had opened up a leadership vacuum that was bound to attract a liberal-left Democrat of the stripe of Robert Kennedy, Eugene McCarthy, George McGovern, or indeed the 1976-style Jimmy Carter. Would Edward Kennedy run? Since his brother Robert’s assassination, Democratic party leaders and rank-and-file enthusiasts had been trying to recruit him, but the young senator had proved to be a master at saying no. Now, thoroughly disappointed by Carter, he decided to take on the toughest of political assignments, unseating a President of one’s own party. At first Kennedy appeared unable to define his alternative program coherently, and when he took a fling at the dethroned Shah of Iran and the “umpteen billions of dollars that he’s stolen from Iran,” the media treated this as a campaign gaffe to be derided rather than a policy issue to be debated.
Carter’s early handling of the seizure in November 1979 of the American embassy in Teheran and sixty-three American hostages produced the usual rally-’round-the-President surge in public opinion. Kennedy failed to gain momentum after running far behind the President in the Iowa caucuses. Later the senator picked up strong support in urban areas when he spoke firmly for détente abroad and anti-inflation controls at home, but he never headed his adversary. Some of the President’s men argued that Kennedy’s run hurt Carter in the fall contest with Ronald Reagan, but Democrats showed their usual capacity for reuniting before the final battle. In retrospect it appeared that Carter had defeated himself, largely by appearing to have faltered as a strong leader, a sitting duck for Reagan’s charges of inadequacy and indecision.
Hamilton Jordan, Carter’s aide and confidant, wrote after the 1980 defeat that he had “found many forces at play today that make the art of governing very difficult”—an “active and aggressive press,” the fragmentation of political power, congressional resistance, special-interest groups, and the like. Conditions, in effect, made governing impossible. Leading students of the Carter presidency instead fixed the blame on the President himself. “Carter lacked any sense of political strategy,” wrote political scientist Erwin C. Hargrove, “and thereby the majority of citizens came to believe that he was not in control of the events which most concerned them.” If Carter was bedeviled by weak party support, congressional factionalism, bureaucratic power groups—and by the “iron triangles” interlocking these resistance forces—the question arises: to what degree did he seek to curb or even master these by leading and refashioning his divided party, for example, or by improving his poor congressional liaison office? He devoted little time to rebuilding either the party or the liaison office.
The “bad luck” theory of Carter’s decline holds that he was simply engulfed by forces over which he had no control—the energy crisis, soaring gasoline prices, steep inflation, high interest rates, Kennedy’s challenge, and above all the continuing hostage crisis and the brutal Soviet intervention in Afghanistan. As great leaders have demonstrated, however, setbacks can be—or can be made to be—spurs to action.
Perhaps Carter’s greatest failure stemmed from his moralism in foreign policy combined with his flair for media showmanship. His reaction to the hostage seizure in Iran and to the Afghanistan intervention was not to put the crises in perspective and restrain public opinion but to dramatize the issues and further inflame the public. This politically expedient course, reflecting also Carter’s moral judgment, brought the heady feeling in the short run of being the true spokesman and leader of the people, but it had severe longer-run effects. In helping to arouse the public, and then responding to that aroused public, Carter raised hopes and expectations inordinately. But Iran held on to the hostages and the Russians remained in Afghanistan. Nothing is more dangerous for a leader than a widening gap between expectations and realization.
This gap paralleled and exacerbated another one—between Carter’s idealistic, uplifting foreign policy pronouncements and day-to-day specific policies. Preachments were not converted into explicit guidelines. A strategic approach was lacking. When initiatives had to be taken and tough choices made, the Administration lacked a hierarchy of priorities that could fill the gap between its global activism and the routine application of foreign policies. Carter alternated between born-again moralizing and engineering specifics. In this respect he shared one of the oldest intellectual weaknesses of American liberal activism.
At the 1978 Harvard commencement a gaunt and towering figure out of the Russian past denounced not the evils of Soviet communism, as many in the audience had expected, but the American culture in which he had taken refuge. Aleksandr Solzhenitsyn delivered a powerful attack on the ideas and symbols most Americans held sacred—liberty, liberal democracy, even the pursuit of happiness. “Destructive and irresponsible freedom,” he said, had produced an “abyss of human decadence”—violence, crime, pornography, “TV stupor,” and “intolerable music.” Solzhenitsyn had long been sounding the tocsin against freedom Western style. Civil liberty, he had said almost a decade before, had left the West “crawling on hands and knees,” its will paralyzed, after it had supped “more than its fill of every kind of freedom.” Indeed, to regard freedom “as the object of our existence” was nonsense.
If the Harvard audience responded to most of Solzhenitsyn’s stinging attack with a measure of composure, it was perhaps in part because the university was by tradition a forum of protest. Graduate student Meldon E. Levine, delivering the English Oration in 1969, at the height of the student protest, in that same Harvard Yard, had challenged his audience of alumni, faculty, and parents to live up to their own standards of equality and justice, courage and trust. The students were “affirming the values which you have instilled in us,” he said, “AND WE HAVE TAKEN YOU SERIOUSLY.” And almost two centuries before, President Samuel Langdon of Harvard had lamented, “Have we not, especially in our Seaports, gone much too far into the pride and luxuries of life?” Was it not a fact that “profaneness, intemperance, unchastity, the love of pleasure, fraud, avarice, and other vices, are increasing among us from year to year?”
For more than three centuries, indeed, Americans had worried about other Americans’ loss of virtue. Over the years, the definition of virtue— and of vice—had taken many forms. During the twenty years or so following President Langdon’s protest of 1775 the most widely accepted idea of virtue was the subordination of private interests to the public good, demonstrated by direct, day-to-day participation in civic affairs. What were these private interests to be suppressed? Certainly the blasphemy and drinking and carnality and pleasure seeking that Langdon complained of, but even more the commercial avarice and frenzied moneymaking that promoted those vices and ultimately corrupted the republic. Vice and virtue were locked in a never-ending struggle for the soul of America.
The framers of the Constitution had enjoyed few illusions as to how that struggle might turn out, for their political and military battles in the 1770s and 1780s and their study of political philosophy had left them pessimistic about the nature of man, in contrast to its potential under the right circumstances. They had limited faith in the capacity of their fellow Americans to exhibit the classical virtues of self-discipline, courage, fortitude, and disinterested public service, even less faith in the power of Calvinism’s austere morality to control appetites and passions, and only a faint hope that a benevolent tendency within human nature to sociability and community, as articulated by the Scottish philosophers, would prevail under the raw conditions of American life. Unwilling to pin their hopes on human virtue, they had fashioned rules and institutions—most notably the Constitution—that at the very least would channel and tame the forces of passion and cushion the play of individual and group interest.
Two centuries later, Americans were more divided than ever as to the cause and cure of vice and the nature and nurture of virtue. Ostensibly these matters were left to the deliberations and preachments of churchmen, but they too were divided in numberless ways, even within their own denominations. Families, schools, the military, the workplace, the tavern added to the variety of ethical codes. And beyond all this, American men and women professed moralities that they did not follow in practice. In the mid-1950s Max Lerner found that the moral code prescribed that “a man must be temperate in drink, prudent in avoiding games of chance, continent in sex, and governed by the values of religion and honor.” A woman must be chaste and modest. But this formal code had been replaced by “an operative code which says that men and women may drink heavily provided they can ‘carry’ their liquor and not become alcoholics; that they may gamble provided they pay their gambling debts, don’t cheat, or let their families starve; that a girl may have premarital sexual relations provided she is discreet enough not to get talked about or smart enough to marry her man in the end; that husband or wife may carry flirtations even into extramarital adventures, provided it is done furtively and does not jeopardize the family; or (if they are serious love affairs) provided that they end in a divorce and a remarriage.”
Even as the moral code was cloaking “real sex” during the 1950s, unsentimental biologists were ripping aside the shields between sexual pretension and practice. In 1953, Dr. Alfred Kinsey and his colleagues published Sexual Behavior in the Human Female, a few years after their similar study of the human male. In a huge sample of respondents, over 90 percent of the males and over 60 percent of the females had, by their own account, practiced masturbation; about half the men and over a quarter of the women reported having had some homosexual experience; 8 percent of the males and about half that percentage of females admitted some experience with bestiality. Tens of millions of Americans, in short, were sexually “perverse,” according to the moral code. The Kinsey studies recorded the assertion of 71 percent of the men and 50 percent of the women that they had practiced premarital intercourse in their teens; indicated that adultery and illegitimacy were far more common and widespread than generally supposed; reported that many boys of low-income families claimed to have had intercourse with scores and even hundreds of different girls. So tens of millions of Americans were “immoral” by the standards of the received morality.
If the practitioners of “vice” found considerable solace in the Kinsey survey, the shock of the findings further fragmented moral attitudes in a society that was already “half Babylonian and half Puritan,” in Lerner’s phrase, and further divided a religiosity that had both a “soft,” tolerant side and a “hard,” condemnatory side, in John P. Diggins’s words. Now that their worst suspicions had been vindicated as to how people behaved sexually no matter how innocently they talked, media moralists and religious fundamentalists renewed their campaign against permissive laws and standards. Civil libertarians, in turn, sprang to the defense of books, films, plays, television programs, and, above all, magazines such as Playboy and later Penthouse and the raunchier Hustler that depicted and exploited sexual behavior and sexual fantasies.
Both sides—all sides—appealed to the symbol of Freedom. Sexual liberationists spoke up for the freedom to defy the majority and indulge in deviant forms of sexual behavior in their pursuit of happiness. Moralists pressed for censorship of sexually explicit expression, but they had to deal with libertarians in their midst who opposed the heavy hand of the censor and contended that family, school, and church should do the work of combating sexual “misbehavior” and its depiction.
The conflict over erotica and its expression cut deep into the bone and tissue of American society. Feminists, united so passionately over so many burning issues, were divided over censorship of pornography. Some argued that the depiction of violent porn should be shorn of its First Amendment protection. Sexual degradation of women, rape, harassment, battering, sadism—the mere depiction of such misbehavior influenced men’s behavior. Others contended that “evil thoughts” did not necessarily lead to evil action, that the problem was the depiction not of sex but of violence, that damage to women could just as well be caused by violence without sex.
The “experts,” as usual, differed among themselves. A 1970 commission found no evidence that “exposure to explicit sexual materials” played a “significant role” in causing delinquent or criminal behavior among youth or adults, while a 1986 Reagan Administration commission found ample evidence of a causal link between violent pornography and aggressive behavior toward women.
Some feminists, conceding that thought did not always lead to action, turned to antidiscrimination laws as the vehicle for curbing the depiction of sexual violence. Antipornography feminists Andrea Dworkin and Catharine MacKinnon proposed an ordinance granting any woman a cause of action if she had been coerced into a pornographic performance and granting all women the right to bring suit against traffickers in pornography for assault or other harm alleged to be caused by pornography. Others, organized in the Feminist Anti-Censorship Taskforce (FACT), protested that the Dworkin-MacKinnon ordinance was vague in its definition of pornography, and it was largely on this and on First Amendment grounds that a federal judge invalidated an enactment of the ordinance in Indianapolis, a decision which the Supreme Court later affirmed. FACT, according to philosophy professor Rosemarie Tong, contended that the antipornography feminists had left the core question begging: “What kinds of sexually explicit acts place a woman in an inferior status? An image of rape? An image of anal intercourse? An image of the traditional heterosexual act in which a man’s body presses down on and into a woman’s?” What worried FACT most, according to Tong, was the refusal of the ordinance “to recognize the degree to which what we see is determined by what we are either told to see or want to see.” The issue was not only content but intent and context.
Once again a policy issue had ended up—even among friends—as a sharp difference over fundamental values. Feminist antipornographers demanded to be free of masculine domination, “compulsory heterosexuality,” depictions of the degradation of women; they asserted that sexual equality between men and women was integral to equal rights and legal and political equality. The feminist sexual liberationists demanded their own kind of freedom and warned, in Tong’s words, that an antiporn campaign “could usher in another era of sexual suppression” and “give the moralists, the right-wingers, the conservatives a golden opportunity to limit once again human sexual exploration.” Again American values were proving inadequate as guidelines to policy.
It has long been accepted in international affairs that nation-states are not required to observe the same standards of behavior—of mutual respect, reciprocity, understanding, honor—expected of the relationships of individuals in ordered societies. “If we had done for ourselves what we did for the state,” Cavour said, “what scoundrels we would be.” Niebuhr distinguished sharply between “moral man” and “immoral society.” Kenneth Thompson observed that morality within the nation “can be manageable, convincing and attainable,” while “the international interest is more remote, vague and ill-defined.”
Religious leaders, however, have not been so willing to let nation-states evade the demands of morality and mutuality. In colonial times Quakers, Mennonites, Amish, and Shakers spread widely their teachings about peace and nonviolence. During the next century Congregationalists, Unitarians, and leaders in other Protestant denominations set up numerous “nonresistance” societies, culminating in the formation of the League of Universal Brotherhood in 1847. This organization, which after a few years boasted a membership in the tens of thousands, carried on its condemnation of all war in mass publications and at international peace congresses. In the early twentieth century the Catholic Church, preoccupied with efforts to establish itself in sometimes alien or nativist communities, had only the barest involvement in the American peace movement. While some Catholic groups embraced a tradition of social dissent and constituencies of the poor, “a patriotic inclination to celebrate American society; a fear of criticism arising from marginal social status and the general Catholic respect for authority,” in Mel Piehl’s words, dampened Catholic radical social efforts.
American Catholics, numbering fifty million by the 1980s, were broadening their participation in American politics, higher education, business—and peace movements. No longer did they need to prove their patriotism by uncritically embracing an aggressive foreign and military policy. While Protestant, Jewish, and other religious leaders also stepped up their peace efforts, American Catholic bishops effected an amazingly rapid transition from support of the U.S. effort in Vietnam in 1966 to condemnation in 1971 on the grounds of its “destruction of human life and moral values.”
The swing of the bishops toward a strong peace stance was expedited by an earlier shift in the Vatican. Even in the face of aggressive cold war behavior by its mortal enemy, Soviet communism, the papacy called increasingly for steps to control the nuclear genies. In his historic 1963 encyclical, Pacem in Terris, John XXIII, recognizing the “immense suffering” that the use of modern arms would inflict on humanity, declared it “contrary to reason to hold that war is now a suitable way to restore rights which have been violated.” Pressure was exerted within the American church by peace-minded groups, most notably the United States section of the international Catholic organization Pax Christi. After a faltering start, Pax Christi became well organized during the late 1970s and by 1983 had chapters in all fifty states and a powerful corps of leaders, as exemplified by Bishops Thomas J. Gumbleton of Detroit and Carroll T. Dozier of Memphis. Operating outside the institutional church but stirring Catholic consciences “from the left” were individual militants such as former nun Elizabeth McAlister and priests such as the Berrigan brothers.
In the spring of 1983, while the Reagan Administration carried on a foreign relations rhetoric that vacillated between the aggressive and the bellicose, the Catholic bishops issued their “Pastoral Letter” entitled “The Challenge of Peace: God’s Promise and Our Response.” The statement was at once traditional and radical, Patricia Hunt-Perry has noted—“traditional in the sense that the bishops grounded their pronouncements solidly in Catholic dogma, Biblical text, and the teachings of Catholic saints such as Augustine and Aquinas,” but radical in its application of traditional doctrines to the urgency of a “whole human race” facing, in the bishops’ words, a “moment of supreme crisis in its advance toward maturity.”
“We are the first generation since Genesis with the power to virtually destroy God’s creation,” the bishops warned. “We cannot remain silent in the face of such danger.” The letter assigned to Americans the “grave human, moral and political responsibilities” to see that a “conscious choice” was made to save humanity. “We must shape the climate of opinion which will make it possible for our country to express profound sorrow over the atomic bombing in 1945.” The willingness to initiate nuclear war “entails a distinct, weighty moral responsibility; it involves transgressing a fragile barrier—political, psychological and moral.” Striking major military or economic targets “could well involve such massive civilian casualties” as to be “morally disproportionate, even though not intentionally indiscriminate.”
In general, the bishops underplayed legal, technical, and policy arguments in order to speak all the more powerfully with their collective moral voice. Indeed, the final draft eliminated earlier references in the main body of the text to such specific issues as the MX missile. But the bishops did propose that the “growing interdependence of the nations and peoples of the world, coupled with the extragovernmental presence of multinational corporations, requires new structures of cooperation.” They boldly confronted one of the toughest questions—that of the “just war”—and reviewed the conditions necessary to it: just cause, declaration of war by a competent authority, comparative justice, right intention, last resort, probability of success, and proportionality between destruction inflicted and the aims to be achieved. And they expressed special concern about the impact of the arms race—“one of the greatest curses on the human race”— on the poor.
In the debate that followed, it was clear at least that the pastoral letter had raised the moral tone and urgency of the nuclear issue. George Kennan called it “the most profound and searching inquiry yet conducted by a responsible collective body into the relations of nuclear weaponry, and indeed of modern war in general, to moral philosophy, to politics and to the conscience of the nation state.”
A decline in courage was the most striking feature in the West, Solzhenitsyn had said at Harvard, especially “among the ruling and intellectual elites.” America’s refusal to win the Vietnam War, he added amid un-Harvard-like hisses, was the ultimate evidence of the loss of willpower in the West. Solzhenitsyn could hardly have been pleased by the position of the Catholic bishops—all the more because they were religious leaders who should have stood militantly united against Soviet communism. But Solzhenitsyn could hardly have found the American kaleidoscope of ideas, groups, parties, and leaders anything but baffling. The head of the European Economic Community, Jacques Delors, made a more penetrating observation of American world policy as Reagan neared the end of his first term. He saw an “increasingly aggressive and ideological” Administration carrying “a bible in one hand and a revolver in the other.” In truth, though, Delors could have said this of most presidential Administrations since World War II.
Washington’s human rights policy exposed the division and confusion about foreign policy in the American mind. In December 1948 the United Nations General Assembly had adopted and proclaimed the Universal Declaration of Human Rights “as a common standard of achievement for all peoples and all nations.” This elevated document laid out three sets of fundamental rights. First, proclaiming that “all human beings are born free and equal in dignity and rights,” it set forth the historic intellectual and political freedoms—“the right to life, liberty and the security of person” and the “right to freedom of thought, conscience and religion.” Then a set of procedural guarantees including provisions that “no one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment” or “be subjected to arbitrary arrest, detention or exile.”
Finally, a set of basic economic and social rights: “to work, to free choice of employment, to just and favorable conditions of work and to protection against unemployment,” the right to equal pay for equal work, “the right to a standard of living adequate for the health and well-being of himself and of his family,” the right to free education, at least in the lower grades.
How could such a diversity of nations, with their varieties of subcultures, live up to such a wide range of specific rights, which were, in turn, obligations placed on their own governments? In part merely by ignoring or evading them. In part by defining or interpreting them to suit their own political needs. And in part by establishing radically differing sets of priorities among these three major sets of basic rights in the instrument.
For Americans, individual civil and political rights emerged out of a long and precious tradition—Magna Carta, the English Petition of Right and Bill of Rights of the seventeenth century, the French Declaration of the Rights of Man of 1789, the American Bill of Rights ratified in 1791. These in essence were protections for the rights of individual citizens against the state. But it is “not enough to think in terms of two-level relationships,” Vernon Van Dyke contended, “with the individual at one level and the state at another; nor is it enough if the nation is added. Considering the heterogeneity of humankind and of the population of virtually every existing state, it is also necessary to think of ethnic communities and certain other kinds of groups.” Trade unions, public or semi-public corporations, cultural entities, semi-autonomous regional groups might claim rights against both the state and the individual.
The most serious potential clash among the doctrines of the Universal Declaration of Human Rights lay between the first and the third major sets of rights—political freedoms such as those of thought and religion, opinion and expression, of movement within and among nations, as against social and economic freedoms such as rights to employment, decent pay, education, health, food, housing. Third World countries inescapably stressed the socioeconomic rights in the Universal Declaration over the individualist ones. “You know, professor,” a junior minister of an African nation said to a Zimbabwean academic, “we wish imperialists could understand that the sick and hungry have no use for freedom of movement or of speech. Maybe of worship! Hunger dulls hearing and stills the tongue. Poverty and lack of roads, trains, or buses negate freedom of movement.”
Russians and Americans differed even more sharply over individualistic political rights versus collective socioeconomic freedoms. When Washington accused Moscow of violating personal political rights in its treatment of dissidents, the Kremlin gleefully retaliated by accusing Washington of violating the social and economic rights of the poor in general and blacks in particular. The Carter Administration sought to deflect such ripostes by emphasizing that human rights encompassed economic and social rights as well as political and civil liberties. “We recognize that people have economic as well as political rights,” Secretary of State Vance said in 1978.
Still the debate continued, and rose to new heights during the Reagan Administration as conservative ideologues found official rostrums from which to belabor Soviet repression, while Soviet propagandists found ample material for exploitation in stories in American journals and newspapers about the poor and the homeless.
At the dawn of the last decade of the second millennium A.D., as Westerners prepared to celebrate the bicentennials of the French Declaration of the Rights of Man and the American Bill of Rights, human rights as a code of international and internal behavior—especially as embodied in the UN declaration of 1948—were in practical and philosophical disarray. Rival states used the Universal Declaration to wage forensic wars with one another over the fundamental meaning of freedom. It had proved impossible for national leaders to agree on priorities and linkages among competing rights, most notably between economic-social and civil-political.
And yet the world Declaration of Rights still stood as a guide to right conduct and a symbol of global aspiration. In both domestic and international politics it was invoked, on occasion, with good effect. As cast into international instruments, human rights law, David Forsythe concluded, “is an important factor in the mobilization of concerned individuals and groups who desire more freedom, or more socio-economic justice, or both. This mobilization has occurred everywhere, even in totalitarian and authoritarian societies.” And the conflict over the meaning and application of international human rights invited the tribute of hypocrisy. “The clearest evidence of the stability of our values over time,” writes Michael Walzer, “is the unchanging character of the lies soldiers and statesmen tell. They lie in order to justify themselves, and so they describe for us the lineaments of justice. Wherever we find hypocrisy, we also find moral knowledge.” Thus the idea of freedom and justice and human rights binds the virtuous and the less virtuous together, in hypocrisy and in hope.