CHAPTER 15

The Decline of Leadership

FOR AMERICANS 1987 WAS to be a special year of remembrance and perhaps of renewal. In a winter of deep cold and heavy snows New Englanders commemorated the guerrilla struggles of the Shays rebels during the same kind of harsh weather two hundred years earlier. Late in May 1987 scholarly conferences in Philadelphia marked the bicentennial of the arrival of delegates to the Constitutional Convention and the leadership of James Madison and his fellow Virginians in offering a bold new plan for a stronger national government. In September, on the two hundredth anniversary of the convention’s close, Philadelphia burst into pomp and pageantry as hundreds of thousands celebrated with floats and balloons and fireworks.

The festivities barely concealed an undercurrent of concern and disillusion. The bicentennial year began amid revelations of gross failures in the Reagan White House, of an Administration out of control as a few men conducted their own “rogue” foreign policy with the government of Iran and with the Contra opposition to Nicaragua’s leftist Sandinista government. Reagan’s erratic leadership, both foreign and domestic, compared poorly with that of his fellow conservative Margaret Thatcher, who won her third general election, and that of the worldly Soviet party boss, Mikhail Gorbachev, who had launched a bold effort to modernize the Soviet economy and democratize the political system.

At home Congress and the President gave a classic demonstration of the workings of the checks and balances by failing to agree on or enforce measures sharply to reduce the annual deficit and eventually tackle a national debt nearing $3 trillion. The nation continued to struggle with economic ills whose solution appeared beyond human wit—regional decline, inner-city blight, lack of affordable housing, large sectors of entrenched poverty. Abroad Americans faced brutal competition from Japan and other exporting nations. The United States had now become a dependent nation. Financially, wrote Felix Rohatyn, “we are being colonized.”

The floundering leadership of 1987 stood in stark contrast to the bold, purposeful work of the Framers of the Constitution two centuries earlier. Even those who wondered whether the Constitution was good for another two centuries—or even two decades—freely granted that the Founding Fathers had displayed a collective intellectual leadership without peer in the Western world. Above all they had displayed during their four months in Philadelphia the capacity to stand back from the existing national government—the Articles of Confederation—and summon the institutional imagination and political audacity to fashion a whole new structure of government. Two hundred years later proposals to make even small structural changes in the constitutional system evoked emotional opposition from some members of the public and the academy—and almost complete indifference from officeholders who were struggling unsuccessfully to make the present system work.

The Framers had shown remarkable flexibility and responsiveness to the public as well, especially when it became clear that delegates to the state ratifying conventions in 1787 and 1788 would approve the new charter only if the Constitution makers guaranteed that consideration of a Bill of Rights would be one of the first duties of the new Congress. That Bill of Rights, drafted by James Madison in the summer of 1789, endorsed by Congress that fall, and ratified by the states between 1789 and 1791, became the crownpiece of the Constitution. Its enactment also meant that celebration-weary Americans would have to gird themselves for another series of bicentennial commemorations from 1989 through 1991.

New Yorkers who liked celebrations were in luck. Manhattan was where Madison had drafted the noble statement and where Congress had been sitting when it passed the proposed amendments. Indeed, New Yorkers had already held their celebration of the Bill of Rights in 1986, when they had seized on the centennial of the erection of the Statue of Liberty to stage a great weekend festivity, amid a swarm of old sailing ships in New York Harbor and a spectacular display of fireworks in the evening.

The celebration revealed a deep hunger on the part of people to return to the past, to touch and savor it. Liberty Weekend, designed to stress the great statue’s welcome to immigrants, turned into a preview also of the Bill of Rights commemoration, as orators, pundits, and plain people explored the deeper meanings of freedom as the central value in American life and history. The celebrations took on a poignant aspect as speakers conjured up memories of the illustrious leaders of the past such as Thomas Jefferson and John Adams, devotees of liberty like Tom Paine and Patrick Henry. Two hundred years later, was there a single leader who could be compared with these men? What had happened to that fierce devotion to liberty? Could Congress formulate, would the states ratify, would the people approve a 1989 Bill of Rights as bold and sweeping as that drawn up two centuries earlier?

That weekend, while President Reagan and other dignitaries paid homage to the restored and relighted Miss Liberty, a score of panelists—women’s leaders, trade unionists, educators, the mayor of the city—met in a hotel on Broadway to debate the next hundred years of freedom. The future of individual liberty seemed safe in the hands of panelists who had passed in front of garish, X-rated movie houses to enter the hotel but rejected censorship of pornography, who were concerned that their children could buy rock records with sexually explicit lyrics but favored identifying the contents rather than banning them. After New York’s Mayor Edward Koch grumbled that women had complained when his minions had placed signs in bars warning pregnant women against drinking, NOW president Eleanor Smeal asked not that the signs be taken down but rather that other signs be posted warning men against drinking and thus endangering babies while driving home.

When the question shifted from the protection of individual liberty against government to that of the advancement of freedom through government—that is, from “freedom of speech and religion” to “freedom from fear and want”—the conferees became far more divided. Elie Wiesel, a survivor of Auschwitz and head of the United States Holocaust Memorial Council, sharply posed the issue of social and collective freedom when he called on America to open its doors to anyone who wanted to enter, for economic as well as political reasons. We are all free only to the extent others are free, he said. The discussion turned to equality as inseparably linked with liberty. Could the American constitutional system not only protect liberty but broaden social freedom and deal with the nation’s enduring inequalities?

Some of the conferees answered that the system worked, or at least could be made to work. However fragmented and stalemated, it could at least fend off arbitrary governmental intervention, and at the same time could be used as a positive means for expanding economic and social freedoms. Other conferees were doubtful. The system was too slow, too ponderous, too exposed to control by economic and social elites. Still others believed, though, that great leadership of the quality of Jefferson and Lincoln and FDR might make the system work. If anyone at the conference thought morosely about the state of the current leadership in both parties, no one wanted to mention the matter on a pleasant Fourth of July weekend of happy celebration.

Republicans: Waiting for Mr. Right

Few at the Liberty Weekend conference would have offered as a model leader the man who a few miles away was hailing the renovated Miss Liberty in a speech filled with his usual pieties and banalities. Even though Ronald Reagan had twice won both the governorship of California and the presidency of the United States, many in the press and academia still viewed him as merely a rigid ideologue whose hard-core conservatism was cushioned by a relaxed, easygoing manner and prettified by disarming, even self-deprecating, jokes and anecdotes. It was easy to compare his mind, as Harding’s had been compared, to stellar space—a huge void filled with a few wandering clichés. Or to picture, as Garry Trudeau had in his comic strip, an intrepid explorer pushing through the tangled filaments of the President’s brain in an effort to discover how—or whether—it worked.

Even after his six years in the White House spotlight, many President watchers were still misjudging Ronald Reagan. They did not see the committed political activist and strategist behind the façade. They saw the Reagan who appeared on the screen, an “aw shucks” old boy, with bobbing head, face turning and smiling, shoulders rising and falling—a showcase of ingratiating body language. They heard that long-honed voice over the radio every Saturday, easily rising and receding, alternating between mellowness and intensity, hovering at times “barely above a whisper,” Roger Rosenblatt wrote, “so as to win you over by intimacy, if not by substance.” They chuckled at the perfectly timed joke or anecdote or observation. Many of his stories turned out to be untrue even during his presidential years, when at least his speech writers should have been more careful, and his misstatements and tall tales were numerous enough to be collected and published in book form.

But few appeared to care when Reagan was found out, contradicted, refuted. “There he goes again,” the public seemed to smile indulgently. It took a long time for President watchers to understand that Reagan was not a man of details, specifics, particulars. Theodore White had called him a man more of ideas than of intellect, but he proved to be a man less of ideas than of stances, shibboleths, stereotypes. He was a strategist rather than a tactician, a hedgehog who knew one big thing, in Archilochus’ famous phrase, rather than a fox who knew many little things. What Reagan had known in the 1960s was that he must and could rid the Republican party of its liberal elements, marry the GOP to the burgeoning conservative causes and movements, fight off the far-right extremists, reunite Republicans around a clearly conservative doctrine, mobilize disaffected Democrats and blue-collar workers behind a Reagan candidacy, denounce the Russians—and win.

In retrospect this strategy would seem obvious and even easy, but it had not so appeared at the time. The dominant image in the minds of Republican party politicians in the late 1960s and early 1970s was the crushing Goldwater defeat of 1964. Never mind the excuses—that no one could have overcome the Kennedy remembrance that year or outbid LBJ as peace leader. The practical pols knew their history—a moderate Eisenhower had won in 1952 and 1956, Nixon with his conservative, red-baiting image had lost in 1960, a “new Nixon,” bleached and smoothened, had won in 1968, and any solid right-wing candidate, no matter how attractive personally, would yield centrist voters and hence presidential elections to the Democrats. It was this Goldwater syndrome that Reagan had to overcome if he was to put himself at the head of the GOP and take it to victory.

In the mid-1970s, however, many conservatives were by no means convinced that the Republican party could be their ticket to power. To them the GOP seemed irretrievably in the hands of Gerald Ford and his Vice President-select, the hated Nelson Rockefeller. Ford had been in the White House for hardly half a year when the American Conservative Union and Young Americans for Freedom jointly sponsored a conference at Washington’s Mayflower Hotel to consider forming an independent party to challenge both the GOP and the Democracy. Enraptured by the vision of a new conservative party that would unite rural Southerners and northern blue-collar workers, they urged Reagan to lead the effort. He gave his answer in a banquet speech before a packed ballroom. After winning cheers for a denunciation of the Ford policies, he dampened the fire-eaters by demanding, “Is it a third party we need, or is it a new and revitalized second party…?”—a committedly conservative GOP.

Reagan would wait—but the movement conservatives would not let him wait. In mid-June 1975, only four months after the Mayflower conference, a group of conservative leaders, including direct-mail entrepreneur Richard Viguerie, columnist Kevin Phillips, Colorado brewer Joseph Coors, several supporters of George Wallace, and a cross section of conservative organization leaders, confronted Reagan at a private dinner. Phillips led the attack, asserting that the GOP was falling apart just as the Whigs had divided and collapsed in the 1850s, that it was time for a new conservative party that could overcome both Democrats and Republicans in a three-party battle, that Reagan must make a direct bid for the Wallace backers and hence must break whatever ties he had with Republican liberals and even Republican moderates. A free-for-all followed. Some in the group told Reagan harshly that he lacked “fire in his belly” and was dawdling while Ford picked up conservative support for the approaching 1976 campaign. Viguerie, arguing that the GOP had become “unmarketable,” urged the governor to unite conservatives and independents in a New Majority. Then Reagan told them his decision—he would fight Ford for the Republican nomination; as the Republican nominee he would unite conservatives, independents, and conservative Democrats in a broad coalition; and if necessary he would propose that the Republican party change its name. In effect he would transform the GOP or, failing that, abandon it. Half persuaded, and lacking any alternative candidate of national standing (Wallace was still disabled by his gunshot wounding in 1972), the conservatives could only assent. Reagan soon demonstrated that he did have fire in his belly by taking Ford on in the 1976 presidential primaries despite pressure from many GOP leaders to wait his turn. He proved his commitment when he returned to the fray in 1980 as an unrepentant conservative and Republican, and then, after winning both the nomination and the election, made clear that he would govern as a conservative, and as head of a conservative Administration.

The media for the most part interpreted the outcome of the 1980 presidential election as a repudiation of Carter rather than a victory for Reagan. Soaring prices, astronomical interest rates, the President’s apparent scolding of the American people in his “malaise” speech, his long agonizing months of being held hostage to the hostage situation in Iran—all these and much else were cited as proof. This view of the election outcome, however, revealed the bias of some liberals, who refused to believe that any authentic, outspoken conservative could win the presidency of the United States—the Goldwater syndrome at work. In fact, Reagan won the election by persuasive appeals to the right-wing vote and to disaffected independents, full exploitation of the remarkable direct-mail and fund-raising apparatus of the Republican national party, and his skillful coalition building between GOP regulars and movement conservatives, as reflected in the choice of George Bush for running mate.

Then, to the astonishment of many, especially of Democrats who had become accustomed to their winning candidates reneging on promises of peace and reform within a year or two of coming into office, Reagan began to govern just as he had promised—as a conservative. He promptly appointed a conservative cabinet headed by Alexander Haig as Secretary of State, and ordered a freeze on federal hiring of civilian employees. After recovering from a near-fatal assassination attempt, he repudiated Carter’s human rights approach, crushed striking air traffic controllers and their union, called the Russians names, and engaged in a variety of symbolic acts that left no doubt that he was a conservative who meant it.

Reagan had shrewdly recognized—or perhaps had simply sensed intuitively—that he should move ahead strongly in domestic economic policy even at the expense of dramatic initiatives abroad. Tax policy offered the best opportunity to redeem his campaign promises and publicize his departure from “discredited” New Deal policies of Carter and his Democratic predecessors. Working closely with congressional leaders, Vice President Bush, and the Republican Senate, the Administration pushed its legislative program of cutting personal income taxes across the board over thirty-three months; reducing the maximum tax on all income from 70 to 50 percent; indexing tax rates to soften the impact of graduated income taxes on rises in personal income; reducing the maximum tax on capital gains from 28 to 20 percent; liberalizing deductions for contributions to individual retirement accounts; lowering estate and gift taxes; providing business with tax breaks. The tax reductions were tied closely to a package of budget cuts that slashed toward the heart of the New Deal, Fair Deal, and Great Society domestic programs—education, health, housing, urban aid, food stamp programs, the National Endowments for the Arts and for the Humanities, and the Corporation for Public Broadcasting, and even federal subsidies for school meals—but never never never defense spending.

The Reagan Revolution was underway. The head revolutionary spurred his troops on television, in speeches to joint sessions of Congress, in huddles with key senators and representatives, in trips out into the country. Revolutionary fervor needed a doctrine, and this took the form of the supply-side theory that lowering taxes would produce prosperity by giving producers more capital for production and giving consumers more money for consumption. Savings, investment, and growth would be stimulated, the budget ultimately balanced through growth in the tax base. For all-out supply-siders, these were at the heart of a much wider program, described later by Reagan’s budget chief, David Stockman, as a “whole catalogue of policy changes, ranging from natural gas deregulation, to abolition of the minimum wage, to repeal of milk marketing orders, to elimination of federal certificates of ‘need’ for truckers, hospitals, airlines, and anyone else desiring to commit an act of economic production. It even encompassed reform of the World Bank, and countless more.” All of this was designed to overcome the stagflation of the late 1970s.

Since the House of Representatives had remained in Democratic hands in the 1980 election, it was imperative that the President bring around a large segment of the opposition. In a House vote on his crucial budget bill within four months of his inauguration, sixty-three Democrats—over a quarter of the Democracy’s House membership—broke ranks to support the Administration. Many of these were from the South and West—an exciting hint of the possibilities of a future party realignment. No President since Roosevelt, Time opined six months after Reagan’s inauguration, had “done so much of such magnitude so quickly to change the economic direction of the nation.”

The euphoria was not for long. During late 1981 and 1982 the economy plunged into recession. Once again the media headlined stories of bank failures, farm foreclosures, bankruptcies, desperate family and individual crises. The jobless rate rose to over 10 percent, the highest since the Great Depression of the 1930s. The hard facts threw the White House economists and their outside colleagues into disarray. The supply-siders defended themselves with the classic explanation of dogmatists who fail—their program had not been tried hard enough, or long enough, or this or that vital ingredient was missing. Stockman was now mainly concerned that future Reagan budgets would be more and more out of balance in a recession. Republican party leaders feared that the GOP would be tagged with another “Hoover depression.”

And Reagan? The President had now fallen into a severe political and intellectual bind. He had not been able to balance the budget—politically he dared not cut Social Security and other major safety-net spending; viscerally he not only opposed cuts in his planned military buildup but wanted a huge boost in defense spending. And in the end he still confronted Speaker Thomas P. O’Neill’s power base in the House. Doctrinally he remained absolutely rigid in the face of his failure. Time and again emergency budget meetings in the Oval Office were trivialized as the President retreated into anecdotage about timeserving bureaucrats or wasteful projects. Stockman noted that when Pete Domenici, a Republican senator friendly to the White House, confronted Reagan with the need to raise taxes, the President jotted down notes during the presentation but only in order to rebut it.

“Damn it, Pete,” he said, “I’m just not going to accept this. This is just more of the same kind of talk we’ve heard for forty years.”

It became clear during that first year that Reagan the campaign and electoral strategist was a different man from Reagan the head of government. The grand coalition builder, who had spoken from the stump in pieties and platitudes that united people, now proved himself unable to think in terms of the hard policies and priorities that linked the overall values to day-to-day governmental choices and operations. Stockman complained of Reagan’s habit of castigating spending in the abstract while he shrank from the “real bullets” he would have to face politically if he took on the welfare state’s gigantic entitlement programs. Few in the White House pressed their chief to make fundamental strategic choices; rather they echoed his dogmatics or lobbied for some pet solution of their own. As Ralph Nader noted in a preface to a study of “Reagan’s ruling class,” the people around the President showed a remarkable sameness of “attitudes, ideologies, and even styles of thinking and explaining.” Not for Reagan was FDR’s penchant for peopling his Administration with challenging intellectual eclectics.

But one grand asset Reagan retained—political luck. The recession came early enough for some Reaganites to blame it on the lingering effects of Carter policies, early enough too for the White House to wait it out and hope for recovery by election time. The very spending that Reagan condemned in principle while authorizing in practice, combined with such anti-inflationary developments as an oil price decline over which the White House had little control, brought a strong recovery. Although large pockets of poverty and unemployment persisted, the recovery remained vigorous enough to project Reagan toward his massive reelection triumph of 1984. It was hard enough for Walter Mondale, leading a divided, irresolute party, to take on an incumbent whose personal popularity remained high, who had managed to maintain his electoral coalition despite sporadic complaints from both movement conservatives like Viguerie and liberal and moderate Republicans in the Senate. But former Vice President Mondale, forced to share some of the blame for failed Carter economic policies, also faced a Republican Santa Claus who continued to disburse federal money to thousands of vested interests and welfare projects even while preaching economy and thrift and budget balancing. He faced a chameleon who alternated between attacking government and exploiting his government. It was no contest from the start.

Reagan’s big reelection set the stage for a major piece of unfinished domestic business, tax reform. For some time the Administration had been tracking proposals by Republican Congressman Jack Kemp, an early booster of supply-side economics, for further cuts in personal rates, and by two Democrats, Senator Bill Bradley and Congressman Richard Gephardt, for lower rates combined with a slash in deductions. The President virtually stole the issue from the Democrats during the 1984 campaign and made it his own. A few months later, in his State of the Union address to Congress, he made tax reform the central domestic initiative of his second term. “Let us move together with an historic reform of tax simplification for fairness and growth,” he proclaimed, promising to seek a top rate of no more than 35 percent. Evidently having taken to heart the lessons of his first-term setbacks, Reagan showed a good deal of flexibility in bargaining with congressional leaders and factions over the specifics of his tax proposal.

Again and again the Administration measure seemed to die on Capitol Hill, only to be resurrected by a President absolutely committed to tax reform and ever ready to make political forays out into the country to channel public pressure toward Congress. White House leadership was crucial; while three-quarters of Americans in a poll favored a simplified tax system and almost three-fifths considered the existing system unfair, they listed the tax system fifth in importance among economic problems, behind the deficit, unemployment, interest rates, and inflation. The Administration had to fight off lobbyists who put heavy pressure on legislators. Because the bill curtailed consumer interest deductions and she had a Chrysler plant in her district, one congresswoman thought she should vote no, but “I couldn’t do it,” she said.

Reaganites were proud that they had overcome “Lame-Duck-Itis,” as they felicitously called it. Historically, however, the problem of the presidential lame duck was less the second term than the second two years in each term, following the congressional midterm elections.

It was clear even as Reagan took command of tax policy that he must share leadership with committee chieftains and party lieutenants in Congress, which retained its constitutional authority over revenue measures. In foreign policy, on the other hand, the Chief Executive had held the dominant role both under the Constitution and by custom. If many conservatives had been sorely disappointed by Reagan’s compromises over taxation and domestic policies, hawkish right-wingers had been entranced by his fulminations against communism. Rhetorically, at least, he had entered the White House as the most bellicose peacetime President since Theodore Roosevelt. While TR had tended to strike out in all directions, however, Reagan had eyed the reds with the steely hostility of a frontiersman targeting a band of Indians.

Reagan’s anticommunist rhetoric had long been unbridled. He wrote in a 1968 volume: “We are faced with the most evil enemy mankind has known in his long climb from the swamp to the stars.” Junior-partner communists were just as bad: the North Vietnamese, of course controlled by Moscow, were “hard-core, hard-nosed, vicious Communists” who were going to “fudge, cheat and steal every chance they get.” Americans should have stayed in both Korea and Vietnam, gone all out, and won. Reagan liked to talk about Lenin’s “plan” to “take Eastern Europe,” organize the “hordes of Asia,” about Lenin’s “prediction” that eventually the United States, the last bastion of capitalism, would “fall into our outstretched hand like overripe fruit.” For some hawks it was as rousing as a Hollywood scenario. Would Ronald Reagan ride to the rescue?

Many Americans were apprehensive when this fire-breather entered the White House, but experienced President watchers professed not to be worried. The reins of power, they said reassuringly, would tame the rider. It was one thing to rant outside the White House, something else to handle the perplexing everyday questions that took on all the grayish hues between black and white. Wait until he had to consult with heads of state, foreign envoys, wait until he had to read cables bristling with the endless complexities of real-life international politics. After all, hadn’t Teddy Roosevelt in the White House become a conciliator, mediating the Russo-Japanese War and even winning the Nobel Peace Prize?

For a time the new President did indeed moderate his oratory. He preached peace sermons, called the nuclear threat “a terrible beast” before the West German Bundestag, told the British Parliament that nuclear weapons threatened, “if not the extinction of mankind, then surely the end of civilization as we know it.” Hedrick Smith of The New York Times and other journalists wondered whether he was deserting his sharply ideological, anti-détente rhetoric for a more moderate, centrist foreign policy. Soon, however, the President returned to his rhetorical battles, like an old soldier pulling his saber down from over the fireplace. In a March 1983 speech to Christian evangelicals in Florida, he labeled the Soviet Union an “evil empire,” called totalitarian states the focus of wickedness in the modern world, and warned that America’s struggle with communist Russia was a struggle between right and wrong. “Adopting a perspective very similar to John Foster Dulles in the fifties,” Betty Glad wrote, “Reagan has not changed,” even though technological and political change had undermined the old assumptions of a Pax Americana.

Was it perplexing that this genial, charming, worldly septuagenarian could label as diabolical the creed of a large portion of humankind, that he sounded like the Ayatollah Khomeini castigating the United States as “the great Satan”? Not if one kept in mind the old actor’s love for white-hat/black-hat scenarios, his authentic fears that the communists might stamp out the kind of individualism he had absorbed in Dixon, Illinois, or—in the view of some biographers—his displacement of earlier insecurities and embarrassments, such as his father’s alcoholism, onto the treacherous world outside.

But the main reason for Reagan’s bellicose rhetoric may have been much simpler. He did not really mean it—mean it in the sense of converting ideology into action against powerful opponents. In divorcing his foreign policy from his rhetoric he was carrying to the extreme the tendency of recent American leaders to enunciate vague and lofty values without reducing them to operating principles, policy choices, clear priorities. Reagan’s 1988 summit with Gorbachev typified the President’s bent for high rhetoric—now friendly, now hostile to the Kremlin—that had little relation to the skimpy policy results of the meeting.

It took anticommunist hawks a long time to recognize Reagan’s separation of rhetoric and reality, in part because he gave them occasional swigs from the heady old ideological bottle, in part because Carter had moved so far toward an aggressive anti-Moscow posture after 1979 that his successor could offer no sharp break in policy even had he wished to. By the end of his first term, however, the anticommunist true believers were expressing keen disillusionment. Why did a President who attacked the “evil empire” lift the grain embargo that was destabilizing the Soviet economy? When the Polish authorities declared martial law and cracked down on the Solidarity movement, why did not Washington bring the crisis to a boil by declaring Poland in default for failure to pay interest on its debts to Western banks? Why not step up support for “freedom fighters” in Afghanistan and Angola? Why not do more for the Nicaraguan Contras despite the Boland Amendment? Why perpetuate the Democrats’ abandonment of Taiwan, even if Peking was a counterweight to Moscow? In the Persian Gulf, why put America’s commercial concerns about oil so far ahead of anti-Soviet militance? In Europe, why not try harder to stop the Western subsidy of an oil pipeline that would help the Soviet export economy? Could George Will’s quip be true—that the Administration loved commerce more than it loathed communism?

Still, Reagan largely held the support of far-rightists even as they grumbled. His huge reelection sweep was a tribute to his continued coalition-building skills. To be sure, some in the extreme right sat on their hands, but they had no other place to go: the ballot offered no Strom Thurmond, no George Wallace, for whom they could vote in indignation. And certainly Reagan was better than Carter, better even than Nixon. The hardest test of loyalty came after the Iran-Contra revelation late in 1986. While many conservative Republican politicians recoiled in dismay, it was the movement conservatives who rallied to their leader’s support.

The Iran-Contra hearings dramatized the price of stances not converted into operating policies. Bizarre initiatives, fouled-up communication, cowboy-style forays, even a little private enterprise for profit, were the colorful parts of the story. But behind it all was a lack of clear guidelines from the White House, even more a lack of knowledge in the Oval Office. The whole affair was a caricature of the incoherence and inconsistency that characterized the Reagan Administration in foreign policy. After eighteen months Alexander Haig had quit as Secretary of State in part because he could not deal with the protective cordon around Reagan, in larger part because he felt unable to “restore unity and coherence” to foreign policy. These qualities continued to elude the Reagan White House—and all the more as it moved into lame-duck status.

The Structure of Disarray

“The true Reagan Revolution never had a chance,” wrote David Stockman as he reviewed his White House years. “It defied all of the overwhelming forces, interests, and impulses of American democracy. Our Madisonian government of checks and balances, three branches, two legislative houses, and infinitely splintered power is conservative, not radical. It hugs powerfully to the history behind it. It shuffles into the future one step at a time. It cannot leap into revolutions without falling flat on its face.”

Stockman had come belatedly to a revelation that had struck many of his fellow practitioners years before. During the Carter presidency, several score former senators, cabinet officers, governors, mayors, women activists, as well as scholars, journalists, and lawyers, had begun meeting from month to month only a few blocks from the White House to assess the health of the American political system. The frustrations and deadlocks that most of these politicians and administrators encountered in merely trying to make the government work rivaled Stockman’s more ideological disappointments. Early in 1987, even before the full import of the Iran-Contra scandals was known, this group made public its bleak diagnosis of the present condition and future prospects of the American political system.

As befitted its name—Committee on the Constitutional System—the group concentrated on structural and institutional disorders. In the bicentennial year of 1987 it found serious strains and tensions in the nation’s governing processes. The committee pointed to the huge, “unsustainable deficits” that defied the good intentions of legislators and President. It pointed to foreign and national security programs, where focus and consistency were frustrated “by an institutional contest of wills between Presidents and shifting, cross-party coalitions within the Congress.” It pointed to presidential-Senate conflict over treaty-making. Over forty pacts submitted to the Senate for ratification since World War II either had been rejected or had never even come to a vote. Among those not voted on were SALT II, treaties on underground nuclear tests, several human rights conventions, and a variety of trade, tax, and environmental pacts. Just as the President’s threat of veto often chilled measures in Congress, so the Senate’s threat of inaction or negative action could freeze the ratification process.

Other monitors found the disarray outside the constitutional system even more serious than the delay and deadlock within it. They pointed to the falling-off of voter turnout at almost all levels of government, reflecting widespread apathy toward matters political and pervasive distrust of government, which was also indicated in poll after poll. They deplored the dominance of media and personality politics, the power of interest-group politics coupled with the decline of parties, the huge and ever-rising costs of running for office, the endless campaigns that maximized problems of campaign finance while boring the public. Critics noted the hypertrophy of some organs of government in the midst of the weakness and disarray—the rise of the “imperial presidency” and of the equally imperial judiciary.

The 1987 monitors willy-nilly had joined one of the country’s oldest vocations—criticizing the system. The Framers’ failure in 1787 to add a Bill of Rights had left hundreds of state and local leaders suspicious of the new constitution. The most striking turnabout on the Constitution was conducted by some of the leading Framers when they had to run the government they had planned. After all their denunciations of “party spirit” and their careful engineering of a system of checks and balances designed to thwart popular majorities, Hamilton and Madison and their allies in the 1790s fashioned and captained party factions in Congress and the Administration that unified government to a degree.

During the early 1800s abolitionists attacked the Constitution for countenancing slavery and women leaders condemned it for failing to grant their sex voting and other rights. Southerners flailed it for encouraging centralizing tendencies in the national government, tendencies legitimated by the decisions of Chief Justice John Marshall and his nationalist brethren in cases striking down state interferences with national economic power. The victory of the North in the Civil War and the passage of the Reconstruction amendments consolidated national—and for several decades Republican—predominance in the constitutional system. Early in the new century, as progressives and radicals assessed the suffering and wastage caused by a virtually unregulated system of private enterprise, the Constitution came under attack as a conservative and elitist frame of government still designed to thwart the aspirations of the masses of people.

Progressives during the Theodore Roosevelt and Woodrow Wilson eras managed to democratize the Constitution. Under the Seventeenth Amendment all United States senators would be directly elected by the voters rather than by state legislatures. Many states adopted the initiative, referendum, and recall. Under their own indomitable leadership, women won the right to vote in national elections. Political parties were “purified” and “democratized” by the adoption of reform measures substituting party primaries for nominating conventions, establishing nonpartisan elections in many cities and even states, and eliminating straight-ticket voting that had encouraged less informed voters to ballot for the whole party slate with one check mark. Progressive-era democratization turned out to be a largely middle-class effort whose main result was not purifying politics but curbing the impact of party leadership and party policy on government. Since political parties were often the only “lobby” or “interest group” that low-income workers, immigrants, blacks, domestics, the jobless, the very young, and the very old possessed, the decline of party meant a major alteration in the foundations of government power.

For a century and a half the constitutional frame of the government remained intact, like some grand old pyramid towering serenely over the desert storms. It was a tribute to the wisdom of the builders of 1787 that their edifice, despite wear here and erosion there, carried on its main role of institutionalizing the checks and balances among President, two houses of Congress, and the judiciary. In a century when a number of upper houses were abolished or defanged in other Western democracies, the American Senate retained its panoply of powers. The absolute veto of House and Senate on each other remained, as did the qualified vetoes of President and Congress on each other. A Rip Van Winkle returning to Washington a century after the Capitol was built and proceeding from the White House along Pennsylvania Avenue to Capitol Hill would have found the same several branches, separated from one another, everything quite in place, just as the Framers had wanted.

Within this structure, however, powers shifted, processes changed, with the ebb and flow of political combat. The presidency had assumed far more massive power than the Framers could have dreamed—and yet had lost control of large sections of the executive branch when regulatory commissions, the Federal Reserve Board, bureaus supported in Congress and the country by special interests, were cut off from supervision by even the most vigilant of Presidents. Even within the Executive Office itself, the President’s control was not absolute, as the Iran-Contra revelations disclosed. The Senate held a veto on the rest of the government but still was subject to internal veto by a few determined filibusterers. The House of Representatives, once a relatively disciplined body under Speakers called “czars,” was fragmented by party factions, committee and subcommittee chairpersons, activist staffs, and interest groups and their lobbyists.

Save in war, the Framers’ fundamental strategy of government was not harshly tested until the depression years, when the public demanded that the government act. So effective was FDR’s masterful combination of moral leadership, indefatigable horse trading, and delicate manipulation that the failure of the New Dealers to end unemployment and rural and urban poverty was not fully recognized. It was only after World War II, when analysts compared the limited economic success of the New Deal with the massive wartime improvement in employment, wages, public housing, nutrition, that scholars and practitioners proposed changes to strengthen the institutional linkages between President and Congress: simultaneous election of President and all legislators; a joint executive and legislative cabinet to set policy; a broadening of the impeachment power; Senate ratification of treaties by majority rather than two-thirds vote.

Two hundred years earlier the Founding Fathers had not only framed proposals far more bold and sweeping than these; they had written them into a constitution and then prevailed on suspicious but open-minded grass-roots leaders to adopt them. Politicians, scholars, and journalists largely ignored the proposals of the mid-twentieth century “re-framers” or greeted them with hostility and ridicule.

But some critics responded to would-be reformers with analysis rather than anger, challenging the reformers’ basic assumption that constitutional checks and balances impeded good government. “There are two fundamental arguments for a constitutional system of separate institutions sharing powers: It helps preserve liberty and it slows the pace of political change,” wrote political scientist James Q. Wilson. “Those arguments are as valid today as they were in 1787.” The interplay of conflicting leaders might bring slower, more incremental progress, but it would be safer and sounder.

Was there a better way—some means of shaping a more unified, effective, and responsible government that would not open the Pandora’s box of constitutional alteration? A group of political scientists, meeting in the late 1940s, had urged in effect that Americans return to the party system that the “party framers”—not only Madison and Hamilton but Jefferson, Jackson, Van Buren, and later the great Republican party leaders—had shaped in the century after the founding. Their report, “Toward a More Responsible Two-Party System,” proposed some significant institutional changes: stronger national party organization in the form of a top governing council of fifty party and electoral leaders; a biennial national convention; a more representative national committee and more cohesive and disciplined House and Senate parties based on a combination of stronger leadership and democratic decision making by caucuses.

These party proposals met much the same response as had schemes for constitutional modernization: hostility, derision, inattention, along with some scholarly analysis. While the report had some impact on the thinking of journalists, it largely lay neglected both by politicians and by academics. Proposals for party change during the 1960s and 1970s took an entirely different direction: proportional representation of women, young people, and minorities in the selection of Democratic party convention delegates and other devices to “democratize” the Democracy, in the spirit of party reform.

Was it possible to have both strong parties and democratic parties? A number of state Democratic parties, such as those in Iowa, Minnesota, and Massachusetts, combined participatory, caucus-based local structures with good organization and leadership at the state level. Nationally the Republicans liberalized a bit some internal party processes and urged state parties to bring in more women, minority persons, the old and the young, and “heritage groups.” The GOP also modernized its fund-raising and promotional activities, which helped bring the Republicans their stunning presidential victories of 1980 and 1984.

For some years “constitution modernizers” and “party renewers” pursued their separate paths. Each group in its own way sought to outwit the Framers—to pull together the government branches that the Constitution put asunder. The modernizers would do so by modifying the constitutional checks and balances, the renewers by building party ties that would bind President and Congress, House and Senate, despite the checks and balances. Party renewers contended that constitutional modernization, desirable though it might be, would never occur because the American people would oppose any major tampering with the sacred document. Constitution modernizers replied that strong enough party bridges could never be built over the wide constitutional chasms that separated legislators and executives.

During the 1980s the two groups bridged their own chasm to a considerable degree. Constitution modernizers recognized that they could not outwit the Framers unless they used the Framers’ own strategy. If the essence of the checks and balances was to seat President, senators, and representatives in separate and rival constituencies, then the antidote was to build a nationwide two-party constituency so that the leaders of the winning party could govern with the support of their partisans across the nation. If conflict among branches of the government could be transformed into conflict between a government party and a “loyal opposition” party, the former could expect to have considerable control over policy, at least until the next presidential election.

Party renewers, for their part, came increasingly to recognize that the two major parties had become so infirm that they could never revive on their own, even to the level of strength they had enjoyed in the nineteenth century. Present-day parties needed artificial stimulation—and if institutional checks and balances had tended to fragment the parties, then knitting the government together organizationally or structurally might in turn unify the parties.

After many a summit conference, constitution and party renewers agreed on a “minimal” program: granting representatives four-year terms concurrent with the presidential, thus abolishing the “unrepresentative” midterm election; granting senators eight-year concurrent terms so that President, representatives, and senators would take office together and thus provide the basis for teamwork; permitting members of Congress to sit in the cabinet without giving up their congressional seats; replacing the two-thirds treaty requirement in the Senate with a simple majority-rule requirement in both chambers; broadening the impeachment power so that Presidents could be removed not only for malfeasance but also for losing the confidence of both parties in Congress and in the nation; strengthening national parties, especially in their opposition role; and, perhaps most important of all, establishing the foundations of party leadership unity at the grass roots by allowing voters to choose between party slates for federal offices and “vote the party ticket.”

The American people as a whole were supremely uninterested in these proposals. Most leaders, having risen to office under the existing system, were reluctant to junk it. Bold thinking about radical institutional reform was as rare in the 1980s as it had been rife in the 1780s. Two hundred years later it was still hard to outthink or outperform the Framers.

As the last barrage of star bombs lighted up the Philadelphia skies in September 1987, Americans concluded months of unbridled constitution worship during that bicentennial year. It was clear that they honored the Founding Fathers in every regard—with one exception. They were in no mood to emulate the Framers’ willingness to stand back from the existing constitution in 1787—the Articles of Confederation—and not only criticize but alter and in the end abolish it. During the flush times of the late 1980s Americans would only celebrate the Constitution, not criticize or even cerebrate about it.

The reason was not only Constitution worship. Americans had an instinctive feeling, buttressed by years of surviving crises, that in a pinch their ultimate safeguard lay not in constitutions and parties but in the President. It was to the White House that they had turned for reassurance, inspiration, consolation, explanation, drama. FDR’s forthright actions during the Hundred Days of 1933, his later responses to the Allies’ need for war aid, Truman’s quick action after war broke out in Korea, Kennedy’s mobilization of the whole executive branch to force Big Steel to roll back a price increase—these and countless other incidents fed the image of the President as western sheriff riding to the rescue.

Why go through the painful effort of changing the system when the law-and-order men were so easily available? But did the lawmen truly stand for law and order? Watergate, of course, revealed the opposite, but the cast of characters seemed so bizarre that defenders of the presidency could dismiss it as an aberration. Washington insiders knew of myriad other White House misadventures and cover-ups, but it remained for the Iran-Contra revelations to dramatize the extent to which the President—and hence the people—had lost control of the presidency.

The popular idea of presidential abuse of power was of a Nixon or Johnson seeking to seize control, but their reasons for power grabs may have lain more in presidential frustration than in presidential feistiness. In many cases, the White House acted because the system as a whole seemed paralyzed in the face of crisis, whether depression, Nazi aggression, civil rights violations, the “communist menace,” or hostage seizures. Under such pressure the White House could become a “rogue presidency”—an unsaddled beast, unable to control itself, on the rampage through the wilderness of the American political system. Iran-Contra and all the other excesses and usurpations demonstrated that this beast might be too hard to tame, too dangerous to ride.

Hence Americans had come to rely on the judiciary both to tame the presidency and to take leadership on its own. With neither the Congress nor the President able or willing to act on civil rights, the Supreme Court had moved into the vacuum, most notably with its epochal Brown decision of 1954. Since that time—and even after LBJ and his Democratic Congress put through the great civil rights measures of 1964 and 1965—the Court had continued to make policy in the most sensitive areas: First Amendment liberties, women’s rights, the environment, affirmative action, criminal procedure, privacy.

The resurgence of the “imperial judiciary,” of “government by judiciary,” of the Supreme Court as super-legislature, intensified a debate that had proceeded off and on ever since John Marshall’s assertion of judicial power in 1803. During Marshall’s leadership of the Court, champions of states’ rights denounced its nationalizing thrust and its government by judiciary. They complained less of judicial power, and antislavery leaders complained more, with the arrival of the Taney Court and the enunciation of Dred Scott in 1857, in which the Court for the first time vetoed a major substantive act of Congress. The course of the debate over the next century demonstrated that in judicial politics as much as legislative and electoral, much depended on whose ox was gored.

The quickest flip-flop occurred during the 1930s and 1940s. As the High Court dismantled part of the New Deal in 1935 and 1936, New Dealers denounced the “nine old men” and their power, while conservatives toasted “judicial independence.” Within a decade or two the right was denouncing the “Roosevelt Court” and its reach, while liberals exulted in the Court’s upholding of New Deal and Fair Deal measures and their implementation—and welcomed especially its intrusion into desegregation and other civil rights areas. The stakes in this legal, political, and ideological battle became much higher as the judiciary moved into wider and wider policy fields and as claimants turned to the courts for relief because they could not win action from the legislative and executive branches.

It was clear that conservatives would have to wait for another shift of the party pendulum before they could hope for a switch in Supreme Court philosophy. Eisenhower had not been much help with his appointment of Earl Warren as chief justice—a choice the President later called his biggest mistake—but Ike after all was a moderate Republican in rightists’ eyes, and hence prone to errors of this sort. With Nixon’s election in 1968 and reelection in 1972, conservatives could hope for a return to judicial sanity after the excesses of Warren and his brethren. And Nixon came through with the appointments of two dependably conservative jurists in William H. Rehnquist and, above all, Warren E. Burger, whom he named chief justice in 1969 after Warren’s retirement. At last the Court would be following the election returns.

But it was not all that simple. For one thing, the Burger Court harbored several holdovers from previous Administrations, including the leader of the liberal faction, William J. Brennan—another Eisenhower appointee—and Thurgood Marshall. No judge worthy of presidential appointment and Senate confirmation, moreover, was likely to tread a narrow ideological line, whatever his background. Once faced with concrete cases, the justices were constrained by constitutional heritage, judicial precedent, decisions of lower courts coming up on appeal, the exchanges in their own semi-weekly conferences, the attitudes of their clerks fresh out of law school, and above all by the complexity and intractability of cases before them. Nixon’s two other appointees showed the influence of office: Lewis F. Powell proved a consummate centrist, becoming in the later years of the Burger Court a crucial and unpredictable swing vote in close decisions, and Harry A. Blackmun often sided with the liberals, writing the majority opinion in Roe v. Wade.

So the Burger Court brought no judicial counterrevolution. Rather it followed a meandering middle way as it mediated among issues. Thus on school desegregation the High Court in 1973 held that the Denver school board had practiced a policy of segregation in choosing sites for school buildings and in its pupil transfer plans, but the Court reflected widespread public opposition to busing when in 1974 it rejected a broad plan to integrate the overwhelmingly black school systems of metropolitan Detroit with fifty-three overwhelmingly white suburban school districts, and in 1976 vetoed a federal district court’s plan for Pasadena that barred any school from having a majority of black students. In a 1978 case, Regents of the University of California v. Bakke, the Burger Court struck down a medical school quota system that allotted a fixed number of admissions for minority applicants. Applying the Fourteenth Amendment in this first great test of “reverse discrimination,” the Court held that Allan Bakke, a white applicant whose test scores were superior to those of some of the minority applicants accepted by the medical school, had been denied his right to equal protection. But while disapproving the school’s “explicit racial classification” and its fixed quotas, the Court acknowledged that the State had “a legitimate and substantial interest” in ameliorating or eliminating “the disabling effects of identified discrimination.” In the thorny field of discrimination against women the Court generally was protective of women’s rights in specific cases but refused to adopt a rigorous test that would deem gender classification, like racial classification, as inherently suspect, unless it served an overriding State interest.

Expected to offer a consistently law-and-order interpretation of the Fourth Amendment, the High Court cautiously picked its way between upholding and vetoing state law-enforcement procedures. The Court, perhaps recognizing that for millions of American commuters their car as well as their home was now their castle, threaded an especially narrow course on searches of vehicles. “While the justices gave state police broad latitude to conduct auto searches,” students of the Court wrote, “they prohibited warrantless interrogation of motorists to check driver’s licenses and registrations without probable cause suggesting possible criminal activity. If the Burger Court permitted police to search the passenger compartment of a car stopped for a traffic violation and to seize evidence subsequently used to prosecute for violation of narcotics laws, it also prohibited the search of a vehicle’s luggage compartment.” Between the back seat and the luggage compartment lay a narrow line indeed.

In other areas the Burger Court also took a mixed position. On protection against self-incrimination, it continued the Warren Court’s Miranda doctrine but refused to broaden it. Its ruling in Roe v. Wade was a victory for women’s rights, but the Court afterward sustained denials of public funding for abortions. It protected or even enlarged free speech in some cases but narrowed it in others, as in cases of pornography or of leafleting or picketing in privately owned shopping malls.

Rejecting such eclecticism, the Republican right greeted with new hope Burger’s decision to quit the Court in 1986 to head the nation’s official bicentennial commission, and with even greater hope Reagan’s elevation of Rehnquist to Burger’s seat and his choice of two associate justices with impeccable conservative credentials. A minor consequence of conservative satisfaction was that the broad issue of the Court’s power to invalidate laws on a variety of grounds—one of the most important and potentially explosive issues of American democracy—was hardly touched upon during the cerebrations of the 1987 bicentennial. Instead, oceans of ink and flights of oratory were devoted to a lesser though equally fascinating question, original intent.

This question was propelled into the forensic arena when Reagan’s Attorney General, Edwin Meese III, calling for a “Jurisprudence of Original Intention,” intimated that his chief would pick only judges whose applications of the Constitution would reflect the intentions of the Framers. Law school professors, historians, political scientists, and Supreme Court justices pounced on this dubious notion, proved conclusively that it was wrong historically—and then wondered whether they had won a glorious intellectual victory on a side issue. The scenario was repeated when Reagan proposed Robert H. Bork for the High Court—the Senate foes won the debate over original intent, handily vetoed Bork, but were left with a Pyrrhic victory after the President found an almost equally conservative substitute.

The central issue of judicial power was also one of intellectual leadership. In their meandering course, the Burger Court, and for a time at least the Rehnquist Court, appeared deceptively eclectic, moderate, practical. But that course concealed a lack of jurisprudential and philosophical coherence in the very heart and brain of the federal judiciary. At best the Reagan Court was marking time; at worst it was losing time, failing to develop clear and consistent operational standards related to the nation’s values, storing up trouble for the future, its incoherence rivaling what the Iran-Contra hearings revealed in the presidency, and with perhaps equally grave consequences.

In the late 1980s the structure of government remained intact, the balances in the old clock still operating, the springs and levers still in place. A Republican President and a Democratic Congress nicely checked each other; House and Senate held an absolute veto power over each other; the Supreme Court had an all but final veto over the two political branches. Inside these separated institutions lay political and intellectual conflicts that contained the seeds of enormous change and potential crisis.

Realignment? Waiting for Lefty

For a century and a half the two main parties had proceeded down the political mainstream, rolling along with the inevitability of the Mississippi. But just as storm and flood had periodically roiled the placid waters of that river of American history, so political movements and ideological tempests had disrupted the steady flow of two-party politics. And American politicians, like the people living along the riverbank, knew that the flood would come again—but they did not know when.

Social protests and political movements had risen and fallen with some regularity over time. Before the Civil War abolitionism had challenged both Democrats and Whigs and the easy accommodations they had made with slavery. In the 1890s aroused agrarians had moved into the Democratic party, even wresting the presidential nomination from the centrist Clevelandites. In the 1930s several streams of protest had coalesced as desperate farmers, urban reformers, western progressives had taken a dominant role in the Democracy. In the 1970s and 1980s conservatives of varied stripes had merged with Republican party regulars to put Ronald Reagan into office and keep him there.

These movements might appear to have erupted with the suddenness of a spring freshet and then subsided as quickly. Each protest, in fact, had sources deep within the politics and morality of its period. Outrage over slavery had aroused the consciences of men and women in both the Democratic and Whig parties, triggered third-party forays such as that of the Liberty party, cut deep divisions not only between parties and between major interests but within them, and convulsed the entire political system by the 1860s. The lightninglike capture of the Democratic party by the Bryanites in 1896 was the product of years of intense agrarian unrest, western greenback and silver movements, organizational efforts by the Farmers’ Alliance leaders and rank and file, years of populist agitation, the devastatingly low farm prices and other hard times of the nineties. Fighting Bob La Follette’s Progressive party of 1924 and Al Smith’s presidential candidacy of 1928, followed by the farm movements of the great depression, helped pave the way for Roosevelt’s presidency and for the New Deal expansion of both the political appeal and the social philosophy of the Democracy. And on the American right both economic and evangelical leaders had fought a long battle, first winning and then losing with Goldwater in 1964, flirting with George Wallace and other elements North and South hostile to civil rights, and losing once again with Reagan in the GOP nomination fight of 1976, before achieving their breakthrough in the 1980s.

Thus movement politics had collided and combined with party politics throughout American history. Like their counterparts in other countries, American social protest movements were unruly, untidy, and unpredictable in effect, but they displayed continuities and similarities in their very dynamics. The pattern was clear, even dramatic: these movements emerged out of economic stress and social tension and erupted in conflict, often violent. After a time they dominated political debate, overshadowed more traditional issues, cut across existing lines of party cleavage, polarized groups and parties. The immediate test of success was whether the movement could force one major party or both of them to embrace its cause. The test of long-run success was whether the movement left the whole party system altered and, even more, left the political landscape transformed.

The great transformations that had occurred, in the antecedents of such critical elections as those of 1860 and 1896, and the series from 1928 to 1936, have been studied in great detail by exceptionally able historians and political scientists. The main interest was usually in the rise and fall of parties, since their fate in elections could be so easily measured. But party change contained a paradox—despite all the turmoil the nation had undergone, the Democratic party had existed ever since the 1830s and the Republican party since the 1850s. These staid old parties had entered and left office like Box and Cox but had continued to move down the political mainstream, capsizing and sinking third parties in the process.

Hence on closer inspection, the critical question was not so much party realignment as party reconstitution. The most significant case of this kind of change in the twentieth century was the shift of the Democratic party under Roosevelt, Truman, Kennedy, and Johnson. Behind FDR’s leadership the Democracy became much more of an urban, trade union, ethnic, and poor people’s party, but—partly because of Roosevelt’s need of support from internationalists of all stripes during the war years—it retained its old and solid base in the white South. Truman’s bold civil rights stance, Kennedy’s Catholicism and growing commitment to civil rights, and LBJ’s comprehensive civil rights program accelerated the reconstitution of the party. Blacks forsook their ancient allegiance to the Republican party of Lincoln and flocked to the Democracy; white southern Democrats forsook the party of Grover Cleveland and Woodrow Wilson to move first toward third-party ventures and then toward their old partisan adversaries, the Republicans.

For Southerners, switching parties was not easy. Their leaders in particular were “in a bind with the national Democratic party,” as Republican Representative Trent Lott of Mississippi noted. “If they subscribe to the national Democrat party’s principles, platform, they are clearly going to alienate the overwhelming majority of the white people in Mississippi.” If they stayed with the national party’s base, “they wind up with blacks and labor and your more liberal, social-oriented” Democrats. “Put those groups together and they are a minority in Mississippi.” So Republican party leaders were ready at the front gate to welcome the Southerners. The Goldwater-Reagan party, having ousted the liberal Rockefeller wing, was prepared to usher southern ex-Democratic leaders into the inner councils of the purified GOP. Congressional converts like South Carolina senator Strom Thurmond were soon making Republican party policy, and convert John Connally of Texas even ran for the Republican presidential nomination.

The test of this reconstitution lay in the political grass roots of the South, and here the shift was dramatic. The percentage of white Southerners identifying themselves as Republicans rose eight points between 1979 and 1984 and then jumped an astonishing ten points further the following year, a movement which public-opinion analyst Everett Carll Ladd saw as “an almost unprecedentedly rapid shift in underlying party loyalties across a large and diverse social group.”

By the late 1980s the Republican party had reconstituted itself as the clearly conservative party of the nation. Ronald Reagan presided as a conservative; all the Republican presidential aspirants of 1988 endorsed his Administration and bore, in one way or another, the Reagan stamp. Reagan Republicans had conducted “half a realignment,” in popular terms. They posed a challenge that the Democratic party leadership was failing to meet as the 1988 election approached.

That challenge was as much philosophical and ideological as political and electoral. The GOP’s rightward tack appeared to leave a huge unoccupied space in the middle of the political spectrum. To all the Democratic presidential aspirants save Jesse Jackson this space was an enticement. How logical it appeared for Democrats to shift some of their appeal to the center while holding their traditional support on the left, and forge a moderate-centrist-liberal coalition much like the winning North-South alliance the national Democracy had maintained for decades before that strategy crumbled in the face of the black revolt. But in a battle against conservative Republicans, they could not talk centrism without being accused of the sin of “me-tooism.” And me-tooism was hardly the answer to the Democratic dilemma. “If American voters are in a conservative mood,” Arthur Schlesinger, Jr., wrote, “they will surely choose the real thing and not a Democratic imitation.”

So what the Democrats talked was not conservatism or even centrism but pragmatism. The American Enterprise Institute political analyst William Schneider, after sitting in on a 1986 board meeting of a candidate’s think tank—it happened to be Gary Hart’s—noted that certain words kept coming up: parameter, interactive, consensus, instrumental, modernize, transition, dialogue, strategic, agenda, investment, decentralize, empowering, initiative, and entrepreneur. But the word of the day, he noted, was pragmatic. “Be pragmatic in all things,” the group seemed to be saying. “Be not ideological.” The Democrats’ selection of Massachusetts Governor Michael Dukakis as their presidential nominee in 1988, and his choice in turn of Texas Senator Lloyd Bentsen as his running mate, met this test.

What did they mean by pragmatism? Whether the candidates used the term or not, it was clear they meant what was practical, realistic, sensible—what worked. But what was the test of workability? By what values was workability measured? Into this forbidding “ideological” land the candidates were reluctant to venture.

What politicians mean by workability is usually what promotes their immediate candidacies, rather than an ultimate cause or creed. Thus was pragmatism degraded into the most self-serving kind of doctrine, a pragmatism that would not have recognized its intellectual ancestry. Indeed, presidential candidates in the 1980s, especially the Democrats, were embracing their brand of pragmatism so enthusiastically as to make it into a doctrine, even an ideology—anathema to Charles Peirce and John Dewey.

To a degree, pragmatism was a convenient way to avoid labeling. Did it conceal an agenda? “Pragmatic,” Schneider noted, had become “this season’s Democratic code word of choice for market-oriented, rather than government-oriented, solutions.” For most of the Democratic candidates, it seemed, pragmatism meant some form of market capitalism. Then how much of an ideological gap separated them from Reaganism? The leaders of the Democratic liberal-left answered “too little” and proposed a clearly contrasting alternative.

That alternative was a movement strategy as against a strategy of mainstream and marketplace. It was in key respects an old-fashioned idea: if social protest movements had been vital to the renewal and redirection of political parties in the past, and if the needs and aspirations of large sectors of American society remained unmet, then the Democrats must make their mightiest effort to reach out to movement leaders and rank and file. These, in some combination, were its natural and traditional constituency—women, peace groups, blacks, union labor, small farmers, ethnics, youth, the poor, and the jobless.

However familiar a political alliance this was for the Democrats, the question in the late 1980s was whether the partners—party regulars and movement activists—were ready for one another. The Democratic party leadership hardly appeared ready for bold initiatives. That leadership, indeed, had cut its structural ties with movement activists when, earlier in the 1980s, the Democrats’ regular midterm policy conference had been discontinued. That midterm conference, a grand assembly of both Democratic party regulars and delegates representing women and minority groups, had been noisy, expensive, untidy, unpredictable, sometimes a bit embarrassing. But it had also linked the party establishment to creative and dynamic electoral groups; by abandoning it, the leadership cut off some of its own intellectual and political lifeblood. The presidential aspirants, focusing on their own campaigns, could not be expected to restore the connection. Several of them in fact were founders of the Democratic Leadership Council, a centrist group that made no secret of its intention to rescue the Democracy from control by “extremists” and “ideologues”—the very groups that had lost their footing with the demise of the midterm conference. Michael Dukakis, in choosing Lloyd Bentsen as his running mate and shunning liberal-left stands on tough issues like taxes, presented the moderate face of the Democratic party to the nation. In Massachusetts, however, he had won elections in part because of his skill at uniting party regulars with movement activists.

Nationwide the activists for their part had decidedly mixed desires and capacities to marry or remarry the Democratic party. The movements themselves were divided organizationally. Peace activists were morselized into tens of thousands of local groups individually or cooperatively conducting local rallies, demonstrations, and protest action. Women’s groups had the same types of divisions along with a particular inhibiting factor—many women’s organizations, especially the large and influential League of Women Voters, were nonpartisan and hence barred from forming organizational links with the Democrats. Blacks were overwhelmingly Democratic in their voting but proudly separate in most political endeavors. Some activists in all these movements shunned party politics as a matter of principle, on the ground that Democratic party leaders had betrayed, sold out, neglected, forgotten, or otherwise mistreated them over the years. Other movement leaders spurned any kind of conventional politics at all, preferring to put their energies into street activism.

Then there were the “young,” tens of millions of them. It was calculated that by the late 1980s those born from 1946 onward, in the baby boom, would comprise around 60 percent of the electorate. But this was a demographic “cohort,” not a voting bloc. Some had become the yuppies who were distinguished mainly by having no distinctive political attitudes beyond a vague and ineffectual anti-establishmentarianism. Historian Robert McElvaine, however, detected among the immense number of baby-boomers a group that was not upwardly mobile, affluent, or typically professional. “During the 1950s and ’60s, the average American’s inflation-adjusted income increased by 100 per cent between the ages of twenty-five and thirty-five,” McElvaine noted. “For those who were twenty-five in 1973, however, their real income had risen by only 16 per cent when they reached thirty-five in 1983.” Because many of these young people had lived the hard-pressed lives about which New Jersey rocker Bruce Springsteen sang, McElvaine called them the “Springsteen Coalition.”

Inspired by childhood memories of King and Kennedy, disillusioned by Watergate and Vietnam and much that followed, these young persons retained both a sense of grievance and a streak of idealism that might surface in their voting in the 1990s. But would they vote? Or would they contribute more than their share to the steadily declining voter turnout of the late twentieth century? The same question could be asked of the protest movements that made up the Democratic party’s natural constituency. Movements that had supplied zest and fresh blood to party politics now appeared passive, dispirited. They were part of the impasse of the system, not solvents of it.

If movements as well as parties were fixed in the immobility of American politics, was it likely that Americans might experience incremental, brokerage politics under transactional leadership for years to come? Or was it possible that they would enter another period of social protest, movement politics, and major party transformation and bring on a critical realignment? The answer would turn on the quality of leadership and the character of its followership.

Neither the movement nor the party leadership of the nation gave much promise in the late 1980s of moving Americans out of their political immobility. Leaders were scarce whose capacities could compare with those of the great leaders of the past—with Dr. Townsend’s skill in mobilizing the elderly in the 1930s, with the kindling power of John L. Lewis or the labor statesmanship of Walter Reuther, with the intellectual and political audacity of the early leaders of the women’s movement, or with the galvanizing power and charisma of King and his fellow protesters. As for the Democratic party, virtually all the candidates for President—even Jesse Jackson—exhibited great skill at working within the system. Few of these “pragmatists” hinted at a potential for transcending the system, mastering it, transforming it if necessary. To be sure, any one of the candidates might display great leadership capacities upon attaining office, as Franklin Roosevelt had done. But FDR had not had to go through the modern presidential recruitment process that tested candidates more for their ability to campaign than for their capacity to govern.

Was there no alternative, then, to politics as usual? One possible development that could “break the system wide open” was an economic catastrophe of the magnitude of the great depression, or at least of a severe recession following a stock market plunge like that of “Black Monday” in October 1987. Some liberals and Democrats were predicting such an event, some even forecast a likely time of onset, but the prospect that the nation had to wait for a catastrophe in order to take actions that might have prevented it seemed as wretched as the notion that the world would have to go through a nuclear crisis before it would take the necessary steps to forestall nuclear war.

Some kind of desperate crisis might be necessary, however, for liberal-left Democrats to employ the most ambitious and radical means of opening up the system, a mobilization of the tens of millions of Americans not participating in electoral politics. By the time of the 1984 election the number of voters, even in the presidential race, where the participation rate was much higher than for lower offices, had fallen spectacularly—to roughly half the potential electorate. Americans, who like to view their country as something of a model of democracy, had the poorest voter-turnout record of all the industrial democracies. Aside from the occasional laments of editorial writers, however, Americans did not appear unduly disturbed by this travesty of democracy.

Democrats on the left had special reasons to be concerned, for the poor, the jobless, and the ethnics were disproportionately absent from the polling place. These no-shows represented a huge array of constituencies that the Democrats were failing to tap. A strenuous effort by a national voter registration group helped persuade some states to relax the registration barriers that had kept some people from voting and to allow the use of government offices as registration places, but even in those states turnout remained low. The root difficulty was that many low-income, less educated nonvoters did not see the point of voting—for them electoral participation in America was a middle-class game which they did not care to join.

It would take rare leadership to overcome their ignorance and alienation, to attract them to the polls, to enable them to vote their deepest, most authentic, and abiding needs. Aside from Jesse Jackson, who demonstrated a remarkable talent for mobilizing low-income blacks and whites in his 1988 presidential primary campaign, this kind of leadership was missing, at least on the left, in the America of the late 1980s. Such leadership could not be manufactured—it emerged out of a people’s heritage, values, aspirations, and took its color and energy from conflict. Great leadership historically had never been possible except in conditions of ideological battle. Such conflict was not in sight in a nation whose liberal leaders, or aspirants to leadership, appeared wholly content with a politics of moderation, centrism, and consensus.

A Rebirth of Leadership?

As the 1988 presidential nominating races got underway more than a year before the first primary, it appeared unlikely that one election could break open the party and the constitutional gridlock that gripped the American political system. From the very start the presidential candidates were entangled in one of the worst leadership recruitment systems in the Western world. The presidential primary not only pitted them against fellow party leaders in endless and acrimonious combat; it forced them to mobilize personal followings that after the combat might persist as dispersive and even destructive forces within the parties and within their Administration. It was not surprising that the governor of a large state, such as New York’s Mario Cuomo, would reject this process whereas fifty-six years earlier Franklin D. Roosevelt had found it possible to perform as governor and to campaign for the presidency in the much less demanding nominating procedure of 1932. How defensible was a selection process that choked off the recruitment of some of the best and busiest leaders?

In other Western democracies the parties served not only as recruiting agencies for leaders but as training grounds for leadership. Party mentors identified, coached, and promoted promising young men and women—sometimes in actual party schools. By and large, the more doctrinal the party, the more effective its recruitment and training programs. It was only when the GOP became a more ideological party that it “engaged in an extensive program of political education for legislative candidates and their managers,” John F. Bibby reported. But this effort was exceptional; in this century American parties have been too flaccid, underfinanced, and fragmented to serve as schools of leadership. Other sectors of American society, among them corporations, the military, and government agencies, taught forms of leadership, but these were specialized programs that served the purposes of the organization rather than the broader needs of the general public.

Typically, Americans were trained to be effective politicians—good brokers, manipulators, money raisers, vote winners. Since the very nature of the governmental system, with its rival branches and complex dispersion of power, put a premium on transactional leadership, American politics offered endless play to lawyers and other negotiators and mediators. The system would long ago have collapsed without their capacity to grease the machinery at the connecting points. But what if the machinery was failing anyway? Appeals for creative, transforming leadership were frequent in the 1980s but vain. Such leadership could not be summoned like spirits from the vasty deep.

When political leaders fail, Americans often turn to the next most available saviors or scapegoats—the educators. The era from Vietnam and Watergate to Iran-Contra generated even more than the usual calls for reforming or revolutionizing the nation’s secondary schools and, even more, its colleges and universities. Most of the proposals, dusted off for the latest crisis, embodied the special ideological, professional, or career interests of the reformers—more cross-disciplinary studies, more emphasis on reading the classic writings of the great philosophers from Plato on, strengthening the liberal arts curriculum, and the like. There was much emphasis in the 1980s on teaching and the taught, but significantly less on the teachers. Few of the reformers appeared to comprehend that teachers were the people’s first and most influential set of leaders, as role models, opinion shapers, inspirers, disciplinarians, embodiments of ongoing middle-class, ethnic, and political traditions.

If the public had recognized the central importance of teachers, perhaps proposed reforms would have focused more on these human beings. Or perhaps not, because “reforming” the human beings would have appeared far more difficult and dangerous than manipulating processes or techniques. Still, the quality of the teachers—their competence, breadth of knowledge, intellectual vigor, commitment to the classroom, and professionalism—was far more important than their specific mode of teaching, set of readings, or place in the curriculum.

This centrality of the teacher made all the more crucial and ominous a finding during the “educational crisis” of the 1980s that received little attention at the time compared with the headlines dwelling on superficialities. From a random sample of 2,500 Phi Beta Kappa members and of almost 2,000 Rhodes scholars, Howard R. Bowen and Jack H. Schuster concluded in 1985 that fewer and fewer of the nation’s most intellectually promising young people were entering or planning careers in higher education. This finding had the direst implications for the quality of the best kind of teaching as leadership for the half century ahead; and, as the analysis concluded, it was also significant that “the academy, it seems, grows less and less attractive as a house of intellect, as a nurturing and stimulating environment for the gifted and creative.”

This finding was widely ignored, perhaps because by implication it called for the most prodigious effort to draw the truly best and brightest of the nation’s youth into teaching. The Bowen-Schuster report noted, as had so many earlier findings, that the “quality of working conditions for faculty also has deteriorated markedly over the past decade and a half; less clerical support, overcrowded facilities, outmoded instrumentation, tighter library budgets, and poorly prepared students.” No improvement was expected for another decade or so. To overcome these deficiencies in public higher education would call for the kind of clear goals, dependable funding, long-range planning, firm commitment, steady policy making, and persistent follow-through that were so uncommon in American government.

What should good teachers teach? Not what to think but how to think— that is, how to think across a wide span of disciplines, values, institutions, and policies in a highly pluralistic, fragmented culture. Educators based their claim to priority ultimately on the proposition that the products of liberal arts or humanities programs, as exemplified by Rhodes scholars and Phi Betas, had shown such intellectual grasp of a variety of subjects as to equip them as political leaders to deal with the diverse and continually shifting problems they would face as leaders.

But could any group—even an educational elite—cope with the combination of political fragmentation and intellectual disarray that threatened the American future?

The intellectual disorder had manifested itself during the past half century in the loose collection of hazy ideas that passed as the American idea-system; in the flowery platitudes of candidates, whether about communism or the family or the deficit or poverty; in the once famous New York School of art that fractured into several New York schools and later into an endless succession of styles; in the hopes for a unified social science declining in the face of ever-multiplying subdisciplines and specializations; in the disintegration of the humanities into a “heap or jumble” that reminded Allan Bloom of the old Paris flea market.

A century and a half ago Tocqueville had observed that science could be divided into three parts: the most abstract and theoretical principles; general truths derived from pure theory but leading “by a straight and short road to practical results”; and methods of application and execution. On the practical matters, he noted, “Americans always display a clear, free, original, and inventive power of mind,” but few concerned themselves with the theoretical and abstract. On the other hand, Tocqueville said, American orators and writers were given to speaking in the most inflated, grandiloquent style about vast topics.

The American’s “ideas,” Tocqueville summed up, were either extremely minute and clear or extremely general and vague: “what lies between is a void.” The idea of freedom was his best example. It is the best example today of the “Tocquevillian void.”

Of all the central ideas in the American experiment the concept of freedom had been the most glorious, compelling, and persistent—and also the most contrarily defined, trivialized, and debased. The Declaration of Independence of 1776 was essentially a paean to liberty, a term that has been long used as an equivalent to freedom. Eleven years later the Constitution would secure “the Blessings of Liberty to ourselves and our Posterity” and in 1791 the French Constitution, responding to the same Enlightenment values, incorporated a Declaration of Rights asserting that “men are born and live free and equal as regards their rights.” Within seventy-five years “freedom” had become so evocative, and yet so hazy, as to be invoked by Union soldiers “shouting the battle cry of freedom” against slavery, by Confederate troops “shouting the battle cry of freedom” against Yankee oppression, and by a black regiment singing, “We are going out of slavery; we’re bound for freedom’s light.” During the past century speakers and writers across the entire political spectrum, from American communists to the extreme right, have invoked the term. It was rare to hear a major speech by Reagan, or by the Democratic aspirants of 1988, that did not appeal to freedom or liberty or their equivalents. It was even rarer to hear them spell out what they meant, except in more banalities, shibboleths, and stereotypes.

Did it matter that Tocqueville’s void still loomed toward the end of the twentieth century—that orators continued to “bloviate” and millions of men and women went about their minute, day-to-day decision-making with no linkage between the two? There would be no practical will to action, the philosopher Charles Frankel wrote, unless value judgments were made—and made explicit. If there was to be conversion of social theory into social action on a scale large enough to shape the whole society, a social philosophy that explored “the basic choices available” and offered “an ordered scheme of preferences for dealing with them” was indispensable.

Any one of our animating ideas was complex enough—had to be complex to be so attractive to so many different minds. Liberty was the prime example. A word that appears on our coins, on the marble walls of public monuments like the Lincoln and Jefferson memorials, in virtually every stanza of the great national anthems, had to resonate appealingly through many classes, regions, and occupations. But what did it mean, as a guide to action? Only negative liberty—freedom from arbitrary regulation by public or private power wielders? Or also positive liberty—the freedom to take purposeful steps, often in social and economic areas, to realize one’s goals? Both freedoms could be left at first to the private sphere, but as society became more complex and interrelated, the two liberties increasingly impinged on each other and on the public realm. This happened most dramatically with slavery, and led to one of Lincoln’s wisest reflections. “The world has never had a good definition of the word liberty,” he declared in 1864, “and the American people, just now, are much in want of one. We all declare for liberty; but in using the same word we do not all mean the same thing. With some the word liberty may mean for each man to do as he pleases with himself, and the product of his labor; while with others the same word may mean for some men to do as they please with other men.…”

Events expanded the concept of liberty, and further complicated it. Franklin Roosevelt not only took the lead in defending the Western democratic definition of freedom against Adolf Hitler’s perversion of it, but in proclaiming the Four Freedoms he nicely balanced the negative liberties of speech and religion from arbitrary public and private action against the positive liberties of national military security and personal economic security. Later, contending that “necessitous men are not free men,” he said, “We have accepted, so to speak, a second Bill of Rights under which a new basis of security and prosperity can be established for all—regardless of station, race, or creed.” The President then listed a set of positive economic rights that would constitute the agenda for liberal Democratic Administrations and candidacies in the years ahead.

The struggle over negative liberty—personal protection against authority—attracted some of the most impressive intellectual leadership in the history of the nation. The philosophical heritage of individual liberty, the Jeffersonian and Lincolnian defenses of this supreme value, the fervent conservative vindication of property rights, the vigilance of the American Civil Liberties Union and like-minded groups, the presence on the High Court of justices with the commitment of Louis Brandeis, Harlan Stone, Felix Frankfurter, Hugo Black, William Douglas, Earl Warren, the zeal for civil liberties on the part of appellate judges such as Learned Hand of New York—all of these had variously combined to establish the federal judiciary as, on the whole, the prime definer as well as protector of civil liberties. The enunciation by the High Court during the 1940s of the “preferred position” doctrine, holding that First Amendment freedoms deserved the highest priority in the hierarchy of constitutional protections and presuming to be unconstitutional any law that on its face limited such freedoms, further insulated individual liberty against arbitrary interference.

Still, civil libertarians could not be complacent as the Bill of Rights bicentennial neared. The judiciary’s record since the founding had been uneven. And when, in 1987, the Chief Justice of the United States, along with the latest Reagan appointee, joined in a minority vote to sustain the constitutionality of a Louisiana statute requiring the teaching in public schools of the creationist theory of human origin, civil libertarians had to assess the implications for the future of appointments by a series of conservative Presidents.

“All men are created equal.” If the Court had helped fill Tocqueville’s void in the area of civil liberty, the same could not be said about the record of the nation’s intellectual and political leadership in meeting the flat commitment that Americans of 1776 had made to the principle of equality except for slaves and women. This failure was understandable in part because the realization of economic and social equality was intellectually an even more daunting venture than the protection of individual liberty. But even the most essential preliminary questions had not been answered: What kind of equality was the issue—political, social, economic, gender, racial, or other? Guaranteed by what private or public agency, if any? Equality for whom—blacks as well as whites? Equality when? This last question was of crucial importance to low-income Americans long assured that their opportunity would come if only they waited long enough. It had taken almost a century for the nation to take the primitive step of making child labor illegal.

The intellectual confusion over equality was sharply reflected in the ancient debate between equality of condition and equality of opportunity. It was in part a false debate, for very few Americans wanted absolute or even sweeping equality of condition. But even the sides of the debate were mixed up. In part because Herbert Hoover and other enlightened conservatives had contended that inequality of condition was acceptable as long as all the “runners” had the same place at the starting line, many on the left spurned that kind of equality as brutal capitalist competitiveness.

But in fact equality of opportunity was a most radical doctrine. If the nation actually wanted persons to achieve positions for which their basic potentials of intelligence and character fitted them, then government must be more than a referee at the starting line; it must intervene at every point where existing structures of inequality barred people from realizing those potentials. If the nation wanted to open the way for people to realize their “life chance,” then government or some other agency must act early in their lives to help them obtain the motivation, self-assurance, literacy, good health, decent clothes, speech habits, education, job opportunity, self-esteem that would enable them really to compete.

Neither in action nor in analysis did the government fill this Tocquevillian void. Perhaps the political leadership did not wish to, for granting true equality of opportunity would call for innovative social analysis as well as bold and comprehensive governmental action—would call indeed for a program for children’s rights rivaling earlier programs for the poor, women, and minorities. Some presidential candidates in 1988 were cautiously discussing such policies as much-expanded child care and paid leaves for parents of newborns, but no Marshall Plan for children was in sight.

The vital need for a set of findings firmly seated in clear and compelling moral principles and linked in turn to explicit policy choices was met, almost miraculously it seemed, in 1984 by the 120-page first draft of the Roman Catholic bishops’ “Pastoral Letter on Catholic Social Teachings and the U.S. Economy.” The letter was unsparing of American leadership. The level of inequality in income and wealth in the nation was morally unacceptable. “The fulfillment of the basic needs of the poor is of the highest priority. Personal decisions, social policies and power relationships must all be evaluated by their effects on those who lack the minimum necessities of nutrition, housing, education and health care.” Again and again the bishops assailed selfishness, consumerism, privilege, avarice, and other ugly characteristics of American society. Speaking from their hearts trained in compassion and their heads trained in moral reasoning, from their pastoral closeness to the needs of people and their experience with government programs, the bishops magnificently filled the gap between high moral principle and explicit economic policy.

If an air of old-fashioned morality hung over the bishops’ letter, some of the solutions too sounded old-fashioned to some critics. In calling for help to the needy abroad the bishops appeared to ignore findings that a great deal of American aid, instead of helping the poor in Third World countries, had come under the control of powerful and rich elites who portioned it out among themselves. Thus the American poor to some degree were subsidizing the foreign rich. And when the bishops proposed empowering the poor, at home and abroad, critics noted that it was precisely in power, among other things, that the poor were poor; they might not know how to gain and exert power effectively any more than they were able to gain and spend money. In many other respects too, solving poverty was extraordinarily difficult. But the bishops would hardly have denied this.

In any event, few were listening, or at least acting. Three years later the richest 1 percent of American families were approaching the peak share of the nation’s wealth of 36 percent attained in 1929. For the poorest 20 percent of American families, annual incomes in real dollars were one-third less than in 1972. Almost half of the new jobs created during the decade paid less than a poverty income. The stock market, however, was booming, and millions of middle-class Americans were engaged, like their government in Washington, in a spending spree.

As it turned out, the two-hundredth birthday of the Constitution in 1987 was an occasion much more for celebration than cerebration. Serious debate about the Constitution was minimal, except for one unplanned episode. Reagan’s nomination of Robert Bork for the High Court provided a classic demonstration of the type of presidential-congressional struggle so carefully planned by the Framers, and provided also a Senate forum for debate over major constitutional issues such as “original intent.” But those who hoped that 1988—the opening year of the Constitution’s third century—might prove an occasion not only for testing Reagan conservatism at the polls but also for debate over sharply posed constitutional issues were to be disappointed on both counts. Major governmental restructuring occurred that year not in Washington but in the Soviet Union.

The electoral politics of 1988 turned out to be a disgrace to an “advanced democracy.” After spending hundreds of days on the campaign road and millions of dollars before a single vote was cast, two coveys of candidates, Democratic and Republican, underwent a series of primary elections so rife with opportunism, so repetitious, and finally so anticlimactic as to bore the electorate before the main campaign even started. Two conventions full of fervid oratory but void of dramatic roll-call votes merely ratified the results of the primaries. The campaign that followed was the most scurrilous in recent American history, the most intellectually degrading since the campaign of bias against Al Smith in 1928.

A principal reason for George Bush’s victory, studies indicated, was general satisfaction on the part of most voters with what the Democrats called the “credit-card economy.” Most respondents in a nationwide poll during the early fall of 1988 said that they viewed themselves as better off than they had been eight years earlier, and most expected to be still better off four years hence. Few heeded warnings of future economic disarray or collapse as the trade and federal budget deficits continued to soar. Basic also to the Bush victory was a large and solid conservative constituency; 42 percent of the respondents in an August CBS News/New York Times poll held that the Reagan Administration’s approach had not been conservative enough. Perhaps decisive in the outcome, however, was the GOP’s expert manipulation of media, money, and symbols, and of the Republican candidate himself.

Long before election day, large numbers of voters were protesting the low level of the campaign. Many resolved on principle not to vote for president. An unprecedented number of newspapers refused to endorse either candidate. Voter registration campaigns floundered; after all the pronunciamentos about the biggest registration drive yet, the percentage of eligible Americans who were registered to vote by election time had dropped over two percentage points from four years before. And on election day the voter turnout—the ultimate test—fell to 50 percent, the lowest rate since the Coolidge-Davis race in 1924.

The Bush Republicans had proved that Reaganism could win without Reagan. They had failed, however, to convert presidential Republican votes into congressional or gubernatorial majorities; indeed, the Democrats gained small increases in their Senate, House, and statehouse ranks. One test for President Bush—a former chairman of the Republican National Committee—was whether he would now prove able to further modernize the GOP even while he sought to draw more Reagan Democrats into the party. Still, the Republicans had shaped a conservative, Sunbelt-based electoral strategy that worked in 1988.

And the response of the defeated? Many Democratic party leaders still failed to comprehend that for over a decade the Reagan Republican party had been conducting ideological warfare against the Democracy as the party of liberalism; that the Republicans had won this war in election after election; that the Democratic party lacked bold, creative, and innovative ideas, instead beating a timid retreat into calculation, centrism, and consensus. The Democracy would face huge tasks: to democratize and invigorate both its internal organization and the governmental system itself; to approach women, blacks, labor, peace activists, environmentalists and others less as vote pools to be tapped and more as partners in continuing social experimentation and change; to draw to the polls tens of millions who are, demographically, potential voters for the liberal-labor-left—if given inspired leadership of Rooseveltian quality.

Intellectually this would demand of the Democrats a clearheaded array of values and a grasp of priorities and relationships among values, thus filling the “Tocquevillian void” with a structure of well-formed ideas, experiments, and policies. “If you go back and read William James on pragmatism,” scientist Michael Maccoby once remarked to some colleagues, “what he said was that truth would be discovered neither by the tough-minded people who live by numbers nor the tender-minded who live by ideology, but rather by people who make their ideals explicit, are willing to test them out and experiment with them constantly in the real world. That was really the essence of the American experiment.” But what truth had the American experiment established? “Politics in the United States,” wrote historian Alan Brinkley, “always has been afflicted with a certain conceptual barrenness. Efforts to create meaningful ‘values,’ to find a useful ‘moral core’ for our public life have competed constantly and often unsuccessfully against the belief in liberty, the commitment to personal rights unconstrained by any larger conception of a common purpose.”

The volatility of the American character by the 1970s, Maccoby said, “makes the role of leadership absolutely crucial,” measured by both clarity of values and the experimental attitude. Yet transformative political and intellectual leadership had been conspicuously absent in the seventies and eighties, especially on the left. Was there still a place in the American scheme for the leader who could transcend the medley of special interests, carry through great projects, and provide creative and transforming leadership to the nation? Much would depend on the propulsive force by which leadership would be projected into office. Backed by a broad and militant mandate, such leadership would have a chance. Whether such a mandate would develop in the 1990s was not clear. Believers in the pendulum theory of politics now expected a great shift toward the left, but history has been known to play tricks on people with patterns. Leadership was not in automatic supply.

Americans had no need for a hero, a spellbinder, a messiah. They needed committed men and women who could mobilize, and respond to, tens of thousands of rank-and-file leaders who in turn could activate hundreds of thousands of persons in the neighborhoods and precincts, thus creating a movement. This movement would both guide the top leaders and sustain them, just as hundreds, then thousands of black militants had rallied behind King and pressed him toward ever bolder action. Americans needed political leaders who, like Roosevelt on the left and Reagan on the right, could merge movement leadership with party and electoral organization, in order to win and hold governmental power.

Such leadership, such followership, can be founded only on intellectual and moral commitment to values and principles, to ideology in the true sense of the word. In the United States it can be founded only on the values of liberty and equality, of freedom, that Americans have been extolling for two centuries or more. Most conservatives will define freedom as individualism and libertarianism, most liberals and radicals as the Four Freedoms, as sharing and solidarity. That is a rational basis for conflict. During the next political cycle, in the wake of Reagan conservatism, it would enable a leader on the left to have a special rendezvous with destiny—as President to confront the oldest continuing challenge in America: the broadening of real equality of opportunity combined with the expansion of individual liberty.

Bob Dylan had sung in 1963:

The line it is drawn, the curse it is cast

The slow one now will later be fast

As the present now will later be past

The order is rapidly fadin’

And the first one now will later be last

For the times they are a-changin’