The Third Party System marked the first of several distinct eras of competition between Republicans and Democrats. Unlike the Whigs, the Democrats withstood sectional divisions and endured to oppose Republican president Abraham Lincoln’s commitment to military victory in the Civil War as well as Republican efforts to establish political and civil rights for the freed slaves during the postwar Reconstruction. Partisan conflict over racial issues in the 1860s would give Republicans a sizable advantage in northern states, create a solidly Democratic South after the demise of Republican-run Reconstruction governments, and define voter loyalties for another 70 years. The Republicans were the party of activist government in the late nineteenth century, whereas Democrats continued to defend limited government and states’ rights.
In the pivotal election of 1860, Abraham Lincoln prevailed over a field of candidates that included Senator Stephen Douglas of Illinois, the regular Democratic nominee; Vice President John Breckinridge, the candidate of bolting southern Democrats; and former senator John Bell of Tennessee, the candidate of the compromise Constitutional Union Party. Although Lincoln won only 40 percent of the popular vote, he carried every northern state and gained a substantial majority of Electoral College votes. Just six years after its birth, the new Republican Party had won control over the national government. Abraham Lincoln, elected because of the dissolution of the old political order, would have the cheerless task of presiding over the near dissolution of the nation itself. The selection of a Republican president was unacceptable to many Southerners. Even before Lincoln took the oath of office on March 4, 1861, seven southern states had seceded from the Union. On April 12, 1861, the Civil War began with the bombardment of Fort Sumter in South Carolina.
The war dragged on for four long years and transformed the nation. When Lincoln issued the Emancipation Proclamation, which freed all slaves still held by the Confederacy, he committed the federal government for the first time to a decisive stand against slavery. The Lincoln administration also instituted a graduated income tax, established a national banking system, facilitated the settlement of western lands, and began the nation’s first draft of soldiers. Lincoln won reelection in 1864 and, with the South still out of the Union, his Republican Party gained decisive majorities in both houses of Congress and passed the Thirteenth Amendment, which ended slavery in the United States. The election also ensured that the Republicans would dominate the reconstruction of the Union after the guns fell silent. “I earnestly believe that the consequences of this day’s work will be to the lasting advantage, if not the very salvation, of the country,” Lincoln said. But Lincoln’s assassination in April 1865 meant that he would not live to fulfill his prophecy.
At the time of Lincoln’s death, the big questions of Reconstruction were still unresolved. Under what terms would the South be restored to the Union? To what extent would the federal government act to provide civil rights, civil liberties, economic security, and political rights to the newly freed slaves? In the face of opposition from Lincoln’s successor, the former Democrat Andrew Johnson, Congress enacted a fairly ambitious program of Reconstruction. It included civil rights laws; the Fourteenth Amendment, which guaranteed “equal protection of the laws”; and the Fifteenth Amendment, which prohibited the denial of voting rights on grounds of race, color, or previous condition of servitude. Still, in the late 1860s and 1870s, Republicans would be unable to prevent the unraveling of Reconstruction and the “redemption” of southern states by Democratic leaders committed to white-only government and to the exploitation of cheap black labor.
The disputed presidential election of 1876 marked the end of Reconstruction. Although Democratic candidate Samuel J. Tilden, the governor of New York, won the popular vote against Republican governor Rutherford B. Hayes of Ohio, the outcome of the election turned on disputed Electoral College votes in Florida, South Carolina, and Louisiana. With the Constitution silent on the resolution of such disputes, Congress improvised by forming a special electoral commission of eight Republicans and seven Democrats. The commission voted on party lines to award all disputed electoral votes to Hayes, which handed him the presidency. The years from 1876 to 1892 were marked by a sharp regional division of political power growing out of Civil War alignments and a national stalemate between Republicans, who dominated the North, and Democrats, who controlled the South. White Southerners disenfranchised African Americans and established an all-white one-party system in most of their region. The stalled politics of America’s late nineteenth-century Gilded Age resulted in a seesaw series of close elections: with neither party able to gain a firm hold on government or the electorate, the White House would change hands in every contest of the era except 1880, when Republican James A. Garfield won by some 2,000 votes. During this period the difference between Democratic and Republican percentages of the national popular vote for president averaged only about 1 percent, whereas differences in the vote between the North and the South averaged about 25 percent.
This combination of major-party stasis and electoral uncertainty left the nation unable to cope with the deep depression of the mid-1890s, which shattered the second administration of Democratic president Grover Cleveland. In 1884 Cleveland became the first Democrat to win the White House since Buchanan in 1856. Four years later he narrowly prevailed in the popular vote but lost in the Electoral College to Republican Benjamin Harrison. Becoming the only American president to serve two nonconsecutive terms, Cleveland defeated Harrison in 1892. With Cleveland declining to compete for a third term in 1896, the insurgent Democratic candidate William Jennings Bryan was unable to shake off the legacy of Cleveland’s failures. With his candidacy endorsed by the People’s Party, Bryan embraced such reform proposals as the free coinage of silver to inflate the currency, a graduated income tax, arbitration of labor disputes, and stricter regulation of railroads—policies that were at odds with his party’s traditional commitment to hands-off government. Despite Bryan’s defeat by Republican William McKinley, his nomination began to transform the political philosophy of the major parties. By vigorously stumping the nation in 1896, Bryan also helped introduce the modern style of presidential campaigns. In turn, the Republicans, who vastly outspent the Democrats, pioneered modern fund-raising techniques.
The Republican Party dominated American politics from 1896 through 1928. Except for the two administrations of Woodrow Wilson, from 1913 to 1921, the GOP controlled the presidency throughout this period, which was marked by foreign expansionism and the rise and fall of progressive reform. Although William Jennings Bryan emerged as the reformist leader of the Democratic Party, it took the ascendancy of Theodore Roosevelt and a new generation of Republican progressives to add domestic reform to the expansionist policies begun by President McKinley during the Spanish-American War of 1898. Roosevelt became president through four unpredictable turns of fate. First, President William McKinley’s vice president, Garret A. Hobart, died in 1899. Second, Republican boss Thomas C. Platt saw his chance to rid New York of its reformer governor, Theodore Roosevelt, by promoting him for vice president on McKinley’s ticket in 1900. Third, McKinley defeated Bryan in a rematch of 1896, and fourth, Roosevelt became president six months into his vice presidency when McKinley died of a gunshot wound inflicted by anarchist Leon Czolgosz. During two terms in office, Roosevelt put his progressive stamp on the presidency. He sustained McKinley’s expansionist foreign policies and gave concrete expression to his idea that government should operate in the public interest by steering a middle course between unchecked corporate greed and socialistic remedies.
After leaving the presidency in 1909, Roosevelt was so disappointed with his hand-picked successor, William Howard Taft, that he sought a third term as president. In 1912 Roosevelt, Taft, and Senator Robert M. La Follette of Wisconsin fought the first primary-election campaign in American history, battling one another in the dozen states that had recently established party primaries. Although Roosevelt garnered more primary votes than Taft and La Follette combined, most convention delegates were selected by party bosses, who overwhelmingly backed Taft. The disgruntled Roosevelt launched an insurgent campaign behind the new Progressive Party that advocated reforms such as women’s suffrage, tariff reduction, old-age pensions, and laws prohibiting child labor. Roosevelt siphoned off about half of the voters who had backed Taft in 1908. Democratic candidate Woodrow Wilson, the governor of New Jersey, held the Democrats together and won the election with only 42 percent of the popular vote. Roosevelt finished second in the popular vote with 27 percent, compared to 23 percent for Taft. It was the largest vote ever tallied by a third-party candidate.
During his two terms in office, Wilson pioneered the modern liberal tradition within the Democratic Party. Under his watch, the federal government reduced tariffs, adopted the Federal Reserve System, established the Federal Trade Commission to regulate business, and joined much of the Western world in guaranteeing voting rights for women. Wilson also increased America’s involvement abroad and led the nation victoriously through World War I. Wilson had a broad vision of a peaceful postwar world based upon America’s moral and material example. He became the first president of any party to advocate a strong internationalist program that centered on America’s leadership in a League of Nations. He would not realize this vision, although it would be largely achieved under Franklin Roosevelt and Harry Truman after World War II.
The reaction that followed World War I and Wilson’s failed peace plans led to the election of conservative Republican Warren G. Harding in 1920. Republicans won all three presidential elections of the 1920s by landslide margins and maintained control over Congress during the period. Republican presidents and congresses of the 1920s slashed taxes, deregulated industry, restricted immigration, enforced Prohibition, and increased protective tariffs. In 1928, when Commerce Secretary Herbert Hoover decisively defeated Governor Al Smith of New York—the first Catholic presidential candidate on a major party ticket—GOP senator William Borah of Idaho said, “We have an opportunity to put the Republican Party in a position where it can remain in power without much trouble for the next twenty years.” But Democratic weakness concealed the party’s resilient strength. The Democrats’ pluralism, which melded diverse voters from outside America’s elite—whites in the South, working-class Catholics and new immigrants in the North, and reformers in the Mountain States—helped the party weather adversity, evolve with changing circumstance, and survive in contests for congressional and state offices. The Democrats’ 1924 presidential candidate, John W. Davis, said after the election: “I doubt whether a minority party can win as long as the country is in fairly prosperous condition. . . . Some day, I am sure, the tide will turn.”
The tide turned after the crash of 1929 began the nation’s longest and deepest depression and led to a two-tiered realignment of the American party system. First, between 1930 and 1932 the Democrats benefited from a “depression effect” that swelled the ranks of party voters throughout the United States but neither restored the Democrats to majority status nor reshuffled voter coalitions. Second, after Franklin Roosevelt won the presidential election of 1932, the “Roosevelt effect” completed the realignment process. FDR’s liberal New Deal reforms and his inspirational leadership created a positive incentive for loyalty to the Democratic Party.
In 1936, after losing badly in four consecutive presidential and midterm elections, Republicans seemed nearly as obsolete as the Whigs they had displaced in 1854. Since 1928 the party had lost 178 U.S. House seats, 40 Senate seats, and 19 governorships. The GOP retained a meager 89 House members and just 16 senators. As Democrats completed the realignment of party loyalties, they recruited new voters and converted Republicans. From 1928 to 1936, the GOP’s share of the two-party registration fell from 69 percent to 45 percent in five northern states and from 64 to 35 percent in major cities. The durable new Democratic majority—the so-called Roosevelt coalition—consisted of white Protestant Southerners, Catholics and Jews, African Americans, and union members.
Republicans recovered sufficiently in the midterm elections of 1938 to regain a critical mass in Congress and to join with conservative southern Democrats to halt the domestic reform phase of the New Deal. However, Republican hopes to regain the presidency and Congress in 1940 were dashed by the outbreak of war in Europe. A nationwide Gallup Poll found that respondents preferred Roosevelt to any challenger, although most said that they would have backed a Republican candidate in the absence of war abroad. Roosevelt won an unprecedented third term and led the nation into a war that largely ended America’s traditional isolation from foreign entanglements.
In the first postwar election, held in 1946, the GOP’s new slogan, “Had Enough?” evoked scarcity, high prices, and labor strife under Harry Truman, who had become president after Roosevelt’s death in April 1945. In postwar Britain, voters defeated Winston Churchill and his conservative majority in Parliament. Americans, however, could not dispatch the Democrats in a single blow. The midterm elections of 1946 issued no policy mandate to Republicans in Congress, who had to confront a president armed with veto power, the bully pulpit, and the initiative in foreign affairs. After leading America into the cold war against communism, Truman unexpectedly won the election of 1948, and Democrats regained control of Congress.
Four years later, in the midst of disillusionment over a stalled war in Korea and Democratic corruption, the war hero Dwight David Eisenhower became the first Republican president in 20 years. But Eisenhower had no intention of turning back the clock to the 1920s. Instead, he governed as a “modern Republican” who steered a middle course between Democratic liberals and the right wing of the Republican Party. However, modern Republicanism neither stole the thunder of Democrats nor attracted independents to the GOP. Although Eisenhower remained personally popular, Democrats controlled Congress during his last six years in office. When he stepped down in 1960—the Twenty-Second Amendment, ratified in 1951, barred presidents from seeking third terms—a Democrat, John F. Kennedy, became America’s first Catholic president by defeating Eisenhower’s vice president, Richard Nixon.
Kennedy and his successor, Lyndon Johnson, presided over a vast expansion of the liberal state. These Democratic presidents embedded the struggle for minority rights within the liberal agenda and, in another departure from the New Deal, targeted needs—housing, health care, nutrition, and education—rather than groups, such as the elderly or the unemployed. This civil rights agenda would eventually cost the Democratic Party the allegiance of the South, but in the near term the enfranchisement of African Americans under the Voting Rights Act of 1965 offset losses among white Southerners. Still, the Johnson administration unraveled under pressure from a failing war in Vietnam and social unrest at home. In 1968 Richard Nixon became the second Republican to gain the White House since 1932. Although Nixon talked like a conservative, he governed more liberally than Eisenhower. Nixon signed pathbreaking environmental laws, backed affirmative action programs, opened relations with mainland China, and de-escalated the cold war.
The Watergate scandal and Nixon’s resignation in 1974 dashed any hopes that Republicans could recapture Congress or pull even with Democrats in party identification. However, conservative Republicans began rebuilding in adversity. They formed the Heritage Foundation to generate ideas, the Eagle Forum to rally women, new business lobbies, and Christian Right groups to inspire evangelical Protestants. Although Democrat Jimmy Carter won the presidential election of 1976, his administration was unable to protect its constituents from the ravages of “stagflation”—an improbable mix of slow growth, high unemployment, high inflation, and high interest rates. Carter also exhibited weakness in foreign affairs by failing to gain the quick release of American hostages seized by militants in Iran or to halt the resurgent expansionism of the Soviet Union.
After defeating Carter in 1980, Republican Ronald Reagan, the former actor and governor of California, became the first conservative president since the 1920s. The election of 1980 did not match the shattering realignment of 1932. Republicans did not gain durable control of Congress, long-term domination of the presidency, or an edge in the party identification of voters. Nonetheless, the election profoundly changed American politics. It brought Republicans into near parity with Democrats, enabled Reagan to implement his conservative ideas in domestic and foreign policy, and moved the national conversation about politics to the right. In 1980 Republicans gained control of the Senate and held an ideological edge in the House. In his first year as president, Reagan cut taxes, reduced regulation, shifted government spending from domestic programs to the military, and adopted an aggressive approach to fighting communism abroad. Although Reagan won reelection after a troubled economy recovered in 1984, the “Reagan revolution” stalled in his second term. Still, Reagan presided over the beginning of the end of the cold war, a process completed by his vice president, George H. W. Bush, who won the presidency in 1988.
Republican progress stalled when moderate Democrat Bill Clinton of Arkansas defeated Bush in the presidential election of 1992. However, conservatives and Republicans rebounded in 1994, when the GOP regained control of both houses of Congress for the first time in 40 years. Although Republicans failed to enact their most ambitious policy proposals or prevent Clinton’s reelection in 1996, the congressional revolution of 1994, no less than the Reagan revolution of 1980, advanced conservative politics in the United States. The elections gave Republicans unified control of Congress for most of the next dozen years, established Republicans as the dominant party in the South, polarized the parties along ideological lines, and forestalled new liberal initiatives by the Clinton administration.
In the disputed presidential election of 2000, Republican George W. Bush, the governor of Texas, trailed Vice President Al Gore in the popular vote by half a percent. But the Electoral College vote turned on contested votes in Florida. On December 12, the U.S. Supreme Court stopped a recount of Florida’s votes with Bush ahead by 537 votes out of 6 million cast. Bush won a bare majority of 271 Electoral College votes, including all in the South and about one-third elsewhere. He won overwhelming support from white evangelical Protestants and affluent voters.
Bush’s conservative backers in 2000 brushed aside suggestions from media commentators that the president-elect fulfill his promise to be “a uniter, not a divider” and emulate Rutherford B. Hayes, who governed from the center after the disputed election of 1876. Dick Cheney, who was poised to become the most influential vice president in American history, added, “The suggestion that somehow, because this was a close election, we should fundamentally change our beliefs I just think is silly.” Bush advanced the conservative agenda by steering major tax cuts and business subsidies through Congress, advancing a Christian conservative agenda through executive orders, and aggressively opposing foreign enemies abroad.
Although Bush won reelection in 2004, under his watch conservatism, like liberalism in the 1970s, faced internal contradictions. Conservatives had long opposed social engineering by government, but in Iraq and Afghanistan the Bush administration assumed two of the largest and most daunting social engineering projects in U.S. history. Conservatives have also defended limited government, fiscal responsibility, states’ rights, and individual freedom. Yet the size and scope of the federal government and its authority over the states and individuals greatly expanded during the Bush years. These contradictions, along with the Republicans’ loss of Congress in 2006 and Democrat Barack Obama’s victory in the 2008 presidential race, suggested that the conservative era that began in 1980 had come to an end.
See also Democratic Party; Republican Party.
FURTHER READING. Walter Dean Burnham, Critical Elections and the Mainsprings of American Politics, 1970; John Gerring, Party Ideologies in America, 1828–1996, 1998; Allan J. Lichtman, White Protestant Nation: The Rise of the American Conservative Movement, 2008; David R. Mayhew, Electoral Realignments: A Critique of an American Genre, 2004; Albert J. Nelson, Shadow Realignment, Partisan Strength and Competition, 1960 to 2000, 2002; Theodore Rosenof, Realignment: The Theory That Changed the Way We Think about American Politics, 2003; Byron E. Shafer, End of Realignment? Interpreting American Electoral Eras, 1991; Joel Silbey, The American Political Nation, 1838–1893, 1991.
ALLAN J. LICHTMAN
In many ways, the history of the Electoral College reflects the evolution of a persistent problem in American politics, one summarized in political scientist Robert Dahl’s succinct question “Who governs?” The answers Americans have given have changed throughout the nation’s history. The framers of the Constitution believed the college would balance tensions among the various states and protect the authority of the executive from the influence of Congress and the population at large. More recently, however, debates over the college have centered upon whether it performs these functions too well, and, in so doing, hampers democratic values increasingly important to Americans.
Many of the delegates to the Constitutional Convention in 1787 initially were convinced that the president should be chosen by majority vote of Congress or the state legislatures. Both these options steadily lost popularity as it became clear the Convention did not want to make the presidency beholden to the legislature or to the states. However, many delegates also found distasteful the most viable alternative—direct election by the populace—due to fears that the public would not be able to make an intelligent choice and hence would simply splinter among various regional favorite-son candidates.
On August 31, 1787, late in its deliberations, the convention created the Committee on Postponed Matters, or the “Committee of Eleven,” to solve such problems. Chaired by David Brearley of New Jersey and including Virginia’s James Madison, the committee proposed within four days of its organization that electors, equal in number to each state’s congressional delegation and selected in a manner determined by the state legislatures, should choose the president. These electors would each choose two candidates; when Congress tabulated the votes, the candidate with the “greatest Number of Votes” would become president and the runner-up vice president. In the case of a tie, the House of Representatives would choose the president and the Senate the vice president. This plan proved acceptable to the convention because it was a compromise on many of the points that had rendered earlier proposals unworkable—it insulated the president from the various legislatures but preserved the process from undue popular influence. Similarly, in basing its numbers on the bicameral Congress, the college moderated the overwhelming influence of the populous states. Though the phrase “Electoral College” was not included in the Constitution, the plan was encoded in Article II, Section 1.
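The committee’s tally procedure is simple enough to sketch in a few lines of code. The fragment below is a minimal illustration (mine, not the source’s) of the rules as this article describes them: each elector casts two undifferentiated votes, the leading candidate becomes president, and the runner-up becomes vice president. It also shows how straight party-ticket voting could produce the kind of tie that, as discussed below, upended the election of 1800.

```python
# A minimal sketch of the pre-Twelfth Amendment tally described above:
# every elector casts two undifferentiated votes; the leading candidate
# becomes president, the runner-up vice president, and a tie goes to the House.
from collections import Counter

def tally_original_rules(ballots):
    """ballots: one (first_choice, second_choice) pair per elector."""
    votes = Counter()
    for first, second in ballots:
        votes[first] += 1
        votes[second] += 1
    (top, top_n), (runner_up, runner_n) = votes.most_common(2)
    if top_n == runner_n:
        return f"Tie between {top} and {runner_up}: House of Representatives decides"
    return f"President: {top}; Vice President: {runner_up}"

# Party-line voting in 1800 (assuming straight-ticket pairings): 73
# Democratic-Republican electors paired Jefferson with Burr, while the
# Federalists cast 64 Adams-Pinckney ballots and one Adams-Jay ballot.
ballots = ([("Jefferson", "Burr")] * 73
           + [("Adams", "Pinckney")] * 64
           + [("Adams", "Jay")])
print(tally_original_rules(ballots))  # -> a Jefferson-Burr tie
```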
Over the first few presidential elections, states experimented with various means of choosing their electors. In the first presidential election, for example, 11 states participated. Four held popular elections to select electors; in five the legislature made the decision. The remaining two combined these methods; the legislature chose individuals from a field selected by general election.
Despite this carefully constructed compromise, the practicalities of electoral politics gradually overtook the college’s system. Most influential was the surprisingly quick emergence of political parties that coalesced around individual candidates. By 1800, the Democratic-Republican Party and its rival Federalist Party had gained control of many state governments and began to manipulate local methods for selecting electors; the Federalist parties of Massachusetts and New Hampshire, for example, were in command of those states’ legislatures and reserved to those organizations the right to select electors. In the next presidential election, the Massachusetts legislature, doubting its ability to secure the state’s electors for the Federalists, switched to a system in which each congressional district selected one elector, only to revert to legislative control in 1808. Similarly, in Virginia the Democratic-Republican Party shifted the authority to a winner-take-all general election, where favorite-son candidate Thomas Jefferson was sure to gain a majority and sweep the state’s electoral votes. Thus, despite the original expectation that independent electors would gather and deliberate over the most qualified candidate, they were increasingly selected to represent their parties and to cast their votes accordingly.
The 1800 election also revealed perhaps the greatest flaw in the Electoral College as established by the Constitution. The Democratic-Republican electors chosen in 1800 obediently voted for their party’s choice for president, Jefferson, and vice president, New York’s Aaron Burr. However, the convention had not anticipated such party-line voting, and the tabulation of the electors’ votes revealed an inadvertent tie. In accordance with the Constitution, the election was thrown to the House, where Federalist representatives strove to deny their archenemy Jefferson the presidency. It took 36 ballots before the Virginian secured his election. As a result, in 1804 the Twelfth Amendment was added to the Constitution, providing that electors should cast separate ballots for the president and vice president. Despite several recurrences of such crises in the system, only one other constitutional reform of the college has been adopted: in 1961, under pressure from citizens complaining of disenfranchisement, the Twenty-Third Amendment granted the District of Columbia three electoral votes.
As the nineteenth century progressed, such manipulations as occurred in Massachusetts and New Hampshire gradually faded in favor of assigning electors to the winner of the general election. The combination of new styles of mass politics that presidential contenders like Andrew Jackson embodied and the allure that the winner-take-all system held for confident parties meant that by 1836 South Carolina was the only state in the Union that clung to legislative choice against popular election, and even that state capitulated after the Civil War. Despite the occasional crisis in which states have resorted to legislative choice—such as Massachusetts in 1848, when a powerful bid by the Free Soil Party meant that no party gained a majority of the popular vote, or Florida in 2000, when the legislature moved to select a slate of electors in case the heated contest over the disputed popular vote was not resolved—this system has remained in place ever since.
This does not mean, however, that it has always worked perfectly. As concerns over regional balance and the fitness of the electorate have receded, debate has centered on the awkwardness of the combination of popular ballots and state selection. For example, the winner-take-all system ensures that the minority in each state is disenfranchised when the electors cast their votes. Indeed, despite the universal desire to empower the general electorate, it remains quite possible for the president to be chosen by a minority of the popular vote. In the three-way election of 1912, for example, Democrat Woodrow Wilson won more than 80 percent of the electoral vote despite winning only a plurality of the popular vote—barely 41 percent. Similarly, Democrat Bill Clinton was elected in 1992 when his 43 percent of the popular vote—a plurality—translated into nearly 70 percent in the Electoral College. Though neither of these elections was in danger of being thrown to the House of Representatives, a similar three-way election in 1968 raised such fears; indeed, the independent candidate George Wallace hoped to gain enough electoral votes to force such an event and gain concessions from either Republican Richard Nixon or Democrat Hubert Humphrey. Nixon, however, gained a close majority in the Electoral College.
Despite earning the appellation “minority president” from their weak showing in the popular election, Nixon, Wilson, and Clinton did at least receive pluralities. Several other times, the uneven correlation between the popular vote and the Electoral College resulted in the loser of the former attaining the presidency. In 1888, Republican Benjamin Harrison defeated the incumbent Democrat Grover Cleveland in the Electoral College despite losing the popular vote; Cleveland’s graciousness, however, assured a smooth transition of power. The other such elections—1824, 1876, and 2000—were met with discontent and protest from the losing party. Indeed, though correct constitutional procedure was followed in each case, all three elections were tainted with accusations of corruption and manipulation, allegations exacerbated and legitimated by each eventual president’s failure to win the majority of the popular vote.
In 1824 the presidential election was a contest among several candidates from the splintering Democratic-Republican Party, and a situation the Convention had hoped to avert occurred: the nation split along regional lines. Andrew Jackson gained a plurality of the popular and electoral vote, primarily in the South and middle Atlantic. Trailing in both totals was John Quincy Adams, whose base was in New England. The other candidates, Henry Clay and William Crawford, won only three and two states, respectively (though both also won individual electoral votes from states that divided their totals). Despite his plurality, Jackson was unable to gain a majority of the electoral vote, and the election was again, as in 1800, thrown to the House of Representatives. There, Clay threw his support to Adams, who was selected. Despite the fact that correct procedure was followed, Jackson denounced Adams and Clay for thwarting the will of the people and subsequently swept Adams out of office in 1828.
In 1876 Democrat Samuel Tilden led Republican Rutherford B. Hayes by more than a quarter million popular votes. However, the results in four states were disputed: Oregon and the southern states of Florida, South Carolina, and Louisiana, the latter three of which were expected to go easily for Tilden. Without the electoral votes of these states, Tilden found himself one vote short of a majority. All four states sent competing slates of electors to the session of Congress that tabulated the votes. In 1865 Congress had adopted the Twenty-Second Joint Rule, which provided that contested electoral votes could be approved by concurrent votes of the House and Senate. However, the rule lapsed in January 1876, leaving Congress with no means to resolve the dispute. In January 1877, therefore, Congress passed the Electoral Commission Law, which established—for only the particular case of the 1876 election—a 15-member commission, consisting of 5 members of the House, 5 of the Senate, and 5 justices of the Supreme Court, which would rule on the 20 disputed electoral votes. Seven seats were held by members of each party; the remaining seat was expected to go to David Davis, an independent justice of the Supreme Court. However, Davis left the commission to take a Senate seat, and his replacement was the Republican justice Joseph Bradley. Unsurprisingly, the commission awarded each disputed vote to Hayes, 8 to 7. Hayes thus edged Tilden in the college, 185 to 184. Though Democrats threatened to filibuster the joint session of Congress called to certify the new electoral vote, they agreed to let the session continue when Hayes agreed to end Reconstruction and withdraw federal troops from the South. The Hayes-Tilden crisis resulted in the 1887 Electoral Count Act, which gave each state authority to determine the legality of its electoral vote but also provided that a concurrent majority of both houses of Congress could reject disputed votes.
The act was invoked to resolve such a dispute in 1969 and again in the first two presidential elections of the twenty-first century. The 2000 election mirrored the Hayes-Tilden crisis; as in 1876, the Democratic candidate, Al Gore, held a clear edge in the popular vote, leading Republican George W. Bush by half a million votes. However, the balance in the Electoral College was close enough that the 25 votes of Florida would decide the election. Initial returns in that state favored Bush by the slimmest of margins but recounts narrowed the gap to within a thousand. Finally, however, the Supreme Court ruled in Bush’s favor and halted the recounts; the Republican was awarded a 537-vote victory in the state and consequently a majority in the Electoral College. Democrats in the House of Representatives attempted to invoke the 1887 law to disqualify Florida’s slate of electors but failed to gain the necessary support in the Senate to put the matter to a vote. Bush’s successful 2004 reelection campaign against Democrat John Kerry also sparked discontent, and concerns about the balloting in Ohio prompted House Democrats to again invoke the law. This time, though, they were able to gain enough Senate support to force a concurrent vote; it affirmed Ohio’s Republican slate of electors by a large margin.
These controversies have highlighted growing discontent with the intent and function of the Electoral College, and the reasoning behind the Constitutional Convention’s adoption of the institution has been increasingly marshaled against it. While the founders hoped that electors would select the president based on reasoned discussion, 24 states now have laws to punish “faithless electors” who defy the results of their states’ popular election and vote for another candidate, as has occurred eight times since World War II. While the founders hoped the Electoral College would create a presidency relatively independent of public opinion, it has come under fire since Andrew Jackson’s time for doing exactly that.
Multiple measures have been proposed to more closely align the Electoral College with the popular vote. One of the more commonly mentioned solutions is proportional representation; that is, rather than the winner of the presidential election in each state taking all that state’s electoral votes, the state would distribute those votes in proportion to the election results. Such a reform would almost certainly enhance the chances of third parties to gain electoral votes. However, since the Constitution requires a majority of the Electoral College for victory, this solution would most likely throw many more presidential elections to the House of Representatives. For instance, under this system the elections of 1912, 1968, and 1992 would all have been decided by the House. Thus, proportional representation would undo two of the Framers’ wishes, tying the presidency not only closer to the general public but perhaps unintentionally to Congress as well. The Colorado electorate rejected a state constitutional amendment for proportional representation in 2004.
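To make the arithmetic of this proposal concrete, the sketch below (my own illustration, not drawn from the source) allocates a hypothetical state’s electors in proportion to its popular vote, using a simple largest-remainder rounding rule. A 45-40-15 three-way split of a ten-vote state becomes 5-4-1 rather than the winner-take-all 10-0-0, which is why a national Electoral College majority becomes harder for any candidate to assemble.

```python
# A hypothetical proportional allocation of one state's electoral votes,
# rounding fractional shares to whole electors by the largest-remainder method.
def proportional_electors(popular_votes, electors):
    """popular_votes: dict mapping candidate -> popular vote total in the state."""
    total = sum(popular_votes.values())
    quotas = {c: v * electors / total for c, v in popular_votes.items()}
    allocation = {c: int(q) for c, q in quotas.items()}
    # Hand any unassigned electors to the candidates with the largest remainders.
    leftover = electors - sum(allocation.values())
    by_remainder = sorted(quotas, key=lambda c: quotas[c] - allocation[c], reverse=True)
    for c in by_remainder[:leftover]:
        allocation[c] += 1
    return allocation

# A hypothetical three-way race in a state with 10 electoral votes:
# winner-take-all would award candidate A all 10; proportional allocation splits them.
print(proportional_electors({"A": 450_000, "B": 400_000, "C": 150_000}, 10))
# -> {'A': 5, 'B': 4, 'C': 1}
```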
A similar policy is often referred to as the “Maine-Nebraska rule,” after the two states that have adopted it: Maine in 1972 and Nebraska in 1996. It is reminiscent of the district policy that states such as Virginia and Massachusetts implemented in the early years of the republic. Maine and Nebraska allot one electoral vote to the winner of each congressional district, and assign the final two (corresponding to each state’s two senators) to the overall winner of the state’s popular vote. While this technique seems to limit the potential chaos of the proportional method, it does not actually solve the problem: if every state in the Union adopted the Maine-Nebraska rule, it would still be possible for a presidential candidate to lose the election despite winning the popular vote.
A third state-based reform of the Electoral College system gained significant support in April 2007, when the Maryland legislature passed a law calling on the rest of the states to agree to assign their electors to whichever presidential candidate wins the national popular vote. This would effectively circumvent the Electoral College, while retaining the electors and Congress’s tabulation of the vote as a symbolic, constitutional formality.
Finally, many commentators have called for a constitutional amendment simply eliminating the Electoral College entirely, arguing that, in addition to the possibility of presidential victors who have lost the popular vote, the electoral system artificially inflates the value of votes in small states (due to the constitutionally mandated minimum of three votes to every state), discourages minority parties, and encourages candidates to ignore states they believe they cannot win. However, the college is not universally unpopular; its supporters counter that the system maintains political stability and forces candidates to expend effort on states with small populations that they might otherwise bypass. Additionally, supporters of the Electoral College maintain that it is an important connection to the federal system envisioned by the framers of the Constitution.
Some observers have noted that disputes over the college tend to follow fault lines already existing in American politics. Gore’s loss in the 2000 election inspired many Democrats to view the college critically; meanwhile, rural states with small populations, which oppose losing the influence the Electoral College gives them, tend to support Republican candidates, while heavily urban states with more concentrated populations tend to vote Democratic. Thus, the regional differences the Convention hoped to moderate through the Electoral College have been effectively translated into partisan differences that the college exacerbates. However, the constitutional barriers to removing the college likely ensure it will remain on the American political landscape for the foreseeable future.
See also elections and electoral eras; voting.
FURTHER READING. Richard McCormick, The Presidential Game: The Origins of American Presidential Politics, 1982; Arthur Schlesinger, Jr., The History of American Presidential Elections, 1789–1968, 1971; Paul D. Schumaker and Burdett A. Loomis, eds., Choosing a President: The Electoral College and Beyond, 2002.
MATTHEW BOWMAN
For most of its history, the United States has depended on its own abundant supply of energy resources. If there is a common political theme in the history of American energy and politics, it is the desire to maximize domestic production and to stabilize competition between private firms. Although these imperatives dominated the energy policies of the nineteenth and twentieth centuries, events in the 1970s caused a dramatic reversal of direction. This article traces the development of public policies in the coal and oil industries, since these commodities supplied the bulk of American energy throughout the nation’s history and were the primary target of state and federal policy makers. This necessarily excludes important developments in the political history of utilities, electrification, and the development of nuclear power, but it allows for a long-range perspective on American energy policy.
At the time of the American Revolution, the predominant forms of energy came from human or animal power and firewood. Coal was in limited use, but American policy makers viewed the domestic coal trade as vital to the young nation’s future. To encourage the growth of a coal industry the federal government moved to protect American colliers from foreign competition. The original tariff on coal imports in 1789 was 2 cents per bushel. It increased gradually over the years, until in 1812 it reached 10 cents a bushel, or about 15 percent of the wholesale price of British coal. After the War of 1812, tariff rates on coal dropped in 1816 to 5 cents a bushel, which ranged between 10 and 25 percent of the price of foreign coal in New York, and remained at about that range until 1842. British imports bounced back after 1815, but they never again exceeded more than 10 percent of American production. Tariff levels did not completely push British coal out of American markets, but they did severely restrict its ability to compete with the domestic product. By the postbellum decades, the federal government had eliminated the tariff on anthracite, and levels on bituminous coal bottomed out at 40 cents a ton in 1895. Domestic production dominated American coal markets, and the United States became a net exporter of coal in the 1870s.
Under the umbrella of federal protection, state governments encouraged the rapid development of coal mining in the antebellum period. First, in the 1830s, Pennsylvania’s legislature exempted anthracite coal from taxation and promoted its use in the iron industry through a liberal corporate chartering law. As anthracite use increased, Pennsylvania officials refused to grant exclusive carrying or vending rights to any company engaged in transporting anthracite to urban markets. As a result, a diverse group of canals and railroads served Pennsylvania’s relatively compact anthracite fields. Competition between the Schuylkill Navigation Company’s all-water route and the Lehigh Coal and Navigation Company’s rail and water connections to Philadelphia, for example, ensured that anthracite prices remained low. To expedite the exploitation of new coalfields, many states commissioned geological surveys. North Carolina employed a state geologist to catalog the state’s mineral resources in 1823; by 1837, 14 states had followed suit. The annual reports and final compilation of the state geological surveys served to underwrite the cost of finding viable coal seams and marking valuable mineral deposits for entrepreneurs. In some cases, such as Pennsylvania and Illinois, the state geologist specifically targeted coalfields as the primary emphasis of the survey. In others, a more general assessment of mineral resources occurred. Although state geologists would find, label, and survey the coalfields, it was up to private firms to mine, carry, and sell the coal to domestic and industrial consumers. Pennsylvania’s leadership in this field was apparent, as that state boosted coal production levels to nearly 6.5 million tons, or about three-fourths of U.S. production, by 1850.
The period following the Civil War saw a heightened role for railroads in American energy policy. Railroad companies appealed to state legislatures for the right to buy or lease coal lands—a combination of mining and carrying privileges that many antebellum policy makers were unwilling to tolerate. In Pennsylvania an 1869 law authorized railroad and canal companies to purchase the stocks and bonds of mining firms. By the 1870s the Philadelphia and Reading Railroad embarked on an ambitious plan to purchase enough anthracite coal lands to set prices. The Philadelphia and Reading failed in its attempt to monopolize anthracite, but it and other regional railroads became increasingly powerful in the late nineteenth century. The nation’s bituminous fields were too large for any significant concentration of power, but in the more compact anthracite fields, a distinct combination of large mining and carrying companies formed during the 1880s to keep prices high even as they forced small-scale colliers to sell their coal at rock-bottom prices. State-level attempts to impose rates on railroads, as well as early attempts by federal authorities to regulate the coal trade under the auspices of the Interstate Commerce Commission (1887) or the Sherman Anti-Trust Act (1890), fell flat, as policy makers at both state and federal levels remained focused on maintaining high production levels, keeping prices low, and bringing new coalfields into production. For example, Congress created the United States Geological Survey in 1879, an agency charged with the topographic and geological mapping of the entire nation. This institution combined several surveys that had been created for military and scientific purposes in order to catalog valuable mineral resources of the nation just as the antebellum surveys did for their respective states.
The relations between labor and capital in energy production also became a concern for policy makers by the late nineteenth century. Prior to the Civil War, coal mining was done on a relatively small scale. Individual proprietorships could survive with a few skilled miners who exerted total control over the hiring of laborers, the construction of the shafts and tunnels, and the cutting and hauling of the coal from the mine. Experienced miners thus acted as independent contractors throughout most of the nineteenth century. The corporate reorganization of coalfields created new pressures on firms; now they needed to increase production and cut costs at every turn. Many mine operators sought to use the autonomy of miners for their own benefit by pressing tonnage rates down, docking miners for sending up coal with too many “impurities,” and paying miners in scrip rather than cash. Although the transformation of work at the coal seam itself with the introduction of machine cutters would not occur until the 1890s, labor relations aboveground changed rapidly during and after the Civil War. Small-scale unions formed in individual coalfields and struck, with varying effectiveness, for higher wages. Since the largest variable cost in coal mining was labor, colliers insisted on the ability to control wages and fought to keep unions from organizing in American coalfields. In 1890, however, a national trade union, the United Mine Workers of America (UMWA), formed in Columbus, Ohio. For the next half century, the UMWA struggled to win collective bargaining rights in the nation’s geographically diverse and decentralized coal trade.
Federal authorities were drawn into the regulation of the nation’s coal trade during the early twentieth century. In the anthracite fields of Pennsylvania, for example, labor disputes and an attempt by a handful of railroad operators to manipulate prices provoked federal action. When miners in the anthracite fields sought the aid of the UMWA to secure an eight-hour day, decent wages, and safe working conditions, managers of coal companies responded with intimidation, lockouts, and violence. The long and crippling strike of 1902, which threatened energy supplies across the eastern seaboard by shutting down anthracite production, drew President Theodore Roosevelt into the fray. By declaring that he would negotiate a “square deal” between labor and management, Roosevelt set a precedent for federal intervention. A square deal, however, did not create a federal mandate for collective bargaining rights for coal miners; nevertheless, it did offer some modicum of governmental oversight. In 1908 the Justice Department, under the authority of the Interstate Commerce Commission (ICC), filed a major lawsuit against anthracite railroads accused of manipulating prices and intimidating independent colliers. Results were mixed; the Supreme Court upheld the “commodities clause” of the ICC, which banned the direct ownership of coal mines by railroads, but informal relationships between large coal companies and railroads continued to dominate the region. Finally, the U.S. Bureau of Mines, created in 1910, enforced safety regulations on reluctant coal operators. Mine safety remained a concern for the industry, but miners in this period did at least see a modest increase in protection from hazardous working conditions. In all these cases, federal intervention in the nation’s coal trade preserved the nineteenth-century focus on high levels of production, even as new legislation and policy decisions produced minor victories for small colliers and mine workers.
Throughout most of the nineteenth century, the U.S. government’s energy policy consisted of tariff protection and the promotion of new coalfields, either actively by creating agencies like the U.S. Geological Survey or passively by refusing to regulate the coal trade. At the advent of the twentieth century, however, federal policy makers found themselves drawn into major conflicts over competition and labor, as well as the rise of oil and natural gas as new sources of energy. The era between 1880 and 1920 truly was the reign of “King Coal”: production soared from 80 million tons to 659 million tons, more than an eightfold increase, and coal accounted for 70 percent of the nation’s energy consumption. Crude oil production increased seventeenfold, from 26 million barrels in 1880 to 443 million in 1920; yet American refineries still focused on illumination and lubrication products, rather than fuel, throughout the late nineteenth and early twentieth centuries. The discovery of massive reserves in the Spindletop oil field in southeastern Texas, the Mid-Continent Field of Kansas and Oklahoma, and southern California during the decade before World War I all pointed toward American oil’s bright future as a source of energy.
Natural gas also became a significant energy source in the early twentieth century. Producers pumped 812 billion cubic feet by 1920, an increase from the 128 billion cubic feet they secured in 1900. As nationwide reserves came into production, interstate railroads and pipelines made state-level oversight difficult, and the bulk of energy policy shifted to the federal level throughout the twentieth century. The rise of oil and gas, moreover, created more challenges for policy makers and forced them to regulate production, competition, and price setting in unprecedented ways.
The specific crises brought by World War I created a new kind of regulatory regime and signaled the growing importance of oil in the nation’s energy economy. In response to the anticipated coal shortage triggered by the American declaration of war in 1917, the Federal Trade Commission explored the possibility of federal controls over prices and distribution. Although initial attempts to coordinate the nation’s coal trade failed when the Council for National Defense’s Committee on Coal Production folded in the summer of 1917, Congress granted President Woodrow Wilson broad authority over energy production and consumption, including the right to set the price of coal. This resulted in the creation of the United States Fuel Administration (USFA), a wartime agency designed to coordinate coal, petroleum, and railroad operations in both military and civilian sectors of the economy. Its director, Harry Garfield, was unfamiliar with the vagaries of the well-organized anthracite and vast decentralized bituminous coal industries; attempts to fix prices failed, and a coal shortage crippled the American economy in the winter of 1917–18. To boost production levels, the USFA encouraged the opening of new mines and restricted coal consumption among non-war-related industries and households. Wartime demand for petroleum boosted production but also created headaches for the USFA’s Oil Division. Petroleum shortages during the war were less debilitating, however. The price stability caused by the artificially high demand provoked much-needed conservation and storage reforms among private petroleum producers and put new oil fields into production. Some USFA officials advocated a continuation of the command-and-control approach to energy policy after the war ended in November 1918. Pre-war energy markets had faced debilitating gluts and shortages, coupled with unstable prices. Since energy reserves in the United States were still abundant, they argued, public oversight might help stabilize both production and consumption of vital commodities like coal, oil, and natural gas.
Despite calls for a continued presence of federal authority, Congress cut appropriations for the USFA at the war’s end. By 1919 the coal trade was experiencing serious problems. Wartime demand had boosted the number of American coal mines from 6,939 in 1917 to 8,994 in 1919. When new orders for coal waned, mine operators reduced wages and laid off miners at the same time that millions of American soldiers returned in search of work. A series of strikes rocked the bituminous and anthracite industries in the 1920s as the UMWA continued its efforts to organize the nation’s miners, now 615,000 strong. By 1932 the number of miners had dropped to fewer than 400,000 and the American coal industry was in disarray. The passage of the National Industrial Recovery Act (NIRA) the following year set up price codes for coal and helped stanch the bleeding. The UMWA also benefited from Section 7(a) of the NIRA, which provided for collective bargaining. By 1935 federal support for unionization swelled the ranks of the UMWA to more than half a million. But when the Supreme Court declared the NIRA unconstitutional in that same year, the short-lived stability faded. Federal support for collective bargaining continued, but the American coal trade returned to its old familiar pattern of decentralized, uncoordinated production, which kept prices relatively low for consumers but profit margins razor-thin for mine operators. Labor relations became even more heated, as the UMWA, under the leadership of its forceful president John L. Lewis, challenged attempts to cut wage rates at every turn.
Oil and gas producers also suffered during the Great Depression, even as production levels reached all-time highs of nearly a billion barrels of crude petroleum in 1935. In Texas, state regulation, under the aegis of the Texas Railroad Commission, helped maintain some price stability by limiting production. Oil producers also had price and production codes under the NIRA, but without the labor conflicts of their counterparts in coal, no controversy over collective bargaining hit the industry. With the termination of the NIRA, no broad regulatory agency appeared in the petroleum industry. Instead, a consortium of six major oil-producing states formed to regulate production and stabilize the industry. The 1935 Interstate Compact to Conserve Oil and Gas joined Colorado, Kansas, Illinois, New Mexico, Oklahoma, and Texas together to replicate the state-level programs of the Texas Railroad Commission in a national setting. Natural gas came under federal control with the passage of the Natural Gas Act of 1938, which regulated interstate trade in gas, including the nation’s growing network of pipelines for shipping natural gas. In oil and gas the control of interstate traffic remained an essential, and ultimately effective, way to ensure price stability without radically expanding government oversight.
By the advent of World War II, coal had declined to about 50 percent of the nation’s energy consumption, even though reserves of coal showed little sign of depletion. When the UMWA’s 400,000 coal miners struck in 1943, federal officials seized some mines and reopened them under government control. Coal remained important, but the increasing popularity of automobiles, oil for heating, and electric motors in industry made petroleum the dominant element of the American fuel economy. The Petroleum Administration for War, under the leadership of Secretary of Interior Harold Ickes, coordinated the flow of oil for civilian and military uses. Some rationing occurred to avoid costly shortages, but most significant for long-term growth were the thousands of miles of new pipelines to connect the oil fields of the Southwest to the rest of the nation. Following the war, the Oil and Gas Division of the Department of the Interior was established to stabilize the oil industry through price data sharing, pipeline policies, and consulting trade organizations such as the National Petroleum Council. The World War II years also introduced nuclear power into the nation’s energy future, although the use of nuclear power was not widespread immediately after the war.
As the consumption of energy skyrocketed in the post–World War II decades, oil continued to grow in importance. Most significant for energy policy, the United States became a net importer of oil in 1948. As the postwar economy boomed, American producers attempted to stem the flow of foreign oil by persuading President Dwight D. Eisenhower to impose mandatory oil import quotas in 1959. A year later, five major oil-producing nations (Saudi Arabia, Iraq, Iran, Kuwait, and Venezuela) formed a cartel, the Organization of Petroleum Exporting Countries (OPEC). OPEC had little impact on American energy markets, and the extent to which the United States depended upon foreign oil remained untested, until the Arab-Israeli conflict of 1973 triggered an embargo by Arab nations on oil exports to the United States. By early 1974 the price of oil had nearly quadrupled, producing the first “oil shock” in the American economy. The embargo ended that same year, but throughout the 1970s American dependence on Middle Eastern oil became a political issue, coming to a head with the overthrow of the pro-American regime in Iran in 1979. From that point on, energy policies became intertwined with foreign policy, particularly in the Middle East.
The 1970s brought a major reversal in American energy policy, the repercussions of which are still being felt today. Nuclear power, once considered a major source of energy for the future, became politically toxic after the 1979 partial meltdown of a reactor core at the Three Mile Island facility in Pennsylvania. The oil shocks of that decade, moreover, suggested that traditional policies directed at maximizing domestic production of energy sources were no longer adequate for the United States. In response to this new challenge, Congress passed the National Energy Act in 1978, which aimed to reduce gasoline consumption by 10 percent, cut imports to make up only one-eighth of American consumption, and increase the use of domestic coal—about one-third of American energy consumption in the early 1970s—to take advantage of abundant reserves. President Jimmy Carter also created the Department of Energy (DOE) in 1977 and promoted the further exploration of alternative energy sources such as solar, wind, and wave power. Although the DOE’s emphasis on traditional versus alternative energy sources has seemed to wax and wane with changes in presidential administrations, alternatives to the well-established fossil fuels have demanded the attention of policy makers in Washington. In this regard, energy policy in the twenty-first century will take a different course from that of the first two centuries of the nation’s history.
See also business and politics; environmental issues and politics; transportation and politics.
FURTHER READING. Sean Patrick Adams, Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America, 2004; William Childs, The Texas Railroad Commission: Understanding Regulation in America to the Mid-Twentieth Century, 2005; John Clark, Energy and the Federal Government: Fossil Fuel Policies, 1900–1946, 1987; Martin Melosi, Coping with Abundance: Energy and Environment in Industrial America, 1985; David Nye, Consuming Power: A Social History of American Energies, 1997; David Painter, Oil and the American Century: The Political Economy of US Foreign Oil Policy, 1941–1954, 1986; Joseph A. Pratt, “The Petroleum Industry in Transition: Anti-Trust and the Decline of Monopoly Control in Oil,” Journal of Economic History 40 (December 1980), 815–37; Richard Vietor, Energy Policy in America since 1945, 1984.
SEAN ADAMS
Today, the phrase “environmental issues and politics” invariably refers to debates about problems such as pollution, species extinction, and global warming. But the United States had environmental policies long before the rise of the modern environmental movement. Indeed, the modern movement is partly a rejection of earlier American ideas about government and nature.
From the founding of the nation, government at all levels encouraged the development of land. To promote the real-estate market, New York City created a street grid in 1807. States built canals—“artificial rivers”—to facilitate commerce. Beginning in 1862, the federal government gave land to settlers willing to improve the landscape by establishing farms. In the 1930s, the federal government sought to promote economic development in the South and the West by constructing vast systems of dams. In many ways, state and federal policy has encouraged exploitation of natural resources, from timber to oil. The federal government also has helped individuals and businesses to conquer nature: federal agencies have predicted the weather, controlled wildfires, protected cattle and sheep from predators, and kept floodwaters at bay.
The first challenges to the nation’s pro-development spirit came in the decades before the Civil War. A small group of artists and writers began to celebrate the undeveloped countryside as a romantic escape from civilization and a sublime source of national pride. In some states, farmers began to complain about dwindling stocks of fish. Though a few of the fish defenders were concerned about pollution, most argued that the principal threats to fisheries were dams built to power mills. Residents of a few cities also took legal action to rid their neighborhoods of manufacturing smoke and stenches.
The antebellum questioning of development was limited to a tiny minority. In the period from 1865 to 1915, however, many more Americans sought government action to address what we now call environmental problems. They organized to stop pollution, conserve natural resources, and preserve wild places and wild creatures. Many urban Americans also sought to renew their relationship with nature.
The activism of those formative years was a response to the profound environmental changes brought by unprecedented urbanization, industrialization, and immigration. Hundreds of towns became congested, polluted industrial cities. The vast forests of the Great Lakes region were cut down. Millions of acres of grassland were transformed into farms and ranches. Many creatures that once were important parts of the landscape were driven to the edge of extinction or beyond.
To many Americans, the industrial city seemed to be a great experiment, a new form of civilization that promised much but that might prove unsustainable. Many of the doubts were environmental. The urban environment was far less healthy than the rural or small-town landscape. Would cities ever become places where births exceeded deaths? Many observers also feared the moral and social effects of separating so many millions from contact with nature.
Without remedial action, the rapid transformation of the countryside threatened to bring harm as well as good. Would the United States continue to have the resources necessary to grow richer and more powerful and to take its ordained place on the world stage? The symbolic closing of the frontier in 1890 led many Americans to conclude that the nation no longer could take superabundance for granted.
Municipal governments felt the greatest pressure to assume new responsibilities. In 1865 most cities had no sanitary infrastructure. The explosive concentration of people and industrial activity threatened to turn urban areas into environmental hellholes. The leaders of many cities responded by greatly expanding the power of municipal government. They created boards of health. They also built sewer systems, took responsibility for collecting garbage and cleaning streets, established parks, protected sources of drinking water, and regulated “the smoke nuisance.”
The urban environmental reforms of the Gilded Age and Progressive Era had mixed consequences. The new sanitary infrastructure greatly reduced mortality from epidemic diseases, especially cholera and typhoid. In most cities, the parks became valuable oases. But the antismoke regulations did little to improve air quality. Though most cities were able to improve their drinking water, many forms of water pollution continued unabated, and some grew worse: most cities dumped untreated sewage into nearby rivers and harbors.
At the federal level, the concern about the nation’s environmental future led to a dramatic change in land policy. After decades of trying to privatize the public domain, the government decided that millions of acres never would be sold or given away. Those lands instead were to be national forests, parks, and wildlife refuges. To manage the forest reserves, the government established a new kind of bureaucracy, run by scientifically trained experts. The parks initially were the responsibility of the U.S. Army, but the government established the National Park Service in 1916, and the agency soon became a powerful promoter of outdoor recreation.
State governments also responded to new environmental demands. Many established boards of public health, fish-and-game departments, and forest commissions. At a time when the federal government had limited capacity, states also took the initiative in studying environmental problems. Massachusetts undertook the first systematic surveys of river pollution in the 1870s. In the 1910s, Illinois pioneered the study of environmental hazards in the workplace, and these investigations led to a deeper understanding of the health effects of air pollution.
Many of the new laws and agencies met resistance. In debates about pollution, business leaders often argued that environmental degradation was the price of progress. Immigrants sometimes resisted sanitary and conservation regulations. In national parks and forests, officials were challenged by people who no longer could use those areas for subsistence.
The support for environmental initiatives in the Gilded Age and Progressive Era came largely from the well-to-do. Many professional men supported conservation, preservation, and antipollution efforts. To progressives, social and environmental reform went hand-in-hand. The reform cause always had some backing from the business community. In sheer numbers, however, the greatest support came from middle- and upper-class women. Because so many environmental issues involved the traditionally feminine concerns of beauty, health, and the well-being of future generations, women often argued that they were especially equipped to address environmental problems. That argument became a justification for suffrage as well as a rationale for professional careers.
Until recently, scholars paid little attention to environmental issues and politics from the end of the Progressive Era until the first stirrings of the modern environmental movement in the 1950s. But a number of new works make clear that the age of Franklin D. Roosevelt deserves more attention from environmental historians. Though grassroots activism was relatively limited, government and university research on environmental problems in the interwar period provided a foundation for future reform efforts. Federal environmental policy also became much more ambitious in the New Deal years.
Among the many agencies established by New Dealers, several had conservation missions, including the Soil Conservation Service, the Tennessee Valley Authority, and the Civilian Conservation Corps. The new agencies were partly a response to natural disasters. In the 1930s, dust storms devastated the Great Plains, while floods wreaked havoc on much of the eastern third of the continent. In addition to providing relief, the government undertook to prevent a recurrence of such disasters. That preventive effort depended on a new recognition that dust storms and floods were not entirely acts of nature: in both cases, federal policy makers concluded, human action had turned climatic extremes into economic and social tragedies.
The new conservation agencies were not conceived together, yet all became part of the New Deal attempt to end the Depression. Like the conservationists of the Progressive Era, the New Dealers believed that conservation would ensure future prosperity. But their joining of environmental and economic goals was more explicit. The Tennessee Valley Authority (TVA) was a development agency for the nation’s most destitute region. By controlling the South’s rampaging rivers, the TVA would stimulate industry and improve rural life. The Civilian Conservation Corps (CCC) put 3 million men to work on conservation and economic development projects: The corps reclaimed denuded landscapes by planting trees, worked with farmers to protect soil from erosion, and built outdoor-recreation facilities, including parks, trails, and roads, to attract visitors.
In contrast to the Progressive Era, when the federal government sought to influence private decision making by demonstrating “wise use” of resources in public forest reserves, many of the New Deal conservation initiatives sought to have a direct impact on the management of privately owned land. The Soil Conservation Service encouraged the formation of thousands of county conservation districts and provided financial and technical assistance to millions of farmers. For the first time, the New Dealers sought a role for the government in land-use planning. The Taylor Grazing Act of 1934 greatly strengthened the ability of federal officials to control the way ranchers used millions of acres of the public domain. Though a New Deal effort to plan development on the Great Plains failed, the TVA had a far-reaching impact on the South.
New Dealers also spread the conservation gospel more widely than any previous administration had. Government photographs of the dust bowl and the rural South became iconic images. Two government-sponsored films—The Plow That Broke the Plains and The River—publicized the New Deal argument about the social causes of the period’s great natural disasters. The 3 million men who joined the CCC were instructed in conservation principles. Almost all the enrollees came from cities, and the government hoped that CCC work would strengthen their bodies and persuade them that contact with nature had many benefits. Historians now credit the CCC with broadening the constituency for environmental protection.
In other ways, however, New Deal policy left a mixed legacy. Though officials hoped to revitalize rural America, many New Deal measures ultimately encouraged large enterprises rather than small farms. The soil conservation effort checked some of the worst agricultural practices, but few farmers truly accepted a new land ethic. In the decades after World War II, environmentalists often argued that New Deal conservation put economic development ahead of ecological balance.
The modern environmental movement became a major political and social force in the 1960s. The great symbol of the movement’s emergence was the inaugural Earth Day in 1970, when approximately 20 million Americans gathered in thousands of communities to seek action in addressing “the environmental crisis.” It was the biggest demonstration in U.S. history.
Three broad developments explain the rise of environmentalism after World War II. First, the unprecedented affluence of the postwar years encouraged millions of Americans to reject the old argument that pollution was the price of economic progress. Instead, they argued that the citizens of a rich nation should be able to enjoy a healthy and beautiful environment. Second, the development of atomic energy, the chemical revolution in agriculture, the proliferation of synthetic materials, and the increased scale of power-generation and resource-extraction technology created new environmental hazards. From atomic fallout to suburban sprawl, new threats provoked grassroots and expert protest. Third, the insights of ecology gave countless citizens a new appreciation of the risks of transforming nature. Rachel Carson’s 1962 best-seller Silent Spring—a powerful critique of chemical pesticides—was especially important in popularizing ecological ideas.
Even before the first Earth Day, government at all levels had begun to respond to new environmental demands. In 1964, for example, the federal government created a system of “wilderness” areas. But the explosion came in the 1970s—the environmental decade. A series of landmark federal laws addressed such critical environmental problems as air and water pollution, endangered species, and toxic waste. The federal government and many states established environmental-protection agencies. A “quiet revolution” gave state and local officials unprecedented power to regulate the use of privately owned land. In many communities, Earth Day led to the creation of ecology centers, some short-lived and some enduring. The early 1970s brought new national environmental organizations with different goals than the conservation and preservation groups established in the late nineteenth century. Colleges and universities established environmental studies programs. The 1970s also saw the first attempts to create environmentally friendly ways of organizing daily life, from recycling to efforts to grow organic food.
The sources of support for the new movement were varied. Many Democrats concluded that a liberal agenda for affluent times needed to include environmental protection. Middle-class women often saw environmental problems as threats to home and family. Young critics of the nation’s institutions were especially important in the mobilization for Earth Day. To varying degrees, old resource-conservation and wilderness-preservation organizations took up new environmental issues in the 1950s and 1960s. Many scientists warned the public about the environmental dangers of new technologies. The environmental cause also depended on the institutional support of many professional groups, from public-health officials to landscape architects. Though still based largely among white, well-to-do residents of cities and suburbs, the modern movement was more demographically diverse than its predecessors.
Despite the popularity of the environmental cause in the early 1970s, the new movement had powerful opposition. The coal industry organized a coalition to try to defeat or weaken the Clean Air Act of 1970, while the National Association of Homebuilders led a successful campaign against national land-use legislation. Though a handful of unions supported antipollution initiatives, many labor leaders sided with management in opposing environmental regulation. The successes of environmentalists also provoked a backlash. In the so-called Sagebrush Rebellion, Western timber and cattle interests challenged federal management of forest and grazing lands. The revolution in state and local regulation of land use soon sparked a “property rights” movement.
The opposition grew stronger after the oil crisis of 1973. Because the production and distribution of oil come at a steep environmental cost, environmentalists had already begun to lobby for the development of alternative forms of energy, and the crisis might have made that case more compelling. But the sudden scarcity of a critical resource instead strengthened the position of those who saw environmentalism as a terrible drag on the economy. The oil crisis was perhaps the final blow to the postwar boom. Both inflation and unemployment worsened, and the hard times brought a revised version of the old argument about jobs and environmental protection: The nation could have one or the other, but not both.
The backlash against environmentalism helped Ronald Reagan win the presidency in 1980. Reagan promised to remove restrictions on energy development, eliminate thousands of environmental regulations, and privatize millions of acres of the public domain. But he was only partly successful. The resurgence of conservatism forced environmentalists to give up any hope of expanding the federal government’s power to protect the environment. The Reagan administration was unable, however, to undo the environmental initiatives of the early 1970s. In the 1980s, the membership of environmental organizations reached new highs.
The rise of concern about global warming in the late 1980s did not change the basic political dynamic. The federal government undertook few important environmental initiatives in the generation after Reagan left office. Though scientists, environmentalists, and many others called with increasing urgency for bold action to limit human-induced climate change, their efforts did not break the political stalemate at the federal level. But the environmental movement was more successful in other arenas. Environmental ways of thinking and acting are more common now in many basic American institutions, from schools to corporations.
See also energy and politics.
FURTHER READING. Richard N. L. Andrews, Managing Nature, Managing Ourselves: A History of American Environmental Policy, 2nd ed., 2006; Stephen Fox, The American Conservation Movement: John Muir and His Legacy, 1985; Robert Gottlieb, Forcing the Spring: The Transformation of the American Environmental Movement, revised and updated ed., 2005; Samuel P. Hays, Beauty, Health, and Permanence: Environmental Politics in the United States, 1955–1985, 1987; Martin V. Melosi, The Sanitary City: Urban Infrastructure in America from Colonial Times to the Present, 2000; Carolyn Merchant, The Columbia Guide to American Environmental History, 2002; Adam Rome, The Bulldozer in the Countryside: Suburban Sprawl and the Rise of American Environmentalism, 2001; Adam Rome, “ ‘Give Earth a Chance’: The Environmental Movement and the Sixties,” Journal of American History 90 (September 2003), 525–54; Ted Steinberg, Down to Earth: Nature’s Role in American History, 2002; Thomas R. Wellock, Preserving the Nation: The Conservation and Environmental Movements, 1870–2000, 2007.
ADAM ROME
In 1789 the United States was a new republic on multiple levels. The new nation represented the first attempt at a continent-sized republic in the history of the world. Regarding organized competition for national power as both immoral and likely to bring on civil war or foreign intervention, the founders designed the constitutional system to prevent the development of political parties, then lined up unanimously behind the country’s most revered public figure, General George Washington, as the first occupant of the powerful new presidency.
On a more substantive level, the question of what kind of nation the United States would become was completely open. Despite their commitment to unity, American political leaders turned out to have very different ideas about the future direction of the country. To northeastern nationalists like Alexander Hamilton, America was “Hercules in his cradle,” the raw materials out of which they hoped to rapidly build an urban, oceangoing commercial and military empire like Great Britain. Upper South liberals like Thomas Jefferson hoped for a reformed version of their expansive pastoral society, gradually purged of slavery and gross social inequality, and sought to stave off traditional imperial development. Less enlightened planter-politicians in the more economically robust lower South looked forward to building new plantation districts on rich lands just becoming available and to acquiring the slave population they expected to build and work them. Still other Americans who were relatively new to politics—artisans, immigrants, and town dwellers of the middle and lower ranks—held to the radicalism of 1776 and saw America as the seedbed in which the most democratic and egalitarian political visions of the Enlightenment would flower.
All these potential futures were predicated on a distinctive role for the American state and different policies. With Alexander Hamilton ensconced as Washington’s treasury secretary and de facto prime minister, his option got the first trial. Proceeding on the assumption that ample revenues and stable credit were the “sinews of power,” Hamilton set up a British-style system of public finance, with a privately owned national bank and an interest-bearing national debt. Secretary of State Jefferson and Hamilton’s old ally Representative James Madison protested the resulting windfall profits for northern financial interests and bristled at the freedom from constitutional restraint with which Hamilton acted—there was no constitutional provision for a national bank or even for creating corporations. Hamilton’s reading of the Constitution’s “necessary and proper” clause as allowing any government action that was convenient or conducive to its general purposes horrified Jefferson and Madison as tantamount to no constitutional limitations at all.
Despite his own rapid social climb, Hamilton’s expansive approach to government power went along with a dismissive attitude toward popular aspirations for political democracy and social equality. This attitude was quite congenial to the wealthiest men in every region, merchants and planters alike, and these elites formed the backbone of what became the Federalist party. As long as they held power, the Federalists did not consider themselves a party at all, but instead the nation’s rightful ruling elite. While admitting that republican government was rooted in popular consent, they favored strict limits on where and when consent was exercised. Political contention “out of doors” (outside the halls of government) should stop once the elections did.
Slaveholding democrats such as Jefferson and Madison stood in a more ambiguous relation to popular democratic aspirations than their northern followers realized, but they shared in the distaste aired in the press for the “monarchical prettinesses” built up around President Washington to strengthen respect for the new government. These included official birthday tributes, a magnificent coach and mansion, and the restriction of access to the presidential person to official “levees” at which guests were forbidden the democratic gesture of shaking the president’s hand. This monarchical culture sparked the first stirrings of party organization, when Jefferson and Madison recruited poet Philip Freneau to edit the National Gazette, a newspaper that first named and defined the Republican (better known as Democratic-Republican) opposition and became the model for the hundreds of partisan newspapers that were the lifeblood of the early party system.
From the inward-looking modern American perspective, it may be surprising that the catalyst for full-scale party conflict actually came from outside the republic’s borders. Yet the early United States was also a new nation in the global order of its time. The great political upheavals of Europe during the 1790s elicited tremendous passions in America and reached out inexorably to influence the nation’s politics, often through the direct manipulations of the great powers.
As was the case in other former colonies, U.S. politics inevitably revolved partly around debates over the nation’s relationship with the mother country. Respecting Great Britain’s wealth and power, Hamilton and the Federalists generally favored reestablishing a relatively close economic and political relationship with the British; Democratic-Republicans were less quick to forget their revolutionary views and sought to forge some completely independent status based on universal republicanism and free trade.
Then there was the question of the old alliance with France. Initially, the French Revolution was uncontroversial in the United States, but as the French situation became more radical and bloody it divided the rest of the Atlantic World. Just as Washington began his second term, France was declared a republic, King Louis XVI was executed, and Great Britain joined Austria and Prussia’s war against France, setting up a conflict that would involve the young United States repeatedly over the next 20 years. Both sides periodically retaliated against neutral American shipping and sought to push American policy in the desired direction.
Hamilton and the Federalists recoiled from the new French republic. Yet, despite its violence and cultural overreach (rewriting the calendar, closing the churches, etc.), the radicalized French Revolution was wildly popular in many American quarters, especially with younger men and women who had been raised on the French alliance and the rhetoric of the American Revolution. When Edmond Genet, the first diplomatic envoy from the French republic, arrived in America in April 1793, he found enthusiastic crowds and willing recruits for various projects to aid the French war effort, including the commissioning of privateers and the planned “liberation” of Louisiana from Spain.
The political response to Genet was even more impressive. A network of radical debating societies sprang up, headed by the Philadelphia-based Democratic Society of Pennsylvania. The societies were modeled on the French Jacobin clubs and opened political participation to artisans and recent immigrants. While not founded as party organizations, they provided a critical base for opposition politics in Philadelphia and other cities. Genet’s antics flamed out quickly, but it was the Democratic-Republican Societies that struck fear into the hearts of the constituted authorities. President Washington denounced the clubs as “self-created”—extra-constitutional and illegitimate—and blamed them for the western troubles known as the Whiskey Rebellion.
The Democratic-Republican Societies as such withered under Washington’s frowns, but the opposition grew even stronger when Chief Justice John Jay brought back a submissive new treaty with Great Britain at a time when the British navy was seizing neutral American ships and impressing American sailors by the score. French minister Pierre Adet bought a copy of the secret document from a senator and saw that it fell into the hands of printer and Democratic Society leader Benjamin Franklin Bache. Bache disseminated it widely, helping to provoke the biggest demonstrations America had seen since the Stamp Act crisis. Once Washington reluctantly signed the treaty, the opposition forces shifted their focus to the House of Representatives, where they tried to deny the appropriation of funds needed to implement certain provisions of the treaty. The effort asserted a right of popular majorities to influence foreign policy that the Framers had tried to block by vesting the “advice and consent” power in the Senate alone. The tactic narrowly failed in the face of a wave of pro-treaty petitions and town meetings skillfully orchestrated and funded by the heavily Federalist merchant community—an early example of successful public lobbying by business interest groups.
By the middle of 1796, the only constitutional alternative the opposition seemed to have left was electing a new president. Washington desperately wanted to retire, but with only the unpopular Vice President John Adams available as a plausible replacement, Hamilton convinced Washington to delay his farewell until a month before the 1796 presidential voting was scheduled to take place. Though hampered by a fragmented electoral system in which only a few states actually allowed popular voting for president, the Republicans mounted a furious campaign that framed the election as a choice between British-style monarchy or American republicanism. These efforts propelled Thomas Jefferson to a second-place finish that, under the party-unfriendly rules of the Electoral College, made him Adams’s vice president. Unfortunately, Jefferson’s victory in the swing state of Pennsylvania was clouded by published French threats of war if Adams was elected.
Federalists were in a vengeful mood and unexpectedly dominant when the next Congress convened. French diplomatic insults and prospective French attacks on American shipping stoked a desire to deal harshly with the new nation’s enemies, within and without. Amid the “black cockade fever” for war against France, the Federalists embarked on a sweeping security program that included what would have been a huge expansion of the armed forces. The enemies within were a wave of radical immigrants, especially journalists, who had been driven from Great Britain as the popular constitutional reform movement there, another side effect of the French Revolution, was ruthlessly suppressed.
The Federalists’ means of dealing with the new arrivals and the problem of political opposition more generally were the Alien and Sedition Acts. The Alien Acts made it easier for the president to deport non-citizens he deemed threats, such as the aforementioned refugee radicals, and lengthened the residency requirement for immigrants seeking citizenship. The Sedition Act made criticism of the government a criminal offense, imposing penalties of up to $2,000 in fines and two years in prison on anyone who tried to bring the government or its officers “into contempt or disrepute; or to excite against them the hatred of the good people of the United States.” Of course, it was almost impossible to engage in the normal activities of a democratic opposition without trying to bring those in power into some degree of public “contempt or disrepute.”
The power grab backfired badly. Ignored by his own cabinet, Adams belatedly decided to make peace with France. Despite prosecuting all the major opposition editors, jailing a critical congressman, and backing legal persecution with occasional violence and substantial economic pressure, the Federalists found themselves dealing with more opposition newspapers after the Sedition Act than before, and a clearly faltering electoral position. Leaning heavily on a politicized clergy, Federalists defended their New England stronghold with the first “culture war” in American political history, painting Jefferson as an effete philosopher, coward, and atheist who might be part of the international Illuminati conspiracy to infiltrate and destroy the world’s religions and governments. Voters were invited to choose “God—and a Religious President; Or impiously declare for Jefferson—and No God!!”
By the end of 1800, all that was left was scheming to avoid the inevitable: Pennsylvania Federalists used the state senate to block any presidential voting in that banner Republican state, while Alexander Hamilton tried to torpedo Adams with a South Carolina stalking horse for the second election in a row. When strenuous partisan campaigning produced exactly equal electoral vote totals for Jefferson and his unofficial running mate, Aaron Burr, congressional Federalists toyed with installing the more pliable Burr as president. As angry Democratic-Republicans prepared for civil war, the Federalists backed down and accepted Jefferson’s election in the House of Representatives after 35 deadlocked ballots.
The 1800 election permanently resolved the question of whether popular democratic politics “out of doors” would be permitted, but Jefferson and his successors did not regard the party conflict itself as permanent or desirable. Jefferson’s goal was not so much total unity as a one-party state in which the more moderate Federalists would join the Democratic-Republicans (as John Adams’s son soon did) and leave the rest as an irrelevant splinter group. Though Jefferson believed the defeat of the Federalists represented a second American revolution, his policies represented only modest changes. Few of the working-class radicals who campaigned for Jefferson found their way into office. Hamilton’s financial system was not abolished but only partially phased out, with the national debt slowly repaid and the national bank preserved until its charter expired.
In contrast, Jefferson’s victory had a profound impact on the democratization of American political culture. Though the electoral system had been fragmented and oligarchic in 1800, it became much less so once “the People’s Friend” was in power. Jefferson self-consciously dispensed with “monarchical prettinesses” in conducting his presidency, receiving state visitors in casual clothing, inviting congressmen in for “pell-mell” dinner parties, and personally accepting homely tributes from ordinary Americans such as the Mammoth Cheese from the Baptist dairy farmers of Cheshire, Massachusetts. Putting the French machinations of 1796 behind them, the Democratic-Republicans had successfully taken up the mantle of patriotic national leadership, and one of their most successful tactics was promoting the Jefferson- and democracy-oriented Fourth of July as the prime day of national celebration, over competitors that commemorated Washington’s birthday or major military victories. Rooted in celebrations and mass meetings knitted together by a network of newspapers, the Jeffersonian politics of patriotic celebration proved a powerful form of democratic campaigning. Using these methods, vibrant though fractious Democratic-Republican parties took control in most of the states, moving to win areas (such as New England, New Jersey, and Delaware) that Jefferson had not carried in 1800 and posting some of the highest voter turnouts ever recorded.
In response, the Federalists began behaving more like a political party, dropping self-conscious elitism, intensifying their culture war, and putting more money and effort into political organization as they fought to hold on. Hamilton suggested organizing Christian constitutional societies to better promote the idea of a Federalist monopoly on religion. New England Federalists amplified their claims to include the suggestion that Jefferson and his followers were libertines who wanted to destroy the family. From 1802 on, Federalists also began to bring up slavery periodically, not to propose abolishing the institution, but instead to use it as a wedge issue to keep northern voters away from Virginia presidential candidates. The revelations about Jefferson’s relationship with his slave Sally Hemings were used as another example of his libertinism and disrespect for social order, while the three-fifths clause of the Constitution was decried as evidence that Southerners wanted to treat northern voters just like their slaves.
While there was no direct foreign interference after 1800, postcoloniality remained the dominant fact of American political life through the end of the War of 1812. Though wildly popular and serving Jefferson’s larger agenda of agricultural expansion without heavy taxation or a large military, the Louisiana Purchase owed much to Napoleon’s desire to destabilize the United States and set it against Spain and Britain, and he achieved that goal. Jefferson and his two successors lived in fear of territorial dismemberment at the hands of European powers allied with Indians, slaves, and disloyal American politicians, among them Jefferson’s first vice president, Aaron Burr, who was courted as a possible leader by both western and eastern disunionists.
Beginning late in Jefferson’s second term, the Napoleonic Wars subjected the country to renewed pressures, as both the British and French took countervailing measures against American shipping. It was now the Republicans’ turn to clamor for war, this time against Great Britain. Having largely dismantled the Federalist military buildup and loath to violate his small military/low taxation principles, in 1807 Jefferson unleashed a total embargo on all foreign trade. In so doing, he hoped to impel the belligerent powers to treat American ships more fairly by denying them needed raw materials, especially food, and markets for their manufactured goods. The embargo policy failed miserably and brought disproportionate economic suffering to commercial New England, but it proved a godsend for the Federalists.
Beginning in 1808, the Federalist party stormed back to competitiveness in much of the North and stabilized itself as a viable opposition party, despite never managing to field a national candidate who could seriously challenge Madison or Monroe. New York’s De Witt Clinton came the closest, but he was not a Federalist; he was an independent Democratic-Republican sometimes willing to cooperate with the party. State and local Federalists did better once Madison and Congress finally broke down and declared war in 1812. The hapless nature of the American war effort soon lured the Federalists to their death as a national party, or at least the sickness unto death. In 1814 the once-again firmly Federalist New England states called a special convention at Hartford that was widely perceived as disunionist.
Andrew Jackson’s shocking, lopsided victory at New Orleans in January 1815 revolutionized the war’s public image. The Battle of New Orleans placed the Federalists in a dangerously ignominious light. It also made many of the existing postcolonial issues irrelevant, with Jackson having shown the great European powers that further intervention on the North American mainland would be costly and fruitless. “Hartford Convention Federalist” became a watchword for pusillanimity and treason. Federalist political support retreated to its strongest redoubts and declined even there. In 1818 even Connecticut, a “Land of Steady Habits” that neither Jefferson nor Madison had ever carried, fell to the Democratic-Republicans.
The Federalists’ decline went along with a collapse of the key distinctions of the old party conflict. Most Republican officeholders were gentlemen attorneys and planters who had long since broken with the old Jacobin radicals and joined the ruling elite themselves. At the same time, the war had convinced many younger Republican leaders that a more powerful and prestigious federal government was necessary to keep the country safe and united. During Madison’s second term, a second Bank of the United States was created, and ambitious plans were made for federally funded improvements in the nation’s transportation facilities and military capabilities. Before leaving office, Madison—citing constitutional qualms—vetoed a Bonus Bill that would have spent some of the money earned from the new bank on the planned internal improvements.
Though once the fiercest of partisans, James Monroe faced only token regional opposition in the 1816 election and arrived in the presidency ready to officially call a halt to party politics. He toured New England, and many former enemies embraced him, but what a Federalist newspaper declared an “Era of Good Feeling” was rather deceptive. While Democratic-Republican officials in Washington accepted a neo-Hamiltonian vision of governing, and squabbled over who would succeed to the presidency, powerful and contrary democratic currents flowed beneath the surface. Popular voting was increasingly the norm for selecting presidential electors and ever more state and local offices, and property qualifications for voting were rapidly disappearing. This change had the side effect of disfranchising a small number of women and African Americans who had hitherto been allowed to vote in some states on the basis of owning property.
Republican radicals in many states, often calling themselves just Democrats or operating under some local label, refused to accept the partisan cease-fire and battled over economic development, religion, the courts, and other issues. Pennsylvania Democrats split into New School and Old School factions, with the former controlling most of the offices but the latter maintaining the Jeffersonian egalitarianism and suspicion of northern capitalism. Divisions over debtor relief following the Panic of 1819 left Kentucky with two competing supreme courts, and civil war a distinct possibility. When Congress tried to upgrade member living standards in 1816 by converting their $6 per diem allowance to a $1,500 yearly salary, backlash among voters and the press forced 80 percent of the members who voted for the pay raise out of office.
The collapse of national party lines also permitted a number of other threatening developments as politicians operated without the need or means to build a national majority. For example, General Andrew Jackson was allowed to seize Spanish territory in Florida to aid the expansion of the southern cotton belt. Jackson’s actions were then repudiated by the Monroe administration and pilloried by House Speaker and presidential candidate Henry Clay, only to be vindicated in the end by another contender, Secretary of State John Quincy Adams, and his Transcontinental Treaty with Spain.
Slavery also emerged as a major national issue when the northern-majority House of Representatives voted to block Missouri’s entry into the union as a slave state. Though slavery had existed in every state at the time of the Revolution, the North had largely, though gradually, abolished it by the 1820s. Coming seemingly out of nowhere, the proposal of New York congressman James Tallmadge (a Clintonian Democrat) sparked one of the fullest and most honest debates on slavery that Congress would ever see, with various members adopting the sectional positions that would eventually bring on the Civil War. Nationalists led by Henry Clay and President Monroe worked out a compromise to the Missouri Crisis, but a group of Democratic leaders centered on New York’s Martin Van Buren, already disturbed by the rampant crossing of party lines, concluded that the old Jeffersonian coalition of northern workers and farmers with southern planters would have to be resurrected if the union was to survive. Unfortunately, that meant limiting further congressional discussions of slavery, along with additional national development programs that might break down the constitutional barriers protecting slavery and southern interests.
Van Buren’s first try at executing his plan was doomed by the lack of party institutions possessing any semblance of democratic authenticity. Though for many years the congressional caucus system of nominations had been under attack from radical democrats, in 1824, Van Buren and his allies still used it to nominate their favorite, Treasury Secretary William Crawford, despite the fact that Crawford was gravely ill and most members of Congress refused to attend the caucus. In the presidential election, with no clear party distinctions operating, Crawford and fellow southern contender Henry Clay were beaten by surprise candidate Andrew Jackson and Secretary of State John Quincy Adams. Jackson’s sudden political rise, launched by cynical western Clayites but then supported by “Old School” Democrats back east, had already swamped the candidacy of John C. Calhoun, who stepped into the vice presidency. Jackson appeared to have attracted the most popular votes, in a light turnout. But with no Electoral College majority, Congress elected the younger Adams in a “corrupt bargain” with Clay that was permissible within the existing constitutional rules but flew in the face of an increasingly democratic political culture.
The deal that made Adams president also played into the hands of those looking to resuscitate the old Jeffersonian coalition at his expense. Van Buren and his allies somewhat grudgingly decided that the popular but excitable General Jackson could supply the electoral vitality they needed. Denied the presidency, Jackson vowed revenge on Clay and Adams, and his supporters in Congress set out to turn the Adams administration into a lurid caricature that would ensure Jackson’s election in 1828.
Though among the most far-sighted and conscientious of men, John Quincy Adams was a failure as a president. A useful ally of Jackson and the South before 1824, he assumed that his good intentions toward all sides were understood, so he forged ahead with a continuation and expansion of the Monroe administration’s nonpartisan approach and nationalist policies. In his first presidential speeches, Adams urged Congress to literally reach for the skies, proposing not only an integrated system of roads and canals but also federal funding for a national university, a naval academy, and scientific exploration and research, including geographic expeditions and astronomical observatories—“those light-houses of the skies,” as Adams called them in a much-lampooned turn of phrase. Knowing that such an ambitious expansion of government would meet public resistance in a still largely rural nation, Adams made one of the more tone-deaf comments in the history of presidential speechmaking, urging Congress not to be “palsied by the will of our constituents” in pursuing his agenda. This remark, and the imperious attitude behind it, was a gift for enemies who were already bent on depicting Adams’s election as a crime against democracy.
After that, the Jacksonian onslaught never ceased, and Adams’s every action was ginned up into a scandal by a hostile Congress and a burgeoning Jacksonian press. Adams’s decision to send a U.S. delegation to Panama for a meeting of independent American states sparked a congressional fracas that lasted so long that the meeting was over before the American delegates could arrive. This labored outrage over the Panama Congress was partly racial in nature because diplomats from Haiti would be present. Though Adams undoubtedly ran one of the cleanest, least partisan administrations, refusing to fire even open enemies such as Postmaster General John McLean, Jacksonians also mounted cacophonous investigations of malfeasance and politicization in the president’s expenditures and appointments.
Like the election of 1800, the 1828 election was both a democratic upheaval and a nasty culture war. This time the anti-intellectual shoe was on the other foot, as relatively unlettered new voters were urged to identify with Old Hickory’s military prowess over Adams’s many accomplishments. The slogan was “John Quincy Adams who can write/and Andrew Jackson who can fight.” At the same time, Adams drew support from the forces associated with the emerging “benevolent empire” of evangelical Christianity. With an eye on evangelical voters, Adams partisans muckraked Jackson’s life relentlessly for moral scandal. The general probably deserved the “coffin handbills” calling him a murderer for his dueling and several incidents of his military career, but the detailed eviscerations of his staid, respectable marriage, in which Jackson was accused of seduction and his dying wife Rachel of wantonness and bigamy, have few equals in the annals of American political campaigning. Such aggressive moralizing failed to save Adams’s presidency, but it did help define one of the enduring boundaries of the Second Party System, pitting middle-class evangelicals against more secular-minded democrats, immigrant workers, and slaveholders.
See also Democratic Party, 1800–28; federalism; Federalist Party; slavery; War of 1812.
FURTHER READING. Norma Basch, “Marriage, Morals, and Politics in the Election of 1828,” Journal of American History 80 (1993), 890–918; George Dangerfield, The Awakening of American Nationalism, 1815–1828, 1965; Stanley Elkins and Eric McKitrick, The Age of Federalism, 1993; Richard E. Ellis, The Jeffersonian Crisis: Courts and Politics in the Young Republic, 1974; Philip S. Foner, ed., The Democratic-Republican Societies, 1790–1800: A Documentary Sourcebook of Constitutions, Declarations, Addresses, Resolutions, and Toasts, 1976; Robert Pierce Forbes, The Missouri Compromise and Its Aftermath: Slavery and the Meaning of America, 2007; Richard Hofstadter, The Idea of a Party System: The Rise of Legitimate Opposition in the United States, 1780–1840, 1969; Jeffrey L. Pasley, “The Tyranny of Printers”: Newspaper Politics in the Early American Republic, 2001; Jeffrey L. Pasley, Andrew W. Robertson, and David Waldstreicher, eds., Beyond the Founders: New Approaches to the Political History of the Early American Republic, 2004; Kim T. Phillips, “The Pennsylvania Origins of the Jackson Movement,” Political Science Quarterly 91, no. 3 (Fall 1976), 489–508; C. Edward Skeen, “Vox Populi, Vox Dei: The Compensation Act of 1816 and the Rise of Popular Politics,” Journal of the Early Republic 6 (Fall 1986), 253–74; Alan Taylor, “From Fathers to Friends of the People: Political Personas in the Early Republic,” Journal of the Early Republic 11 (1991), 465–91; David Waldstreicher, In the Midst of Perpetual Fetes: The Making of American Nationalism, 1776–1820, 1997.
JEFFREY L. PASLEY
Addressing a grieving nation on November 27, 1963, five days after the assassination of President John F. Kennedy, Lyndon B. Johnson offered a simple message of reassurance. In his inaugural address in January 1961, Kennedy had declared, “Let us begin.” President Johnson amended that injunction to “Let us continue.” But if there was one quality that characterized neither his presidency nor those that followed over the next decade and a half, it was continuity. The years between 1964 and 1980 brought wrenching political, social, economic, and cultural changes to the United States.
Johnson had sound political reasons to position himself as caretaker of Kennedy’s legacy. He feared being seen as an interloper, an illegitimate successor to a martyred hero. Accordingly, he presented his legislative agenda for 1964 as the fulfillment of his predecessor’s work. He actually hoped to go far beyond Kennedy’s domestic record: “To tell the truth,” Johnson confided to a prominent Kennedy associate in early 1964, “John F. Kennedy was a little too conservative to suit my taste.”
Nineteen sixty-four turned into a year of triumph for Johnson and for a resurgent American liberalism. In his January State of the Union address, the president announced plans for an “unconditional war on poverty.” In May, in a speech to students at the University of Michigan, he offered an even more ambitious agenda of reforms, under the slogan “the Great Society,” promising “an end to poverty and racial injustice,” as well as programs to improve education, protect the environment, and foster the arts. Two days before the Independence Day holiday in July, he signed into law the Civil Rights Act, the most significant federal legislation advancing the rights of black citizens since the Reconstruction era, with provisions outlawing segregation in public facilities and discrimination in employment and education. In August he signed the Economic Opportunity Act, the centerpiece of his war on poverty, funding local anti-poverty “community action agencies,” and programs like the Job Corps (providing vocational training to unemployed teenagers) and VISTA (a domestic version of the Peace Corps).
Rounding out his triumphant year in November, Johnson soundly defeated conservative challenger Barry Goldwater. Strengthened Democratic majorities in the eighty-ninth Congress gave the president a comfortable margin of support for his reform agenda. Johnson went on to send 87 bills to Congress in 1965, including such landmark measures as Medicare and Medicaid, the Voting Rights Act, the Clean Water Act, and the Immigration Reform Act.
But other events in 1964–65 had troubling implications for the future of American liberalism. The war in South Vietnam was another Kennedy legacy that Johnson inherited, and it was not going well. Communists were making military gains in the countryside, and in Saigon one unpopular regime followed another in a series of military coups. On November 22, 1963, just under 17,000 U.S. servicemen were stationed in South Vietnam; a year later, under Johnson, the number had grown to 23,000. During the election campaign, Johnson sought to downplay the war, promising not to send “American boys nine or ten thousand miles away from home to do what Asian boys ought to be doing for themselves.” But there was an ominous sign of a widening war in August, when North Vietnamese PT boats allegedly attacked American destroyers in the Gulf of Tonkin, and Johnson authorized a retaliatory air strike against North Vietnamese naval bases. He also secured passage of the Gulf of Tonkin Resolution, a joint congressional resolution providing him open-ended authorization for the use of U.S. military force in Southeast Asia.
Vietnam turned into a major war six months later, in February 1965, when Johnson ordered the start of a massive and continuous bombing campaign against North Vietnam. A month later he began sending ground combat forces to South Vietnam. By the end of 1965, there were close to 185,000 U.S. troops in South Vietnam, and more than 2,000 Americans had died in the war.
At home in 1965, the civil rights movement led by Dr. Martin Luther King Jr. reached its high point in the spring with a campaign for voting rights in Selma, Alabama. Johnson had not originally intended to bring a voting rights bill before Congress in 1965, but public outrage at bloody attacks by Alabama authorities on King’s nonviolent followers changed his mind. On August 6, 1965, the President signed the Voting Rights Act, fulfilling the long-deferred promise of democracy for African American citizens in the South, perhaps the greatest achievement of the Johnson administration.
But just five days later, the poverty-stricken black neighborhood of Watts in Los Angeles exploded in rioting. Before police and National Guardsmen were able to suppress the outbreak, 34 people were killed and more than 250 buildings burned down. Nineteen sixty-five proved to be the first of a series of “long hot summers” of similar ghetto conflagrations. America’s racial problems were now a national crisis, not simply a southern issue. The political consequences were dramatic. White Southerners were already switching their allegiance to the formerly unthinkable alternative of the Republican Party, in reaction to the Democrats’ support for civil rights. Now white working-class voters in northern cities were departing the Democratic Party as well, to support candidates who promised to make the restoration of “law and order” their highest priority. The New Deal coalition of the Solid South, the industrial North, and liberal intellectuals that had made the Democratic Party the normal majority party since the 1930s was unraveling.
Liberalism was coming under attack from another, unexpected quarter: the nation’s campuses. Inspired by the civil rights movement, a new generation of radical student activists, the New Left, had been growing in influence since the early 1960s. In the Kennedy years New Leftists had regarded liberals as potential allies in tackling issues like racism and poverty. Johnson’s escalation of the war in Vietnam now led New Leftists, centered in the rapidly growing Students for a Democratic Society (SDS), to view liberals as part of the problem instead of part of the solution. Within a few years the New Left embraced a politics of militant confrontation that had little in common with traditional liberalism. Young black activists in groups like the Student Nonviolent Coordinating Committee (SNCC) abandoned both nonviolence as a philosophy and integration as a goal, as they raised the slogan “Black Power.”
Meanwhile, the war in Vietnam escalated. By the end of 1967, there were almost a half million U.S. troops fighting in South Vietnam, and nearly 20,000 Americans had been killed in action. The U.S. military commander in Vietnam, General William Westmoreland, sought to shore up shaky public support for the war by proclaiming in November 1967 that he could see “the light at the end of the tunnel.”
Nineteen sixty-eight was the year that many things fell apart, not least the American public’s belief that victory was at hand in Vietnam. The Tet Offensive, launched by the Communists at the end of January, failed militarily but proved a great psychological victory for them. At the end of March, following a near defeat at the hands of antiwar challenger Eugene McCarthy in the New Hampshire primary, President Johnson announced a partial halt in the bombing campaign against North Vietnam and coupled that with a declaration that he would not run for reelection in the fall. Four days later, Martin Luther King Jr. was assassinated in Memphis, Tennessee, sparking riots in a hundred cities. In June, after winning the California Democratic primary, Robert Kennedy was assassinated in Los Angeles. Eugene McCarthy continued a dispirited race for his party’s presidential nomination, but Vice President Hubert Humphrey, who did not compete in a single primary, arrived in Chicago at the Democratic National Convention in August with enough delegates to guarantee his nomination. The convention would be remembered chiefly for the rioting that took place outside the hall as antiwar protesters and police clashed in the streets.
In the presidential campaign in the fall, Humphrey was crippled by his inability to distance himself from Johnson’s Vietnam policies. Many liberal voters chose to sit out the election or cast their votes for fringe candidates. Former Alabama governor and arch-segregationist George Wallace, running as a third-party candidate, also drained conservative Democratic votes from Humphrey. In the three-way race, Republican candidate Richard Nixon won the White House with a plurality of popular votes.
Following his election Nixon promised Americans that his administration would “bring us together,” but that was neither his natural instinct as a politician nor his strategy for building what one adviser called an “emerging Republican majority.” George Wallace’s strong showing in the election suggested the advantages of pursuing a “southern strategy,” raising divisive issues of race and culture to peel off traditional Democratic voters in the South and Southwest and in working-class neighborhoods in the North.
During the 1968 campaign, Nixon had spoken of a “secret plan” to end the war in Vietnam. In private conversations with his closest adviser, Henry Kissinger (national security adviser from 1969 and, beginning in 1973, secretary of state as well), Nixon would speak of the war as a lost cause. But his policies suggested that he hoped against reason that some kind of victory could still be achieved, and that he would not become “the first American president to lose a war.” His problem was to find a way to prosecute the war while giving the American public the reassurance it sought that the war was in fact winding down. His solution was to begin a gradual withdrawal of American ground combat forces from South Vietnam in 1969, while stepping up the air war, including launching a secret bombing campaign against Cambodia.
Meanwhile, American soldiers continued to die (a total of 22,000 during Nixon’s presidency). The antiwar movement grew in size and breadth; in November 1969 half a million Americans marched in Washington, D.C., to protest the war. In May 1970, when Nixon broadened the role of U.S. ground forces, sending them into Cambodia in what he called an “incursion,” nationwide protests on and off college campuses forced him to back down, although he argued that a “silent majority” supported his policies.
In the end, Nixon’s combative style of governing and his penchant for secrecy proved fatal to his presidency and historical reputation. In the first months after taking office, he ordered the illegal wiretapping of some of Kissinger’s aides to find out who had leaked news of the secret bombing of Cambodia to the press. Later he would authorize the creation of an in-house security operation, known as the “Plumbers,” to plug leaks and to carry out break-ins and dirty tricks against political opponents. Some of the Plumbers were arrested in a bungled break-in at the Democratic National Committee headquarters in the Watergate complex in June 1972. The subsequent Watergate scandal attracted little attention during the campaign, and Nixon coasted to easy reelection over his Democratic opponent, George McGovern. And in January 1973, with the signing of the Paris Peace Accords, American involvement in the war in Vietnam formally came to an end.
But in the months that followed, as the Watergate burglars were brought to trial, the cover-up initiated by the White House unraveled. The Senate launched a formal investigation of Nixon’s campaign abuses. Indictments were handed down against top Nixon aides and associates, including former attorney general John Mitchell. Vice President Spiro Agnew resigned after pleading no contest to unrelated corruption charges and was succeeded in office by House minority leader Gerald Ford. Finally, in August 1974, facing impeachment, Nixon resigned.
On taking office, President Ford attempted to reassure the nation that “our long national nightmare” had ended, but he soon saw his own popularity plummet when he offered a blanket pardon to former president Nixon for any crimes committed while in the White House. The Democrats did very well in the fall 1974 midterm elections, and liberals hoped that the whole Nixon era would prove an aberration rather than a foretaste of an era of conservative dominance. But the overall mood of the country in the mid-1970s was one of gloom rather than renewed political enthusiasm, either liberal or conservative. The most striking feature of the 1974 elections had been the marked downturn in voter turnout. The country’s mood worsened the following spring, when the Communists in Vietnam launched a final offensive leading to the fall of the U.S.-backed regime in Saigon. It was now apparent to many that 58,000 American lives had been wasted. Some Americans feared that a resurgent Soviet Union had the United States on the run throughout the world, not just in Southeast Asia.
And there now seemed to be new foreign opponents to contend with, among them the Arab oil-producing nations that launched a devastating oil embargo in the aftermath of the 1973 Arab-Israeli War. The subsequent leap in energy prices was but part of the economic woes of the late 1970s, which included the flight of manufacturing jobs overseas, rising unemployment, declining real wages, soaring inflation, and a mounting trade deficit. Americans had come to take for granted the long period of prosperity that followed World War II, and now it was clearly coming to an end.
The immediate political beneficiary of this doom and gloom was a self-defined political outsider, Jimmy Carter, a one-term former governor of Georgia, who promised voters in 1976 a “government as good as its people,” and beat out a field of better-known rivals for the Democratic presidential nomination. As a Southerner and a born-again Christian, Carter hoped to embody a return to a simpler and more virtuous era; he remained deliberately vague in his campaign speeches about his political philosophy and policy preferences. With the economy in tatters and the memory of Watergate still strong, it was a good year to run against an incumbent Republican president, but Carter barely squeaked out a victory in November over Gerald Ford, in an election in which just over 50 percent of eligible voters turned out, the lowest level in nearly 30 years.
Carter’s outsider status did not help him govern effectively once in office. He disappointed traditional Democratic interest groups by showing little interest in social welfare issues. And Republicans attacked him for what they regarded as his moralistic and wishy-washy foreign policy. The economic news only worsened in the later 1970s, with rising unemployment and inflation. And in 1979, when Islamic militants seized the American embassy in Teheran and took several dozen embassy staff members as hostages, Carter’s political fate was sealed. The hostage taking provided the perfect issue for the Republican presidential nominee in 1980, former movie star and California governor Ronald Reagan, who vowed that in a Reagan administration America would “stand tall” again. In a three-way race that included Republican-turned-independent John Anderson, Reagan handily defeated Carter, and the Republicans regained control of the Senate for the first time since 1952. This political landslide marked the beginning of a new era known as the “Reagan revolution.”
See also civil rights; liberalism; New Left; Vietnam and Indochina wars; voting.
FURTHER READING. Robert Dallek, Flawed Giant: Lyndon Johnson and His Times, 1961–1973, 1999; George C. Herring, America’s Longest War: The United States and Vietnam, 1950–1975, 4th ed., 2001; Maurice Isserman and Michael Kazin, America Divided: The Civil War of the 1960s, 3rd ed., 2008; James T. Patterson, Grand Expectations: The United States, 1945–1974, 1996; Rick Perlstein, Nixonland: America’s Second Civil War and the Divisive Legacy of Richard Nixon, 1965–1972, 2008; Bruce Schulman, The Seventies: The Great Shift in American Culture, Society, and Politics, 2001.
MAURICE ISSERMAN
The years between 1952 and 1964 have long been viewed as an era of consensus. During this period, the fierce struggles over political economy that had dominated the country during the 1930s and 1940s receded, as the Republican Party under the leadership of President Dwight D. Eisenhower accepted the reforms of the New Deal and Fair Deal years. Both political parties agreed about the positive role that government could play in ensuring economic growth and security. Labor unions abandoned radical politics, while most critics of capitalism had been silenced by the anticommunism of the McCarthy years. And many leading political and intellectual figures celebrated the country’s ability to transcend destructive conflicts of ideology or politics.
Yet when we look at the period more closely, it is clear that in many ways the consensus was far from complete. Under the surface, many business leaders and conservative activists already dissented from the New Deal order. Leading industrial companies had begun their migration from northern and midwestern cities to the southern United States. A significant conservative political subculture that organized around a reaction against the New Deal order began to emerge. Long before the turn toward black power, the civil rights movement was already meeting with great hostility, even in the North. Today, many historians see the period as one of continued, if at times submerged, political and economic conflict more than genuine consensus.
Dwight D. Eisenhower, elected in 1952, presided over many of the consensus years. Eisenhower was a genial and popular World War II military leader, and his victory in the contest for the Republican nomination over Senator Robert Taft of Ohio, who had been a stalwart opponent of the New Deal since the 1930s, seemed to mark a dramatic shift on the part of the Republican Party. Eisenhower viewed fighting the changes of the New Deal as a political dead end; as he wrote to his more conservative brother, “Should any political party attempt to abolish social security, unemployment insurance, and eliminate labor laws and farm programs, you would not hear of that party again in our history.” He advocated “modern Republicanism,” a new fiscal conservatism that nonetheless accepted the transformations of the 1930s and 1940s. Eisenhower was not a liberal. He generally resisted further expansion of the welfare state; he reduced the federal budget, gave oil-rich lands to private companies for development, and opposed national health insurance. But he also maintained and expanded Social Security, raised the minimum wage, and distanced himself from the anti-Communist paranoia of Senator Joseph McCarthy of Wisconsin (although he was reluctant to challenge McCarthy openly). Business and labor, he believed, needed to come together in a common program, united by opposition to communism and by confidence in the ability of the state to bring about consensus in areas that, left to the private economy alone, would be fraught with conflict.
The declining opposition of the Republican Party leadership to the New Deal program helped bring the two political parties together. And, more than anything else, what they came to agree about was economic growth. The age of consensus was defined by its leaders’ abiding confidence in the idea that the government could and should intervene in the economy in certain circumscribed ways in order to ensure the continued expansion of national wealth. The resulting material abundance, they believed, would permit the resolution of virtually all social problems. Many different thinkers from across the political spectrum argued that economic growth and the constant expansion of material wealth made class conflict obsolete, and that the mass-consumption economy emerging in the decade was fundamentally different from the old, exploitative capitalism of the pre–New Deal years. Government intervention in the economy through fiscal and monetary policies—as suggested by British economist John Maynard Keynes—could tame the frenetic cycle of recession and expansion, ensuring a smooth and steady economic course. Such involvement by the state in economic life did not need to challenge private property or capitalism; on the contrary, by reducing poverty, it would lessen social unrest.
Indeed, the high proportion of the workforce represented by labor unions—over one-third, a high point for the century—caused many commentators to argue that the working class itself no longer existed, having been replaced by a broad middle class. Suburban homeownership, made possible by new federally subsidized loans for veterans and the construction of highways, extended to working-class Americans the possibility of participation in what historian Lizabeth Cohen has called a “consumers’ republic.” Television, the movies, and the rise of a national mass culture—especially a youth culture—helped create the image of a single, unified middle class. The class divisions of the early years of the century had been transcended by a broad consensus organized around continued material expansion and mass consumption; the old era of economic conflict was over.
If there seemed to be a new consensus around the basic virtue of a capitalism regulated and managed by strategic government intervention, the two political parties and most leading intellectual and economic figures shared an equally powerful hostility to communism and the Soviet Union. Senator McCarthy’s mudslinging career came to a close in 1954, when he accused the U.S. Army of harboring spies in a set of televised hearings that made clear to the country that the senator from Wisconsin had lost whatever support he had once had in Washington. But the Red Scare that had dominated American politics in the late 1940s and early 1950s—peaking with the electrocution of Julius and Ethel Rosenberg in 1953 for giving the Soviets atomic secrets in the 1940s—had already reshaped the country’s political culture.
In this context, the number of people willing to publicly affiliate themselves with the radical left dwindled greatly. Membership in the Communist Party (never very high to begin with) collapsed, and the radical political scene that had flourished in the party’s orbit dried up. The liberal establishment, no less than conservatives, joined in making anticommunism central to national politics. In the late 1940s, liberal historian Arthur Schlesinger Jr. identified the struggle against communism as the preeminent issue defining liberalism in the postwar period. Labor unions sought to expel Communists from their ranks. Virtually no one was openly critical of the underlying assumptions of the anti-Communist world-view in the 1950s.
The cold war and fear of nuclear holocaust helped cement the anti-Communist consensus in foreign policy. The threat of a third world war in which nuclear weapons might annihilate all of humanity deepened the broad agreement about the necessity of fighting communism. Even after the Korean War ended in 1953, the United States remained an active supporter of anti-Communist governments throughout the decade. The CIA helped engineer coups against leftist leaders in Guatemala and Iran; the American government supported Ngo Dinh Diem’s anti-Communist regime in Vietnam and then later acquiesced in his overthrow. In the atmosphere of broad agreement about the necessity of containing communism, few objected to such actions.
Although many of them viewed the consensus warily, intellectuals during the 1950s did much to create the idea that fundamental political conflicts were a relic of the past in the modern United States. They gave various diagnoses for what they saw as a new quiescence. Some thinkers, like sociologist Daniel Bell and theologian Reinhold Niebuhr, believed that Nazism, Stalinism, and the destructive uses to which science had been put in the atomic age had all helped to create a sense of the limits of ideas and of reason in guiding human affairs. As Bell put it in his 1960 book The End of Ideology, the sharply confrontational political ideas of the 1930s and 1940s—socialism, communism, fascism, traditional conservatism—had been “exhausted” in the America of the 1950s: “In the West, among the intellectuals, the old passions are spent.” American historians such as Richard Hofstadter and political scientists like Louis Hartz argued that a basic agreement about the principles of laissez-faire capitalism had endured throughout the nation’s past. Others, like sociologist David Riesman, suggested that the rise of a consumer society sustained personality types that sought assimilation to the norm rather than individuality, terrified to stand out and constantly seeking to fit in. And C. Wright Mills wrote that the rise of a “power elite” limited the range of collective action available to the rest of the nation, leading to a stunned apathy in political life.
The election of John F. Kennedy in 1960 over Richard Nixon (Eisenhower’s vice president) seemed to mark a new political direction, as his calls to public service contrasted with the private, consumer-oriented society of the 1950s. But in other ways Kennedy’s administration did not diverge significantly from the politics of the earlier decade. He proposed the first major tax cuts of the postwar period, and while he justified them by talking about the incentives that they would offer to investment, the cuts were crafted along Keynesian lines in order to stimulate the economy. He continued to advocate a fierce anticommunism, speaking of a struggle between “two conflicting ideologies: freedom under God versus ruthless, godless tyranny.” The ill-fated Bay of Pigs invasion of Cuba in 1961 (followed by the showdown with the Soviet Union over the stationing of nuclear weapons in Cuba the next year) showed that he was more than willing to translate such rhetoric into military action.
In all these ways, Kennedy’s administration—which was cut short by his assassination in November 1963—was well within the governing framework of the era of consensus.
Yet by the early 1960s, the limits of the consensus were also becoming clear. The most important challenge to consensus politics was the civil rights movement, which called into question the image of the United States as a land of freedom opposed to totalitarianism and fascism. The legal victory of the National Association for the Advancement of Colored People in Brown v. Board of Education, the 1954 Supreme Court decision that ruled segregation of public schools unconstitutional, helped to expose the harsh inequalities that African Americans faced throughout the country. Starting with the boycott of segregated public buses in Montgomery, Alabama, the following year, African Americans began to engage in new political strategies, most importantly that of nonviolent civil disobedience, which embodied a politics of moral witness and protest that was deeply at odds with the political style of consensus. Martin Luther King Jr. emerged as a national leader out of the bus boycott, and his Southern Christian Leadership Conference became a leading organization coordinating the movement’s strategy. In the early 1960s, civil rights activists engaged in sit-ins at lunch counters that refused to serve black people, registered black voters in the South, and resisted the system of Jim Crow legislation in countless other ways.
Although the roots of the civil rights movement were in the South, the image of ordinary people taking charge of their lives through direct action also inspired white students and others in the North. In 1962 a small group of students gathered in Port Huron, Michigan, to write the Port Huron Statement, which would become the founding manifesto of Students for a Democratic Society, the largest organization on the New Left. In the same year, Michael Harrington published The Other America, a book that called attention to the persistence, despite two decades of economic growth, of deep poverty within the United States—in the inner cities, in Appalachia, and among the elderly.
As the consensus began to fray from the left, it also came under increased criticism from the right. In the business world, many corporate leaders continued to resent the new power of labor unions and the federal government. While the unions and the legitimacy of the state were too well established to challenge openly, some corporations did donate money to think tanks and intellectual organizations (such as the American Enterprise Association, later the American Enterprise Institute) that criticized Keynesian liberalism. Others, like General Electric, adopted hard-line bargaining strategies in attempts to resist their labor unions, a tactic that caught on with companies such as U.S. Steel at the end of the 1950s, when a brief recession made labor costs more onerous. And some corporations, including RCA and General Motors as well as countless textile shops, sought to flee the regions where labor unions had made the most dramatic gains in the 1930s and 1940s, leaving the North and Midwest for the southern and southwestern parts of the country, where labor was much weaker.
In the late 1950s, organizations like the John Birch Society and Fred Schwarz’s Christian Anti-Communism Crusade flourished, especially in regions such as southern California’s Orange County, where affluent suburbanites became increasingly critical of liberalism. The civil rights movement met with massive resistance from white Southerners, who removed their children from public schools to start separate all-white private academies, insisted that segregation would endure forever, and turned to violence through the Ku Klux Klan in an attempt to suppress the movement. In the North, too, white people in cities like Chicago and Detroit continued to fight the racial integration of their neighborhoods, revealing the limits of the New Deal electoral coalition.
These varied forms of conservative rebellion came together in the 1964 presidential campaign of Arizona Senator Barry Goldwater. Goldwater, a Phoenix businessman and department store owner, was a leading critic of what he called the “dime store New Deal” of Eisenhower’s modern Republicanism. His open attacks on labor unions, especially the United Auto Workers, won him the respect of anti-union businessmen. He had supported a right-to-work statute in Arizona that helped the state attract companies relocating from the North. And his opposition to the Supreme Court decision in Brown v. Board of Education, as well as his vote against the Civil Rights Act of 1964, won him the allegiance of whites in the North and South alike who were afraid of the successes of the civil rights movement. Lyndon B. Johnson defeated Goldwater soundly in the 1964 election, which at the time was seen as the last gasp of the old right. In later years, it would become clear that it was in fact the first campaign of a new conservatism, which would be able to take advantage of the crisis that liberalism fell into later in the 1960s and in the 1970s.
The reality of the “era of consensus” was sustained struggle over the terms of the liberal order. But despite such continued conflicts, the two major political parties, along with many intellectual figures, continued to extol the ideal of consensus in ways that would become impossible only a few years later.
See also anticommunism; civil rights; communism; conservatism; foreign policy and domestic politics since 1933; Korean War and cold war; labor movement and politics; liberalism.
FURTHER READING. Taylor Branch, Parting the Waters: America in the King Years, 1954–1963, 1988; Lizabeth Cohen, A Consumers’ Republic: The Politics of Mass Consumption in Postwar America, 2003; Robert M. Collins, More: The Politics of Economic Growth in Postwar America, 2000; Elizabeth Fones-Wolf, Selling Free Enterprise: The Business Assault on Labor and Liberalism, 1945–1960, 1994; Gary Gerstle, “Race and the Myth of the Liberal Consensus,” Journal of American History 82, no. 2 (September 1995), 579–86; Robert Griffith, “Dwight D. Eisenhower and the Corporate Commonwealth,” American Historical Review 87, no. 1 (February 1982), 87–122; David Halberstam, The Fifties, 1993; Godfrey Hodgson, America in Our Time, 1976; Lisa McGirr, Suburban Warriors: The Origins of the New American Right, 2001; James Patterson, Grand Expectations: The United States, 1945–1974, 1996; Richard Pells, The Liberal Mind in a Conservative Age: American Intellectuals in the 1940s and 1950s, 1985; Rick Perlstein, Before the Storm: Barry Goldwater and the Unmaking of the American Consensus, 2001; David Stebenne, Modern Republican: Arthur Larson and the Eisenhower Years, 2006; Thomas Sugrue, The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit, 1996; Wendy Wall, Inventing the “American Way”: The Politics of Consensus from the New Deal to the Civil Rights Movement, 2008.
KIM PHILLIPS-FEIN
See liberal consensus and American exceptionalism.
See cabinet departments; presidency.