California, Oregon, and Washington had dissimilar political origins and came to statehood in different ways and at different times. Nonetheless, the three states came to share several common political characteristics, notably experiences with progressivism in the early twentieth century, the frequent use of direct democracy in state and local politics since then, and, recently, strong Democratic majorities in most urban areas along the coast and Republican majorities in inland areas.
Several European nations laid claim to the Pacific Coast. Spanish explorers established settlements in Alta California after 1769, and a Russian settlement in what is now northern California lasted from 1812 to 1841. Spanish, British, and American ships visited the Pacific Northwest in the late eighteenth century. In 1804 President Thomas Jefferson dispatched the Lewis and Clark expedition in part to find a route to the Pacific and strengthen the American claim to the Northwest.
In 1818, the United States and Great Britain created a “joint occupancy” for the Oregon country—everything west of the Rocky Mountains between Alta California and Russian North America. The Adams-Onís Treaty (1819), between the United States and Spain, set the northern boundary of Alta California at the forty-second parallel. Missionaries from the United States began work in the Willamette Valley in 1834, and settlers soon followed along the Oregon Trail. American settlers created a provisional government in 1843, and it functioned as the civil government until 1848.
In the 1844 presidential campaign, Democrats sometimes invoked Manifest Destiny to demand annexation of Texas and the Oregon country. Their candidate, James K. Polk, won the election and began to carry out his party’s platform. Some Oregon enthusiasts insisted on “54–40 or Fight,” meaning the entire jointly occupied region, but in 1846 the United States and Britain compromised on the forty-ninth parallel. In 1848 Congress organized the region as Oregon Territory.
Congress annexed the Republic of Texas in 1845. The same year, Polk asked Mexico to sell Alta California and Nuevo México to the United States. A major attraction was the Bay of San Francisco, the best natural harbor on the Pacific Coast. The Mexican government refused to sell and continued to claim Texas. War was declared on May 13, 1846.
Earlier, a U.S. Army unit commanded by John C. Frémont had entered California, allegedly on a mapping expedition. By then, northern California included several American and European settlers, some of whom held Mexican land grants. In mid-June 1846, not knowing that war had been declared, American settlers at Sonoma proclaimed a California Republic and raised a crude flag that included a grizzly bear in its design. Soon after, a U.S. Navy detachment sailed into San Francisco Bay with news of the war. The Bear Flaggers, Frémont’s troops, and the Navy took control of northern California. Mexicans offered sharper resistance in southern California, but U.S. forces took control there by mid-January 1847.
By the Treaty of Guadalupe Hidalgo (1848), ending the war, the United States purchased all of Alta California and Nuevo México, including all or parts of Texas, New Mexico, Arizona, California, Nevada, Utah, Colorado, and Wyoming, for less than half the price Polk had offered before the war.
In 1849, when gold seekers began to pour into northern California, the region was under Navy control and not yet organized as a territory. A convention soon met to draft a state constitution. The constitution, written in both English and Spanish, included a provision from Mexican law permitting a married woman to own property in her own name, the first such guarantee in any state constitution. Congress in 1850 approved statehood for California, with its modern boundaries.
Admission of California was hotly contested because the proposed constitution barred slavery, and admission of California as a free state would break the balance between slave states and free states. The Compromise of 1850 included admission of California as a free state among its many provisions but provided only a lull in the regional conflict over slavery.
Both Oregon Territory and the new state of California faced questions regarding land titles. In creating Oregon Territory, Congress voided laws passed by the provisional government, thereby calling into question the validity of land titles. Under pressure from settlers, in 1850 Congress approved the Oregon Donation Land Act, which provided for the award of up to 320 acres per person.
The land question in California involved Spanish and Mexican land grants, which often were large and vaguely defined. Congress in 1851 set up a commission to review land titles. Over five years the commissioners heard more than 800 claims and confirmed more than 600. Nearly all were appealed through the courts. The legal proceedings dragged on interminably, and many successful applicants sold their land to pay costs. Further complicating matters, squatters settled on some ranchos and refused to leave. Most scholars of the subject have agreed with Henry George, a journalist, who in 1871 called it a “history of greed, of perjury, of corruption, of spoliation and high-handed robbery.”
In San Francisco, the largest city in the West, political processes broke down twice in the 1850s. In 1851, responding to a rash of robberies, burglaries, and arson, merchants and ship captains formed a Committee of Vigilance. Despite opposition from city and state officials, the committee constituted itself as an impromptu court and hanged four alleged wrongdoers, whipped one, and banished several. In 1856 the Committee of Vigilance revived and took control of the city, establishing a force of nearly 6,000 well-armed men, mostly merchants and businessmen. City officials, the major general of the militia (William T. Sherman), the governor, and other prominent political figures all opposed the committee, but it disarmed the state militia, hanged four men, and banished about 20. The committee then established a political party and yielded power only after its candidates won the next election.
In California and Oregon Territory, a new approach to Indian reservations evolved in the 1850s. Native Americans in the east had usually been moved westward and given a reservation for each tribe. In the far west, reservations often were established by region, not tribe, and peoples from various tribes were put together regardless of the relations between them. In the 1850s, California officials approved stringent regulations over the many California Indians outside the reservations. California Indians were frequently the victims of random violence. More than one historian has concluded that genocide is the only appropriate term for the experience of California Indians during the 1850s and 1860s, and similar violent episodes took place in Oregon.
In 1853 Congress divided Oregon Territory along the Columbia River and forty-sixth parallel into Oregon and Washington territories. Four years later, Oregonians seeking statehood submitted a constitution and two other questions to voters. The voters approved the proposed constitution, decided by nearly 3 to 1 to ban slavery in the new state, and chose by an even larger margin to bar free African Americans from living in Oregon. Given the close balance in Congress, approval for statehood was uncertain but finally came in 1859.
Slavery roiled politics through the 1850s. Prompted by Democrats with southern proclivities, the California legislature prohibited African Americans from voting, serving on juries, marrying whites, or testifying in state courts, and applied similar restrictions to American Indians and Chinese immigrants. By 1859 California Democrats split into two camps, each led by a U.S. senator. David Broderick’s faction opposed slavery; William Gwin’s faction had southern sympathies. Tension ran even higher when a Gwin supporter killed Broderick in a duel. In 1860 California voted for Abraham Lincoln, as did Oregon.
When secession led to civil war, the two Pacific Coast states were securely committed to the Union. Though they made up just 2 percent of the Union’s population, Californians donated a quarter of all funds raised by the Sanitary Commission, the humanitarian organization that assisted Union troops, and raised more volunteers per capita than any other state. California volunteers helped to rout a Confederate army from New Mexico Territory and occupied much of the West.
During the war, Republicans moved to tie the union together with iron rails. The Pacific Railroad Act (1862) incorporated the Union Pacific (UP) company to build a railroad westward and permitted the Central Pacific company to build eastward to meet the UP. The Central Pacific was controlled by four Sacramento merchants, all Republicans, including Leland Stanford, who was elected governor in 1861. For a quarter-century, the Central Pacific and its successor, the Southern Pacific (SP), dominated rail transportation in California and elsewhere in the West. Most Californians also understood the SP to be the most powerful force in state and local politics.
In 1871 Newton Booth, a Republican opponent of the SP, won the California governorship just as the Granger movement began to affect state politics. Grangers joined other SP critics in 1873 to create the People’s Independent Party, which did well in the elections of 1873 and helped elect Booth to the U.S. Senate in 1875. After 1875, however, the Granger movement quickly faded.
A Granger party also appeared in Oregon, where Republicans had been in control since the early 1860s. Oregon Republican leaders were generally conservative and business-minded, and intraparty conflicts stemmed more from personalities than principles. In 1874, though, Grangers and other farmer groups formed a short-lived Independent Party that showed substantial strength in state legislative elections.
The 1876 presidential election thrust Oregon into national headlines. The Republican, Rutherford B. Hayes, carried Oregon, but national returns showed him trailing Samuel Tilden, the Democrat. Republicans challenged the returns from Louisiana and Florida; if successful, Hayes would have a one-vote majority in the Electoral College. Democrats then challenged one Oregon elector as unqualified; if successful, their ploy would have thrust the election into the House of Representatives, which had a majority of Democrats. Ultimately, a congressional election commission with a Republican majority accepted all of Oregon’s electoral votes as Republican, along with the electoral votes of Louisiana and Florida, giving Hayes a one-vote majority.
After 1877, teamster Denis Kearney attracted a political following in San Francisco by condemning the monopoly power of the SP and arguing that monopolists used Chinese workers to drive down wages. He soon led the Workingmen’s Party of California (WPC) and provided its slogan, “The Chinese Must Go.” The WPC briefly dominated San Francisco politics, winning elections in 1878 and 1879. Oakland and Sacramento also elected WPC mayors.
The WPC’s greatest statewide success came in 1878, in elections for a constitutional convention. WPC and Granger delegates together made up a majority and wrote into the new constitution an elected railroad commission to regulate rates, as well as restrictions on Chinese immigrants. The constitution also declared water subject to state regulation and guaranteed equal access for women to any legal occupation and to public colleges and universities. Controversial for its restrictions on corporations, the new constitution nonetheless won a majority from voters. Many of the provisions restricting Asians were invalidated by the courts.
Anti-Chinese agitation also appeared in Oregon and Washington Territory. In 1882 such western opposition to Chinese immigration led Congress to ban further immigration of laborers from China. In the mid-1880s anti-Chinese mobs appeared throughout the West, sometimes associated with the Knights of Labor. In Washington Territory the anti-Chinese movement spawned a short-lived, largely unsuccessful reform party.
In California during the 1880s, voters divided closely between the Republicans and Democrats. The SP continued its prominence in state politics—symbolized in 1885 when Stanford won election to the U.S. Senate amid allegations of vote buying. That decade marked the political apogee of Christopher Buckley, a blind San Francisco saloon keeper who emerged as “boss” of the city’s Democrats and a power in state Democratic politics. In 1891 charges of bribery led Buckley to leave the country, and his organization fell apart.
Washington Territory grew slowly until the 1880s, when railroad construction finally connected Puget Sound directly with the Midwest. Washington statehood was delayed not only by slow population growth but also by partisan maneuvering in Congress, where the Democratic majority in the House feared that Washington statehood would mean more Republican electoral votes. When Republicans won secure control of both houses and the presidency in 1888, statehood for Washington followed in 1889.
Populism affected all three states. In 1891 the California Farmers’ Alliance launched a state Populist Party, focusing its campaign against the SP. The Populists took 9 percent of the 1892 presidential vote and won one congressional seat and eight seats in the state legislature. Populist candidates later won the mayor’s office in San Francisco and Oakland. In the 1896 presidential election, however, Republicans took California by a tiny margin. That margin soon widened. Between 1898 and 1938 no Democrat won the California governorship, and Republicans typically had large majorities in the state legislature, as California became one of the most Republican states in the nation.
In Oregon delegates from farmers’ organizations, prohibitionists, and trade unions formed a new party in 1889, one that promoted the Farmers’ Alliance. By 1892 these groups had aligned with the national Populist Party. The most prominent Populist in Oregon was Governor Sylvester Pennoyer, elected in 1886 as a Democrat and reelected in 1890 as a Democratic-Populist fusionist. In the 1892 presidential election, Oregon cast one of its electoral votes for Populist James B. Weaver because a Democratic-Populist candidate for elector received enough votes to edge out a Republican. Republicans won the other three electoral votes. Populists won some seats in the Oregon legislature in 1892 and 1894 but accomplished little. Republicans swept the Oregon elections in 1896 and usually dominated state politics thereafter.
As in California, Populists made a decent showing in Washington’s major cities, and Spokane elected a Populist mayor in 1895. Not until 1896, however, did Populists win more than local elections; that year, in fusion with the Democrats, they carried Washington for William Jennings Bryan, elected the governor and a majority of the legislature, and then sent a Silver Republican fusionist to the U.S. Senate. The party soon died out, however, and Republicans dominated the Washington statehouse in the early twentieth century.
During the late nineteenth century, women promoted a range of reform issues, including woman suffrage. In 1878 California’s U.S. senator Aaron A. Sargent introduced, for the first time, a proposed federal constitutional amendment for woman suffrage. The Washington territorial legislature approved woman suffrage in 1883, but it was ruled unconstitutional by the territorial supreme court in 1888. Woman suffrage came before California voters in 1896, but a large negative vote in San Francisco and Oakland overcame the small favorable majority elsewhere.
The three Pacific Coast states moved in similar political directions in the early twentieth century. All experienced progressivism, became more conservative in the 1920s, and moved toward the Democrats and the New Deal in the 1930s.
Much of Oregon progressivism centered on William U’Ren, a Populist turned Republican. U’Ren was attracted to the single-tax proposed by Henry George but concluded that it was unlikely to be adopted without a popular vote, so he began to promote the initiative and referendum (I&R), part of the Populist platform. U’Ren pushed and prodded until voters approved I&R through a constitutional amendment in 1902. Between 1904 and 1914, Oregonians voted on 136 initiatives, approving 49, and I&R became known as the Oregon System. Successful initiatives included a railroad commission, bank regulation, a child labor law, recall, a minimum wage, home rule for cities, and a direct primary. Governor Oswald West, a Democrat elected in 1910, frequently resorted to the initiative when the legislature refused reforms he sought.
When an Oregon law mandating protection for women workers was challenged, the Supreme Court’s decision upholding protection, in Muller v. Oregon (1908), set an important precedent.
Events in Oregon influenced Washington progressives, especially I&R and recall, which were adopted early and used regularly, including recalls of the mayors of Seattle and Tacoma in 1911. The Washington legislature also created regulatory commissions for railroads and other industries and established minimum wages for women and children, maximum hours for women, limits on child labor, workmen’s compensation, and the direct primary.
California came late to progressivism, but legislators finally adopted the direct primary in 1909, which led to the nomination of Hiram Johnson for governor in 1910. Johnson, a Republican, lambasted the SP and won, as did other progressives. The 1911 legislature produced more than 800 new laws and 23 constitutional amendments, including I&R, recall, regulation of railroads and public utility companies, the eight-hour day for women, restrictions on child labor, workmen’s compensation, and an investigation of corruption and inefficiency in state government.
In 1912 Theodore Roosevelt ran for president as candidate of the new Progressive Party, and he chose Johnson as his running mate. Roosevelt carried six states, including California and Washington.
All three Pacific Coast states were in the vanguard of states adopting woman suffrage. Washington became the fifth state to do so, in 1910. California followed in 1911, and Oregon in 1912.
California experienced another round of progressive reform in 1913, including laws restricting political parties. After 1913 California had more nonpartisan elected offices than any other state. Other legislation that year included reforms promoted by women’s groups and the creation of three new commissions: Industrial Welfare (health, safety, and welfare of women and children), Industrial Accidents, and Immigration and Housing (migrant farm labor). The Alien Land Act, prohibiting immigrants ineligible for citizenship (those from Asia) from owning land, was intended in part to embarrass President Woodrow Wilson and the Democrats. Johnson carried the progressive banner into the U.S. Senate in 1917 and served until his death in 1945.
Progressivism transformed politics and government in all three states, adding new functions, especially the regulation of public utilities and protection of workers and consumers. The progressives’ assault on political parties transformed the ground rules of state politics. The initiative became an important source of policy making. And women entered the political arena in a significant way.
Many progressives decried any role for economic class in politics, but class-based political groups appeared in all three states. Between 1901 and 1905, the Union Labor Party in San Francisco won the mayoralty three times and took other local offices, then returned to power in 1909, despite revelations of earlier corruption. Socialists won local offices in several places; in 1912 Eugene Debs, the Socialist presidential candidate, received 12 percent of the vote in California and Washington and 10 percent in Oregon, compared to 6 percent nationwide. The Industrial Workers of the World established a significant presence in the lumbering areas of Oregon and Washington. World War I brought a surge of wartime patriotism, and these radical groups drew strong opposition.
Opposition to radicals continued after the war. In 1919 the Seattle Central Labor Council (unions affiliated with the American Federation of Labor) called a general strike in support of striking shipyard workers. Largely successful, the general strike lasted three days, but conservatives and antilabor groups held it up as an example of the dangers posed by radicals.
Progressivism waned after the war but did not disappear. Hiram Johnson continued as a strong progressive voice in the U.S. Senate and also staunchly opposed the League of Nations. Throughout the 1920s, a large majority of California voters registered as Republicans but divided closely between the progressive and conservative wings, making the Republican primary more important than the general election. Similar patterns appeared in Oregon and Washington, but Republican progressives there rarely mounted significant challenges to conservative dominance. Nonetheless, in 1924 Robert La Follette drew a third of the vote in California and Washington, double his national average, and a quarter of the vote in Oregon, edging the Democrats out of second place in all three states.
The Ku Klux Klan appeared in all three Pacific Coast states in the 1920s. In Oregon the Klan and other groups promoted a 1922 initiative requiring children to attend public school. Passed by a large margin, the law aimed at closing Catholic parochial schools, but a federal court declared it unconstitutional in 1924, and the U.S. Supreme Court did the same in Pierce v. Society of Sisters (1925). Also in 1922, Walter Pierce, a Democrat, received Klan support in his campaign for governor and won by a large margin. A prohibitionist and progressive, committed to public ownership of the electrical industry, Pierce nonetheless got little support from Republican progressives. In Washington in 1924, voters overwhelmingly defeated a Klan-sponsored initiative, modeled on the Oregon law, to require all children to attend public schools. The Klan showed strength in several California cities but played no significant role in state politics.
The Great Depression and the New Deal of Franklin D. Roosevelt revived Democratic fortunes. Roosevelt carried the three states by 57–58 percent in 1932 and 64–67 percent in 1936. During the 1930s, Democrats won U.S. Senate seats in California and Washington and took the governorship in all three states.
The California gubernatorial election of 1934 drew national attention, but electoral politics were pushed out of the headlines earlier that year by the three-state longshore and maritime strikes, which shut down shipping for three months, and by the four-day San Francisco general strike, all of which conservatives blamed on Communists. Upton Sinclair, author of The Jungle (1906) and a former Socialist, won the Democratic nomination for governor with a program called End Poverty In California (EPIC). Though voters flocked to register as Democrats, Sinclair lost after a torrent of attacks that broke new ground in negative campaigning. The winner, Republican Frank Merriam, disappointed conservatives by supporting a new income tax and increasing the sales tax. A referendum to repeal the new taxes failed. The 1938 election marked the high point of Communist support for the Democrats, but Democrats’ success rested primarily on a base of EPIC organizing and strong support from AFL and CIO unions, brought together by an antilabor initiative. Led by Culbert Olson, their gubernatorial candidate, Democrats swept nearly every statewide office and took a majority of the state assembly. The state senate, however, remained Republican. A broad liberal legislative agenda, including health care for nearly all workers and their families, wages and hours legislation, civil rights, and other initiatives, was defeated. Olson lost in 1942, as did Democrats in most other races.
Clarence Martin, a Democrat, won the governorship of Washington in 1932. As in California, the state legislature completely revised the tax code, shifting the major revenue source from property taxes to sales, income, and excise taxes. The income tax, however, was ruled unconstitutional. In 1935 leftist, labor, and farm organizations formed the Washington Commonwealth Federation (WCF), drawing inspiration from EPIC. The WCF was so active and successful in pushing Washington to the left that Postmaster General James Farley in 1936 jokingly referred to “the forty-seven states . . . and the Soviet of Washington.” Communist Party members did take an active part in the WCF, and a few were elected to office as Democrats.
In Oregon, Julius Meier, an independent candidate committed to public development of hydroelectric power, won the governorship in 1930 but failed to accomplish his goal. He was succeeded by Charles Martin, a conservative Democrat, who increasingly attacked the New Deal and was not renominated in 1938. Republican governor Charles Sprague, in turn, proved to be such a progressive that he lost his renomination bid in 1942 to a conservative. An Oregon Commonwealth Federation, modeled on the WCF, was less successful than its Washington counterpart.
The New Deal brought important changes to the Pacific Coast states. The Bonneville, Grand Coulee, and other dams gave the Pacific Northwest a bonanza of cheap, publicly generated electricity, which stimulated industrial development and prompted the creation of public power districts. New Deal labor policies brought many new members into unions; most voted Democratic and pushed the party to the left. In California Democrats have consistently outnumbered Republicans among registered voters since 1934. Democratic registered voters in Oregon increased sharply in the 1930s, first outnumbered Republicans in the early 1950s, and have consistently outnumbered Republicans since 1958. Similar data do not exist for Washington, but election results suggest a pattern more like California’s than Oregon’s.
By the late 1930s and early 1940s, support for the New Deal and the Democrats ebbed, especially among the middle class and farmers, even as war industries contributed to a boom in manufacturing and union membership. Republicans won the Oregon governorship in 1938 and held it and the state legislature until 1956. Republicans won the Washington governorship in 1940 and held it for 12 of the next 16 years, although Democrats usually controlled at least one house of the legislature. In California, the gubernatorial victory of Earl Warren in 1942 launched 16 years of Republican control in Sacramento. Similar patterns appeared in the region’s congressional delegations, although Democrats held more seats than before the New Deal.
After World War II, Democrats began to accumulate considerable congressional seniority, notably the two Washington senators, Warren Magnuson (1944–81) and Henry “Scoop” Jackson (1953–83). Both held significant leadership positions, as did Alan Cranston (1969–93) from California. Wayne Morse, from Oregon, first a Republican, then an independent, and finally a Democrat, served from 1945 to 1969. Beginning in the late 1960s, Oregon voters repeatedly returned moderate Republicans to the Senate: Mark Hatfield (1967–97) and Robert Packwood (1969–95), both of whom held leadership positions. Similar patterns characterized some members of the House; two House members, both Democrats, served as Speaker: Thomas Foley, from Washington, who was Speaker from 1989 to 1995, and Nancy Pelosi, from California, who was first elected Speaker in 2007.
The late 1950s marked an important turning point for Democrats. In California, Edmund G. “Pat” Brown won the governorship in 1958, and, for the first time since the 1880s, Democrats controlled both houses of the legislature. Brown and the Democrats enacted a massive water project, a major expansion of higher education, highway construction, and a fair employment practices act. A controversial fair housing act and demonstrations at the University of California, Berkeley, contributed to Brown’s defeat for a third term in 1966. In Washington a Democrat, Albert Rosellini, won the governorship in 1956 and, with a Democratic legislature, adopted a long list of administrative reforms and expanded higher education and highways. A Democrat, Robert Holmes, won the governorship in Oregon in 1956 but was defeated in 1958, and Republicans led the state for the next 16 years.
Republicans held the governorships in all three states by the mid-1960s. Elected governor of California in 1966 and reelected in 1970, Ronald Reagan championed conservative values but proved more pragmatic in practice. Promising to “cut, squeeze, and trim” the budget, he made deep cuts in higher education and mental health funding but nonetheless produced the largest budgets up to that time, requiring significant tax increases. He sent the National Guard to Berkeley to suppress demonstrations but signed the most liberal abortion bill in the country. His commitment to cutting taxes and reducing welfare forecast his presidency. Reagan was succeeded by the sometimes enigmatic Edmund G. “Jerry” Brown, son of Pat Brown and a Democrat, but voters then turned to Republicans for the next 16 years.
Daniel Evans won the Washington governorship in 1964, despite a nationwide Democratic landslide, and served three terms. A Republican, he promoted liberal environmental policies, endorsed legal abortions, expanded higher education, and supported an income tax.
In Oregon Democrats took a majority in the state legislature in 1958, for the first time in the twentieth century, but Mark Hatfield, a moderate Republican, won a closely contested election for governor. Hatfield worked to expand higher education and to bring a more diversified economy to Oregon. His successor, Tom McCall, also a moderate Republican, served two terms and initiated policies to clean up the environment and create the first state-level land-use planning system. In 1973 Oregon became the first state to decriminalize possession of small amounts of marijuana, and was followed by California and a few other states.
Recent decades have brought increasing ethnic and gender diversity among elected officials, especially in California. Since 1970 African Americans have served as mayors of Los Angeles, Oakland, San Francisco, and Seattle; Latinos as mayors of Los Angeles and San José; and Asian Americans as mayors of Long Beach, Sacramento, and San José. Asian Americans have served as U.S. senator from California and governor of Washington. Willie Brown, an African American, holds the record as longest-serving Speaker of the California Assembly, and the most recent three speakers include two Latinos and an African American woman. Since 1970 women have served as governors of Oregon and Washington, and California and Washington were the first states to have two women simultaneously serving as U.S. senators. From 1993 to 2004, Washington led the nation in the percentage of women in the state legislature. Women have been elected as mayors of most of the region’s major cities. Gays and lesbians have served on city councils and in state legislatures, but not as mayor of a major city, member of Congress, or governor. A dramatic breakthrough in gay and lesbian rights came in 2008, when the California Supreme Court ruled that restricting marriage to heterosexual couples violated the state constitution’s guarantee of equal rights. In response, evangelical Christians, Catholics, and Mormons mobilized to pass a constitutional amendment defining marriage as between a man and a woman; advocates of same-sex marriage vowed to continue the fight for equal treatment.
The three Pacific Coast states remain distinctive in their reliance on direct democracy. Most elections include a list of initiative measures. California’s Proposition 13 of 1978 launched a “taxpayer revolt” that spread to other states, and California’s Proposition 140 of 1990 established term limits and inspired similar measures elsewhere, including Washington in 1992. Oregon’s Measure 16 of 1994 legalized physician-assisted suicide; similar initiatives have appeared on the ballot elsewhere but none passed until Washington’s measure in 2008. California’s Proposition 215 of 1996 legalized marijuana use for medical purposes; voters passed similar measures elsewhere, including Washington. California Republicans used the recall in 1994–95 to punish members of their party for crossing party lines in the legislature, and, in 2003, California voters grabbed international headlines when they recalled Governor Joseph “Gray” Davis and replaced him with movie star Arnold Schwarzenegger.
In California, Proposition 13 also generated a revolution in the use of the initiative. In Proposition 98 of 1988, the California Teachers Association used the initiative to mandate funding for public K–14 education. Taken together, Propositions 13 and 98 presented a new version of direct democracy: people could vote not to tax themselves but could mandate expenditure of public funds. By the end of the 1990s, some political observers pointed to the initiative as a central culprit in creating a dysfunctional state government.
During the first decade of the twenty-first century, patterns that began in the 1980s continued to mark state politics on the Pacific Coast. Democratic presidential candidates John Kerry in 2004 and Barack Obama in 2008 carried all three states, and Democrats did well in the 2006 elections, although Schwarzenegger won a second term against a weak Democratic candidate. In Oregon in 2008, Jeff Merkley, a Democrat, defeated incumbent Senator Gordon Smith, one of the last remaining moderate Republicans in Congress. A map of voting behavior in those elections shows all three states with blue (Democratic) counties along the coast, especially in urban areas, and red (Republican) counties, typically more rural and inland. Voting on initiatives and referenda often reflected the same configuration. Thus, interior voters, especially in agricultural areas, behave politically more like voters in agricultural areas in parts of the Midwest or like the voters to their east, in Idaho and Nevada. Coastal and urban voters behave much like urban voters in the northeastern United States.
FURTHER READING. Robert E. Burton, Democrats of Oregon: The Pattern of Minority Politics, 1900–1956, 1970; Jonathan Dembo, Unions and Politics in Washington State, 1885–1935, 1983; Robert E. Ficken, Washington: A Centennial History, 1988; Gayle Ann Gullett, Becoming Citizens: The Emergence and Development of the California Women’s Movement, 1880–1911, 2000; Robert D. Johnston, The Radical Middle Class: Populist Democracy and the Question of Capitalism in Progressive Era Portland, Oregon, 2003; Richard Coke Lower, A Bloc of One: The Political Career of Hiram W. Johnson, 1993; Greg Mitchell, The Campaign of the Century: Upton Sinclair’s Race for Governor of California and the Birth of Media Politics, 1992; Gary Murrell, Iron Pants: Oregon’s Anti-New Deal Governor, Charles Henry Martin, 2000; Earl Pomeroy, The Pacific Slope: A History of California, Oregon, Washington, Idaho, Utah, and Nevada, 1965; Ethan Rarick, California Rising: The Life and Times of Pat Brown, 2005; Shelby Scates, Warren G. Magnuson and the Shaping of Twentieth-Century America, 1997; Jules Tygiel, Ronald Reagan and the Triumph of American Conservatism, 2006; R. Hal Williams, The Democratic Party and California Politics, 1880–1896, 1973.
ROBERT W. CHERNY
Pacifism, the rejection of violence as a means of solving disputes, is a broad doctrine that encompasses a variety of ideas and practices and dates back to the earliest settlements in colonial America. Throughout much of American history, pacifism has been closely associated with religion, particularly the so-called historic peace churches (the Quakers, Mennonites, and Church of the Brethren). Pacifism has found a home in other religions as well.
Arguably, the earliest pacifists in American history were religious dissenters such as Roger Williams and Anne Hutchinson, who were banished in 1635 and 1638, respectively, from the Massachusetts Bay Colony for their heretical beliefs. Other, less prominent dissidents adhered strictly to nonviolent practices, even when they faced death sentences for their beliefs.
Historians such as Peter Brock, Charles Chatfield, and Meredith Baldwin Weddle have explored the history of pacifism in colonial America and found it to be a vibrant tradition that borrowed heavily from transatlantic ideas rooted in the Enlightenment and religious dissent. Some Quakers, such as itinerant eighteenth-century preacher John Woolman, preached against conscription and condemned the use of tax revenues for war purposes. Pennsylvania, with its policies of religious tolerance and separation of church and state, became a haven for a number of colonial pacifist sects.
During the American Revolution, nonviolent resistance, such as boycotts, public protests, petition drives, and other acts of noncooperation, coexisted with more violent forms of anti-British resistance. While pacifists enjoyed only a marginal presence in the Revolution, their cultural bark would prove much more powerful than their political bite, and ultimately helped influence the restrained treatment of Loyalists after the conflict.
Peace movements flourished in antebellum America, dovetailing with the broader landscape of pre–Civil War reform efforts. In 1815 David Low Dodge, a pacifist merchant, founded the New York Peace Society, the first of many such organizations formed during the first half of the nineteenth century. His 1809 tract The Mediator’s Kingdom, not of this world, but Spiritual inspired the creation of similar groups across the United States. Reverend Noah Worcester, a New Hampshire–born Unitarian and tireless advocate of peace, worked so hard to promote pacifist ideas that he earned the title “father of the American peace movement.” Pioneering American antiwar activist William Ladd, also a New Hampshire native, was a sea captain, chaplain, and author. During his life, Ladd was called the “Apostle of Peace.” His newspaper, Harbinger of Peace, brought a wide variety of pacifists together from several states and territories, and Ladd was one of the founders in 1828 of the American Peace Society. Twenty years later, in 1848, blacksmith Elihu Burritt founded the first secular pacifist organization, the League of Universal Brotherhood.
Much of the antebellum abolitionist movement, while militant, remained nonviolent between the 1830s and the eve of the Civil War. For practical more than doctrinal reasons, abolitionists seldom took up arms against lynchings, mob violence, arson, and shootings carried out by their foes. Influenced by the work of Henry David Thoreau and other pacifist writings of the New England renaissance, abolitionist leaders such as William Lloyd Garrison, Maria Chapman, and Frederick Douglass preached restraint. Still, most abolitionists refused to condemn the violence used by foes of slavery in the 1850s, most notably in Kansas in mid-decade and by John Brown at Harpers Ferry in 1859.
As Thomas Curran documented in Soldiers of Peace: Civil War Pacifism and the Postwar Radical Peace Movement (2004), pacifists confronted a number of challenges during the Civil War and ultimately emerged from the conflict somewhat less robust than before. But the Universal Peace Union flourished in the late nineteenth century, attracting thousands of members, and it eventually joined the chorus of anti-imperialist voices in protesting America’s involvement in the Spanish-American War (1898).
American pacifism in the twentieth century became increasingly secularized, although religious pacifism remained strong. Antiwar sentiments, robust before World War I, persisted on a smaller scale after President Woodrow Wilson declared war in 1917. More radical antiwar advocates in the Socialist Party and Industrial Workers of the World sometimes endured harsh treatment, such as prison sentences, loss of mail privileges, and in certain cases a loss of citizenship.
Despite the repression of antiwar activists in World War I, the American peace movement reemerged stronger than ever during the interwar period, especially in the Great Depression. The heyday of pre–World War II isolationism also created fertile ground for pacifism, especially on college campuses and in cities. Opinion polls from the era painted a portrait of an American public more receptive than ever to pacifist ideas.
World War II abruptly reversed that situation. Pacifism went into full retreat during the war. Tiny enclaves of pacifists working in government-run Civilian Public Service (CPS) camps or languishing in prison kept the movement alive through the war. In the postwar era, small groups of intrepid “radical pacifists” attempted to breathe new life into the movement. Even though the cold war chilled dissent, pacifists such as A. J. Muste, Bayard Rustin, Dorothy Day, David Dellinger, and George Houser continued to organize protests against war and the arms race. This small but committed group developed a more sophisticated and nuanced theoretical framework for pacifism and nonviolent direct action.
Pacifists exercised tremendous influence within the civil rights movement. Arguably, the most famous pacifist in American history was Martin Luther King Jr., who constantly sought to keep the movement nonviolent. The Vietnam War also ushered in another brief golden age for pacifism. The anti–Vietnam War movement, thriving by 1967, was a boon for the American pacifist movement. During the 1960s and early 1970s, it found new life, colorful adherents, and a restored purpose. While pacifists were always a minority within the antiwar struggle, they exercised tremendous influence over the direction and tempo of the movement.
In the last quarter of the twentieth century and opening years of the new millennium, pacifism experienced many setbacks. While it enjoyed a temporary post–Vietnam War resurgence in the early 1980s around the nuclear arms race of the Reagan era and the looming prospect of U.S. intervention in Central America, it was once again in retreat by the 1990s. The antiglobalization movement fanned the embers of pacifism again, however, and it attracted a new, if small, number of followers in the aftermath of the September 11, 2001, terrorist attacks and the war in Iraq launched in 2003. Widening resistance against the Iraq War jump-started several moribund pacifist groups. While pacifism as a protest movement remains tiny, confined mostly to large urban centers, pacifism’s core ideas continue to capture the imagination of those Americans who envision a more peaceful future.
See also radicalism; religion and politics.
FURTHER READING. Peter Brock, Pacifism in the United States: From the Colonial Era to the First World War, 1968; Robert Cooney and Helen Michalowski, The Power of the People: Active Nonviolence in the United States, 1977; Staughton Lynd and Alice Lynd, eds., Nonviolence in America: A Documentary History, 1995; James Tracy, Direct Action: Radical Pacifism from the Union Eight to the Chicago Seven, 1996; Valarie H. Ziegler, The Advocates of Peace in Antebellum America, 1992.
ANDREW HUNT
Political party conventions perform a number of tasks. They generally meet every four years, several months in advance of a presidential election. The modern convention meets over several days to achieve various procedural and political goals. Leaders compose and approve the party platform, a policy statement including “planks,” or specific proposals, on which the party’s candidates run, and set rules for party procedure. In addition, leaders use the convention to address the party en masse. Minor figures are often given the opportunity to address the convention during the day while most delegates are in meetings; evening addresses, however, are heavily publicized and often delivered by major figures. The keynote speaker is often selected to fulfill some symbolic or political goal. For example, Zell Miller, a Democratic senator from Georgia, delivered the keynote at the 2004 Republican convention, endorsing George W. Bush for president on the strength of his national security credentials. Often a party’s rising stars are chosen to deliver prominent addresses. Two such speakers between 1988 and 2004, Bill Clinton and Barack Obama, were subsequently nominated as presidential candidates in their own right. The most visible and historically important task of the convention is the nomination of that party’s candidates for president and vice president.
Conventions are composed of delegates, apportioned among various state and territorial party organizations. Delegates vote for presidential and vice presidential nominees and on other procedural matters. Since the early 1970s, delegates of the two major parties have generally been bound to follow the results of state caucuses or primaries when they vote for candidates. Therefore, the identity of each party’s eventual nominee is often known weeks or even months before the conventions begin; primaries have historically been held over several months during the first half of the year, and the conventions not until late summer. The events themselves have increasingly become mere formalities, serving primarily as publicized launching pads for the final weeks of the presidential campaign.
The earliest conventions wielded a great deal of influence. By the early 1830s, the party founded by Thomas Jefferson had dominated American national politics for three decades. Contemporary Democratic political operatives like Martin Van Buren thought of a party as a system of officeholders who dispensed patronage. A caucus of prominent party leaders, therefore, generally selected each presidential candidate. President Andrew Jackson, who had held office since defeating incumbent John Quincy Adams in 1828, was a controversial figure, however, and opposition to him meant that schism and a viable two-party system would soon emerge. The appearance of the political convention facilitated the transformation.
In September 1831, the Anti-Masons—an insurgent northern group particularly powerful in New York that was fearful of what it imagined was a secret yet powerful Masonic influence on politics—organized a national convention. The Anti-Masons were imitating not American politicians (who had never held conventions) but social reformers and benevolent organizations (who had). The Anti-Masons, however, reconceived the system; their party was a mass movement, and a convention was a way to attract popular participation and establish egalitarian (as opposed to the imagined Masonic conspiracy) credentials.
In Baltimore, the Anti-Masons nominated William Wirt, a former attorney general, as their presidential candidate. The convention attracted a great deal of public attention, and mainstream politicians quickly followed suit. Later that year, a group calling themselves National Republicans (disaffected Democrats who hoped to unseat Jackson) met in convention, hoping to gain both popular attention and legitimacy as a viable opposition party; they nominated Senator Henry Clay of Kentucky. Jackson’s Democrats, however, did not let Clay’s convention stand unmatched. In 1832, they also met in Baltimore and nominated Jackson for a second term with Van Buren as his running mate. The convention also established the “two-thirds” rule, requiring any nominee to receive that proportion of the delegates’ votes, and the unit rule, requiring each state’s delegation to vote as a bloc. Both rules were designed to preserve the influence of the southern states. Democrats repeated the process in 1836, nominating Van Buren in Baltimore, and easily won the election, defeating three opposition candidates. In 1840 Van Buren had settled into the convention system enough to tinker with the format, directing the Democratic convention to issue the first party platform in history.
In 1836 the Whigs failed to organize anti-Jacksonian elements well enough to hold a convention; they were determined not to repeat the error. The 1839 Whig convention was held well in advance of the next election to give the party publicity and time to organize. It was the first convention to see jockeying for position, as the military hero William Henry Harrison outmaneuvered Winfield Scott and Clay for the nomination and then defeated Van Buren in the general election. In 1844, the Democratic convention was deeply divided over the proposed annexation of Texas. The first eight ballots failed to give any candidate, including former president Van Buren, the required two-thirds of the delegates; Van Buren had the support of a majority of delegates but had alienated the southern wing of the party by opposing annexation. James Polk, a relatively minor party figure and former governor of Tennessee, was unexpectedly nominated on the ninth ballot.
By the 1850s conventions were firmly established as a technique for gaining publicity, interest, and party legitimacy. As the Whig Party flagged, divided between northern and southern factions over the expansion of slavery, disaffected Whigs held mass meetings in Ripon, Wisconsin, and Jackson, Michigan, early in 1854. In June 1856, a national Republican Party was born at a convention in Philadelphia, which appointed a national committee, drew up a platform, and nominated John C. Frémont for president.
On the other hand, the fragmentation of the Democratic convention in 1860 signaled the collapse of that party. At the national convention held in April in Charleston, South Carolina, 50 southern delegates walked out, and after 57 ballots, the convention failed to produce a candidate. Two months later, Democrats met again in Baltimore. Again, southern delegates abandoned the convention; however, in desperation, the remaining delegates nominated Stephen Douglas for the presidency. The southern faction reconvened in Richmond, Virginia, and nominated John Breckinridge. The Republicans met in Chicago in May and nominated Abraham Lincoln on only the third ballot. Lincoln went on to win the presidency. Four years later, in the midst of the Civil War, Lincoln declared the 1864 Republican convention would be renamed the National Union convention, and invited Democrats who opposed southern secession to attend. The convention nominated one of these men, Andrew Johnson, for vice president.
The post–Civil War era saw several transformations in national conventions. Baltimore, then strategically located at the midpoint of the nation, had long been the preferred location; the Anti-Masons had met there, as had the first six Democratic conventions and nearly all the Whig conventions (including the last convention of that party in 1860). After the war, however, the nation began to look west, and the conventions followed shifting patterns of settlement, economy, and transportation. For its first 60 years, from 1856 to 1920, the Republican Party held most of its conventions in the midwestern center of Chicago, only occasionally diverting to Philadelphia (three times, in 1856, 1872, and 1900) and once each in St. Louis (1896), Cincinnati (1876), Minneapolis (1892), and Baltimore (1864). By the early twentieth century, the Democrats were going even farther across the nation, visiting Denver in 1908 and San Francisco in 1920.
The host city increasingly became a strategic selection, chosen to highlight an aspect of a party’s campaign or to appeal to a particular region or state. Both parties moved to increase their appeal across the nation. The 1924 Democratic convention took 103 ballots before nominating John Davis, a compromise candidate various party factions could agree upon. In 1936 the convention decided to drop the two-thirds rule. Additionally, the 1940 Republican convention in Philadelphia was the first to be televised, and the dramatic victory of dark-horse businessman Wendell Willkie on the sixth ballot boosted public interest in the candidate selection process.
The Progressive movement of the early twentieth century also encouraged popular influence at conventions, beginning a series of political reforms to curb the power of convention delegates. Several conventions in the late nineteenth century included bitter candidate battles over delegates and surprise nominees. A 36-year-old former representative from Nebraska, William Jennings Bryan, seized the Democratic nomination at a divided Chicago convention in 1896 on the power of his “Cross of Gold” speech, delivered in favor of adding the free coinage of silver to the party platform. Similarly, in 1880 the Republicans took 36 ballots to select dark horse James Garfield of Ohio, nominated primarily as an alternative to unpopular former president Ulysses S. Grant, who was seeking a nonconsecutive third term. In 1910 Oregon became the first state to establish a primary system for apportioning its delegates. By 1912, 11 other states had followed suit. Further, that year former president Theodore Roosevelt challenged incumbent William Howard Taft for the Republican nomination. Roosevelt swept the primaries, winning 9 out of 12, versus Taft’s single primary win, and 278 delegates to Taft’s 48 (Robert La Follette, another candidate, secured 36 delegates and 2 primaries). Taft, however, controlled the party machinery.
At the convention, Roosevelt was denied more than half the delegates he had won, and Taft easily secured the nomination. Following the example of earlier convention bolters, Roosevelt led his followers from the Republican convention to the Auditorium Theatre, where they voted to establish the Progressive Party, complete with a platform and endorsements for a number of local and state candidates. Both Taft and Roosevelt were defeated by Democrat Woodrow Wilson in the general election, and when Roosevelt refused to attend or accept the Progressive Party’s nomination at the 1916 convention, the party dissolved.
Following the tradition of the Anti-Mason Party, several other minor parties have held conventions throughout history both to attract publicity and rally their faithful. The Populist Party, born of an alliance between dissatisfied farmers and part of the union movement in the early 1890s, held a convention every four years between 1892 and 1908, nominating candidates for the presidency and other offices. The Prohibition Party, which primarily opposed the consumption of alcohol but also endorsed other social reforms, gained its widest support in the same period, though it held a convention as early as 1872. The Libertarian Party held a convention in Washington, D.C., every four years from 1972 to 2004, moving to Denver in 2008. Beginning in the 1990s, the Green Party and the Constitution Party also began holding conventions, the Greens most frequently in Los Angeles.
After World War II, primaries, though still limited in number, grew increasingly influential as a demonstration of a candidate’s ability to attract votes. In 1952 New Hampshire held an early primary, an event that became a tradition. The supporters of former general Dwight D. Eisenhower waged a surrogate campaign that defeated Senator Robert Taft. Though the primary was nonbinding, the defeat weakened the conservative Taft, who had been the presumed front runner for the nomination. Following New Hampshire, Eisenhower demonstrated enough electoral strength in the primaries to defeat Taft on the first ballot at the convention. The former general allowed the convention to choose his running mate, the conservative Richard Nixon of California. Similarly, in 1964 the insurgent conservative senator Barry Goldwater shocked party leaders, defeating Nelson Rockefeller, the governor of New York, as well as William Scranton, the governor of Pennsylvania, whom frantic moderates had convinced to run after Rockefeller’s weaknesses became evident. The 1964 convention was bitter, with Rockefeller and his supporters aiming rhetorical barbs at Goldwater and vice versa. But as with Eisenhower, Goldwater—the candidate who triumphed in the primaries—won the nomination on the first ballot.
In the same decade, the Democrats had virtually the opposite experience. In 1968 the party was in turmoil. President Lyndon Johnson, whose rigorous pursuit of the war in Vietnam made him extremely unpopular, declined to run for reelection. In the primaries, two candidates opposed to the war, Robert Kennedy and Eugene McCarthy, struggled for victories, a battle that ended with Kennedy’s assassination shortly after his victory in the California primary on June 4. Many of Kennedy’s supporters rallied to either McCarthy or Senator George McGovern; however, despite not having run in any of the 13 primaries, Vice President Hubert Humphrey controlled much of the remaining party organization and easily secured the nomination on the first ballot, outraging many McCarthy and Kennedy supporters. Humphrey maintained his support for the Vietnam War; his nomination was therefore unacceptable to many of the antiwar activists who rallied in the Chicago streets outside the convention. The demonstrations turned brutal when the Chicago police assailed protesters with clubs and tear gas; meanwhile, on the convention floor, Senator Abraham Ribicoff and Chicago Mayor Richard Daley clashed over the behavior of the police. Not until 1996 would a major party again hold a convention in Chicago.
After Humphrey’s defeat at the hands of Richard Nixon in the general election, McGovern headed a commission that reformed the Democratic nominating system; primaries became vastly more influential and numerous. McGovern himself rode a string of primary victories to the nomination at the 1972 Democratic convention in Miami. McGovern’s commission had implemented several new rules, including a delegate quota system that guaranteed a certain number of seats to minority groups. This system was unpopular among such Democratic centers of power as organized labor; McGovern’s supporters, however, won a number of credential battles at the convention and easily defeated Senator Henry Jackson for the nomination on the first ballot. However, this was not the end of the convention’s troubles. McGovern’s selection of a vice president was protracted and poorly run; Senator Thomas Eagleton of Missouri was selected well behind schedule, which meant that the nominees’ acceptance speeches were given long after prime television hours. McGovern failed to receive the “bounce” in the polls that generally follows a convention, and overwhelmingly lost the general election to the incumbent Nixon.
Despite the failures of his convention and campaign, McGovern’s system of primaries generally worked well. In subsequent years, only in 1980, when Senator Edward Kennedy challenged the incumbent president Jimmy Carter in the Democratic primaries and forced a floor vote at the convention in New York City before conceding, was the identity of the Democratic nominee even theoretically in question when the convention began. In 2008 Senator Hillary Rodham Clinton, who had narrowly lost the race for delegates to Senator Barack Obama, was granted the formality of a floor vote. However, unlike Kennedy, she had already conceded the nomination to Obama.
A similar state of affairs prevailed among Republicans from the mid-1970s onward. In 1976, former governor Ronald Reagan of California had managed to force President Gerald Ford into a deadlock; Ford had won more delegates in the primaries, but neither he nor Reagan had secured enough to win the nomination outright at the convention in Kansas City. Reagan, who had the allegiance of the party’s conservative wing, bid for the moderates’ support by announcing that he would select Senator Richard Schweiker as his running mate. The move backfired, however, and Reagan lost the support of many conservatives. Ford narrowly won the nomination on the first ballot, with 1,187 votes to Reagan’s 1,070, but conservatives managed to insert several planks in the party platform, including a call for an amendment to the U.S. Constitution that would outlaw abortion. Since then, the identity of the Republican nominee has been known by the end of the primary season.
In recent decades, the conventions have become little more than publicity events. In 2008, for example, Democratic nominee Barack Obama chose to deliver his acceptance speech to the general public in a football stadium, rather than solely to the party delegates in the convention hall. For this reason, the conventions have come under increasing criticism, particularly since both major parties receive public aid to fund the events.
See also campaigning; Democratic Party; elections and electoral eras; Republican Party.
FURTHER READING. James Chase, The Emergence of the Presidential Nominating Convention, 1789–1832, 1973; Congressional Quarterly, National Party Conventions, 1831–1972, 1972; Daniel Walker Howe, What Hath God Wrought: The Transformation of America, 1815–1848, 2007; Costas Panagopoulos, Rewiring Politics: Presidential Nominating Conventions in a Digital Age, 2007; Byron Shafer, Bifurcated Politics: Evolution and Reform in the National Party Convention, 1988.
MATTHEW BOWMAN
Germans believe in a national “culture of solidarity.” The French profess a faith in liberté, égalité, fraternité. Citizens of Thailand cling to elaborate networks of patronage and deference. Americans boast of rugged individualism (which some Europeans derogate as “cowboy culture”). Every nation has a shared set of attitudes, assumptions, aspirations, and norms that are rooted in history and legend. We call this shared vision a people’s culture, and it forms the essential backdrop to politics and society.
Culture can be elusive—one historian described cultural studies as nailing jelly to a post. British anthropologist Sir E. B. Tylor proffered a definition in 1871: Culture is “that complex whole which includes knowledge, belief, art, morals, law, customs, and other capabilities and habits acquired by [people] as member[s] of society.” Margaret Mead boiled it down to a simple phrase: “Culture is the learned behavior of a society.” Clifford Geertz, an anthropologist at the Institute for Advanced Study in Princeton, offered an even more direct and useful definition: “Culture is simply the ensemble of stories we tell ourselves about ourselves.”
Every people has its “ensemble of stories.” However, the shape and meaning of those stories are often contested—sometimes fiercely. After all, important events benefit some people and harm others; each group recalls a different tale, draws different lessons, and champions its own version over others.
Do Americans share a set of attitudes and assumptions embedded in myths about the past? Not long ago most social scientists thought so. They described a national consensus stretching back to the American founding. Important books bore titles like Henry Steele Commager’s The American Mind (published in 1950), Richard Hofstadter’s The American Political Tradition (1948), and Daniel Boorstin’s The Americans (1965). Critics occasionally damned the cultural consensus for its suffocating homogeneity, but few questioned its existence or challenged its content.
Today, agreement over a shared American culture has vanished, and three very different perspectives have emerged. First, some scholars insist that the traditional American culture is still going strong. Americans, they argue, remain deeply committed to core values like individualism, political rights, equal opportunity, and a wariness of government power. These add up, say proponents, to a great American Creed originally set down by Thomas Jefferson in the Declaration of Independence. Of course, the people of the United States have never fully lived up to their high-flying ideals, but each generation fights to close the gap between quotidian life and creedal aspirations.
Others mournfully view the American Creed as a fading relic of the past. Centrifugal forces press on our society and bode serious trouble for the grand old culture. Today, the United States “belittles unum and glorifies pluribus,” wrote Arthur Schlesinger Jr. in 1991. Almost 40 million people in the United States (or about one in eight) were born abroad. They cling to foreign values and resist the purifying fire of America’s melting pot. Ethnic militancy “nourishes prejudices, magnifies differences and stirs antagonisms.” Proponents of this view fret that a fierce politics of identity challenges traditional American culture. “Will the center hold?” asked Schlesinger. “Or will the melting pot give way to the tower of Babel?”
A third view cheers the diversity. Proponents of this perspective reject the traditional accounts of American political culture. Images of consensus, they argue, chronicled the perspective of wealth and power while ignoring alternative voices. Perhaps the most popular exhibit in the brief against the old school lies in a fear expressed during the 1962 presidential address of the American Historical Association. Carl Bridenbaugh of Brown University warned his colleagues about a gathering storm. Once upon a time, scholars shared a common culture. Now, he fretted, historians were increasingly “products of lower middle class or foreign origins, and their emotions not infrequently get in the way of historical reconstruction. They find themselves in a very real sense outsiders on our past.” Bridenbaugh’s fears proved prophetic. A new generation of scholars began to read the nation in a fresh way: the real American culture, they argued—and argue still—lies in a rich amalgamation of immigrant voices, African American blues, and songs from the urban alleys. This perspective celebrates the American “Babel” as the welcome sounds of diversity. It sees cultural pluralism as nothing less than the mainspring of national renewal.
The debate continues. Did the Americans really share a political culture? If they had it, did they lose it? If they lost it, should they feel distressed or liberated? The answer is simple and reflects the central fact of every culture: contestation. Yes, the United States has a vibrant political culture. Where many observers go wrong is to search for a static conception celebrated on all sides. American culture is constantly debated and continuously evolving. Each generation of immigrants brings new perspectives to the ensemble of stories. African Americans insist that the black experience lies at the heart of the American experience—challenging past generations that shrugged aside slavery as, in Frederick Jackson Turner’s phrase, a mere “incident.” Liminal groups of every sort remake American culture as they struggle for legitimacy. The uproar over the national story reminds us that there is nothing inevitable or permanent about the ideas and groups that win a hearing and become part of the mainstream—or those that lose and fall to the margins.
Ironically, a notorious jeremiad got the bottom line exactly right. In a speech to the Republican National Convention of 1992, Patrick Buchanan rattled the mainstream media with his ferocious declaration: “We are . . . in a culture war . . . for the soul of America.” He failed to add that the “war” has waxed and waned for 300 years. What is most distinctive and timeless about American political culture is not the desire for freedom or the demand for rights or even the irresistible rise of Wal-Mart across the countryside, but the lively debate over each of those topics and many more. In short, American political culture was and is a constant work in progress.
As in every national culture, debates about the United States turn on a series of great national myths. Each powerfully resonates with at least some of the population. Each carries its own set of lessons. The central question is always the same: How do the stories add up? What do they tell Americans about America?
Consider three classic tales and lessons they bear.
Perhaps the best-known story that people in the United States tell about themselves begins with a legend made famous by Alexis de Tocqueville: the first Americans sailed away from Old World tyrannies and came to a vast, unpopulated land. In contrast to the people in Europe, those early Americans did not face powerful political or economic elites. Here there were no rigid social classes or repressive political authorities. Instead, as Tocqueville put it, “Americans were born equal instead of becoming so.” White men (this story gets a bit shaky once you include women or people of color) faced extraordinary opportunities. The land and its riches awaited them. Anybody could become a success—all it took was a little capital and a lot of work.
In this context, continues the famous American legend, the early settlers soon became unabashed individualists. After all, if success and failure lay in every individual’s own hands, there would be no need for government assistance or collective action. European serfs had to band together to fight for political rights and economic mobility. But Americans were free from the start.
This story helps explain why Americans are so quick to denigrate their government and to celebrate markets. After all, the legend comes with an unambiguous exhortation repeated down through history: hard work leads to economic success; the poor have no one to blame but themselves. Abraham Lincoln famously recited the upside of the market credo when he declared that any man who was “industrious, honest, sober, and prudent” would soon have other men working for him. He left the inevitable corollary to nineteenth-century preachers like Henry Ward Beecher: “If men have not enough it is from want of . . . foresight, industry and frugality. No man in this land suffers from poverty unless it be more than his fault—unless it be his sin.”
When contemporary historians began to examine the myths of rugged individualism, they discovered precisely the contrary—a robust collective life. Early Americans lived hard lives on a sparsely populated land and relied on one another for almost everything. When a barn burned down, the neighbors gathered and helped raise a new one. Public buildings—churches and meeting houses—were built by citizens working together. Historian Laurel Thatcher Ulrich pored over household inventories and discovered that families even shared ownership of expensive cooking utensils—the lists of family possessions often included one-half or one-third of an iron pot or pan. Forget the legends about individuals on the frontier succeeding or failing on their own. Early Americans relied on their neighbors a great deal. They were communitarians more than individualists, republicans as much as liberals.
A focus on our common life offers a counter to the vigorous individualism and voracious markets that spring out of the first story. American idealists of every political persuasion invoke the nation’s fragile, recurring communal values. Conservatives see the American communal legacy as an opportunity to restore “traditional values”; leftists stress our obligations to one another and suggest programs like national health insurance.
At the same time, there is a more troubling aspect embedded in the communal tradition. Defining “us” also identifies “them.” In fact, the United States long ago developed a distinctive kind of American outsider—the un-American. The popular communal story, symbolized by the congenial melting pot, imagines a nation constantly cooking up a richer democracy with thicker rights. The darker alternative counters with a less cheerful story: Many Americans have faced repression simply for their ascriptive traits—their race, gender, ethnicity, or religion.
The two visions of community mingle in a long cultural dialectic. Generous American visions of equality and inclusion face off against prejudice and exclusion. The two impulses are evenly matched, Manichean twins wrestling for control of each historical moment. As William Carlos Williams put it in 1925, always poised against the Mayflower (a symbol of the quest for freedom) sails a slave ship (symbol of racial repression).
Still another story goes back to the Puritans sailing to New England in 1630. Those early settlers arrived in the New World facing the essential communal question: Who are we? The Puritans concocted an extraordinary answer: they were the community of Saints. Leadership, in both state and church, went to individuals who could prove that they were pre-ordained for salvation. The saints could vote, hold office, and enjoy full church membership. Citizens who had not demonstrated salvation (through elaborate rituals) were expected to follow the saints; they could not vote, hold office, or become full church members. And the irreparably damned had to be driven from the community—the settlers hanged witches, slaughtered Native Americans, and sent heretics packing to Rhode Island. In short, moral standing defined leaders, allocated political privileges, delimited the community, and identified the dangerous “others.”
The Puritan legend concludes with a dynamic turn. Even before the settlers landed, Governor John Winthrop delivered one of the most famous sermons in American history. “We shall be as a city on a hill, the eyes of all people are upon us.” This strange idea—that the tiny settlement at the edge of the Western world was on a mission from God and the eyes of all people were fixed on it—stuck and grew. The American lesson for the world has evolved over time—religious faith, political freedom, unfettered economic markets. But 350 years after John Winthrop delivered his sermon, Sacvan Bercovitch, a scholar specializing in early America, described the “astonishing” consequence: “a population that despite its bewildering mixture of race and creed . . . believe[s] in something called the American mission and . . . invest[s] that patent fiction with all the emotional, spiritual and intellectual appeal of a religious quest.”
Each of these three stories packs a different moral charge: Americans are rugged individuals, they wrestle over their common lives, they populate an international exemplar, a city on a hill. These are all the stuff of the national culture. They suggest national norms, serve up fodder for primal debates, and establish a setting for domestic debates, social policies, and international adventures.
Today, however, many political scientists dismiss the entire notion of political culture. Political cultures, they say, change slowly across the generations, while political events move fast. How can a constant (culture) explain a variable (politics)? Besides, the idea of national cultures imposes a kind of determinism on politics. Leftists have grown especially agitated about the ways conservatives have deployed cultural arguments; progressives resist the notion that poverty stems from “a culture of poverty” (which blames and denigrates poor people regardless of the broader economic circumstances), and they reject efforts to ascribe tensions with the Arab world to a clash of cultures (which lets the United States off the hook for blundering international policies and writes off all the friction as the inevitable “clash between civilizations”).
In dismissing the idea of political culture, however, political scientists fall into an old error. Culture is not an unyielding political fact, cast in granite. It is vibrant and dynamic; it reflects a constant debate over what it means to be an American. The stories we tell about ourselves do not belong to either left or right, to the powerful, or the poor. They are, however, formidable weapons. They shape the ways in which people see themselves; they shape the national aspirations.
Still, the critics have a point. National culture cannot offer a complete picture of political developments. It cannot, by itself, explain why the American welfare state looks so different from Sweden’s or why Iraqis view the United States with suspicion. Politics also moves through constitutions and laws, leaders and political movements, exogenous shocks and the caprice of chance. Of course, those dynamics are also incomplete. We cannot fully explain political events without understanding political culture. It forms the backdrop for events in every nation. Leaders who seek to reshape welfare programs or remake foreign nations without heeding the national “ensemble of stories” rapidly come to rue their ignorance.
In short, the United States, like every other nation, has a rich national culture that should be read as a perpetual work in progress. Americans have been contesting their ensemble of stories since the first settlers stepped ashore and began to define themselves and their colonies. The battle heats up when the society appears in flux; moments of large-scale immigration, broad economic change, and shifting social relations (especially if they agitate race or gender norms) seem to foment particularly keen culture clashes. Still, every era witnesses its own exuberant debate about what the nation has been, what it is, and what it ought to be.
FURTHER READING. James Morone, Hellfire Nation: The Politics of Sin in American History, 2003; Arthur M. Schlesinger, Jr., The Disuniting of America: Reflections on a Multicultural Society, 1991; Rogers Smith, Civic Ideals: Conflicting Visions of Citizenship in U.S. History, 1997; Alexis de Tocqueville, Democracy in America, 1835, 1840, reprint, 1966; William Carlos Williams, In the American Grain, 1925.
JAMES A. MORONE
Populism has long been among the more fiercely contested yet promiscuously applied terms in the American political lexicon. It was coined by a Kansas journalist in 1890 as an adjectival form of the People’s Party, a radical third party organized in Kansas that blossomed into a national force in 1892. But in the lower case, populist soon became a common description for any rebellious movement of ordinary, working Americans. In recent decades journalists have affixed the term populist to persons and commodities that seem authentic and unadorned, as if sprung from common sources. At times, this has included everything from plain-speaking politicians to bargain bookstores, from Bruce Springsteen’s recordings to cotton trousers, which, according to their manufacturer, are “steeped in grassroots sensibility and the simple good sense of solid workmanship.”
To cut through the confusion, one should define two kinds of populism—first, the historical movement itself; and second, the broader political critique and discourse. But a populism untethered to overtly political concerns is too vague and ubiquitous in American history to be useful as an interpretive category.
The Populist movement arose in the latter third of the nineteenth century, the period historians have traditionally called the Gilded Age. This was a time of rapid industrial and agricultural growth punctuated by sharp economic depressions. Absent state relief measures, increasing numbers of farmers were caught in a spiral of debt and many urban wage earners and their families were left hungry and homeless.
Populism was an insurgency largely made up of small farmers and skilled workers. It was strongest in the cotton states of the Deep South, the wheat-growing states of the Great Plains, and the mining states of the Rocky Mountains. The movement’s concerns were primarily economic: during a time of low commodity prices, small agrarian proprietors demanded equal treatment in a marketplace increasingly dominated by industrial corporations, banks, and large landowners. They formed local and state Farmers’ Alliances. Craft workers resisting cuts in their wages and attacks on their trade unions believed that they shared a common set of enemies with the farmers.
Populist activists proposed a variety of solutions—including nationalization of the railroads, a cooperative marketing system, a progressive income tax, and an end to court injunctions that hampered the growth of strong unions. But when a severe depression began in 1893, Populists focused on the need to inflate the money supply by basing the currency on silver reserves as well as the less plentiful reserves of gold, in hopes of spurring investment and rescuing small producers from an avalanche of debt. Most Populists were white evangelical Protestants who tended to favor prohibition and woman suffrage, “moral” issues that drew a good deal of controversy at the time.
In 1892, thousands of movement activists met to organize a national People’s Party. At founding conventions held in St. Louis and Omaha, the great orator Ignatius Donnelly proclaimed, “We meet in the midst of a nation brought to the verge of moral, political, and material ruin. . . . A vast conspiracy against mankind has been organized on two continents and is rapidly taking possession of the world.” The Populists, he promised, would bring the nation back to its presumably democratic roots. “We seek to restore the Government of the Republic to the hands of the ‘plain people’ with whom it originated,” he concluded.
This vision was notably silent about racial divisions among the “plain people”; equality was more preached than practiced in the Populist movement. During the late 1880s, Colored Farmers’ Alliances had sprung up in several states, and an umbrella group of the same name—led by a white Baptist minister—emerged at the end of the decade. But the Colored Alliance collapsed in 1891 when some of its members, who didn’t own land, went on strike against their employers, many of whom were members of the white Farmers’ Alliance. White Populists were no more hostile to black citizens than were most other white political actors at the time. In fact, such movement leaders as Thomas Watson of Georgia defended the right of black citizens to vote, in the face of violence by white Democrats in the South. But few white Populists from any region endorsed social integration or questioned the virtues of a past in which most African Americans had been held in bondage.
From its founding until 1896, the People’s Party drew a sizable minority of the ballots in a swath of rural states stretching from North Carolina west to Texas and Kansas and north into Colorado and Idaho. In 1892 James Weaver, the party’s first presidential nominee, won over a million votes, which translated to 8.5 percent of the popular vote and 22 electoral votes. During the 1890s, hundreds of Populists were elected to local and state offices, and the party boasted 50 members of Congress, some of whom ran on fusion tickets. These results emboldened insurgents and alarmed conservatives in both major parties.
In 1896 the Democrats emerged from a fierce internal battle with a presidential nominee from Nebraska, William Jennings Bryan, who had worked closely with Populists in his own state and was a well-known champion of the third party’s demand to remonetize silver. The People’s Party then met in its own convention and, after a bitter debate, voted to endorse Bryan. When he was defeated that fall, the party and the movement it led declined into the status of a sect. The party ran its own candidates for president and vice president in 1900, 1904, and 1908, and then disbanded. Only in 1904 did its ticket—led by Thomas Watson of Georgia—draw more than 100,000 votes.
But half a century after their demise, the Populists became the subject of a ferocious, dualistic debate among some of America’s most prominent historians and social scientists. From the early 1950s through the 1970s, such scholars as Oscar Handlin, Richard Hofstadter, C. Vann Woodward, Daniel Bell, and Lawrence Goodwyn disputed whether populism was conservative, defensive, and bigoted or the last, best chance for a true smallholders’ democracy. One side marshaled quotes from Populists that reeked of anti-Semitism, bucolic nostalgia, and conspiracy theorizing; the other stressed that the insurgents of the 1890s tried to remedy real grievances of workers and farmers and had specific, thoughtful ideas for reforming the system.
As a critique, however, populism predated the movement and survived it, with important alterations. Central to the original critique was an antagonism between a large majority of producers and a tiny elite of parasites. Such oppositional terms were used by the Country Party in eighteenth-century Britain and became powerful markers in American politics during the early nineteenth century. The producers were viewed as the creators of wealth and the purveyors of vital services; their ranks included manual workers, small farmers, small shopkeepers, and professionals who served such people. This mode of populism offered a vigorous attack on class inequality but one that denied such inequality had any structural causes. Populists have insisted that social hierarchies are artificial impositions of elites and doomed to vanish with a sustained insurgency of the plain people.
Populism represents the antimonopolistic impulse in American history. Populists are generally hostile to large, centralized institutions that stand above and outside communities of moral producers. They have a romantic attachment to local power bases, family farms, country churches, and citizen associations independent of ties to governments and corporations. The populist critique also includes an explicit embrace of “Americanism” that is both idealistic and defensive. In the United States, which most populists consider a chosen nation, all citizens deserve the same chance to improve their lot, but they must be constantly on guard against aristocrats, empire builders, and totalitarians both within and outside their borders who would subvert American ideals.
The populist critique is usually most popular among the same social groups who originated it during the late nineteenth century: farmers and wage earners who believe the economy is rigged against them. For example, in the 1930s, amid the first major depression since the Populist era, Huey Long and Father Charles Coughlin gained millions of followers among desperate white workers, farmers, and small proprietors by denouncing “international bankers” and calling for a radical redistribution of wealth.
But populist discourse has often floated free of such social moorings. Anyone who believes, or pretends to believe, that democratic invective can topple a haughty foe and that the judgment of hardworking, God-fearing people is always correct can claim legitimacy in the great name of “the People.” Thus, in the era of World War I, socialists on the Great Plains remade themselves into champions of the same small farmers they had earlier viewed as anachronisms in an age of corporate capitalism. The organization they founded, the Nonpartisan League, captured the government of North Dakota and came close to winning elections in several other neighboring states. During the 1930s and 1940s, industrial union organizers, including thousands of members of the Communist Party, portrayed themselves as latter-day Patrick Henrys battling such “Tory parasites” as Henry Ford and Tom Girdler, the antiunion head of Republic Steel.
From the 1940s through the 1980s, American conservatives effectively turned the rhetoric of populism to their own ends. During the “Red Scare” following World War II, they accused well-born figures in the federal government, such as Alger Hiss and Dean Acheson, of aiding the Soviet Union. In the 1950s and 1960s, the Right’s populist offensive shifted to the local level, where white homeowners in such cities as Detroit and Chicago accused wealthy, powerful liberals of forcing them to accept integrated neighborhoods and classrooms—with no intention themselves of living in such areas or sending their children to such schools. In four presidential campaigns from 1964 to 1976, George Wallace articulated this message when he championed “this average man on the street . . . this man in the steel mill . . . the beautician, the policeman on the beat.”
By the time Ronald Reagan was elected and reelected president in the 1980s, the discourse of populism had completed a voyage from Left to Right, although community and union organizers on the left continued to claim they were its rightful inheritors. “Producers” were now widely understood to be churchgoing, home-owning taxpayers with middling incomes; “parasites” were government officials who took revenues from diligent citizens and lavished them on avant-garde artists, welfare mothers, and foreigners who often acted to thwart American interests.
During the 1990s and the first decade of the twenty-first century, fear of the globalized economy spurred a new round of populist discourse. First, activists on both the labor left and the protectionist right accused multinational corporations and international bodies such as the World Bank and the International Monetary Fund of impoverishing American workers. Then the collapse of the financial system in 2008 revived anger at “Wall Street” for betraying the public’s trust and driving “Main Street” into bankruptcy. Economic populists continued to have the power to sting their enemies and, perhaps, stir a desire for social change.
See also agrarian politics; labor movement and politics.
FURTHER READING. Peter Argersinger, The Limits of Agrarian Radicalism: Western Populism and American Politics, 1995; Edward L. Ayers, The Promise of the New South: Life after Reconstruction, 1992; Alan Brinkley, Voices of Protest: Huey Long, Father Coughlin, and the Great Depression, 1982; Dan T. Carter, The Politics of Rage: George Wallace, the Origins of the New Conservatism, and the Transformation of American Politics, 1995; Lawrence Goodwyn, Democratic Promise: The Populist Moment in America, 1976; John Hicks, The Populist Revolt: A History of the Farmers’ Alliance and the People’s Party, 1931; Michael Kazin, The Populist Persuasion: An American History, rev. ed., 1998; Robert McMath, American Populism: A Social History, 1994; Jeffrey Ostler, Prairie Populism: The Fate of Agrarian Radicalism in Kansas, Nebraska, and Iowa, 1880–1892, 1993; C. Vann Woodward, Tom Watson, Agrarian Rebel, 1938.
MICHAEL KAZIN
The establishment of a national executive empowered to act independently of the legislature was one of the Constitutional Convention’s most consequential, and disquieting, innovations. The Revolution had targeted executive power as a threat, and both the states and the national government had kept it weak and subordinate. The supremacy of the representative assembly was a principle widely viewed as emblematic of the new republican experiment. The Constitution, however, rejected that principle. By creating a presidency equal in standing to the national Congress and by fortifying the new national executive with unity, energy, and institutional security, the framers pushed to the fore a very different conception of self-government.
In one sense, this innovation marked a clear retreat from the radical thrust of the Revolution. It is hard to miss the model of kingship behind the singular figure that the Constitution vests with “the executive power.” The president commands armies, suppresses insurrections, receives foreign emissaries, and pardons almost at will. The office stands watch over the legislature, its veto power potent enough both to protect its own independence and to check the programmatic impulses of simple majorities. This was, for all appearances, a conservative position designed to preserve order, manage affairs, and bring a measure of self-control to the government as a whole.
In another sense, however, the American executive drew upon and extended the principles of the Revolution. The presidency stands apart from the Congress but not from the people. As the decision to vest executive power in a single person focused responsibility for the high affairs of state, the selection procedure ensured that individual’s regular accountability to a national electorate. Provisions for a fixed term and a separate national election established the presidency as the equal of the Congress, not only in the powers at its disposal but also in its popular foundations. Overall, the construction of separate institutions each balanced against the others and each accountable to the same people underscored the sovereignty of the people themselves over any and all institutional expressions of their will.
How, and with what effect, this new arrangement of powers would work was far from clear. The few broad strokes with which Article II of the Constitution constructed the presidential office are indicative of the delicate political circumstances of its creation and of the strategic value of keeping its implications ambiguous. There is no counterpart in Article II to the crisply punctuated list of powers expressly vested in the Congress by Article I, Section 8. By the same token, the more implicit and open-ended character of the powers of the presidency gave freer rein to interpretation and the exigencies of the moment. It imparted to the office an elastic quality that incumbents were invited to exploit by their own wits.
A few issues became apparent early on. One was that the scope of presidential power would be fiercely contested. The administration of George Washington ventured bold initiatives in foreign policy, domestic policy, and law enforcement, and it backed them up with strong assertions of presidential prerogative. But each brief issued on behalf of the power and independence of the executive office provoked a strong reaction, and together they became a rallying cry for opposition. No less clear was the portentous character of the transfer of power from one president to another. John Adams’s initial decision to retain Washington’s cabinet aimed to assure stability and continuity in government operations across administrations, but it could not be sustained without severely handicapping the president in his efforts to exercise the powers of his office on his own terms. As Adams ultimately discovered, a president cannot maintain control of his own office if he does not first secure control over the other offices of the executive branch. The election of a new president would thenceforth bring in its train the formation of a new administration, with all that it implied for the disruption of established governing arrangements and the perturbation of governing coalitions.
Behind these early revelations loomed another: that presidential elections and presidential administrations would orient American national politics at large. Providing a focal point of responsibility for the state of the nation, the president spurred political mobilization, the articulation of national issues, and the reconfiguration of political cleavages. Ironically, an office designed with an eye to bringing order and stability to national affairs became a magnet for popular controversies and an engine of change.
Most of what we know of the presidency today is a product of latter-day embellishments. Two historical processes, in particular, figure prominently in its development: the democratization of the polity and the nationalization of the affairs of state.
The democratization of the polity was first expressed institutionally in the form of party development, and this gave rise in the nineteenth century to a party-based presidency. The emergence of two national parties in the 1790s distilled rival interests across great geographical distances and coordinated their actions for presidential elections. The Twelfth Amendment, ratified in 1804, formally separated the election of president and vice president and thus facilitated the formation of a national party ticket. The emergence of nominating conventions in the 1830s brought state party organizations together for the purposes of articulating a common platform and selecting candidates for the national ticket that would rally coalition interests.
Through innovations such as these, the presidency was connected organizationally to local bases of political support and integrated into national politics. Parties eased constitutional divisions of power by providing a base of common action among like-minded partisans in the presidency and the Congress. Just as importantly, the president lent support to his party base by distributing the offices at his disposal in the executive branch to its constituent parts at the local level. The spoils system, which rotated executive offices with each transfer of power and filled them with partisan supporters of the new incumbent, complemented the convention system. As the convention tied the president to local party organizations, the spoils tied local party organizations to the president. Each side had powerful incentives to support the other, and the tenacious community of interests that was formed helped to hold America’s contentious democracy together as it sprawled across a continent.
The presidency was recast again over the early decades of the twentieth century. The growing interdependence of interests in industrial America and the heightened stakes of world affairs for national security rendered the prior integration of presidency into a locally based party politics increasingly anachronistic. Progressive reformers sought to break down the community of interest that had animated the party presidency and to construct in its place powerful national bureaucracies capable of managing the new problems posed by industrialization and world power. At the same time, presidents asserted leadership more directly. They began to take their case to the people. The hope was that by rallying public opinion behind their policy proposals, they would catalyze concerted action across the dispersed and divided institutions of the Washington establishment.
With presidents and national bureaucracies assuming a more prominent role in governing, the office of the presidency itself was refortified. The passage of the Executive Reorganization Act in 1939 and the establishment of the Executive Office of the President (EOP) gave incumbents new resources for managing the affairs of state. Agencies of the EOP such as the Bureau of the Budget (later the Office of Management and Budget), the Council of Economic Advisers, and the National Security Council were designed by Congress with two concerns in mind: to help the president tackle national problems holistically and to assure Congress that the president’s recommendations for national action were based on the candid advice of trained professionals and policy experts.
In recent decades, institutional developments have supported greater claims on behalf of independent presidential action. In the 1970s primary elections displaced national party conventions as the chief mechanism for candidate selection. This has encouraged candidates for the office to build national political organizations of their own. A personal political organization is then carried over by the party nominee into the general election, and it is transferred again by the successful candidate into the offices of the White House. One effect has been to weaken the mutual control mechanisms of the old party-based presidency. Another has been the downgrading of the statutory agencies in the EOP, which progressive reformers had relied upon to institutionalize neutral advice, interbranch coordination, and information sharing. The locus of presidential power has shifted into the inner sanctums of the White House itself, where the incumbent’s personal control is least likely to be contested and where the strategic orientation revolves almost exclusively around the president’s own political priorities. The reforms of earlier generations, which relied on extraconstitutional devices to ease the separation of powers and integrate the presidency into the rest of the government, are giving way to new assertions on behalf of the unitary executive—assertions that accentuate the separation of powers, expand the legitimate domain of unilateral action, and delimit the reach of checks and balances.
Though reformers may have believed that bolstering the institution of the presidency would make for more effective political leadership, the connection between institutional development and political performance remains weak. More resources have not ensured more effective leadership; great performances dot the presidential history in a seemingly random fashion. This has led many observers to stress the importance of character, personality, and skill in the exercise of presidential power, and appropriately so: as the purview of American national government has expanded and the office of the presidency has grown more resourceful, incumbent competence has been placed at a premium.
Still, it is hard to discern any coherent set of personal attributes that distinguishes politically effective leaders from politically ineffective ones. Franklin Roosevelt and Lyndon Johnson both possessed extraordinary political skills. But while one reconstructed the standards of legitimate national action, the other self-destructed. Andrew Jackson and Andrew Johnson were rigid, vindictive, and divisive leaders. But one succeeded in crushing his opponents while the other was crushed by them. A long look back over this history suggests that the variable political effectiveness of presidential leadership is less a matter of the personal attributes of the incumbent than of the political contexts in which he is called upon to act.
One of the more striking patterns to be observed in presidential history is that the leaders who stand out both for their political mastery in office and their reconstructive political effects were immediately preceded by leaders who are widely derided as politically inept and out of their depth: John Adams and Thomas Jefferson, John Quincy Adams and Andrew Jackson, James Buchanan and Abraham Lincoln, Herbert Hoover and Franklin Roosevelt, Jimmy Carter and Ronald Reagan. In each historical pairing, we first find a president whose actions in office seemed self-defeating and whose chief political effect was to foment a nationwide crisis of legitimacy; next we find a president whose actions in office proved elevating and whose chief effect was to reset the terms and conditions of legitimate national government. On further inspection, it will be observed that the first president in each pair led to power a governing coalition whose commitments were well established but increasingly vulnerable to identification as the very source of the nation’s problems. The second president in each pair led to power an untested political insurgency. Each used his powers to define the basic commitments of that insurgency and secure them in a new governing coalition. Difficult as it is to distill a shared set of personal attributes that clearly distinguishes the incumbents on one side of these pairings from those on the other, the common political circumstances faced on each side are unmistakable.
This pattern points back to fundamental attributes of the presidential office, qualities that have held sway despite the dramatic developments to be observed over time in the accoutrements of institutional power. The most telling of these is the one first revealed in the transfer of power from George Washington to John Adams: the inherently disruptive political impact of the election of a new president and installation of a new administration. In one way or another, new presidents shake things up. The constitutionally ingrained independence of the office, the provision for separate elections at regular intervals to fixed terms, the institutional imperative that each new incumbent assert control over the office in his own right—all this has made the presidency a persistent engine of change in the American political system. It is precisely because all presidents change things in one way or another that the power to change things has been less of an issue in the politics of leadership than the authority that can be found in the moment at hand for actions taken and changes instigated. Unlike institutional power, which has developed in stages over the long course of American history, authority of this sort can shift dramatically from one president to the next.
It is not surprising that incumbents affiliated with established interests have a harder time sustaining authority for their actions than do incumbents who come to power from the opposition. The opposition stance plays to the institutional independence of the presidential office and supports its inherently disruptive political effects; affiliation compromises independence and complicates the meaning of the changes instigated. This difference is magnified when, as in the case of our starkly contrasting historical pairs, the political commitments of the dominant governing coalition are being called into question by events on the ground. Affiliated leaders like the Adamses, Buchanan, Hoover, and Carter could neither forthrightly affirm nor forthrightly repudiate the political interests to which they were attached. Actions that reached out to political allies served to cast the president as a symptom of the nation’s growing problems while actions that charted a more independent course tended to alienate the president from his base of political support. Lacking the political authority to secure firm ground for their actions, these presidents found themselves in an impossible leadership situation. In turn, Jefferson, Jackson, Lincoln, Franklin Roosevelt, and Reagan drew great advantages from the political disaffection created by the hapless struggles of their predecessors. Leading to power an insurgency defined largely by its forthright repudiation of an old establishment, they were able to rearticulate first principles, to sustain freedom of action across a broad front, and, ultimately, to locate a common ground of new commitments that their supporters would find authoritative.
Historically, the weaker the political ties binding presidents to established standards of action, the stronger the president has been in tapping the independence of his office and delivering on its promise of a new beginning. How, then, will recent developments in the institution of the presidency come to bear on this general rule? On the one hand, we might expect that as all presidents become more independent in the resources at their disposal, any limitations imposed by the political affiliations they bring into office will diminish. On the other hand, as these new resources become more self-contained and detached from those of other institutional actors, we may also discover new limits on the presidency’s capacity to play its vital historical role in the renewal and reinvigoration of the American polity as a whole. In the past, the presidents who effectively reconstructed the terms and conditions of legitimate national government did not just break old bonds; they also forged new ties that knit the system back together. Whether a reintegration of that sort remains possible is an open question.
See also cabinet departments; Constitution, federal.
FURTHER READING. Peri E. Arnold, Making the Managerial Presidency: Comprehensive Reorganization Planning, 1905–1996, 1998; Steven G. Calabresi, “Some Normative Arguments for the Unitary Executive,” Arkansas Law Review 48 (1995), 23–104; James W. Ceaser, Presidential Selection: Theory and Development, 1979; William G. Howell, Power without Persuasion: The Politics of Direct Presidential Action, 2003; Harvey C. Mansfield, Taming the Prince: The Ambivalence of Modern Executive Power, 1989; Sidney M. Milkis, The President and the Parties: The Transformation of the American Party System since the New Deal, 1993; Terry M. Moe, “The Politicized Presidency,” in The New Direction in American Politics, edited by John E. Chubb and Paul E. Peterson, 235–71, 1985; Richard E. Neustadt, Presidential Power and the Modern Presidents: The Politics of Leadership from Roosevelt to Reagan, 1991; Arthur M. Schlesinger, Jr., The Imperial Presidency, 1973; Stephen Skowronek, The Politics Presidents Make: Leadership from John Adams to Bill Clinton, 1997; Charles C. Thach, The Creation of the Presidency, 1775–1789: A Study in Constitutional History, 1922; Jeffrey K. Tulis, The Rhetorical Presidency, 1988.
STEPHEN SKOWRONEK
The seventeenth and eighteenth centuries marked the transition from a powerful to a tamed executive branch of government, first in Great Britain and then in the United States. In the 1600s, England began the long, slow, and often violent process of wresting power away from the king and placing authority in the hands of the Parliament. This transition led to both the democratization of the polity and the control of executive power. The American Revolution of 1776 furthered this process by placing the executive branch within the constraints of a constitution (1787), a separation-of-powers system, the rule of law, and checks and balances. While the French Revolution that followed (1789) unleashed the best and the worst of democratic sentiments, it was to serve as a warning against unchecked power in the hands of the masses. The American Revolution occurred in the middle of these transitions and drew lessons from both the British and the French. From the British experience, Americans concluded that the divine right of kings and executive tyranny had to give way to a controlled executive; from the French experience, they learned that unleashing democracy without the rule of law and checks and balances could lead to a different but equally dangerous form of tyranny.
The American experiment in empowering as well as controlling executive powers within a web of constitutional and political constraints led to the creation of a rather limited presidency. Yet over time, this executive branch would grow and expand in power and responsibility; both necessity and opportunity would allow for the growth of presidential power.
As the United States became a world power, it also became more of a presidential nation. In the early years of the republic, however, with few international responsibilities and fewer foreign entanglements, the presidency would be a rather small institution, with limited powers, whose holders would struggle to establish a place of power within the new system of government.
Of the framers’ handiwork at the Constitutional Convention, the presidency was the least formed and defined of the three governing institutions. Thus, while the office may have been invented by the framers, it was brought to life by George Washington and his successors.
The framers were concerned that this new president not become a tyrant or monarch. Having fought a revolution against the hereditary monarchy of Great Britain, they wanted to create an executive branch that, as its name implies, would preside, execute the laws passed by the Congress, manage the government’s business, and be but one element of a three-part government. Designed to promote deliberation and not efficiency, this new government would not have an all-powerful figure at its head. But could such a government work in practice?
George Washington, the towering figure of his era, was, as first president (1789–97), a precedent setter. He was a man who could have been king, yet chose to be president. At the time of his inauguration, the American experiment was just that—an experiment; thus, everything that Washington did mattered. As he noted at the time, “I walk on untrodden ground.” Whatever Washington did would have an impact on what was to follow. One of the reasons the Constitutional Convention was willing to leave Article II, which created a presidency, somewhat vague and ill-defined was that the delegates knew Washington would occupy the office first, and they so trusted his republican sensibilities that they allowed him to set the tone for what the presidency would become.
Washington’s goal was to put the new office on a secure footing, create conditions in which a republican executive could govern, give the office some independence, and establish the legitimacy of the new republic. This was no small order. He attempted to be a national unifier at a time when divisions were forming in the new nation. His major effort toward this goal was to bring Alexander Hamilton and Thomas Jefferson, two bitter personal and ideological rivals, together in his first cabinet. This worthy goal would fail, however, and the clash between these two rivals was instrumental in forming the nation’s first political parties.
Washington exercised independent authority over treaty making, declared neutrality in the war between France and England, and asserted independence from Congress in managerial matters within the executive branch. For Washington, the president was not to be a messenger boy for Congress but instead an independent officer with powers and prerogatives. But if Washington set precedents for the ill-defined office, could he trust his successors to exercise such ambiguous authority with honor and dignity? The ambiguity that allowed him to invent an office could also potentially be used by less skilled men, with less character, to less benign ends.
John Adams followed Washington into the presidency (1797–1801), and his term in office was marked by internecine warfare between the president and members of his own Federalist Party. Adams’s disappointing presidency led to the shift in power from the Federalists to the Jeffersonians in the hotly contested election of 1800.
The next president to add markedly to the powers of the office was Thomas Jefferson (1801–9), who set an important precedent in that his inauguration marked the first of what would be many peaceful transfers of power from one party to another. Jefferson wanted to deceremonialize the presidency and make it a more republican institution. He did away with bowing, replacing the regal custom with the more democratic handshake; he abolished the weekly levee (a reception); ended formal state dinners; and abandoned the custom of making personal addresses to Congress. Jefferson also used the cabinet as an instrument of presidential leadership and exerted control over Congress by exploiting opportunities for party leadership.
When the opportunity to purchase the Louisiana Territory from France presented itself, Jefferson believed that he did not have the authority to act and had his cabinet draw up a constitutional amendment to give him such authority. But time was short, and an amendment might take months, if not years, to pass. Jefferson was confronted with a stark choice: act on his own questionable authority or lose one of the great opportunities for promoting the nation’s security and expanding its borders. Jefferson acted. It was one of the most important, if constitutionally questionable, acts in presidential history.
Perhaps Jefferson’s greatest contribution to presidential leadership was his linking of that which the framers separated: the president and Congress. Jefferson exercised a form of hidden-hand legislative leadership by inviting important members of his party in Congress to the White House for dinners, after which he would chart out a legislative strategy and agenda for his fellow party members to push through Congress. It was an effective tool of leadership and one that subsequent presidents exercised with mixed success.
The next three presidents faced something of a congressional backlash. James Madison (1809–17), James Monroe (1817–25), and John Quincy Adams (1825–29) had mixed success as presidents. The one key advance in presidential power in this era came during the Monroe administration, when, seeing European powers eyeing territories in the Americas, the president announced the Monroe Doctrine, a warning to European states to abandon imperial ambitions in the Western Hemisphere. Monroe issued this declaration on his own authority, and it helped increase the foreign policy powers of the presidency.
The next president, Andrew Jackson (1829–37), was one of the most cantankerous and powerful men ever to occupy the White House, and he became president at a time of great social change in America. By the 1820s, most states no longer required men to own property in order to vote, and nearly comprehensive white male suffrage had arrived. Jackson recognized the potential implications of this momentous change and exploited it to his advantage. He claimed that, as the only truly nationally elected political leader in the nation, he was elected to speak for the people. By making the president the voice of the people, and linking presidential power to democracy, Jackson greatly added to the potential power of the office. Merging the presidency with mass democracy, Jackson used what he saw as his mandate to lead (some might say bully) the Congress. Such a link between the people and the president was something the framers feared, believing that this could lead a president to manipulate public opinion and emerge as a demagogue, destroying the possibility of true statesmanship and deliberative government.
After Jackson, a series of weaker presidents followed, and Congress reasserted its constitutional prerogatives. This became a pattern in American history: a strong president, followed by congressional reassertion, followed by another strong president, another backlash, and again strong congressional leadership. The three presidents after Jackson—Martin Van Buren (1837–41); William Henry Harrison (1841), who died after a month in office; and John Tyler (1841–45), the first vice president to assume office on the death of a president—faced strong congressional leadership and were limited in what they could achieve.
James Polk (1845–49), however, changed that pattern. He demonstrated that a determined president could, in effect, start a war (the Mexican-American War). Polk manipulated events along the Texas-Mexican border, coaxing Mexican forces to attack, and by initiating aggressive action he preempted Congress and forced it to follow his lead. This initiative was yet another tool of leadership added to the arsenal of presidential power.
Several weaker presidents followed Polk, as the nation became swept up in sectional rivalry and the issue of slavery. Rather than presidential government, this was an era of congressional dominance. Presidents Zachary Taylor (1849–50), Millard Fillmore (1850–53), Franklin Pierce (1853–57), and James Buchanan (1857–61) were weak presidents in a difficult age.
James Buchanan is an especially interesting case in presidential weakness. Sectional difficulties were heating up, and the slavery issue was causing deep divisions between the North and the South. Many southern states were threatening nullification, if not outright secession, from the Union. Despite the grave threat to the nation’s future, Buchanan did not believe he had the constitutional authority to prevent the southern states from seceding, and his self-imposed restraint meant that the rebellious states would meet little resistance. Buchanan saw slavery as a moral evil, but he conceded to the South a constitutional right to allow slavery to exist. He tried, but failed, to chart a middle course. Although Buchanan was a strong unionist, his limited conception of presidential power prevented him from taking the steps necessary to stem the breakup of the nation.
Buchanan was a strict constitutional constructionist; he believed the president was authorized to take only those actions clearly spelled out in the Constitution. This conception of the office severely limited Buchanan’s efforts to stem the tide of secession. In his last message to the Congress, delivered on December 3, 1860, Buchanan stated, “Apart from the execution of the laws, so far as this may be practicable, the Executive has no authority to decide what shall be the relations between the Federal Government and South Carolina. . . .” Less than three weeks later, South Carolina seceded from the Union.
Buchanan left his successor, Abraham Lincoln, a seemingly unsolvable crisis. He told the incoming president, “If you are as happy, Mr. Lincoln, on entering this house as I am in leaving it and returning home, you are the happiest man in this country.” Lincoln, however, was animated by the challenge. His presidency (1861–65) would reinvent the office and take its power to new heights.
The presidency was invented at the end of the era of the aristocracy, yet before the era of democracy had fully arrived. The framers of the U.S. Constitution created a republican form of government with limited powers, under the rule of law, within a constitutional framework, with a separation-of-powers system. From 1789 to 1860 the presidency they invented proved viable, resilient, and—at times—quite effective. The evolution of the presidency from idea to reality, from blank slate to robust office, produced an institution that achieved stability and independence. This experiment in governing was built on and grounded in the Constitution, but that document was only its starting point. In reality, the presidency was formed less by the Constitution and more by the tug-of-war over power between the president and Congress as politics played out over the first 70 years of the republic. In this sense, the presidency was created in practice more than at the drafting table.
The presidency before 1860 did not need to be large, powerful, or imperial. The United States was a relatively small nation, somewhat geographically isolated from the troubles of Europe, with few entangling alliances and no position of world leadership. As a secondary world power, the United States did not have to flex its military muscle or make its presence known across the globe. This allowed the presidency to develop free from the pressures of global responsibilities.
Likewise, the domestic demands placed on the federal government were more limited in this period than they are today. The federal government did less, and less was expected of it. The media was localized and, prior to the advent of radio (and then television and the Internet), tools of communication were limited. But, although the presidency was “small” as an institution, the seeds were planted in this era for the rise and growth of presidential power that was to follow. That growth would await the advent of the twentieth century, and the rise of the United States as a world military, political, and economic superpower.
See also era of a new republic, 1789–1827; Jacksonian era, 1828–45.
FURTHER READING. Michael A. Genovese, The Power of the American Presidency, 1789–2000, 2001; Ralph Ketcham, Presidents above Party: The First American Presidency, 1789–1829, 1984; Sidney M. Milkis and Michael Nelson, The American Presidency: Origins and Development, 1776–2002, 2003; Michael P. Riccards, The Ferocious Engine of Democracy: A History of the American Presidency, vol. 1, 1995.
MICHAEL A. GENOVESE
The presidential elections of 1860 and 1932 brought to power two of America’s most important presidents: Abraham Lincoln and Franklin D. Roosevelt. These two elections also marked the beginning and the end of an era in which Republicans dominated the office of president. The power of the presidency ebbed and flowed during the period from 1860 to 1932, depending on the personality and ambitions of the officeholder and the challenges of his times.
In 1860 Abraham Lincoln, the candidate of the Republican Party, founded just six years earlier, won the presidency as the only antislavery contender in a crowded field. Lincoln opposed the expansion of slavery beyond its existing boundaries, which many slaveholders regarded as tantamount to abolition. Consonant with the party’s emphasis on activist government and economic development, Lincoln’s platform also called for homestead legislation to promote western settlement, protective tariffs, and internal improvements. Although he won only 40 percent of the popular vote, he carried every northern state. The new president would have the cheerless task of presiding over the near dissolution of the nation itself. Even before Lincoln took the oath of office on March 4, 1861, seven southern states had seceded from the Union. On April 12, 1861, the Civil War began with the bombardment of Fort Sumter in South Carolina.
As a wartime leader, Lincoln became the most activist president to that time in U.S. history, expanding the powers of the presidency and the importance of the national government. Lincoln assumed broad powers to quell what he believed was a lawless domestic insurrection. When he issued the Emancipation Proclamation that freed all slaves still held by the Confederacy, he committed the federal government for the first time to a decisive stand against slavery. He summoned the militia to defend the Union, ordered a blockade of the Confederacy’s ports, expanded the regular army beyond its legal limit, and spent federal funds without congressional approval. He suspended the writ of habeas corpus, which now meant that persons could be imprisoned without charges, and authorized army commanders to declare martial law in areas behind the lines. The Lincoln administration also instituted a graduated income tax, established a national banking system, facilitated the settlement of western lands, and began the nation’s first draft of soldiers.
Lincoln won reelection in 1864, aided by Union victories in the fall of that year. He would not, however, survive to preside over postwar Reconstruction, a task that would fall to lesser leaders. After Lincoln’s assassination in April 1865, Andrew Johnson, a Democrat and the former wartime governor of Tennessee, assumed the presidency. Lincoln had put Johnson on his ticket in 1864 to present a united front to the nation. Johnson’s tenure as president marked a significant decline in presidential power and prestige.
Johnson and the Republican Congress battled over Reconstruction policy, with Congress gaining the upper hand. Congress enacted civil rights laws, the Fourteenth Amendment guaranteeing “equal protection of the laws,” and the Fifteenth Amendment, which prohibited the denial of voting rights on grounds of race, color, or previous condition of servitude. Congress authorized the stationing of federal troops in the South to enforce Reconstruction. In 1868 Johnson became the first president impeached by the U.S. House of Representatives. The primary charge against him was that he had violated the Tenure of Office Act, which forbade him from firing cabinet members without the approval of the Senate. Conviction in the Senate failed by a single vote. However, Johnson’s political career was over, and a Republican, war hero Ulysses S. Grant, won the presidency in 1868. Johnson’s impeachment strengthened the hand of Congress relative to the presidency, but it also discredited the use of impeachment as a political weapon.
Grant proved to be a weaker president than he had been a general. He assumed office with no program of his own; he followed the precedents set by the Republican Congress. Despite a lack of enthusiasm for black equality, Grant supported measures to sustain and extend Reconstruction. He continued the circulation of paper money, reduction of the federal debt, protection of industry through tariffs, and subsidization of the railroads. But Reconstruction was already unraveling during Grant’s first term. Although the shell of federal power kept the South in Republican hands, the party that identified with black aspirations was unable to gain the support of white Southerners and thereby maintain its hold on the majority-white South.
Grant easily won reelection in 1872, but the advent of an economic depression in 1873 dashed his hopes for a bright second term. With Grant lacking ideas for reviving the economy, Congress acted on its own to expand the currency. But Grant, a president committed to the ideal of sound money, vetoed the legislation, leading to a paralysis of policy that endured through the end of the depression in 1878. Widespread corruption pervaded the Grant administration, which was also ineffectual in sustaining Reconstruction. By the end of Grant’s second term, white supremacist “Redeemers” had gained control of every southern state with the exceptions of Florida, Louisiana, and South Carolina.
The disputed presidential election of 1876 marked the end of Reconstruction and another low point in the powers of the presidency. Although Democratic candidate Samuel J. Tilden, the governor of New York, won the popular vote against Republican governor Rutherford B. Hayes of Ohio, the outcome of the election turned on disputed Electoral College votes in Florida, South Carolina, and Louisiana, the three states in which Republicans still retained power. With the U.S. Constitution silent on the resolution of such disputes, Congress improvised by forming a special electoral commission that ultimately consisted of eight Republicans and seven Democrats. The commission voted eight to seven on party lines to award all disputed electoral votes to Hayes, which gave him the presidency.
Hayes fulfilled his campaign promises to govern from the center and serve only a single term in office. By the time of the next presidential election, the nation had divided sharply into a solidly Democratic South that systematically moved to disfranchise black voters and a predominantly Republican North. The years from 1876 to 1892 were marked by a regional division of political power growing out of Civil War alignments and a national stalemate between Republicans who dominated the North and Democrats who controlled the “redeemed” South. The Republican Party was also deeply divided between reformers and the Stalwarts, party leaders intent upon exploiting the political system for private gain. The Republican convention of 1880 held 35 ballots before nominating a dark horse candidate and mild reformer, Representative James A. Garfield of Ohio. In a gesture of conciliation to the Stalwarts, he chose as his running mate Chester A. “Boss” Arthur, who had held the key patronage position of customs collector of the Port of New York. The Democrats countered with former Civil War general Winfield Scott Hancock.
Garfield’s hairbreadth victory over Hancock, by some 2,000 popular votes, began a series of four consecutive presidential elections in which the major party candidates were separated by an average of just 2 percent of the popular vote. Garfield had served less than four months as president before being shot by an assassin; he died of his wounds in September 1881. The newly inaugurated Chester Arthur disappointed his Stalwart friends by steering a middle course as president. Ironically, one of his crowning achievements was to sign into law the Pendleton Act of 1883, which established the federal civil service system. Arthur, however, had endeared himself to neither wing of the GOP and, in 1884, was denied his party’s renomination.
In 1884, with the victory of New York governor Grover Cleveland, the Democrats gained the White House for the first time in 24 years. Cleveland’s presidency harkened back to Andrew Jackson. Like Jackson, Cleveland believed in limited government, states’ rights, sound money, fiscal responsibility, free trade, and a president who protected the public from the excesses of Congress. Cleveland vetoed several hundred special pension bills for Union veterans and their dependents. In 1887 he also vetoed the Dependent Pension Bill, which would have mandated payments to most disabled veterans regardless of whether their disabilities resulted from wartime service. During his first term, Cleveland exercised the presidential veto more than twice as many times as all his predecessors combined.
In his reelection bid, Cleveland prevailed in the popular tally, but lost his home state of New York and with it the Electoral College. Republican president Benjamin Harrison sought to restore to his party the activism of Lincoln, with mixed success. Among his achievements was the McKinley Tariff of 1890, which substantially increased tariff rates. Harrison and other Republicans of the Gilded Age presented tariff protection as a comprehensive economic policy that would nurture industry, keep wages high, and strengthen domestic markets for agricultural goods. Harrison also gained passage of a pension bill similar to the legislation that Cleveland had vetoed. By 1893 pension payments would account for nearly half of the federal budget, and pensions would constitute the only substantial federal relief program until the New Deal of the 1930s. Congress also enacted the Sherman Anti-Trust Act, which outlawed corporate combinations or conspiracies “in restraint of trade or commerce.” Harrison failed, however, to steer new civil rights laws through Congress, and the Sherman Act proved ineffective in restraining the concentration of industry.
In 1892 Cleveland defeated Harrison in a rematch of the 1888 election and became the only American president to serve two non-consecutive terms. Cleveland’s election appeared to foreshadow a dramatic shift of party power in favor of the Democrats. The party seemed finally to have transcended the sectionalism of Civil War politics by combining its lock on the solid South with a revived ability to compete in the North. Cleveland’s second term, however, proved to be a disaster for both the president and his party. He would spend nearly all of his four years in office on a futile effort to combat the worst economic depression to date in the history of the United States. President Cleveland, captive of his commitment to hard money and limited government, refused to consider reforming the financial system, increasing the money supply, or providing relief for the unemployed. His only solution to the economic calamity was to maintain a currency backed by gold, which only exacerbated the monetary contraction that had depressed investments, wages, and prices.
Cleveland declined to run for a third term, and his party’s nominee, William Jennings Bryan, repudiated the president’s policies. Bryan began the transition to a Democratic Party committed to activist government. Bryan embraced such reform proposals as the free coinage of silver to inflate the currency, a graduated income tax, arbitration of labor disputes, and stricter regulation of railroads. Bryan also helped introduce the modern style of presidential campaigns by giving stump speeches across the nation in 1896. In turn, the Republicans, who vastly out-spent the Democrats, pioneered modern fundraising techniques.
Bryan lost in 1896 to Republican William McKinley, who benefited as president from the end of the depression in 1897. In domestic policy, McKinley steered new protective tariffs through Congress but otherwise followed a largely stand-pat approach to domestic matters. As president, however, he assumed the stewardship of a foreign empire and an expanded role in foreign affairs. As a result of winning the Spanish-American War of 1898, the United States acquired Puerto Rico, the Philippines, and several Pacific islands. It also established a protectorate over the Republic of Cuba. Although the United States would not endeavor to extend its formal empire of overseas territories in later years, it would frequently intervene in foreign nations to promote its values and interests. McKinley also expanded the powers of the presidency as he pioneered the steering of public opinion through a systematic program of press relations and speaking tours.
In 1900 McKinley won a rematch with Bryan, but in September 1901 he became the third president in 40 years to fall victim to an assassin. His successor—Theodore Roosevelt—was a man of different personality and ideas. Roosevelt was a showman with substance. During the remainder of McKinley’s term and a second term of his own, Roosevelt transformed the presidency, the nation, and the Republican Party. As president, Roosevelt was both a big-stick diplomat abroad and a reformer at home. He altered the agenda of the Republican Party by adding a progressive domestic agenda to the expansionist policies of his predecessor. Roosevelt became the first president to champion the use of federal power to protect the public interest and to curb abuses of the new corporate order. Ultimately, he would become the leader of progressive movements throughout the nation that worked toward improving social conditions, purifying American civilization, ending corrupt political practices, conserving resources, and regulating business. Roosevelt believed that reform was necessary to ameliorate the harshest consequences of industrial society and to thwart the appeal of radical groups.
Roosevelt also expanded the margins of presidential power. Prior presidents had typically deferred to constitutional ambiguities insofar as executive powers and privileges were concerned, preferring to err on the side of caution—and, by extension, weakness. Roosevelt, however, believed that he could do anything in the public interest that was not specifically prohibited by the Constitution. He intervened in disputes between labor and capital, used executive orders to conserve federal lands, attacked corporate monopolies in court, mediated foreign disputes, and aggressively acquired the territory needed to build the Panama Canal. His presidency not only served as a template for future ones but fittingly began what has been called the American century.
After retiring from the presidency, Roosevelt anointed a handpicked successor, his secretary of war, William Howard Taft, who thwarted Bryan’s third bid for the presidency. But in 1912 Roosevelt was so disappointed with the moderate Taft that he unsuccessfully challenged the incumbent for the Republican Party’s presidential nomination. The disappointed Roosevelt launched a third-party campaign that split the GOP and handed the election to Democratic candidate Woodrow Wilson, the progressive governor of New Jersey.
During his two terms in office, Wilson continued the liberal transformation of the Democratic Party that Bryan had begun in 1896. Under his watch, the federal government reduced tariffs, adopted the Federal Reserve System, established the Federal Trade Commission to regulate business, inaugurated social welfare programs, and joined much of the Western world in guaranteeing voting rights for women. Wilson also continued America’s increasing involvement abroad and led the nation victoriously through World War I.
Like Theodore Roosevelt, Wilson also redefined the presidency, making the office both more powerful and active than before. Wilson was more engaged than any prior president in crafting legislation and steering it through Congress. A month after his inauguration, he addressed a special session of Congress to press for tariff reform, becoming the first president since John Quincy Adams in the 1820s to appear as an advocate before the legislature. Wilson also restored the practice of delivering the State of the Union address in person to Congress—a practice that Thomas Jefferson had discontinued after his election in 1800. Wilson also seized the initiative in foreign affairs. He attempted to broker peace between warring factions in World War I, and when that effort failed, led the nation into war. With his Fourteen Points, Wilson also articulated an ambitious vision for a postwar era marked by open covenants of peace, arms reductions, freedom of the seas, fair trade, self-determination for all people, and an international organization to keep the peace.
America’s postwar future belonged not to Woodrow Wilson but to conservative Republicans. Wilson failed to gain acceptance in the United States or abroad for his ambitious peace plans, and his poor health precluded any hope for a third-term campaign. In the presidential election of 1920, Republican Senator Warren Harding of Ohio prevailed on a platform that promised a “return to normalcy” for Americans tired of liberal reform, war, and waves of Catholic and Jewish immigrants from southern and eastern Europe. Republicans would win all three presidential elections of the 1920s by landslide margins and maintain control over Congress during the period. Republican presidents and congresses of the 1920s would slash taxes, deregulate industry, restrict immigration, try to enforce Prohibition, and increase protective tariffs. In 1928, when Commerce Secretary Herbert Hoover decisively defeated New York Governor Alfred E. Smith—the first Catholic presidential candidate on a major party ticket—many believed the Democratic Party was on the verge of extinction.
The tide turned after the stock market crash of 1929 began the nation’s longest and deepest depression. Unlike Grant and Cleveland, Hoover responded vigorously to the economic downturn. He held conferences of business leaders, sought to boost farm prices through federal purchases of commodities, and expanded federal public works projects. He assented to the formation of the Reconstruction Finance Corporation, which made low-interest loans to banks, railroads, and insurance companies. But he opposed federal regulation of business and vetoed legislation enacted by Democrats and progressive Republicans in Congress for direct aid to individuals and families—the so-called federal dole. The Depression failed to respond to his remedies, which Americans believed were inadequate to the challenges of the times. Franklin D. Roosevelt, the Democratic governor of New York, trounced Hoover in the presidential election of 1932, which ended conservative control of the national government and marked the beginning of a new era of liberal politics in the United States.
See also Civil War and Reconstruction; conservative interregnum, 1920–32; Democratic Party; Reconstruction Era, 1865–77; Republican Party; sectional conflict and secession, 1845–65.
FURTHER READING. H. W. Brands, T. R.: The Last Romantic, 1997; Kendrick A. Clements, The Presidency of Woodrow Wilson, 1992; David Donald, Lincoln, 1995; Eric Foner, A Short History of Reconstruction, 1863–1877, 1990; Allan Lichtman, The Keys to the White House, 2008; Michael E. McGerr, The Decline of Popular Politics: The North, 1865–1928, 1986; Michael P. Riccards, The Ferocious Engine of Democracy: A History of the American Presidency, vol. 1, 1997.
ALLAN J. LICHTMAN
From 1932 to 2004, the powers and responsibilities of the presidency expanded together with the size and scope of the federal government. In 1932 the federal government spent less than $5 billion, including only about $700 million on the military, and about 100 employees worked in the White House. In 2004 the federal government spent more than $2 trillion, the military budget hit $400 billion, and 1,800 people worked in the White House. Yet even as the federal bureaucracy has exploded, the modern president has become a celebrity figure, prized for his ability to inspire and lead the American people. Presidential success, moreover, has not always followed from presidential power. To the contrary, modern presidents have often fallen victim to the overreach that accompanies the arrogance of power.
The presidency from 1932 to 2004 can be partitioned into two distinct eras. From 1932 to 1980, presidents took the lead in establishing the modern liberal state. From 1980 to 2004, conservative presidents put their distinctive stamp on government in the United States.
In 1928 Herbert Hoover became the third consecutive Republican to win a landslide election to the presidency. But after 1929, Hoover battled the baleful consequences of a worldwide depression that resisted every remedy he tried. During the Hoover years, the Democratic opposition established the precedent of the permanent political campaign, with no pause between elections or deference to the presidency. Patrick Hurley, Hoover’s secretary of war and political advisor, lamented that “our political opponents tell the story [and] we are on the defensive.” Henceforth, every American president would be compelled to engage in a perpetual campaign.
Liberal Democrat Franklin Roosevelt’s smashing victory over Hoover in 1932 profoundly changed both the presidency and the nation. During the new administration’s first 100 days, conservatives watched in dismay as Roosevelt seized command of the legislative agenda more decisively than any prior president. He steered through Congress 15 major bills that addressed the banking crisis; got lawmakers to repeal Prohibition; created substantial relief and public works programs; and established recovery programs for agriculture and industry. Roosevelt became the first president to sell his policies to the public through fireside chats on the radio and freewheeling, twice-weekly press conferences. He had the ability both to inspire Americans with soaring rhetoric and to make ordinary folk believe that he, their patrician leader, truly understood and could help solve their problems.
After Roosevelt won a second decisive victory in 1936, he completed a political realignment that established the Democrats as the nation’s majority party, sustained by a coalition of African Americans, Catholics, Jews, union members, and southern white Protestants. Scholars have aptly noted that FDR’s reforms were incremental, modestly funded, and designed to rescue the capitalist economy. Nonetheless, Roosevelt’s New Deal was a transforming moment in American life. It challenged old structures of power, threw up new ones, and created new social roles and opportunities for millions of Americans who worked for government, labored in offices and factories, or farmed for a living. It advanced American pluralism by offering jobs and power to Catholics and Jews and a few African Americans without disrupting local traditions. President Roosevelt shifted the center of American politics by taking responsibility for steering the economy, promoting social welfare, regulating labor relations, and curbing the abuses of business. Henceforth, Americans would expect their president, Democrat or Republican, to assure prosperous times, good jobs, high wages, and aid for those unable to fend for themselves.
During an unprecedented third term, Roosevelt led the nation into a world war that ended America’s isolation from political entanglements abroad. The president assumed broad emergency powers during the war, and new federal agencies like the War Production Board foreshadowed the creation of America’s military-industrial complex. It was not Franklin Roosevelt, however, but his successor, Harry S. Truman, who brought World War II to a successful conclusion. After FDR’s death in April 1945, Truman became the first vice president to assume the presidency in the midst of a major war. Truman was shocked, nervous, and unprepared for the presidency. He told reporters, “I felt like the moon, the stars, and all the planets had fallen on me.” Truman, however, had a very personalized view of history that idealized great men overcoming impossible odds. He acted decisively to use the atomic bomb to end World War II and led the nation into the cold war and the Korean War.
Like his celebrated predecessor, Truman expanded the powers of the presidency. He steered through Congress legislation that created the Central Intelligence Agency, the Joint Chiefs of Staff, the Department of Defense, and a National Security Council within the Executive Office of the President. He began the first program for screening the loyalty of federal employees. He entered the Korean War without a declaration of war or even token approval from Congress. Under Truman, America developed a military structure to sustain its global strategic and economic responsibilities and an “invisible government” that wielded global power with little scrutiny from Congress or the public. As libertarian Lawrence Dennis said in 1947, whether Republicans or Democrats held the presidency, America’s “holy war on communist sin all over the world commits America to a permanent war emergency.” Hereafter, “the executive has unlimited discretion to wage undeclared war anywhere, anytime he considers our national security requires a blow to be struck for good agin sin.”
Amid the burdens of a stalemated war in Korea, a series of administration scandals, and challenges to his anti-Communist credentials by Senator Joseph McCarthy of Wisconsin and other Republicans, Truman declined to seek another term. In 1952, Democrats nominated Illinois governor Adlai Stevenson. War hero Dwight D. Eisenhower competed for the Republican nomination against Senator Robert Taft of Ohio. In Eisenhower’s view, a Taft presidency would threaten national security because the senator still clung to isolationist ideas that would undo the collective security measures that contained communism and deterred World War III. In the last national convention to resolve a deadlock between candidates, Eisenhower won the nomination, and eventually the presidency, only after the convention voted to seat his Texas delegation rather than a competing delegation pledged to Taft.
Although mocked as a president who loved golf and loathed governing, Eisenhower carefully directed the policies and decisions of his administration, often keeping his influence hidden rather than overt. More than any prior president, Eisenhower relied on a chief of staff—Sherman Adams—as a gatekeeper and on the work of executive agencies such as the National Security Council. He also made extensive use of executive privilege to shield staff members from congressional oversight. Politically, Eisenhower promised to steer a middle course that weaved “between conflicting arguments advanced by extremists on both sides of almost every economic, political, and international problem that arises.” He worked to balance the federal budget and control inflation. He believed in protecting the private economy from government meddling but also refused to roll back liberal reforms. He ratified Truman’s approach to collective security and sought to contain communism without trampling civil liberties at home.
Eisenhower achieved considerable personal popularity, but his middle-way approach failed to break the Democrats’ hold on the loyalty of voters and the control of Congress. In 1960 Democrat John F. Kennedy won election as America’s first Catholic president. Kennedy’s campaign, with its creative use of television, polling, image making, and a personal organization that was independent of the regular party machinery, also pointed to the future of American politics.
Kennedy was the first president since Franklin Roosevelt to inspire Americans with his rhetoric. Unlike later presidents, he spoke idealistically of shared sacrifice and the need for ordinary Americans to contribute to the common good, as envisioned in his most memorable line: “Ask not what your country can do for you, ask what you can do for your country.” Kennedy steered the nation through the Cuban Missile Crisis, negotiated the first arms control treaty with the Soviets, and began the process that led to the end of segregation in America. Kennedy also accelerated the arms race with the Soviet Union and expanded America’s commitment to far-flung areas of the world. However, Kennedy might not have led the United States to escalate the Vietnam War. Shortly before his assassination in late November 1963, he was working on a plan that contemplated withdrawing one thousand troops initially and extracting most American forces from Vietnam by 1965.
If Kennedy was cool and detached, his successor Lyndon Johnson was engaged and passionate. Johnson could talk endlessly about politics and had little interest in anything else. He also had a burning ambition to make his mark on the world and to help the less privileged. Johnson used his physical size to influence others and achieve his aims. It was not unusual for Johnson to stand inches away from another, bodies touching and eyes locked. The “Johnson treatment” was almost hypnotic. Yet he could just as easily alienate anyone who rebuffed him or refused his gifts.
After crushing conservative Republican Barry Goldwater in the presidential election of 1964, Johnson used his legislative skills to engineer a major expansion of the liberal state. Johnson embedded the struggle for minority rights within the liberal agenda and, in another departure from the New Deal, he targeted needs—housing, health care, nutrition, and education—rather than groups such as the elderly or the unemployed.
But Johnson could not focus solely on domestic reform. Two days after his inauguration, Ambassador Maxwell Taylor cabled from Vietnam, “We are presently on a losing track and must risk a change. . . . The game needs to be opened up.” The pugnacious president would not display unmanly personal and national weakness, encourage Communist aggression, and damage America’s credibility by running from a fight. He began an air and ground war in Vietnam and ultimately dispatched some 550,000 American troops to the small Asian nation. Johnson promised the nation victory but privately told his cabinet that at best America could achieve a “stalemate” and force a negotiated settlement. Ultimately, the gap between inflated expectations and minimal achievements in Vietnam led to Johnson’s “credibility gap” with the American people. In 1967 a frustrated president pleaded with his generals to “search for imaginative ideas to put pressure to bring this war to a conclusion”—not just “more men or that we drop the Atom bomb.” Without military answers to the problems, on March 31, 1968, a dispirited Johnson told a national television audience that, rather than seeking reelection, he would work on bringing peace to Vietnam.
In 1962 Richard Nixon, after losing elections for the presidency and the governorship of California, said, “You won’t have Nixon to kick around anymore, because, gentlemen, this is my last press conference.” Six years later, Nixon completed the most improbable comeback in American history by narrowly winning the presidential election of 1968. Yet, from the early days of his presidency, Nixon exhibited the fear and suspicion that ultimately doomed a presidency marked by such accomplishments as the passage of pathbreaking environmental laws, the opening of relations with mainland China, and the deescalation of the cold war. Nixon told his staff that they were engaged in a “deadly battle” with eastern businessmen and intellectuals. He said, “No one in ivy league schools to be hired for a year—we need balance—trustworthy ones are the dumb ones.” Jews were especially “untrustworthy. . . . Look at the Justice Department. It’s full of Jews.” Few business leaders “stood up” for the administration “except Main Street biz.” Nixon brooded over his enemies in the press—“75 percent of those guys hate my guts”—and complained about needing to “keep some incompetent blacks” in the administration. “I have the greatest affection for them, but I know they ain’t gonna make it for 500 years.”
After engineering a landslide reelection in 1972, Nixon planned to bring the federal budget and bureaucracy to heel by refusing to spend funds appropriated by Congress and reorganizing government to expand presidential power. This power grab failed, however, as the Watergate scandal shattered Nixon’s second term. Watergate involved far more than the botched break-in at Democratic Party headquarters in the Watergate complex in Washington, D.C., in June 1972. As moderate Republican senator Edward Brooke of Massachusetts said, “Too many Republicans have defined that dread word ‘Watergate’ too narrowly. It is not just the stupid, unprofitable break-in attempt. . . . It is perjury. Obstruction of justice. The solicitation and acceptance of hundreds of thousands of dollars in illegal campaign contributions. It is a pattern of arrogance, illegality and lies which ought to shock the conscience of every Republican.”
After Nixon resigned in August 1974, Democrats swept the midterm elections and sought to curb what they saw as a runaway presidency. They limited the president’s war-making powers, expanded congressional input on the budget, and placed new restrictions on the CIA and the FBI. Such measures largely failed to return the balance of governmental power to Congress. Nonetheless, President Gerald Ford, whom Nixon had appointed vice president under authority of the Twenty-Fifth Amendment to the Constitution after the resignation of Spiro Agnew in 1973, struggled to govern after pardoning Nixon for Watergate-related crimes. However, conservative Republicans began rebuilding in adversity. They formed the Heritage Foundation to generate ideas, the Eagle Forum to rally women, new business lobbies, and Christian Right groups to inspire evangelical Protestants.
Although Democrat Jimmy Carter defeated Ford in 1976, he failed to cure an economy suffering from “stagflation” (an improbable mix of high unemployment, slow growth, and high inflation). Under Carter’s watch America also suffered humiliation abroad when he failed to gain the release of hostages taken by Islamic militants in Iran. In 1980 conservative Republicans found an appealing candidate in Ronald Reagan, the former Hollywood actor and two-term governor of California. Reagan decisively defeated Carter in 1980, running on a forthright conservative platform. He promised to liberate Americans from the burdens of taxation and regulation, rebuild the nation’s defenses, and fight communism with new vigor.
As president, Reagan delivered on most of his promises. He cut taxes, reduced regulation, and shifted government spending from domestic programs to the military. Like Roosevelt and Kennedy, Reagan emerged as a “Great Communicator,” able to inspire Americans with his words and style. During his first term, Reagan restored luster to a tarnished presidency and optimism to the nation. As journalist Bob Greene wrote, Reagan “manages to make you feel good about your country. . . . All those corny feelings that hid inside of you for so long are waved right out in public by Reagan for everyone to see—and even while you’re listing all the reasons that you shouldn’t fall for it, you’re glad you’re falling. If you’re a sucker for the act, that’s okay.”
Reagan cruised to easy reelection in 1984 after a troubled economy recovered during the election year. To borrow a metaphor from Isaiah Berlin, most modern American presidents are foxes who know a little about everything, poke their noses everywhere, and revel in detail. Reagan, however, was a hedgehog who knew a few things but knew them very well and left the management to others. Reagan’s detached style helped him weather the Iran-Contra scandal of 1986–87, which stemmed from the sale of arms to the terrorist state of Iran and the illegal diversion of the profits to the Contra fighters who were battling a left-wing government in Nicaragua. Although the “Reagan revolution” in domestic policy stalled during the second term, he achieved a major breakthrough in foreign policy, despite antagonizing his conservative supporters. Conventional thinkers on the right or left failed to understand how Reagan could weave together seemingly contradictory ideas. He was a warrior against evil and a man of peace who dreamed of banishing nuclear weapons from the Earth. He was a leader of principle and a pragmatist who understood better than his right-wing critics how the world had changed since 1980. In the teeth of conservative opposition, Reagan steered through the Senate a landmark treaty, negotiated with reformist Soviet leader Mikhail Gorbachev, to eliminate intermediate-range nuclear missiles in Europe. In 1988 Reagan foreshadowed the end of the cold war when he said that the Soviet “evil empire” was from “another time, another era.”
It was Reagan’s successor, George H. W. Bush, who presided over the collapse of the Soviet Empire. Bush took office with no guarantees that communism would collapse without bloodshed. He seemed shy and awkward but not overmatched, at least in foreign affairs. His realistic, steady-hand diplomacy prodded events forward without provoking a Soviet backlash. Bush drew a contrast between himself and the flamboyant Reagan when he said that, although conservatives told him to “climb the Berlin Wall and make high-sounding pronouncements . . . [t]he administration . . . is not going to resort to such steps and is trying to conduct itself with restraint.” Not a single Soviet soldier fired a shot to preserve communism in Eastern Europe in 1989. The Soviet Union crumbled in 1991, the same year that Bush led a multinational coalition that liberated Kuwait from the Iraqi armies of Saddam Hussein.
In 1992, however, Bush’s success in foreign policy could not overcome a sluggish economy, his lack of vision in domestic policy, and the appeal of his Democratic challenger, Bill Clinton. Clinton positioned himself as a “new kind of Democrat” armed, like Eisenhower, with a “third-way philosophy” that purported to transcend left and right.
However, the future of the Clinton administration turned on a battle over the president’s plan to guarantee health care coverage to all Americans. Representative Dick Armey of Texas privately told Republicans that the health care debate was “the Battle of the Bulge of big-government liberalism.” If the GOP could defeat Clinton’s health care plan, he said, “It will leave the President’s agenda weakened, his plan’s supporters demoralized, and the opposition emboldened. . . . Historians may mark it as the end of the Clinton ascendancy and the start of the Republican renaissance.”
Armey proved to be a reliable prophet. Republicans won the health care battle and regained control of both houses of Congress in 1994 for the first time in 40 years. The elections established Republicans as the majority party in the South, polarized the parties along ideological lines, and forestalled any major new liberal initiatives by the Clinton administration. While Clinton won reelection in 1996 and survived impeachment by the Republican-controlled House of Representatives, his party failed to regain control of Congress during his tenure or to win the presidential election of 2000.
Although president-elect George W. Bush lost the popular vote in 2000, his advisors rejected advice that Bush govern from the center. Dick Cheney, who was poised to become the most influential vice president in American history, said, “The suggestion that somehow, because this was a close election, we should fundamentally change our beliefs I just think is silly.” Even before the al-Qaeda terrorist attacks of September 11, 2001, Bush had moved domestic policy to the right and adopted a more aggressive, unilateralist approach to foreign affairs than his Democratic predecessor.
President Bush narrowly achieved reelection in 2004. However, his years in office revealed deep contradictions within his conservative movement. With the rebuilding of Iraq, a conservative administration that disdained social engineering assumed the most daunting such project in American history. Similarly, the president built a form of big government that contradicted conservatives’ rhetorical defense of limited government, states’ rights, fiscal responsibility, and individual freedom. Although conservatives had once rallied against the excessive presidential powers of Roosevelt, Truman, and Johnson, Bush greatly expanded executive prerogatives through unprecedented secrecy in government, expanded domestic surveillance of Americans, political control over the legal and scientific agencies of government, and the aggressive use of executive signing statements to reserve the option to override provisions of federal law. More forthrightly than any prior president, he asserted America’s right to wage preemptive war against potential enemies. President Bush’s terms in office exposed a paradox at the heart of the modern presidency. Although his tenure was a high-water mark in presidential power, it also added to a deep-seated distrust of the presidency that had begun with Johnson’s deceptive war and continued through the Watergate and Iran-Contra scandals and the impeachment of President Clinton. The Bush era ended with the election of Democrat Barack Obama, America’s first African American president, who entered office with a solidly Democratic U.S. House and Senate.
See also conservative ascendancy, 1980–2008; era of confrontation and decline, 1964–80; era of consensus, 1952–64; New Deal Era, 1932–52.
FURTHER READING. Robert Dallek, Hail to the Chief: The Making and Unmaking of American Presidents, 2001; Lewis L. Gould, The Modern American Presidency, 2003; William Leuchtenburg, In the Shadow of FDR: From Harry Truman to Ronald Reagan, 1983; Allan J. Lichtman, The Keys to the White House, 2008 Edition, 2008; Michael P. Riccards, The Ferocious Engine of Democracy: A History of the American Presidency, from Theodore Roosevelt to George W. Bush, 2003; Richard Shenkman, Presidential Ambition: How the Presidents Gained Power, Kept Power, and Got Things Done, 1999.
ALLAN J. LICHTMAN
The press has played a major role in American politics from the founding of the republic. Once subordinate to politicians and the major parties, it has become increasingly independent, compelling politicians and elected officials to develop new strategies to ensure favorable publicity and public support.
Newspapers in the colonial era were few in number and very different from what they would later become. Operated by individual entrepreneurs who produced a variety of printed materials, newspapers included little political news. Instead, their few columns were devoted to foreign news and innocuous correspondence that would not offend colonial officials or the wealthy patrons on whom printers relied for much of their business.
This began to change during the Revolutionary era, when printers were drawn into the escalating conflict with Great Britain. Adversely affected by the Stamp Act, many printers opened their columns to opponents of British rule and eventually became champions of American independence. Others sided with the British and often found themselves the objects of popular wrath. After the war most printers returned to publishing uncontroversial items, but an important precedent had been set. Politicians and elected officials recognized that they could use the press to win support for favored causes, and ordinary Americans now saw newspapers as a medium through which they might gain knowledge about public affairs and become active citizens. Believing that a free press could spur public enlightenment and political engagement, Congress passed laws that reduced periodical postal rates and encouraged publishers to share and reprint their correspondence.
By the early 1790s, then, most Americans considered newspapers vital to the health of the republic, providing a medium through which politicians and the public could communicate, learn about issues, and develop policies that were shaped by rational, informed debate.
Almost immediately, however, the appearance of a very different kind of journalism confounded this expectation. Sparked by divergent plans for the future of the new republic, competing factions emerged within George Washington’s administration and Congress, and by the mid-1790s each faction had established partisan newspapers championing its point of view. These publications were subsidized through patronage, and, though they had a limited circulation, the material they published was widely reprinted and discussed, and contributed to the establishment of the nation’s first political parties, the Federalists and the Democratic-Republicans.
Newspapers like Philip Freneau’s National Gazette, the most prominent Democratic-Republican organ, crafted distinctly partisan lenses through which readers were encouraged to view the world. Specializing in gossip, innuendo, and ad hominem attacks, these newspapers sought to make readers fearful about the intentions of their opponents. The strategy was quite effective at arousing support and mobilizing voters to go to the polls—after all, the fate of the republic appeared to be at stake. But it hardly made the press a fount of public enlightenment, to the dismay of many an observer.
The rabid and unexpected partisanship of the 1790s culminated in the passage by the Federalist-dominated Congress of the Sedition Act (1798), which was designed to throttle the most intemperate journalistic supporters of the Democratic-Republicans by criminalizing “false, scandalous, and malicious writing” that defamed government officials. Though resulting in relatively few prosecutions, the law sparked an uproar that benefited Thomas Jefferson and his allies and created a groundswell of support for the principle of freedom of the press. In the wake of Jefferson’s election to the White House, the act’s sponsors were unable to extend its life and it expired in March 1801.
The partisan press expanded in the early 1800s and reached the peak of its influence during the age of Jackson. Publishers, eager for government printing contracts, allied themselves with leading politicians and devoted their columns to publicizing their candidacies and policy aims. Newspaper publishers were particularly important in promoting Andrew Jackson, serving in his kitchen cabinet, and enabling him to develop a national following. Jackson’s rise to power prompted a dramatic polarization of newspapers, a divide that was essential to the emergence of the Democrats and the Whigs, truly national parties that were organized down to the grass roots.
Political parties were not the only organizations to establish newspapers. Religious denominations and reform societies also founded newspapers and journals of opinion and advocacy to attract supporters and influence public opinion. Evangelical groups were especially enterprising in their use of newspapers and other printed tracts to win converts and promote piety, and in the 1820s and 1830s these efforts often spilled over into broader campaigns to improve public morality. By constructing a network of affiliated publications that extended through much of the North and by developing narrative themes that were at once sensational and didactic, the religious and reform newspapers of the early 1800s were important pioneers of modern journalism and popular culture.
The most controversial reform organs were abolitionist newspapers like William Lloyd Garrison’s The Liberator, which was launched in 1831 and inspired many similar publications. Making use of the communications infrastructure developed by the religious press, abolitionist newspapers spread throughout the North and were sent en masse to cities and towns in the South in hopes of kindling opposition to slavery in the region. To suppress their dissemination, pro-slavery activists broke into post offices and seized and burned any copies they found. While this tactic was effective at minimizing the spread of antislavery sentiment, it angered and alarmed many Northerners, bolstering abolitionist claims that the republic was imperiled by the tyrannical designs of the “Slave Power.”
Despite their effectiveness in helping to build national parties and raising public awareness of social and political issues, the partisan press and reform press were widely criticized, and their limitations paved the way for a new kind of publication, a commercial mass-circulation press that first appeared in the 1830s. Inexpensive, widely accessible, and written in a colorful style designed to entertain as well as inform, newspapers like the New York Sun sparked a revolution in journalism as publishers, impressed by the commercial potential of an unabashedly popular journalism, rushed to establish similar publications. By opening their papers to advertising, publishers of the “penny press” discovered a lucrative source of revenue and freed themselves from dependence on political parties and patrons. They acquired an incentive to expand their readership to include working-class people, who had never been targeted by newspaper publishers, and to plow their profits into new technologies that allowed them to enlarge their publications and vastly increase the range of topics they covered.
Filling their columns with material of general interest, publishers like James Gordon Bennett, founder of the New York Herald, invented the modern concept of “news.” Much of it was about politics, but as Bennett and his rivals expanded coverage of other realms, they diminished the prominence and centrality of political news, which became one of many different kinds of reportage. The penny press also treated political news differently, and, as it gained readers, its perspective on politics and public affairs became more influential. Most publishers recognized the strength of partisanship and supported one party or another. Yet, because commercial imperatives encouraged publishers to reach across lines of class, ethnicity, and party, they often confined their partisanship to editorials, where it was less likely to offend.
This is not to say that the commercial mass-circulation press was objective. Editors and publishers—until after the Civil War, they were usually one and the same—had strong points of view and were not squeamish about inserting them into news reports. But their reliance on advertising allowed editors to aspire to a new role as tribunes of the public. In many instances, this meant standing by their party; in others, however, it meant criticizing it. Publishers like Bennett or Horace Greeley relished opportunities to display their independence and commitment to the public interest, a gambit inspired as much by commercial intent—the desire to attract a broad readership—as by disgust for the excesses of partisanship.
The trend toward a less partisan brand of political reporting was reinforced by the establishment of wire services like the Associated Press, which provided members with news from Washington and state capitals and eschewed partisanship out of commercial necessity. Under the influence of such services, by the 1880s, most political reporting had become standardized and largely descriptive, consisting of transcripts of speeches, legislative hearings, and official pronouncements. Most of this material was gathered by salaried wire service and newspaper correspondents, not, as in years past, by freelance correspondents who also worked for elected officials or the major parties. Just as their employers viewed themselves as independent of party, so too did increasing numbers of reporters, a trend that accelerated in the early 1900s when big-city newspapers became large businesses and journalists began to think of themselves as professionals.
But the commercial orientation of the mass-circulation press also pulled journalists in another direction, toward an emphasis on entertainment values. In the 1880s and 1890s, determined to attract more immigrant and working-class readers, publishers like Joseph Pulitzer and William Randolph Hearst created an even more popular and entertaining brand of journalism that emphasized scandal, personalities, and a wide variety of human-interest material. Political news in their publications became increasingly sensational, as editors focused on exposés of corruption and mounted highly publicized crusades. A similar imperative affected magazine journalism, inspiring the muckraking campaigns of Cosmopolitan and McClure’s. Spurred by the recognition that much of the public was sincerely concerned about social problems, the sensational press played a key role in building support for reform. By transforming politics into entertaining yet sordid morality tales, however, it may also have encouraged public cynicism and disengagement from politics.
Many middle-class and upper-class Americans were appalled by the new journalism, and, in response to its rise, Adolph Ochs transformed the New York Times into a more sober and “informational” alternative. In the early 1900s, other papers followed Ochs’s lead, creating a new divide between a popular journalism directed at lower middle-class and working-class readers and a self-styled “respectable” press that was targeted at the educated and well-heeled. But publishers of respectable newspapers, in response to consumer demand, were soon compelled to publish features and human-interest stories as well, blurring the differences between the two kinds of journalism. Indeed, by the 1920s, the most salient distinction between the sensational press and the respectable press was the relative restraint that the latter displayed when covering many of the same stories. Even in the respectable press, political news was designed to entertain as well as inform, an increasingly difficult mission now that newspapers had to compete for the public’s attention with motion pictures and other forms of popular culture.
The commercial transformation of journalism had a major impact on politicians and government officials. Not surprisingly, it forced them to present themselves in a less partisan light. Seizing the opportunities created by the spread of human-interest journalism, politicians sought to appear as “practical idealists,” party members who were nonetheless sensitive to broader concerns and willing to break with their party if necessary. To that end, politicians began to hire press secretaries and public relations advisors, usually former journalists who knew how to exploit the conventions of news gathering to gain favorable coverage for their clients. The federal government also began to employ public relations and advertising techniques, most notably in its effort to build public support for American involvement in World War I. Led by George Creel, an acclaimed journalist, the government’s campaign sparked an orgy of hyperpatriotism, demonstrating how mass-mediated propaganda could mold public opinion and potentially influence the democratic process.
Alarmed by the ease with which politicians, the government, and economic elites could use the press to get free publicity, journalists began to produce more interpretive and objective forms of news, particularly in their coverage of politics. This important trend was inspired by a belief that the world had become too complex for readers to understand on their own, and that the job of the press was to digest, analyze, and interpret events and developments so that the public could make sense of them. Newspapers hired columnists like Walter Lippmann and Dorothy Thompson to provide “expert” commentary on political events. Their columns were disseminated by syndicates to newspapers around the country, enabling them to reach a nationwide audience. Interpretive news also became a staple of the weekly newsmagazine Time. Founded in the early 1920s, it exerted a wide influence on newspaper as well as magazine journalism. The commitment of print media to interpretive news was reinforced by the spread of radio. As radio became the principal medium through which most Americans heard about late-breaking news, newspapers and magazines redoubled their emphasis on more detailed coverage.
By the 1940s, the press had become a vital institution, providing the public with information about candidates and elected officials, covering primary campaigns and nominating conventions, and offering regular reports on the vastly expanded operations of federal, state, and local government. The lens through which most of this news was filtered was the commercial, feature-oriented, largely nonpartisan perspective pioneered by the cheap popular press and further refined by more respectable organs and the major wire services. Despite persistent differences in tone among newspapers and magazines—differences attributable to their intended audiences—the political news that most Americans read was relatively standardized, a blend of interpretive reporting, analyses, commentary, and “personalized” features. Much of it was quasi-official in origin, inspired by the efforts of politicians and government officials to attract publicity or direct attention to a particular issue. More often than not, this was because the routines of news gathering encouraged close contact between journalists and official sources, an arrangement that made the news media a reliable platform for establishment points of view.
The spread of television in the 1950s did little to alter the situation. To display their commitment to the public interest, the major networks and local stations produced news and public affairs programming, covering events like the 1954 Army-McCarthy hearings and airing documentaries on issues like civil rights, the alienation of youth, and the arms race. However, it wasn’t until the expansion of the nightly network news broadcasts to 30 minutes in the early 1960s, and a similar increase in local news programming, that television became the main source of political news for most Americans. Making use of new video and satellite technologies that enabled extensive coverage of the era’s tumultuous events—from the Kennedy assassination to the Watts uprising to the debacle in Vietnam—television news broadcasts began to attract more viewers, sparking a gradual yet inexorable decline in newspaper readership. The centrality of television news became even more pronounced in the 1980s and 1990s with the rise of cable television and the popularity of news channels such as CNN.
The public’s growing reliance on television for news had significant repercussions. No less than in the print media, advertising and entertainment values came to dominate television at every level, encouraging network officials to decrease coverage of politics and make what little they offered more superficial and entertaining. Under pressure to make the news “pay,” a trend brilliantly satirized in the movie Network (1976), television journalists were forced to produce more human-interest stories and sharply limit airtime devoted to political stories that were overly complex or considered boring. With less airtime devoted to politics, politicians and elected officials gradually learned to express themselves in compact “sound bites,” a technique that placed a premium on wit and personality and further degraded public discourse. This shift was particularly evident in coverage of election campaigns. Aware of the power of television, candidates and their campaign managers in the 1960s made increasing use of modern advertising and public relations methods, a process in which candidates’ personalities were packaged and sold to the public. This trend was reinforced in the 1970s, when electoral reforms heightened the importance of primary elections, which the mass media, led by the major networks, transformed into highly publicized “horse races.”
Beginning in the late 1960s, the press became increasingly aggressive and adversarial. Disconcerted by the recognition that government and military officials had lied about the situation in Vietnam, journalists began to seek a wider range of sources and question official reports in a spirit not seen since the early 1900s. Journalists came to see themselves as public watchdogs responsible for exposing malfeasance and providing Americans with the truth. The publication of the Pentagon Papers, a top-secret history of the Vietnam War that was leaked to the New York Times, and the aggressive investigative reporting of the Washington Post that precipitated the Watergate scandal were perhaps the most famous manifestations of this trend. But it influenced many newspapers, magazines, television news departments, and individual journalists, inspiring them to express critical views of important institutions, including some of the large corporations for which they worked. To foster public debate, newspapers established op-ed pages and expanded their rosters of columnists, making editorial pages less uniform and predictable. By the early 1980s, however, much of the mainstream press had backed away from this adversarial stance. Chastened by charges of liberal bias, journalists went out of their way to appear fair to conservatives, and in the 1990s, eager to display their balance, they zealously contributed to the right’s persecution of Bill Clinton.
The post-1960s era also witnessed a tremendous increase in alternative sources of political news, as journalists sought new platforms to produce in-depth and adversarial reportage. These alternatives included underground newspapers, political magazines specializing in advocacy journalism, politically oriented network and cable talk shows like The McLaughlin Group, Crossfire, and The Daily Show, and innumerable political Web sites and blogs. Many of these sources specialized in ideologically inspired, openly subjective reporting and commentary, creating a new field where news and opinion were hopelessly blurred. Often targeted at true believers rather than a broad audience, they vastly enlarged the parameters of political discourse and made it easier for citizens to gain access to diverse views. This was clearly an advance over the more limited, elite-driven discourse that prevailed from the 1920s through the early 1960s, particularly given the ability of government and the corporate behemoths that own the major media to exploit the conventions of journalism to project their own self-interested versions of reality.
But it is an open question whether the welter of often fiercely partisan and ideologically driven sources of political news in America serves—or will ever serve—the larger cause of public enlightenment. Can a mode of discourse that is designed at least in part to entertain, in a popular culture marketplace that is fragmented into increasingly specialized niche markets, ever contribute to inclusive, constructive debate? Or will it reach its logical conclusion and become another species of show biz?
FURTHER READING. Gerald J. Baldasty, The Commercialization of News in the Nineteenth Century, 1992; Stuart Ewen, PR! A Social History of Spin, 1996; James Fallows, Breaking the News: How the Media Undermine American Democracy, 1996; Thomas C. Leonard, The Power of the Press: The Birth of American Political Reporting, 1986; David Paul Nord, Communities of Journalism: A History of American Newspapers and Their Readers, 2001; Geneva Overholser and Kathleen Hall Jamieson, eds., Institutions of American Democracy: The Press, 2005; Stephen Ponder, Managing the Press: Origins of the Media Presidency, 1897–1933, 1999; Michael Schudson, The Good Citizen: A History of American Civic Life, 1998; Paul Starr, The Creation of the Media: Political Origins of Modern Communications, 2004.
CHARLES L. PONCE DE LEON