V


veterans

American military veterans were important political actors even before the United States became a nation. All of the American colonies except Connecticut, Delaware, and Quaker Pennsylvania provided pensions for wounded veterans, with South Carolina even holding out the possibility of freedom as a benefit for enlisted slaves. And when it came to pensioning disabled veterans, as the Continental Congress did in 1776, there was ample precedent in the kingdoms of Europe—France, Britain, Prussia, and Russia all had national military hospitals and rudimentary disability pension systems in place by 1780. But the place of the veteran in a self-conscious republic was different and has evolved in unique ways since the Revolution.

When it came to “service pensions”—stipends paid simply on the basis of past military service—some early congressmen balked. In a republic, they argued, military service was a duty of citizenship. Service pensions represented the entering wedge for standing armies and political patronage, creating dependence and (since service pensions were typically limited to officers) invidious distinctions of rank. But under wartime pressures, Congress promised all troops lump-sum payments at the war’s end (1778), and Continental officers half-pay pensions for life (1780). When officers of General George Washington’s army encamped at Newburgh, New York, demanded full pensions or a cash equivalent as the price of their disbandment, Congress defused the situation with the Commutation Act of 1783, which provided officers with five years’ full pay instead of half-pay pensions for life. Noncommissioned indigent veterans, however, would not be pensioned until the Service Pension Act of 1818, and full service pensions did not arrive until 1832. State militiamen, who made up much of the estimated 232,000-man Revolutionary Army, were excluded from federal benefits entirely. Thus, at its outset the U.S. pension system drew distinctions between officers and men, federal and state troops, and three classes of the deserving: war invalids, indigent “dependents,” and soldiers whose only claim to benefits was service.

Continental Army veterans also received warrants for large tracts of land in the public domain, mainly in the Old Northwest Territory and the Southwest Territory, under acts of 1776 and 1780, while land-rich states such as Virginia and New York made grants of their own. Eventually, title to 2,666,080 acres was issued on the basis of Revolutionary War claims. But conflicting state land claims, wars with Native American nations, and a law that, for a time, restricted sales to 4,000-acre parcels made land warrants of small value to most veterans until the late 1790s, by which time most had been sold to speculators. The same thing happened to officers’ commutation certificates: by the time the federal government emerged from default in 1791, many officers had sold their certificates for as little as twelve and a half cents on the dollar.

Attitudes toward Continental veterans gradually evolved from republican worries about vice and patronage to widespread sympathy for their suffering in old age that made the 1818 pension act possible. But Revolutionary War service did not lead to public office (after George Washington, it took eight presidential elections before a military veteran was even nominated), and the few public Revolutionary commemorations tended toward the civic and classical rather than the military: Washington appears in a toga atop Baltimore’s Washington Monument (1829), while the Bunker Hill Monument in Charlestown, Massachusetts (1843), is a simple classical obelisk. The Society of the Cincinnati, an officers-only veterans’ hereditary order that had provoked fears of aristocracy at its founding in 1783, had declined to only six northeastern state chapters by 1832.

The short wars of the early nineteenth century did little to alter this picture. Individual veterans such as William Henry Harrison and Zachary Taylor parlayed military service into political careers, but veterans as such did not organize—there was no recognizable “veteran vote.” A tiny Society of the War of 1812 led a fitful existence from 1853 into the 1890s, when it became a hereditary order; the National Association of Mexican War Veterans was not formed until 1874 and lasted barely into the twentieth century. Veterans of both wars continued to benefit from federal land grants and invalid pensions, but dependent and service pensions came to War of 1812 veterans only in 1871 and to Mexican War veterans in 1887 (dependent) and 1907 (service). The pensioning of Mexican War volunteers was politically difficult because so many of them were Southerners who later fought for the Confederacy. The law finally enacted in 1887 excluded those whose wounds had been sustained in Confederate service and those politically disqualified under the Fourteenth Amendment.

Veterans in Politics

The Civil War marked a watershed in the relation of veterans to society and politics. Union veterans created mass organizations to lobby for their interests, the most powerful of which was the Grand Army of the Republic (GAR), organized in 1866. Nearly all northern towns had GAR posts, which functioned as centers of sociability, providers of charity, and promoters of a conservative brand of American patriotism in schools and on public holidays such as Memorial Day (first proclaimed nationally by GAR commander in chief John Logan in 1868). The GAR pushed the federal government and the states to erect soldiers’ homes (12 did so by 1888); won land grants and special treatment under the Homestead Act for veterans; persuaded some northern states to give Union veterans preference in hiring; and lobbied ceaselessly for the expansion of the Pension Bureau, whose new building (1882; now the National Building Museum) was the largest public space in Washington until 1971.

The largest impact of the Union veterans was on pension legislation, mainly the Arrears Act (1879) and Dependent Pension Act (1890). The latter granted a pension to nearly all Union veterans at a time when many were still in their fifties. By 1891 military pensions accounted for one dollar of every three spent by the federal government, and at the high point of the Civil War pension system in 1902, 999,446 persons, including widows and orphans, were on the rolls. By 1917 the nation had spent approximately $5 billion on Union Army and Navy pensions. Civilian reformers such as E. L. Godkin attacked the “unmanliness” of those who accepted service pensions and the many frauds riddling the system, especially under the administration of Benjamin Harrison and his profligate pension commissioner, James Tanner.

With more than 400,000 members at its height in 1890, the GAR had the political muscle to make itself heard. It created an organized bloc of voters in the North for which both parties—but mainly Republicans—contended by increasing pension benefits, authorizing expensive monuments (such as Grand Army Plaza in Brooklyn, New York, and the Soldiers and Sailors Monument in Indianapolis, Indiana), and sponsoring “patriotic” state laws such as those requiring schoolhouses to fly the American flag. The pension system also created reciprocal benefits for the Republican Party, because the need for revenue to pay pension benefits justified the high tariffs Republican industrialists sought. At the same time, by putting money into the hands of Union veterans, Republicans created a loyal voting constituency. Especially before the extremely close election of 1888, Democrats charged that the important swing states of Indiana, Ohio, and Pennsylvania were being flooded with expedited pension payments.

In the South, Confederate veterans organized late and at least partly in reaction to the GAR. Barred from federal entitlements, they obtained pensions and soldiers’ homes from most southern states, though such benefits were usually modest and limited to the disabled or indigent. Georgia’s Confederate disability pensions, for example, averaged only 44 percent of the federal rate in 1900. The United Confederate Veterans (UCV), founded in 1889, presided over a veterans’ culture that shifted ground from intransigence in the 1870s to a romantic “lost cause” sensibility in the 1890s that even Union veterans could accept with some reservations. In 1913 Union and Confederate veterans held a highly publicized reunion at Gettysburg, where President Woodrow Wilson declared the Civil War “a quarrel forgotten.”

The Spanish-American War produced only 144,252 veterans and two significant organizations: the United Spanish War Veterans (1904), which soon faded, and the Veterans of Foreign Wars (VFW), founded in 1913. Unlike the GAR and UCV, the VFW admitted veterans of subsequent wars, a policy that has allowed it to persevere into the present. On the other hand, the VFW policy of limiting membership to overseas veterans initially hampered the organization in competition with the more inclusive American Legion (founded in Paris in 1919). The Legion quickly became the most popular organization among the approximately 4 million American veterans of World War I. It adopted the GAR’s internal structure of local post, state department, and national encampment; consulted with aging GAR members on political strategy; and continued the Grand Army’s program of flag ritualism and “patriotic instruction.”

In other ways, however, the situation facing World War I veterans was markedly different. Whereas the soldiers of 1865 had come back mostly to farms, those of 1919 returned primarily to cities, where joblessness was acute and vocational training scarce. When Interior Secretary Franklin Lane in 1919 proposed the traditional remedy of land grants, he discovered that most arable public land had already been given away. Instead, like other belligerents (notably Germany and Britain), the United States began moving away from the nineteenth-century model of land grants, pensions, and warehousing veterans in hospitals and toward a model of physical rehabilitation and vocational training. All veterans’ programs were finally consolidated in the Veterans Bureau (1921), which in 1930 became the Veterans Administration (VA).

The pension system of 1919 also differed significantly from the expensive, politically partisan, and fraud-riddled Civil War regime. Instead of a system of entitlements, the War Risk Act of 1917 allowed World War I soldiers to pay small premiums in return for life insurance and future medical care. However, its early administration was corrupt, and veterans’ hospitals proved too few in number and unable to cope with late-developing disabilities such as shell shock. World War I veterans never did receive service pensions, and were eligible for non-service-related disability pensions only briefly, from 1930 to 1933. Instead, politicians opted for “adjusted compensation,” a bonus approved in Congress in 1924 and payable in 1945, designed to make up for wartime inflation and lost earnings. Veterans were seriously divided on the propriety of the bonus, even after Depression hardships drove 20,000 of them to march on Washington, D.C., in 1932 as a Bonus Army demanding its immediate payment. Although troops led by General Douglas MacArthur violently expelled the veterans from Anacostia Flats, the bonus was finally paid in 1936.

The worldwide labor and political strife following 1918 sharpened the hard edge of veteran nationalism. Faced with revolution in Russia, chaos in Germany, a general strike in Seattle, and race riots in cities such as Chicago, the American Legion came out immediately against “Bolshevism,” which it defined broadly to include every organization from the Communist Party to the League of Women Voters. Legion members helped break strikes of Kansas coal miners and Boston police in the summer of 1919, and from the 1920s through the 1950s, they made war on “Reds.” Legionnaires helped bring a House Un-American Activities Committee into existence in 1938 and aided FBI probes of subversion thereafter. The Legion was strongest in small cities and among prosperous members of the middle class; like the GAR, it left racial matters largely to localities, which in practice usually meant segregated posts.

World War II and After

By the time the 12 million veterans of World War II began to return home, the New Deal had institutionalized social welfare spending. Thus, despite the unprecedented scope of the GI Bill, officially titled the Servicemen’s Readjustment Act of 1944, few commentators expressed the worries about fraud and dependence that dogged earlier veterans’ relief. Drafted by former Legion commander Harry Colmery, the GI Bill provided World War II veterans with free college educations and medical care, unemployment insurance for one year, and guaranteed loans up to $4,000 to buy homes or businesses. Other legislation guaranteed loans on crops to veterans who were farmers, reinstituted vocational training, and tried to safeguard the jobs of those returning from war. GI Bill educational and vocational benefits proved so popular that they were extended to veterans of Korea and Vietnam and to peacetime veterans in the Veterans Readjustment Benefits Act (1966). By the 1970s, the VA was spending more than all but three cabinet departments; it achieved cabinet status in 1989. By 1980 benefits distributed under the GI Bill totaled $120 billion.

Unlike previous wars (but like subsequent conflicts in Korea and Vietnam), World War II was fought mainly by conscripts, which may have made taxpayers more willing to compensate veterans for their “forced labor.” These veterans were slightly younger and better educated than World War I veterans and demobilized into considerably less class and racial strife. For the first time, they also included significant numbers of women (the 150,000 members of the Women’s Army Corps and 90,000 Naval WAVEs), who qualified for GI Bill benefits. Still, most of the returnees joined older veterans’ groups rather than forming new ones: Legion membership, which had fluctuated between 600,000 and 1 million before 1941, reached a record 3.5 million in 1946, while VFW membership rose from 300,000 to 2 million. Among liberal alternative groups founded in 1945, only AMVETS reached 250,000 members.

Politically, World War II ex-soldiers did not vote as a recognizable bloc, but veteran status was an enormous advantage to those seeking office. Joseph McCarthy, for example, was elected to the Senate as “Tail Gunner Joe,” while magazine articles trumpeted John F. Kennedy’s heroism aboard his boat, PT-109. Every president from Dwight Eisenhower to George H. W. Bush (except Jimmy Carter, a postwar Naval Academy graduate) was a World War II veteran, a string unmatched since the late nineteenth century. In the postwar years, it became normal for the president to address the annual American Legion convention. Culturally, World War II veterans received heroic treatment in movies such as The Longest Day (1962) and in a neoclassical World War II Memorial (2004) that stands in stylistic contrast to the bleaker Vietnam (1983) and Korean War (1995) memorials on the Mall in Washington, D.C.

The Korean and Vietnamese conflicts produced none of the triumphalism that followed World War II. Although the VA continued to grow—its 2009 budget request was for $93.7 billion, half of it earmarked for benefits—the Legion and VFW struggled throughout the 1960s and 1970s to attract new veterans whose attitudes toward war and nationalism were ambivalent. After the Vietnam War, which the older organizations supported fiercely, young veterans felt alienated from a society that often ignored or pitied them. In 1967 they formed the first significant antiwar veterans group, the Vietnam Veterans Against the War (VVAW; after 1983, the Vietnam Veterans of America, VVA). With fewer than 20,000 members, the VVAW publicized war atrocities and lobbied for American withdrawal. In the 1980s, more Vietnam veterans began to join the Legion and VFW, bringing those groups up to their 2008 memberships of approximately 3 million and 2.2 million, respectively. The treatment of veterans suffering from post-traumatic stress disorder (PTSD) and exposure to defoliants in Vietnam became important issues for these organizations, often bringing them into conflict with the Defense Department.

In the years since Vietnam, relations between veterans and society have changed in several ways. Subsequent military actions in Grenada, Bosnia, Kuwait, and Iraq have been carried out by volunteer forces, making military experience more remote from the day-to-day lives of most Americans. About 15 percent of those serving in the military are now women, a fact that may eventually change the traditional veteran discourse about war as a test of masculinity—the dedication of the first memorial to military service women at Arlington National Cemetery in 1997 marked the change. And the gradual passing of the World War II generation has produced a wave of nostalgia for veterans of that war similar to the one that engulfed Civil War veterans toward the end of their lives.

See also armed forces, politics in the.

FURTHER READING. Mary R. Dearing, Veterans in Politics: The Story of the G.A.R., 1952; Ihor Gawdiak et al., Veterans Benefits and Judicial Review: Historical Antecedents and the Development of the American System, 1992, downloadable from http://handle.dtic.mil/100.2/ADA302666; David A. Gerber, ed., Disabled Veterans in History, 2000; William H. Glasson, Federal Military Pensions in the United States, 1918; Laura S. Jensen, Patriots, Settlers and the Origins of American Social Policy, 2003; Stuart McConnell, Glorious Contentment: The Grand Army of the Republic, 1866–1900, 1992; William Pencak, For God and Country: The American Legion, 1919–1941, 1989; John P. Resch, Suffering Soldiers: Revolutionary War Veterans, Moral Sentiment, and Political Culture in the Early Republic, 1999; Paul Starr, The Discarded Army: Soldiers after Vietnam, 1973; Dixon Wecter, When Johnny Comes Marching Home, 1944.

STUART MCCONNELL


Vietnam and Indochina wars

Apples and Dominoes

The makers of U.S. foreign policy after World War II often used analogies to explain to the American people the need for cold war commitments. In 1947, trying to justify to a skeptical Congress an outlay of economic and military aid to Greece and Turkey, Undersecretary of State Dean Acheson warned of the consequences of even a single Communist success in southeastern Europe: “Like apples in a barrel infected by the corruption of one rotten one, the corruption of Greece would infect Iran and all to the East.” By 1954 the cold war had gone global, and much of the foreign policy concern of President Dwight Eisenhower was focused on Southeast Asia and particularly Indochina (the states of Vietnam, Laos, and Cambodia), where the French were engaged in a struggle to restore their colonial status, despite the clear preference of most Indochinese to be independent of outsider control. The Vietnamese independence movement was led by Ho Chi Minh, a Communist of long standing.

Contemplating U.S. military involvement on the side of the French, Eisenhower told a press conference why Americans should care about the fate of Vietnam. “You have a row of dominoes set up and you knock over the first one, and what will happen to the last one is the certainty that it will go over very quickly. . . . The loss of Indochina will cause the fall of Southeast Asia like a set of dominoes.” Communism would not stop with just one or two victories. The loss to communism of strategically and economically important Southeast Asia would be a serious setback.

As U.S. involvement in the Vietnam War grew over the years, resulting in the commitment of hundreds of thousands of ground troops and the lavish use of airpower over North and South Vietnam after early 1965, the domino theory that underpinned it evolved. John F. Kennedy, who inherited from Eisenhower a significant financial commitment to the South Vietnamese government of Ngo Dinh Diem and discovered that U.S. military advisors and Central Intelligence Agency officers were hard at work on Diem’s behalf, publicly professed his faith in the domino theory. By the time Lyndon Johnson succeeded to the presidency, following Kennedy’s assassination in November 1963, Johnson’s foreign policy advisors, most of them inherited from Kennedy, had concluded that the dominoes were perhaps less territorial than psychological. Withdrawal from Vietnam would embolden America’s enemies and discourage its friends everywhere. The United States would lose credibility if it abandoned South Vietnam—no one, not even the European allies, would ever again take the Americans’ word on faith. The final domino was not, as one official put it, “some small country in Southeast Asia, but the presidency itself”; the American people would not tolerate a humiliating defeat (the word defeat always carried the modifier humiliating) in Vietnam and would cast out any president judged responsible for having allowed it to occur.

Political Constraints

American presidents faced a dilemma each election cycle during the war in Vietnam. As Daniel Ellsberg, a Pentagon advisor turned antiwar advocate, put it, presidents could not commit large numbers of American soldiers to combat in Southeast Asia, yet at the same time they were not supposed to lose the southern part of Vietnam to communism. All-out war was politically unacceptable. The perception that the United States had abandoned its friends to totalitarianism and reneged on its word was equally unacceptable. A president’s political effectiveness therefore depended on a war that could not be lost, but one whose costs remained low enough, in blood and treasure, to keep the American people from growing restive.

Sensing this, the presidents who confronted in Vietnam the rise of a nationalist-Communist independence movement attempted to keep the American role in the conflict out of the public eye. The war must be fought, said Secretary of State Dean Rusk (1961–69), “in cold blood,” by which he meant not remorselessly but with restraint. The first U.S. commitment, to what was then a French-backed regime in the south that was supposed to be an alternative to the popular Ho Chi Minh, was a small portion of aid provided by the administration of Harry Truman in 1950, mainly obscured by the far more visible war in Korea. Despite his warning about the dominoes falling, Eisenhower also limited U.S. involvement in Vietnam, shouldering aside the French after 1954 and sending funds and advisors to help Diem, but refusing to order airstrikes, send in combat troops, or otherwise stake his reputation on the outcome of the conflict. “I am convinced,” wrote the president, “that no victory is possible in that type of theater.”

Kennedy, too, refused to commit U.S. combat troops to Vietnam. He did secretly insert several hundred Special Forces to help train the South Vietnamese army. But he resisted pleas by some advisors, in early 1961, to intervene with force in Laos, and spurned recommendations by others to send a Marine “task force” to confront the Communists militarily. Even Johnson, who would authorize the introduction of over half a million troops into the war, tried to escalate quietly, never declaring war, seldom making a speech on the war, and issuing such announcements as there were about the escalations on Saturday afternoons, so as to avoid the full attention of the media.

American Public Opinion and the War

At first, and in good part because of his efforts to keep the war off the front pages, Johnson enjoyed high approval ratings for his policy of quiet but determined escalation. Gallup pollsters had asked a couple of questions about Indochina during 1953 and 1954, as French struggles hit the newspapers, then left the subject altogether until the spring of 1964, when they cautiously inquired, “Have you given any attention to developments in South Vietnam?” Just 37 percent of respondents said they had. That August, after Congress passed the Tonkin Gulf Resolution, which granted the president the latitude to conduct the war in Vietnam as he wished, Gallup asked what the country “should do next in regard to Vietnam.” Twenty-seven percent said, “[S]how [we] can’t be pushed around, keep troops there, be prepared,” 12 percent wanted to “get tougher” using “more pressure,” and 10 percent said, “[A]void all-out war, sit down and talk.” The largest percentage of respondents (30 percent) had “no opinion.”

By February 1965, the month in which the Johnson administration decided to begin systematically bombing targets in North Vietnam, over nine-tenths of those polled had heard that something was going on in Vietnam. Sixty-four percent thought the United States should “continue present efforts” to win the war, and of this group, 31 percent were willing to risk nuclear war in the bargain. (Just 21 percent thought that would be unwise.) Only late in 1965 did pollsters begin to ask whether Americans approved or disapproved of the president’s handling of the war. Fifty-eight percent approved, just 22 percent did not. It is worth noting that in Gallup’s annual poll seeking the world’s “Most Admired Man” (women were measured separately), Johnson topped the list for three years running through 1967.

Johnson’s anxieties about the war were growing nonetheless. The conflict was intrinsically vicious: once he committed U.S. combat troops to the fight, in March 1965, casualties began to mount. Johnson was also worried about the domestic political implications of a protracted, indecisive struggle, and even more a failure of nerve that would allow the Vietnamese Communists to take over South Vietnam and thereby revive talk that the Democratic Party was the refuge of appeasers, with himself as Neville Chamberlain. Above all, Johnson needed congressional and popular support for the legislation known collectively as the Great Society, his ambitious effort to undo racism and poverty in the United States. He worried most about the right wing. “If I don’t go in now and they show later that I should have,” he confided in early 1965, “they’ll . . . push Vietnam up my ass every time.” So he went in incrementally, hiding the war’s true cost and hoping to keep it on low boil while his reform agenda went through, anticipating that his level of commitment would be enough to keep the right satisfied, yet not too much to antagonize the left, which, by early 1965, had begun to object to the escalating conflict.

By mid-1966, Americans were evenly divided over whether they approved or disapproved of the war; 15 months later, by 46 percent to 44 percent, those polled said that the country had “made a mistake sending troops to fight in Vietnam.” In the meantime, the 1966 midterm elections favored the Republicans, who had ably exploited fears of urban violence, an indecisive war, and a protest movement by young people who seemed, to many Americans, unruly and unpatriotic.

Rising Protests, and the Tet Offensive

By the fall of 1967, American political culture had been affected by the emergence of the antiwar movement. Starting in the early 1960s with scattered concerns about the escalating war, then catalyzed by Johnson’s decisions to bomb and send troops in early 1965, the movement grew rapidly on college campuses, incorporating those who believed the war an act of American imperialism, pacifists, scholars of Asian history and politics, seekers of righteous causes, those who believed sincerely that Vietnam was a wicked war, and many who worried that they or someone they loved would be drafted and sent to the killing fields of Southeast Asia. Large as antiwar demonstrations had become by late 1967—100,000 people rallied against the war in Washington that October—it was never the case that most Americans were protesters or that most Americans sympathized with them.

Yet the protests unnerved Johnson and undeniably affected the nation’s political discourse. The president was afflicted by taunts outside the White House: “Hey, hey, LBJ, how many kids did you kill today?” Members of Congress found, at best, confusion about the war among their constituents, and, at worst, open anger about a conflict that seemed to be escalating without cause or explanation. Family members of key Vietnam policy makers asked increasingly sharp questions about the war at the dinner table, and the estrangement of friends who had turned against the war caused much grief; Secretary of Defense Robert McNamara’s wife and son developed ulcers as a result of the strain.

Late in 1967, concerned about his slipping poll numbers and the war’s possible damage to his domestic program, Johnson called his field commander home to reassure the public. General William Westmoreland was an outwardly confident man who believed that the killing machine he had built would grind the enemy down with superior training, firepower, and sheer numbers. The end of the war, Westmoreland told the National Press Club on November 21, “begins to come into view.” Optimism reigned throughout South Vietnam. Victory “lies within our grasp—the enemy’s hopes are bankrupt.” Early January poll numbers bounced slightly Johnson’s way. Then, in the middle of the night on January 30, the start of the Tet holiday in Vietnam, the National Liberation Front (NLF), sometimes known as the Viet Cong, and North Vietnamese soldiers launched a massive offensive against American and South Vietnamese strongholds throughout the south. Countless positions were overrun. The beautiful old capital of Hué fell, and thousands of alleged collaborators with the Saigon government were executed. Tan Son Nhut airbase, just outside Saigon, was shelled. Even the grounds of the American embassy were penetrated by enemy soldiers. The body count, the ghoulish measure of progress in the war demanded by Westmoreland’s strategy of attrition, rose dramatically on all sides.

In the end, the enemy failed to achieve its military objectives. The U.S. embassy grounds were retaken within hours. Tan Son Nhut remained secure, along with Saigon itself. Hué was restored to the South Vietnamese government, though only after days of brutal fighting and subsequent revenge killings. The southern-based NLF was badly cut up, its forces having been used as shock troops during the offensive. North Vietnamese officials admitted years later that they had overestimated their ability to administer a crushing blow to the South Vietnamese and American forces during Tet.

In the United States, however, the Tet Offensive seemed to confirm Johnson’s worst fears that an inconclusive, messy war would irreparably damage his political standing. It did no good to point out that the enemy had been beaten. Had not Westmoreland, and by extension Johnson himself, offered an upbeat assessment of South Vietnam’s prospects just weeks earlier? Had not Americans been assured, time after time, that their military was invincible, its rectitude unquestionable? The war came home in direct ways—the upsurge in the number of American casualties; television coverage of a South Vietnamese policeman summarily executing an NLF suspect on a Saigon street; the twisted logic of a U.S. officer who said, of the village of Ben Tre, “we had to destroy the town in order to save it.” Mainstream media reflected new depths of popular discouragement. In March, when Gallup asked whether the time had come for the United States to “gradually withdraw from Vietnam,” 56 percent agreed, and only 34 percent disapproved of the idea. Altogether, 78 percent believed that the country was making no progress in the war.

The Unmaking of Lyndon Johnson

Johnson’s first impulse was to toughen his rhetoric and stay the course. If the generals wanted more troops, they could have them. Secretary McNamara left the administration and was replaced by Johnson’s old friend and presumed supporter Clark Clifford. But the erosion of public support for the war now undercut official unity in Washington. Clifford conducted a quick but honest analysis of the situation in Vietnam and concluded that the military could not guarantee success, even with a substantial infusion of troops. Advisors who had previously urged a sustained commitment now hedged: former secretary of state Acheson, a noted hard-liner, told Johnson that American “interests in Europe” were in jeopardy, in part because the nation was hemorrhaging gold at an alarming rate. On March 12, the president was nearly beaten in the New Hampshire presidential primary by the low-key Eugene McCarthy, who challenged Johnson’s Vietnam War policies. The close call left Johnson despondent and concerned about a bruising primary campaign. To the surprise of even close friends, Johnson announced on March 31 that he would not seek reelection but would instead dedicate himself full time to the pursuit of a negotiated peace in Vietnam.

The war had wrecked Johnson and now tore apart his party. The Democrats split following Johnson’s withdrawal from the campaign. Some backed McCarthy. Many flocked to the candidacy of Robert Kennedy, who had turned against the war, but whose assassination on June 6 ended the dream that the Democrats would unite under a popular, socially conscious, antiwar leader. George McGovern entered the fray as a stand-in for Kennedy, but he lacked Kennedy’s charisma and connections. At its chaotic convention in Chicago that August, in which protesters clashed with Mayor Richard Daley’s notoriously unsympathetic police (and came off second best), the party nominated Johnson’s vice president, Hubert Humphrey. Loyal to Johnson, who nevertheless maligned him repeatedly, Humphrey at first clung to the discredited policy of toughness on Vietnam. But when polls showed that he was running behind the Republican nominee Richard Nixon (who claimed to have a “secret plan” to end the war) and only slightly ahead of right-wing independent George Wallace (who guaranteed a military victory in Vietnam if the Communists refused to come to heel), Humphrey shifted his position. He would, he said, stop bombing the north. His tone moderated. Despite what had seemed long odds, in the end Humphrey nearly won, falling roughly 500,000 votes short of Nixon in the popular tally.

The Nixon-Kissinger Strategy, and War’s End

Nixon had managed to rise from the political dead by cobbling together a coalition of white Americans fed up with disorder in the streets, with militant blacks and students, and with the indecisive war. Many of those previously loyal to the Democratic Party built by Franklin Roosevelt now defected to the Republicans. They included Catholics, ethnic voters in suburbs, and southern whites who were conservative on social issues but not quite ready to stomach the extremism of Wallace. They were part of what Nixon would call “the great silent majority,” whom he presumed wanted “peace with honor” in Vietnam, whatever that meant.

Nixon and his national security advisor, Henry Kissinger, set out to recast diplomacy and liquidate the war. They employed a two-track approach. They would escalate the bombing of enemy targets and initiate attacks in third countries (Cambodia and Laos) in order to demonstrate to Hanoi their determination not to be bullied. At the same time, they would attempt to negotiate an end to hostilities, in part by pursuing détente with the Soviet Union and China. Nixon coldly gauged that most of the domestic political cost of the war resulted from the death of American soldiers. He therefore proposed to substitute Vietnamese lives for American ones, through a program called “Vietnamization.” Nixon continued bombing, but he also funded an expansion of the South Vietnamese army (ARVN) and equipped it with the latest weapons. And in late 1969, he began to withdraw U.S. troops, reducing the need to draft more young men and thus removing the most toxic issue around which the antiwar movement had gathered.

Still, the protests did not end. People remained angry that the war dragged on, that Americans and Vietnamese continued to die in great numbers. The expansion of the American war into Cambodia and Laos brought renewed fury. The continued opposition to Nixon’s policies, information leaks concerning a secret campaign to bomb Cambodia in 1969, and the disclosure, by Daniel Ellsberg, of the secret Pentagon Papers study in 1971, inspired Nixon to establish the clandestine White House “Plumbers,” whose job it was to wiretap the telephones of the administration’s perceived enemies and even to burgle offices in search of incriminating information. The capture of a Plumbers’ team at the Watergate complex in Washington in June 1972 ultimately led to the unraveling of the Nixon presidency. The attempt to cover up illegal behavior would be traced to the Oval Office.

A peace treaty was signed in Vietnam in January 1973. Both North and South soon violated its terms. Weakened by the Watergate scandal, Nixon was unable to prevent Congress from closing the valve on U.S. support for the South Vietnamese government. And in the summer of 1973, Congress passed the War Powers Act, designed to prevent presidents from conducting war as high-handedly as Johnson and Nixon had done, at least without disclosure to the legislature. Nixon was forced to resign in August 1974. When, the following spring, the North Vietnamese launched a powerful offensive against South Vietnam, and the ARVN largely crumbled, the new president, Gerald Ford, and Henry Kissinger, now secretary of state, tried to get Congress to loosen the purse strings on military aid to the besieged Saigon regime.

But Congress, and the majority of Americans, had had enough. They felt they had been lied to about the war, and they refused to trust Ford. Stung by Vietnam, many Americans now turned inward, shunning the kinds of foreign policy commitments they had seemed to accept so readily during the first three decades of the cold war. Americans had grown skeptical about what critics called, in the aftermath of Vietnam, the “imperial presidency,” which acted without proper, constitutional regard for the wishes of the other branches of government or the temper of the people. The Vietnam War thus reshaped international and domestic politics, albeit temporarily. The continued usurpations of power by presidents since 1975 remind us that the supposed lessons of Vietnam—greater caution and humility in foreign affairs, greater transparency at home in the process by which war is undertaken—did not endure.

See also era of confrontation and decline, 1964–80; era of consensus, 1952–64; Korean War and cold war.

FURTHER READING. Jeffrey Kimball, Nixon’s Vietnam War, 1998; Walter LaFeber, The Deadly Bet: LBJ, Vietnam, and the 1968 Election, 2005; Fredrik Logevall, Choosing War: The Lost Chance for Peace and the Escalation of War in Vietnam, 1999; Robert S. McNamara, with Brian VanDeMark, In Retrospect: The Tragedy and Lessons of Vietnam, 1995; Andrew J. Rotter, ed., Light at the End of the Tunnel: A Vietnam War Anthology, 2nd ed., 1999; Robert D. Schulzinger, A Time for War: The United States and Vietnam, 1941–1975, 1997; Neil Sheehan, A Bright Shining Lie: John Paul Vann and America in Vietnam, 1988; Melvin Small, Johnson, Nixon, and the Doves, 1988; Marilyn B. Young, The Vietnam Wars, 1945–1990, 1991.

ANDREW J. ROTTER


voting

The right to vote in the United States has a complex history. In the very long run of more than 200 years, the trajectory of this history has been one of expansion: a far greater proportion of the population was enfranchised by the early twenty-first century than was true at the nation’s birth. But this long-run trend reveals only part of the story: the history of the right to vote has also been a history of conflict and struggle, of movements backward as well as forward, of sharply demarcated state and regional variations. It is also the story, more generally, of efforts to transform the United States into a democracy: a form of government in which all adults—regardless of their class, gender, race, ethnicity, or place of birth—would have equal political rights. That history took nearly two centuries to unfold, and in key respects, it continues unfolding to the present day.

Democracy Rising

The seeds of this history were planted in the late eighteenth century, as the new American nation was being forged out of 13 former colonies. The Founding Fathers were staunch believers in representative government, but few, if any, of them believed that all adults (or even all adult males) had the “right” to participate in choosing the new nation’s leaders. (Indeed, it was unclear whether voting was a “right” or a “privilege,” and the word democracy itself had negative connotations, suggesting rule by the mob.) The founders had diverse views, but most believed that participation in government should be limited to those who could establish their independence and their “stake” in the new society through the ownership of property. Many agreed with William Blackstone’s view that people “in so mean a situation that they are esteemed to have no will of their own” would be subject to manipulation if they had the franchise, while others feared that such persons might exercise their will too aggressively. Neither the original Constitution, ratified in 1788, nor the Bill of Rights, ratified in 1791, made any mention of a “right to vote.”

After some internal debate, the men who wrote that Constitution, meeting in Philadelphia in 1787, decided not to adopt a national suffrage requirement: they left the issue to the states. This was a momentous decision—it meant that the breadth of the right to vote would vary from state to state for most of the nation’s history, and the federal government would have to struggle for almost two centuries to establish national norms of democratic inclusion. Yet this decision was grounded less in principle than in pragmatic political considerations. By the late 1780s, each state already had a suffrage requirement, developed during the colonial era or during the first years of independence. The designers of the Constitution worried that any national requirement would be opposed by some states—as too broad or too narrow—and thus jeopardize the process of constitutional ratification. In Federalist 52, James Madison wrote, “One uniform rule would probably have been as dissatisfactory to some of the States as it would have been difficult to the convention.” The only allusion to the breadth of the franchise in the Constitution was in Article I, Section 2, which specified that all persons who could vote for the most numerous house of each state legislature could also participate in elections for the House of Representatives.

Thus, at the nation’s founding, suffrage was far from universal, and the breadth of the franchise varied from one state to the next. The right to vote was limited to those who owned property (ten states) or paid taxes of a specified value (New Hampshire, Georgia, and Pennsylvania)—only Vermont, the fourteenth state, had no such test. African Americans and Native Americans were expressly excluded by law or practice in South Carolina, Georgia, and Virginia. In New Jersey alone were women permitted to vote, and they lost that right in 1807.

Within a short time, however, popular pressures began to shrink the limitations on the franchise: the first two-thirds of the nineteenth century witnessed a remarkable expansion of democratic rights. These changes had multiple sources: shifts in the social structure, including the growth of urban areas; a burgeoning embrace of democratic ideology, including the word democracy; active, organized opposition to property and tax requirements from propertyless men, particularly those who had served as soldiers in the Revolutionary War and the War of 1812; the desire of settlers in the new territories in the “west” to attract many more fellow settlers; and the emergence of durable political parties that had to compete in elections and thus sometimes had self-interested reasons for wanting to expand the electorate. As a result of these social and political changes, every state held at least one constitutional convention between 1790 and the 1850s.

In most states, enough of these factors converged to produce state constitutional revisions that significantly broadened the franchise. By the 1850s, nearly all seaboard states had eliminated their property and taxpaying requirements, and the new states in the interior never adopted them in the first place. The abolition of these formal class barriers to voting was not achieved without conflict: many conservatives fought hard to preserve the old order. Warren Dutton of Massachusetts argued that, because “the means of subsistence were so abundant and the demand for labor great,” any man who failed to acquire property was “indolent or vicious.” Conservatives like New York’s chancellor James Kent openly voiced fears of “the power of the poor and the profligate to control the affluent.” But most Americans recognized that the sovereign “people” included many individuals without property. “The course of things in this country is for the extension, and not the restriction of popular rights,” Senator Nathan Sanford said at the 1821 New York State Constitutional Convention.

A number of states in the interior expanded the franchise in another way as well: to encourage new settlement, they granted the franchise even to non-citizens, to immigrants who had resided in the state for several years and had declared their “intention” to become citizens. In the frontier state of Illinois, for example, one delegate to the 1847 constitutional convention argued that granting the vote to immigrants was “the greatest inducement for men to come amongst us . . . to develop the vast and inexhaustible resources of our state.” Increased land values and tax revenues would follow. In the course of the nineteenth century, more than 18 states adopted such provisions.

However, the franchise did not expand for everyone. While property requirements were being dropped, formal racial exclusions became more common. In the 1830s, for example, both North Carolina and Pennsylvania added the word white to their constitutional requirements for voting. By 1855 only five states—all in New England—did not discriminate against African Americans. “Paupers”—men who were dependent on public relief in one form or another—suffered a similar fate, as did many Native Americans (because they were either not “white” or not citizens).

Still, the right to vote was far more widespread in 1850 or 1860 than it had been in 1790; and the reduction of economic barriers to the franchise occurred in the United States far earlier than in most countries of Europe or Latin America. The key to this “exceptional” development, however, resided less in any unique American ideology of inclusion than in two peculiarities of the history of the United States. The first—critical to developments in the North—was that property and taxpaying requirements were dropped before the industrial revolution had proceeded very far and thus before an industrial working class had taken shape. Massachusetts and New York, for example, dropped their property requirements in the early 1820s, before those two states became home to tens of thousands of industrial workers. (In Rhode Island, the one state where debates on suffrage reform occurred after considerable industrialization had taken place, a small civil war erupted in the early 1840s, when two rival legislatures and administrations, elected under different suffrage requirements, competed for legitimacy.) In the United States, unlike in Europe, apprehensions about the political power—and ideological leanings—of industrial workers did not delay their enfranchisement. The second distinctive feature of the American story was slavery: one reason that landed elites in much of the world feared democracy was that it meant enfranchising millions of peasants and landless agricultural laborers. But in the U.S. South, the equivalent class—the men and women who toiled from dawn to dusk on land they did not own—was enslaved and consequently would not acquire political power even if the franchise were broadened.

Indeed, the high-water mark of democratic impulses in the nineteenth-century United States involved slavery—or, to be precise, ex-slaves. In an extraordinary political development, in the immediate aftermath of the Civil War, Congress passed (and the states ratified) the Fifteenth Amendment, which prohibited denial of the right to vote to any citizen by “the United States or by any State on account of race, color, or previous condition of servitude.” The passage of this amendment—a development unforeseen by the nation’s political leadership even a few years earlier—stemmed from the partisan interests of the Republican Party, which hoped that African Americans would become a political base in the South; an appreciation of the heroism of the 180,000 African Americans who had served in the Union Army; and the conviction that, without the franchise, the freedmen in the South would soon end up being subservient to the region’s white elites.

The Fifteenth Amendment (alongside the Fourteenth, passed shortly before) constituted a significant shift in the involvement of the federal government in matters relating to the franchise—since it constrained the ability of the states to impose whatever limitations they wished upon the right to vote—and was also a remarkable expression of democratic idealism on the part of a nation in which racism remained pervasive. Massachusetts senator Henry Wilson argued that the extension of suffrage would indicate that “we shall have carried out logically the ideas that lie at the foundation of our institutions; we shall be in harmony with our professions; we shall have acted like a truly republican and Christian people.”

Hesitations and Rollbacks

Yet in a deep historical irony, this idealism was voiced at a moment when the tides of democracy were already cresting and beginning to recede. Starting in the 1850s in some states and accelerating in the 1870s, many middle- and upper-class Americans began to lose faith in democracy and in the appropriateness of universal (male) suffrage. An unsigned article in the Atlantic Monthly noted in 1879:

Thirty or forty years ago it was considered the rankest heresy to doubt that a government based on universal suffrage was the wisest and best that could be devised . . . Such is not now the case. Expressions of doubt and distrust in regard to universal suffrage are heard constantly in conversation, and in all parts of the country.

The sources of this ideological shift were different in the South than they were elsewhere, but class dynamics were prominent throughout the nation. In the Northeast and the Midwest, rapid industrialization coupled with high rates of immigration led to the formation of an immigrant working class whose enfranchisement was regarded as deeply undesirable by a great many middle-class Americans. The first political manifestation of these views came in the 1850s with the appearance and meteoric growth of the American (or Know-Nothing) Party. Fueled by a hostility to immigrants (and Catholics in particular), the Know-Nothings sought to limit the political influence of newcomers by restricting the franchise to those who could pass literacy tests and by imposing a lengthy waiting period (such as 21 years) before naturalized immigrants could vote. In most states such proposals were rebuffed, but restrictions were imposed in several locales, including Massachusetts and Connecticut.

The Know-Nothing Party collapsed almost as rapidly as it had arisen, but the impulse to limit the electoral power of immigrant workers resurfaced after the Civil War, intensified by huge new waves of immigration and by the numerous local political successes of left-leaning and prolabor third parties, such as the Greenback Labor Party and several socialist parties. “Universal Suffrage,” wrote Charles Francis Adams Jr., the descendant of two presidents, “can only mean . . . the government of ignorance and vice: it means a European, and especially Celtic, proletariat on the Atlantic coast; an African proletariat on the shores of the Gulf, and a Chinese proletariat on the Pacific.” To forestall such a development, proposals were put forward, sometimes with success, to reinstitute financial requirements for some types of voting (for municipal offices or on bond issues, for example) and to require immigrants to present naturalization papers when they showed up at the polls. Gradually, the laws that had permitted noncitizens to vote were repealed (the last state to do so, Arkansas, acted in 1926), and by the 1920s, more than a dozen states in the North and West imposed literacy or English-language literacy tests for voting. (New York, with a large immigrant population, limited the franchise in 1921 to those who could pass an English-language literacy requirement; the law remained in place until the 1960s.) Many more states tightened residency requirements and adopted new personal registration laws that placed challenging procedural obstacles between the poor and the ballot box. In the West, far more draconian laws straightforwardly denied the right to vote to any person who was a “native of China.”

In the South, meanwhile, the late nineteenth and early twentieth centuries witnessed the wholesale disfranchisement of African Americans—whose rights had supposedly been guaranteed by the passage of the Fifteenth Amendment. In the 1870s and into the 1880s, African Americans participated actively in southern politics, usually as Republicans, influencing policies and often gaining election to local and even state offices. But after the withdrawal of the last northern troops in 1877, southern whites began to mount concerted (and sometimes violent) campaigns to drive African Americans out of public life. In the 1890s, these “redeemers” developed an array of legal strategies designed expressly to keep African Americans from voting. Among them were literacy tests, poll taxes, cumulative poll taxes (demanding that all past as well as current taxes be paid), lengthy residency requirements, elaborate registration systems, felon disfranchisement laws, and confusing multiple box balloting methods (which required votes for different offices to be dropped into different boxes). These mechanisms were designed to discriminate without directly mentioning race, which would have violated the Fifteenth Amendment. “Discrimination!” noted future Virginia senator Carter Glass at a constitutional convention in his state in 1901. “That, exactly, is what this Convention was elected for—to discriminate to the very extremity of permissible action under the limitations of the Federal Constitution, with a view to the elimination of every negro voter who can be gotten rid of.” These strategies were effective: in Louisiana, where more than 130,000 blacks had been registered to vote in 1896, only 1,342 were registered by 1904. Once the Republican Party was so diminished that it had no possibility of winning elections in the South, most states simplified the practice of discrimination by adopting a “white primary” within the Democratic Party. The only meaningful elections in the South, by the early twentieth century, were the Democratic primaries, and African Americans were expressly barred from participation.

This retrenchment occurred with the tacit, if reluctant, acquiescence of the federal government. In a series of rulings, the Supreme Court upheld the constitutionality of the disfranchising measures adopted in the South, because they did not explicitly violate the Fifteenth Amendment. Meanwhile, Congress repeatedly debated the merits of renewed intervention in the South but never quite had the stomach to intercede. The closest it came was in 1890, when most Republicans supported a federal elections bill (called the Lodge Force bill), which would have given federal courts and supervisors oversight of elections (much as the Voting Rights Act would do in 1965); the measure passed the House but stalled in the Senate. As a result, the South remained a one-party region, with the vast majority of African Americans deprived of their voting rights for another 75 years. In both the North and (far more dramatically) the South, the breadth of the franchise was thus narrowed between the Civil War and World War I.

Half of the Population

While all of this was transpiring, a separate suffrage movement—to enfranchise women—was fitfully progressing across the historical landscape. Although periodically intersecting with efforts to enfranchise African Americans, immigrants, and the poor, this movement had its own distinctive rhythms, not least because it generated a unique countermovement of women opposed to their own enfranchisement who feared that giving women the vote could seriously damage the health of families.

The first stirrings of the woman suffrage movement occurred in the late 1840s and 1850s. Building on democratizing currents that had toppled other barriers to the franchise, small groups of supporters of female suffrage convened meetings and conventions to articulate their views and to launch a movement. The most famous of these occurred in 1848 in Seneca Falls, New York, hosted by (among others) Elizabeth Cady Stanton—who would go on to become one of the movement’s leaders for many decades. With roots in the growing urban and quasi-urban middle class of the northern states, the early suffrage movement attracted critical support from abolitionists, male and female, who saw parallels between the lack of freedom of slaves and the lack of political (and some civil) rights for women. Indeed, many leaders of this young movement believed that, after the Civil War, women and African Americans would both be enfranchised in the same groundswell of democratic principle: as Stanton put it, women hoped “to avail ourselves of the strong arm and the blue uniform of the black soldier to walk in by his side.” But they were deeply disappointed. The Republican leadership in Washington, as well as many former abolitionists, displayed little enthusiasm for linking women’s rights to the rights of ex-slaves, and they thought it essential to focus on the latter. “One question at a time,” intoned abolitionist Wendell Phillips. “This hour belongs to the negro.” As a result, the Fifteenth Amendment made no mention of women (and thus tacitly seemed to condone their disfranchisement); even worse, the Fourteenth Amendment explicitly defended the voting rights of “male” inhabitants.

Women also suffered a rebuff in the courts. In the early 1870s, several female advocates of suffrage—including Susan B. Anthony, a key leader of the movement—filed lawsuits after they were not permitted to vote; they maintained that the refusal of local officials to give them ballots infringed their rights of free speech and deprived them of one of the “privileges and immunities” of citizens, which had been guaranteed to all citizens by the Fourteenth Amendment. In 1875, in Minor v. Happersett, the Supreme Court emphatically rejected this argument, ruling that suffrage did not necessarily accompany citizenship and thus that states possessed the legal authority to decide which citizens could vote.

Meanwhile, activists had formed two organizations expressly designed to pursue the cause of woman suffrage. The first was the National Woman Suffrage Association, founded by Stanton and Anthony in 1869. A national organization controlled by women, its strategic goal was to pressure the federal government into enfranchising women across the nation through passage of a constitutional amendment akin to the Fifteenth Amendment. The second was the American Woman Suffrage Association, which aimed to work at the state level, with both men and women, convincing legislatures and state constitutional conventions to drop gender barriers to suffrage. For two decades, both organizations worked energetically, building popular support yet gaining only occasional victories. A federal amendment did make it to the floor of the Senate but was decisively defeated. By the late 1890s, several western states, including Utah and Wyoming, had adopted woman suffrage, but elsewhere defeat was the norm. In numerous locales, small victories were achieved with measures that permitted women to vote for school boards.

In 1890 the two associations joined forces to create the National American Woman Suffrage Association (NAWSA). Gradually, the leadership of the movement was handed over to a new generation of activists, including Carrie Chapman Catt, who possessed notable organizational skills and a somewhat different ideological approach to the issue. Older universalist arguments about natural rights and the equality of men and women were downplayed, while new emphasis was given to the notion that women had distinctive interests and that they possessed qualities that might improve politics and put an end to “scoundrelism and ruffianism at the polls.” Nonetheless, opponents of woman suffrage railed at the idea, denying that any “right” to vote existed and calling the suffrage movement (among other things) an attack “on the integrity of the family” that “denies and repudiates the obligations of motherhood.” Organized opposition also came from some women, particularly from the upper classes, who felt they already had sufficient access to power, and from liquor interests, which feared enfranchising a large protemperance voting bloc.

Resistance to enfranchising women also stemmed from a broader current in American politics: the declining middle- and upper-class faith in democracy that had fueled the efforts to disfranchise African Americans in the South and immigrant workers in the North. As one contemporary observer noted, “the opposition today seems not so much against women as against any more voters at all.” In part to overcome that resistance, some advocates of woman suffrage, in the 1890s and into the early twentieth century, put forward what was known as the “statistical argument”: the notion that enfranchising women was a way of outweighing the votes of the ignorant and undesirable. In the South, it was argued, the enfranchisement of women “would insure . . . durable white supremacy,” and, in the North, it would overcome the “foreign influence.” Elizabeth Cady Stanton, among others, joined the chorus calling for literacy tests for voting, for both men and women—a view that was formally repudiated by NAWSA only in 1909.

Still, successes remained sparse until the second decade of the twentieth century, when the organizational muscle of NAWSA began to strengthen and the movement allied itself with the interests of working women and the working class more generally. This new coalition helped to generate victories in Washington, California, and several other states between 1910 and 1915. In the latter year, reacting in part to the difficulties of state campaigns—and the apparent impossibility of gaining victories in the South—Catt, the president of NAWSA, embraced a federal strategy focused on building support in Congress and in the 36 states most likely to ratify an amendment to the federal Constitution. Working alongside more militant organizations like the Congressional Union and the National Woman’s Party, and drawing political strength from the growing number of states that had already embraced suffrage, NAWSA organized tirelessly, even gaining a key victory in New York with the aid of New York City’s Tammany Hall political machine.

The turning point came during World War I. After the United States declared war in the spring of 1917, NAWSA suspended its congressional lobbying, while continuing grassroots efforts to build support for a federal amendment. More influentially, NAWSA demonstrated the importance of women to the war effort by converting many of its local chapters into volunteer groups that sold bonds, knitted clothes, distributed food, worked with the Red Cross, and gave gifts to soldiers and sailors. This adroit handling of the war crisis, coupled with ongoing political pressure, induced President Woodrow Wilson, in January 1918, to support passage of a suffrage amendment “as a war measure.” The House approved the amendment a day later—although it took the Senate (where antisuffrage southern Democrats were more numerous) a year and a half to follow suit. In August 1920, Tennessee became the thirty-sixth state to ratify the Nineteenth Amendment, and women throughout the nation could vote.

Democracy as a National Value

The passage of the Nineteenth Amendment was a major milestone in the history of the right to vote. Yet significant barriers to universal suffrage remained in place, and they were not shaken by either the prosperity of the 1920s or the Great Depression of the 1930s. African Americans in the South remained disfranchised, many immigrants still had to pass literacy tests, and some recipients of relief in the 1930s were threatened with exclusion because they were “paupers.” Pressures for change, however, began to build during World War II, and they intensified in the 1950s and 1960s. The result was the most sweeping transformation in voting rights in the nation’s history: almost all remaining limitations on the franchise were eliminated as the federal government overrode the long tradition of states’ rights and became the guarantor of universal suffrage. Although focused initially on African Americans in the South, the movement for change spread rapidly, touching all regions of the nation.

Not surprisingly, such a major set of changes had multiple sources. World War II itself played a significant role, in part because of its impact on public opinion. Americans embraced the war’s explicitly stated goals of restoring democracy and ending racial and ethnic discrimination in Europe; and it was not difficult to see—as African American political leaders pointed out—that there was a glaring contradiction between those international goals and the reality of life in the American South. That contradiction seemed particularly disturbing at a time when hundreds of thousands of disfranchised African Americans and Native Americans were risking their lives by serving in the armed forces. Accordingly, when Congress passed legislation authorizing absentee balloting for overseas soldiers, it included a provision exempting soldiers in the field from having to pay poll taxes—even if they came from poll tax states. In 1944 the Supreme Court—partially populated by justices appointed during the New Deal and comfortable with an activist federal government—reversed two previous decisions and ruled, in Smith v. Allwright, that all-white primaries (and all-white political parties) were unconstitutional. Diplomatic considerations—particularly with regard to China and other allies in the Pacific—also led to the dismantling of racial barriers, as laws prohibiting Asian immigration, citizenship, and enfranchisement were repealed.

During the cold war, foreign affairs continued to generate pressure for reforms. In its competition with the Soviet Union for political support in third world nations, the United States found that the treatment of African Americans in the South undercut its claim to be democracy’s advocate. As Secretary of State Dean Acheson noted, “the existence of discrimination against minority groups in the United States is a handicap in our relations with other countries.” The impetus for change also came from within the two major political parties, both because of a broadening ideological embrace of democratic values and because the sizable migration of African Americans out of the South, begun during World War I, was increasing the number of black voters in northern states. Meanwhile, the postwar economic boom took some of the edge off class fears, while the technological transformation of southern agriculture led to a rapid growth in the proportion of the African American population that lived in urban areas where they could mobilize politically more easily. The changes that occurred were grounded both in Washington and in a steadily strengthening civil rights movement across the South and around the nation.

This convergence of forces, coupled with the political skills of Lyndon Johnson, the first Southerner elected to the presidency in more than a century, led to the passage in 1965 of the Voting Rights Act (VRA). The VRA immediately suspended literacy tests and other discriminatory “devices” in all states and counties where fewer than 50 percent of all adults had gone to the polls in 1964. It also authorized the attorney general to send examiners into the South to enroll voters, and it prohibited state and local governments in affected areas from changing any electoral procedures without the “preclearance” of the civil rights division of the Justice Department. (This key provision, section 5, prevented cities or states from developing new techniques for keeping African Americans politically powerless.) The VRA also instructed the Justice Department to begin litigation that would test the constitutionality of poll taxes in state elections. (Poll taxes in federal elections had already been banned by the Twenty-Fourth Amendment, ratified in 1964.) The VRA, in effect, provided mechanisms for the federal government to enforce the Fifteenth Amendment in states that were not doing so; designed initially as a temporary, quasi-emergency measure, it would be revised and renewed in 1970, 1975, 1982, and 2006, broadening its reach to language minorities and remaining at the center of federal voting rights law.

Not surprisingly, six southern states challenged the VRA in federal courts, arguing that it was an unconstitutional federal encroachment “on an area reserved to the States by the Constitution.” But the Supreme Court, led by Chief Justice Earl Warren, emphatically rejected that argument in 1966, maintaining that key provisions of the VRA were “a valid means for carrying out the commands of the Fifteenth Amendment.” In other cases, the Supreme Court invoked the equal protection clause of the Fourteenth Amendment to uphold bans on literacy tests for voting and to strike down poll taxes in state elections. In the latter case, Harper v. Virginia, the Court went beyond the issue of poll taxes to effectively ban—for the first time in the nation’s history—all economic or financial requirements for voting. Wealth, wrote Justice William O. Douglas in the majority opinion, was “not germane to one’s ability to participate intelligently in the electoral process.” In subsequent decisions, the Court ruled that lengthy residency requirements for voting (in most cases, any longer than 30 days) were also unconstitutional.

Three other elements of this broad-gauged transformation of voting rights law were significant. First was that in the late 1940s and early 1950s, all remaining legal restrictions on the voting rights of Native Americans were removed. Although the vast majority of Native Americans were already enfranchised, several western states with sizable Native American populations excluded “Indians not taxed” (because they lived on reservations that did not pay property taxes) or those construed to be “under guardianship” (a misapplication of a legal category designed to refer to those who lacked the physical or mental capacity to conduct their own affairs). Thanks in part to lawsuits launched by Native American military veterans of World War II, these laws were struck down or repealed.

The second development affected a much broader swath of the country: the Supreme Court, even before the passage of the Voting Rights Act, challenged the ability of the states to maintain legislative districts that were of significantly unequal size—a common practice that frequently gave great power to rural areas. In a series of decisions, the Court concluded that it was undemocratic “to say that a vote is worth more in one district than in another,” and effectively made “one person, one vote” the law of the land.

The third key change was precipitated by the Vietnam War and by the claim of young protesters against that war that it was illegitimate to draft them into the armed services at age 18 if they were not entitled to vote until they were 21. Congress responded to such claims in 1970 by lowering the voting age to 18. After the Supreme Court ruled that Congress did not have the power to change the age limit in state elections, Congress acted again in 1971, approving a constitutional amendment to the same end. The Twenty-Sixth Amendment was ratified in record time by the states.

The post–World War II movement to broaden the franchise reached its limit over the issue of felon disfranchisement. Most states in the 1960s deprived convicted felons of their suffrage rights, either for the duration of their sentences or, in some cases, permanently. Many of these laws, inspired by English common law, dated back to the early nineteenth century and were adopted at a time when suffrage was considered a privilege rather than a right. Others, particularly in the South, were expressly tailored in the late nineteenth century to keep African Americans from registering to vote.

The rationales for such laws had never been particularly compelling, and in the late 1960s they began to be challenged in the courts. The grounds for such challenges, building on other voting rights decisions, were that the laws violated the equal protection clause and that any limitations on the franchise had to be subject to the “strict scrutiny” of the courts. (Strict scrutiny meant that there had to be a demonstrably compelling state interest for such a law and that the law had to be narrowly tailored to serve that interest.)

The issue eventually reached the Supreme Court in Richardson v. Ramirez (1974), which held that state felon disfranchisement laws were rendered permissible (and not subject to strict scrutiny) by a phrase in the Fourteenth Amendment that tacitly allowed adult men to be deprived of the suffrage “for participation in rebellion, or other crime.” The meaning of “or other crime” was far from certain (in context it may have been referring to those who supported the Confederacy), but the Court interpreted it broadly in a controversial decision. In the decades following the ruling, many states liberalized their felon disfranchisement laws, and permanent or lifetime exclusions were consequently imposed in only a few states by the early twenty-first century. During the same period, however, the size of the population in jail or on probation and parole rose so rapidly that the number of persons affected by the disfranchisement laws also soared—reaching 5.3 million by 2006.

The significant exclusion of felons ought not obscure the scope of what had been achieved between World War II and 1970. In the span of several decades, nearly all remaining restrictions on the right to vote of American citizens had been overturned: in different states the legal changes affected African Americans, Native Americans, Asian Americans, the illiterate, the non–English speaking, the very poor, those who had recently moved from one locale to another, and everyone between the ages of 18 and 21. Congress and the Supreme Court had embraced democracy as a national value and concluded that a genuine democracy could only be achieved if the federal government overrode the suffrage limitations imposed by many states. The franchise was nationalized and something approximating universal suffrage finally achieved—almost two centuries after the Constitution was adopted. Tens of millions of people could vote in 1975 who would not have been permitted to do so in 1945 or 1950.

New and Lingering Conflicts

Yet the struggle for fully democratic rights and institutions had not come to an end. Two sizable, if somewhat marginal, groups of residents sought a further broadening of the franchise itself. One was ex-felons, who worked with several voting rights groups to persuade legislators around the country to pass laws permitting those convicted of crimes to vote as soon as they were discharged from prison. The second group consisted of noncitizen legal residents, many of whom hoped to gain the right to vote in local elections so that they could participate in governing the communities in which they lived, paid taxes, and sent their children to school. Noncitizens did possess or acquire local voting rights in a handful of cities, but the movement to make such rights widespread encountered substantial opposition in a population that was increasingly apprehensive about immigration and that regarded “voting and citizenship,” as the San Francisco Examiner put it, as “so inextricably bound in this country that it’s hard to imagine one without the other.” Indeed, many Americans believed that ex-felons and noncitizens had no legitimate claim to these political rights—although such rights were common in many other economically advanced countries.

More central to the political life of most cities and states were several other issues that moved to center stage once basic questions about enfranchisement had been settled. The first involved districting: the drawing of geographic boundaries that determined how individual votes would be aggregated and translated into political office or power. Politicians had long known that districting decisions (for elections at any level) could easily have an impact on the outcome of elections, and partisan considerations had long played a role in the drawing of district boundaries. The equation changed, however, when the Supreme Court’s “one person, one vote” decisions, coupled with the passage of the Voting Rights Act, drew race into the picture. This happened first when (as expected) some cities and states in the South sought to redraw district boundaries in ways that would diminish, or undercut, the political influence of newly enfranchised African Americans. The courts and the Department of Justice rebuffed such efforts, heeding the words of Chief Justice Earl Warren, in a key 1964 districting case, that “the right of suffrage can be denied by a . . . dilution of the weight of a citizen’s vote just as effectively as by wholly prohibiting the free exercise of the franchise.”

Yet the task of considering race in the drawing of district boundaries involved competing values, opening a host of new questions that federal courts and legislatures were to wrestle with for decades. What was the appropriate role for race in districting decisions? Should districting be color-blind, even if that meant that no minorities would be elected to office? (The courts thought not.) Should race be the predominant factor in drawing boundaries? (The courts also thought not.) In jurisdictions where African Americans constituted a sizable minority of the population, should legislatures try to guarantee some African American representation? (Probably.) Should the size of that representation be proportional to the size of the African American population? (Probably not.) Did nonracial minorities—like Hasidic Jews in Brooklyn—have similar rights to elect their own representatives? (No.) The courts and legislatures muddled forward, case by case, decade by decade, without offering definitive answers to questions that were likely insoluble in the absence of a coherent theory of representation or a widely accepted standard of fairness. Between 1970 and the beginning of the twenty-first century, the number of African Americans, Hispanics, and Asian Americans elected to public office rose dramatically, but clear-cut, definitive guidelines for districting without “vote dilution” remained out of reach.

A second cluster of issues revolved around the procedures for voter registration and casting ballots. Here a core tension was present (as it long had been) between maximizing access to the ballot box and preventing fraud. Procedures that made it easier to register and vote were also likely to make it easier for fraud to occur, while toughening up the procedures to deter fraud ran the risk of keeping legitimate voters from casting their ballots. By the 1970s, many scholars (as well as progressive political activists) were calling attention to the fact that, despite the transformation of the nation’s suffrage laws, turnout in elections was quite low, particularly among the poor and the young. (Half of all potential voters failed to cast ballots in presidential elections, and the proportion was far higher in off-year elections.) Political scientists engaged in lively debates about the sources of low turnout, but there was widespread agreement that one cause could be found in the complicated and sometimes unwieldy registration procedures in some states. As a result, pressure for reforms mounted, generally supported by Democrats (who thought they would benefit) and opposed by Republicans (who were concerned about both fraud and partisan losses). Many states did streamline their procedures, but others did not, and, as a result, Congress began to consider federal registration guidelines.

What emerged from Congress in the early 1990s was the National Voter Registration Act, a measure that would require each state to permit citizens to register by mail, while applying for a driver’s license, or at designated public agencies, including those offering public assistance and services to the disabled. First passed in 1992, the “motor voter bill” (as it was called) was vetoed by President George H. W. Bush on the grounds that it was an “unnecessary” federal intervention into state affairs and an “open invitation to fraud.” The following year, President Bill Clinton signed the measure into law, placing the federal government squarely on record in support of making it easier for adult citizens to exercise their right to vote. Within a few years, the impact of the bill on registration rolls had been clearly demonstrated, as millions of new voters were signed up. But turnout did not follow suit in either 1996 or 1998, suggesting that registration procedures alone were not responsible for the large numbers of Americans who did not vote. During the following decade, some Democratic activists turned their attention to promoting registration on election day as a new strategy for increasing turnout.

Meanwhile, Republican political professionals sought to push the pendulum in the opposite direction. Concerned that procedures for voting had become too lax (and potentially too susceptible to fraud), Republicans in numerous states began to advocate laws that would require voters to present government-issued identification documents (with photos) when they registered and/or voted. The presentation of “ID” was already mandated in some states—although the types of identification considered acceptable varied widely—but elsewhere voters were obliged only to state their names to precinct officials. Although Democrats and civil rights activists protested that photo ID laws would create an obstacle to voting for the poor, the young, and the elderly (the three groups least likely to possess driver’s licenses), such laws were passed in Georgia and Indiana in 2005, among other states. After a set of disparate rulings by lower courts, the Indiana law was reviewed by the Supreme Court, which affirmed its constitutionality in 2008 in a 6-to-3 decision. Although the Court’s majority acknowledged that there was little or no evidence that voting fraud had actually occurred in Indiana, it concluded that requiring a photo ID did not unduly burden the right to vote. In the wake of the Court’s decision, numerous other states were expected to pass similar laws. How many people would be barred from the polls as a result was unclear. In Indiana’s primary election in the spring of 2008, several elderly nuns who lacked driver’s licenses or other forms of photo ID were rebuffed when they attempted to vote.

Conflict over the exercise of the right to vote could still be found in the United States more than 200 years after the nation’s founding. Indeed, the disputed presidential election of 2000, between Al Gore and George W. Bush, revolved in part around yet another dimension of the right to vote—the right to have one’s vote counted, and counted accurately. Perhaps inescapably, the breadth of the franchise, as well as the ease with which it could be exercised, remained embedded in partisan politics, in the pursuit of power in the world’s most powerful nation. The outcomes of elections mattered, and those outcomes often were determined not just by how people voted but also by who voted. The long historical record suggested that—however much progress had been achieved between 1787 and 2008—there would be no final settlement of this issue. The voting rights of at least some Americans could always be potentially threatened and consequently would always be in need of protection.

See also civil rights; race and politics; woman suffrage.

FURTHER READING. Ellen DuBois, Feminism and Suffrage: The Emergence of an Independent Women’s Movement in America, 1848–1869, 1999; Ron Hayduk, Democracy for All: Restoring Immigrant Voting Rights in the United States, 2006; Alexander Keyssar, The Right to Vote: The Contested History of Democracy in the United States, 2000; J. Morgan Kousser, The Shaping of Southern Politics: Suffrage Restriction and the Establishment of the One-party South, 1880–1910, 1974; Jeff Manza and Christopher Uggen, Locked Out: Felon Disenfranchisement and American Democracy, 2006; Allison Sneider, Suffragists in an Imperial Age: U.S. Expansion and the Woman Question, 1870–1929, 2008.

ALEXANDER KEYSSAR