War and politics have always been entwined in American history. Politicians and pundits often complained that politics intruded upon war or war upon politics; generals should wage war without second-guessing by politicians, some demanded. The very phrase “war and politics” treats the two as separate if linked entities. But instead they mutually constitute each other, especially if “war” is understood expansively as all activities, institutions, and attitudes involving military power. War defined American politics, not merely intruded upon it, just as politics defined war. This relationship was not unchanging, however. It became more consequential to Americans and the world as the scale of American wars and military prowess grew.
The enmeshment of war and politics was inevitable: modern states exist in part to wage war, and war is an extreme form of politics. Some Americans hoped that the United States would be an exception, having witnessed European monarchies and dictatorships deploying war to address personal, imperial, ideological, or racial ambitions. But American exceptionalism was impossible.
The relationship began with the nation’s founding. Imperial and local politics sparked the war for American independence, and the nation owed its political existence to war. The Constitution set the terms of enmeshment by giving political authorities control of war and its institutions. Only Congress could declare war and fund it even if war were undeclared (undeclared wars erupted almost from the start). It also had power “to raise and support armies,” “maintain a navy,” and “make rules” for the armed forces. Its power of the purse was striking (beyond what most European legislatures possessed). Congress could not dictate deployment, strategy, and tactics, but it could set the fiscal terms that made those things possible. The president was made “commander in chief of the army and the navy” and of state militias “when called into the actual service of the United States” but not commander in chief of all government or of the nation, as some presidents and other political figures later presumed. The Constitution was notably briefer about the president’s war powers than about those of Congress. Whether brevity established an implicit check on presidential power or a tacit blank check for it periodically roiled American politics. Civilian secretaries of war and the navy (superseded after 1947 by a secretary of the new Department of Defense) headed cabinet departments, although their authority varied widely. Civilians also headed other agencies, proliferating in modern times, that had war-related functions, from the State, Justice, and Treasury departments at the nation’s founding to the Veterans Administration (1930), Central Intelligence Agency (1947), and the Department of Homeland Security (2003). Americans phrased these arrangements as imposing “civilian” control of the military, but “civilian” often meant “political.”
Most military officers accepted political control, even if they chafed at the decisions, forces, and strategies that politicians provided. Among advantages for officers, civilian supremacy made politicians—often more determined than officers to initiate war or wage it aggressively—more responsible for the controversial decisions and ghastly mistakes that war usually entails. Civilian supremacy prevailed in political dramas, as when Abraham Lincoln fired generals in the Civil War and President Harry Truman fired General Douglas MacArthur during the Korean War (an act condemned by some cold warriors as an intrusion on war making by politicians with a defeatist or subversive mentality). Some officers challenged political control during the cold war by favoring a nuclear first strike on the Soviet Union or China, conducting unauthorized spy missions, or broadcasting a Christian political agenda. But they were few and their damage to political control was minimal.
War was key to the creation of the American state—the activity it most expansively and expensively undertook. War justified its general scale and many of its specific measures, such as a federal income tax (imposed during the Civil War, reestablished in 1913, and greatly expanded during World War II), a welfare system pioneered through veterans benefits, and scientific and medical innovations by the armed forces. “War is the health of the State,” the radical critic Randolph Bourne declared in attacking America’s entry into World War I. Conservatives sometimes suspected much the same, as when they asserted that President Franklin D. Roosevelt sought to use rearmament and war to consolidate the New Deal and the Democratic Party’s hegemony (though World War II undermined both). Americans also expressed their political debt to war by justifying state initiatives as warlike in character: in 1933 FDR wanted the nation to respond to the Depression “as if invaded by a foreign foe”; later presidents waged “war” on crime, disease, drugs, and other challenges. Appeals to war as a model for national action overrode Americans’ chronic suspicions of an activist state. They also made war an even more political category.
Similarly, Americans imagined war as serving political purposes, not just the nation’s defense or expansion. War, it was said, would Americanize immigrants serving as soldiers (a favorite idea in World War I), crush subversive people and ideas, enhance social mobility (the military is “the greatest equal opportunity employer around,” President George H. W. Bush boasted in 1991), revive a flagging economy, spur technological development, and unite a fractious nation. Americans rarely assumed that the perils and benefits of war involved combat alone.
The actions and institutions of war propelled the nation’s development. Military force subdued Native Americans and conquered new lands. The Civil War aside, U.S. wars before at least 1941 were efforts at national aggrandizement, not survival; the Mexican-American War (1846–48) secured vast territories in the American West; the Spanish-American War of 1898 expanded America’s power and holdings in the Caribbean and the Pacific. The armed forces also promoted development by undertaking exploration, charting canal and railroad routes, building dams and ports, and cultivating technical expertise when the nation lacked other technological institutions (the U.S. Military Academy at West Point was the nation’s first engineering school, among its functions). The military’s developmental role was often highly visible, as with its building of the Panama Canal (completed in 1914) and its promotion of nuclear, aviation, space, and computer technologies (the Internet had origins in a Defense Department program). Sometimes the military remained in the background except during disaster, as in 2005, when Hurricane Katrina spotlighted the Army Corps of Engineers, the politically astute builder of much of America’s infrastructure. In these ways, the role of the armed forces was political as well as military.
War and politics also intersected in the scramble for military spending and the resulting connections between civil and military institutions. The desire of local authorities—mayors, legislators, businessmen—for military bases and contracts is an old story, though its scale swelled in the twentieth century. Often it meant overriding the military’s judgment about where to erect a base, whether to develop a weapon, or which company should build it. Many politicians who decried civilian interference in other military matters were masters of military pork. Especially in the twentieth century, military spending directed resources, population, and political influence toward southern and western states. From the start, the armed forces used civilian institutions for research, weapons, supplies, and services, and civilians went to work for the military while officers retired to jobs in defense or other businesses. The use of private organizations for quasi-military operations, an old practice by states and especially evident in America’s post-9/11 military conflicts, further blurred the line between “civilian” and “military.”
War and politics were also enmeshed in how Americans understood citizenship. African Americans’ Civil War military service helped underwrite the citizenship they acquired, in theory, during and after the war. Through America’s post-9/11 conflicts, non-citizens’ military service could expedite their citizenship. Since service was overwhelmingly a male activity—coerced during periods of conscription—citizenship was gendered in this way as in others. Beyond legal citizenship, war reshaped political and social citizenship. Military service in World War II strengthened citizenship for millions of Americans of eastern and southern European descent. Colin Powell, a career officer and Joint Chiefs of Staff chairman, became the highest-ranking African American in government as secretary of state (2001–5). The second woman in a cabinet post was Oveta Culp Hobby, World War II commander of the Women’s Army Corps and then secretary of health, education, and welfare (1953–55). Military service lubricated upward mobility and social change, especially as measured by prominent figures. Likewise, those barred from military service or denied equality in it felt treated as lesser citizens—hence the long struggle over racial desegregation of the armed forces, ordered by Truman in 1948; conflicts over women’s place in military service; and the 1993 battle over “gays in the military.”
Veterans also had housing, health, education, and employment benefits unavailable to most Americans, even as critics regarded those benefits as puny or badly managed. Veterans’ elevated status was hardly a constant. Anxiety periodically erupted that veterans, especially those who had seen combat, would return damaged, disruptive, or dangerous. White Southerners feared demobilized black Union troops, and freed slaves feared ex-Confederates in the Ku Klux Klan. Anxiety surged during World War II—one reason for the famous 1944 GI Bill, or Servicemen’s Readjustment Act, which gave unprecedented benefits to most of the war’s 16 million veterans. Anxiety resurfaced when Vietnam War veterans were often diagnosed with post-traumatic stress disorder. Still, the sense of veterans as especially entitled or deserving citizens generally prevailed, as evident in the number of presidential candidates who were veterans. Those candidates were especially successful when regarded as heroes in victorious wars—Washington, Jackson, Harrison, Taylor, Grant, Theodore Roosevelt, Eisenhower, Kennedy—although military service was no guarantee of electoral victory, as Nixon in 1960, Dole in 1996, Kerry in 2004, and McCain in 2008 learned.
The presidency underlines how war and politics constituted each other. War or its apparent threat underwrote the presidency’s expanding powers, both legal and illegal. Major crises, none more so than 9/11, produced presidential claims that constitutional provisions, international laws, and humanitarian norms should be altered, suspended, or reinterpreted. War also brought greater power for individual presidents, though less often lasting glory. Many Americans suspected presidents of using war for political gain, but presidents usually achieved little that endured. Those who secured lasting luster—Lincoln and Franklin D. Roosevelt—died before the emergence of the sour aftermath war usually presents. Woodrow Wilson’s presidency crumbled after World War I; Republicans seized the White House in 1921. Truman and the Democrats barely survived World War II’s aftermath and then succumbed to the Korean War; a Republican, General Dwight D. Eisenhower, became president in 1953. The Vietnam War and their handling of it destroyed the presidencies of Lyndon Johnson and Richard Nixon (his abuse of war powers shaped the Watergate crisis of 1973–74). Difficult wars readily damaged presidents, as George W. Bush found in the Iraq War, but even a triumphant Gulf War gave no lasting political traction to his father, defeated in 1992 by Bill Clinton. By the same token, three of the four post-1945 presidents who served two full terms—Eisenhower, Reagan, and Clinton—avoided costly war making and remained popular. War was as fickle in its political ramifications as in its conduct and global consequences, often overwhelming the state’s ability to control it and ensnaring presidents.
When war went badly, accusations of unwarranted political interference usually intensified. After Japan’s attack on Pearl Harbor on December 7, 1941, with American forces in retreat or defeat, critics charged that Roosevelt had connived to bring the United States into World War II or even to allow Japan’s attack to proceed. But few complained about political intrusion when later operations pushed by Roosevelt and his civilian advisors succeeded—the invasion of France in 1944, the bombing of Nazi Germany and Imperial Japan, and the use of the atomic bomb in August 1945. Likewise, suspicion of politicians’ meddling intensified after 1945, when U.S. wars in Korea, Vietnam, Iraq, and elsewhere had dubious or disastrous outcomes. Success quieted suspicion. Failure stoked it.
So did uncertainty. The cold war arms race, portending a possible nuclear cataclysm, sparked diverse suspicions. Nationalist conservatives charged that politicians denied the military the tools of victory, especially since the metaphoric “button” of push-button warfare lay under the president’s thumb. Cold war liberals suspected that generals like Air Force chief of staff Curtis LeMay schemed to control the button. The growth of a vast civilian bureaucracy aggravated suspicions. Complaints about the number-crunching oversight of the military imposed by Secretary of Defense Robert McNamara (1961–68) prepared the ground for accusations that civilians, especially McNamara and Johnson, hamstrung their generals. Left to their own devices, accusers charged, the generals might have won the Vietnam War.
Faith in a wise officer corps able to win wars ignored institutional realities, however. Top officers disagreed about whether and how to wage war, especially given their intense service rivalries: the Air Force, Navy, Army, and Marine Corps usually had competing schemes for victory, with each also divided within. Indeed, those differences invited or compelled civilian superiors to “intrude”—to meddle, mediate, or mandate. No fount of secret wisdom, the uniformed military mirrored, though inexactly, divisions about war elsewhere in the body politic.
As formal declarations of war ceased after World War II, Americans could readily imagine a distinction between war and politics. The last protracted debate about entering war came before Pearl Harbor, after which power to initiate war lay with the presidency, positioning itself as above politics, not with Congress, the more obviously (though substantively no more) political body. To varying degrees, military actions were undertaken by presidents operating in haste, secrecy, and deception—hardly circumstances conducive to freewheeling debate. Congress trailed behind with various measures, usually approved overwhelmingly, that authorized operations. Hence political contests erupted about the conduct and consequences of wars more than their initiation, especially since most wars seemed dissatisfying or disastrous. As earlier, civilians and service personnel, and voices abroad, charged U.S. forces and leaders with illegal, excessive, or misguided use of force or torture against enemy soldiers, civilians, and captives.
The practice of politicians and pundits criticizing presidents and generals was bipartisan, however partisan at any moment. Many Democrats tried to shield the White House from criticism while their party held the presidency during the Vietnam War, then turned against Nixon later in that war and against George W. Bush in the Iraq War. Likewise, Republicans, often defenders of presidential prerogative and military wisdom, second-guessed Clinton’s use of force amid Yugoslavia’s disintegration in the 1990s.
War and politics were above all interwoven because the United States waged war frequently. It became a foremost participant in the militarization of the modern world. Perhaps no state waged war more often, even though, or perhaps because, the cost in American lives was light (the Civil War aside), compared to that of its enemies and allies, even in a losing war like Vietnam’s. If the Founding Fathers hoped that war would play only an episodic role in American politics, the episodes became so numerous as to be nearly continuous, though often the incidents were not declared or widely recognized as wars (as in Nicaragua in the 1920s and Beirut in 1983).
Efforts to portray war and politics as distinct arenas were not persuasive, but they did express a desire to restrain the course by which war defined much of American life. Americans partook of the appeals and benefits of war, but they also remained suspicious of them.
See also Caribbean, Central America, and Mexico, interventions in, 1903–34; Civil War and Reconstruction; Korean War and cold war; Iraq wars of 1991 and 2003; Mexican-American War; Vietnam and Indochina wars; war for independence; War of 1812; World War I; World War II.
FURTHER READING. Andrew Bacevich, The New American Militarism: How Americans Are Seduced by War, 2005; John R. Gillis, ed., The Militarization of the Western World, 1989; Samuel P. Huntington, The Soldier and the State: Theory and Politics of Civil-Military Relations, 1957; Linda K. Kerber, No Constitutional Right to Be Ladies: Women and the Obligations of Citizenship, 1998; Richard H. Kohn, “How Democracies Control the Military,” Journal of Democracy 8.4 (1997), 140–53; Idem, ed., The United States Military under the Constitution of the United States, 1789–1989, 1991; Walter Millis, Arms and Men: A Study in American Military History, 1956; Michael S. Sherry, In the Shadow of War: The United States since the 1930s, 1995; Russell F. Weigley, The American Way of War: A History of United States Military Strategy and Policy, 1973.
MICHAEL SHERRY
Politics shaped the eight-year war for independence by the English colonies on the North American mainland (1775–83). In 1774 Britain decided to press its ten-year effort, against persistent and sometimes violent American resistance, to tighten imperial control of the colonies. When the British garrison in America was reinforced and its commander, Thomas Gage, appointed royal governor of Massachusetts, the colonies convened a “Continental” Congress in Philadelphia to coordinate resistance.
Few colonists expected or wanted war, but almost all opposed British policies, and preparations began by training the militia, a civilian military force established by law in all colonies except Pennsylvania. Britain had made Boston, the apparent heart of American resistance, its primary target for tough measures, and war began there in April 1775. British troops marching out of Boston to destroy a reported cache of arms met a small band of local militia, someone fired, and fighting continued through the day along the march route back to the town. As the news spread, thousands of New England militia rushed to Boston, blockading the British garrison, which was soon reinforced by troops and warships. In June, Gage tried to destroy a fortified rebel position near the town, on Bunker’s Hill. The rebels gave way after repeated frontal attacks, but the British troops suffered heavy losses.
By declaring the colonies in rebellion, King George III unleashed his army and navy against the Americans. Congress, concerned by wavering in the colonies south of New England, accepted the proposal of John Adams, a Massachusetts delegate, to appoint a Virginia delegate with military experience, George Washington, to command the militia force around Boston and to rename it the Continental Army. The king’s proclamation of rebellion, and the appointment of a Virginian to command the army, did much to unify the American effort.
Local committees, urged on by Congress, prepared for open warfare against the world’s strongest military power. American reinforcements marched to Boston through the summer, and Congress authorized an invasion of Canada through the Hudson-Champlain corridor, aimed at denying British forces a base for attack on the American frontier. Appealing to the French-speaking settlers of Canada as liberators, the invaders enjoyed success at first, but had faltered by the New Year, and then collapsed when ice melted on the St. Lawrence, and British reinforcements sailed up the river.
The stalemate at Boston ended when the British decided to evacuate the town in March 1776 and to move operations to New York City, where the Hudson River offered a highway deep into the American interior. Washington believed that defending New York was politically necessary to sustain morale, especially in the uncertain middle colonies (New York, New Jersey, and Pennsylvania), but with no navy, an inexperienced army, and the complex geography of the New York port area to contend with, his mission was almost impossible, though the repulse by South Carolina militia of a British seaborne attack on Charleston was encouraging. As new troops from the southward arrived, Washington raced to train them and prepare defenses around the heights of Brooklyn on western Long Island. Gage’s successor, William Howe, spent all summer on Staten Island building up a force of about 25,000, including hard-bitten mercenaries hired in Germany. Howe belonged to an aristocratic family affiliated with the parliamentary opposition, critical of the government’s handling of the American problem. Rumors at the time, and some historians since, have suggested that Howe, who had personally led the attack at Bunker’s Hill, lacked the stomach for killing English colonists who claimed their rights under the British Constitution.
Congress, aware after a year of fighting that all-out war was about to begin, declared the rebellious colonies in July 1776 to be the independent United States. When Howe began to embark his army in late August and move toward Long Island, Washington hoped for a fight like that at Bunker’s Hill in 1775. Instead, Howe flanked the American position and inflicted a crushing defeat on the Continental Army. But he stopped short at the Brooklyn bastion, where fleeing American soldiers had assembled, and prepared to lay siege. That night, in fog and darkness, Washington took his men across the river to Manhattan, where he rallied them for a gradual retreat up the island. Howe, not for the last time, pursued slowly, and the Americans even struck back once from the heights of Harlem, slowing but not stopping the British army. Washington stood again in late October at White Plains in Westchester County; again Howe won the battle but failed to destroy the American army. Washington then divided what was left of his force and took the larger part across the Hudson into New Jersey.
American morale, military and civilian, was at low ebb in the last weeks of 1776. Declaring independence had heartened many but had decisively alienated others who had supported American rights but would not break the British connection. These “Loyalist” Americans, estimated at a half-million throughout the former colonies, perhaps a quarter of the total white population, were especially numerous in the mid-Atlantic States. Another half-million Americans, slaves mostly in the South, often ran away to the British whenever possible; a smaller number of African Americans, about 5,000, served as soldiers on the American side. As Washington’s men straggled across New Jersey, with the British in cautious pursuit, popular support for Congress and the army virtually collapsed in that state.
At the Delaware River, Washington crossed into Pennsylvania with winter setting in. Rumors ran that he would soon be replaced by one of two former British officers who had joined the American cause, Charles Lee or Horatio Gates. Many, even those close to Washington, thought he had lost his grip and credibility. His letters begged Lee to join him quickly with the soldiers left east of the Hudson, and they show more desperation than determination. Instead of joining, Lee let himself be captured by a British patrol. Congress, fleeing Philadelphia for the safety of Baltimore, did not replace Washington and even granted him dictatorial powers to direct the war for six months. His choices were limited: disband the army and withdraw to resist in the western hills or gamble on a counterattack. He chose the latter. With some support from the Pennsylvania militia, he crossed the icy river in late December with soldiers who had spent the past year learning war the hard way, surprising and destroying a German brigade at Trenton, then withdrawing into Pennsylvania. Howe reacted quickly, and almost trapped Washington when he boldly recrossed into New Jersey. But Washington escaped to surprise and destroy another brigade at Princeton before heading for the protective hills of northern New Jersey.
The unexpected victories at Trenton and Princeton rallied the American cause, gave Washington solid support in Congress, and won notice and credit overseas, where Congress was seeking help. Colonial stocks of munitions, plus some captured ones, were enough for the first year, but foreign aid was vital to continuing the war. France was a historic enemy, but it was the only European power likely to help the rebellious British colonies. Benjamin Franklin of Pennsylvania, with a wealth of experience abroad and already known in Europe as a rustic genius, arrived in France just as Washington struggled on the Delaware. Clandestine shipments of French military supplies to America were already underway, but Franklin would seek more as well as money and ships.
The year 1777 was decisive. The British planned to invade from Canada and to use most of their New York force to take the rebel capital of Philadelphia. Congress let Washington enlist three-year volunteers to rebuild his army, with which he tried to defend Philadelphia. His defeated army crept back to a winter camp at Valley Forge, not far from the occupied capital. Gates, in the north, managed to stall the invading army from Canada at Saratoga, on the upper Hudson, and compel it to surrender. Washington’s tenacity, plus American victory at Saratoga, induced France to ally with the United States and go to war with Britain early in 1778.
Congress rejected proffered British concessions, Howe was recalled and replaced by Henry Clinton, and British leaders revised their strategy. France was now the main enemy, and the American war had to be coordinated with the protection of the valuable West Indies. British troops evacuated Philadelphia and concentrated at New York, with its great, accessible port. Washington’s army stood just out of reach in the Hudson Highlands. From 1778 onward it was a war of attrition, of political will as much as of military force. The House of Commons angrily grilled the recalled Howe on why the Americans had not been crushed. Congress hoped for a miracle, and grew weaker as its leading members returned to their states and rampant inflation sapped the Continental paper money that had served to mobilize resources during 1775–77. With no power to tax, Congress could only resort to begging the states to support their own troops. The British meanwhile turned the war toward the South, where large slave populations, strong pro-British Native American tribes on the frontier, and a reported abundance of Americans fed up with the war seemed to beckon, while the navy could shuttle more readily between the mainland and the West Indies.
In late 1778, a small British force invaded Georgia, easily took Savannah, and reestablished royal government. A year later, Clinton himself sailed from New York, with a much larger force, to invade South Carolina. For the first time the British attempted to exploit the military potential of Loyalists. Loyalist regiments recruited in the North were part of the invading force, and, as the British advanced, they organized Loyalist militia to hold and secure areas cleared of rebels. Charleston fell to a siege in May 1780, yielding 5,000 American prisoners; news of the victory caused a sensation in England. Congress sent Gates to the rescue. In command, he risked the small force left to him against a bigger, better disciplined British force at Camden in August and was destroyed, his reputation in tatters.
Washington, safe in the highlands, refused to move. He saw the British troops and ships departing New York for the South as an opportunity and resisted pleas from old friends like Governor Thomas Jefferson of Virginia to help the South. British forces were raiding freely into the Chesapeake and up its rivers, and in the Carolinas the war seemed lost. In mid-1780 Washington expected the arrival in Rhode Island of a French expeditionary force of more than 4,000 regular soldiers. His aim was to combine forces, and with the aid of the French navy to attack New York, destroy its depleted garrison, and win the war. When the French arrived, their commander was dubious, and, in September, a leading American general, Benedict Arnold, defected to the enemy. At the New Year, the Pennsylvania Continental troops mutinied for their pay and the promise of discharge after three years, and later New Jersey troops did the same. The French believed that the American effort would collapse after 1781, and many American observers agreed.
Under pressure, Washington sent his best general, Nathanael Greene, to take command in the South after the defeat of Gates at Camden. Even before Greene arrived in late 1780, however, the war had begun to turn against the British. Undefended by any regular force against British occupation, South Carolinians turned to a hit-and-run insurgency with small bands under local leaders who sought to hurt the British but especially to punish the Americans who had joined them. Exceptionally vicious, chaotic warfare erupted in the Carolinas in 1780–81, pitting neighbors against neighbors. It was the civil war, Americans against Americans, implicit in the new British strategy. One large Loyalist force wandered too far westward and was surrounded and massacred at King’s Mountain by a rapidly assembled force of frontier militia, many of them from settlements in modern Tennessee. Greene understood what was happening, and worked well with the insurgent leaders. He avoided battle whenever possible but led British forces under Lord Cornwallis, left in command when Clinton returned to New York, on an exhausting chase northward over the hills and valleys of the Carolinas. When Cornwallis followed Greene, Loyalist militia could not hold their “liberated” areas against insurgent attack.
Failing to catch Greene, Cornwallis finally sought refuge in the Virginia tobacco port of Yorktown in mid-1781. He expected supplies, reinforcements, and perhaps evacuation. Instead a French fleet appeared off Chesapeake Bay. Engaging the British fleet, it fought and won a battle for control of the sea off Yorktown. Washington, disappointed that the French navy was not coming north to support an attack on New York, joined the French regulars in Rhode Island in a rapid march to Virginia, where Cornwallis soon found himself in a giant trap. With skilled French engineers pushing the siege forward every day, Cornwallis, cut off by land and sea, surrendered his army on October 19.
Military victory at Yorktown did not win the war, but news of the surrender brought down the wartime government in Parliament, which was replaced by ministers opposed to the war. Desultory skirmishing occurred around occupied Charleston and New York; bitter feuds meant continued violence in the Carolina backcountry; and fighting continued in parts of the West. The British army and navy went on fighting the French. French troops and ships in America sailed away from Virginia to defend the West Indies, and, in the United States, something like a tacit armistice held, while the politicians in London and Paris spent two years negotiating a final peace.
The years 1778–83 were difficult for Congress. The French alliance, which many Americans saw as a guarantee of victory, sowed conflict and mistrust in Congress. Suspicion of wily Europeans who might make peace at American expense, skepticism toward the clever colleagues sent abroad to represent American interests, and mistrust of one another all played out in a body steadily weakened by loss of power to the states, a failing currency, and the ongoing departure of its most capable members. Only under direct French pressure were the Articles of Confederation finally ratified in early 1781, at one of the lowest points of the war, creating a weak central government out of an ad hoc convention of states. Washington was a mythic national hero when peace finally came in 1783, but Congress was given little credit for its part in achieving American independence.
In the Treaty of Paris, the United States gained international recognition, a western boundary on the Mississippi, and an end to the burdens as well as the benefits of membership in the empire. Britain kept Canada, and France saw that its ally Spain got back some of what it had lost in 1763: Florida, the Gulf Coast, and effective control of the Mississippi River. Americans were happy as the British army sailed home, but their economy was in ruins, and many were aggrieved that the impact of war had fallen so unevenly on regions and individuals, with only a feeble national government to address those grievances. American losers in the war were thousands of Loyalists, abandoned by the British; Native Americans, most of whom had sided with the Crown against rapacious American frontiersmen; and African Americans, who received little for service to either side. A few thousand black people were freed for their military service, but thousands more who had fled to the British for protection were re-enslaved, and a more rigorously enforced slave system took hold in the postwar Southern states.
A mythic version of the war became part of American political culture: unprepared citizen-soldiers, defeated at first but surviving, tenaciously holding their own against the best the Old World could throw at them, and winning through great hardships to ultimate victory. The national myth tended to neglect the crucial role played by the French alliance, and ignored the widespread popular apathy and considerable resistance by “loyal” Americans. If the myth faulted any Americans, they were the Congressmen and state officials who had played “petty politics” despite national peril.
See also Declaration of Independence; era of a new republic, 1789–1827.
FURTHER READING. Robert M. Calhoon, The Loyalists in Revolutionary America, 1760–1781, 1973; Colin G. Calloway, The American Revolution in Indian Country: Crisis and Diversity in Native American Communities, 1995; Jonathan R. Dull, A Diplomatic History of the American Revolution, 1985; John Ferling, Almost a Miracle: The American Victory in the War of Independence, 2007; David Hackett Fischer, Washington’s Crossing, 2004; Felix Gilbert, To the Farewell Address: Ideas of Early American Foreign Policy, 1961; Don Higginbotham, The War of American Independence: Military Attitudes, Policies, and Practice, 1763–1789, 1971; Piers Mackesy, The War for America, 1775–1783, 1964; Robert Middlekauff, The Glorious Cause: The American Revolution, 1763–1789, rev. ed., 2005; Jack N. Rakove, The Beginnings of National Politics: An Interpretive History of the Continental Congress, 1979; Charles Royster, A Revolutionary People at War: The Continental Army and American Character, 1775–1783, 1979.
JOHN SHY
The War of 1812 was officially fought over the rights of neutral carriers and the impressment of American seamen. Because the conflict failed to win any formal concessions from Britain, Federalist critics condemned the war as an unnecessary failure. Their judgment omitted all reference to Federalist activities prior to and during the conflict. The Federalists’ resistance to war with Britain helped provoke it, while their efforts to obstruct its conduct forced their retirement from national politics at its conclusion. Most Americans at the time thought the Federalist leadership, not the Republicans, had failed the nation.
Following its Revolution, the United States acquired a new central government with powers analogous to those of Europe’s nation-states. When the Atlantic World plunged into war after 1792, the conflict proved a blessing for the infant American state in one respect. No better way existed for the federal government to establish its authority than by solving national problems that the individual states had been unable to address. Shays’s Rebellion in 1787 had revealed the obstacles the states faced in dealing with the Revolutionary War debt. The earnings derived from the transfer of Europe’s seaborne commerce to America’s neutral vessels ensured that Alexander Hamilton’s ambitious plan for funding the debt would succeed. But war between France and Britain also entailed risks, because good relations with one of the great powers meant bad relations with the other. The U.S. accommodation with Britain after 1794 soured relations with France, igniting a limited naval war between 1798 and 1800. John Adams succeeded in bringing the “Quasi-War” to a conclusion shortly before the presidential election of 1800, but the taxes accompanying the conflict contributed to Thomas Jefferson’s presidential victory and the election of a Republican Congress.
The defeated Federalists worried that the Republicans would compromise the nation’s neutrality by allying with France. The Federalists had courted Britain during the 1790s, because most federal revenue derived from taxes on British imports. They counted on France’s hostility to neutralize the enmity Americans bore Britain after the Revolutionary War. Federalist leaders, especially in New England, feared the Republicans would promote bad relations with Britain to maintain their power. But, except for a few dissident minorities outside the Northeast, the nation increasingly identified with the Jeffersonian Republicans. Jefferson’s success in acquiring the Louisiana Territory from France strengthened the New Englanders’ sense of isolation, because everyone assumed new states formed from the western territories would vote Republican. This assessment appeared to be confirmed by Jefferson’s landslide reelection in 1804.
However, escalation of the European war after 1805 clouded the Republicans’ prospects. Lord Nelson’s naval victory at Trafalgar and Napoleon’s triumphs on the Continent made each belligerent supreme in one arena but unable to strike its adversary in the other. In 1806 Britain proclaimed a paper blockade—that is, one too extensive for any navy systematically to enforce—of the adjacent French coast in an effort to surmount this difficulty. Napoleon countered with a paper blockade of the British Isles. When Britain responded by ordering vessels making for Europe to enter a British port and pay British duties, Napoleon decreed any vessel that did so or was visited by a British warship to be a lawful prize. Jefferson reacted to the aggressions of both Great Powers with a general embargo on American shipping and exports. Though the measure hurt the American economy, it seemed preferable to war with either or both of the offending powers. But war with Britain seemed most likely because, six months earlier, a British frigate had attacked the USS Chesapeake to remove four alleged deserters.
Federalists led by Senator Timothy Pickering of Massachusetts contended the embargo favored France at Britain’s expense. Pickering claimed Napoleon had forced the embargo on Jefferson to complete France’s “continental system” of isolating Britain and to provoke war between the United States and Britain. Congressional sponsorship for such views emboldened the Federalist legislature of Massachusetts to urge wholesale resistance to the embargo. Though the Republicans warned that the only alternative to the embargo was war, the Federalists assumed they were bluffing. They knew the Republicans feared war was incompatible with republicanism because the French Republic had recently evolved into a military dictatorship under the pressure of the European wars. War would also reverse the progress the Republicans had made in retiring the Revolutionary debt. Though some Americans prepared to risk an appeal to arms, the majority preferred peace. The Federalists sought to split the Republican congressional majority by supporting a dissident Republican, George Clinton, as Jefferson’s successor, rather than James Madison. Though a majority of the Republican congressional caucus backed Madison, it also modified the embargo to apply only to France and Britain instead of declaring war against either or both powers. The policy, known as nonintercourse, affected France more than Britain because the latter’s naval supremacy allowed it to procure American commodities in neutral ports while denying French vessels comparable access.
Both Napoleon and the British minister in Washington, David Erskine, saw nonintercourse as a capitulation to Britain. Napoleon responded by ordering the sequestration, a conditional form of confiscation, of all American vessels entering ports under his control. Erskine sought to consolidate British advantage by proposing that the United States and Britain lift their trade restrictions against each other, conditional upon nonintercourse remaining in effect against France. To unify the badly divided nation, Madison accepted Erskine’s offer, only to have the British government repudiate it and replace Erskine with the pugnacious Francis Jackson. Jackson accused the Madison administration of entering the Erskine Agreement knowing it would be repudiated in order to provoke antagonism against Britain. Federalists then took the part of the British government against the Republican administration. This convinced Madison and his Republican followers that the Federalists were a disloyal minority bent on subverting America’s republican institutions.
Other matters besides Britain’s commercial restrictions troubled Anglo-American relations. To maintain its blockade of France, the British navy continued impressing American seamen. At the same time, British commercial interests took advantage of the collapse of Spanish authority in the New World following Napoleon’s attempt to place his brother on the Spanish throne in 1808. Madison feared that British attempts at political control would soon follow. West Florida, which extended to New Orleans, looked particularly ripe for British appropriation if the United States did not act first. Madison bided his time until the indecisive Eleventh Congress passed a law (Macon’s Bill Number 2) that offered France an arrangement resembling the Erskine Agreement but directed against Britain. When Napoleon accepted, Madison issued orders for the peaceful occupation of West Florida by American forces. Madison would not have pursued such a course had he and other Republican leaders not concluded that Britain, backed by Federalist partisans, constituted the principal threat to the republic’s future.
Mobilizing a Republican majority for war with Britain proved difficult. In addition to Republican misgivings about militarism, the administration faced unwavering Federalist opposition. But the Twelfth Congress proved more determined than the Eleventh to avenge the humiliations that Federalists, in conjunction with Britain, had inflicted on the nation. The impressment of American seamen also solidified public opinion behind the war hawks. Still, the Republicans could not brand the Federalists as official enemies because doing so worked at cross-purposes with unifying the republic.
Nor could they get France to stop seizing American vessels, as Napoleon had promised to do in responding to Macon’s Bill Number 2. The emperor was much more interested in provoking a war between the United States and Britain than in America’s trade. Because the British government used France’s actions to justify its commercial restrictions, Napoleon continued promising much but delivering little. He did not formally revoke France’s decrees until May 1812, and then with a decree that bore a bogus date of April 1811. France’s behavior emboldened Britain to insist that its enemy’s decrees be repealed, as they affected Britain’s and America’s commerce, before British restrictions would be lifted. That made Britain seem more unreasonable than France, but the difference was not enough to silence the Federalists, who continued adamantly to oppose war with Britain. A minority, however, pushed for war with both powers as a way of preventing war with either of them. The Republicans replied weakly that France had done something to satisfy American demands while Britain had done nothing.
Madison called on Congress to begin military preparations in November 1811. Since invading Canada was the only way the United States could strike at Britain, war had to be declared in the spring to allow time for operations before the ensuing winter. But neither the preparations for hostilities nor the diplomatic maneuvering surrounding the declaration observed these requirements. An initial war loan fell far short of its goal, partially because of Federalist opposition, while the administration waited in vain for an answer to its latest ultimatum to Britain.
One of King George III’s periodic fits of insanity, combined with the assassination of Prime Minister Spencer Perceval, slowed the British response. The retreat of Russian forces before Napoleon’s invasion of that nation together with economic difficulties exacerbated by nonintercourse eventually led Britain to lift its restrictions affecting American commerce on June 23, 1812. But Congress had declared war four days earlier. Had the news arrived sooner, it might have averted hostilities. Madison responded coolly to proposals for a truce, however, once he learned of Britain’s action. The political difficulties of unifying the Republicans led him to fear the effect the combined intrigues of the British and the Federalists would have on the Republicans. Had the British also been ready to abandon impressment, peace would have followed. But anger over impressment had become so widespread that Madison needed more than commercial concessions from Britain to suspend hostilities.
The administration soon regretted its hard line as, aside from several indecisive victories at sea, the war began disastrously. In August William Hull surrendered a large garrison at Detroit without firing a shot, while two other attempts to invade Canada collapsed ingloriously. News of Napoleon’s retreat from Russia followed these defeats. While Madison only wanted commercial cooperation from France, he had counted on Napoleon holding his own against Britain and the other European powers. Instead France grew weaker as Britain grew stronger. Madison readily accepted the czar’s offer of mediation early in 1813, only to have Britain reject it.
These setbacks failed to make Congress easier to manage, thanks in part to the use the Federalists made of France’s fraudulent repeal of its decrees. Freshman representative Daniel Webster proposed resolutions to the special congressional session of May–July 1813 that demanded full disclosure of the administration’s dealings with France prior to declaring war on Britain. Congressional Republicans passed responsibility for answering this challenge to Secretary of State James Monroe. His long document justifying the administration’s actions failed to silence the Federalist claim that the administration had let itself be maneuvered into war by Napoleon. The Federalists also hoped to obstruct the war effort by insisting that it be financed by direct taxes. They expected this would destroy the Republicans’ popularity, as direct taxes had destroyed theirs during 1799–1800.
The British government agreed to direct negotiations after Oliver Hazard Perry’s squadron won decisive control over Lake Erie in September 1813. But Napoleon’s abdication in April 1814 freed the British government from any pressure to conclude a speedy peace. Instead it directed all British military power against the United States. The new strategic situation made reconstituting a national bank, whose charter had been allowed to expire in March 1811, a national priority for the American government. Though the Federalists supported a national bank in principle, they insisted that its notes be redeemable for specie (precious metals), while Boston’s Federalist banks were busy engrossing the nation’s specie supply. Britain had exempted eastern New England from its blockade of the American coast until April 1814, making Boston the creditor for the rest of the nation. Boston’s banks then called on the state banks outside New England to redeem their notes in specie, which they proved unable to do. The ensuing banking crisis obstructed the government’s coordination of military operations and thus contributed to the burning of Washington at the end of August. The British also seized a third of Maine’s coastline. Such developments did not provide an auspicious setting for the peace negotiations with Britain beginning in Europe. Never had the republic seemed in more peril.
Instead of helping to defend the nation, New England’s Federalist leadership tried to turn that peril to its own advantage. While the governor of Massachusetts put out feelers to his counterpart in Nova Scotia soliciting British military intervention, the state’s legislature called for a regional convention to meet at Hartford. Federalists were prepared to go to such extremes because reports of Britain’s initial peace terms made it clear the Republicans would reject them. Though American forces had repelled a British invasion at Plattsburgh, the Federalists knew a large enemy force was moving against New Orleans. Its seizure would put the western two-thirds of the nation at Britain’s mercy. Meeting in December 1814, the Hartford Convention framed a set of constitutional amendments designed to enhance the Federalists’ power in the nation. The amendments were to be presented to Congress for acceptance along with the demand that New England be allowed to defend itself. Everyone understood that New England would conclude a separate peace with Britain if the rest of the nation refused to submit to the Federalist minority.
The commissioners carrying the Hartford Convention’s demands arrived in Washington at the same time as news of the conclusion of a peace in Europe based on the status quo ante bellum and of Andrew Jackson’s victory over the British at New Orleans. These two events transformed the fortunes of the republic overnight, making the Federalists look like a disloyal minority bent on humiliating the nation. The Hartford commissioners had no choice but to retreat in disgrace. The republic had unexpectedly survived despite all that the Federalists had done to prostrate it before Britain. But Federalist leaders did more than disgrace themselves in the eyes of other Americans. They also destroyed their power base in the New England states. Their policies had assumed the weakness of the republic compared to a powerful monarchy like Britain.
Within a year of the peace, Madison could predict that the nation would be debt free by 1835. A vigorous postwar recovery removed the last thread of justification for Federalist actions. Few realized that the European rivalries fueling the division between Federalists and Republicans had also come to an end. Their disappearance left what survived of the Federalist leadership without a rallying cause in their home states. By the mid-1820s hardly any remnants of the party remained.
See also Democratic Party, 1800–1828; era of a new republic, 1789–1827; Federalist Party; war and politics.
FURTHER READING. Henry Adams, History of the United States of America during the Administrations of James Madison, 1986; James M. Banner, Jr., To the Hartford Convention: The Federalists and the Origins of Party Politics in Massachusetts, 1789–1815, 1970; Roger Brown, The Republic in Peril: 1812, 1964; Richard Buel, Jr., America on the Brink: How the Political Struggle over the War of 1812 Almost Destroyed the Young Republic, 2005; Lawrence D. Cress, “ ‘Cool and Serious Reflections’: Federalist Attitudes towards the War of 1812,” Journal of the Early Republic 7 (1987), 123–45; Peter P. Hill, Napoleon’s Troublesome Americans: Franco-American Relations, 1804–1815, 2005; Matthew Mason, “ ‘Nothing Is Better Calculated to Excite Division’: Federalist Agitation against Slave Representation during the War of 1812,” New England Quarterly 75 (2002), 531–61; Robert A. McCaughey, Josiah Quincy 1772–1864: The Last Federalist, 1974; Bradford Perkins, Prologue to War: England and the United States 1805–1812, 1961; Burton Spivak, Jefferson’s English Crisis: Commerce, Embargo, and the Republican Revolution, 1979; J.C.A. Stagg, Mr. Madison’s War: Politics, Diplomacy, and Warfare in the Early Republic, 1783–1830, 1983; Steven Watts, The Republic Reborn: War and the Making of Liberal America, 1987.
RICHARD BUEL JR.
Welfare originated as a positive term in the early twentieth century. It signified attempts to professionalize and modernize old practices of relief and charity. This positive connotation of welfare and “welfare state” lasted through the New Deal of the 1930s and even into the 1940s. It came under attack in two stages. During the cold war, in the late 1940s and 1950s, opponents associated welfare with European socialism and un-American ideas. Then, in the 1960s, as unmarried women of color with children began to dominate public assistance rolls, welfare acquired the combined stigmas of race, gender, and illicit sex.
This narrow, pejorative use of the term welfare obscures its true meaning and inhibits understanding of the American welfare state. In the original sense—as used from the early twentieth century through post–World War II years—the terms welfare and welfare state referred to a collection of programs designed to assure economic security for all citizens by guaranteeing the fundamental necessities of life. The welfare state is how a society ensures against common risks—unemployment, poverty, sickness, and old age—that in one way or another confront everyone.
The American welfare state confronts universal problems with a distinctive architecture—much broader and more complex than is usually realized. It is not usefully described as either public or private. Instead, its economy is mixed, and its composition reflects American federalism—the division of powers between the federal government and the states. This American welfare state consists of two main divisions, with subdivisions within each. Each subdivision is rooted in a different location in American history and, to some extent, has followed its own trajectory over time.
The first division is the public welfare state. Its subdivisions are public assistance, social insurance, and taxation. Public assistance, the oldest form of welfare, consists of means-tested programs. Its origins lie in the Elizabethan poor laws, which the colonists brought with them in the seventeenth century. Embodied in “outdoor relief,” aid given to people in their homes rather than in an institution, public assistance has a long and controversial history. Although subject to state law, public assistance, with a few exceptions, was administered locally, usually by counties. In the early twentieth century, state governments introduced a new form of public assistance, “mothers’ pensions,” small amounts of money given to a limited number of worthy widows. During the Great Depression of the 1930s, the federal government introduced two public assistance programs paid for with matching state-federal funds. They were Old Age Assistance, by far the largest until it was eliminated by the growth of Social Security, and Aid to Dependent Children, a federalization of state mothers’ pensions, which later became Aid to Families with Dependent Children (AFDC), or what most Americans referred to as welfare, and, in 1996, Temporary Assistance for Needy Families (TANF), which replaced AFDC.
A fierce critic of public assistance, President Richard Nixon surprised both his supporters and critics by proposing to replace AFDC with the Family Assistance Plan, a variant of a negative income tax. Opposed by conservatives, who objected in principle, and welfare rights advocates, who thought its benefits inadequate, the plan died. Instead, in 1974 Congress bundled public assistance for the indigent elderly, blind, and disabled, into a new program, Supplemental Security Income.
In 1996 welfare reform legislation, the Personal Responsibility and Work Opportunity Reconciliation Act, passed overwhelmingly in both the House and Senate with bipartisan support and was signed into law by President Bill Clinton on August 22. The legislation capped a long process of negotiation between Clinton and Congress and drew on widespread hostility to public assistance. The legislation, which reoriented public assistance toward what was called the transition to work, abolished the quasi-entitlement to public assistance embodied in AFDC. Its overarching goal was to move people from public assistance into a job in the regular labor market. States could meet this goal by contracting out welfare administration to private firms.
The TANF program has two major components. Both are block grants to states that are intended to help families leave welfare. One gives cash to families in need to support their children while they look for work, and discourages them from having more children outside of marriage. The other component bundles together money for major child-care programs for low-income families.
Two features of the new legislation attracted the most attention. One was time-limited public assistance, which mandated a maximum lifetime benefit of five years, although states were permitted to set shorter limits. The other feature took benefits away from legal immigrants who had been in the United States less than five years; again, states could impose even harsher restrictions on immigrants than the federal government. (Prodded by President Clinton, Congress restored some of these benefits to immigrants in 1997 and 1998.) One other important aspect of the bill was its emphasis on enforcing payment of child support by absent fathers.
The most dramatic change following the new legislation was a rapid drop in the welfare rolls by more than half. Supporters of welfare reform hailed this decline as testimony to the bill’s success. With little debate, Congress inserted even tougher work requirements into the legislation’s reauthorization, included as part of the Deficit Reduction Act that was signed by President George W. Bush on February 8, 2006. Many observers, however, were not sure that the drop in the welfare rolls resulted only from the new rules or that it should be the measure of the success of welfare reform. The decline, which had begun before the passage of the 1996 bill, reflected three major influences: job growth in a strong economy, individuals either discouraged from applying or sanctioned off the rolls, and work incentives in the legislation. Moreover, leaving welfare did not mean escaping poverty. Many of the jobs held by former public assistance recipients paid poorly, lacked health and retirement benefits, and did not offer avenues for advancement. A large proportion of poor women with children exchanged public assistance for working poverty.
Social insurance, whose origins lie in nineteenth-century Europe, is the second subdivision of the public welfare state. Social insurance programs are not means tested. They provide benefits to everyone who meets certain fixed criteria, such as being 65 years of age or older. They are based on a rough insurance analogy, because potential beneficiaries pay premiums in advance. They have been state, federal-state, or wholly federal programs. Always much more generous than public assistance, social insurance benefits have increased at a more rapid rate over time. The result is that the gap between them and public assistance has progressively widened. The first form of social insurance in the United States was workers’ compensation, introduced by most states in the early twentieth century. Few states developed old-age or unemployment insurance. Federal social insurance emerged in a burst with the Social Security Act of 1935, which introduced a complicated federal-state program of unemployment insurance and a federal program of old-age insurance known as Social Security. At first these programs were very restrictive. Social Security excluded agricultural and domestic workers and did not pay benefits, which initially were very low, until 1940. Although Social Security and unemployment insurance originally discriminated against African Americans and women, expansions of coverage have reduced inequities in benefits. Overall, Social Security has been the most effective federal public social program in American history.
Over time, Social Security’s coverage expanded, benefit levels increased, disability benefits were added, and in the 1970s, benefits were pegged to inflation. In the burst of social spending during the Great Society years, from the mid-1960s through the early 1970s, Congress passed a major extension to social insurance: Medicare, health insurance for the elderly, along with Medicaid, a medical public assistance program for the poor. By the late 1970s, largely as a result of Social Security’s benefits, the elderly, who in 1960 had a poverty rate three times that of any other age group, were less likely to be poor than any other segment of the American population. At the same time, Medicare and Medicaid transformed access to medical care for the elderly and poor.
The third subdivision of the public welfare state is taxation. Low-income people receive benefits indirectly through tax credits given to businesses and real estate developers to create jobs and housing. But the most important program is the Earned Income Tax Credit. Started in 1975, the EITC was expanded greatly under President Clinton in the 1990s. It supplements the income of workers whose earnings fall below a predetermined level. The EITC costs more than AFDC ever did or than TANF does now. It has, however, been effective in boosting people from slightly below the poverty line to just above it.
The private welfare state has two main subdivisions. The first of these consists of charities and social services, which have a long and varied history. Some stretch far back in American history; others are much newer. Contrary to myth, this private welfare state has never been adequate to relieve the needs of individuals and families without sufficient health care, income, or housing. In the 1960s, federal legislation funded the expansion of social services. As a result, the character of nominally private agencies and social services changed, because they began to receive a large share of their budgets from federal, state, and local governments. American governments operate relatively few services themselves. Instead, they run social services by funding private agencies. Without government funds, most private agencies would close their doors. In effect, they have become government contractors.
The second subdivision in the private welfare state consists of employee benefits. More than six of ten Americans receive health insurance through their employers. Many receive retirement pensions as well. Although a few businesses and governments provided pensions before World War II, employee benefits developed into mass programs only in the 1940s and 1950s. Fought for by trade unions, they received government sanction in 1949 from the National Labor Relations Board, which required employers to bargain over (though not to provide) employee benefits. Employee benefits fit within the framework of the welfare state because they have been encouraged by the federal government (which allows employers to deduct their cost from taxes) and are regulated by federal legislation. Without them, the public welfare state would have assumed a very different form. In recent decades, the percentage of workers covered by health insurance and retirement benefits has decreased. Employees pay much more for their health care than in the past and receive it through some variant of managed care. In the private sector, most pensions now require defined contributions, which leave future benefits to the vagaries of individual investment decisions and the market, rather than, as in the past, offering defined benefits, which guaranteed the income employees were to receive in retirement.
With these employee benefits added to its economy, the United States appears less of a welfare laggard compared to other developed nations. When nations are arrayed in a hierarchy according to public social spending, the United States and Japan are at the bottom, widely separated from the top. However, when private social welfare is added, the rank order remains the same but the distance is greatly reduced. Including benefits distributed through the tax code would shrink it even more. What is unique about the United States welfare state is the distinctive way in which it delivers its benefits.
In the 1980s, public social policy coalesced around three great objectives that began to redefine the American welfare state. The first objective was the war to end dependence—not only the dependence of young unmarried mothers on welfare but all forms of dependence on public and private support and on the paternalism of employers. The second objective was to devolve authority; that is, to transfer power from the federal government to the states, from states to counties, and from the public to the private sector. The third objective was the application of market models to social policy. Everywhere, the market triumphed as the template for a redesigned welfare state. Used loosely and often unreflectively as the organizational model toward which public programs should aspire, the market model emphasized competition, privatization, and a reliance on supply and demand to determine policies and priorities. Examples include the replacement of AFDC with TANF and the shift to managed health care and defined contribution pensions; other examples are found everywhere throughout the public and private welfare states.
None of the forces redefining the welfare state originated in the 1980s, but in those years they burst through older tendencies in public policy and combined to form a powerful and largely bipartisan tide. With only a few exceptions, political arguments about the welfare state revolved more around details than great principles. An exception was the battle over the future of Medicare and Social Security that escalated during the administration of President George W. Bush. Conservatives wanted to move both programs toward privatization, which would fundamentally change the model on which they were built, but massive public opposition prevented Bush’s plans for Social Security from reaching the floor of Congress.
Bush had partial success reforming Medicare. On December 8, 2003, he signed the controversial Medicare Modernization Act, which introduced a prescription drug benefit known as Medicare Part D. Instead of a uniform benefit administered by Medicare, the Bush scheme relied on private insurers to offer plans that fit the program’s guidelines. The legislation forbade Medicare to negotiate directly with drug companies for lower prices, as the Veterans Administration did. It exempted low-income seniors from premiums, moving those eligible for Medicaid into the new drug program, and it reduced premiums for others with near-poverty incomes. But it handed extra dollars to insurance companies for seniors enrolled in Medicare Advantage Plans (managed-care plans that combined medical and prescription benefits). Medicare paid these private health plans about 12 percent more than it would cost to care for the same patients in the traditional Medicare program. Private insurers reaped a windfall from the requirement that Medicaid recipients enroll in the plans. The Democratic congressional majority proved unable to lift the prohibition on negotiating drug prices or to scale back the advantages granted private insurers. It did not even attempt to alter the complicated prescription drug plan that left many seniors still paying thousands of dollars for their medications each year.
By 2007 living-wage ordinances had passed in many cities; elections in several states showed strong support for an increased minimum wage; the lack of universal and affordable health insurance had become the number-one domestic issue; and the presidential campaign of John Edwards had focused national attention on poverty for the first time in decades. These developments held out hope for improving the economic security of the working poor and the accessibility of health care for the nonelderly. But the prospects for a reversal of the trends that had redefined and attenuated the nation’s welfare state remained dim.
See also Social Security.
FURTHER READING. Edward D. Berkowitz, The American Welfare State: From Roosevelt to Reagan, 1991; Gosta Esping-Andersen, The Three Worlds of Welfare Capitalism, 1990; Colin Gordon, Dead on Arrival: The Politics of Health Care in Twentieth-Century America, 2003; Linda Gordon, Pitied But Not Entitled: Single Mothers and the History of Welfare, 1890–1935, 1994; Christopher Howard, The Hidden Welfare State: Tax Expenditures and Social Policy in the United States, 1997; Michael B. Katz, In the Shadow of the Poorhouse: A Social History of Welfare, 10th ed., 1996; Idem, The Price of Citizenship: Redefining the American Welfare State, 2008; Frances Fox Piven and Richard A. Cloward, Poor People’s Movements: Why They Succeed, How They Fail, 1977; Ellen Reese, Backlash against Welfare Mothers: Past and Present, 2005; Theda Skocpol, Protecting Soldiers and Mothers: The Political Origins of Social Policy in the United States, 1992.
MICHAEL B. KATZ
The Whig Party was a formidable force in the antebellum United States. From the late 1830s until the early 1850s, roughly half of the American electorate was made up of Whigs. The party won two of the four presidential elections in which it participated—in 1840 and 1848. Because the two Whigs who were elected president—William Henry Harrison and Zachary Taylor—died in office and were succeeded by their vice presidents, John Tyler and Millard Fillmore, four Whigs ultimately held the office.
Some of the best-known politicians of the day were Whigs, including Henry Clay and the great orator Daniel Webster. During his congressional career, John Quincy Adams consistently acted with the Whigs although he first ran as an Anti-Mason. Leaders of the party included influential Southerners such as Robert Toombs and Alexander Stephens. The greatest educational reformer of the day, Horace Mann, was a Whig, as was William H. Seward, and Abraham Lincoln had a long association with the party. The two best-known congressional leaders of Radical Reconstruction, Charles Sumner and Thaddeus Stevens, had started their political careers as Whigs.
In 1824 all presidential candidates were Republican and deeply involved with the administration of the last of the Virginia dynasty, James Monroe. Andrew Jackson was a U.S. Senator from Tennessee, John Quincy Adams was the secretary of state, William Crawford was the secretary of the treasury, Clay was the speaker of the House of Representatives, and Calhoun, who became vice president, was the secretary of war. When no one received a majority of the electoral votes, the election was thrown into the House of Representatives. The choice of Adams, who came in second in the popular vote, created the movement to make Jackson president in 1828.
The merger of the Albany Regency, the Richmond Junto, and the Nashville Junto, local political cliques of the time, was the beginning of the Democratic Party. The Adams supporters were not as inept as often portrayed, but an alliance of the Jackson and Crawford forces of 1824 could have easily outvoted them. Political organization was moving toward modern parties, but at different rates at different levels. Most important in the North was the Anti-Masonic movement, which opposed the influence of secret societies in state politics and was one of the precursors of the Whigs.
A more general source of the future Whig Party was those who supported the Adams administration, who formed the National Republicans to oppose Jackson in 1832. They held a national convention and nominated Clay for president. These proto-Whigs advocated what they called the American System, a plan to establish a national bank, a protective tariff, federal support for internal improvements, and the colonization of freed blacks in Africa.
In 1833 and 1834, people began to use the term Whig to describe the anti-Jackson opposition. The name referred to the English Whigs, who had been associated with the parliamentary opposition to the king from the late seventeenth century to the mid-nineteenth century. The American Whigs originated as a party of congressional opposition to the imperial executive, “King Andrew.”
In preparation for the election of 1836, the Democratic Republicans held a convention to anoint Martin Van Buren as Jackson’s successor. There was no Whig convention, because there was as yet no national Whig Party. Anti-Jackson groups ran a variety of candidates for president and vice president. Four opposition candidates received electoral votes for president: William Henry Harrison, from Ohio; Daniel Webster, from Massachusetts; Hugh Lawson White, from Tennessee; and Willie P. Mangum, from North Carolina. White, who received 26 electoral votes, openly denied he was a Whig, and Mangum, who received South Carolina’s 11 electoral votes, had not agreed to run. The organizational confusion made the election of 1836 the only one in American history in which the Senate had to choose the vice president, because no vice presidential candidate received a majority of the electoral vote.
As the Democratic and Whig parties coalesced in the late 1830s, debates emerged in the states about which could legitimately use “democratic” in its label. By 1840 the two major parties had taken on the official names of the American Democracy and the Democratic Whigs.
The Whigs had held their first party convention in 1839. Henry Clay was the obvious presidential candidate, but the New Yorkers, led by Seward and Thurlow Weed, blocked his nomination and put forth Harrison, who had won 73 electoral votes in 1836. To balance the ticket, the convention chose John Tyler from Virginia.
Because a Democratic editor accused Harrison, a retired general and presidential aspirant in 1836, of wanting to stay at home in his “log cabin” and drink “hard cider,” the election has been tainted with this image and the idea that the Whigs did nothing but stage gigantic parades and mouth empty speeches. Yet these political activities brought mass participation to American politics. Voter turnout skyrocketed: more than 80 percent of the white adult men went to the polls. The result was a stunning victory for the Whig candidate.
Historians have often asked, “Who were the Whigs?” The partisan battle was neither a simple matter of the rich against the poor, nor one between immigrants and the native-born. The Whigs won two presidential elections, held House majorities in the 27th and 30th Congresses, and did well in practically all of the states from the late 1830s to the early 1850s. Several studies have shown that ethnoreligious affiliation affected partisan perspectives in both sections. Groups such as Irish Catholics were overwhelmingly Democratic, while the various white, Anglo-Saxon Protestant descendants of the Puritans inspired by the Second Great Awakening in the North were heavily Whig. In the South, local conditions often determined the way these factors played themselves out in politics. Where the few free African Americans could vote, they tended to oppose the followers of Jackson until, in most states, the Jacksonians disenfranchised them. While many Democrats were extraordinarily rich—in the northern cities merchants, in the South plantation owners—Whigs tended to control the economically dynamic areas in both sections. Above all, the Whigs differed essentially from their opponents about the proper role of the state in governing economic and moral behavior. It was a matter of attitude. Poor but aspiring, up-by-their-bootstraps men were primarily Whigs.
During these years, congressional behavior displayed a distinctly partisan pattern. Even at the state level, in elections, legislative behavior, and constitutional conventions, partisan conflict reflected attitudes that mirrored differences on federal policy. From Maine to Mississippi, the Whigs emphasized the positive role of government by creating the “credit system”—chartering private and state-related banks not only to create most of the money supply of the country but also to make loans to farmers and small businessmen—building roads and canals, supporting public education, and generally encouraging morality in public life. In contrast, the Democrats distrusted the actions of the legislatures and viewed the governors as “tribunes of the people” with the power to veto the excesses of government; they embraced laissez faire in all aspects of life.
The election of 1840 took place in the midst of a depression; the contrasting economic proposals of the parties were salient. The Democrats, who wrote the first real party platform in American history, emphasized their commitment to laissez faire and states’ rights. The Whigs did not write a platform, but Whig speakers made the party’s position clear. Clay began the campaign with a three-hour speech that emphasized banking and monetary policy. Webster spoke in the South and Virginian John Minor Botts toured the North, spreading similar ideas. Harrison—the first presidential candidate to speak widely—echoed the same Whig themes.
After Harrison caught pneumonia and died a month after his inauguration, the presidency fell to Tyler. Congressional Whigs looked toward sweeping economic change by reviving a national bank to control credit and currency, increasing the tariff to encourage domestic production, altering land policy to distribute revenues to the states for the development of internal improvements, and passing a federal bankruptcy law. The central pillar of the Whig economic program, called the “Fiscal Bank of the United States,” passed Congress in August, but Tyler vetoed it as overextending the power of the federal government to create banking corporations. After negotiations with Tyler, the Whigs pushed through a slightly revised measure, but Tyler vetoed this as well.
Tyler’s vetoes alienated most Whigs. The entire cabinet resigned except Webster, who was in the midst of negotiating the Webster-Ashburton Treaty with England. While this was ostensibly over boundary disputes between the United States and Canada in both Maine and Minnesota, it also touched on other conflicts between Great Britain and the United States, ranging from extradition of criminals to cooperation in ending the African slave trade.
Attempts by Clay and the congressional Whigs to provide for the distribution of the proceeds of land sales to aid the states in providing internal improvements and to revise the tariff did lead to legislation in 1841 and 1842, yet in both cases they were forced to compromise. As Tyler remade his cabinet again and again, he moved closer to the Democrats. Eventually Calhoun, who had returned to the Democratic Party, served as his secretary of state and oversaw the annexation of Texas, which Tyler thought would revive his presidential prospects and secure his historical reputation.
The slavery question was more troubling for the Whigs than for the Democrats and was the rock upon which their ship would eventually founder. From the mid-1830s on, northern Whigs opposed what they called “the slave power”—the political power exercised by southern planters. Led by John Quincy Adams, northern Whigs fought the “gag rules” that restricted congressional consideration of antislavery petitions. While a few southern Whigs did eventually vote to end the gag, this issue separated northern and southern Whigs who voted together on economic matters.
Under the Tyler administration, the question of annexing the territory that would become Texas posed another problem for the Whigs. Secretary of State Abel Upshur of Virginia secretly negotiated a treaty with the Texans to annex the Republic of Texas. When the treaty became public, both northern and southern Whigs bitterly opposed it. They argued that it would create sectional discord because dividing the area into five new slave states could give the South control of the Senate and the Texans’ boundary demands could lead to war with Mexico. While the Whigs’ argument proved correct on both counts (an increase in sectionalism and a war with Mexico), the Democrats generally embraced annexation and expansion.
After the annexation treaty was defeated, Tyler moved to annex Texas in an unconventional way, by a joint resolution of Congress, which passed in a sharply partisan vote. Against vigorous Whig opposition, the administration of Tyler’s Democratic successor, James K. Polk, also moved to institute the Democratic economic agenda. Polk resisted any internal improvements at federal expense and vetoed several acts to improve rivers and harbors in the Great Lakes region, thus alienating some Midwestern Democrats.
Most important, however, by ordering General Zachary Taylor to move his troops in Texas to the Rio Grande, Polk precipitated the events that led to the U.S. war with Mexico. Whigs were forced to support the declaration of war, although a southern Whig, Garrett Davis, said, “It is our own President [Polk] who began this war.” Abraham Lincoln, then a young congressman from Illinois, called on the president to show Congress “the spot” on “American soil” where “American blood” had been “shed,” as Polk had stated in his war message.
Moral reformers were more likely to be Whigs than Democrats. In relation to the slavery question, most of the gradualists, who advocated the colonization of free blacks in Africa or Latin America, were Whigs, as were most immediate abolitionists. Those who wished to use the government to deal with social dependents, either in prisons or public schools, were Whigs. Supporters of women’s rights and antebellum pacifism also tended to be Whigs, although some atheist pacifists were Democrats. While it is always difficult to define the American middle class, the Whigs were more likely than their opponents to represent bourgeois values.
The salience of the slavery issue in American politics in the 1840s would eventually destroy the Whig Party and ultimately lead to the Civil War. During the debate on a bill to fund the Mexican-American War, Pennsylvania Democrat David Wilmot introduced an amendment that would exclude slavery from any territories acquired from Mexico. While the Whigs split along sectional lines over the Wilmot Proviso, the party did extremely well in the elections immediately following its introduction.
The Free Soil Party, which injected the Proviso into the election of 1848, grew out of a conflict in the New York State Democratic Party between supporters of Van Buren and his opponents. The national Democratic convention refused to seat the delegations of either faction and nominated Lewis Cass of Michigan on a platform that denied the power of Congress to act on slavery in the territories. The Van Burenites walked out and then dominated the Free Soil convention in August, which nominated Van Buren for president with Charles Francis Adams, son of the former president, as his running mate.
The Whigs held their convention in Philadelphia that June and passed over Clay in favor of a hero of the Mexican-American War, Zachary Taylor, with the conservative New Yorker Millard Fillmore as his running mate. Taylor’s slaveholding and the Whigs’ refusal to write into their platform any position on slavery in the territories alienated some Northerners. Many, such as the “Conscience Whigs” of Massachusetts, joined the Free Soil movement. Party leaders hoped that southern Whigs would be satisfied by the fact that Taylor owned a large plantation in Louisiana. He was able to retain national support, gaining the electoral votes of eight slave states and seven free states. Because Van Buren received 10.3 percent of the popular vote, Taylor won with a plurality of 47.3 percent, although he received a majority of electoral votes, 163 versus 127 for Cass. The Whig was able to win because the Free Soilers split the New York Democratic vote. Having made their point, the New York Free Soilers, following their leader, moved back into the Democratic fold. The Free Soil Whigs, however, had permanently broken with their party.
The attempt of the new president to organize California and New Mexico kept the issue of slavery in the territories alive. After the discovery of gold at Sutter’s Mill in 1848, the population of California jumped to over 100,000. In the fall, Californians wrote a constitution banning slavery and establishing a state government. Taylor recommended to Congress that California immediately be admitted to the Union. This, along with an earlier speech the president made in Pennsylvania that opposed the expansion of slavery, and his enforcement of the law against a filibustering expedition in Latin America, alienated southern Whigs from their president.
At the end of January, Senator Clay put forth a series of resolutions addressing the difficult questions facing Congress that became the basis for the Compromise of 1850. Because of Clay’s initiative and Webster’s powerful speech on March 7 favoring compromise, this was long credited as a Whig measure, but Clay’s “Omnibus” failed due to opposition from President Taylor. After Taylor’s sudden death, which made the pro-compromise Millard Fillmore president, the Illinois Democrat Stephen A. Douglas was able to shepherd the five acts that constituted the “compromise measures” through Congress in September. The roll call vote revealed a Whig Party in disarray.
The sectional split in the party led many Whigs in the Cotton South to join state “union” parties, which served as a stepping-stone for the movement of some former Whigs into the Democratic Party. Yet most southern Whigs, particularly those in the Upper South, remained loyal to the party and participated in the presidential election of 1852. In their convention that year, the sectional split was apparent in both the choice of the candidate and the platform. President Fillmore, the candidate of most southern Whigs, was rejected in favor of General Winfield Scott, but the platform supported the compromise measures of 1850 over the opposition of antislavery Northerners. Although Scott made a respectable showing in the South and won the electoral votes of Kentucky and Tennessee, sizable numbers of southern Whigs stayed home.
In the North, ex-Whigs, who had voted for the Liberty Party and the Free Soil Party, made up a majority of the voters for the Free Democrat, John P. Hale, who received nearly 5 percent of the popular vote. While Scott got more popular votes than any previous Whig candidate, in part because of the growth of the population, he was overwhelmed in both the popular (50.8 percent to 43.9 percent) and electoral (254–42) vote by Franklin Pierce.
The Whig Party was finally destroyed in the North by the emergence of the nativist Know-Nothings (the American Party) in local elections in 1853 and sectional furor over the Kansas-Nebraska Act in 1854. Most southern Whigs in 1856 joined the American Party, and most Northerners moved into the Republican Party, although a sizable number remained Know-Nothings until the party finally split over slavery. In 1856 a rump group of Whigs met in Baltimore and nominated Fillmore, who had previously been put forth by the American Party. This was to be the last formal act of the Whig Party.
Its early death made the once vibrant party of Clay, Webster, Adams, Seward, and Lincoln a mystery to modern Americans. The Whigs gave the nation not only some of the most important politicians of the Civil War era, such as Radical Republicans like Thaddeus Stevens and Charles Sumner who defined Reconstruction, but also the economic policy that, enhanced by a commitment to civil rights, became the “blueprint for modern America” when enacted by the Republicans during the Civil War and Reconstruction. Historians writing about the period often called the era of Jacksonian Democracy have frequently caricatured the Whigs. Since the mid-twentieth century, however, American historians have shown clearly that the development of American democracy has involved not only heroes of the Democratic Party, like Jefferson and Jackson, but their opponents as well.
FURTHER READING. Lee Benson, The Concept of Jacksonian Democracy: New York as a Test Case, 1961; Leonard P. Curry, Blueprint for Modern America: Nonmilitary Legislation in the First Civil War Congress, 1968; Ronald P. Formisano, The Birth of Mass Political Parties: Michigan, 1827–1861, 1971; Idem, The Transformation of Political Culture: Massachusetts Parties, 1790s–1840s, 1983; Michael F. Holt, The Rise and Fall of the American Whig Party: Jacksonian Politics and the Onset of the Civil War, 1999; Daniel Walker Howe, The Political Culture of the American Whigs, 1979; Robert V. Remini, Daniel Webster: The Man and His Times, 1997; Idem, Henry Clay: Statesman for the Union, 1991; William G. Shade, Democratizing the Old Dominion: Virginia and the Second Party System, 1824–1861, 1996; Joel H. Silbey, The Shrine of Party: Congressional Voting Behavior, 1841–1852, 1967; Glyndon G. Van Deusen, William Henry Seward: Lincoln’s Secretary of State, the Negotiator of the Alaska Purchase, 1967; Irvin G. Wyllie, The Self-Made Man in America: The Myth of Rags to Riches, 1954.
WILLIAM G. SHADE
The movement for woman suffrage is at once the historical foundation of American feminism and, along with the labor movement and the movement for black political and civil rights, one of the formative processes in the history of American democracy. Begun in the wake of Jacksonian franchise expansion, and reaching its formal victory in the late years of American progressivism, the struggle for women’s political rights is best understood less as a sustained campaign than as a series of distinct but cumulative movements, each with its own philosophies, strategies, constituencies, and leaders. The antebellum reform era, Reconstruction, late-nineteenth-century populism, and twentieth-century progressivism each witnessed its own characteristic campaign for women’s voting rights.
The initial exclusion of women from political rights was barely necessary to articulate, so obvious did it seem to all. In speaking of “persons” and “citizens,” the laws and constitutions of the early republic did not need to specify males; the identity of political personhood and maleness was generally assumed. The political virtue necessary for the trustworthy exercise of franchise rights was understood to require a level of rationality and personal independence that men, and only men, had. Women were too emotional, too economically dependent, too immersed in the private world of family to be imaginable as active public citizens; and besides, husbands represented their wives as fathers did their children in the larger family of the republic. The only exceptions to the widespread assumption that popular voting rights were thoroughly male in character were in church balloting, where women members participated, and during a brief, almost accidental episode of women voting—so long as they were unmarried and propertied—in New Jersey in the late eighteenth century. Once the anomaly was discovered, the legislature remedied its error, and New Jersey women slipped back with their sisters in other states into political invisibility.
The accepted date for the first clearly articulated demand for woman suffrage is 1848, made at the Seneca Falls, New York, women’s rights convention. The timing links the origins of woman suffrage advocacy to expansions in the franchise for all white men, regardless of property holding, which took place in the previous decades. Following the expansion of the white male franchise, political parties began to multiply, and popular involvement in partisan politics grew rapidly. The most controversial reform movement in the country, abolitionism, made its influence felt in party politics in 1848 with the establishment of the Free Soil Party. These and similar political moves in temperance reform made women’s interest in politics immediate and compelling, as well as a matter of egalitarian principle.
The first women to call for equal rights to the franchise did so as part of a broader demand for greater opportunities and equal rights: access to higher education, admission to all professions and trades, independent economic rights for married women, and the formal recognition of religious leadership and moral authority. In the words of the 1848 Seneca Falls Declaration of Sentiments, “Woman is man’s equal—was intended to be so by the Creator, and the highest good of the [human] race demands that she should be recognized as such.” Of all these demands, woman suffrage was the most controversial. Electoral politics not only was an exclusively male activity but also was thought to be corrupt and self-serving, which offended the moral sensibilities of the reform-minded women at the Seneca Falls convention. The author of the woman suffrage resolution, 33-year-old Elizabeth Cady Stanton, instead saw the right to vote as fundamental because it laid the basis for the political power to realize all their other demands. Her case for woman suffrage was supported by the one man at the Seneca Falls convention who also suffered disfranchisement: Frederick Douglass.
This controversy over woman suffrage led antebellum women’s rights advocates to focus on other demands, in particular full economic rights for married women. In some ways, this was a necessary precursor to full-fledged suffrage agitation, because the nearly universal condition of adult women was marriage, and so long as women lacked legal individuality and economic rights within that relationship, the case for their political empowerment was difficult to make. In addition, while changes in women’s economic rights could be made legislatively, enfranchisement had to occur through constitutional change—at that point state by state, a much more daunting prospect. Nonetheless, substantial numbers of women’s signatures on petitions were submitted in the 1850s in at least one state, New York, on behalf of their full franchise rights.
During the Civil War, Elizabeth Stanton and Susan B. Anthony, leaders of the antebellum movement in New York, were already calling for a reconstitution of the American nation on the basis of “civil and political equality for every subject of the Government,” explicitly including “all citizens of African descent and all women.” The demand for political equality rose to the summit of the women’s rights agenda in the years immediately following the war, in the wake of the commitment of the Radical wing of the Republican Party to establish the full national citizenship of ex-slaves and the federal voting rights of African American men. During Reconstruction, the constitutional locale for suffrage expansion shifted from the states to the national level, as evidenced by the Fourteenth and Fifteenth Amendments to the federal Constitution. This assertion of citizenship at the national level and the precedent for establishing a broad franchise in the U.S. Constitution connected the demand for woman suffrage with the resurgent nationalism of the postwar period. In this context, women’s rights agitation became a movement with a wider constituency, with political equality at its forefront.
In 1866, along with Lucy Stone of Massachusetts, Stanton and Anthony formed the American Equal Rights Association to link the struggles of woman and black suffrage and influence the constitutional amendment then under debate in the direction of broad, universal rights. Passage and ratification of the Fourteenth Amendment offered woman suffragists both hope and discouragement. On the one hand, the first section defined national citizenship quite broadly, as “all persons born or naturalized in the United States.” On the other hand, the second section, which established penalties against any states that persisted in disfranchising citizens, explicitly excluded women. For the first time, the U.S. Constitution employed the adjective “male” to modify the noun “citizens.” The Fifteenth Amendment, which explicitly prohibited disfranchisement on the basis of “race, color, or previous condition of servitude,” similarly ignored discriminations of “sex.”
At first Stanton and Anthony pressed for another amendment to the federal Constitution, to prohibit the states from denying the right to vote “on account of sex,” modeled exactly on the wording of the Fifteenth Amendment. They formed a society, the National Woman Suffrage Association (NWSA), and organized women from New York to California (but not in the former Confederacy) to campaign for political rights. Accusing this new organization of threatening the victory of black suffrage (for the Fifteenth Amendment was not yet ratified), a second group of women’s rights activists, under the leadership of Stone and her husband, Henry Blackwell, founded the American Woman Suffrage Association (AWSA), which held back from national constitutional demands in favor of state campaigns to establish women’s voting rights.
For a brief period, from about 1870 to 1875, the NWSA set aside its campaign for a new constitutional amendment in favor of pressing Congress and the courts to accept an innovative interpretation of the Fourteenth Amendment that would include woman suffrage. Ignoring the second section of the Amendment, they focused on the first section’s broad definition of national citizenship and argued that women were persons, hence national citizens. What other content could there be to national citizenship, their reasoning continued, than that of political enfranchisement? This strategic “New Departure,” as NWSA labeled it, was pursued through a bold campaign of direct-action voting during the 1872 presidential election. Women activists went to the polls by the hundreds, asserting their right to vote on the basis of this constitutional construction. And, amazingly, while many failed to cast their ballots, some were allowed to vote.
One of these was Susan B. Anthony, who talked a hapless election official into accepting her vote (for Ulysses S. Grant). Within days, in one of the most famous incidents in the history of the suffrage movement, Anthony was arrested by federal marshals for the crime of “illegal” voting. Ward Hunt, the judge assigned to her case, recognized its political explosiveness. He instructed the jury to find her guilty but did not execute the fine or penalty, thus preventing her from invoking habeas corpus and appealing her case up the judicial hierarchy. In 1875 the case of Missourian Virginia Minor brought the suffragists’ New Departure argument before the U.S. Supreme Court. Minor had been prohibited from voting and sued the St. Louis election official, Reese Happersett, for violation of her rights. In Minor v. Happersett, a brief but devastating decision, the Court ruled that while women were indeed national citizens, citizenship did not carry with it the inherent right to vote, which was instead a privilege bestowed by government on those deemed reliable and worthy to wield it. The Minor decision sent the woman suffrage movement back to the strategy of securing a constitutional amendment specifically to enfranchise women. A bill for such an amendment was first introduced into the U.S. Senate by Republican Aaron Sargent of California in 1878. The Minor decision also put the Court’s stamp on a narrow construction of the postwar amendments, and a highly conservative theory about voting rights in general, consistent with other decisions undercutting suffrage for freedmen.
During the last quarter of the nineteenth century, woman suffragism changed both as a popular movement and as a political demand. The expansion of white middle-class women’s public activities—in higher education, women’s clubs, and voluntary social welfare activities—created an enlarged constituency. Unlike the advocates of the antebellum period, these women were not generally radical, not particularly committed to a broad agenda of women’s emancipation, and interested in political participation less for principle than to gain leverage for their particular reform concerns. These changes in constituency overran the old antagonisms between AWSA and NWSA, and in 1890 the two groups came together in the National American Woman Suffrage Association (NAWSA), the largest, most inclusive suffrage organization for the next 30 years.
Meanwhile, the growth of rural radicalism was reshaping electoral politics. Rural and small-town people, fed up with the seemingly identical positions of the two major parties and feeling squeezed by the growth of national corporate power, formed third parties and won offices in state legislatures and governors’ mansions in the Midwest and West. The demand for woman suffrage was resurrected in these “People’s Parties,” not as it had been advanced in the 1860s but as constitutional change at the state level. In Colorado in 1893 and Idaho in 1896, voters not only swept Populist candidates into office but also voted amendments to their state constitutions enfranchising women. Women were able to vote in all elections held in these states, including those for president and U.S. Congress. Similar amendments to the Kansas and California state constitutions failed. But a second front in the battle for woman suffrage had now opened. By 1911, women in six states, all of them west of the Mississippi, were exercising their right to vote and thus becoming a force in national politics.
Within a decade the base of the woman suffrage movement had shifted from rural areas to cities and to wage-earning and college women. This last phase of the movement was a crucial element of progressive reform. Much more attuned to the politics of class developments than ever before, twentieth-century suffragists made their case for political rights in terms of amelioration of working women’s conditions, protection of poor mothers from the pressures of the labor market, and the contributions that women could make to government social and economic welfare programs. This shift in constituency had an impact on suffragist tactics. Suffrage activism became decidedly modern in tone and argument. Women activists marched in the streets (or drove their cars) in disciplined formation, used new media such as movies and advertising to advance their cause, and—perhaps most important—confidently entered the halls of legislatures to advocate for their cause.
Populist radicalism had disappeared from the political scene, but some of its causes—including woman suffrage—reappeared on the left (or progressive) wing of the Republican Party. Following the pattern of populist suffragism, progressive suffragists concentrated on state venues, in their case industrial powerhouses such as New York, Pennsylvania, Massachusetts, and Ohio. In parallel fashion, the Reconstruction-era campaign for an amendment to the federal Constitution was revived. In March 1913, suffragists marched in the streets of Washington, D.C., one of the first national demonstrations of this type, to demand that Congress pass legislation for a woman suffrage amendment to the federal Constitution. This led to the formation of a new suffrage organization, the Congressional Union, initially a division of NAWSA. Leaders Alice Paul and Lucy Burns were both college graduates and veterans of the British suffrage movement, whose members used more militant tactics. Personal, generational, and political differences separated them from the leadership of NAWSA, which still concentrated on lobbying politicians.
While the militants profited from the strategic agility of a small, cadre-based structure, the giant NAWSA had to hold together a tremendous diversity of women. In the context of early twentieth-century politics, the most explosive of these potential divisions had to do with race. The southern white women who worked within the Democratic Party were consistently shadowed with charges that a woman suffrage amendment would enfranchise black women and bring back the horrors of “black Republicanism.” Black woman suffrage advocates, who had been made unwelcome in NAWSA as early as 1899, were no more hospitably received among the militants, as Alice Paul considered the issue of racial discrimination a distraction from her cause. But whereas NAWSA was tied to a nonpartisan approach, lest the Republican and Democratic commitments of their different regional white constituencies come into conflict, the militants plunged directly into the national partisan fray.
Starting in 1914, when congressional legislation for a woman suffrage amendment began to make progress, the militants pressured the national Democratic Party to take up their cause. During the election of 1916, the Congressional Union renamed itself the National Woman’s Party. Its organizers traveled throughout the West urging enfranchised women to vote against the Democrats to penalize the party, especially President Woodrow Wilson, for not making women’s voting rights a party measure. In the short run, this strategy failed. Wilson was reelected, and within a month of his inauguration, the United States entered the Great War.
The final political maneuvers that led to the passage of congressional woman suffrage legislation played out in this context. Wilson, beholden to the southern wing of his party, initially wanted no part of the campaign for a federal amendment. However, as he turned his attention to his postwar plans, he saw the need for women’s political support. In 1918 he finally declared his support for a federal amendment. Even then, antisuffragists fought intense battles against the inevitable. The legislation passed the Senate in June 1919. The ratification process took another 14 months. In the end, the state that took the Nineteenth Amendment over the top and into the Constitution was Tennessee, one of the few southern states with two-party politics that suffrage advocates could mobilize.
Much ink has been spilled over the question of which wing of the Progressive Era suffrage movement, the NWP militants or the NAWSA moderates, was responsible for victory. During and immediately after the war, women were being enfranchised all over North America and Europe, and the ratification of the Nineteenth Amendment to the U.S. Constitution was part of this process. From an even wider framework, the credit goes not to a single organization or leader but to 75 years of building support among diverse constituencies of women, sufficient political will, and sophisticated arguments for women’s political equality with men.
While the passage of the Nineteenth Amendment did not lead to an immediate and dramatic change in voting patterns, neither did it terminate women’s political activism. Groups of women substituted policy and reform goals for their previous efforts to win the vote. Notably, NAWSA became the U.S. League of Women Voters. However, the 1920s was a conservative decade, and women voters found it difficult to advance many of their progressive goals. Within a decade, women’s voting had become so normal that younger women barely remembered the long and hard fight to win it.
See also feminism; voting.
FURTHER READING. Jean Baker, ed., Sisters: The Lives of America’s Suffragists, 2005; Nancy F. Cott, The Grounding of Modern Feminism, 1989; Ellen Carol DuBois, Feminism and Suffrage: The Emergence of an Independent Women’s Movement in America, 1848–1869, 1978; Idem, Harriot Stanton Blatch and the Winning of Woman Suffrage, 1997; Idem, Woman Suffrage and Women’s Rights, 1998; Eleanor Flexner, Century of Struggle: The Woman’s Rights Movement in the United States, revised ed., 1975; Lori Ginzberg, Untidy Origins: A Story of Woman’s Rights in Antebellum New York, 2005; Suzanne Marilley, Woman Suffrage and the Origins of Liberal Feminism in the United States, 1820–1920, 1997; Rebecca Mead, How the Vote Was Won: Woman Suffrage in the Western United States, 1868–1914, 2006; Rosalyn Terborg-Penn, African American Women in the Struggle for the Vote, 1850–1920, 1998; Marjorie Spruill Wheeler, New Women of the New South: The Leaders of the Woman Suffrage Movement in the Southern States, 1993.
ELLEN CAROL DUBOIS
Few Americans could have predicted that conflict in Europe in the summer of 1914 would lead to four years of war, U.S. military intervention, and the transformation of American politics. Decades of rivalry among the European powers had prompted minor conflicts in the Balkans and North Africa between 1909 and 1914; war came after Gavrilo Princip, a Bosnian Serb nationalist, assassinated Archduke Franz Ferdinand, heir to the throne of the Austro-Hungarian Empire, on June 28, 1914. Ultimatums and secret treaties drew all Europe’s major powers into war by the beginning of August, pitting the Central Powers of Germany, Austria-Hungary, and, later that year, the Ottoman Empire against an alliance of France, Britain, and Russia. On the battlefield, an initial German drive met stiff resistance from the British and French; by September 1914, the two sides had dug into a thousand-mile system of trenches that remained more or less unchanged until the war’s end four years later.
Americans reacted with concern to the outbreak of war, but many thought the conflict would be a minor clash; most, even the normally bellicose former president Theodore Roosevelt, urged inaction. Isolationists eschewed entangling alliances; progressives who believed that a century of peace had advanced society beyond war urged the United States to stay out. So did President Woodrow Wilson. Elected in 1912 due to a divided Republican Party, Wilson wanted to continue his domestic agenda. Together with a heavily Democratic Congress, Wilson had spent the first year of his term shepherding through reforms in labor relations and political economy.
Both parties hailed President Wilson’s call on August 19, 1914, that Americans be “impartial in thought as well as in action.” But in practice, the United States was never entirely neutral. News coverage leaned toward support for Britain; the cutting of transatlantic cables connecting North America with Germany ensured that Americans received nearly all their war news from a British perspective. Awareness of German atrocities in Belgium and gruesome industrialized warfare in the trenches—including machine guns, tanks, mustard gas, and daily casualties in the tens of thousands—horrified the American public and tended to amplify support for the Allies. Nor was the United States ever fully neutral in its actions: Americans more or less ceased trading with the Central Powers (especially after a British blockade of continental Europe) and lent them little money; by contrast, loans to Allied governments expanded, and trade with the Allied Powers increased fourfold. U.S. dependence on transatlantic commerce meant that German submarine warfare would increasingly pose a threat to American lives and livelihood.
On May 7, 1915, a German submarine torpedoed the British passenger ship Lusitania, killing 1,198 people, including 128 Americans. Wilson continued to speak of neutrality, insisting that “there is such a thing as a nation being so right that it does not need to convince others by force that it is right,” but his diplomatic communications with Germany were so stern that his antiwar secretary of state William Jennings Bryan resigned in protest on June 8, 1915. (Wilson replaced him with Robert Lansing, openly anti-German from the outset.) In Congress, supporters of neutrality—who included both southern Democrats and midwestern progressives—sought to keep the United States from being pulled into war by world events. Representative Jefferson McLemore (D-Texas) introduced a resolution blocking Americans from traveling on the ships of the warring powers. (Indeed, the Lusitania had been carrying munitions.) McLemore’s resolution was defeated; war was delayed by a German pledge in the spring of 1916 that it would not attack passenger ships without warning.
The issue of war dominated the 1916 election. Democrats campaigned for Wilson as the man who “kept us out of war.” The slogan referred not only to European events but to the Mexican Revolution as well. On March 9, 1916, revolutionary leader Pancho Villa led a raid on Columbus, New Mexico; Wilson responded with a massive deployment of U.S. troops under the leadership of Major General John J. Pershing. The border conflict raged as Congress debated the nation’s wartime “preparedness.” The National Defense Act of August 1916 increased the authorized strength of the U.S. Army and gave the president the power to federalize state militias for overseas service; the Naval Act of 1916 called for substantial construction of ships. At Chicago in June, Republicans nominated Supreme Court Justice Charles Evans Hughes, who ran a lackluster campaign. Nevertheless, the 1916 presidential election was one of the closest in American history. Wilson lost ten of the states he had won four years earlier, and had he not managed a 3,800-vote victory in California, Hughes would have entered the White House.
Soon after his reelection, Wilson proposed a negotiated end to the war. In a speech to the Senate on January 22, 1917, Wilson called for “peace without victory” and urged the formation of a postwar league of nations. Two weeks earlier, however, German war planners had already adopted a new strategy in the hope of breaking the war’s stalemate: unrestricted submarine warfare to starve the British and a final push on Paris. Renewed submarine attacks were sure to bring the United States into the war, but the Germans gambled that they could win before Americans fully mobilized. The Germans announced their plan on January 31; three days later Wilson severed diplomatic relations.
On March 1, the American public learned of the Zimmermann telegram, a cable from a German diplomat inviting Mexico to join Germany’s side of the war in exchange for the restoration of Mexican territory lost to the United States in 1848. Three attacks on American merchant ships in March 1917 brought renewed demands for U.S. entry into the war. Wilson called the newly elected 65th Congress into special session, and on April 6, 1917, Congress heeded his call to make the world “safe for democracy,” declaring war on Germany by a vote of 373 to 50 in the House of Representatives and 82 to 6 in the Senate.
The United States began the war with a comparatively small military force. Despite defense legislation passed the previous year, in April 1917, the army numbered just 120,000 men, the navy had about 300 ships, and neither officers nor enlisted men in either service had substantial field experience. On May 18, 1917, Wilson signed the Selective Service Act, requiring the registration of eligible men for conscription. The bill sharply divided Democrats; Wilson relied on the leadership of Representative Julius Kahn (R-California) to see it through Congress. Overall, some 24 million men between the ages of 18 and 45 registered; about 2.7 million were drafted, and about 2 million more volunteered (particularly in the navy and the marines, which did not rely on the draft). The War Department constructed 32 training camps (carefully distributing 16 in the North and 16 in the South), and while initial mobilization was slow, the army eventually moved 2 million troops to Europe in the space of 18 months.
War mobilization required a substantial expansion of federal presence into areas of Americans’ everyday lives. Lacking a large federal bureaucracy to manage the task—and drawing on the Progressive Era’s political culture of voluntarism—the Wilson administration tapped existing organizations and social networks to carry out much of the work on the homefront. In Washington, D.C., those volunteers included “dollar-a-year men,” corporate executives who took war leadership positions for a token salary. Among them was George Creel, a journalist who headed the Committee on Public Information (CPI). The CPI spread the Wilson administration’s case for the war, spending its $100 million budget on a media blitz and mobilizing tens of thousands of volunteers, known as “four-minute men” for the brief speeches they gave in movie theaters urging Americans to enlist, buy bonds, and save food. Voluntarist rhetoric also shaped the War Industries Board; Wilson tasked chairman Bernard M. Baruch with coordinating industrial production. In the winter of 1917–18, as fuel shortages hit consumers and tangled railroad schedules delayed needed war materials, the government took control of both the coal and railroad industries. The National War Labor Board, established in April 1918, managed relations between business and labor. During the war, unions enrolled 1.5 million members and won such victories as the eight-hour workday, equal pay for women, and collective bargaining rights, but many of labor’s gains were temporary and restricted to workers in war industries.
The United States Food Administration (USFA), established in May 1917, was led by Herbert Hoover, a business leader who had already earned an international reputation for coordinating relief efforts for European civilians. Americans did not experience rationing, except for some regulations on wholesalers and restaurants and modest limits on sugar. USFA policies did far less to increase the food supply than did the incentives of market forces. But the 500,000 volunteers (most of them women) who led local campaigns made the USFA a public success, and in the postwar era Hoover was the only American whose election to the presidency drew on his wartime record.
As the federal budget increased from $1 billion in 1916 to $19 billion in 1919, paying for the war became an ongoing political contest. Progressives supported increased taxation, and won modest victories in the application of income taxes and “excess profits” taxes on corporations; together these raised about one-third of the war’s costs. Most, however, came from the $23 billion raised through the bond sales of the Liberty Loan program. As in other facets of war mobilization, bond sales depended on the arm-twisting of local volunteers, a mass media campaign, and substantial financial incentives for large-scale purchasers.
The Wilson administration mobilized its supporters and suppressed its opponents. In June 1917, the Espionage Act drastically restricted freedom of speech; in May 1918, amendments collectively known as the Sedition Act went even further. Thousands were arrested, most of them German Americans, pacifists, or radical leftists. Eugene V. Debs, leader of the Socialist Party and winner of over 900,000 votes in the 1912 election, was sentenced to ten years in prison for a speech he gave in Canton, Ohio, in June 1918; the radical Industrial Workers of the World was essentially crushed. German citizens in the United States lived under the Alien Enemies Act; about 6,000 were interned over the course of the war. States substantially amplified federal legislation; voluntary associations lent a hand as well. Various organizations challenged wartime restrictions, including several New York groups that coalesced into the American Civil Liberties Union after the war. They won few victories.
Wartime politics accelerated some political movements that had been on the national agenda for decades. Temporary measures meant to conserve grains and regulate soldiers’ drinking prompted the adoption of Prohibition as national policy; in December 1917, Congress sent the Eighteenth Amendment, prohibiting the production or sale of alcohol, to the states; it was ratified in January 1919. Supporters of woman suffrage—who had been pressing the issue at the state level with little success—made a political breakthrough during the war. Millions of moderate suffragists in the National American Woman Suffrage Association called for the vote as a reward for wartime sacrifice; radicals in the smaller National Woman’s Party marched before the White House to embarrass the Wilson administration. Bitter rivals, the two groups contributed separately to the passage of the Nineteenth Amendment, ratified in August 1920.
European immigrants found that war opened some avenues for inclusion and closed others. Jewish and Catholic groups participated prominently in war mobilization; at the same time, concerns about ethnic diversity led to strictures against private schools and bilingual education and to an early attempt to establish a federal Department of Education to regulate schools. Submarine warfare and European conscription all but closed off transatlantic migration after 1914, and the changing world situation heightened Americans’ concerns with national identity. In February 1917, over Wilson’s veto, Congress adopted legislation requiring a literacy test for migrants and effectively barring nearly all Asian migrants. Further restrictive acts in 1921 and 1924 shaped the demographic character of American society for two generations.
Although African American organizations overwhelmingly supported the war effort, the black press and black political groups were subject to systematic surveillance. Individual black workers in the South—many of whom migrated to cities in the South or North—faced intimidation. Violence culminated in 1919, when some 70 African Americans were murdered in public lynchings and race riots rocked cities such as Washington, D.C.; Omaha, Nebraska; and Chicago, Illinois. Ideological shifts and the death of Booker T. Washington in 1915 opened the door for a new generation of leaders; the National Association for the Advancement of Colored People added thousands of names to its rolls in 1919, and Marcus Garvey began recruiting members to the Universal Negro Improvement Association, which established its first U.S. branches in 1917.
At the front, the American Expeditionary Force under General Pershing kept its distance; the United States insisted on being called an “associated” rather than an “allied” nation, lest its men be used as cannon fodder by the British and French generals whom Pershing disdained. American troops participated in large numbers in the Second Battle of the Marne in July 1918 and played a key role in the Battle of the Meuse-Argonne in September–November 1918, an extended assault that pushed back the Germans. Soon the armies of the Central Powers surrendered, and their governments collapsed; an armistice, signed on November 11, 1918, brought the fighting to an end. About 116,000 Americans had lost their lives, nearly half of them from disease, especially the global influenza epidemic of 1918.
On May 27, 1918, President Wilson told congressional leaders that for the duration of the war, “politics is adjourned,” but nothing could have been further from the truth. Wilson’s wartime relations with Congress were never easy, and after the armistice they worsened. Voters frustrated by Wilson’s war policies and the increased cost of living targeted the Democrats in the 1918 midterm elections—all the more so because in October, Wilson had asked the American public to treat the election as a referendum on the war and vote for Democrats. The move backfired, and Republicans took decisive control of both houses.
The postwar Congress faced several pressing issues. Political and social unrest at home dominated the headlines in 1919. The Bolshevik Revolution of November 1917 had brought radical socialists to power in Russia; soon thereafter the Russians left the war, signing a treaty at Brest-Litovsk on March 3, 1918. Widespread belief in the United States that German agents had fomented the Bolshevik Revolution (Germany had given modest support to Vladimir Lenin) fanned fears of espionage and subversion in the United States. Conflict peaked during the Red Scare of 1919. A general strike in Seattle, Washington, in February and a shutdown of the steel industry in September galvanized popular support for drastic measures by states and the federal government aimed at radicals, unionists, and noncitizens. The Justice Department’s Bureau of Investigation (later renamed the Federal Bureau of Investigation) expanded in size and power.
International issues also occupied Americans’ minds, particularly the peace settlement. On January 8, 1918, Wilson had announced the famous Fourteen Points that he believed could guide postwar relations. Most of the points were specific calls for territorial adjustment, reflecting Wilsonian principles of free trade, national self-determination, and freedom of the seas; the final point called for a “general association of nations.” Wilson personally led the 1,300-person American delegation to the peace negotiations in Paris. The Treaty of Versailles, finally signed on June 28, 1919, little resembled Wilson’s proposals, but he hoped that a functioning League of Nations (as called for in Article Ten of the treaty) could hammer out any remaining details. Returning to Washington, Wilson urged the Senate on July 10, 1919, to adopt the treaty or “break the heart of the world.”
In the Senate, supporters of the treaty (mostly Democratic Wilson loyalists, now in the minority) had to sway the votes of senators who gathered in blocs of mild reservationists, strong reservationists, and “irreconcilables”—senators opposed to the treaty in any form. Wilson embarked on a national speaking tour to build support for the League of Nations but had to return to Washington after collapsing in Pueblo, Colorado, on September 25, 1919. He suffered a stroke on October 2, 1919, from which he never fully recovered, and the nation entered a constitutional crisis that was carefully hidden from public view. First Lady Edith Wilson controlled access to the president and wielded extraordinary power together with Wilson’s secretary Joseph Tumulty. (Vice President Thomas Marshall, widely regarded as a political nonentity, was excluded from decision making.) Wilson’s illness meant that the League fight had lost its leader, and the treaty twice went down to defeat. On November 19, 1919, Senator Henry Cabot Lodge called for a vote on the treaty with his reservations attached, but Wilsonian Democrats and the irreconcilables joined together to block it. Then, on March 19, 1920, the Senate voted down Wilson’s original version.
World War I substantially expanded the presence of the federal government in Americans’ everyday lives and brought the United States to leadership on the world stage. Wartime politics brought culminating victories for some progressive causes but an end to progressivism in general. The war divided the Democratic Party, united the Republicans, and set the course of American politics until the Great Depression a decade later.
See also foreign policy and domestic politics, 1865–1933; progressivism and the Progressive Era, 1890s–1920.
FURTHER READING. Christopher Capozzola, Uncle Sam Wants You: World War I and the Making of the Modern American Citizen, 2008; John Whiteclay Chambers, To Raise an Army: The Draft Comes to Modern America, 1987; John Milton Cooper, Breaking the Heart of the World: Woodrow Wilson and the Fight for the League of Nations, 2001; Ellis W. Hawley, The Great War and the Search for a Modern Order: A History of the American People and Their Institutions, 1917–1933, 1979; David M. Kennedy, Over Here: The First World War and American Society, 1980; Theodore Kornweibel, Jr., “Investigate Everything”: Federal Efforts to Compel Black Loyalty during World War I, 2002; Arthur S. Link et al., eds., The Papers of Woodrow Wilson, 69 vols., 1966–94; Seward Livermore, Politics Is Adjourned: Woodrow Wilson and the War Congress, 1916–1918, 1966; Joseph A. McCartin, Labor’s Great War: The Struggle for Industrial Democracy and the Origins of Modern American Labor Relations, 1912–1921, 1997; Paul L. Murphy, World War I and the Origin of Civil Liberties in the United States, 1979.
CHRISTOPHER CAPOZZOLA
World War II had a powerful impact on American life. The most extensive conflict in human history changed political, diplomatic, economic, and social configurations and provided the framework for the postwar years. Forced to work closely with other members of the Grand Alliance—Great Britain and the Soviet Union—to defeat the Axis powers—Germany, Italy, and Japan—the United States became, in President Franklin D. Roosevelt’s words, the “arsenal of democracy.” In the process, the nation overcame the ravages of the Great Depression of the 1930s, became a dominant world power, and prepared for a new era of prosperity when the war was won.
Military victory was always the first priority. Roosevelt was willing to do whatever was necessary to defeat the nation’s foes in Europe and Asia. He understood the need for American involvement in the struggle after Germany rolled into Poland in September 1939, even though formal entrance did not come until the surprise Japanese attack on the American fleet at Pearl Harbor, Hawai‘i, on December 7, 1941. The United States had begun to prepare for war with a major increase in defense spending in 1940 but still found itself at a disadvantage with the destruction of ships and planes in Hawai‘i. Japan’s calculation that it could win the Pacific war before the United States could revive failed in the face of a huge mobilization effort. The tide turned at the Battle of Midway in mid-1942. American carrier-based planes defeated the enemy and dealt a major blow to Japanese military might. The United States continued its relentless campaign by attacking island after island in preparation for a final assault on the Japanese home islands.
Meanwhile, the United States was engaged in top-level diplomacy to craft a combined military strategy in Europe. Roosevelt met with British Prime Minister Winston Churchill even before American entrance into the war and settled on attacking the Axis first in North Africa, through what Churchill called the “soft underbelly,” rather than launching a frontal attack on Germany. Though Roosevelt initially favored direct engagement, and Joseph Stalin, autocratic leader of the Soviet Union, likewise sought action to reduce pressure on the Eastern front, Churchill, mindful of the huge losses in the trenches during World War I, refused to push ahead directly until he was assured of success. Roosevelt and Churchill met again at Casablanca, Morocco, in early 1943, after the successful North African campaign, and determined to move into Italy next. Other meetings, which now included Stalin, took place at Teheran, Iran, in late 1943; at Yalta, in the Crimea, in early 1945; and finally at Potsdam, Germany, in mid-1945. Those meetings called for the cross-channel invasion that began on D-Day, June 6, 1944, and culminated in the defeat of Germany a year later. They also confronted the larger political questions of the shape of the postwar world, settling the future borders of Poland and the Allied occupation zones for Germany.
Political considerations likewise played a part in bringing the war in the Pacific to an end. Atomic energy became an issue in this campaign. The Manhattan Project to create a new atomic bomb had its origins in a letter from the world-famous physicist Albert Einstein to Roosevelt in August 1939, suggesting that a self-sustaining nuclear chain reaction splitting the nuclei of uranium atoms might unleash a tremendous amount of energy. Roosevelt was interested, and the committee he established grew into a huge operation, in time including 37 facilities in the United States and Canada. Significantly, the United States told Great Britain about the development effort but chose not to divulge that information to the Soviet Union, a decision that had important postwar implications. Meanwhile, Roosevelt assumed from the start that the bomb, if it could be created, was a weapon of war to be used when ready. But Roosevelt died in April 1945 before the bomb was available, and Harry S. Truman, his successor, had to make the decision about its use. As the end of the war approached, the U.S. Navy proposed a blockade of the Japanese islands, the U.S. Army prepared for a major invasion, and some American diplomats suggested that the Japanese might surrender if assured they could retain their emperor. Truman chose none of those courses, instead letting the process already under way continue to its logical conclusion. So two atomic bombs were dropped—on Hiroshima first, then Nagasaki—in August 1945. The war ended a week later.
A political commitment to focusing on military issues above all had significant consequences. Though some government officials understood the dimensions of Adolf Hitler’s “Final Solution” to exterminate all Jews, a persistent anti-Semitism in the State Department prevented word from reaching those in authority who might have taken action to save some of the victims. Only toward the end of the war did the United States begin to deal effectively with refugees from Nazi Germany. Even so, Roosevelt was not ready to move aggressively in any way that he thought might compromise the military effort, and his single-minded concentration on what he felt were the major issues of the war made him less sensitive to the plight of people he might have helped.
Roosevelt also acquiesced in the internment of Japanese Americans on the West Coast of the United States. Tremendous hostility followed the Japanese attack on Pearl Harbor and led to demands that all Japanese—even those born in the United States and therefore American citizens—be evacuated, and eventually detained in ten camps in a number of Western states. In a debate at the top levels of government, FDR sided with Secretary of War Henry L. Stimson on the need to move out all West Coast Japanese to forestall sabotage and bolster national security. Executive Order 9066 gave military officials the power to “prescribe military areas . . . from which any or all persons may be excluded,” and a new War Relocation Authority established the detention camps in which 110,000 Japanese Americans spent the war. It was a travesty born of the single-minded effort to win the war as quickly as possible.
Roosevelt used his political clout to embark on a major industrial mobilization effort. He understood that putting the nation on a war footing required enormous organizational adjustments. As Stimson observed, “If you are going to try to go to war, or to prepare for war, in a capitalist country, you have got to let business make money out of the process or business won’t work.” Business leaders who had incurred presidential wrath for resistance to New Deal programs now found themselves in demand to run the government agencies coordinating war production. Paid a dollar a year by the government, these businessmen remained on company payrolls and continued to be aware of the interests of their corporations. They helped devise incentives to get business to cooperate, including the cost-plus-a-fixed-fee system, in which the government paid companies for all development and production costs for wartime goods and then paid a guaranteed fee as profit on top.
Political considerations surfaced as a huge network of wartime agencies developed to coordinate war production. Military leaders assumed a dominant role and sometimes complicated the bureaucratic process. When mobilization failed to work effectively, Roosevelt, who never liked to dismantle administrative structures or fire people who worked for him, responded by creating one agency after another, with new ones often competing with old ones, to produce the weapons of war. That pattern let him play off assistants against one another and retain final authority himself. “There is something to be said . . . for having a little conflict between agencies,” he once said. “A little rivalry is stimulating, you know. It keeps everybody going to prove that he is a better fellow than the next man.” And, of course, the final injunction was to “bring it to Poppa.”
On the mobilization front, too, one agency followed another. There was the National Defense Advisory Commission, then the Office of Production Management, then the War Production Board, and eventually the Office of War Mobilization. And there were comparable agencies dealing with employment, wage and price levels, and a host of other issues.
The system worked well. The economy, benefiting from a quadrupling of defense spending in 1940, quickly moved into high gear, and the corrosive unemployment, which had been the most prominent feature of the Great Depression, vanished. By the middle of 1945, the United States had produced 300,000 airplanes, 100,000 tanks and armored cars, and 80,000 landing craft, along with 15 million guns and 41 billion rounds of ammunition.
Always the astute politician, Roosevelt recognized that propaganda could help mobilize support for the war. Yet he was concerned about the excessive exuberance of the Committee on Public Information, which had been the propaganda agency during World War I, and he was intent on keeping control in his own hands. To that end, he established a new Office of War Information to help get the message about America’s role in the war to people at home and abroad. Made up, in characteristic fashion, of a series of predecessor agencies, such as the Office of Facts and Figures and the Foreign Information Service, the Office of War Information sought to broadcast and illuminate the nation’s aims in the war. It portrayed the liberal terms of Roosevelt’s “four freedoms”—freedom of speech, freedom of worship, freedom from want, and freedom from fear—and of the Atlantic Charter, agreed upon by Roosevelt and Churchill, which endorsed the self-determination of nations, equal trading rights, and a system of general security.
For groups that had suffered discrimination in the past, the war brought lasting social and economic gains that changed the political landscape. For women and African Americans in particular, the war was beneficial and provided a model for future change.
Women were clearly second-class citizens at the start of the struggle. Many occupations were closed to them, and in the positions they did find, they usually earned less than men. The huge productive effort gave women the chance to do industrial work, especially as military service took men overseas. “Rosie the Riveter” posters encouraged women to work in the factories, and they did. At the peak of the industrial effort, women made up 36 percent of the civilian workforce. At the same time, demographic patterns changed. In the past, working women had usually been single and young. Now an increasing number of married women found their way into the workforce, and by the end of the war, half of all female workers were over 35.
African Americans likewise benefited from wartime needs. When the war began, their unemployment rate was double that of whites, and they found themselves concentrated in unskilled jobs. They faced constant slights. One black American soldier who was turned away from a lunchroom in Salina, Kansas, watched German prisoners of war served at the same counter. “This was really happening,” he said. “It was no jive talk. The people of Salina would serve these enemy soldiers and turn away black American G.I.s.”
Blacks pushed for equal opportunities. The Pittsburgh Courier, an influential African American newspaper, proclaimed a “Double V” campaign—V for victory in the war overseas and V for victory in the campaign for equality at home. In 1941, A. Philip Randolph, head of the Brotherhood of Sleeping Car Porters, pushed for a massive march on Washington, D.C., to dramatize the cause of equal rights, and only called it off when Roosevelt signed Executive Order 8802 creating the Fair Employment Practices Committee to investigate complaints about discrimination and take appropriate action. Meanwhile, black airmen finally got the chance to fly, and black students picketed segregated restaurants in Washington, D.C., thus foreshadowing the civil rights movement of the 1950s and 1960s.
The world of electoral politics reflected the transformations taking place at home. Electoral contests have long helped to articulate national values and views, and the major wartime elections—presidential and congressional—clearly registered wartime concerns.
The war brought a change of focus. Roosevelt recognized that New Deal reform had run its course by the time the war began. He summed up the transformation in a press conference at the end of 1943. The New Deal had come about when the patient—the United States—was suffering from a grave internal disorder. But then, at Pearl Harbor, the patient had been in a terrible external accident: “Old Dr. New Deal didn’t know ‘nothing’ about legs and arms. He knew a great deal about internal medicine, but nothing about surgery. So he got his partner, who was an orthopedic surgeon, Dr. Win-the-War, to take care of this fellow who had been in this bad accident.” At the end of the 1930s, Roosevelt had begun to encounter a coalition of Republicans and conservative Democrats who resisted further liberal initiatives. That congressional coalition remained intact for the duration of the war, dismantling remaining New Deal programs but providing the president with full support for the military struggle. Democrats retained congressional majorities in both houses, but Roosevelt had to back away from programs not directly related to the war.
In 1940 Roosevelt sought an unprecedented third presidential term. Recognizing that American involvement in the European war was likely, he felt he had no choice but to run. He faced Republican Wendell Willkie, an Indiana business executive who argued that the New Deal had gone too far. When Willkie asserted that Roosevelt would lead the nation into war, the president declared, “I have said this before, but I shall say it again and again and again: Your boys are not going to be sent into any foreign wars.” Reminded that an attack might leave him unable to keep his promise, he retorted that in case of an attack, it would no longer be a foreign war. Roosevelt won nearly 55 percent of the popular vote and a 449 to 82 victory in the Electoral College.
Four years later, Roosevelt chose to run again. The war was still underway, and while the president was politically strong, he was now ailing physically. He suffered from heart disease and appeared worn out. Because of the precarious state of his health, the choice of a running mate became increasingly important, and the Democratic Convention nominated Senator Harry Truman as the vice presidential candidate. This time, Roosevelt ran against Republican Thomas E. Dewey, governor of New York. Fighting back personal attacks, Roosevelt rose to the occasion and was victorious again. He won about 54 percent of the popular vote, with a 432-to-99 electoral vote margin.
World War II changed the course of U.S. history. It enlisted the support of the American people, on the battlefield and in factories back home. It forced the nation to work closely with its allies to defeat a monumental military threat. And in the process, it changed political configurations at home and abroad as the United States faced the postwar world.
See also foreign policy and domestic politics since 1933; New Deal Era, 1932–52.
FURTHER READING. Michael C. C. Adams, The Best War Ever: America and World War II, 1994; John Morton Blum, V Was for Victory: Politics and American Culture During World War II, 1976; John W. Jeffries, Wartime America: The World War II Home Front, 1996; William L. O’Neill, A Democracy at War: America’s Fight at Home and Abroad in World War II, 1993; Richard Polenberg, War and Society: The United States, 1941–1945, 1972; Allan M. Winkler, Franklin D. Roosevelt and the Making of Modern America, 2006; Idem, Home Front U.S.A.: America during World War II, 2nd ed., 2000; Idem, The Politics of Propaganda: The Office of War Information, 1942–1945, 1978.
ALLAN M. WINKLER