C


cabinet departments

The first three cabinet departments—the Departments of State, War, and the Treasury—were organized in 1789 by the U.S. Congress at President George Washington’s urgent request. The addition of the Department of Homeland Security in 2002 brought the total number of cabinet departments created since 1789 to 15. Charting the emergence of cabinet departments provides a shorthand guide of sorts to American history. The westward expansion of settlement, the extension of agriculture into unfamiliar environments, industrialization, urbanization, the emergence of the United States as a world power, and the advent of twentieth-century social movements in pursuit of political equality and economic security for all American citizens—each of these trends in American history was eventually reflected in the president’s cabinet. Indeed, in a nation with a political culture supposedly premised on a mistrust of “big government,” the existence of 15 cabinet departments within the executive branch stands as a frank acknowledgment of reality: there are, in fact, matters that require oversight and management on the part of the federal government. The order in which the cabinet departments were created thus provides not just a rough indication of the general contours of American history but also suggests how and when Americans arrived at a consensus that the role of the federal government ought to be expanded to address a particular problem.

Political scientists often refer to “inner” and “outer” cabinet departments. Inner cabinet departments are generally understood as those performing essential governmental tasks, including national defense, finance, and enforcement of the law. These were among the first cabinet departments formed. The outer cabinet departments, by contrast, emerged over the course of the nineteenth and twentieth centuries as new and unforeseen problems surfaced. The creation of these cabinet departments often met with controversy and delay, but most Americans have come to accept the role of the federal government in these areas as well.

The Inner Cabinet Departments

The inner cabinet departments perform functions nearly universally recognized as essential for any successful nation-state. These functions include forging relations with foreign nations, providing for the national defense, ensuring a sound financial system, and enforcing the law. Accordingly, during the first months of the new government in 1789, George Washington lobbied Congress successfully for the establishment of the Departments of War, State, and the Treasury. President Washington also established the Office of the Attorney General, forerunner to the Department of Justice and the fourth inner cabinet department. The Department of Homeland Security, created in the wake of the September 11, 2001, terrorist attacks on the United States, has arguably been the only addition made to the inner cabinet since 1789.

The Department of State. The Department of State was the first cabinet department created by Congress and is the oldest department in the executive branch. President Washington signed the law creating a Department of Foreign Affairs in July 1789 in order to formulate and carry out the new nation’s foreign policy. After Congress placed a number of domestic duties into the new department’s portfolio—including the administration of the census and the management of both the U.S. Mint and the nascent Library of Congress—the department was renamed the Department of State in September 1789. Although these domestic obligations were shuttled to other departments during the nineteenth century, the name stuck. Since its inception, the State Department has served as the primary representative of the American government and its citizens in the international community. It assumes responsibility for maintaining diplomatic relations with other nations and assists and protects U.S. citizens traveling or living overseas.

The office of the secretary of state was a prestigious position during the early years of the republic—Thomas Jefferson, James Madison, James Monroe, and John Quincy Adams all used the State Department as a stepping-stone to the presidency. But given its emphasis on foreign policy, it is not surprising that the department itself grew slowly during the early nineteenth century, when isolationism and a focus on continental expansion prevailed. As the United States became a leading exporter of goods in the decades following the Civil War, however, the State Department’s consular functions became more important. And as the United States became more involved in hemispheric and then world affairs beginning in the late nineteenth century, the department grew accordingly. State Department officials also took steps to professionalize the department. Written examinations—complete with foreign-language tests—began in 1895. Secretary of State Philander Knox (1909–13) introduced geographic divisions into the organization of the department and encouraged area expertise among its employees. The Rogers Act of 1924 unified the department’s diplomatic and consular services, thereby creating the modern Foreign Service of the United States. By World War II, the department was on its way to developing a career-oriented foreign service complete with improved salaries, merit-based promotions, and an increased emphasis on language and cultural training.

The development of the State Department during the World War II and cold war eras was even more notable. The number of employees at the department increased from 1,100 to nearly 10,000 between 1940 and 1950. In 1949, moreover, the department reorganized into geographic bureaus focusing on inter-American, Far Eastern, European, Near Eastern, African, and international organization affairs, reflecting the geographic scope of postwar American foreign policy.

Other additions, such as the Bureau of Economic Affairs (1944), the Bureau of Intelligence (1957), and the Bureau of Cultural Affairs (1960), indicated the wide range of interests that the shapers of U.S. foreign policy had in the rest of the world. And whereas the secretary of state position in the nineteenth century proved to be a prestigious perch from which to launch a bid for the presidency, during these immediate post–World War II years when State Department influence was at its apex, secretaries of state George C. Marshall (1947–49), Dean Acheson (1949–53), and John Foster Dulles (1953–59) pursued cold war policies that endured for more than a generation, profoundly affected the U.S. role in international affairs, and altered American domestic politics. Yet at the very height of its influence, the State Department lost its monopoly on foreign affairs. Three cold war era creations—the Defense Department, the Central Intelligence Agency, and the National Security Council—circumscribed the State Department’s power to shape American foreign policy. Secretary of State Henry Kissinger (1973–77) wielded considerable influence in the administration of President Richard Nixon, for example, but this was due as much to his concurrent position as national security advisor as to the power inherent in the office of the secretary of state. Similarly, President George W. Bush’s secretary of state, Colin Powell (2001–5), proved unable to restrain the administration’s hawkish foreign policy in advance of the 2003 invasion of Iraq.

Nevertheless, the Department of State remains the primary agency in charge of implementing U.S. foreign policy, even if it now competes with other policy actors at the formulation stage. The State Department currently maintains relations with nearly 180 countries, and, in 2007, it had approximately 30,000 employees and a budget of $10 billion.

The Department of the Treasury. Established in September 1789, the Department of the Treasury is the second oldest cabinet department still in existence. The Treasury Department performs many different functions, unified by money. It functions as the chief manager of the nation’s financial and economic policies and ensures the solvency of the U.S. government. Its domestic duties include overseeing the collection of taxes and tariffs, allocating budgeted funds, borrowing the money necessary to operate the federal government, safeguarding the integrity of the nation’s banks, manufacturing the nation’s coins and printing its currency, and advising the president on matters of domestic and international economics.

The Department of the Treasury has historically served as one of the largest law enforcement agencies in the federal government: it enforces federal tax laws and investigates counterfeiting and the evasion of taxes and customs. These responsibilities have remained relatively consistent since 1789, although the department’s power to manage the nation’s finances naturally increased commensurate with the increasing power of the federal government. The Treasury Department expanded considerably during the Civil War, for example, when secession led to both a precipitous drop in revenue and a costly war to reunite the Union. The Bureau of Internal Revenue (forerunner to the Internal Revenue Service) was established in July 1862 to ensure a steady stream of revenue during the war effort. Similarly, the increased role of the United States in world affairs in the aftermath of World War II led to an expansion in Treasury Department activities. The department helped shape the 1944 United Nations Monetary and Financial (Bretton Woods) Conference and has remained one of the dominant influences on the International Monetary Fund and the World Bank.

On the other hand, more recent events have stripped the Treasury Department of many of its law enforcement functions. Heightened concerns surrounding national security in the twenty-first century led to the transfer of the U.S. Customs Service and the U.S. Secret Service to the new Department of Homeland Security in 2002; the law enforcement arm of the Bureau of Alcohol, Tobacco, and Firearms was transferred to the Department of Justice by the same Homeland Security Act. This reorganization left the modern Treasury Department with an $11 billion annual budget and 110,000 employees spread throughout the Alcohol and Tobacco Tax and Trade Bureau, the Comptroller of the Currency, the Bureau of Engraving and Printing, the Financial Crimes Enforcement Network, the Internal Revenue Service, the U.S. Mint, the Bureau of the Public Debt, and the Office of Thrift Supervision.

The Department of Defense. Perhaps the most readily identifiable cabinet department, the Department of Defense is responsible for training, equipping, and deploying the military forces that defend the security of the United States and advance the nation’s interests abroad. The Defense Department is the largest cabinet department in terms of human resources. It manages 1.4 million active-duty military men and women and 1.2 million Army Reservists and National Guard members. The Defense Department also employs approximately 700,000 civilians. Its budget was $440 billion in 2007, although it remains unclear how much of the costs of the ongoing wars in Afghanistan and Iraq were included in that figure.

The Department of Defense is the direct successor to the War Department, which was originally established alongside the Foreign Affairs (State) Department in July 1789 and was in charge of all land military forces from 1789 until 1947. Concerns over disorganization and inefficiency grew as the Army, Navy, and Air Force developed into discrete military units and made coherent military planning increasingly difficult. Between 1920 and 1945, for example, over 50 bills called for the uniting of the armed forces into a single organization, and concerns about inefficiency and redundancy were only exacerbated by the nation’s participation in World War II and the looming cold war.

These anxieties culminated in the National Security Act of 1947, which eliminated the War Department and subsumed the Departments of the Army, Navy, and Air Force under the newly created National Military Establishment (NME). A series of 1949 amendments to this National Security Act reconstituted the NME as the Department of Defense, stripped the service branches of department status, and centralized command and control of all branches of the armed forces under the secretary of defense.

The Department of Justice. Often described as “the largest law firm in the nation,” the Department of Justice is charged with enforcing federal laws. As is the case with the Department of Defense, the Justice Department can be traced back to the events of 1789, even though the department itself did not come into existence until much later. In 1789 Congress established the office of the attorney general to represent the interests of the United States at the Supreme Court and to advise the president on legal matters. But the legal work of a growing nation quickly became more than the small office of the attorney general could manage. An avalanche of costly litigation during the Civil War era led to the creation of the Department of Justice.

As established by the Judiciary Act of 1870, the Department of Justice—led by the attorney general—was ordered to conduct the legal business of the federal government, including all civil and criminal cases in which the United States had an interest. The 1870 legislation also established the Justice Department as the primary agency responsible for the enforcement of federal law. This obligation to enforce federal laws has meant that as such laws have moved into new legal territory, so, too, has the department expanded: alongside the original Civil, Criminal, and Tax Divisions, the modern Justice Department also houses Antitrust, Civil Rights, Environment and Natural Resources, and National Security Divisions—all legacies of legal developments in the twentieth and twenty-first centuries.

Also included in the current Justice Department are the Drug Enforcement Administration, the Federal Bureau of Investigation, the Federal Bureau of Prisons, and the U.S. Marshals Service. Since 2003 the department has housed the Bureau of Alcohol, Tobacco, Firearms, and Explosives. In 2007 the Department of Justice employed 110,000 people and had a budget of $23 billion.

The Department of Homeland Security. The Department of Homeland Security’s purpose is to safeguard the homeland against catastrophic domestic events, including acts of terrorism as well as natural disasters. The product of the largest governmental reorganization since the National Security Act of 1947 and an amalgam of 22 agencies and bureaus—such as the U.S. Customs Service and the U.S. Secret Service (both from the Treasury Department), the Immigration and Naturalization Service (from the Department of Justice), the U.S. Coast Guard, and the Federal Emergency Management Agency—Homeland Security instantly became the third-largest cabinet department upon its creation in 2002, with approximately 200,000 employees. Early signs suggest that this reorganization has not been seamless. The department’s hapless response to Hurricane Katrina in 2005 raised many questions about its readiness to handle similar natural disasters or a major terrorist attack. The Department of Homeland Security budget was $46.4 billion in 2008.

The Outer Cabinet Departments: 1849–1913

The cabinet departments created during the nineteenth and early part of the twentieth centuries—the Departments of the Interior, Agriculture, Commerce, and Labor—reflected the demands of a nation in the throes of westward expansion and economic upheaval.

The Department of the Interior. The Department of the Interior has been one of the more enigmatic cabinet departments. Proposals for a “home department” or a “home office” to manage federal territories, Indian affairs, and internal improvements surfaced as early as 1789, but the Department of the Interior was not created until 1849, when the present-day southwestern United States became part of the national domain as a result of the U.S.-Mexican War. Yet even though the heart of the new Interior Department was the General Land Office, transferred from the Treasury Department in 1849, the department’s identity as the primary manager of the nation’s public lands and natural resources remained partially obscured by its other tasks during much of the nineteenth century. In addition to the General Land Office, the department was given so many other miscellaneous offices—the Patent Office, the Office of Indian Affairs, a Pension Office serving Army veterans, the nation’s first Office of Education, the first federal Bureaus of Agriculture and Labor—that it became known as “the department of everything else.” There seemed to be little coherence in the department’s early mission.

But the history of the Department of the Interior from the late nineteenth century to the present is one in which it cast off many of these miscellaneous tasks and focused more intently on land management, natural resource use, and conservation—so much so that Secretary of the Interior Harold L. Ickes (1933–46) fought to reconstitute it as the Department of Conservation. If anything, though, the increased specialization in natural resource management only made the department’s mission more complex: at times the Interior Department facilitated the development—even rank exploitation—of the country’s natural resources; at other times it enforced strict conservation and preservation measures. The U.S. Geological Survey, for example, was created within the Interior Department in 1879 to survey lands and mineral resources in order to facilitate the development of the U.S. West. But between the 1870s and the 1890s, the department also preserved lands that would eventually become Yellowstone, Yosemite, Sequoia, and Mount Rainier National Parks.

The pattern of exploitation and conservation continued into the twentieth century: the department’s Bureau of Reclamation rarely encountered a river it would not dam for irrigation and hydroelectricity, but the National Park Service and the Fish and Wildlife Service walled off millions of acres of land from future development. When the Interior Department tried to strike a balance between preservation and use, it did so with predictably controversial results. The Bureau of Land Management, for example, has closed remaining public lands to entry and offered instead to allow western ranchers to lease access. But ranchers were quick to complain that the department needlessly “locked up” resources, while conservationists decried “welfare ranchers” determined to use public lands on the cheap. The struggle among developers, conservationists, and preservationists for the control of the nation’s public lands and natural resources is woven into the Interior Department’s history.

Today the Interior Department consists of the Bureaus of Indian Affairs, Land Management, and Reclamation; the Minerals Management Service and the Office of Surface Mining; the National Park Service and the Fish and Wildlife Service; and the U.S. Geological Survey. The department manages over 500 million acres of public lands and nearly 500 dams and 350 reservoirs; oversees 8,500 active oil and gas operations on 44 million acres of the Outer Continental Shelf; operates nearly 400 national parks, monuments, seashores, battlefields, and other cultural sites, and over 500 national wildlife refuges; and conducts government-to-government relations with over 500 recognized Native American tribes. The Interior Department has 67,000 employees serving at 2,500 locations, with an annual budget of $16 billion.

The Department of Agriculture. Unlike the Interior Department, the Department of Agriculture (USDA) began as a department that catered specifically to one economic interest—farmers. But over its century and a half of service, the USDA’s mission has widened considerably to include not only the original goal of higher farm incomes through the promotion of agricultural research, businesslike farm management, the efficient use of the latest machinery, and better marketing practices, but also the improvement of the overall quality of life in the nation’s rural areas, the protection of agricultural ecosystems through soil and water conservation programs, the promotion of U.S. agricultural goods in overseas markets, the assurance of a safe U.S. food supply for consumers, public education about proper nutrition, and the administration of food stamp and school lunch programs for low-income Americans.

It is no surprise that suggestions for a Department of Agriculture can be traced back to the 1780s—a time when the United States was truly a nation of farmers. But early federal aid to agriculture was limited to a small seed collection and distribution program established in 1839 in the State Department’s Patent Office. As agricultural settlement proceeded westward, however, and after years of lobbying on the part of agricultural societies, an independent Department of Agriculture was established in 1862 in order “to acquire and diffuse . . . useful information on subjects connected with agriculture.”

At first, the USDA did little more than continue the seed collection program run by the Patent Office. But with its elevation to cabinet department status in 1889, the USDA embarked on a period of professionalization and expansion, particularly under Secretaries James Wilson (1897–1913) and David Houston (1913–20). During these decades the USDA engaged in more scientific research. The U.S. Forest Service was created in 1905 to professionalize the management of the nation’s forest resources, for example, while bureaus or offices of entomology, soil chemistry, roads, weather, and agricultural economics were also established. The Smith-Lever Act of 1914 created the Cooperative Extension Service in order to disseminate the department’s scientific and technological know-how to the nation’s farmers.

Under the Depression-era management of Secretary Henry A. Wallace, the USDA’s mission continued to expand. The Soil Conservation Service made the conservation of soils and water one of the department’s tasks, while the Rural Electrification Administration and the Resettlement Administration sought to foster a better quality of life in the nation’s rural communities. During the post–World War II era, the USDA turned its attention to issues of consumer safety, ensuring the safety and quality of the nation’s food system, fighting hunger, and promoting proper nutrition.

These accumulated duties are now divided among the USDA’s many bureaus and services. The Agricultural Marketing Service helps farmers market their products in domestic markets, while the Foreign Agricultural Service seeks to improve overseas markets for U.S. farm products. The Agricultural Research Service and the Animal and Plant Health Inspection Service provide farmers with research and information to increase productivity, aided by the Cooperative Extension Service. The Economic Research Service and the National Agricultural Statistics Service keep agricultural statistics and provide farmers with economic information. The Natural Resources Conservation Service helps farmers follow sound environmental practices. The Forest Service ensures the conservation and wise use of the nation’s forest lands. The Farm Service Agency and Rural Development programs extend credit and federal aid to farmers and rural communities. The Food and Nutrition Service and the Food Safety Inspection Service ensure the safety of the nation’s food supply, administer the federal government’s antihunger programs, and seek to educate American consumers on matters of health and nutrition. The USDA had a budget of approximately $77 billion in 2007. It employs 110,000 people.

The Department of Commerce. The mission of the Department of Commerce is “to foster, promote, and develop the foreign and domestic commerce” of the United States. In other words, the Commerce Department exists to promote the conditions necessary for economic growth. To this end, it creates and disseminates the basic economic data necessary to make sound business decisions. It promotes scientific and technological innovation; facilitates foreign trade and tries to ensure the competitiveness of American businesses in international markets; and grants patents and registers trademarks.

The framers of the Constitution discussed the idea of creating a secretary of commerce and finance, with many of these tasks ending up as part of the Department of the Treasury’s domain. But the advent of industrialization, the increase in American exports, and the overall growth in the size and scale of the American economy during the late nineteenth century led to increased demands on the part of business organizations such as the National Association of Manufacturers and the U.S. Chamber of Commerce for a separate cabinet department devoted exclusively to the needs of American business.

The Panic of 1893 served to underscore the need for better coordination and management of business conditions. In 1903 President Theodore Roosevelt signed a law creating the Department of Commerce and Labor; the department was split into two separate cabinet departments in 1913. At the outset, the Commerce Department was charged with overseeing domestic and foreign commerce, manufacturing, shipping, the nation’s fisheries, and its transportation systems.

The department peaked early. In its defining era under the leadership of Secretary Herbert Hoover (1921–28), the Department of Commerce expanded by thousands of employees; its annual budget grew from approximately $1 million to $38 million; and the Building and Housing Division (1922), the Bureau of Mines and the Patent Office (1925), the Aeronautics Division (1926), and the Radio Division (1927) were established.

These functions are currently divided among the department’s many offices and bureaus. The Economics and Statistics Administration and the Bureau of the Census provide business with data about the state of the economy. The International Trade Administration facilitates international commerce. The Economic Development Administration and the Minority Business Development Agency promote economic growth and business opportunity in economically troubled regions and underserved communities. The National Institute of Standards and Technology, the National Oceanic and Atmospheric Administration, the National Technical Information Service, and the National Telecommunications and Information Administration each, in their own way, promote technological and scientific innovation among U.S. businesses. The Patent and Trademark Office seeks to encourage innovation through the protection of intellectual property rights. In 2007 the Commerce Department had approximately 40,000 employees and a budget of approximately $6.5 billion.

The Department of Labor. The culmination of nearly half a century of vigorous agitation for a “voice in the cabinet” on the part of organized labor, the Department of Labor was created in March 1913 in order “to foster, promote, and develop the welfare of working people, and to enhance their opportunities for profitable employment.” It enforces federal laws governing workplace conditions, attempts to uphold the principle of collective bargaining, seeks to protect the solvency of retirement and health care benefits through regulation and oversight, administers unemployment insurance, helps displaced workers through retraining and educational programs, and tracks basic economic data relevant to the American labor force (such as changes in unemployment, prices, wages, and productivity). The department is also responsible for ensuring compliance with federal labor laws in the workplace, including safety and minimum wage regulations and freedom from discrimination.

These basic tasks have evolved over time. At its inception, the Labor Department consisted of a new U.S. Conciliation Service to mediate labor disputes, the Bureau of Labor Statistics, the Bureau of Immigration and Naturalization, and a Children’s Bureau. Almost immediately, the demands of World War I meant that the department’s primary responsibility was the mediation of potential labor disputes—and this in turn meant that the department emphasized organized labor’s right to bargain collectively with employers. That is, the Labor Department at times pushed for the organization of the workplace. But as health and safety regulations and rules governing the minimum hourly wage and overtime proliferated, the department’s energies have increasingly focused on enforcing these governmental standards in American workplaces, regardless of unionization. Education, retraining, and reemployment programs grew in importance as deindustrialization began to plague traditional industries in the post–World War II era. The Area Redevelopment Act of 1961, for example, targeted unemployed workers in regions particularly hard hit by deindustrialization. The Comprehensive Employment and Training Act of 1973 underscored the department’s increased emphasis on helping American workers survive in the “postindustrial” economy.

These functions are carried out today by the Bureau of Labor Statistics, the Employee Benefits Security Administration, the Employment Standards Administration, the Employment and Training Administration, the Mine Safety and Health Administration, the Occupational Safety and Health Administration, the Veterans’ Employment and Training Service, and the Women’s Bureau. The Labor Department’s annual budget sits at roughly $60 billion, and the department employs 17,000 people.

The Outer Cabinet Departments Established in the Post–World War II Era

Departments established during the postwar decades addressed broad structural problems in American life. Unlike the Departments of Agriculture, Labor, and Commerce, however, many of the outer cabinet departments established in this era enjoyed the support of no single interest group. Departments such as Health and Human Services, Housing and Urban Development, and Education served millions of Americans. But as the institutional embodiments of New Deal and Great Society liberalism, these departments also became a target of attack by small-government conservatives.

The Department of Health and Human Services. The Department of Health and Human Services (HHS) is the federal government’s principal department working to ensure the health and welfare of all Americans, particularly those citizens least able to help themselves. HHS is by far the largest cabinet department in terms of budget—its 2007 budget was $707.7 billion.

In many ways, HHS embodied much of the economic reformism and the search for a basic sense of fairness and security that many historians argue was at the heart of New Deal liberalism. HHS began, in a sense, in 1939, when the Federal Security Agency was created to house the Public Health Service (from the Treasury Department), the Food and Drug Administration (from Agriculture), the Children’s Bureau (from Labor), and the newly created Social Security Administration. The undeniable popularity of the New Deal state led Republican President Dwight D. Eisenhower to transform the Federal Security Agency into the cabinet-level Department of Health, Education, and Welfare in 1953. This department became the Department of Health and Human Services in 1979, after the Education Division was removed and sent to the new Department of Education.

Aside from its New Deal–era foundations, by far the most important development in HHS history was the expansion in departmental functions that occurred as a result of President Lyndon Johnson’s Great Society programs, including Medicare, Medicaid, and the Head Start program for underprivileged preschoolers.

The department’s 65,000 employees currently administer more than 300 programs that affect the lives of hundreds of millions of Americans. The National Institutes of Health is the federal government’s primary medical research organization. The Centers for Disease Control and Prevention works to prevent the outbreak of infectious disease. The Food and Drug Administration guarantees the safety of foods, pharmaceuticals, and cosmetics. The Health Resources and Services Administration provides basic medical care to Americans unable to afford health insurance. The Indian Health Service provides health care to nearly 2 million Native Americans and Alaska Natives through a system of hundreds of health centers and village clinics. The department’s Administration for Children and Families administers 60 programs designed to provide basic economic and social security for low-income families with dependent children. This administration also oversees Head Start. The Administration on Aging provides services to elderly Americans, including the Meals on Wheels programs that deliver food to the homebound.

The hallmark of HHS, though, is the Centers for Medicare and Medicaid Services, which provide health insurance to nearly 50 million elderly or disabled persons through the Medicare program, cover another 50 million low-income Americans through Medicaid, and insure millions of children through the popular State Children’s Health Insurance Program. HHS is perhaps the cabinet department where ideological arguments against “big government” welfare programs collide most clearly with the reality that Americans now accept a role for the federal government in ensuring basic economic and health security for all Americans.

The Department of Housing and Urban Development. In addition to expanding the mission of HHS, Great Society initiatives also led to the creation of the Department of Housing and Urban Development (HUD) and the Department of Transportation (DOT). The establishment of HUD was an acknowledgment of the problems facing an increasingly urban nation during the 1960s.

The mission of HUD is to increase home ownership and provide access to affordable quality housing free from discrimination. Yet the department in many ways has been a house divided. During the mid-1960s, when HUD was created, the private housing construction and banking industries sought to restrict the new department’s activities to the promotion of the construction of new housing. Urban reformers, civil rights activists, and planners, on the other hand, wanted to seize upon urban redevelopment as a way to promote broader social and economic change. But community development agencies such as Lyndon Johnson’s Office of Economic Opportunity were not placed within HUD. And in 1968, HUD lost jurisdiction over urban mass transportation systems to the newly created Department of Transportation—a truly crippling blow to its ability to engage in comprehensive urban planning.

Today the Department of Housing and Urban Development oversees hundreds of programs and is divided into three broad offices. The Office of Community Planning and Development tries to integrate affordable housing with expanded economic opportunity for needy families; the Office of Fair Housing and Equal Opportunity oversees the Fair Housing Act and other civil rights laws designed to ensure that all Americans have equal access to housing; and the Office of Public and Indian Housing provides affordable public housing for needy individuals. In 2008 the Department of Housing and Urban Development’s budget was $35 billion.

The Department of Transportation. The Department of Transportation (DOT) is responsible for designing and carrying out policies to ensure the safety and efficiency of the nation’s transportation systems. The act creating the DOT was signed into law in October 1966, and the department began operations in April 1967 as the fourth-largest cabinet department, bringing together 95,000 employees then working in more than 30 existing transportation agencies scattered throughout the federal government.

The DOT is divided into 11 administrations: the Federal Aviation Administration, the Federal Highway Administration, the Federal Motor Carrier Safety Administration, the Federal Railroad Administration, the National Highway Traffic Safety Administration, the Federal Transit Administration (urban mass transport), the Maritime Administration, the St. Lawrence Seaway Development Corporation, the Research and Innovative Technologies Administration, the Pipeline and Hazardous Materials Safety Administration, and the Surface Transportation Board. Its 2008 budget was $68 billion.

The Department of Energy. The Department of Energy (DOE) became the twelfth cabinet department in October 1977 in the midst of the protracted energy crisis of the 1970s and was designed to consolidate existing energy agencies in order to promote efficiency and facilitate the research and development of new energy sources. To this end, the department assumed the responsibilities of the Federal Energy Administration; the Energy Research and Development Administration; the Federal Power Commission; the Southeastern, Southwestern, and Alaskan Power Administrations (regional hydroelectric projects); and a handful of other energy-related programs previously housed in the Departments of Defense, Commerce, and the Interior.

Since its creation, the DOE has been responsible for energy research and development; oversight and regulation of the interstate transmission of natural gas, oil, and electricity; promotion of alternative energy; and management of the nation’s nuclear weapons development and of the cleanup and disposal of nuclear waste from cold war programs. It also serves as the federal government’s liaison with the International Atomic Energy Agency.

Yet not all of these goals have proven equal. During the late 1970s, DOE emphasis was on research and development of new energy sources and on efficiency and conservation. During the 1980s, the DOE disproportionately focused on nuclear weapons development. During the late 1990s, a concern with renewable and alternative energy was again stressed. This interest in alternative energy has continued into the twenty-first century, although the conflation of energy and national security matters has set up a potential conflict within the DOE: Does national security mean all-out development of traditional sources of energy, or does it mean a long-term plan to develop alternative energy technologies? The Department of Energy employed 116,000 people in 2008, and its budget was $25 billion.

The Department of Education. Education in the United States has historically been the responsibility of state and local governments. The Department of Education notwithstanding, this remains the case. Even the 1979 enabling act that created the Education Department emphasized the fundamentally non-federal nature of education in America, noting that “the establishment of the Department of Education shall not . . . diminish the responsibility for education which is reserved for the states and the local school systems.” The Department of Education’s primary contributions to education are the disbursement of money in the form of grants to states and school districts, funding student loan and grant programs for postsecondary education, and ensuring equal access.

The antecedents of the Department of Education go back to 1867, when an Office of Education was established in the Interior Department. With fewer than ten clerks on staff, the Education Office served as a statistical agency, although during the 1890s, it also assumed a supporting role in overseeing the nation’s land-grant colleges and universities.

In 1939 the Office of Education was transferred to the Federal Security Agency before being incorporated into the Department of Health, Education, and Welfare (HEW) in 1953. But even within HEW the Office of Education led an austere existence until a dramatic increase in the federal presence in American education during the cold war. In an effort to compete with the Soviet Union, the 1958 National Defense Education Act provided support for college loans and also backed efforts to improve instruction in science, math, foreign languages, and area studies at all levels of the American education system.

The civil rights movement and the War on Poverty programs of the 1960s and 1970s also expanded the federal role in education. The 1965 Elementary and Secondary Education Act initiated programs for underprivileged children living in poor urban and rural areas. The Higher Education Act of 1965 provided financial aid programs for eligible college students. Civil rights legislation, such as Title IX of the Education Amendments of 1972, prohibited discrimination based on race, gender, and disability. Each of these responsibilities transferred to the Department of Education upon its creation in 1979.

The Education Department’s elementary and secondary programs affect 56 million students in nearly 100,000 public schools and 28,000 private schools in 14,000 school districts across the nation. The department administers grant and loan programs that support 11 million postsecondary students. The Education Department has a staff of 4,100—45 percent fewer than the 7,500 employees in 1980, a testament both to the powerful tradition of local control in the American education system and to the fact that the Education Department has spent most of its life under the management of conservative Republican administrations generally opposed to the idea of a strident federal role in education. The department’s 2008 budget stood at $68.6 billion.

The Department of Veterans Affairs. The mission of the Department of Veterans Affairs (VA), as taken from President Abraham Lincoln’s second inaugural address, is “to care for him who shall have borne the battle, and for his widow and his orphan.” With 58 regional offices, the VA’s Veterans Benefits Administration distributes benefits to veterans and their dependents that include compensation for death or disability, pensions, educational and vocational training, and low-interest loans. The Veterans Health Administration oversees one of the largest health care systems in the United States, complete with over 150 hospitals and 350 outpatient clinics. The National Cemetery Administration offers burial and memorial services.

Although the nation has always made some provision for the care of veterans, it was not until 1930, in the aftermath of World War I, that the federal government created a Veterans Administration to bring order to the government services offered to the nation’s veterans. This independent Veterans Administration was elevated to cabinet department status in 1988, in part due to the plight of disaffected Vietnam veterans. The annual VA budget is approximately $90 billion, and with 250,000 employees the department ranks second only to the Defense Department in size.

FURTHER READING. Anthony J. Bennett, The American President’s Cabinet: From Kennedy to Bush, 1996; Jeffrey E. Cohen, The Politics of the U.S. Cabinet: Representation in the Executive Branch, 1789–1984, 1988; Richard F. Fenno, Jr., The President’s Cabinet: An Analysis in the Period from Wilson to Eisenhower, 1959; Stephen Hess, Organizing the Presidency, 2002; R. Gordon Hoxie, “The Cabinet in the American Presidency, 1789–1984,” Presidential Studies Quarterly 14, no. 2 (1984), 209–30; Shirley Anne Warshaw, Power Sharing: White House–Cabinet Relations in the Modern Presidency, 1996.

KEVIN POWERS


campaign consultants

The United States is the land of elections. Over any typical four-year cycle, there are more than a million elections, everything from the presidency, U.S. senator, and governor to big-city mayor, city council, and local school board bond issue. Americans vote into office approximately 513,000 elected officials and decide on thousands of ballot initiatives. No other country comes close to the number and variety of elections that are held in American states, cities, counties, and other political jurisdictions.

Most elections are low-profile, low-budget contests. For many voters, the first time they learn that an issue or some minor office is even being contested is when they close the curtain in the voting booth and see the official ballot. Candidates seeking office in these low-profile contests usually rely on their own shoe leather, pay for their own election expenses, and rely on assistance from family, friends, and other volunteers.

But in contests for big-city mayors, governors, members of Congress, and other offices, professional political consultants are used to help guide candidates, political parties, and interest groups through the complexities of today’s elections. These are the expensive, often high-profile contests, where candidates and interested parties will raise hundreds of thousands, even millions of dollars to fund their races. It is not unusual for candidates for the U.S. Senate to raise and spend $10 to $15 million. It was once a rarity for candidates for Congress to spend $1 million; now it is commonplace. In some jurisdictions, candidates who are elected to the state supreme court might spend $5 or $8 million, while some school board candidates in big cities have been known to spend well over $100,000. Statewide spending in California presents a special case. In 2005 alone, with no governor, no state legislators, and no other state officials to elect, over $500 million was still spent by participants trying to defend or defeat ballot issues.

Where does the money go? Much of it, of course, goes to television advertising or direct-mail expenses, but a considerable portion goes to a battery of professionals who are hired by the campaigns to help win the public over to their side. Campaign consulting is a thriving business; no serious candidate in an important contest can do without consultants. Yet, campaign consulting is a relatively new business.

Through much of American electoral history, campaigns were run by political party machines and operatives. Parties recruited candidates, funded election drives, urged people to vote, and tried to generate excitement through mass rallies and torchlight parades. But by the middle of the twentieth century, the political party was no longer the focus of campaigning for many elections. Increasingly, the focus was on the individual candidate. The candidates relied on others to assist them, and increasingly, as campaigns became more complex and sophisticated, they turned to professionals skilled in public relations, survey research, media relations, and other specialties.

The Beginning of the Business of Political Consulting

The business of political consulting traces back to the mid-1930s, when a California husband-wife public relations team, Clem Whitaker and Leone Baxter, created a firm called Campaigns, Inc. Throughout their 25-year career, Whitaker and Baxter were enormously successful, providing public relations and communications services to a variety of candidates, ballot initiatives, and issue causes.

Others followed, but even by the early 1950s, most political consulting was still a sideline for public relations firms. One study showed that by 1957 some 41 public relations firms, located mostly in California, Texas, and New York, offered campaign services. But during the 1950s a new specialty was emerging: the professional campaign manager or political consultant. These were political activists who were making campaign work their principal business. By the 1960s the political consultant was becoming a fixture in presidential, gubernatorial, and U.S. Senate races.

The first generation of consultants included Joseph Napolitan, Walter De Vries, F. Clifton White, Herbert M. Baus, William B. Ross, John Sears, Stuart Spencer, and Joseph Cerrell. They tended to be generalists, who would handle a campaign’s overall strategy, develop themes and messages, and run campaigns. Others were known for special skills. Louis Harris, Albert H. (Tad) Cantril, Oliver Quayle, William Hamilton, Richard Wirthlin, and Robert Teeter were among those who focused on research; Matt Reese was known for his campaign-organizing skills. Media specialists Charles Guggenheim, Tony Schwartz, David Garth, Marvin Chernoff, and Robert Squier crafted television commercials for Democratic candidates, while Robert Goodman, Douglas L. Bailey, John D. Deardourff, and others worked on the Republican side.

The business of political consulting grew quickly in the 1980s through 2000, and in 2008 approximately 3,000 consulting firms specialized in political campaigns. A few political consultants have become widely known to the public, like Karl Rove, Dick Morris, and James Carville. But they are the rare exceptions. Most consultants work quietly, and comfortably, behind the scenes. Even at the presidential level, few Americans would recognize the names of principal consultants for 2008 presidential candidates Hillary Clinton, Barack Obama, Rudy Giuliani, or John McCain.

The Role of Political Consultants in Campaigns

What do consultants bring to the modern campaign? They bring skills, experience, and discipline to an essentially unruly process. Few events are as potentially chaotic, vulnerable, and unpredictable as a modern campaign. They are, by definition, contests, pitting one side (or more) against another. So much can go wrong. There is such a steep learning curve for the campaign, so many ways to make a mistake, an often inattentive public, and an opponent and allies doing their best to knock your candidate off track.

In some campaigns, no amount of skill or energy from a consultant would change the ultimate outcome. Winning (and losing) is contingent on a variety of factors, and many of those are beyond the control of consultants. But consultants can make the vital difference between victory and defeat when contests are close. Furthermore, consultants can help candidates avoid big, costly mistakes. They can bring order, discipline, focus, and consistency when things might otherwise be falling apart; they can keep a volatile situation from total meltdown and fire up a listless, drifting campaign that has lost its direction.

Campaign consultants come with a variety of skills and occupy different niches in campaigns. For an $8 million U.S. Senate race, a candidate might hire a bevy of consultants. The candidate will hire strategists (a general consultant, a campaign manager, pollster, direct-mail specialist, media expert) and specialists (candidate and opposition researchers, fund-raisers, lawyers and accountants with specialized knowledge of campaign finance law, speechwriters, television time buyers, electronic media specialists, telemarketers, micro-targeting specialists, and others). The campaign will also use campaign vendors (firms that supply voter files, campaign software, yard signs, and more).

Many consultants offer niche services, such as providing state and federal election law advice, buying time for radio and television advertising, providing voter and demographic databases, geo-mapping, and sophisticated targeting techniques, helping candidates in debate preparation, preparing their stump speeches, or providing that all-important cadre of fund-raisers who collect money that provides the fuel for the entire campaign.

New specialties have emerged just as new technologies have been introduced. No serious political campaign now would be without a Web site, e-mail, and blog. One of the newest job descriptions is that of director of electronic media: the person on the campaign responsible for maintaining the Web site, coordinating e-mails, and monitoring the campaign’s blog. Particularly since the 2004 presidential campaign, candidates have found that online communications can be cost-effective and efficient ways to reach out to volunteers, collect campaign funds (usually in smaller denominations), and keep activists and others engaged in the campaign.

Campaign consultants provide services for more than the traditional candidate campaign, such as a gubernatorial race or big-city mayor’s race. In fact, very few campaign consultants work only on candidate campaigns during election cycles. Many are involved in ballot issue campaigns, such as those found in California and about 25 other states. Many also provide services in issue advocacy fights, such as the battles over national health insurance, immigration reform, gay marriage, and many other issues. Consultants will work for corporations, trade associations, and other business interests. Finally, American consultants have found a lucrative market during the past 30 years going abroad and working on election campaigns in other countries.

The Business of Political Consulting

The business of political consulting is just a small fraction of the commercial marketing world. Many of these firms have fewer than ten employees and generate $1 million or less in revenue. Private political survey research represents only about 2.5 percent (or $100 million) of the $4 billion annual revenues of the polling industry. Direct mail for political causes constitutes but 2 percent of the direct-mail commercial market, and political telemarketing is less than 1 percent of the overall telemarketing industry.

While citizens watching television during a heated presidential primary might think that there is nothing but political commercials dominating the airwaves, in fact, such commercials are just a tiny portion of the market. In recent presidential campaign years, for example, during the six months that preceded the general election, political commercials represented only about 1.0 to 1.5 percent of all commercials. In a typical presidential election year, mass-marketing companies like Procter & Gamble or General Motors will each spend about the same amount selling their own products as all presidential candidates combined.

In the early twenty-first century, political campaigns pose special problems for candidates, consultants, and their campaigns. It is so much harder to get people’s attention. There has been a fundamental shift in where people get their news and how candidates can advertise. Newspaper readership is declining; weekly newsmagazines have become slimmer and less relevant in a 24-hour news culture; network television, which once dominated viewers’ attention, has little of its former power. Communications outlets exploded with the advent of cable television in the 1970s, followed by the Internet, e-mail, mobile phones, and instant messaging. The communications marketplace is extraordinarily splintered, making it much harder for campaigns to reach voters with their messages. The mass market of three television networks has largely been supplanted by niche markets with hundreds of choices.

At one time, campaigns were much simpler: one candidate vying against another. Television, radio, and print advertising from one camp were pitted against the advertising from the other side. Since then, campaign communications have become much more complicated. Other voices have added their messages and get-out-the-vote drives to campaigns. For example, labor unions, political parties, trade associations, even private individuals have been willing to spend great sums of money to influence a contest. Then contests became nationalized. By the mid-1990s, congressional races that once were considered only local contests were seeing advertising campaigns from abortion rights, pro-gun control, anti-NAFTA, and English-only advocates—national groups, all—trying to influence the outcome. On top of these influences has come the wide open, robust influence of citizen activists through blogs and Web sites, adding their voices to the mix. This makes it all the more necessary to have professional campaign help to fight through the clutter and competition in the contest and make the candidate’s views and positions known.

Consultants often get blamed for the harsh, negative tone of campaign rhetoric, especially in television ads. They defend their craft by saying that they are providing useful information, albeit in stark and clear terms, about the differences between their candidates and the opponents. Academics and public citizen groups worry about the negative impact such ads might have on democratic behavior or voter turnout.

If anyone is to be blamed, it must be the candidate, who ultimately is responsible for the conduct of a campaign. An unfair, slash-and-burn campaign commercial, a “dirty tricks” stunt against the opponent, an embarrassing photo digitally pieced together, a cruel, salacious comment by a campaign staffer—all these unfair or unethical practices redound against the campaign and the candidate.

The negativity and the harsh words found in contemporary campaigns will likely only get worse. New voices, without the constraints of professional responsibility, have entered the picture. We should expect campaigns to get uglier and louder. Particularly with online campaigning, there are so many more voices filling cyberspace, from bloggers to e-mail rumors, to the online posting of sound bites and video clips. Professional media consultants, knowing they have reputations to uphold and are working for a candidate and a party or interest group, will use some semblance of caution. The really wild, outrageous comments or videos posted on the Web will come from outsiders, often anonymous, unfettered by constraints. The early twenty-first century may become the Wild West period of campaigning.

Challenges and Opportunities
of Online Campaigning

Particularly since Howard Dean’s run for the Democratic presidential nomination in 2003–4, we have seen a challenge to the dominant form of professional campaigning. Dean and his campaign manager touted a new approach: listen to the voices expressed on Dean’s blog and other online sources and build a bottom-up campaign, gaining ideas from the people and listening to (and presumably acting on) their concerns, rather than imposing a command-and-control, top-down campaign that, by implication, did not listen to the people.

While the approach sounded promising, it was less than what it appeared. A critical ingredient in any successful campaign is top-down control: message discipline, a fixed but flexible strategy, and the ability to cut through all the noise (electronic and otherwise) of a campaign, set a firm, clear direction, and execute a plan to beat the opponent. This is what traditional, professional campaigning does best: it brings order out of chaos. But at the same time, successful campaigns are not out of touch with what voters want or feel. They conduct polls, run focus groups, and monitor blogs; candidates engage in “listening tours” and greet voters in malls, coffee shops, and private homes. In short, they listen very carefully to what people think.

In recent election cycles, a thriving blogging community has emerged on both the liberal/Democratic and conservative/Republican sides. These bloggers, self-appointed activists, are also claiming a stake in the nature, content, and direction of campaigning. Bloggers, particularly from the progressive or liberal side, like to tout that they will break down the barriers between the people and the candidates. They boast that they are the future of campaigns and will inevitably supplant the activities of campaign consultants.

Electronic democracy, citizen activism, and Web advocacy, however, have not supplanted old-fashioned, professionally run campaigns. Professional campaign consultants, above all, want their clients to win. They adapt to new circumstances, and increasingly have added electronic communications to their arsenal of tools. No current professionally run campaign would be without an e-communications team to run the Web site, monitor the campaign blog, and produce online versions of campaign videos, fund-raising appeals, and other tricks of the trade. Consultants will adapt, endure, and prosper. They are indispensable to modern American campaigns, and, increasingly, in campaigns throughout the world.

See also campaigning.

FURTHER READING. David A. Dulio, For Better or Worse? How Political Consultants Are Changing Elections in the United States, 2004; Ronald A. Faucheux and Paul S. Herrnson, The Good Fight: How Political Candidates Struggle to Win Elections without Losing Their Souls, 2000; Paul S. Herrnson, Congressional Elections: Campaigning at Home and in Washington, 4th ed., 2007; Kathleen Hall Jamieson, Packaging the Presidency: A History and Criticism of Presidential Campaign Advertising, 1984; Dennis W. Johnson, No Place for Amateurs: How Political Consultants Are Reshaping American Democracy, 2nd ed., 2007; Dennis W. Johnson, ed., Routledge Handbook on Political Management, 2008; Stephen K. Medvic, Political Consultants in U.S. Congressional Elections, 2001; Bruce I. Newman, The Mass Marketing of Politics: Democracy in an Age of Manufactured Images, 1999; Daniel M. Shea and Michael John Burton, Campaign Craft: The Strategies, Tactics, and Art of Political Campaign Management, 3rd ed., 2006; James A. Thurber and Candice J. Nelson, eds., Campaign Warriors: The Role of Political Consultants in Elections, 2000.

DENNIS W. JOHNSON


campaign law and finance to 1900

Financing candidates’ campaigns was simple and uncontroversial in the absence of mass parties and a complex economy. But with the demise of the relatively hierarchical, elite-driven politics of the eighteenth and early nineteenth centuries, men of limited means who had made politics their profession needed financial help to reach an expanded electorate. Through the nineteenth century, campaign finance raised questions about who should pay for politics and whether those who paid got an unfair return on their investment. Those questions, asked with both public spiritedness and partisan advantage in mind, remain with us.

In the early years of the republic, many campaigns were straightforward. Men of prominence and wealth “stood” for office, ideally approaching the exercise as a somewhat unwelcome civic duty, akin to jury service in the twentieth century. The rituals of electioneering—voice voting for the (usually) white propertied men who made up the electorate and the candidates treating the voters with food and drink—illustrated the hierarchical relationships. Voter turnout tended to be low. Costs were, too, and they were usually borne by the candidates.

Contentious elections required more desperate activity and more extensive methods. Newspapers devoted to parties and to factions, such as those organized around personal followings in the Middle Atlantic states, promoted candidates. The Federalists financed Noah Webster’s American Minerva (later renamed The Commercial Advertiser); the Democratic-Republicans responded with Philip Freneau’s National Gazette. So it went around the new nation: from 1790 to 1808, the number of newspapers, most of them partisan, more than tripled. Backed by wealthy patrons or government printing contracts, newspapers put out the partisan line, denigrating the godlessness of the Democratic-Republicans or the monarchical aspirations of the Federalists.

The party press remained an important cost of doing political business with the rise of mass parties in the 1830s. There were additional expenses: printing and distributing ballots, holding rallies, dispensing campaign paraphernalia, and getting out the vote of an electorate expanded to include most white men. Party activities were now nearly constant, reaching down from presidential campaigns to an expanded number of state and local races. All this exertion required a regular stream of funds, particularly since a new generation of men who made politics their profession lacked the means of more elite predecessors.

Part of a solution to the problem of funds emerged from the “spoils system” or “rotation in office,” which followers of Andrew Jackson touted as a more democratic way to distribute government positions than the creation of a class of permanent officials. Replacing public employees, minor as well as major, with partisans of a new presidential administration—a practice repeated at the state and city levels—gave the executive branch loyal men to carry out the government’s work. These workers could also be tapped for funds. The party in power expected the men holding patronage jobs—from upper-level administrators to postmasters—to show their gratitude and fealty by paying a percentage (generally 2 to 5 percent) of their salaries to their party. Such assessments became an alternative system of taxation or, after a fashion, public financing, in which those who benefited or hoped to benefit from the result of an election paid for it.

The assessment system generated controversy almost from the start, and like every attempt to regulate campaign finance to follow, proposed reforms were both partisan and principled. In 1837 Whig congressman John Bell of Tennessee introduced a bill that would have banned assessments and electioneering on the part of federal employees. This measure, as well as a hearing concerning abuses at a customs house, took aim at the Democrats. Proponents argued that the executive branch was using patronage to build a machine that interfered with legislative elections; they decried a system in which “partisan service is the required return for office, as office is to be the reward for public service.” Opponents countered that everyone, including federal employees, had the right to participate in politics. The bill failed. But constituents received free copies of the debate and the hearing: the franking privilege was an important perk of incumbency.

Strains on the assessment system grew. Businessmen who relied on customs houses for import and export trade resented the sometimes inefficient service provided by partisan employees, which improved greatly with payoffs. Workers who performed professional tasks increasingly wanted to be treated like professionals, rather than as patronage hacks. They came to resent paying taxes to their party. The cost of the Republican Party’s effort to build party infrastructure in the South after the Civil War encouraged the GOP to dun public employees more tirelessly than usual. Democrats naturally grumbled, as did some Republicans. Appalled by the corruption of the Ulysses S. Grant administration and weary of Reconstruction, African Americans, and the South, Liberal Republicans offered civil service reform as a new issue and purpose for the party and a cure for corruption.

They felt strongly enough about the issue to form their own party in 1872; President Grant responded by creating a civil service commission. Meaningful reform came with the passage of the Pendleton Act in 1883, given a final push by the mistaken idea that President James A. Garfield had been assassinated by a “disappointed office seeker” driven mad by patronage.

The campaign work and dollars of public employees continued to be a target of reform into the twentieth century, but assessments ceased to be a major source of funds for national party organizations by the late nineteenth century. Wealthy partisan individuals had pitched in throughout the nineteenth century, but as assessments dried up, business increasingly paid for elections. This money quickly sparked more controversy than assessments.

We know some of the names of the big spenders of the late nineteenth century—August Belmont, Jay Cooke, and Tom Scott, for example—and their interests in what government did in banking, tariffs, railroad subsidies, and regulation. The assumption of many, especially supporters of third parties, was that money blocked the path to reforms that were in the interests of those without huge wealth. Yet business interests generally did not seek out political fund-raisers—rather, it was the reverse, except perhaps in cases like the 1896 election, when fear of what William Jennings Bryan would do if elected inspired businessmen to give William McKinley more money than his campaign could possibly use. The question of what businessmen bought with their contributions would be central to the campaign finance debates of the twentieth century.

See also campaign law and finance since 1900.

FURTHER READING. Paula Baker, ed., Money and Politics, 2002; Robert E. Mutch, “Three Centuries of Campaign Finance Law,” in A User’s Guide to Campaign Finance Reform, edited by Jerold Lubenow, 1–24, 2001; Louise Overacker, Money in Elections, 1932, reprint, 1974; Clifton K. Yearley, The Money Machines: The Breakdown and Reform of Governmental and Party Finance in the North, 1860–1920, 1970.

PAULA BAKER


campaign law and finance since 1900

Concerns about campaign finance—the amount of money candidates and political parties raise and spend and the sources of those funds—have persisted through the twentieth and into the twenty-first century. The issue’s appeal was and remains rooted in many Americans’ suspicion of the influence of money on policy and fears for democracy itself. Anxiety has not produced satisfactory solutions. In contrast to the European model, American parties have no tradition of dues-paying members who support their activities, and American campaigns are long and require the purchase of increasingly expensive publicity. As campaign finance regulations have expanded, parties, candidates, and interest groups have adapted nimbly to the constraints, which they also helped to write. Regulations that control contributions and spending have been limited by constitutional protections of free speech. The structure of American politics and Supreme Court guidelines have allowed campaigns to raise and spend the funds they believe they need within each regulatory regime reformers have built, which leads inevitably to the next try at reform.

While fears about the impact of money on politics have been constant, the issue rarely has risen anywhere near the top of public concerns. The particular targets for reform have been driven by scandal and partisan politics. In the early twentieth century, large, secret corporate contributions, the mainstay of party fund-raisers in the 1890s, were the evil that most excited reformers’ interest. The first federal legislation stemmed from Democratic presidential candidate Alton B. Parker’s charges that his opponent, Theodore Roosevelt, collected big corporate contributions while threatening to regulate many of the same firms. Roosevelt successfully deflected the accusations, although his campaign had accepted substantial sums from financier J. P. Morgan ($150,000), Henry Clay Frick of U.S. Steel ($50,000), and railroad operator E. H. Harriman ($50,000, plus $200,000 raised from his contacts). A series of widely publicized scandals between 1904 and 1908—detailing campaign contributions and payoffs New York Republicans received from life insurance companies in return for favorable legislation, and benefits to utilities and railroads from legislators in the West and Midwest, among many others—appeared to confirm the suspicion that politicians catered to moneyed interests that made big contributions.

Federal Campaign Finance Legislation to 1970

The Tillman Act (1907) made it “unlawful for any national bank, or any corporation organized by authority of any laws of Congress, to make a money contribution in connection with any election to any political office.” Corporations or their officers or board members who violated the law were subject to fines and up to one year in prison. While the final bill did not require the parties to disclose the sources of their funds, the Republican National Committee and the Democratic National Committee began voluntarily releasing financial records for the 1908 presidential election. The Tillman Act prevented firms from using stockholders’ funds for campaign purposes but changed little in how campaigns raised money. There is a record of only one prosecution, for a gift from the United States Brewers’ Association to a House candidate.

Reformers were disappointed that the Tillman Act did not provide for the disclosure of contributions, and Congress responded with the 1910 Federal Corrupt Practices Act (FCPA) and amendments to it in 1911. The law required every House and Senate candidate and “political committee” to report the sources of their contributions and capped spending at $5,000 for a House seat and $10,000 for the Senate. Committees that operated in single states were excepted. The 1911 amendments required reports both before and after elections and extended spending limits to primary elections. The measure contained no enforcement mechanism; violators would have to be taken to court, where they would be subject to fines and up to two years in prison. The 1910 bill passed the House without much debate; the 1911 amendments were adopted unanimously by the House and passed the Senate with only seven no votes, all from southern Democrats who opposed any extension of federal power over state election procedures.

The legislation had minimal effect on campaigns, due to loopholes and upward pressures on campaign costs. A close reading suggested that the spending limits applied to candidates, not committees formed to advance their campaigns, so only the most naïve candidates indicated that they had spent anything more than a nominal sum while “independent” committees raised the real money. Campaigns spent it because Progressive Era changes in how the parties operated required that they do so. Primary elections could double campaign costs in competitive states. Nineteenth-century parties had controlled their own newspapers, but twentieth-century campaigns could not count on free, fawning press coverage. Purchased advertising filled the gap. Advertising totaled between 25 and 50 percent of campaign budgets, depending on the expense of the market (in 1920, a full-page ad in the New York Times cost $1,539 and one in the Chicago Tribune $1,708). Election Day expenses, including hiring the workers who once had been patronage-hungry volunteers, were also substantial.

If the wish embodied in legislation could not control costs, the FCPA still had its uses. Congressional Democrats hoped spending limits might frighten wealthy Republican donors and lessen the GOP’s fund-raising advantages. An opponent’s spending could be made into a campaign issue. And a losing candidate could take a big-spending opponent to court or to Congress, in the hope of denying a seat to the winner. A few losing House candidates challenged the seating of winners, usually coupling charges of excessive spending with more traditional accusations of corruption. As the party balance in the Senate tightened in the 1910s and early 1920s, it became the site of numerous election challenges. The most consequential was Henry Ford’s challenge to Republican Truman H. Newberry’s 1918 victory in a close, nasty Michigan Senate race. The committee that promoted Newberry spent $190,000, almost all of it from his family and close friends, well above the apparent FCPA limits for candidates. The money bought the advertising deemed necessary to defeat the rich and famous automaker. Nearly four years of wrangling followed, including two grand jury hearings and one federal trial orchestrated by Ford that found Newberry guilty; a trip to the Supreme Court that reversed the conviction; and a Senate investigation that rehashed the legal proceedings. Newberry finally resigned in 1922 when that year’s election brought in enough new Democrats to make a rehearing and his expulsion likely.

The Supreme Court had found fault in both Newberry’s trial and the FCPA, but the majority opinion focused on the unconstitutional reach by Congress in regulating state primary elections. The 1924 and 1925 revisions of the FCPA removed coverage of primaries and increased the amounts candidates could spend, but otherwise left the law’s vague provisions intact. The Senate could still refuse to seat members-elect it deemed to have spent too much. A coalition of progressive Republicans and Democrats succeeded in denying seats to Frank L. Smith, Republican of Illinois, and William A. Vare, Republican of Pennsylvania, who were elected in 1926 by the voters of their states in expensive races.

Attempts to deny seats to candidates over excessive spending disappeared when the Democrats gained solid control of Congress in the 1930s. But the campaign finance issue persisted, now with Republicans taking the lead. A coalition of southern Democrats and Republicans struck at some of the Franklin D. Roosevelt administration’s sources of campaign labor and funds. The 1939 Hatch Act prohibited political activity by government workers, eliminating the threat of Works Progress Administration (WPA) recipients turning into the president’s personal political workforce. Because of the prohibition against all but high-ranking federal officials participating in conventions, the president could not stack a convention. (In 1936 about half of the delegates were federal employees.) The law set spending and contribution limits of individual political committees at $3,000,000 per year and capped individual contributions at $5,000. Sure enough, numerous “independent” committees sprang up, each able to spend to the limit.

In 1940 Republicans spent five times the limit and the Democrats double, all the while staying within the letter of the law. Beginning with the 1936 election, organized labor became an important source of funds for Democrats, and southern Democrats and Republicans aimed to restrict its ability to fund campaigns. The Smith-Connally Act (1943) contained a provision that temporarily banned union contributions in the same language that the Tillman Act used against corporate donations; the 1947 Taft-Hartley Act made the restriction permanent. The Congress of Industrial Organizations responded by setting up the first political action committee (PAC) in 1944 that legally funneled money to pro-union candidates. Not until the 1960s did business PACs follow the labor model.

Although the Supreme Court had opened the way for Congress to regulate primary elections in 1941 and the cost of elections outstripped the nominal limits, the FCPA remained the basic, if ineffective, campaign finance law of the United States. The issue had not disappeared. New actors dedicated to campaign finance reform, including a handful of academics and foundations, tried to fire public concern and to sell Congress on new policies. In 1956 Senator Francis Case of South Dakota revealed that an oil company had offered him cash in exchange for his vote on a pending bill. This scandal refocused enough interest in campaign finance reform to inspire an investigation headed by Senate Democratic liberals but not to generate new legislation.

The problem was conflicting goals and interests. Organized labor, one important Democratic constituent group that favored restrictions that might curb the advantages of business donors and their Republican recipients, opposed legislation that targeted PACs; many Republicans insisted on controls on PACs; television networks rejected proposals that required them to provide free time for short “spot” advertisements; some sitting officeholders worried about assisting challengers; and a number of powerful members of Congress, backed by public opinion, blocked proposals for public funding of campaigns. Meanwhile, campaign costs ballooned, a trend carefully tracked by new “public interest” groups such as the Citizens’ Research Foundation and, by the late 1960s, Common Cause. Spending on the presidential elections of 1968 was up by 25 percent compared to that of 1964. Candidate- (rather than party-) centered campaigns, television, the consultants to coordinate the message, and fund-raising itself drove up the costs of congressional and statewide campaigns as well as presidential races.

Campaign Finance Legislation Since 1970

Rising costs got the attention of members of Congress, especially senators who could easily imagine facing well-financed challengers. With this prod, the logjam broke in 1971. The Federal Election Campaign Act (FECA) imposed much stronger disclosure requirements than those of the FCPA: candidates for federal office had to file quarterly reports detailing receipts and expenditures, with names, addresses, occupations, and businesses attached to each contribution over $100. It limited the contributions of candidates and their immediate families, spending on publicity and outreach, and the proportion of campaign budgets spent on radio and television advertising. The law thus treated media costs as the core problem and contained sufficient incumbent protection provisions to pass. But the FECA did not control overall costs. Richard M. Nixon’s 1972 reelection campaign spent twice as much as his 1968 run; Democratic nominee George McGovern’s race cost four times as much as Hubert H. Humphrey’s 1968 campaign. That, together with the Watergate scandal, brought a major overhaul of the FECA in 1974, creating the current framework of campaign finance law.

Arguing that a renewed effort to divorce money from politics would revive Americans’ sagging confidence in government (and, for Democratic liberals, promote their stalled policy agenda), the Senate bill provided public funds for primary and general election campaigns for federal offices and limits on what candidates could spend, even if using their own money. Restricted too were contributions to political parties and from independent groups, individuals, and PACs. The final bill kept the 1971 disclosure requirement and created the Federal Election Commission, with commissioners evenly divided between the parties, to monitor compliance. It provided public funding for presidential elections, enabled by a $1 voluntary taxpayer checkoff, but no public support for congressional races. Even the most ardent champions of reform considered the bill not much better than a good first step toward clean elections.

The new regulations were immediately litigated. The major challenge came in Buckley v. Valeo (1976). A combination of left- and right-wing groups claimed that many of the FECA’s provisions violated constitutional rights to free speech and association. The Court agreed, in part. Large contributions given directly to a campaign might be corrupt or create the appearance of corruption. Therefore, the FECA’s $1,000 limit on individual donations and $5,000 for PACs to a single candidate fell within the federal government’s interest. But spending was protected speech. “The First Amendment denies government the power to determine that spending to promote one’s political view is wasteful, excessive or unwise,” according to the Court. The decision struck down the restrictions on candidates’ contributions to their own campaigns, mandatory spending limits (unless candidates voluntarily participated in the public financing system), and restrictions on expenditures by independent groups.

The FECA never succeeded in controlling campaign costs; it is doubtful that it would have done so even if the Court had ruled differently in Buckley. The law has required reams of paperwork but has only redirected the channels through which money reaches candidates. Raising enough money in small sums to run a credible campaign is expensive and time consuming. Internet fund-raising might lower the costs of finding small donors in future campaigns, but traditionally, only the most finely targeted and best maintained mailing and phone lists justified the considerable investment. Congressional and presidential candidates devote much more of their time to fund-raising than before the FECA. To lessen their load, they turn to PACs and to “bundlers,” who locate caches of $1,000 contributions. The number of PACs grew from 608 in 1974 to 4,180 in 1990. Many of these were business PACs, which were responding to the expansion of federal regulations through the 1960s and 1970s by making sure they had access to both Democratic and Republican policy makers. New ideological PACs as well as officeholders’ PACs also emerged to encourage candidates who mirrored their agendas. A 1978 FEC ruling allowed state parties to follow state rather than federal disclosure and contribution guidelines. This had the effect of strengthening party organizations badly weakened by the FECA, but it also opened the way for unlimited “soft money” contributions. Democrats have relied more heavily than Republicans on soft money, since the Republicans constructed an effective database to generate “hard” money.

Soft money was the target of the next round of reform, which followed revelations about questionable donations to the Democratic National Committee and some unseemly fund-raising by President Bill Clinton and Vice President Al Gore. The Bipartisan Campaign Reform Act of 2002, commonly known as McCain-Feingold, blocked some of the soft money channels; others, especially the independent “527” groups (named after their place in the Internal Revenue Service code), remain. The FECA system continues to demand time and effort to ensure compliance, but since 1996 many presidential aspirants have opted out of the restrictive federal financing system. The American system of campaign finance remains tangled, intrusive, and complex, while reformers await the next scandal.

See also campaign law and finance to 1900; interest groups.

FURTHER READING. Anthony Corrado et al., The New Campaign Finance Sourcebook, 2005; Spencer Ervin, Henry Ford vs. Truman Newberry: The Famous Senate Election Contest, 1935; Alexander Heard, The Costs of Democracy, 1960; Louise Overacker, Money in Elections, 1932; James K. Pollock, Party Campaign Funds, 1926; Melvin I. Urofsky, Money and Free Speech: Campaign Finance and the Courts, 2005.

PAULA BAKER


campaigning

Campaigning for office is one of American democracy’s most defining acts—yet many citizens find campaigns unruly, distasteful, demeaning. Most elections are shrouded in some mystery; even in the age of polling, surprises sometimes occur. But, especially in presidential campaigns, the complaints about campaigns being too long, expensive, demagogic, and frilly come as regularly as America’s distinctive, scheduled election days, albeit more frequently.

The word campaign originated in the seventeenth century from the French word for open field, campagne. With soldiers of that era fighting sustained operations, often across open country, the term quickly acquired its military association. The political connotation emerged in seventeenth-century England to describe a lengthy legislative session. In nineteenth-century America, “campaign” became part of the barrage of military terms describing electioneering: the party standard-bearer, a war horse tapping into his war chest and hoping not to be a flash-in-the-pan—a cannon that misfires—mobilized the rank-and-file with a rallying cry in battleground states to vanquish their enemies.

American politicians needed to conquer the people’s hearts because popular sovereignty has been modern Anglo-American government’s distinguishing anchor since colonial days. In elitist early America, the ideal candidates stood for election; they did not run. Wooing the people was considered too ambitious, deceitful, undignified; passivity demonstrated the potential leader’s purity. This posture lingered longest at the presidential level, and it continues to feed the national fantasy about disinterested, virtuous candidates wafting into the presidential chair by acclamation, rather than the stereotypical grubby, aggressive, blow-dried, weather vane politicians slithering into office today.

There are more than 500,000 elected offices in the United States, ranging from tree warden to president. Most are elected directly to serve locally. While senatorial, gubernatorial, and presidential campaigns require “wholesale” campaigning to mobilize blocs of voters, the typical campaign is a “retail,” mom-and-pop, door-to-door operation pressing the flesh. This contact enables American citizens to meet, assess, scrutinize those who aspire to lead them. Even in today’s high-tech, television-saturated age, the July Fourth meet-and-greet county picnic or the Election Day get-out-the-vote drive by carpooling neighbors more typifies campaigns than big-budget multistate advertising buys during presidential elections.

While presidential campaigning commands most Americans’ attention, often setting the tone for campaigns at all levels, the presidential election remains indirect. As many Americans only first realized in 2000, when Al Gore won more popular votes than George W. Bush but lost the presidency, voters choose slates of electors, organized state by state, and pledged to vote for particular candidates when the Electoral College meets. This filtering of the people’s voice reflects the Founding Fathers’ fears of “mobocracy.” The Electoral College today orients campaign strategy toward a few voter-rich, swing states. In 1960 Vice President Richard Nixon vowed to campaign in all 50 states. He lost, narrowly. Many Republicans grumbled that Nixon wasted precious resources fulfilling that imprudent pledge.

Campaigns are legitimizing and unifying democratic rituals, linking the leader and the led in a historic, tradition-rich rite of affirmation. Ultimately, campaigns involve the mystical alchemy of leadership and the practical allocation of power, cleanly and neatly. America’s winner-take-all elections designate one winner, with no power sharing or booby prizes for losers. Campaigns also offer a clear narrative trajectory, with all the plots and personalities culminating on one day, when the people speak. As a result, the history of campaigning, on all levels, is a history of vivid clashes, colorful personalities, defining moments. Presidential campaigning history includes President John Adams clashing with his own vice president, Thomas Jefferson, in 1800; Republican Wide Awakes marching through northern cities before the Civil War; Grover Cleveland winning despite the mockery of “Ma, Ma, Where’s my pa, gone to the White House, Ha, Ha, Ha” in 1884; William Jennings Bryan’s valiant, superhuman speechifying in 1896; John Kennedy’s elegance during the 1960 televised debates; Ronald Reagan’s stirring Morning in America 1984 campaign; the geysers of baby boomer idealism that the honey-smooth Bill Clinton tapped in 1992; George W. Bush’s Karl Rove–engineered, play-to-the-base strategy in his 2004 re-election; and Barack Obama’s 2008 mix of redemptive “Yes We Can” uplift and an impressive ground game mixing old-fashioned grassroots politics with cutting edge netroots outreach.

The First Three Historical Phases: The Republican,
Democratic, and Populist Campaigns

Just as natural scientists debate whether species evolved gradually or through occasional leaps called punctuated equilibrium, political scientists debate how campaigns developed. Historians traditionally focused on breakthrough campaigns pioneering particular techniques, celebrating 1840 as the first popular campaign that mobilized the masses and 1896 as the first mass merchandising effort organized bureaucratically. Historians also identified critical elections that realigned power, especially the 1800 revolution that empowered Thomas Jefferson’s Democratic-Republicans, the 1828 Democratic Jacksonian revolution, the 1860 Republican antislavery revolution, the 1896 corporate Republican realignment, the 1932 Democratic New Deal ascension, and the 1980 conservative Republican Reagan Revolution.

In fact, campaigns evolved slowly, haphazardly, sometimes fitfully, responding to changing communication and transportation technologies, shifts in population and society, the parties’ rise and fall, and the growth of the presidency. Technological innovations, including the railroad, telegraph, radio, television, and even the Internet, created necessary but not sufficient conditions for change. Sometimes, traditionalists resisted: throughout the 1800s, editorialists and opponents repeatedly denounced candidates who stumped—mounted speaking tours—claiming they acted in an undignified—and unprecedented—way. Sometimes, innovations failed until politicians figured out how to adapt the technology. Citizens in a democracy get the campaign they deserve; interested, overall, in winning strategies, successful candidates offer unsentimental reflections of what works, despite what people wish worked or imagine worked in the past.

The history of presidential campaigning can be divided into four phases: the republican, democratic, populist, and electronic. The republican phase, reflecting the founders’ republican ideology, trusted the wisdom of the few and feared the passion of mobs while rooting government’s legitimacy in consent of the governed. Politics’ gentlemanly tone reflected the search for virtuous candidates who would neither conspire with cabals nor rabble-rouse demagogically. Campaigns emphasized the candidate’s suitability, as candidates functioned as icons, ideal representations of the perfect gentleman and leader.

Candidate, from the Latin word for white, candidus, evoked the white togas that represented Roman senators’ supposed purity. In that spirit, candidates were to stand for election and not run. Local campaigns were not always as sober as the republican conceit hoped. Decades before the Populist and Progressive movements instituted the secret, “Australian” ballot, Election Day was a raucous occasion. On this day, grandees, sitting at the polls as people voted, asked their social inferiors for help, thanking them with libations. This momentary egalitarianism reflected the essential links between equality, liberty, and democracy.

Still, the ideal republican candidate was George Washington. Reflecting his reluctance, he stayed on his farm in humble repose, awaiting the people’s call, before being elected president unanimously. In 1792, he was re-elected without touring around begging for votes.

Embodying national virtue, Washington was a demigod who set the bar unrealistically high for Americans—and his successors. As parties developed and as local candidates began campaigning, Washington’s passive silence became a straitjacket his successors tried wriggling out of, campaign by campaign.

The rise of political parties, the lifting of voting restrictions on white males, and the move from farms to factories triggered a democratic revolution. Local campaigns became increasingly hard fought. In 1824, 1828, and 1832, Andrew Jackson, the charismatic, controversial war hero who became an assertive president, brought a new personality-based mass excitement to the campaign. Jackson’s elitist Whig opponents copied, perfected, and outdid the Jacksonian Democrats’ mass appeal tactics in their 1840 campaign for William Henry Harrison. This Whig hijacking proved that the democratic sensibility had become all-American.

The nineteenth-century democratic campaign mobilized the masses through their partisan identities. Politicking became the national pastime. Election days were mass carnivals, culminating months of pamphleting, marching, orating, editorializing in party newspapers, and bickering among neighbors. The party bosses dominating the system sought loyal soldiers more than virtuous gentlemen. The quality parties prized most was “availability”: pliant, appealing, noncontroversial candidates. Rather than lofty, passive icons, candidates were becoming actors, sometimes speaking, sometimes stumping, always following the party script. During this time, acceptance letters became increasingly elaborate policy statements, fitting the candidate’s views to the party platform, rather than simple republican expressions of virtuous reluctance to plunge into politics.

While party bosses picked most local candidates, the national parties mounted elaborate quadrennial conventions to nominate a standard-bearer and define the party platform. These colorful, often rollicking affairs were way stations between republican elitist politics and today’s popular politics. Party bosses lobbied behind the scenes to select candidates and set agendas, but the conventions’ deliciously democratic chaos reflected America’s drift away from hierarchical politics.

Seeking loyalists, these conventions nominated last-minute dark horses, like James Knox Polk or James Garfield; undistinguished party hacks like Millard Fillmore and Franklin Pierce; war heroes like Lewis Cass, deemed a “doughface” because he could mold himself to appear sympathetic to the North or the South; and relatively uncontroversial, compromise candidates like Abraham Lincoln. A one-term former Whig congressman in a party now dominated by the antislavery titans William Henry Seward and Salmon P. Chase, Lincoln followed the textbook strategy during this phase: “My name is new in the field, and I suppose I am not the first choice of a very great many. Our policy, then, is to give no offense to others—leave them in a mood to come to us if they shall be compelled to give up their first love.”

As democratization, urbanization, industrialization, and the communications revolution intensified, American politics became more populist, and the presidency became more central. In this populist phase, candidates were more independent of party and more nationalist in orientation. The quaint gentlemanly postures vanished as candidates stumped, whistle-stopped, and prop-stopped on trains, planes, and automobiles. Candidates needed to demonstrate their popularity and their potential to lead the nation. The best candidates were master orators with just a tinge of demagoguery who could move thousands listening in person and millions of newspaper readers and, eventually, radio listeners. After Franklin Roosevelt made America into a superpower and 1600 Pennsylvania Avenue America’s central address, Americans no longer sought mere actors but super-heroes who could dominate their parties, the campaign, the presidency, and the national news cycle.

Presidential candidates stumped more and more intensely throughout the nineteenth century, and the acceptance letter developed into an elaborate notification ceremony featuring a candidate’s address. In the 1880s and 1890s, torn between the tradition of passivity and pressures to be more active, James A. Garfield and other candidates mounted front porch campaigns, staying at home but greeting huge delegations of supporters from across the country who came to pay homage. Still, the 1896 campaign became one of those historical moments that consolidated and advanced various innovations. William Jennings Bryan’s elaborate 18,009-mile, 600-speech, 27-state rear-platform campaign ended the charade that candidates did not stump for themselves. William McKinley’s front porch campaign, whereby he greeted over 300 delegations consisting of 750,000 visitors from 30 states at his Ohio home, genuflected toward the past. Meanwhile, McKinley’s campaign manager, Mark Hanna, mounted a modern campaign. Recognizing the growing overlap between consumerism and politics, he organized dozens of special-interest groups, deployed hundreds of speakers, raised millions of dollars, and distributed hundreds of millions of pamphlets.

Subsequently, the charismatic, candidate-centered campaigns of Theodore Roosevelt, Woodrow Wilson, and Franklin Roosevelt presented the candidate as poised to master what Theodore Roosevelt called the “bully pulpit.” By 1948, even the mild-mannered Harry Truman felt compelled to run an aggressive, “Give ‘em hell” Harry campaign crisscrossing America, despite the fact that he, following Franklin Roosevelt, was also dominating the airwaves thanks to radio’s spread in the 1920s and 1930s.

By 1952, the heroic Dwight Eisenhower also campaigned actively and cut campaign commercials on the new medium of television. “Eisenhower Answers America” offered short, staged televised interactions between the general and “the people”—all-American types Eisenhower’s advertising wizards selected from the queue at New York City’s Radio City Music Hall. Between takes, Eisenhower muttered: “to think that an old soldier should come to this.”

The Fourth Phase: The Electronic Campaign

The television revolution ushered in campaigning’s electronic era. Most local candidates could not afford to broadcast television commercials, but the need for state, national, and some local candidates to raise big money favored entrepreneurial candidacies. Party discipline and loyalty faded as state primaries nominated most candidates. At all levels, outsiders could defy the bosses. Independent gunslingers with enough popularity could win the nomination and inherit the party apparatus. Movie stars could become California governors, billionaires could become New York City mayors. Losers then frequently faded into the sunset and winning candidates emerged less beholden to party powers. Media mastery, rather than virtue, loyalty, or oratory became prized, as candidates frequently traded on celebrity. Campaigns were no longer quests to emphasize a candidate’s iconic virtue but to project an appealing image. In this electronic era, smooth-talking salesmen such as John Kennedy, Ronald Reagan, and Bill Clinton dominated.

Television debates offered some of the turning points in presidential campaigns, including when a tanned, confident John Kennedy bested a sweaty, shifty Richard Nixon in 1960, when Gerald Ford stumbled in 1976 by declaring Eastern Europe “free” even though the Soviets still dominated the region, and when Ronald Reagan laughed off Jimmy Carter’s criticisms in 1980, chuckling, “There you go again.” Television commercials offered equally powerful defining moments: 1964’s pro-Lyndon Johnson “Daisy” commercial suggesting Republican Barry Goldwater might blow up the world, 1984’s “Morning in America” commercial praising Ronald Reagan’s America as paradise recovered, and 1988’s “Willie Horton” commercial maligning Michael Dukakis for furloughing a convicted murderer who then committed assault and rape.

Most recently, some politicians welcomed the computer age as heralding a fifth, virtual era of campaigning. But in the first few national election cycles, the Internet and blogosphere extended the reach of the electronic campaign without yet fully transforming it. In 2008, Barack Obama exploited the Internet as a fundraising and friend-raising tool, raising unprecedented amounts from a huge base of small donors. Still, most of his $600 million war chest came from big money sources. The revolution will happen, gradually, haphazardly.

Meanwhile, many of the historic conundrums surrounding campaigning persist. Are voters fools, making what scholars call “low-information rationality” decisions as casually as they choose a toothpaste brand, or are they seriously assessing a job applicant’s potential to lead the world’s superpower? Why is voter turnout so low: does it reflect America’s stability or Americans’ disgust with politics? What is the role of money in politics: are campaign costs and donor influence out of control, or are costs reasonable, considering that Procter & Gamble’s advertising budget of $6.8 billion in 2006 puts into perspective the estimated $4.3 billion spent during the 2008 campaign to select the leader of the free world? Do Americans seek a president who can be king or prime minister, or could the criteria for those two different jobs be combined? And do America’s greatest leaders win campaigns—or if not, why not?

Questions and grumbling continue—and will continue, considering how important the process is, and how messy. Still, American campaigns remain magical, from the contest for the most common office to the highest office in the land. Leaders trying to converse communally with thousands, millions, or even hundreds of millions face daunting challenges. But the lack of violence during campaigns and their remarkable regularity through prosperity and depression, peace and war, reveal the system’s buoyancy. And the fact that even after the contested presidential election of 2000, most Americans accepted the declared winner as legitimate speaks to the Constitution’s continuing power. That a document cobbled together hastily in the horse-and-buggy age of the 1780s still works today is a miracle most Americans take for granted, but one that every campaign affirms, no matter how much mudslinging, grandstanding, and promiscuous promising there may be.

See also campaign consultants; campaign law and finance; elections and electoral eras.

FURTHER READING. Paul Boller, Presidential Campaigns: From George Washington to George W. Bush, revised ed., 2004; Richard Ben Cramer, What It Takes: The Way to the White House, 1993; Kathleen Hall Jamieson, Packaging the Presidency: A History and Criticism of Presidential Campaign Advertising, 3rd ed., 1996; Alexander Keyssar, The Right to Vote: The Contested History of Democracy in the United States, 2001; Richard P. McCormick, The Presidential Game: The Origins of American Presidential Politics, 1984; Joe McGinniss, The Selling of the President, 1988; Nelson Polsby and Aaron Wildavsky, with David A. Hopkins, Presidential Elections: Strategies and Structures of American Politics, 12th ed., 2007; Gil Troy, See How They Ran: The Changing Role of the Presidential Candidate, revised ed., 1996; Theodore H. White, The Making of the President 1960, 1967.

GIL TROY


Caribbean, Central America, and
Mexico, interventions in, 1903–34

In the first two decades of the twentieth century, the United States intervened in the Caribbean, Central America, and Mexico with a frequency and purpose that made earlier policy in the region seem haphazard. The interventions ranged from outright military occupations in Mexico, Haiti, and the Dominican Republic to less bellicose, but still coercive, efforts to control the finances of Honduras and Nicaragua. Despite their differences, the actions all aimed to force stability on the poor, weak, and politically volatile nations of the South while protecting U.S. security and promoting American economic interests. The wave of interventions gave U.S. foreign policy a formal new doctrine—the Roosevelt corollary to the Monroe Doctrine—as well as the informal, and at first pejorative, title “dollar diplomacy.” While less dramatic than U.S. participation in the Spanish-American War and World War I, the interventions were highly controversial and had repercussions that echoed through the twentieth century and into the twenty-first.

U.S. Expansion in Global Context

The projection of American power in the Western Hemisphere in the early twentieth century was part of a larger global process by which advanced industrial nations—above all Great Britain and France, but also Germany, Italy, Belgium, and Japan—took direct or indirect control of less developed societies in Africa, Asia, and the Middle East. Many factors contributed to this New Imperialism, as it was called: the vast new military and technological power that the industrial revolution bestowed on a handful of advanced nations; the competition for raw materials, foreign markets, and geopolitical advantage among these same powers; and the belief, widely held at the time, that most non-European peoples were unfit for self-rule and thus needed to pass under the tutelage of one or another “civilized” nation—what Rudyard Kipling called the “White Man’s Burden.”

Similar factors encouraged the extension of American hegemony, or domination, over the Caribbean and Central America. The growing wealth, military power, technology, and trade of the United States meant that “more or less meddling on our part with the political affairs of our weaker neighbors seems inevitable,” as one newspaper editorial put it in 1912. American leaders also looked over their shoulders at the relentless growth of Europe’s colonial empires. They worried most of all that Germany, a powerful industrial nation with few colonies, would ignore the Monroe Doctrine and make a grab for territory in the New World.

Creating Panama

Those fears increased after the first American intervention of the twentieth century brought the nation of Panama into existence in 1903. This came about when Colombia refused to accept an American offer of $10 million for the right to build a canal across the isthmus of Panama, then a Colombian province. In November 1903, with the U.S. warship Nashville in place to keep Colombian troops from interfering, rebels in Panama declared their independence. Soon after, the tiny new nation leased the Canal Zone to the United States, and work on the great project began the next year. When the Panama Canal opened in August 1914, it channeled a significant share of world trade through Caribbean waters and allowed the United States to move warships quickly between the Atlantic and Pacific. Given the economic and strategic importance of the canal, American policy makers feared that the “weak and tottering republics” of the Caribbean would emerge as the soft underbelly of the United States. As signs grew clearer that a major European war was imminent, U.S. officials fretted that the European powers would pressure one or another country near the canal to grant them coaling stations or naval bases. “The inevitable effect of our building the Canal,” Secretary of State Elihu Root noted in 1905, “must be to require us to police the surrounding premises.”

The Panama intervention gave the United States control of the Canal Zone in perpetuity, and American officials did not seek to acquire more territory in the region. Few Americans of the time favored creating a formal empire on the European model, especially after the United States resorted to brutal tactics to suppress the Filipino independence movement in the wake of the Spanish-American War. Anti-imperialists like Carl Schurz argued that the United States did not need to own the countries it wished to trade with, since American products could compete with the best industrial goods made. Racial prejudice also worked against creating a formal empire. Historically, Americans had allowed new territories to become states, granting full political rights to their inhabitants. If the United States annexed the Dominican Republic or Haiti, eventually those overwhelmingly nonwhite peoples might gain the right to vote, an abhorrent idea to most Americans in the early 1900s. Thus, while policy makers wished to impose stability in the Caribbean and Central America, they did so through what William Appleman Williams called “non-colonial but nevertheless imperial expansion”—the policy of dominating the region without the cost or political headaches of outright ownership.

Noncolonial Expansion:
The Dominican Receivership

President Theodore Roosevelt developed the centerpiece of this nonterritorial imperialism in the Dominican Republic, which shares the island of Hispaniola with Haiti. By 1904 the Dominican government had defaulted on millions of dollars in loans from both European and American creditors. At first the State Department worked to ensure that American investors got their money back, but when European foreign ministries threatened to force repayment by seizing Dominican custom houses, Roosevelt ordered U.S. Navy officers to broker a deal. In January 1905, Dominican president Carlos Morales agreed to accept U.S. control of his government’s finances. Under the customs receivership, as it was called, American officials took over the Dominican Republic’s custom houses—the source of nearly all government revenue—and then paid 45 percent of the money to the Dominicans, with the rest going to foreign creditors. The plan followed from the view that the frequent Latin American revolutions were nothing more than squabbles over money. Roosevelt believed that, by denying Dominicans control of their own national treasury, he had hit on the perfect mechanism to end political instability and financial chaos.

As a complement to the customs receivership, Washington officials brokered a $20 million loan from American banks to help the Caribbean nation build the roads, wharves, and other infrastructure needed for economic growth. Thus, the receivership and loan had the added, and not accidental, benefit of transferring Dominican financial dependence from Europe to the United States. Historian Emily Rosenberg has argued that, by linking a loan agreement to financial supervision, the Dominican plan was a prototype for public-private development partnerships that in the post–World War II period would be “enshrined in the International Monetary Fund.”

The Roosevelt Corollary

The Dominican intervention was also the occasion of Roosevelt’s famous corollary to the Monroe Doctrine. The original doctrine, first announced in 1823, warned European powers not to try to carve colonies from the newly independent but feeble nations of Latin America. Through the corollary, Roosevelt now forbade European powers to bombard or occupy nations anywhere in the Americas, even to collect legitimate debts. In return, Roosevelt promised that “the United States, however reluctantly,” would itself exercise “an international police power” to ensure that each nation “keeps order and pays its obligations.” When Democrats and some progressive Republicans in the U.S. Senate objected to the extension of American power over the Caribbean republic, Roosevelt skirted the Senate’s treaty-making power and had the U.S. Navy implement the receivership by executive fiat.

American leaders saw the customs receivership as an ideal solution to the problem of instability in the Caribbean and, for the first few years, the receivership seemed to live up to their hopes. The New York Times cheered that “Uncle Sam has waved the wand that produces National transformation, and lo! a republic has appeared where government is of the people, peace is assured, prosperity is perennial.” Although the term would not be coined until a few years later, the Dominican receivership embodied the new policy of “dollar diplomacy,” which promised to deploy financial control rather than troops to bring stability to the Caribbean region.

Dollar Diplomacy in Central America

American officials saw the Dominican receivership as such a success that they tried to replicate it elsewhere. After Roosevelt left office, President William Howard Taft and Secretary of State Philander C. Knox pressured Honduras to accept a receivership on the Dominican model, supplemented by loans from American banks. Honduran president Miguel Dávila waffled, fearing a nationalist outcry if he voluntarily gave control of his country’s finances to a foreign power. At last, in 1911 he agreed to the plan—only to be driven from office a few months later. That same year, American diplomats pressed a similar arrangement on neighboring Nicaragua, where President Adolfo Díaz, formerly an employee of a U.S. mining company, fearfully accepted the receivership plan.

Both treaties ran into trouble in the U.S. Senate, however, where Democrats and some progressive Republicans objected to what they saw as an unholy alliance of Wall Street bankers and an overreaching president. Opponents worried that Taft would put the receiverships in place without Senate approval, as Roosevelt had done in the Dominican Republic. “They are trying to use the army and navy of the United States to accomplish that which we have specifically refused to give them authority to do,” Senator Augustus Bacon, Democrat of Georgia, fumed in 1912. In the end, neither treaty won Senate approval.

Dollar diplomacy, conceived with the goal of ending military interventions, sometimes precipitated them. In Nicaragua, President Díaz faced an armed insurrection by opponents of the U.S. receivership and the American loans that came with it. To keep Díaz in power, the United States landed over 2,000 marines in Nicaragua in what newspapers at the time noted was a clear setback to dollar diplomacy. Once the revolt was quelled, some marines remained, ostensibly to guard the U.S. embassy but really as a tripwire—if violence broke out again, American boots would already be on the ground, justifying the dispatch of reinforcements. U.S. soldiers would return to Nicaragua in force in the late 1920s to crush the rebellion led by Augusto Sandino, who declared, “I want a free country or death.”

The Haitian and Dominican Occupations

Even more disappointing to U.S. policy makers was the fate of the “model” receivership in the Dominican Republic. In 1911 an assassin took the life of President Ramón Cáceres, a popular leader who had cooperated with the United States. The death of Cáceres unleashed several years of precisely the kind of instability that the receivership had supposedly ended. As Dominican presidents rose and fell, the United States began to interfere in day-to-day politics on the island, sending 750 marines to “protect” the U.S. embassy, ordering warships to cruise Dominican waters, and cutting off the receivership’s payments to leaders that American diplomats disliked. It is noteworthy that these strong measures were taken by President Woodrow Wilson and his secretary of state, William Jennings Bryan, Democrats who had denounced the Latin American adventures of Republican presidents Roosevelt and Taft. Bryan, three times the Democratic candidate for president from 1896 to 1908, was indeed the symbol of anti-imperialism, once calling dollar diplomacy “a repudiation of the fundamental principles of morality.”

It was, nevertheless, under Wilson that U.S. Marines invaded and occupied the Dominican Republic in 1916 (Bryan had resigned in mid-1915 over Wilson’s increasing belligerence toward Germany). After repressing scattered resistance, the marines established a military government—no Dominican leader would give the occupation the fig leaf of local support—that imposed martial law, censored the press, and began confiscating arms from the local population.

The occupation lasted eight years. In that time, the American occupiers had some success improving infrastructure, education, and public health, despite unrelenting hostility from the Dominican people that grew into an armed resistance. In 1919 the Dominican president deposed by the U.S. Marines three years earlier traveled to the Versailles peace conference and called for an end to the occupation based on Wilson’s pledge in the Fourteen Points to support “justice for all peoples . . . whether they be strong or weak.” The Dominican plea had little effect beyond embarrassing American officials at the conference.

The Dominican intervention was not unique. Even before U.S. Marines landed in Santo Domingo, they had occupied the neighboring country of Haiti. Political instability in that country fed American fears of German intervention as World War I raged in Europe. Future secretary of state Robert Lansing justified the occupation of Haiti and other Caribbean nations as essential to “prevent a condition which would menace the interests of the United States . . . I make no argument on the ground of the benefit which would result to the peoples of these republics.” The American occupation of Haiti lasted from 1915 until 1934. As in the Dominican Republic, it triggered violent popular opposition in the form of strikes, riots, and guerrilla warfare.

Wilson and Mexico

Wilson and Bryan also used military force to try to steer the course of the Mexican Revolution. The overthrow of Porfirio Díaz, the dictator who had ruled Mexico for over 30 years, touched off a violent civil war that began in 1911. In 1913 General Victoriano Huerta overthrew the elected government and took the oath as Mexico’s president, yet Wilson withheld official U.S. recognition and pressed for new elections.

When Huerta refused to resign, Wilson seized on a minor incident—Mexico’s arrest of several American sailors—to force a showdown. Wilson ordered U.S. Marines to seize the port city of Veracruz, assuming there would be scant resistance and that the occupation would humiliate Huerta and force him to resign. Instead, the marines found themselves fighting street to street with Mexican forces while Huerta clung to power. Despite the casualties, Wilson would not abandon his plan “to help Mexico save herself and serve her people.” Wilson at last withdrew the marines from Veracruz in November, after Huerta’s rival Venustiano Carranza forced him from power. American meddling in Mexico was not over, however. After revolutionary leader Pancho Villa raided a border town in New Mexico, Wilson sent a 6,000-troop “punitive expedition” across the border in 1916. Wilson, the “anti-imperialist,” intervened in Latin America more than any earlier U.S. president.

The receiverships, incursions, and occupations that characterized U.S. policy in the region left a bitter legacy of anti-Americanism throughout Latin America. Leading literary works of the early twentieth century, including Uruguayan intellectual José Enrique Rodó’s essay Ariel and Nicaraguan poet Rubén Darío’s bitter ode “To Roosevelt,” cast the United States as a materialistic bully out to crush the romantic spirit of Latin America. In the wake of U.S. interventions in the region, many Latin Americans came to accept this view of their northern neighbor.

Political Repercussions of Intervention

The interventions in the Caribbean, Central America, and Mexico launched between 1903 and 1916 coincided with the high tide of progressivism in the United States. Political debate at home focused on critical economic issues, such as corporate power and labor unrest, in addition to perennial topics like the protective tariff. Those and other domestic issues, as well as the outsized personalities of Theodore Roosevelt, William Jennings Bryan, William Howard Taft, and Woodrow Wilson, dominated the presidential campaigns of 1904, 1908, and 1912. Although many Democrats and some Republicans objected to the way Roosevelt had “taken” Panama, a majority of voters favored building the canal and forgave the president’s methods, returning him to the White House in 1904. By 1908, despite Bryan’s efforts to make Republican foreign policy a campaign issue, the New York Times declared that “anti-imperialism is not an issue in this country, it is only a whine.”

After Democrat Woodrow Wilson became president in 1913, Republicans accused him of “vacillating” in his defense of American lives and property in revolutionary Mexico. By 1914, however, World War I had dwarfed the issue of Mexico, and voters reelected Wilson in 1916 in large part for keeping the country out of what Democrats called the “carnival of slaughter” in Europe. While they never became decisive electoral issues, the U.S. interventions in Panama, the Dominican Republic, Nicaragua, Honduras, and Mexico triggered vigorous debate in the Senate, which was called on to approve treaties in the first four cases. The interventions thus became episodes in the long struggle between the executive and legislative branches over control of U.S. foreign policy.

The interventions in the Caribbean, Central America, and Mexico are arguably more relevant to U.S. foreign policy in the twenty-first century than larger conflicts like the two world wars, Korea, and Vietnam. The interventions raised stark questions about the constitutional limits of presidential power in foreign affairs, the effectiveness of using the military to promote stability, democracy, and nation building in less-developed regions, and the unanticipated consequences of overthrowing hostile leaders in manifestly weaker nations. The occupations of the Dominican Republic and Haiti ended by 1934, and Franklin D. Roosevelt formally abandoned dollar diplomacy and pledged to be a “good neighbor” to Latin America after he took office in 1933. Even so, in the second half of the twentieth century the United States resorted to overt and covert intervention in the greater Caribbean, with the Central Intelligence Agency’s destabilization of the elected government of Guatemala in 1954, the U.S.-sponsored Bay of Pigs invasion of Cuba in 1961, military intervention in the Dominican Republic in 1965, and the invasion of Panama in 1989.

See also foreign policy and domestic politics, 1865–1933; presidency 1860–1932; progressivism and the Progressive Era, 1890s–1920; Spanish-American War and Filipino Insurrection; territorial government.

FURTHER READING. Bruce J. Calder, The Impact of Intervention: The Dominican Republic during the U.S. Occupation of 1916–1924, 1984; Julie Greene, The Canal Builders: Making America’s Empire at the Panama Canal, 2009; Robert E. Hannigan, The New World Power: American Foreign Policy, 1898–1917, 2002; John Mason Hart, Empire and Revolution: The Americans in Mexico since the Civil War, 2002; Walter LaFeber, Inevitable Revolutions: The United States in Central America, 1984; Lester D. Langley, The Banana Wars: United States Intervention in the Caribbean, 1898–1934, 2002; Emily S. Rosenberg, Financial Missionaries to the World: The Politics and Culture of Dollar Diplomacy, 1900–1930, 1999; Hans Schmidt, The United States Occupation of Haiti, 1915–1934, 1971; Lars Schoultz, Beneath the United States: A History of U.S. Policy toward Latin America, 1998; Cyrus Veeser, A World Safe for Capitalism: Dollar Diplomacy and America’s Rise to Global Power, 2002; William Appleman Williams, The Tragedy of American Diplomacy, 1959.

CYRUS VEESER


cities and politics

City politics in the United States has followed a distinctive trajectory, with characteristic competitors and institutions different from those of national politics. Cities have been the entryway for immigrants to the United States and the destination of domestic migrants from the countryside; their arrival has discomfited established residents and challenged local politicians to manage the newcomers’ accommodation into local politics. For most of U.S. history, the characteristic antagonists of city politics have been machine politicians and municipal reformers. Machine and reform politicians argued about the purposes and institutions of city government from the mid-nineteenth century until well after World War II. Over the same period, both camps adapted their styles and organizations to the expanded purposes of governing the city and the changing demands of urban voters. In the last quarter of the twentieth century, local institutions were again changed to accommodate diverse and politically sophisticated constituents.

From Echo to Urban Politics

In the first decades after independence, city politics were largely an echo of national politics. When Whigs and Democrats debated internal improvements and small government, for example, local party leaders in the cities voiced the same arguments. Two sets of issues interrupted that debate. In the 1840s, and again in the 1850s, nativist politicians argued that the central problem in local politics was the presence of immigrant voters. In the late 1820s and the mid-1850s, organized workingmen campaigned for legal reforms and for assistance during hard times. Each of these challenges left its mark on city politics.

Nativists argued that recent immigrants, mostly Irish and German Catholics, did not fit well into American cities. Immigrants were costly to city government, as evidenced by their presence in local poorhouses and prisons. Worse, since they came from countries that did not enjoy republican government, they were new to political participation, and did not have the skills to bear the burdens of U.S. citizenship. That deficit was amplified by their religion, as Catholicism, nativists claimed, discouraged independent thinking. Nativists were organized into local “American” parties. In most places, Whigs were more receptive to nativist arguments, while Democrats were more likely to defend immigrants. In the 1850s, nativists came together in Know-Nothing parties, which elected representatives to city, state, and federal governments. Like the Whigs, the Know-Nothings foundered on the political divisions that led to the Civil War. Party politicians who defended immigrants denounced nativists as “traitors to the Constitution” and “bigots and fanatics in religion.”

Workingmen’s parties appeared in cities in the late 1820s and had a long political agenda. The “workingmen” for whom they spoke were artisans, skilled craftsmen working in small shops. The parties called for the abolition of imprisonment for debt, compensation for municipal office holders, improved public schools, and an easing of the obligations of the militia system. Some of these issues were also supported by farmers and small businessmen; as a result, the parties succeeded in abolishing imprisonment for debt, reforming militia systems, and enacting democratizing reforms. Issues peculiar to wage laborers—a legal limit to the workday, abolition of prison labor—found few supporters beyond their natural constituency and failed. By the 1850s, there were fewer artisans in the cities and many more wage workers. Workers organized in mutual aid societies, unions, and federations of unions; these functioned in prosperous times but could not survive the periodic “panics” (depressions) that plagued the nineteenth-century economy. During the depression of the mid-1850s, mass demonstrations loudly demanded “work or bread” from city halls in New York, Philadelphia, Baltimore, and Pittsburgh, as well as some smaller cities like Newark, New Jersey, and Lynn, Massachusetts. Although the protests briefly convinced some members of the elite that revolution was imminent, the same demands provided an opportunity for major party leaders to demonstrate their sympathy and support for workers. Party leaders in Boston, Philadelphia, New York, and Trenton, New Jersey, responded with public works to provide employment, established soup kitchens, and endorsed many of labor’s demands. In this decade, the first political bosses appeared in big cities. They led party organizations to which working-class voters, especially the foreign-born, were fiercely loyal. Their opponents called the parties “machines” because they stamped out election victories as uniformly as machines created identical products.

Hardly had bosses appeared when municipal reformers challenged their rule. In the 1850s, municipal reform slates appeared in elections in Boston, Philadelphia, Baltimore, New York, and Springfield, Massachusetts. Reform politicians had good reasons for their discontent. First, party politics in the cities was corrupt. Second, as cities grew rapidly in the nineteenth century, their budgets grew with them. Even leaving aside the costs of corruption, the burgeoning cities needed streets, sewers, lighting, water, schools, and parks, and these required major investments. Given the incidence of municipal taxes in the nineteenth century—the rich could escape them and the poor were not taxed—growing municipal budgets rested squarely on the shoulders of a small, beleaguered urban middle class. Their disgruntlement provided followers and votes for municipal reform. Third, wealthy and middle-class citizens were not comfortable with the working classes, especially the immigrants, who were the foundation of party politics. Reformers denounced the reliance of party politicians on the unwashed, insisting that cities should be ruled “by mind instead of muscle.” Reformers were united by a belief that machine politicians and their immigrant constituents had corrupted democratic institutions in the cities; and they were united in their desire to upset the status quo and create order in city government. Thus, by mid-century, city politics had developed its own political antagonists and political arguments. Municipal reformers led small groups of activists and put forward candidates to compete in elections, but they rarely won. Party leaders defended workers as “the bone and sinew of the Republic,” denounced nativists, and insisted that parties were the best defense of the many against the few. Like extended families or married couples, they repeated these arguments over and over again in the century to follow.

Twentieth-Century Reform

Over the last half of the nineteenth century, the antipathies and discontents of municipal reformers coalesced into an agenda for change. Reformers’ opposition to the dominant parties led to a more general antiparty sentiment, and reformers endorsed non-partisanship. Arguing that the concerns of national parties, necessarily focused on national issues, were irrelevant to city life, reformers were fond of saying that there was no Republican way to lay a sewer, no Democratic way to pave a street. Cities, reformers argued, should be run as businesses; they required fewer politicians and more management. For reformers, urban budgets called for retrenchment, city governments should be frugal, and tax rates should be cut. In addition, municipal reformers opposed cronyism and patronage, called for competitive bidding for city contracts and, toward the end of the century, for merit-based appointments (civil service) for government jobs. Party leaders rejected the reform agenda. Cities were not like businesses, they argued, but communities in which government had obligations to citizens. Party leaders claimed that meeting needs for relief, public health, building codes, “make-work” during recessions, and accepting differences of religion and even affinity for drink were all in the legitimate province of politicians.

Seth Low, mayor of Brooklyn (1882–85) and then New York (1902–3), was an exemplary municipal reformer of this sort. Low made every effort to bring business-like efficiency to city government, and succeeded at reforming New York’s tax system and reducing its municipal debt. He opposed cronyism and patronage and argued for merit-based appointments to city jobs. Low did not attend to tenement reform and assistance to needy citizens.

At about the time Low was elected, another type of reformer appeared in the nation’s cities: the social reformer. Social reformers agreed with other advocates of municipal reform that corruption and cronyism were destructive of city finances and city government but also saw that corruption took its toll on all citizens. These views soon became widespread. Muckraking journalists brought shocking revelations to the public in newspapers and magazines. Lincoln Steffens’s essays, later published in the collection The Shame of the Cities (1904), exposed corruption across the country. He traced it not to the low character of immigrant voters but to the malfeasance of large interests and calculated its many costs to city coffers, ordinary citizens, and the moral standard of urban life.

Hazen Pingree, mayor of Detroit from 1890 to 1897, was a businessman transformed by his election. Once in office, Pingree became one of the nation’s leading social reformers. Detroit, like many cities, suffered from high prices set by its utilities. Of these, the most costly to working-class residents was the street railway; for workers, even a slight increase in fares might devastate a family budget. Pingree led a campaign to maintain the three-cent fare and free transfers. Resistance by the company led to riots not only by workers but also by middle-class patrons, and Pingree ultimately vetoed renewal of the company’s franchise. Well before the veto, his campaign to keep the three-cent fare had become a national crusade followed in the press. There were other social reform mayors, including Tom Johnson (Cleveland, Ohio) and Mark Fagan (Jersey City, New Jersey), who served their cities in similar ways.

In 1894 municipal reformers and leading progressives, including Theodore Roosevelt, founded the National Municipal League. In 1899 the league published its first model city charter, which proposed commission government for cities. In this innovation, citizens elected commissioners who individually served as administrators of city departments (streets, parks, etc.) and collectively served as both the legislative and the executive branch of city government. Although briefly popular, commission government was problematic, and in 1919 the National Municipal League endorsed a different model charter for city manager government.

The charter embraced nonpartisan, citywide elections for city council and the mayor. The council and mayor together appointed the city manager, a professional administrator who served as the chief operating officer of city government. The manager appointed the leaders of municipal agencies and monitored their performance. In addition, the manager was expected to advise the council about both its choice of policies and their implementation. The National Municipal League promoted the model charter in its journal, and supplied public speakers, pamphlets, advice, and boilerplate for newspaper editorials. By 1923, 240 cities had adopted city manager charters.

The changes endorsed by the National Municipal League were important for a second round of reform in the middle of the twentieth century. The Great Depression, and the U.S. effort in World War II, meant that cities across the country were neglected for almost a generation. Housing was not built, roads and other infrastructure not maintained, government conducted without change. After the war, one response to urban stagnation was the federal Urban Renewal program. Urban Renewal funds were eagerly sought by mayors and their city governments and drew support from downtown and real estate interests, construction workers, and low-income residents who hoped for better housing. Urban Renewal revitalized the downtowns of many cities but also displaced communities and did not fulfill the promise of increased housing for low-income families. Displacement provoked the tag “Negro removal” for the program, evidence of the bitterness left in its wake. Lingering black resentment at urban renewal joined demands for the integration of schools, anger at police brutality, and resentment of job discrimination in the private sector, feeding pressure for more candidates and public officials of color and for greater equity in law enforcement.

In the Southwest and the West, city governments responded to postwar challenges with a fresh campaign to reform city government. The goals of this latter-day reform movement were both to create new institutions and to staff them with new leaders. Between 1945 and 1955, charter change worked its way across the Southwest and the West. This generation’s efforts brought municipal reform to towns that grew to be among the nation’s largest and mid-sized cities: Dallas and Austin, San Diego and San Jose, Phoenix, Albuquerque, and Toledo. In addition to renewed city manager governments, reformers in these cities created a counterpart to the political party, the Nonpartisan Slating Group (NPSG), which nominated candidates for office, agreed on a platform, placed advertisements in newspapers, and worked to get out the vote. In time, the local NPSGs were as successful as the political machines of earlier decades at winning elections, often without effective opposition. In the 20 years that followed World War II, the leaders of big-city reform governments achieved a great deal. They planned and oversaw unprecedented growth, recruited industry, and presided over enormous efforts to build housing, parks, roads, and schools for their cities’ growing populations. NPSGs led governments unblemished by scandal or patronage.

Civil Rights

As popular as big-city reform governments were, they had problems and failures alongside their great successes. The failures were not reported in the press and, in some cities, even politicians were blind to them. Two problems in particular could not be fixed without dramatic change. The first was fiscal. A central promise of reform governments was low taxes. For a generation and more after World War II, cities in the Southwest enjoyed tremendous economic, population, and territorial growth, and it was the territorial growth—aggressive annexation of outlying areas over many years—that sustained low taxes. As city government expanded to deliver services, annexation kept the taxable population growing even more rapidly, keeping taxes low. By 1970, however, the cities were reaching limits that could not be extended. Many could not annex more territory, as they bordered nature preserves, military installations, and Native American reservations. Municipal debt was on the increase as governments tried to maintain their level of services. Yet the size of these cities was so great that it was not possible to deliver adequate services to all the residents who had come to expect them.

The second problem was the restricted political communities big-city reform created. The institutions of reform kept the electorate small: nonpartisan, citywide, and sometimes off-cycle elections; stiff voter registration requirements (sometimes renewed annually); and literacy tests, retained in some places well beyond the time they were declared unconstitutional. In Dallas, for example, fewer than 20 percent of adults over 21 voted in municipal elections from 1947 to 1963. Turnout in partisan cities was higher: in New Haven, the election with the lowest turnout in the same years brought 51 percent of adults over 21 to the polls. Restrictions on voting particularly affected residents of lesser means and citizens of color. The candidates they supported were rarely if ever elected; city councils were remarkably uniform. Annexation of new communities almost always increased the Anglo population and electorate, but not the number of African American and Spanish-surnamed voters.

In 1975 San Antonio tried to annex an outlying suburb. Latino residents filed a suit to stop the annexation, claiming that its intent and consequence were to keep them a minority of the electorate just as Latinos were on the verge of becoming the majority. The U.S. Justice Department agreed with them, and gave San Antonio a choice: the city could annex the territory but had to elect city council members from districts rather than citywide, or the city could maintain citywide elections, but could not annex any more territory. The city chose the first option. San Antonio thus became the first in a long line of big-city reform governments toppled by civil rights activists and the Justice Department.

Big-city reform governments everywhere gave up citywide elections to city councils in favor of districted ones. The most common consequence was a more equitable distribution of public services. In San Jose, San Diego, and Albuquerque, city council members have more authority to assure the delivery of services to their constituents. This is not trivial. Public services—libraries, roads, garbage collection, schools, police officers, and firefighters—are key to the quality of daily life, and they are what cities provide.

The legacies of political machines and municipal reform remain. The most important and widespread legacy of reform is the decline of patronage appointments for municipal employees and their replacement by merit-based hiring and civil service. Civil service has delivered more competent city employees at the street level, in administration, and in management; has increased the quality of services to citizens; and has created openness and fairness of employment opportunity for those seeking work in the public sector. The legacy of machine politics is the principle that the responsibilities of city government extend beyond business-like management and that urban governments are obliged to represent the interests and values of all of their citizens. At the beginning of the new millennium, U.S. cities were again host to many immigrants, and thus once again were required to rethink and revise municipal institutions, decide what is fair, and debate the appropriate functions and priorities of city government.

See also local government; suburbs and politics.

FURTHER READING. Amy Bridges, Morning Glories: Municipal Reform in the Southwest, 1997; Steven P. Erie, Rainbow’s End: Irish-Americans and the Dilemmas of Urban Machine Politics, 1840–1985, 1988; Melvin G. Holli, Reform in Detroit: Hazen Pingree and Urban Politics, 1969; David R. Johnson, John A. Booth, and Richard Harris, eds., The Politics of San Antonio: Community, Progress, and Power, 1983; Lincoln Steffens, The Shame of the Cities, 1904; Jessica Trounstine, Political Monopolies in American Cities: The Rise and Fall of Bosses and Reformers, 2008.

AMY BRIDGES


citizenship

At the time of the founding, the American conception of citizenship was marked by two profound contradictions that influenced the new nation’s constitutional and legal history.

First, the concept of citizenship, which we associate with freedom, was actually derived from the English tradition of subjecthood, anchored in the feudal notion of obligatory allegiance to the lord for all within his domain. This was formulated most prominently by Sir Edward Coke in Calvin’s Case (1608). At issue was whether persons born in Scotland after 1603, when the Scottish king James VI became king of England, were to enjoy the benefits of English law as subjects of the English king. Coke argued that, by virtue of the divine law of nature, they indeed did so. Once established in common law, the principle of jus soli was applied to all persons born on the king’s domain.

The residual importance of this element of domain is visible in American adherence to the overarching rule of jus soli, most prominently in the constitutional requirement of citizenship by birth on American soil as a qualification for the presidency, but also in the granting of citizenship to children of visitors and even illegal immigrants. However, well before the Revolution the colonists diverged from Coke, contending, in response to their special circumstances, that under the law of nature, subjecthood was modified by the wholly opposite principle of consent. They insisted that this transformed citizenship into an implicit contract whereby subjects could legitimately deny their allegiance to a tyrannical ruler. Accordingly, the concept of citizenship by consent is at the root of American constitutional documents and jurisprudence. By the same token, Americans proclaimed the right of the English and other Europeans to voluntary expatriation and the right of American citizens to shed their citizenship as well.

The second contradiction arose from the ambiguity of the concept of person. To begin with, the law of nature precluded neither the practice of slavery—whereby certain human beings were in effect property that could be owned, traded, and disposed of at will—nor the exclusion of persons of African origin born on American soil, even those legally free, from the benefit of jus soli. Although some of the colonies refrained early on from the practice of slavery, or even actively opposed it, the fact that American slaves were overwhelmingly of African descent extended this ambiguous legal status to all African Americans, including those who were free. This contradiction remained unresolved throughout the first half of the nineteenth century.

Eventually highlighted in the debates over the Dred Scott affair, it contributed to the escalation of tensions between North and South. In the case formally known as Dred Scott v. Sanford, Scott was the slave of an army surgeon and was taken in 1834 from Missouri to Illinois, where slavery had been forbidden by the Ordinance of 1787, and later to the Wisconsin Territory, where slavery was also illegal. Scott sued for his and his wife’s freedom on the grounds of their residence in those locations. The case reached the U.S. Supreme Court, which ruled that neither Scott nor any person of African ancestry could claim citizenship in the United States and therefore could not bring suit in federal court under the diversity of citizenship rules. Moreover, the Court held that Scott’s temporary residence in Illinois and the Wisconsin Territory had not made him free, because emancipation under the Missouri Compromise would have deprived Scott’s owner of his property. This contradiction was not resolved until after the Civil War and the enactment of the Fourteenth Amendment.

Ironically, a similar contradiction pertained to the status of those we now call Native Americans, in acknowledgment of their ancestral roots on American soil. In colonial times, independent unconquered tribes were dealt with as foreign nations; tributary tribes were considered subjects of His Majesty, but within a subordinate jurisdiction and with a separate legal status. In the first half of the nineteenth century, the United States dealt with organized tribes through treaties executed by the Department of War, after which jurisdiction passed to civilian control under the Department of the Interior. Tribes gradually encompassed by white settlements exhibited the contradiction most acutely. American leaders viewed the barring of Indians from citizenship as a concomitant of their “peculiar” status: they were either members of “foreign nations,” ineligible for naturalization by virtue of the “free, white” requirement legislated in 1790, or—if living among whites—members of a “separate inferior race” in a “state of pupilage” resembling the relationship of a ward to his guardian (as pronounced by New York’s Chancellor Kent in 1825 and reaffirmed later at the national level by Chief Justice John Marshall).

The contradiction persisted until 1924, when Congress passed the Indian Citizenship Act, the same year in which it firmed up the blatantly racist National Origins Quota system of immigration regulation. In 1921, the United States had imposed an annual limit on the admission of immigrants from the “Eastern Hemisphere” (meaning Europe, as Asians were already largely excluded) and allocated a quota to each country based on the putative number of persons of that national origin in the current population of the United States. The legislation was designed to reduce immigration from eastern and southern European countries (notably Poland, many of whose immigrants were Jewish; Italy; and Greece). The quotas were further reduced in 1924. By the end of the 1920s, the number of Poles admitted shrank to 8.5 percent of the pre-quota level.

One important source of contention between the settlers and England, featured among the grievances expressed in the Declaration of Independence, was disagreement over the granting of privileges and immunities of citizenship to aliens by way of naturalization. Whereas England jealously guarded this as an exclusive royal privilege, to be allocated sparingly and only under special circumstances, the colonial settlers eagerly adopted an acquisitive stance, asserting their own authority in the matter and establishing a much lower threshold of eligibility. One reason for this was economic: citizenship created attractive and lucrative opportunities to buy and sell land, as the status of national was traditionally required for holding real property. Under English common law and throughout much of Europe, aliens could not pass on property to their heirs; at their death, it reverted to the king. The ability to grant citizenship, therefore, was crucial for American land promoters.

The doctrine of citizenship by consent was reflected in all the founding constitutional documents, at both the state and national levels. However, constitutional doctrine failed to specify precisely what privileges and immunities citizenship conferred, and which Americans were citizens. Adult white women were undoubtedly citizens, but it did not follow that they shared in the voting rights of white male citizens. Moreover, a woman could lose her citizenship by marrying an alien. In retrospect, the concept of citizenship was, in effect, limited to the legal sphere.

Until the Civil War, citizenship matters were complicated by the U.S. constitutional structure, which established a distinction between the relationship of persons to the several states and to the central government. The most important aspect of this distinction was that an African American could be a citizen of New York or Illinois but not of the United States (as the Dred Scott decision established). This was eliminated by the Fourteenth Amendment (1868), which declared that “all persons born or naturalized in the United States . . . are citizens of the United States and of the State wherein they reside,” and that “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States.” Two years later, the Fifteenth Amendment broadened legal citizenship to encompass the political sphere by specifying, “The right of citizens of the United States to vote shall not be denied or abridged by the United States or any State on account of race, color, or previous condition of servitude.” However, it took nearly a century for this amendment to move from prescription to practice. The struggle to extend political citizenship to women, launched in the final decades of the nineteenth century, succeeded only about half a century later with the ratification of the Nineteenth Amendment in 1920. Yet another half a century later, the Twenty-Sixth Amendment extended political citizenship to 18-year-olds, down from the age of 21.

By the middle of the twentieth century, the concept of citizenship had expanded further in most western democracies to cover the social sphere, constituting what the British sociologist T. H. Marshall termed in 1950 “social citizenship.” The United States made some important strides in this direction during the New Deal period, with the institution of unemployment compensation and Social Security. But after World War II it diverged significantly from the path followed by its European counterparts, as well as Canada and Australia, in the sphere of health insurance, the narrowing of income inequality, and the assurance of minimal means of subsistence to all citizens. In 1996 the United States did in effect acknowledge the broadening of the rights of citizenship to encompass the social sphere by enacting legislation to restrict important federal benefits, such as welfare, to U.S. citizens only and allow the states to do the same. In recent decades the concept of citizenship has begun to broaden further to encompass the cultural sphere: the acknowledgment of religious and linguistic diversity, as well as symbolic matters such as pictorial representations of “typical Americans” and references to their cultural heritage. However, in the United States this domain is largely relinquished to the private sector, leaving citizens to fend for themselves according to their varying resources.

Access to citizenship through naturalization figured prominently in the United States, as in other so-called immigrant nations. The dissolution of imperial bonds gave individual state governments the authority to admit members of the political community. Former British subjects were admitted without question on the basis of their participation in the revolutionary struggle. Beyond this, most of the states devised quite liberal rules for incorporating the foreign-born, either in their constitution or by statute. The northern states usually required good character, one year’s residence, the renouncing of foreign allegiances, and the taking of an oath of allegiance to the state. Maryland required, in addition, an oath of belief in the Christian religion (which could be taken by either Catholics or Protestants). For anyone seeking appointment or election to public office, however, most of the states required a more extended period of residence, thereby introducing gradations of citizenship. The southern states usually specified, in addition, that the person be free and white. Initially, “national” citizenship was derived from membership in a state and, except for the racial qualification, the states generally accepted one another’s acts of naturalization. Nevertheless, ambiguities arose and fostered a growing sense that citizenship was a national matter. The idea of a more unified state took shape in the Philadelphia Convention’s proposed federation, which would have authority to implement a “uniform rule of naturalization.” The coupling of naturalization with the authority to establish “uniform laws on the subject of bankruptcies” highlights the prominence of economic concerns in the Founding Fathers’ view of citizenship and in the overall process of nationalizing governance.

President George Washington placed citizenship on the agenda in his very first message to Congress, which acted promptly on the matter. Pennsylvania and the western states advocated quick and easy naturalization, especially for prospective buyers of land, to whom citizenship would assure secure property rights. The stance would also serve the interests of ethnic minorities. In the national elections of 1788, for example, Pennsylvania’s German community, which had hitherto shied away from politics, demanded representation in proportion to its weight in the population, thereby prompting both the Federalists and their opponents to nominate appropriate ethnic candidates. Voting as a bloc, the Germans sent three representatives to the new Congress, where they firmly supported what today would be called the liberal side in the naturalization debate, thereby demonstrating the feedback effect of political incorporation on immigration and naturalization policy. The other side consisted of a coalition of unlikely bedfellows: New Englanders, who reflected their region’s narrower view of national identity, and Southerners who, although in favor of immigration, feared that most of the new citizens would oppose slavery.

Overall, naturalization appears to have been conceived originally as a first step toward Americanization, somewhat akin to a secular baptism, rather than as the capstone of a process of incorporation. The Naturalization Act of 1790 provided that free white persons of satisfactory character would be eligible for naturalization after two years’ residence in the United States, including one year within the state from which they applied. The qualifier free excluded numerous white immigrants bound in temporary servitude until their term expired. The requirement of satisfactory character, inspired by the Pennsylvania Constitution, was designed to exclude not only convicts and felons, but also “paupers,” considered malefactors in need of discipline, much as with welfare cheats today. That said, the naturalization procedure was accessible: the law specified that it could take place in any common law court of record. The law also provided that the minor children of naturalized parents automatically became citizens by way of jus sanguinis and, conversely, that the children born abroad of American citizens be considered natural-born citizens.

Admission to the political community also required an oath of allegiance to the U.S. Constitution. Although applicants were not subject to political vetting, the Constitution did specify that foreign-born persons who had left the United States at the time of the Revolution could not become naturalized without the express consent of the states. Directed at repentant British-born Loyalists, this exclusionary provision constituted one more indication of the country’s emerging assertiveness as a sovereign state and distinctive political régime.

Although the requirement of whiteness, which southern representatives insisted on, constituted a retreat from the inclusive notion of citizenship inscribed in the Northwest Ordinance enacted three years earlier, it evoked no debate whatsoever. Perennially restated in subsequent legislation down to the Civil War, this provision excluded not only persons of African descent, notably mulattoes from Saint-Domingue (now Haiti) who streamed into the United States as refugees from the island’s revolution, but also American Indians, who could become citizens only by treaty. “White” clearly meant “white exclusively,” and when Asians appeared on the scene in the 1840s, the courts quickly determined that they were ineligible as a matter of course. In the end, although the law confirmed the new republic’s exclusionary racial boundary, the inclusiveness of all free Europeans of good character, regardless of nationality, language, religion, or even gender, constituted a unique assertion of republican universalism, no less remarkable for being driven by interests as much as principle.

Considered from an international perspective, the provision of routine access to American citizenship constituted a radical political innovation. It challenged the ruling European doctrine of “perpetual allegiance” and threatened to seduce subjects away from their sovereigns. Added to the marketing of land, the naturalization law encouraged immigration. As a counterpart of the naturalization procedure, Americans also insisted on the right to expatriation. At a time when Europe’s population was still growing slowly and Europe adhered to Jean Bodin’s mercantilist formula—“Il n’y a richesse ni force que d’hommes” (There is no wealth nor power but in men)—they actively recruited British subjects as well as foreign Europeans and intervened in the international arena to secure freedom of exit on their behalf. This entailed not only physical exit (i.e., emigration) but also political exit—those coming to America had to renounce their original nationality, thereby challenging the prevailing doctrine of perpetual allegiance. Indeed, Britain’s insistence that British sailors who had become U.S. citizens remained under obligation to serve the king shortly emerged as one of the sources of conflict leading to the War of 1812.

Despite America’s generally acquisitive stance, public opinion on immigration swung sharply after 1792, when the crisis triggered by the French Revolution flared into war. The United States attracted a variety of dissenting groups, including aristocratic Frenchmen and German Pietists seeking to evade military service. Most of all, the radicalization of the revolution in France triggered widespread fear of Jacobins. The Federalists, now in power, sought to restrict immigration altogether on the grounds that it constituted a threat to national security. The intrusion of security considerations into the sphere of immigration and citizenship prefigured similar movements in response to the threat of anarchism at the turn of the twentieth century, of Bolshevism in the wake of World War I and the Russian Revolution, and communism at the outset of the Cold War, as well as the fear of Jihad terrorism in the wake of 9/11.

The Federalists lacked constitutional authority to restrict immigration directly, however, because control over persons fell within the sphere of police powers that were reserved to the states in order to protect slavery. Instead, they sought to achieve such restrictions by passing the Alien and Sedition Acts, which subjected both aliens and their American associates to governmental surveillance and criminalized certain forms of political protest. Naturalization emerged as a secondary line of defense against undesirable immigration. In 1798 the ruling Federalists amended the naturalization law to require 14 years’ residence and the filing of a declaration of intention five years before undertaking naturalization proceedings. After the Jeffersonian Republicans gained power in 1800, they repealed the Federalist naturalization amendments: in 1802 the residency period for naturalization was set at five years, with a three-year delay for filing a declaration of intention.

These terms, founded on the notion that a substantial period of residence was necessary to infuse aliens with American values, marked a shift away from the idea of naturalization as a ritual starting point toward the notion that it constitutes the capstone of an apprenticeship. With the significant exception of the racial qualification, the terms of citizenship have changed little since 1802, with the addition of more or less demanding tests of language skills and political information to ascertain the candidate’s qualification for naturalization. Although federal statutes excluded people of even partial African descent from ordinary naturalization (in keeping with the “one drop of blood rule”), the ruling doctrine of jus soli suggested they might claim citizenship by birthright. But this, in turn, threatened the founding’s grand compromise that enabled the exemplary land of freedom to tolerate the existence of slavery. The resulting tension moved to the fore in the debates over the admission of Missouri in 1820 and remained unresolved throughout the first half of the century, culminating in the Supreme Court ruling in Dred Scott v. Sanford (1857).

At the end of the Napoleonic Wars in 1815, European emigration resumed quickly and encompassed a broader continental region, because the Congress of Vienna specifically authorized the departure of the inhabitants of the territory ceded by France, including most of present-day Belgium, for a period of six years, and also provided for free emigration from one German state to another, including the Netherlands. Once on the move, many “Germans” kept going until they reached the ports of embarkation for America. Numbers rose to at least 30,000 in both 1816 and 1817 and reached nearly 50,000 in 1818. The growing numbers found a warm welcome from America’s budding manufacturing community, hungry for labor, as well as from land promoters. But they simultaneously stimulated public concern that the arrival of so many Germans, including numerous destitute persons and other undesirables who would burden state and municipal relief facilities, would dilute the nation’s British heritage.

Lacking authority to restrict immigration directly, in January 1818 Congress adopted a motion to limit the number of persons carried by incoming ships according to the tonnage of the vessels. The proposal was modeled on the British Passenger Act of 1803, which had been designed to restrict emigration of sheepherders from Scotland. The measure prohibited ships of any nationality entering an American port from carrying more than two persons for every five tons of registry, and required them to deliver to the Department of State “a list or manifest of all the passengers taken on board,” including each one’s name, occupation, and place of origin. It further specified water and food requirements for Europe-bound ships departing from the United States.

The 1819 law, motivated by a combination of restrictionist and humanitarian concerns, stood as the sole federal enactment pertaining to European immigration until the late 1840s. In the broader perspective of American development, it can be seen as a block in the building of the “American system”—the ensemble of measures designed to promote the development of an autonomous American economy—in keeping with the continuing nationalization of major elements of economic policy. Five years later, the landmark 1824 Supreme Court decision in Gibbons v. Ogden confirmed Congress’s power to regulate international commerce, and with it immigration. Why 2.5 tons per passenger? Britain, it seems, had recently reduced the minimum from 5 tons per passenger to 3; therefore, 2.5 tons “would afford every necessary accommodation.” This was in fact misinformation; Britain maintained a 5-ton requirement for U.S.-bound ships but had recently lowered it to 1.5 for the traffic to British North America. In any case, the regulation significantly reduced the passenger-carrying capacity of all U.S.-bound ships but simultaneously gave American ships an edge over their British competitors. Immigration restriction, yes, but business is business.

At the time of Alexis de Tocqueville’s field trip to America in the 1830s, half a century after independence, his hosts did not think of themselves as a “nation of immigrants.” Reflecting the prevailing self-image, the French statesman characterized them as a thoroughly formed Anglo-American people whose political culture was founded on a collective character molded in the course of many generations of shared existence. In effect, he saw white Americans making up a more unified nation than his native France, which, despite centuries of monarchical centralization, remained a country of highly diverse provinces and localities. Although he did observe some immigrants, the foreign-born then constituted at most 5 percent of the white population, a minimal level not equaled again until the 1940s. However, in his second edition, Tocqueville added a last-minute footnote, undoubtedly inspired by information he received from his Whig friends, that deplored the undesirable impending changes likely to follow from the growing arrival of poor Europeans.

This was the first of the series of so-called immigration crises that have marked American history. Although the Know-Nothing Party and its sympathizers attempted to raise the residence requirement for naturalization to the Federalists’ 14 years, or even 21 years—a symbolic term designed to subject newcomers to a thorough “re-maturing” process on American soil—they failed to achieve their goal, and had to satisfy themselves with the imposition of burdensome residence requirements for access to local elected and appointed offices. State and local measures of this genre were reenacted in the course of later crises as well, notably in the final decades of the nineteenth century and the first three decades of the twentieth, when they were supplemented by linguistic requirements, literacy tests, and demonstration of the candidate’s knowledge of American history and governmental institutions. When these requirements were first imposed in the early decades of the twentieth century, public facilities existed to help applicants prepare. With the decline of immigration from the 1920s onward, however, these public facilities largely fell into disuse and were not fully revived when massive immigration resumed in the final third of the twentieth century.

When Asian immigrants appeared on the scene around 1850, the courts excluded them from naturalization by the long-standing statutory requirement of whiteness, ruling that this was in keeping with the Constitution. However, in the wake of the Civil War amendments, jus soli prevailed. In United States v. Wong Kim Ark (1898), a solid Supreme Court majority ruled that birth on American soil sufficed to make citizens of all people, even those of Chinese descent. Under the circumstances, the only way to prevent what many deemed the “racial pollution” of the citizenry was to minimize the likelihood of such births by redoubling efforts to exclude Chinese females, as well as by creating obstacles to miscegenation wherever possible. Consequently, the American population of Chinese origin declined steadily throughout the first four decades of the twentieth century, and Chinese and other Asians remained excluded from acquiring citizenship by naturalization. With China an ally in the war against Japan, Chinese immigrants were made eligible for naturalization in 1943, but the last traces of racial qualifications for naturalization were eliminated only in 1952; ironically, the same law also reasserted the racist National Origins Quota system instituted in the 1920s. During World War II, the United States deliberately violated the rights of many citizens of Japanese descent by ordering them to move away from the West Coast on security grounds or interning them along with legally resident aliens.

Despite the doctrinal equality between native-born and naturalized citizens, in practice the naturalized were often subject to more demanding rules. For example, a citizen could lose his U.S. naturalization by returning to his native country and residing there for more than a year. Most of these discriminatory strictures were eliminated in the final decades of the twentieth century, restricting denaturalization to persons who engaged in willful misrepresentation in the filing process.

Not surprisingly, applications for naturalization rose steadily in the wake of the revival of immigration that followed the 1965 reform law. The average annual number of people who became naturalized American citizens increased from fewer than 120,000 during the 1950s and 1960s to 210,000 in the 1980s. Enactment of the Immigration Reform and Control Act of 1986, which provided an avenue to legalization for 2.7 million undocumented immigrants, stimulated an additional 1 million applications, boosting the annual average for the 1990s to 500,000. By 2005, the leading source country for newly naturalized citizens was Mexico (13 percent), followed by the Philippines (6.2 percent), India (6 percent), Vietnam (5.4 percent), and China (5.2 percent). Seventy-seven percent of immigrants were residents of ten states, led by California (28 percent), followed by New York (14 percent), Florida (7 percent), and Texas (6 percent).

The growth of anti-immigrant sentiment from the 1990s onward, the denial of public benefits to non-citizens, and the costs of a newly imposed mandatory renewal of permanent residence permits (“green cards”) prompted many eligible but hitherto unconcerned alien residents to apply for citizenship in order to be able to vote or to qualify for benefits. The swelling of applications quickly bogged down federal agencies and created lengthy delays.

Moreover, in the wake of 9/11, security concerns prompted U.S. authorities once again to tighten the borders and access to citizenship without resorting to statutory change. After the Immigration and Naturalization Service was relocated from the Department of Justice to the newly created Department of Homeland Security, its officials reassessed established procedures and concluded that FBI checks on applicants for citizenship were insufficiently thorough. Consequently, in 2002 the agency resubmitted 2.7 million names to be checked further. Rather than simply determining if the applicants were subjects of FBI investigations, the bureau was charged with ascertaining if their names showed up in any FBI files, even as witnesses or victims. Because many old documents were not electronic and were scattered throughout the agency’s 265 offices, the process could take months, if not years. Further confusion and delays arose from the difficulty of sorting out individuals with common non-European surnames. Some 90 percent of the names submitted for rechecking did not appear in the FBI records. The 10 percent whose names did appear faced further delays, because “deep checks” often require access to records of foreign governments. Many of those stuck in the backlog were from predominantly Muslim countries, as well as from Asia, Africa, and the former Communist countries of Eastern Europe.

According to the official rules as of 2006, to be eligible for naturalization an alien had to be at least 18 years of age; have been a legal permanent resident of the United States for at least five years (three if married to a U.S. citizen and only one if a member of the armed forces), with absences totaling no more than one year; have resided in one state for at least three months; demonstrate the ability to read, write, speak, and understand “ordinary” English (with some exemptions); demonstrate knowledge and understanding of the fundamentals of U.S. history and government (with special consideration given to applicants with impairments or those older than 65 with at least 20 years of residence); take an oath of allegiance that includes renouncing foreign allegiances (although dual citizenship with countries deemed friendly to the United States is permitted); and pay an application fee (plus an additional fee for fingerprints).

Grounds for refusal included certain criminal offenses and failure by the applicant to demonstrate that he or she is “of good moral character.” Individuals were permanently barred from naturalization if they had ever been convicted of murder or had been convicted of an aggravated felony on or after November 29, 1990. Moreover, a person could not be found to be of good moral character if he or she had “committed and been convicted of one or more crimes involving moral turpitude”; had been convicted of two or more offenses for which the total sentence imposed was five years or more; or had been convicted under any controlled substance law, except for a single offense of simple possession of 30 grams or less of marijuana. Other grounds for denial of naturalization included prostitution, involvement in the smuggling of illegal aliens, polygamy, failure to support dependents, and giving false testimony in order to receive a benefit under the Immigration and Nationality Act.

The number naturalizing grew 12 percent from 537,151 in 2004 to 604,280 in 2005, with Mexico the leading country of birth and the Philippines in second place. At the end of that year, more than half a million applications awaited decisions. Growth accelerated further in 2006 with an increase of 16 percent to 702,589. Mexico remained the leading source country, but India overtook the Philippines for second place. A decline in the median number of years in legal permanent residence of the newly naturalized from a recent high of ten years in 2000 to only seven in 2006 confirmed that legal permanent residents who met the residence requirement had become more eager to avail themselves of the rights and privileges of U.S. citizenship. Despite a further increase in naturalization fees and the introduction of a new, more difficult civic test in 2007, the number of naturalization petitions doubled that year to 1.4 million. The backlog was so great that receipt of applications filed in July 2007 was acknowledged only four months later.

Although applicants for naturalization are still required to renounce their allegiance to foreign states, in practice, the United States has become more tolerant of multiple nationality, thus falling in step with the general movement of liberal democracies toward a more flexible concept of citizenship less bound to the world of mutually exclusive territorial sovereignties.

See also immigration policy; voting.

FURTHER READING. Alexander Aleinikoff and Douglas Klusmeyer, eds., Citizenship Today: Global Perspectives and Practices, 2001; James Kettner, The Development of American Citizenship, 1608–1870, 1978; Peter H. Schuck, Citizens, Strangers, and In-Betweens: Essays on Immigration and Citizenship, 1988; Daniel J. Tichenor, Dividing Lines: The Politics of Immigration Control in America, 2002; Aristide R. Zolberg, A Nation by Design: Immigration Policy in the Fashioning of America, 2006.

ARISTIDE R. ZOLBERG


civil liberties

Civil liberties are defined as the rights enjoyed by individuals over and against the power of government. The idea of civil liberties originated in English history, with the Magna Carta in 1215, and developed over the following centuries. By the time of the American Revolution, the list of individual liberties included habeas corpus, freedom of speech, and religious liberty, among others. Specific guarantees were incorporated into the U.S. Constitution (1787), the Bill of Rights (1791), and the constitutions of the individual states.

Civil liberties have had an uneven history in America. The freedoms enshrined in the Bill of Rights were largely ignored for much of American history. In the late eighteenth century, political thinkers did not debate complex rights issues; discussions of free speech, for example, did not address obscenity or hate speech. Serious consideration of such issues did not begin until the modern civil liberties era in the twentieth century.

Civil Liberties in Early American History

The first great civil liberties crisis in American history involved the 1798 Alien and Sedition Acts. The Sedition Act prohibited virtually any criticism of the government, and the administration of President John Adams prosecuted and jailed a number of its critics under the law. The two laws provoked strong protests, most notably the Kentucky and Virginia Resolves in late 1798, which were secretly written by Thomas Jefferson and James Madison, respectively. Both resolves denounced the laws as threats to freedom of speech and challenged the power of the federal government to enact such laws. The principal focus, however, was on the respective powers of the federal government and the states, not on freedom of speech; the resolves contributed to the debate over states’ rights rather than to the theory of the First Amendment. The crisis over the Alien and Sedition Acts passed when Thomas Jefferson was elected president in 1800 and pardoned all Sedition Act victims.

In the years preceding the Civil War, a major free speech controversy erupted over efforts by proslavery forces to suppress advocacy of abolition, specifically by banning antislavery material from the U.S. mail and by restricting debates on slavery in the U.S. Congress. These assaults on free speech raised public concern to the point where the independent Free Soil Party in 1848 adopted the slogan “Free Soil, Free Labor, Free Speech, and Free Men.” The new Republican Party adopted the same slogan in 1856.

Civil War and Reconstruction Era Crises

The Civil War produced two major civil liberties crises. President Abraham Lincoln, in a controversial move, suspended the right of habeas corpus in certain areas controlled by the federal government, fearing that opponents of the war would undermine the war effort. The Supreme Court declared Lincoln’s action unconstitutional on the grounds that civil courts were still functioning in those areas. Military authorities in Ohio, meanwhile, arrested and convicted Clement Vallandigham, a prominent antiwar Democrat, for a speech opposing the war, charging him with interfering with the war effort. President Lincoln, however, deported Vallandigham to the Confederacy to avoid making him a martyr. Lincoln also directed military authorities to drop prosecution of an antiwar newspaper in Chicago, believing that such prosecution violated freedom of the press.

The Reconstruction era following the Civil War produced major changes in civil liberties law. The Thirteenth Amendment to the Constitution prohibited slavery, the Fourteenth Amendment forbade states from depriving persons of due process or equal protection of the law, and the Fifteenth Amendment guaranteed the right to vote. In practice, however, the Civil War amendments provided little actual protection to African Americans. The Supreme Court eventually interpreted the Fourteenth Amendment to invalidate social legislation to help working people, on the grounds that such laws violated individuals’ right to freedom of contract (Lochner v. New York, 1905).

World War I and the Modern Era of Civil Liberties

The modern era of civil liberties began during the World War I years, when Woodrow Wilson’s administration suppressed virtually all criticism of the war and also conducted massive illegal arrests of political dissidents. Such actions set in motion a national debate over the meaning of the Bill of Rights.

In the early twentieth century, the Supreme Court had issued a series of decisions on the First Amendment, all of which upheld the prosecution of antiwar critics. In the case of Abrams v. United States (1919), however, Justice Oliver Wendell Holmes, joined by Justice Louis Brandeis, wrote a dissenting opinion arguing that the American experiment with democracy rested on the free expression of ideas. Holmes’s dissent shaped the subsequent course of constitutional law on the First Amendment.

Another important development shortly after the war was the creation of the American Civil Liberties Union (ACLU) as the first permanent organization devoted to the defense of individual rights. The ACLU succeeded the National Civil Liberties Bureau, which had been created in 1917 to defend the rights of conscientious objectors and to fight violations of free speech during the war. Officially founded in 1920, the ACLU played a major role in advocating expanded protection for civil liberties in the decades that followed.

The legal and political climate in the United States was extremely hostile to civil liberties in the 1920s. The idea of free speech was associated with radicalism, and in the pro-business climate of the period, the freedom of speech and assembly rights of working people who sought to organize labor unions were systematically suppressed.

A 1925 controversy over a Tennessee law prohibiting the teaching of evolution in public schools had a major impact on public thinking about civil liberties. Biology teacher John T. Scopes was convicted of violating the law in a trial that received enormous national and international attention. Because Scopes’s conviction was overturned on a technicality, there was no Supreme Court case on the underlying constitutional issues. Nonetheless, the case dramatized civil liberties issues for the general public and foreshadowed many subsequent battles over the role of religion in American public life.

The first important breakthrough for civil liberties in the Supreme Court occurred in the 1925 Gitlow v. New York case. The Court upheld Benjamin Gitlow’s conviction for violating the New York State criminal anarchy law by distributing a “Left Wing Manifesto” calling for the establishment of socialism in America. In a major legal innovation, however, the Court held that freedom of speech was one of the liberties incorporated into the due process clause of the Fourteenth Amendment. By ruling that the Fourteenth Amendment incorporated parts of the Bill of Rights, the Court laid the foundation for the revolution in civil liberties and civil rights law in the years ahead.

Four Supreme Court cases in the 1930s marked the first significant protections for civil liberties. In Near v. Minnesota (1931), the Court held that freedom of the press was incorporated into the Fourteenth Amendment. In Stromberg v. California (1931), meanwhile, it held that the Fourteenth Amendment incorporated the free speech clause of the First Amendment. Two cases arising from the celebrated Scottsboro case, where nine young African American men were prosecuted for allegedly raping a white woman, also resulted in new protections for individual rights. In Powell v. Alabama (1932) the Court overturned Ozie Powell’s conviction because he had been denied the right to counsel, and in Patterson v. Alabama (1935), it reversed the conviction because African Americans were systematically excluded from Alabama juries.

The Era of the Roosevelt Court

International events also had a profound impact on American thinking about civil liberties in the late 1930s and early 1940s. The examples of Nazi Germany and the Soviet Union provoked a new appreciation of the role of the Constitution and the Bill of Rights in protecting unpopular minorities and powerless groups. The American Bar Association, for example, created a special Committee on the Bill of Rights in 1938, which filed amicus briefs in several important Supreme Court cases. President Franklin D. Roosevelt, in his 1941 State of the Union address, argued that the “Four Freedoms,” which included freedom of speech and freedom of worship, defined American democracy and promoted liberty in a world threatened by totalitarianism.

President Roosevelt appointed four justices to the Supreme Court who were strong advocates of civil liberties, and the so-called Roosevelt Court created a systematic body of constitutional law protecting individual rights. Some of the Court’s most important decisions involved the unpopular religious sect known as the Jehovah’s Witnesses. In Cantwell v. Connecticut (1940), the Court incorporated the freedom of religion clause of the First Amendment into the Fourteenth Amendment, thereby protecting the free exercise of religion against infringement by state officials. In the most famous controversy, the children of Jehovah’s Witnesses refused to salute the American flag in public schools as required by the laws in several states on the grounds that it violated their religious beliefs. The Supreme Court upheld their right, holding that the government cannot compel a person to express a belief contrary to his or her conscience (West Virginia v. Barnette, 1943).

World War II did not lead to the suppression of free speech that had occurred during World War I, but it did result in one of the greatest violations of civil liberties in American history. With Executive Order 9066, President Roosevelt ordered the internment of 120,000 Japanese Americans from the West Coast, 90,000 of whom were American citizens. They were held in “relocation centers” that were, essentially, concentration camps. Public opinion overwhelmingly supported the government’s action, as did the Supreme Court. In Hirabayashi v. United States (1943) the Court upheld the constitutionality of a curfew on Japanese Americans, and in Korematsu v. United States (1944), it sustained the forced evacuation of Japanese Americans, although Justice Frank Murphy denounced the government’s program as racist in his Korematsu dissent. In 1988 the federal government apologized for the Japanese evacuation and provided monetary damages to the surviving victims.

The Cold War Years

The anti-Communist hysteria of the cold war period resulted in sweeping assaults on civil liberties. Under President Harry Truman’s 1947 Loyalty Program, a person could be denied federal employment for “sympathetic association” with a group or activities deemed subversive. The House Committee on Un-American Activities publicly investigated individuals alleged to be Communists or Communist sympathizers. In the atmosphere of the times, people could lose their jobs or suffer other adverse consequences if the committee simply labeled them as subversive. States, meanwhile, required teachers and other public employees to take loyalty oaths. Senator Joseph McCarthy made reckless and unsupported claims that the government was filled with subversives, and “McCarthyism” became part of the American political lexicon. By the mid-1950s, Senator McCarthy was discredited, and the Supreme Court began to place constitutional limits on many anti-Communist measures.

Racial Equality and Other Advances in Civil Liberties

The post–World War II years also marked significant advances in civil liberties in several areas. The civil rights movement challenged racial segregation in all areas of American life. The high point of this effort was the landmark Supreme Court case Brown v. Board of Education of Topeka (1954), which declared racial segregation in public schools unconstitutional. The decision marked the advent of the Warren Court (1953–68), named after Chief Justice Earl Warren, which issued many decisions expanding the scope of civil liberties and defending individual rights.

In response to changing public demands for greater freedom of expression, the censorship of books, magazines, and motion pictures came under steady attack. In Burstyn v. Wilson (1952), for example, the Supreme Court ruled that motion pictures were a form of expression protected by the First Amendment. In a series of subsequent decisions, the Court by the late 1960s struck down censorship of virtually all sexually related material except for the most extreme or violent forms, although it never succeeded in formulating a precise definition of pornography or what kinds of expression were outside the protection of the First Amendment.

With respect to the establishment of religion, the Court in 1947 (in Everson v. Board of Education) held that the establishment of religion clause of the First Amendment created a “wall of separation” between church and state. In 1962 the Court held that official religious prayers in public schools violated the establishment clause.

The Supreme Court also imposed constitutional standards on the criminal justice system. It placed limits on searches and seizures (Mapp v. Ohio, 1961) and police interrogations, ruling in Miranda v. Arizona (1966) that the police are required to inform criminal suspects they have a right to an attorney. The Court also held that all criminal defendants facing felony charges were entitled to an attorney under the Sixth Amendment. In Furman v. Georgia (1972), the Court held that existing state death penalty laws were unconstitutional as applied but did not declare the death penalty unconstitutional under the cruel and unusual punishment clause of the Eighth Amendment.

A New Rights Consciousness

The civil rights movement spurred a new consciousness about rights that affected virtually every aspect of American society and added a new and ambiguous element to thinking about constitutional rights. Although courts decided cases in terms of individual rights, the emerging rights consciousness increasingly focused on group rights. Decisions on equal protection and even the First Amendment rights of an African American, for example, became instruments for the advancement of African Americans as a group. As a consequence, political movements emerged in the 1960s to support the rights of women, prisoners, children, the mentally and physically disabled, and lesbian and gay people. Each group undertook litigation asserting an individual right as a device to advance the rights of the group in question and effect social change. The Supreme Court was sympathetic to many of these claims, and created a vast new body of constitutional law. The long-term result was the emergence of a new “rights culture” in which Americans responded to social problems by thinking in terms of individual and/or group rights.

The unresolved ambiguity between individual and group rights emerged in the controversy over affirmative action and other remedies for discrimination against particular groups. Traditional civil rights and women’s rights advocates argued that group-based remedies were necessary to eliminate the legacy of discrimination. Their conservative opponents argued that such remedies amounted to preferential treatment and violated the rights of individuals who did not belong to the favored groups.

The Vietnam War (1965–72) and the subsequent Watergate scandal (1972–74) raised several important civil liberties issues. Some Americans argued that the Vietnam War was unconstitutional because Congress had never issued a declaration of war. After much debate, Congress enacted the 1973 War Powers Act, designed to reassert its constitutional authority over committing American military forces to combat. Most commentators, however, have argued that the law failed to achieve its objectives, and the war in Iraq in 2003 again raised difficult constitutional questions regarding the power of the president as commander in chief.

The Watergate scandal resulted in the first Supreme Court ruling on the concept of executive privilege. The Court ordered President Richard Nixon to turn over certain White House tape recordings (which quickly led to his resignation from office) but held that presidents could withhold material whose disclosure would jeopardize national security. The exact scope of this privilege remained a controversy under subsequent presidents. Watergate also brought to light the abuse of constitutional rights by the Federal Bureau of Investigation and the Central Intelligence Agency over many decades. Both agencies had engaged in illegal spying on Americans. To assert more effective legal control over the FBI, Attorney General Edward H. Levi in 1976 issued a set of guidelines for intelligence gathering by the Bureau. In 1978 Congress passed the Foreign Intelligence Surveillance Act (FISA) to control intelligence gathering related to suspected foreign spying or terrorist activities.

The most controversial aspect of the new rights culture involved abortion. In the 1973 Roe v. Wade decision, the Supreme Court held that the constitutional guarantee of a right to privacy included the right to an abortion. Roe v. Wade provoked a powerful political reaction that exposed a deep cultural division within American society over civil liberties issues related to abortion, prayer in school, pornography, and gay and lesbian rights. A powerful conservative movement led to the election of several presidents and the appointment of conservative Supreme Court justices who either took a more limited view of civil liberties or objected to particular remedies such as affirmative action.

The Supreme Court took a more conservative direction in the 1980s, backing away from the judicial activism on behalf of individual rights that characterized the Warren Court. The conservative orientation became particularly pronounced following the appointments of Chief Justice John Roberts in 2005 and Associate Justice Samuel Alito in 2006. Two important indicators of the Court’s new orientation were the 2007 decisions disallowing race-based remedies for school integration (Parents Involved in Community Schools v. Seattle School District No. 1; Meredith v. Jefferson County) and a 2008 decision striking down a Washington, D.C., gun control ordinance as a violation of the Second Amendment (District of Columbia v. Heller). Until the 2008 decision, the Court had ignored the question of whether the Second Amendment created an individual right to own firearms.

The War on Terrorism and Civil Liberties

The most significant development in civil liberties in the early twenty-first century involved the reaction to the September 11, 2001, terrorist attacks on the United States. The 2001 Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (better known as the USA PATRIOT Act) included several provisions that critics argued threatened civil liberties. The law permitted secret “sneak and peek” searches of homes or offices, in which notification of the person searched could be delayed, and it authorized the FBI to collect information through “national security letters,” which required no judicial warrant and barred recipients from revealing that a letter had even been issued. President George W. Bush also authorized secret, warrantless wiretapping of American citizens by the National Security Agency, in violation of the 1978 FISA law. The administration also denied the right of habeas corpus to suspected terrorists held at the U.S. military base at Guantanamo Bay, Cuba. When the Supreme Court held that the president did not have constitutional authority to do this, Congress passed a law denying detainees the right of habeas corpus.

The controversies over habeas corpus, warrantless wiretapping, and other issues related to the so-called War on Terrorism represented what many observers regarded as the most serious constitutional crisis over civil liberties in American history, particularly with regard to the issues of separation of powers and presidential war powers.

See also anticommunism; civil rights.

FURTHER READING. David Cole and James X. Dempsey, Terrorism and the Constitution: Sacrificing Civil Liberties in the Name of National Security, 2nd ed., 2002; Edward J. Larson, Summer for the Gods: The Scopes Trial and America’s Continuing Debate over Science and Religion, 1997; Paul L. Murphy, World War I and the Origin of Civil Liberties in the United States, 1979; Mark E. Neely, Jr., The Fate of Liberty: Abraham Lincoln and Civil Liberties, 1991; Geoffrey Stone, Perilous Times: Free Speech in Wartime from the Sedition Act of 1798 to the War on Terrorism, 2004; Samuel Walker, In Defense of American Liberty: A History of the ACLU, 2nd ed., 1999.

SAMUEL WALKER