Banking policy has been debated in the United States since Alexander Hamilton and Thomas Jefferson squared off in George Washington’s cabinet over the first Bank of the United States. Banks are depository financial institutions: people and firms deposit money in these intermediaries, which, in turn, lend it to other people and firms. Banking policy aims to ensure safety for depositors and borrowers and the financial soundness of banks. But there is more at stake than the banking industry. Banks allocate capital among the competing claims of individuals and firms, industries and regions, pursuing social goals and economic development. Further, because the banking process is closely related to the nature and availability of the money supply, banking policy has long been entangled with monetary policy.
The big issues at stake are pragmatic and political. What arrangements will work well to provide and allocate money and credit? What implications do alternative arrangements have for the distribution of power: When do banks have too much economic power or political influence? U.S. banking policy debates have been about the structure of banking organizations—banks and regulatory agencies—as well as about rules that constrain their practices. Some of the organizations launched or redesigned in those debates are gone, but most remain. In the twenty-first century, banking firms include commercial banks and savings banks, savings and loan associations (S&Ls), and credit unions. Three federal agencies regulate commercial banks: the Office of the Comptroller of the Currency in the Treasury Department attends to national banks; the Federal Reserve System has responsibility for state banks that are its members; and the Federal Deposit Insurance Corporation (FDIC) oversees nonmember state banks. Federal S&Ls are supervised by the Office of Thrift Supervision, also in Treasury, while credit unions are overseen by the independent National Credit Union Administration. Public agencies in the states supervise and regulate depository intermediaries chartered under state laws. The Deposit Insurance Fund in the FDIC insures deposits in banks and S&Ls, while credit unions have a separate deposit insurance fund. Two rediscount institutions—Federal Reserve Banks and Federal Home Loan Banks—lend to depository intermediaries of all types. How did we get here?
The first Bank of the United States was chartered by Congress in 1791. Alexander Hamilton, secretary of the treasury, proposed the bank to provide credit for economic development and a reliable circulating medium and to assist the government as fiscal agent. At the time, and until establishment of the Federal Reserve System more than a century later, what should comprise the circulating medium remained at issue. Provisions in the new Constitution had ended issuance of paper money by the states. The national government had taken up the practice during the Revolution, but Hamilton wanted to stop it, arguing that the temptation to run the printing presses excessively would be irresistible in an emergency. Bank notes, on the other hand, issued in the process of commercial lending, would not be inflationary. In laying out the bank’s design, Hamilton identified dimensions of bank structure and practice that policy makers have manipulated in pursuit of workable institutions throughout U.S. history: capitalization, ownership, governance, and permissible assets and liabilities.
Thomas Jefferson, then secretary of state, led the opposition. Jefferson championed an agrarian economy, and so rejected the importance of aggregating capital for the development of manufacturing and commerce. Neither did he see merit in increasing circulating medium with bank notes. As a “hard money” (gold and silver, also called specie) advocate, Jefferson viewed paper money as inflationary regardless of whether it was issued by a bank or a government. But in a letter to George Washington disputing the bank’s constitutionality, he rested the weight of his objection on its evils as a corporation. Jefferson opposed corporations as artificially contrived entities for the private accumulation of wealth. The economic inequality they engendered would threaten political equality. Jefferson indicted the first Bank of the United States as a corporation, and because the new Constitution did not explicitly grant Congress the power to charter corporations, charged that it was unconstitutional. Nonetheless, Washington signed the statute establishing the Bank.
The Bank of the United States operated from 1791 to 1811, augmenting capital and the money supply and serving as the government’s fiscal agent, as planned. But in response to the unanticipated increase in the number of state-chartered banks, from four to over one hundred, it developed the additional function of de facto state bank regulator. As its charter approached expiration, President James Madison supported renewal, but a majority of Jefferson’s Republican Party defeated it in Congress in 1811.
Over the next five years, the number of state banks doubled; they issued paper notes used as currency and drove inflation. As specie disappeared from circulation and efforts to sell Treasury notes to fund the War of 1812 largely failed, the government accepted state bank notes but found it difficult to use them across the states to pay soldiers and buy supplies. In a pragmatic consensus, Congress chartered a second Bank of the United States in 1816 to provide for a “uniform currency,” which meant adequate but not inflationary levels of currency and credit available throughout the United States. John Calhoun developed the positive argument for the bank’s constitutionality on which centralized monetary policy rests into the twenty-first century: the Constitution gives Congress authority to “coin money” and “regulate the value thereof.” Bank paper had become money, and though the founders had not foreseen this, they had intended to provide authority to regulate whatever served as money.
The second Bank of the United States performed poorly at first. It fed the boom that followed the War of 1812 and then contributed to the financial panic of 1818–19. But under Nicholas Biddle, who became bank president in 1823, it was widely viewed by contemporary policy makers as effective. Andrew Jackson nevertheless vetoed the bill renewing its charter in 1832. He argued that the corporate charter provided a monopoly on banking with the government’s deposits and that the bank’s economic power resulted in political influence. Efforts to override the veto failed and the second Bank was wound down as its charter expired in 1836.
In 1833 Jackson directed Treasury Secretary Roger Taney to transfer government deposits from the Bank of the United States to selected state banks. Some chroniclers have interpreted this as favoritism for “pet banks” or an example of Jackson’s support for states’ rights. But Jackson opposed incorporated banking in the states as well as at the national level. He believed that the mechanisms involved in using bank notes as money inherently robbed common people. Jackson’s solution was to return to hard money. Accordingly, his administration used government deposits in state banks as leverage to prohibit bank issuance of small notes and to require payment in specie on demand.
By the eve of Jackson’s bank veto, state legislatures had chartered some 400 banks. Like Congress, they used banks to aggregate capital, provide commercial credit and a circulating medium, serve as the state’s fiscal agent, and regulate other banks. Pragmatic legislators also used banks to finance public improvements and state operations, and to ensure credit for agriculture. They initially responded to Jackson’s bank veto by chartering more banks and enlarging existing banks. But following the Panic of 1837, a partisan rift opened. A majority of Democrats moved to Jacksonian opposition to all incorporated banking. Whigs insisted that banks were needed, at a minimum, to provide capital for economic development. The issue in the state banking debates changed from how to design banks to whether to charter them at all.
The institutional outcome of these debates was the “free bank.” This model emerged in New York as a compromise among Jacksonian opponents of all incorporated banking, free enterprisers who objected to requiring legislative authority to bank, and supporters of regulated banking. To address charges of monopoly, New York’s free banking law was a general law, permitting anyone to bank who met its requirements. To protect noteholders, a new bond-backed currency was devised. A bank deposited bonds of public jurisdictions with the state’s comptroller. If the bank refused to pay hard money on demand for its notes, the comptroller was to sell the bonds and redeem the notes.
With the Civil War, banking policy returned to the agenda in Washington. Treasury Secretary Salmon P. Chase sold bonds, mostly to banks, to finance the war, and insisted that they pay in specie. By late 1861, major banks had no more specie and the government was confronted—as in the War of 1812—with the need for a uniform currency acceptable across state lines. Chase requested and Congress passed the National Currency Act of 1863, repealed and replaced by the National Bank Act of 1864. The statute aimed in the short run to finance the war but ultimately to achieve a national currency without raising Jacksonian specters of private power run amok in a single national bank or a currency wildly over-issued by many state banks. Modeled on the states’ free banking laws, the National Bank Act provided for privately owned banks that would issue bond-backed currency. Minimum capitalization of these banks depended upon size of place, but all were very small compared to the first and second banks of the United States. The statute established the Office of the Comptroller of the Currency in the Treasury Department to regulate national banks and issue the currency.
Banks chartered under the National Bank Act became significant actors in the U.S. economy, but the approach did not result in the unified national banking system that was expected. To motivate state banks to convert to national charters, Congress placed a prohibitive tax on their notes. But as state banks learned to make loans in the form of deposit credit, legislatures returned to chartering banks to meet needs not well served by national banks, notably credit for agriculture, real estate, and local business. The distinctive U.S. dual banking system emerged. The National Bank Act did not result in one currency either. State bank notes disappeared, but since national banks were not operational quickly enough to meet the government’s need for Civil War cash, the Treasury issued notes, dubbed “greenbacks.” Greenbacks circulated with national bank notes, short-term U.S. bonds, gold and gold certificates, and silver and silver certificates.
Institutional arrangements framed by the National Bank Act served the expanding economy poorly in the decades following the Civil War. Farmers’ interests were ravaged as commodity prices declined in a long deflationary trend. In the populist critique that developed, the problem was the return to the gold standard, which arbitrarily limited the money supply, in tandem with bank control of credit allocation. The National Bank Act’s pyramiding reserve arrangements (country banks held reserves in city banks, which in turn held reserves in larger “reserve city” banks) were charged with channeling every community’s money to big banks in the East—the “money trust”—which loaned it to stock market speculators. In a reversal of the Jacksonian anti-inflation stance, farmers called for expanding the money supply through means subject to government—not bank—control, including greenbacks and remonetizing silver. In the realigning election of 1896, Democratic presidential candidate William Jennings Bryan championed free coinage of silver against supporters of the gold standard. The Democratic Party became home to farmers in the West and South, while erstwhile Democrats who favored the gold standard moved into the Republican fold to form a party system that stood until the New Deal.
Business and banking interests also indicted the National Bank Act’s currency and reserve provisions. In their critique, these arrangements provided no leverage for countering the debilitating business cycles and financial panics that plagued the economy. The bond-backed currency was inelastic. It should be replaced with currency backed by bank assets and issued in the process of extending short-term commercial credit; the supply was expected to expand and contract automatically in keeping with productive needs (the “real bills” doctrine). Reserve provisions, which required each bank to keep its own reserves, should be changed to make those reserves easier to use when needed.
More than 40 years after the National Bank Act, the Panic of 1907 and the severe contraction that followed finally provoked the Federal Reserve Act of 1913. This statute reflected compromises of ideology and interest as policy makers tried to design arrangements that would work. Old Guard Republicans and eastern bankers proposed a privately owned central reserve bank with monetary policy discretion. Conservative Democrats and small business agreed with private ownership but feared Wall Street control; they called for decentralized reserve banks and believed that self-regulating asset currency made monetary policy discretion unnecessary. Bryan Democrats insisted on government control of any reserve system and government-guaranteed currency. Determined to achieve banking reform, President Woodrow Wilson brokered compromises. The Federal Reserve Act established 12 regional Federal Reserve Banks owned by commercial bankers, a Federal Reserve Board (the Fed) comprising presidential appointees to regulate reserve banks, and new Federal Reserve notes backed by the assets of the bank that issued them and guaranteed by the government. National banks were required to join the Federal Reserve System; state banks were permitted to join.
The Federal Reserve System did not prevent the Great Depression or widespread bank runs. Indeed, easy money policy in 1927 and 1928, deployed under leadership of the Federal Reserve Bank of New York, was a factor in the speculative boom in the stock market that ended with the crash of October 1929. Once retrenchment was under way, an indecisive Federal Reserve Board permitted the money supply to contract along with the economy, intensifying the downward spiral into Depression. The Banking Act of 1935, introduced at the insistence of President Franklin Roosevelt’s Fed chairman Marriner Eccles, responded to these mistakes by strengthening the Federal Reserve Board and subordinating the Federal Reserve Banks. The statute empowered the Board to wield the known instruments of discretionary monetary policy. Bank reserve requirements, previously fixed in the Federal Reserve Act, could be adjusted by the Fed within a statutory range. The discount rate, which had been the purview of the separate Federal Reserve Banks, became subject to Board approval. Open market operations, unknown when the Federal Reserve Act was passed in 1913, had been “discovered” in practice by the Federal Reserve Banks. Resolving a tug of war between the reserve banks (banker control) and the Federal Reserve Board (public control), the legislation vested authority to use this tool in a newly constituted Federal Open Market Committee, effectively controlled by the Board.
In the Banking Act of 1933 (the Glass-Steagall Act), Congress took aim at banking practices which had contributed, along with the Fed’s easy money policy, to the stock market crash. As the stock market boom gained steam, large banks had financed the underwriting of securities (stocks and bonds), thus reaping fees; made loans to insiders for speculation in securities; and peddled securities to the public. To address this conflict of interest, the statute erected the “Glass-Steagall wall” that prohibited commercial bank involvement with investment banking (securities dealers). To restore public confidence in bank safety, it required deposit insurance for commercial banks. Funded with premiums paid by banks and backed by a government guarantee, the Federal Deposit Insurance Corporation (FDIC) is a public corporation that pays depositors if a bank cannot.
Depression-era legislation also put home ownership finance on a firmer footing. Savings and loan associations (S&Ls) were community-based depository intermediaries, chartered under state laws, devised to provide fixed-rate, fully amortizing loans for home ownership. The bipartisan Federal Home Loan Bank Act of 1932 established 12 regional Federal Home Loan Banks to make loans to S&Ls so that S&Ls could meet depositors’ withdrawal pressure and borrowers’ credit demand. Legislation in 1933 created a federal charter, and a 1934 statute provided for S&L deposit insurance.
Credit unions moved onto the national policy agenda during the Depression as well. Underpinned by a populist philosophy, credit unions are cooperatives, mutually owned by depositors. The Federal Credit Union Act of 1934 authorized a federal charter, which made it easier to organize credit unions in states with nonexistent or unwieldy enabling laws. In 1970, federal legislation gave them deposit insurance and an independent regulator.
These Depression-era banking arrangements stood for decades but were increasingly challenged by macroeconomic dynamics, financial market innovations, and shifting ideology. Inflationary spikes in the 1960s and stagflation in the 1970s led to “disintermediation”: bank and S&L depositors transferred money to new money market instruments in pursuit of higher interest, while big borrowers went elsewhere for loans. Advocacy for deregulation gained ground among elites. The rationale was that easing asset and liability constraints would facilitate soundness in individual depository institutions, and that unleashing competition in the industry would improve macroeconomic efficiency. In 1971, President Richard Nixon’s Commission on Financial Structure and Regulation (the Hunt Commission) laid out a comprehensive deregulatory reform agenda that was largely implemented over the next two decades.
The bipartisan Depository Institutions Deregulation and Monetary Control Act of 1980 phased out interest rate ceilings on deposits and substantially eliminated states’ usury ceilings, permitted S&Ls and credit unions to offer checking accounts, and expanded banks’ asset powers. At the same time, to increase monetary policy leverage, the statute extended the reach of the Fed. All depository institutions became subject to the Fed’s reserve requirements and gained borrowing privileges. In 1982 the Garn-St. Germain Act moved further in the direction of expanding depository institutions’ asset and liability options.
Whether due to too much deregulation or too little, in the 1980s much of the S&L industry collapsed, and its insurance fund went bankrupt. Commercial banks got into trouble too: several large banks failed and the bank insurance fund dipped into the red. The Financial Institutions Reform, Recovery and Enforcement Act of 1989 resolved the S&L crisis. It replaced S&Ls’ original deposit insurer with a new fund located in the FDIC, and S&Ls’ independent regulator with a new agency located in the Treasury Department. These changes moved toward the Hunt Commission objective of consolidating S&L and commercial bank regulation. In 1991 the Federal Deposit Insurance Corporation Improvement Act recapitalized the bank insurance fund and established new risk-based deposit insurance premiums: banks could compete aggressively but would need to pay a commensurate price for deposit insurance. In 2006, the S&L and commercial bank deposit insurance funds were merged into a single Deposit Insurance Fund.
In addition to easing asset and liability constraints and standardizing regulation across classes of depository institutions, the deregulatory agenda called for easing barriers to entry into banking. State and federal bank regulators progressively undermined restrictions on interstate banking and branching rooted in the Jeffersonian-Jacksonian-populist fear of concentrated wealth. The Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994 eliminated what was left of these restrictions. In 1999, Congress ticked off the last major item on the deregulatory agenda: ease barriers to entry across the lines between banking and other financial services. The Financial Services Modernization Act (Gramm-Leach-Bliley Act) demolished the Glass-Steagall wall between commercial banking and the securities industry and removed the prohibition, dating from 1956, on banks dealing in insurance.
Under the “modernized” regulatory framework, bank size increased dramatically, and organizations of nationwide and regional scope combined banking with securities underwriting and brokerage and insurance. These organizations would be tested in terms of the same fundamental issues that Hamilton and Jefferson debated: Do they work well to achieve the goals of banking policy? Are they too powerful in the economy or in democratic politics?
In the financial crisis of 2008, deregulated U.S. banking failed its first significant test. At the root of the crisis were the large volume of subprime residential mortgages originated after 2000 and the resulting waves of defaults and foreclosures. These loans were extended with inadequate attention to borrowers’ ability to repay and had features, like variable interest rates, which had been mostly eliminated by Depression-era regulatory moves but reappeared with deregulation. The loans were assembled into pools on the basis of which mortgage-backed securities were issued. Three large banks heavily involved in subprime mortgage lending failed in 2008. Most subprime mortgages, however, were originated not by banks but by mortgage brokers and mortgage bankers, their way paved by banking deregulation. Similarly, securitization was mostly provided not by commercial banks and S&Ls, but by Wall Street investment banks. Even so, commercial banks were major buyers of the mortgage-backed securities built on low-quality mortgages—and this is what links inadequately regulated mortgage lending, even outside of banks, to the failure of banking in 2008. As commercial banks realized that they could not assess one another’s safety and soundness due to exposure to mortgage-backed securities, interbank lending, and therefore lending to business customers in the real economy, ground to a halt. The open question in banking policy is whether policy makers had learned, this time, that unregulated financial capitalism will—sooner or later—undermine the real economy.
See also business and politics; economy and politics.
FURTHER READING. Thomas F. Cargill and Gillian G. Garcia, Financial Reform in the 1980s, 1985; Robert A. Degen, The American Monetary System: A Concise Survey of Its Evolution since 1896, 1987; Susan Hoffmann, Politics and Banking: Ideas, Public Policy, and the Creation of Financial Institutions, 2001; Donald F. Kettl, Leadership at the Fed, 1986; Robert Kuttner, The Squandering of America: How the Failure of Our Politics Undermines Our Prosperity, 2007; Robert E. Litan, with Jonathan Rauch, American Finance for the 21st Century, 1998; James Livingston, Origins of the Federal Reserve System: Money, Class, and Corporate Capitalism, 1890–1913, 1986; John M. McFaul, The Politics of Jacksonian Finance, 1972; J. Carroll Moody and Gilbert C. Fite, The Credit Union Movement: Origins and Development, 1850–1980, 2nd ed., 1984; Irwin Unger, The Greenback Era: A Social and Political History of American Finance, 1865–1879, 1964; Lawrence J. White, The S&L Debacle: Public Policy Lessons for Bank and Thrift Regulation, 1991; Jean Alexander Wilburn, Biddle’s Bank, 1967.
SUSAN HOFFMANN
On September 25, 1789, the first Congress sent a packet of 12 constitutional amendments to the states. Two years later, ten of these amendments became part of the Constitution after Virginia’s ratification provided their necessary approval by three-fourths of the states. Over time, these amendments came to be known collectively as the Bill of Rights, and their adoption is generally portrayed as a critical concluding act in the story of constitutional reform that began with the Federal Convention of 1787. Since the early twentieth century, the interpretation of the rights enumerated in the first eight amendments, along with the equal protection and due process clauses of the Fourteenth Amendment (ratified in 1868), has emerged as the richest and most controversial realm of modern constitutional jurisprudence.
At the time of their adoption, however, the amendments were as much an anticlimax to the great constitutional debate of the late 1780s as its culmination. The major changes that the anti-Federalist critics of the Constitution sought in 1787–88 involved the structure of the new federal government and the division of power between the Union and the states. The limited scope of the amendments that James Madison introduced in the House of Representatives on June 8, 1789, hardly satisfied the anti-Federalists’ deeper concerns. Nor did most of the Federalist supporters of the Constitution who dominated the new Congress agree that amendments were necessary. It took a fair amount of hectoring from Madison even to get his congressional colleagues to consider the subject of amendments. Once Congress sent the amendments out for ratification, it took the states much longer to approve the ten articles that were adopted than it had taken them to ratify the original Constitution.
Once ratified, the amendments were largely forgotten. The Bill of Rights imposed restrictions only on the national government, not the states. Well into the nineteenth century, most of the governance that affected the daily lives of Americans occurred at the level of the states and local communities, where the federal guarantees did not apply. As the scope of federal activity gradually expanded after the Civil War, the national Bill of Rights slowly began to develop its own constitutional jurisprudence. But the critical evolution came later, after the Supreme Court, developing what is known as the Incorporation Doctrine, construed Section 1 of the Fourteenth Amendment to apply the protections enumerated in the federal Bill of Rights against state and local governments. In the 1960s and 1970s, a slew of decisions extended this doctrine almost comprehensively.
The belief that bills of rights were an essential safeguard of liberty occupied a prominent and venerable place in Anglo-American constitutional thinking during the seventeenth and eighteenth centuries. The classic example was Magna Carta, the famous agreement between the protesting barons of England and the domineering King John. Its negotiation in 1215 was thought to mark the moment when an English nation began to recover the ancient liberties it had lost in 1066, when William the Conqueror sailed from Normandy to impose the yoke of feudal rule on a free people. To this way of thinking, a bill of rights was a compact negotiated between a ruler and his subjects. It was not so much the source of the rights they claimed as proof and confirmation that they already possessed the liberties and privileges in question.
There were other helpful examples of such documents of more recent vintage. In 1628, at an early point in the great constitutional controversies between the Stuart monarchy and its opponents, Parliament had presented King Charles I with a Petition of Right meant to affirm fundamental rights dating to Magna Carta and beyond. In 1689 the new monarchs William and Mary accepted a Declaration of Rights framed by the so-called Convention Parliament as an implicit condition of their accession to the throne after the flight of Mary’s father, James II. Once legally reenacted by Parliament later that year, it became formally known as the Bill of Rights. Closer to home, many individual colonies had their own declarations of rights, often reiterating the customary rights and liberties that they and their English countrymen claimed but also including other statements of special importance to particular provinces.
There is no simple way to categorize the statements these documents could contain. In contemporary usage, a declaration of rights could combine broad principles of governance and general affirmations of natural rights to life, liberty, and property with the specific protections of common law procedures of trial and adjudication. Equally important, constitutional thinking before the revolutionary era did not regard statements of rights as legally binding commands that formally limited the power of government. Instead they identified principles and procedures that rulers ought to respect and follow. But the leading principle of Anglo-American constitutionalism after 1689 was the idea of legislative supremacy, which meant that the Crown, the executive branch of government, could no longer be allowed to assert the power to make law unilaterally, as a mere expression of royal will. The best way to protect the rights and liberties of a people was to empower and enable their duly elected representatives to act as lawmakers. A second security lay in the power of juries to prevent royal judges from deciding cases on their own authority. The right to vote for representatives and to serve on juries offered the best political safeguards a free people could enjoy.
In the mid-eighteenth century, these views were commonly held in both Britain and its American colonies. The great imperial controversy that began with the parliamentary Stamp Act of 1765 and ended with Americans declaring independence a decade later exposed the limited utility of bills of rights as instruments for checking power. The controversy pivoted on the British claim that the legislative supremacy of Parliament trumped the colonists’ belief that they could be taxed and governed only by the acts of their elected representatives. Appealing to their colonial charters and statements of rights was only one of many ways in which the colonists tried to prove they were exempt from the jurisdiction of Parliament. There were occasional suggestions that the dispute might be resolved by negotiating an American bill of rights to clarify the respective powers of empire and colonies. But the colonists could not overcome the dominant principle of British authority: that Parliament was the supreme source of law within the empire. If the colonists were loyal subjects of that empire, they must ultimately acknowledge its sovereignty over them.
The Americans would not acknowledge such sovereignty, and war came in April 1775, followed by the decision for independence 15 months later. During this final crisis, legal government effectively collapsed in most of the colonies, replaced by a network of extra-legal committees, conventions, and congresses. By early 1776, many Americans recognized that independence was inevitable, and individual colonies began petitioning the Continental Congress for permission to create new legal governments to replace both the revolutionary committees and the old colonial institutions. That would require writing constitutions to establish institutions grounded on republican rather than monarchical principles.
In the process, eight states adopted declarations of rights to accompany the new constitutions they were drafting. In three states (Pennsylvania and North Carolina in 1776, Massachusetts in 1780), these declarations became integral parts of the new constitutions. In others, they remained independent documents of uncertain authority. Some states, like New York, while not adopting distinct declarations, did include articles specifying particular rights in the constitutional text.
Whatever form these statements took, they were not initially regarded as firm commands that the new governments were fully obliged to obey. Their original purpose was more political than legal. In a literal-minded way, Americans in 1776 saw themselves emerging from the condition known as a “dissolution of government,” a situation well described by John Locke a century earlier. In that condition, a people exercising their natural right to form a compact of government were entitled and indeed expected to state the principles on which they were acting. By stating their rights, a people could remind themselves, their descendants, and their rulers of the basic purposes of the government they were forming. Hopefully, the rulers would not overstep the just boundaries of their power. But if they did, the existence of such declarations would enable a people to judge their ruler’s behavior and oppose or resist the encroachments of the power of the state upon the liberty of society.
In adopting these constitutions and declarations, the revolutionaries did not immediately abandon the assumptions of legislative supremacy they had inherited from the Anglo-American constitutional tradition. Historically, the major threat to rights was perceived to lie in the unchecked power of the Crown and the great purpose of identifying and protecting rights was to secure the people as a whole, conceived as a collective entity outside of government, from its arbitrary will. The idea that a legislative assembly composed of the people’s own representatives could also threaten their rights was not easy to conceive. These early statements of American constitutionalism sometimes included articles affirming the principle of separation of powers, as laid down by the Baron de Montesquieu in his influential treatise The Spirit of the Laws. A republican people had a right not to be subjected to a government in which the three forms of power (legislative, executive, and judicial) were concentrated in the same hands. But that did not make all three departments of power equal in authority. The legislature was the dominant branch, and its dominant role included protecting the rights of the people.
That inherited understanding was severely tested in the first decade after independence was declared. The war forced the legislatures to unprecedented levels of activity. The measures they adopted to keep the military struggle going intruded on people’s lives and liberties in novel and costly ways, through economic dislocations that affected the property rights that Anglo-American thinking had long deemed almost sacred. Inevitably, these laws had a differential impact on various sectors of society, driving criticism of the elected legislatures that faced unpleasant choices in distributing the burden of waging a prolonged and difficult war. As Americans grew more critical of the performance of these governments, it also became possible to think of a bill of rights as a potential restriction on the authority of the people’s own representatives.
A second, closely related development at the state level of governance also had a profound impact on American ideas. Calling a written charter of government a constitution did not by itself make such documents into supreme fundamental law. The constitutions drafted in 1776 were not only written in haste but also promulgated by the provincial conventions that drafted them, and not submitted to the people for ratification. In this sense, they did not embody a form of law higher than ordinary legislation, but were merely statutes, admittedly of exceptional importance, but still potentially subject to revision by later sessions of the legislature. They were not, in other words, constitutions in the exalted sense of the term: supreme law that would set benchmarks against which ordinary acts of government could be measured.
The movement to find a way to distinguish constitutions as supreme law began in Massachusetts and led to an important procedural breakthrough. Two conditions had to be satisfied, several Massachusetts towns argued, for a constitution to become fully constitutional. First, it had to be drafted by a special body elected for that purpose alone. Second, it then had to be ratified by the sovereign voice of the people, assembled in their town meetings. This doctrine took hold with the adoption of the Massachusetts Constitution of 1780, and marked a milestone in American constitutional theory and practice. Thomas Jefferson, among others, elaborated on this idea in his Notes on the State of Virginia, first published in France in 1784.
These developments raised provocative questions about the value and authority of bills of rights. How could a declaration or bill of rights operate as an effective restraint on the people’s own representatives? If constitutions were regarded as fundamental law, unalterable by ordinary acts of government, could the authority of bills of rights also be enhanced by following the precedent of the three states that integrated their statements of rights into the text of the constitution proper? If that happened, declarations of rights could operate less as statements of principle and more as legal commands, leaving open the possibility of their enforcement by the independent judiciaries the new constitutions had also created.
When Jefferson sent copies of his Notes on Virginia back home, he also asked James Madison whether the work would be appropriate for an American audience. One of Jefferson’s concerns was that his adverse comments on the Virginia constitution would be taken amiss. After reading the work closely, Madison agreed with Jefferson’s charge that the constitution had no greater authority than a statute. Madison also took the leading role in persuading the Virginia legislature to enact the Bill for Religious Freedom that Jefferson drafted in the late 1770s. Among other things, that bill affirmed that freedom of conscience—the right to believe whatever one wished about religious matters—was a fundamental natural right that no state could legitimately abridge. But the concluding paragraph of the bill also restated the constitutional dilemma that Jefferson had addressed in Notes on Virginia. A right recognized only by statute could also be repealed by statute, because one legislature had no power to bind its successors.
Jefferson and Madison had first met in 1776, while serving on the committee on religion in the lower house of the new Virginia legislature. Both were products of the eighteenth-century Enlightenment that regarded the previous centuries of religious persecution and warfare as a blight on European and Christian civilization. Their shared commitment to the principles of freedom of conscience and separation of church and state formed one basis of their deep personal friendship. But to Madison, even more than Jefferson, the idea of protecting an individual right to believe whatever one wished in matters of religion contributed to a broader rethinking of the problem of protecting rights in a republican society.
The basis for this rethinking came from Madison’s experience in the Virginia legislature (1784–86). Notwithstanding his success in securing passage of the Bill for Religious Freedom, his three terms of legislative service convinced him that representative assemblies could not be relied upon to act as guardians of the people’s rights. Too many recent acts seemed designed to promote one set of interests over another, without giving due attention to the rights of the losing minority. This was especially the case when matters of property were at stake. Madison was responding to the enormous upsurge in economic legislation that the war had made necessary, and which still plagued Americans in the mid-1780s as they wrestled with the public debt the war had created and other economic problems that accompanied the return of peace.
For Madison, as for most Anglo-American thinkers of the age, the protection of property was a fundamental purpose of society, as John Locke had made clear a century earlier in his Second Treatise of Government. But Madison combined his original commitment to freedom of conscience with his new appreciation of the problem of economic legislation to recast the problem of protecting rights in republican terms. The traditional problem under monarchical governments had been to protect the people against the arbitrary power of the Crown. But in a republic, where executive power seemed weak and derivative, the greater danger would come from a far more powerful legislature. Moreover, the real force driving the adoption of unjust laws would come not from the ambitions of the legislators but from the people themselves—or rather, from majorities among the people that would use legislative power to act unjustly toward disfavored minorities. Coming as he did from the upper stratum of Virginia’s ruling class of landed slave owners, Madison was thinking of the interests of his own class. But his insight remained a profound one. In a republican government, unlike a monarchy, one could not naively assume that the people would unite to protect their collective rights against the concentrated power of the state. Historically, bills of rights had been seen as instruments negotiated between the people’s representatives and their monarchical rulers. What use would they be when representatives and rulers were one, and when the people were the very source of the problem?
One other critical conclusion followed from this analysis. If the people themselves could threaten private rights, at which level of government would they pose the greater danger: national or state? Here Madison first formulated his famous argument about the problem of the “factious majority”—that is, a majority of citizens claiming democratic authority to rule but acting in ways inimical to either the “public good” or “private rights.” The smaller the society, Madison reasoned, the easier it would be for such mischievous majorities to form. It followed that such majorities could coalesce more easily within the limited confines of the states than would be the case in the “extended republic” of an expansive and expanding federal union. National government should prove less “factious” than state government, and rights more secure at the national level than within the smaller compass of the states or localities. Moreover, Madison also understood that the states would remain the real locus of daily governance. If, then, one hoped to make rights more secure, a way had to be found to enable the national government to check or correct the unjust misuse of power at the state level. As he prepared for the Federal Convention that would meet at Philadelphia in May 1787, Madison reached the radical conclusion that the best way to protect rights would be to give the national legislature a negative on state laws, akin to the detested veto the British crown had enjoyed over colonial legislation.
In somewhat diluted form, Madison’s proposal for a negative on state laws was part of the Virginia Plan that formed the basic agenda for the deliberations at Philadelphia. By mid-July, that proposal was dead, and with it, Madison’s hope that the adoption of a federal constitution would provide a means for dealing with the problem of protecting rights within the individual states. Ten days after the convention rejected the negative, it adjourned to allow a Committee of Detail to convert the resolutions adopted thus far into a working draft of a constitution. In its own deliberations, that committee briefly considered prefacing the constitution with some statement of principles akin to the state declarations. But as one of its members, Edmund Randolph, noted, the situation of 1787 was not the same as that of 1776. “We are not working on the natural rights of men not yet gathered into societies,” Randolph observed, “but upon those rights, modified by society, and interwoven with what we call the rights of states.”
The convention did include a few clauses protecting specific rights in the completed draft of the Constitution: the right to trial by jury in criminal cases; a prohibition on the suspension of habeas corpus except “in Cases of Rebellion or Invasion”; the guarantee that “Citizens of each State shall be entitled to all Privileges and Immunities of Citizens in the several States.” A prohibition on religious tests for office-holding would open the right to participate in government to all, but the fundamental right to vote was left a matter of state, not federal, law. Similarly, a prohibition on state laws impairing the obligation of contracts was consistent with Madison’s desire to prevent the states from enacting legislation inimical to vested rights of property (since such laws would advantage debtors over creditors).
Important as these individual provisions were, they did not amount to a comprehensive statement of fundamental rights. On September 12, five days before the convention adjourned, Elbridge Gerry and George Mason (the author of the Virginia Declaration of Rights of 1776) moved to correct that omission by proposing the adoption of a general bill of rights. By that point, most delegates knew that the two men would refuse to sign the completed Constitution. After cursory debate, their motion was rejected. The delegates apparently did not believe that the absence of a bill of rights would impair the prospects for ratification.
The convention’s Federalist supporters (as they soon became known) quickly realized this was a major political miscalculation. Anti-Federalists almost immediately made the absence of a declaration of rights a rallying point against the Constitution. This was hardly their sole objection, nor even the most important. But the convention’s failure to protect rights seemed to confirm the charge that the Constitution was in reality designed to consolidate all effective power in one supreme national government, leaving the states to wither away and the people’s liberties at the mercy of distant institutions they could not control.
An early speech by James Wilson, a framer and leading Pennsylvania Federalist, inadvertently reinforced the opponents’ concerns. Speaking outside the statehouse where the Constitution was written, Wilson argued that it would actually have been dangerous to provide explicit protection for such cherished rights as freedom of press or conscience. Properly understood, the Constitution vested only specific, explicitly designated powers in the new government. Providing for the protection of rights whose exercise the Constitution did not in fact threaten, Wilson argued, would imply that such a power had indeed been granted.
This was a lawyer’s argument, and politically it proved counterproductive. By Wilson’s logic, there was no need for the Constitution to protect the Great Writ of habeas corpus either; yet it did. By being too clever by half, Wilson only reinforced anti-Federalist suspicions that the Constitution was a lawyer’s document that skillful politicians could exploit to run roughshod over the people’s liberty.
With Wilson’s October 6 speech serving as a lightning rod, the omission of a bill of rights became a staple theme of anti-Federalist rhetoric. Drawing on the provisions found in the state declarations, they argued that the Constitution should be amended to provide explicit protection for an array of rights, ranging from freedom of conscience and press to the “liberty to fowl and hunt in seasonable times, on the lands they hold” and “to fish in all navigable waters.” Most Federalists remained unpersuaded. The convention might have miscalculated, but that did not mean that a constitution lacking such articles was truly defective.
In arguing for the adoption of rights-protecting amendments, anti-Federalists straddled a line between the traditional ideas of 1776 and the new constitutional norms that had emerged since. On the one hand, they did not fully imagine that a bill of rights, if adopted, would create a set of commands or rules that citizens could enforce at law, for example, by asking a court to provide a specific remedy when a right was infringed. Bills of rights were still regarded as political documents, a device for enabling the people to act in their political capacity to check abuses by government. This was a very traditional understanding. But many anti-Federalists also embraced the new way of thinking about written constitutions that had developed since 1776. By arguing that rights would remain insecure unless they were explicitly included in the constitutional text, they abandoned the older view that rights derived their authority from multiple sources, such as nature or custom. A right that was not incorporated in the text of a constitution might cease to be either a constitutional or a fundamental right.
The decisive fact making the adoption of a bill of rights more likely, however, was not the force of anti-Federalist reasoning. It was, rather, that narrow divisions in several key states—most notably Virginia and New York, the tenth and eleventh to ratify—inclined Federalists to agree to recommend the consideration of an array of amendments to the first Congress to meet under the Constitution. This concession was the price of ratification. To portray it as a firm bargain, as many histories have done, probably goes too far. It was more an expectation than a deal, and since Federalists successfully insisted that the states should ratify the Constitution without conditions or contingencies, they got the better of the deal.
Madison left the federal convention believing that its rejection of his proposed negative on state laws would leave the protection of individual and minority rights in the same vulnerable position as before. Though his disappointment eased over time, he shared the general Federalist misgivings about the utility of bills of rights. “Experience proves the inefficacy of a bill of rights on those occasions when its controul is most needed,” Madison wrote Jefferson in October 1788. “Repeated violations of these parchment barriers have been committed by overbearing majorities in every State.” Those majorities that concerned him were more popular than legislative, for in Madison’s analysis, the problem of how best to protect rights required that one first identify where “the real power in a Government lies.” In a republic, that power resided with “the majority of the Community,” and he simply doubted that such popular majorities would allow a mere declaration of rights to deter them from pursuing their violations of “private rights.”
This analysis of rights was, again, political rather than legal. But political considerations of a different kind were driving Madison to accept the idea of adding rights-protecting amendments to the Constitution. He had taken that position in the Virginia ratification convention in June, and he soon repeated it publicly during a tough race for election to the House of Representatives. Yet even then, Madison thought that the chief value of a declaration of rights would lie in its effect on public opinion. As the rights declared gradually acquired “the character of fundamental maxims of free Government” and were “incorporated with the national sentiment,” they would work to “counteract the impulses of interest and passion” that would still drive most citizens. It took a suggestion from Jefferson, writing from France, to prod Madison to concede that such articles might also enable the independent judiciary the Constitution would create to act as a “legal check” against abuses of power.
Consistent with his public statements, Madison assumed the burden of convincing the new Congress to take up the subject of amendments. This proved an uphill struggle. Federalists had gained easy control of both houses in the first federal elections, and few of them believed they were obligated to take up the question of amendments. For their part, anti-Federalists knew they would be unable to secure the structural changes to the Constitution they desired most, and without those, a mere statement of rights did not seem so important. Thus it fell to Madison to push what he privately called “the nauseous project of amendments.”
In preparing for this task, Madison reviewed all the amendments that the various state ratification conventions had proposed. From this list of over 200 proposals, he culled the 19 clauses he introduced in the House of Representatives on June 8, 1789. Two of his proposals related to congressional salaries and the rule for apportioning House seats among the states. The rest were concerned with identifying constitutional rights. They would not appear as separate, supplemental articles but rather be inserted at those points in the Constitution where they seemed most salient.
In drafting these clauses, Madison avoided the principle-propounding language of 1776. He did include one set of articles that would have added a second preamble to the Constitution, this one affirming the fundamental right of a sovereign people to form and reform governments designed to secure “the enjoyment of life and liberty, with the right of acquiring and using property; and generally of pursuing and obtaining happiness and safety.” The House eliminated this familiar language. The Senate subsequently rejected another article that Madison called “the most valuable amendment on the whole list”: a proposal to prohibit the states from violating “the equal right of conscience, freedom of the press, or trial by jury in criminal cases.” This proposal was consistent with Madison’s belief that the greatest dangers to rights would continue to arise within the states.
The remaining proposals formed the framework for the amendments that Congress ultimately sent to the states nearly four months later. One major change in format came at the repeated urging of Roger Sherman of Connecticut, who insisted that Congress must not tamper directly with the original Constitution that he, like Madison, had helped to draft. Rather than interweave the amendments in the existing text, as Madison had proposed, they should be treated as supplemental articles. Sherman, a dogged politician, had some difficulty getting his suggestion accepted, but eventually it prevailed. Turning the amendments into additional articles rather than interwoven ones made it easier over time to think of them as a bill of rights, each of whose separate articles addressed some distinct or integral realm of civil liberty or governance.
Records of the House debates on the amendments suggest that few members regarded their approval as an urgent priority. They did not discuss how the amendments, if ratified, would be enforced, nor did they speculate about the ways in which the constitutional entrenchment of rights might lead to an expanded judicial role in their protection. Mostly, they tinkered with Madison’s language before sending the amendments on to the Senate in August. The Senate did some editorial work of its own, and final revisions were made in a conference committee.
One especially noteworthy change occurred here. Madison had originally proposed a religion article, stating, somewhat awkwardly, “The civil rights of none shall be abridged on account of religious belief or worship, nor shall any national religion be established; nor shall the full and equal rights of conscience be in any manner, or on any pretext infringed.” The House changed the second clause to read, more simply, “Congress shall make no law establishing religion.” The Senate then narrowed that language to a mere prohibition on “laws establishing articles of faith or a mode of worship.” The conference committee on which Madison sat took a much simpler tack still, rejecting these efforts at precision with a broad ban holding, “Congress shall make no law establishing a Religion, or prohibiting the free exercise thereof.”
That broader and simpler wording has left ample room for continuing interpretation and controversy, but its original significance should not be overlooked. Religion was one realm of behavior that governments had always believed should remain subject to state support and public regulation. By approving the dual principles of nonestablishment and freedom of conscience, the adopters of the religion clause were endorsing the far-reaching principle that matters of religious organization and belief should henceforth fall entirely within the sphere of private activity, where individuals, acting freely for themselves or in free association with others, could be counted upon to decide what was best. Here was the most telling example of what the broader American commitment to individual liberty and private rights could mean in practice.
For well over a century after its ratification, the Bill of Rights was not a significant element in American constitutional jurisprudence. The “Congress shall make no law” formula of the First Amendment did not prevent the enforcement of the Sedition Act of 1798, which criminalized seditious speech directed against the administration of John Adams. In 1833, the Supreme Court held, in Barron v. Baltimore, that the numerous protections of the first eight amendments applied only against acts of the national government, not those of the states. One noteworthy doctrinal development did take place in 1878. In Reynolds v. U.S., a case involving the Mormon practice of plural marriage, the Court held that the free exercise clause of the First Amendment covered matters of religious belief but not all practices that followed from religious doctrine. In reaching this decision, the Court relied on the writings of Jefferson and Madison, the two Revolutionary-era leaders who had written most eloquently on the subject.
That decision came at a time, however, when the Court was blunting the potential reach of the Fourteenth Amendment, framed as the Reconstruction of the South was just getting under way. Section 1 of the amendment declared, “No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States.” It was entirely plausible to read “privileges or immunities” to include the clauses of the federal Bill of Rights. Some of the amendment’s leading authors and supporters believed that the Barron case of 1833 had been wrongly decided, and if their views were accepted, the amendment could have been read as a basis for enforcing the federal guarantees against the states generally. But in the early 1870s, with support for Reconstruction visibly waning, the Court began adopting a more restrictive approach, and the promise of the Fourteenth Amendment remained unfulfilled.
Only after World War I did the Court gradually begin to develop the so-called Incorporation Doctrine, under which it held that many, though not all, clauses of the original Bill of Rights could act as restraints on the actions of state governments. Freedom of speech was arguably the first right to be nationalized in this way, beginning with the 1925 ruling in Gitlow v. New York. The leading proponent of a wholesale incorporation of the Bill of Rights was Justice Hugo Black, the former Alabama senator and New Deal supporter who became the mid-century Court’s leading libertarian. Although his brethren on the Court initially shied away from adopting his broad views, by the 1960s, the drive to apply most of the protections of the Bill of Rights against the states became one of the defining features of the jurisprudence of the Warren Court. Decisions on a whole array of subjects, from school prayer to the “revolution in criminal justice” brought about by expansive interpretation of the numerous procedural rights enshrined in the Fourth, Fifth, and Sixth Amendments, helped make the Supreme Court a lightning rod for political criticism.
The Court’s most controversial modern decisions were those promoting desegregation (beginning with Brown v. Board of Education of Topeka in 1954) and a woman’s right to abortion (Roe v. Wade in 1973). But the nationalization of the Bill of Rights also played a critical role in the denunciations of “judicial activism” that have resonated so deeply in American politics since the 1960s, and that show no sign of abating. Whether debating school prayer or the right to bear arms or the unusual cruelty of the death penalty, Americans treat the clauses of the Bill of Rights as subjects for political discussion as well as judicial definition and enforcement. This is a notion of the political value of bills of rights different from the one that Federalists and anti-Federalists disputed in the late 1780s. But it helps to explain why “rights talk” remains so prominent an element of our political discourse.
See also Articles of Confederation; civil liberties; Constitution, federal.
FURTHER READING. Akhil Reed Amar, The Bill of Rights: Creation and Reconstruction, 1998; Patrick T. Conley and John Kaminski, eds., The Bill of Rights and the States: The Colonial and Revolutionary Origins of American Liberties, 1992; Leonard Levy, Original Intent and the Framers’ Constitution, 1988; Jack N. Rakove, Declaring Rights: A Brief History with Documents, 1997; Bernard Schwartz, The Great Rights of Mankind: A History of the American Bill of Rights, 1992.
JACK N. RAKOVE
See regulation.
For generations the United States has nourished the world’s most potent capitalist economy. The United States is also the longest-lived democracy. Their coexistence would seem to suggest that capitalism and democracy, at least in America, go together like love and marriage. But their union has been far from harmonious. At critical moments, relations have deteriorated to the breaking point. Yet for even longer stretches of time, popular sentiment has tacitly endorsed President Calvin Coolidge’s 1925 adage that “the business of America is business.”
Faith in the inherent virtue of market society is less strictly mercenary than Coolidge’s aphorism might suggest. From the origins of the republic to the present day, the free market has promised freedom, in particular, the liberty of the individual to pursue happiness, to dispose of his or her own powers and talents, and to amass private property unencumbered by government or other forms of social restraint. The conviction that the market is the pathway to liberty has deep roots in American culture.
But some Americans have always dissented from this faith. Henry Demarest Lloyd, a widely read critic of the country’s new plutocracy during the late nineteenth century, observed that “liberty produces wealth” and that “wealth destroys liberty.” Lloyd was worried about the power of great industrial combines and financial institutions to exterminate their competitors, snuffing out the freedom the market pledged to offer to all. He was also bothered by the inordinate influence of these economic behemoths over the country’s political institutions.
Capitalism and democracy, by their very natures, establish two rival sources of power and authority: one private, the other public. Large-scale corporations control and dispose of vast resources upon which the whole of society depends. The liberty associated with market society provides the rationale for this otherwise extraordinary grant of private empowerment over decisions that carry with them the greatest public import. Democracy operates according to a contradictory impulse, namely, that matters affecting the public welfare ought to be decided upon by popularly delegated and publicly responsible authorities.
It might be assumed that, in any showdown, democracy would trump capitalism, if only by virtue of sheer numbers. Quite the opposite has often been the case. Market society presumes the privileged position of private property as the vessel of liberty. Even a thorough-going liberal democracy like the United States has been reluctant to challenge that axiom. Precisely because business is so vital to the well-being of society, government has tended to bend over backward to encourage and promote the same business institutions that Lloyd cautioned were driven to destroy liberty.
The fundamental question about the relationship between capitalism and democracy has been asked since the beginning of the nation. One of the most famous controversies about U.S. history was ignited by the historian Charles Beard in his 1913 book, An Economic Interpretation of the Constitution of the United States. Beard argued that the architects of the new nation who gathered in Philadelphia in 1787 were chiefly motivated by a desire to protect and further the interests of the country’s dominant economic classes: merchants, bondholders, plantation owners, and the like. The Constitution they authored established a central government empowered to regulate interstate and foreign commerce, to secure the creditworthiness of the infant republic, to levy taxes, to implicitly legitimate chattel slavery, and to ward off overzealous assaults on the rights and prerogatives of private property—the sort of democratic upheavals, like Shays’s Rebellion, that had made life under the Articles of Confederation too precarious for the well-off.
Beard’s interpretation echoed a primal antagonism about the future of America that began with the country’s birth and raged with great intensity all through the 1790s, turning founding fathers into fratricidal enemies. Alexander Hamilton, the first secretary of the treasury, envisioned a formidable commercial future for the new republic. He imagined a nation fully engaged in international trade, one with thriving manufacturing establishments and cosmopolitan urban centers where high culture and high finance would mingle. All this would happen thanks to the catalytic role of the federal government and under the guidance of the country’s rich and well-born.
Three proposals in particular embodied the secretary’s bold plan for the country’s future: a “Report on the Public Credit,” which called on the new federal government to assume at face value the nearly worthless revolutionary and postwar debt obligations of the Continental Congress and the states; the creation of a Bank of the United States; and a “Report on Manufactures” that pledged the national government to help jump-start industrialization. Each in its own way was designed to mobilize the capital resources and active participation of the country’s wealthier classes (as well as to attract foreign investment) in turning an underdeveloped country into a rival of the great European powers. Hamilton’s view prevailed during the administrations of Washington and Adams.
But some old comrades from the war for independence, men with whom Hamilton had jointly conceived the Constitution, detested the secretary’s policies. Thomas Jefferson and James Madison were convinced his proposals would incubate a new moneyed aristocracy that would subvert the democratic accomplishments of the Revolution. They feared precisely what Hamilton desired, namely, a commercial civilization like the one characteristic of the Old World. For all its cultural sophistication and economic vigor, such a society, they argued, was also a breeder of urban squalor and of vast inequalities of wealth and income. It fed a mania about moneymaking and self-seeking that incited moral and political corruption and an indifference to the public welfare.
Jeffersonians, however, did not comprise some eighteenth-century version of back-to-the-land romantics. They too recognized the advantages, even the necessity of trade and commerce. Hardly hostile to the free market, they instead wanted to widen its constituency beyond those circles favored by Hamilton. The most notable accomplishment of Jefferson’s presidency was the Louisiana Purchase, which enormously enlarged the potential territory open to freeholder agriculture.
Nor did Jefferson or his successors imagine these pioneering agrarians living in splendid, self-sufficient isolation. Every subsequent administration through the presidency of James Monroe sought to buttress the nation’s agricultural foundation by opening up the markets of the world to the produce of American farms. This meant breaking down the restrictions on trade with the rest of Europe and the Caribbean imposed by the British Empire, a policy that led first to Jefferson’s embargo on trade with European belligerents during the Napoleonic Wars and eventually to the War of 1812. These drastic measures indicated that the Jeffersonians considered international trade a vital component of a flourishing agrarian republic.
Over time, the infeasibility of the old Jeffersonian persuasion became apparent. Even its most dedicated proponents reluctantly grew to accept that a modern market economy would inevitably bring in its wake dense commercial networks and a complex division of labor, industries, and cities. What nonetheless kept the temperature of political animosity at fever heat for the rest of the nineteenth century was how these warring parties envisioned the role of the government in promoting economic development and keeping the channels of economic opportunity open to all. Even as agrarian republicanism faded from view, its profound suspicion of finance and industrial capitalism remained very much alive, a living indictment of big business as the defining institution of American civilization.
Andrew Jackson, “Old Hickory,” seemed the natural inheritor of the Jeffersonian tradition. Jackson waged a dramatic and protracted struggle against an institution that seemed to embody every Jeffersonian nightmare about the rise of a counterrevolutionary moneyed aristocracy. The Second Bank of the United States was better known during the 1830s as “the Monster Bank.” It was run by Nicholas Biddle, a blue-blooded Philadelphian. He presided over a quasi-public institution that exercised decisive influence over the country’s monetary resources but without a scintilla of public accountability. The president set out to kill the Monster. Because the bank aroused enormous popular resentment, he eventually succeeded.
Yet it would be a mistake to conclude from the so-called Bank War that antebellum Americans, especially in the North, were hostile to the commercial development of the country in the same way their Jeffersonian ancestors were. On the contrary, they were men on the make. What they most resented about the Monster Bank was the way it limited commercial opportunity rather than opening it up to every man. While they remained wary of ceding the government too much power, they also believed it could and should help to nourish their entrepreneurial ambitions.
Beginning in the antebellum years, government at every level was dedicated to the promotion, not the regulation, of business. The country’s roads, turnpikes, canals, wharves, dockyards, bridges, railroads, and telegraph system—its whole transportation and communications infrastructure—were, practically without exception, quasi-public creations. Governments granted them franchises, incorporated them, lent them money, invested in them, provided them with tax exemptions and subsidies and land grants, and even, at times, shared in their management.
Such leaders of the Whig Party as Henry Clay and Daniel Webster sought to make this practice of state-assisted business development the policy of the national government. For this reason, the party commanded the allegiance of the rising business classes, as did the newly created Republican Party, into which the Whigs dissolved. Clay’s “American system” of internal improvements and a protective tariff to nurture infant industry also enjoyed significant support among Jacksonian Democrats, but only in the North. Most Southern planters cared strictly about the cotton trade and were hostile to a protective tariff, which could only increase the costs of manufactured necessities and invite retaliation from the nation’s trading partners abroad. Because the slave South held the whip hand in the Democratic Party, the American system remained stillborn until after the Civil War. But the general proposition that government ought to be a helpmate to business was already widely, if not universally, accepted.
No myth about American history has demonstrated greater durability than the belief that the late nineteenth century was the golden age of laissez-faire. It was then, so the story goes, that a hands-off policy by the government allowed the impersonal operations of the free market to work their magic and turn the United States into an industrial goliath. But actually all branches of the federal government were deeply implicated in that process. Not only did they actively promote the development of a national capitalist economy; they did so in a way that favored the interests of industrial and financial corporations. Thus, the rise of big business was due less to impersonal economic laws and more to man-made ones.
Railroads epitomized how the system worked. Without the land grants, loans, tax exemptions, and subsidies provided by the federal government, it is hard to imagine the extraordinary transcontinental network of rail lines that provided the skeletal framework of the national marketplace. Railroads were to the mid-nineteenth century what the steel industry was a generation later or the auto industry was during the mid-twentieth century: the engine of the national economy driven by the country’s dominant business institutions. America’s emergence as the world’s foremost economy began in this hothouse relationship between business and government.
However, accompanying government largesse were well-documented instances of cronyism and corruption. During the Grant administration, a scandal involving the Crédit Mobilier company, responsible for helping construct the transcontinental railroad, implicated congressmen, cabinet members, and even the vice president in schemes to defraud the government. Similar outrages occurred locally, as with the Erie Railroad in New York, where city councilmen, state legislators, and judges were bought and sold by rival railroad speculators. Such transgressions, however lurid and even illegal, were exceptional. Still, they signaled a more general disposition on the part of the country’s political class to align itself with the country’s most powerful business interests.
The Republican Party pursued this course most single-mindedly and commanded the loyalty of the emerging class of manufacturers, who particularly applauded the party’s staunch support for the protective tariff—a barrier against industrial imports that grew steadily higher as the century drew to a close. The promotion of business, however, was a bipartisan policy. New York merchants and financiers engaged in the export trade tended to be Democrats because they favored free trade rather than a protective tariff. But the leadership of the party, and its two-time president Grover Cleveland, were of one mind with their Republican opponents when it came to defending the gold standard as essential for economic stability.
The late nineteenth century witnessed the emergence of trusts and other forms of industrial combinations all across the economy, from steel and sugar production to the drilling and distribution of oil. These firms exercised extraordinary power over the marketplace and, inevitably, posed a dire threat to the American dream of entrepreneurial independence, inspiring a fierce resistance.
Beginning during the Gilded Age and lasting well into the twentieth century, the antitrust movement was a major influence on American politics. It enjoyed the support of small and medium-sized businessmen and of the growing class of urban middle-class consumers and white-collar workers. It also inspired the political mobilization of the nation’s farmers, especially in the South and the Great Plains, who particularly resented the life-threatening power of eastern financiers, the owners of the great railroads that crisscrossed the country, and the commercial middlemen (grain elevator operators, wholesalers, and the like) who dictated terms of credit as well as the costs of shipping, storing, and distributing farmers’ produce.
Together, these groups turned to the government for help. During the 1870s and 1880s, they were moderately successful, especially in state politics. Laws were passed regulating and restraining the operations of interstate corporations, particularly the railroads. But all three branches of the federal government were hostile to reform. In a series of landmark cases, the Supreme Court ruled that these regulatory attempts by state governments violated the rights of corporations to due process under the Fourteenth Amendment (which was originally passed to protect the civil rights of emancipated slaves) and also trespassed on the exclusive powers delegated to the federal government to regulate interstate commerce. At the same time, the Senate came to be known as “the Millionaires Club,” because its members seemed far more solicitous of the needs of major corporations than they did of their ordinary constituents.
Finally, the executive branch made its most formidable resource—the coercive power of the armed forces—available for the protection of corporate property. In addition to the antitrust and farmer movements (most famously the Populist or People’s Party of the 1890s), the other great challenge to the profits and preeminence of big business came from the labor movement. The main battle took place on the factory floor, where workers were subjected to a ruthless regime of wage cutting, exhausting hours, and a draconian work discipline. Strikes, some local, some nationwide, in dozens of industries swept across the country in several waves, beginning in the mid-1870s and continuing to the end of the century. Again and again, businesses were able to count on state and federal judges to issue injunctions that outlawed these strikes, and on governors and even the president of the United States to send in state militias and federal troops to quell them when they continued. President Cleveland’s decision in 1894 to use the army to end the strike of the American Railway Union against the Pullman Company is the most famous instance of the latter; it convinced many people that big business had captured the government.
Ironically, most leading businessmen of the Gilded Age initially had no serious interest in politics. They were content to turn to the government for particular favors when needed but otherwise kept aloof from party politics. All that began to change, however, when antitrust, Populist, and labor resistance grew so sizable it could not be ignored.
In the end, the groundswell of popular sentiment in favor of some form of regulation proved irresistible. Frustrated at the state level, reformers redoubled their efforts to get some redress from the federal government. The Interstate Commerce Act was passed in 1887 to try to do away with rebates and other forms of railroad market abuse. And, in 1890, Congress approved the Sherman Anti-Trust Act. But in both cases business interests used their political influence to dilute the effectiveness of these laws. By necessity, the nation’s commercial elite became more practiced in the arts and crafts of democratic politics.
Indeed, as the century drew to a close, the business community mobilized to defeat its enemies in the electoral arena as well. In 1896 the Democratic Party was captured by elements sympathetic to populism and to antitrust reform and nominated the charismatic William Jennings Bryan as its presidential candidate. But the victory of Republican William McKinley, lavishly supported by leading industrial and financial interests, seemed to put an end to the immediate threat to the economic and political supremacy of big business and to bury along with it the vision of an alternative political economy and social order.
Yet in the early twentieth century, the axis of American politics shifted direction. Once devoted to the promotion of business, government in the Progressive Era turned to its regulation. Odder still, important segments of business and finance welcomed this turn of events as often as they had resisted it. No longer indifferent to or contemptuous of the democratic process, the business community increasingly grew to believe its fate would be decided in that arena.
The presidencies of Theodore Roosevelt and Woodrow Wilson especially marked the era as progressive. Roosevelt was feared by the Republican old guard—figures like the industrialist and politician Mark Hanna and financier J. P. Morgan. And the president was not shy in making known his disdain for such “malefactors of great wealth.” His administration initiated the first antitrust lawsuits against such major corporations as Northern Securities (a gigantic railroad holding company) and Standard Oil.
Antitrust prosecutions comprised only a small portion of his attempts at regulating business. Roosevelt’s administration achieved such legislative milestones as the Pure Food and Drug Act, the Meat Inspection Act, and the Hepburn Act, which strengthened the hand of the Interstate Commerce Commission to rein in the railroads; the Mann-Elkins Act of 1910, passed under his successor, William Howard Taft, extended those powers further. Meanwhile, municipal and state governments in dozens of big cities and states—egged on by muckraking journalists like Lincoln Steffens and Ray Stannard Baker—took on public utility and urban mass transit companies that had been looting government budgets for years.
Woodrow Wilson, elected in 1912, promised to extend the scope of business regulation. He and a chief adviser, future Supreme Court justice Louis Brandeis, were convinced that the rise of big business and finance had worked to choke off opportunity for would-be entrepreneurs, had aborted technological innovations, and, by blocking the pathways of economic mobility, had undermined the institutions of political democracy as well. Wilson took office amid a widely publicized congressional investigation of the so-called money trust, a purported clique of Wall Street investment banks that controlled not only a host of other financial institutions but also many industrial corporations, including United States Steel and General Electric. Soon after the hearings ended, the president signed into law the Federal Reserve Act. It established a quasi-public institution to oversee the nation’s monetary system and presumably break the stranglehold of the money trust.
The Wilson administration and a Democratic Congress accomplished other notable reforms of the business system as well. The Clayton Act tried to strengthen the antitrust provisions of the Sherman Act and exempted labor unions from prosecution as conspiracies in restraint of trade, which had been the main use of the earlier law. The new Federal Trade Commission (FTC) broadened the powers of the national government to regulate interstate commerce. The Adamson Act established the eight-hour day for railroad employees. However, the reform momentum stalled when the United States entered World War I in 1917.
Many businessmen and some consumers viewed corporate consolidation of the economy at the turn of the century as a godsend rather than a curse. It would, they assumed, end the competitive anarchy that had produced numerous booms and busts and two severe and protracted depressions during the Gilded Age. Businesses had tried, on their own, to impose some order over the marketplace by forming pools and other informal agreements to control prices, limit production, and share the market among the chief competitors. But such private arrangements to rein in the free market failed as firms could not resist taking advantage of any shift in the market to get a jump on their competitors.
Thus, some industrialists began to look to the government as the only institution with enough authority and legitimacy to impose commercial discipline. The Interstate Commerce Commission and the Mann-Elkins Act drew support not only from smaller businessmen who hoped to get rid of rate discrimination but also from the railroads themselves—as long as they could exercise some control over what the commissioners did and forestall popular pressure for more radical reform. Similarly, major meatpacking corporations welcomed the Meat Inspection Act. By establishing uniform standards of hygiene that their smaller competitors would have to meet, they hoped the act would open up foreign markets that had grown increasingly leery of importing American beef.
All this signaled that businessmen were becoming more politically conscious and organized. Trade associations representing whole industries began lobbying in Washington. The National Civic Federation—organized by Mark Hanna, Andrew Carnegie, and other leading industrialists—tried to impart a more conciliatory mood into the brittle, often violent relations between labor and capital characteristic of the Gilded Age. Other national business organizations, including the United States Chamber of Commerce (USCC) and the National Association of Manufacturers (NAM), formed to represent the political desires of smaller business, especially to thwart legislation sympathetic to labor.
Thus, on the eve of World War I, many businessmen and politicians had come to accept the need for some degree of regulation to supplement government promotion of private enterprise. However, differences within the business community and between it and the government over the nature and extent of that regulation left many questions unresolved.
Then the war fatefully shifted the terrain on which this tense relationship unfolded. On the one hand, the country’s need to mobilize its resources restored the social reputation of big business, which produced most of the munitions and other goods for the American Expeditionary Force and its European allies. On the other hand, such a massive mobilization of private assets demanded a level of coordination that only the government could impose. The War Industries Board, created by President Wilson and run by financier Bernard Baruch, initiated a degree of state intervention into the private sector that would have astonished most of the radical critics of the previous century. Although this apparatus was quickly dismantled once the war ended, the experience with economic planning and supervision left a legacy many would turn to during the next national emergency.
More immediately, however, it was the political rehabilitation of big business that left its mark on the Jazz Age following the war. The Republican administrations of Warren Harding, Calvin Coolidge, and Herbert Hoover deferred to the corporate world. Businessmen were hailed as the nation’s wise men who seemed to have solved the age-old problem of the business cycle, ushering in a new era of permanent prosperity.
Antitrust prosecutions virtually ceased. The Supreme Court severely restricted the power of the FTC to define methods of unfair competition. The regulatory vigilance of the government was relaxed in what turned out to be a tragic turn of events.
No crisis in American history, save for the Civil War, presented as grave a national trauma as did the crash of 1929 and the Great Depression that followed. The economic wisdom, social status, moral authority, and political weight of the business community collapsed in a hurry. Until that moment business had generally prevailed in its tug of war with the government and the forces of democracy. But, for a long generation beginning in the 1930s and lasting well into the 1970s, the balance of power was more equally divided.
President Franklin Delano Roosevelt’s New Deal never amounted to a single, coherently conceived and executed set of economic policies. It was in a constant state of motion, tacking first in one direction, then in another, pursuing courses of action that were often contradictory. One fundamental, however, never changed: the government must act because economic recovery could not be left to the impersonal operations of the free market. A return to the laissez-faire approach of the old regime was, once and for all, off the table.
Indeed, the Roosevelt administration found itself under relentless pressure from various social movements to challenge the power of big business. A new and militant labor movement spread through the industrial heartland of the country, taking on the most formidable corporations and mobilizing in the political arena to support the New Deal. Small farmers, tenants, and sharecroppers in the Midwest and South defied banks and landlords to stop foreclosures and evictions. Charismatic figures aroused populist emotions against financiers and the wealthy. Huey Long, the demagogic governor and later senator from Louisiana, led a Share the Wealth movement that called for taxing away any income over a million dollars a year. Father Charles Coughlin, the “radio priest” broadcasting from a suburban Detroit Catholic church, electrified millions of listeners with his denunciations of parasitic bankers.
Such democratic upheavals reverberated within the Democratic Party and helped heighten the tension between the Roosevelt administration and major sectors of the business and financial world. They inspired the President’s denunciation in 1936 of “economic royalists.” Without these upheavals, it is hard to imagine the basic reforms accomplished by the New Deal.
New Deal innovations covered four broad areas affecting the material well-being of Americans: (1) economic security and relief; (2) financial and industrial regulation; (3) reform of industrial labor relations; and (4) economic planning.
The most enduring legislative legacy of the New Deal is the Social Security system. Passed in 1935 by an overwhelmingly Democratic Congress, the Social Security Act, for the first time, established government responsibility for protecting against the most frightening insecurities generated by the free market: a poverty-stricken old age and unemployment. In fits and starts, the New Deal also initiated a series of federal relief measures to address the plight of the jobless and dispossessed in both urban and rural America.
This ethos of social responsibility for the economic security of all citizens continued well beyond the New Deal years. The welfare state continued to expand after World War II, most memorably through the GI Bill, which committed the federal government to subsidizing the education and housing of returning veterans. In the mid-1960s, President Lyndon Johnson’s Great Society programs included various antipoverty and urban redevelopment measures—especially Medicare and Medicaid, which addressed the health needs of the elderly and some of the poor. However limited and flawed, such programs represented a major shift in the country’s political orientation. They indicated a recognition that the free enterprise system could not be relied upon, by itself, to assure the economic security of the American people.
Rightly or wrongly, most people blamed the Depression on Wall Street, in particular on the reckless speculation, insider trading, and fraudulent practices that allegedly led to the crash of 1929 and to the economic implosion that followed. The resulting demand for the government to closely monitor the behavior of business naturally focused on the financial system. Therefore, the Roosevelt administration and Congress targeted the banking and securities industries. The Glass-Steagall Act of 1933 made it illegal for the same establishment to function both as an investment and as a commercial bank on the grounds that there was an inherent conflict of interest between those two functions—one that had undermined the credibility of the whole banking structure of the country.
The suspect and secretive practices of the stock market were addressed by two securities acts, the second of which established the Securities and Exchange Commission. These reforms demanded of Wall Street a far greater transparency about its operations, outlawed certain kinds of insider dealings, and subjected the nation’s chief financial markets to ongoing public supervision.
Business regulation under the New Deal was hardly confined to the financial sector. New agencies like the Federal Communications Commission were created, and transportation and public utility companies found themselves under more rigorous scrutiny. The Public Utility Holding Company Act attempted, without much success, to dismantle corporate pyramids in the electrical power industry that had fleeced consumers and left the underlying companies saddled with insupportable debt. This assault, while abortive, reflected a revving up of antitrust sentiment.
While the politics of antitrust would lose energy in the years ahead, the regulatory momentum of the New Deal order would only grow stronger. By the late 1960s and early 1970s, its emphasis would shift from regulating the abuses of particular industries to policing business in general. New environmental and consumer protection movements inspired deep popular distrust of the business community. Largely composed of middle-class, college-educated professionals, these public interest organizations demanded that corporations be held not only responsible but accountable for their actions. The Occupational Safety and Health Act of 1970, the creation of the Environmental Protection Agency that same year, and the establishment of the Consumer Product Safety Commission in 1972—followed by legislation to clean up the air, the water, and the toxins of industrial waste—all came during the Republican administration of President Richard Nixon, suggesting just how irresistible this regulatory impulse had become.
A third phase of New Deal economic intervention proved just as robust. It is hardly an exaggeration to describe the pre–New Deal system of industrial labor relations as a form of industrial autocracy, especially in heavy industry. Most workers endured long hours at low wages, conditions subject only to the unilateral whim of their employers, who could also hire and fire them at will. The rise of a militant labor movement and the growth of pro-labor sentiment within the Democratic Party during the early years of the Depression resulted in fundamental reform. The National Labor Relations Act (NLRA, or Wagner Act) of 1935 established a form of industrial democracy by inscribing workers’ right to organize unions free of employer interference. It proscribed a range of employer behaviors as unfair labor practices and made companies legally obliged to engage in collective bargaining once a union had been established by government-supervised elections. The Wagner Act was supplemented in 1938 by the Fair Labor Standards Act, which set a national standard of minimum wages and maximum hours and outlawed child labor.
Many of those who supported the NLRA did so not just out of a sense of social justice. They believed that reforming the system of labor relations was part of a larger design for economic recovery in which the government would have to play a central role. Unions and minimum wages, they hoped, would restore the mass purchasing power of ordinary Americans and thereby spur production and employment. The idea that government had an overarching role to play in getting the economy moving again was hardly restricted to these areas, however. It inspired the fourth salient of New Deal innovation: economic planning.
The National Industrial Recovery Act of 1933—the New Deal’s first attempt at general economic recovery—created a form of federally sanctioned corporatism or cartelization, administered by the National Recovery Administration (NRA) and operating not unlike the old War Industries Board of World War I. The system effectively suspended antitrust laws so that big businesses in every industrial sector could legally collaborate in establishing codes of fair competition.
A similar form of government intrusion into the operations of the free market tried to address the depressed state of the agricultural economy. The Agricultural Adjustment Administration established government-sanctioned production controls and acreage allotments that were designed to solve the crisis of overproduction that had led to mass farm foreclosures and evictions all across the country.
At first, many businesses welcomed these forms of economic planning, because they amounted to a form of self-regulation that left big firms in effective control of the government machinery charged with doing the planning. But for just that reason, they were vociferously attacked by smaller businessmen and small farmers. In 1935, the Supreme Court ruled the NRA was unconstitutional. Its demise, however, gave life to different, more democratically minded strategies for government-directed economic recovery.
Members of the New Deal administration were convinced that only government intervention could restart the country’s economic engine. The Tennessee Valley Authority (TVA) was created with an ambitious mission to transform the economic and social life of the deeply impoverished trans-Appalachian southeast region. Under the auspices of the TVA, the federal government itself entered the electrical power business, directly competing with private utilities and forcing them to make their own operations more efficient and cost effective. By making electricity available to millions of rural Americans, TVA planners hoped to bring them within the orbit of the modern economy, improving their standard of living and turning them into customers for manufacturers of electrical appliances.
Balancing the federal government’s budget had been a central orthodoxy of the old order. In the face of this, the New Deal committed the sacrilege of deficit spending to help get the economy out of its protracted slump. Deficit finance was at the heart of a new economic policy associated with the famous British economist John Maynard Keynes. Keynesianism located the origins of the Depression in the problem of insufficient demand and argued that this tendency was inherent in modern capitalism. To overcome it, the government had to resort to its powers over fiscal and monetary policy, especially the former. The government’s tax policy would redistribute wealth downward, bolstering consumer demand. The “wealth tax” of 1935 was a first attempt to do this by raising taxes on corporations and the wealthy. The Roosevelt administration never fully committed itself to this strategic break with the past, but even its experiments in that direction were pioneering.
During World War II, a version of Keynesianism became the new orthodoxy. Much of the war economy operated under government supervision and was financed with federal loans and cost-plus contracts, and a sizable portion of the new plants and equipment was actually owned by the government. But by putting an end to the Depression, the war also restored the economic and political fortunes of the private sector. Consequently, after the war, the business community managed to rein in the more social democratic variant of Keynesian policy and replace it with a version friendlier to private enterprise. Commercial Keynesianism relied more on monetary policy, that is, on manipulating the money supply and interest rates in responding to the business cycle. The government backed away from any effective commitment to assuring full employment, grew more leery of deficit spending (except in the military sector) and investments in the public sector, and avoided—until the mid-1960s—a broad expansion of the social welfare state. It also leaned on automatic stabilizers like unemployment insurance to smooth the business cycle.
The New Deal’s groundbreaking efforts at income redistribution and social welfare did inspire the early years of Lyndon Johnson’s Great Society during the 1960s. But the enormous financial burdens of the Vietnam War and America’s other military commitments short-circuited those assaults on poverty. Inflation—always the latent danger of deficit spending—haunted the economy as the decade of the 1960s drew to a close. It was easier to cut budgets for poverty programs than to tackle the politically more potent military-industrial complex that President Dwight Eisenhower had warned about in his Farewell Address in 1961.
Commercial Keynesianism made business first among equals in the postwar New Deal order. Moreover, some segments of American finance and business had always supported certain New Deal reforms, just as there were elements of the business community that had backed Progressive-Era legislation. Industries that relied on the consumer market and banks oriented to the small depositor were sympathetic to Keynesian approaches that bolstered demand. These businesses formed the Committee for Economic Development in 1942. There were also corporations that supported Social Security as politically inevitable and worked to make the program business friendly. Some northern-based manufacturers backed the Fair Labor Standards Act as a way of eliminating the wage differential that gave their southern-based competitors an advantage. Still, the main business lobbies—the NAM and the USCC—militantly opposed most New Deal reforms.
Then the Keynesian regime entered a terminal crisis at the end of the 1960s. It collapsed for many reasons: the costs of the Vietnam War; the inability of the government’s economic managers to stem war-induced inflation; the unraveling of the postwar system of fixed international exchange rates in 1971; the emergence of powerful industrial competitors in Germany and Japan; the rise of foreign oil-producing nations that hiked energy costs. A protracted period of what came to be called “stagflation” in the 1970s seemed the final refutation of the Keynesian outlook. After all, its adherents had always maintained that it was impossible for the economy to suffer simultaneously from inflation and unemployment. By the late 1970s, however, the economy was indubitably suffering from both.
Commercial Keynesianism had been a bipartisan persuasion. The reaction against it, however, formed mainly within the Republican Party and culminated in the election of Ronald Reagan to the presidency in 1980. If big business had been the bête noire of the New Deal, big government played that role for the champions of a free market counterrevolution bent on reversing the New Deal.
Restoration of the old order proceeded in several directions at once. Supply-side economics, which in contrast to Keynesianism focused on encouraging investment rather than demand, became the theoretical justification for cutting taxes on corporations and the wealthy and for trimming the social welfare budget and public spending on infrastructure. Both occurred during the two Reagan administrations.
Deregulation became a key watchword of the new regime. It entailed both capturing and dismantling the regulatory apparatus of the federal government. Agencies were increasingly run and staffed either by people recruited directly from the industries they were charged with regulating or by civil servants with decided sympathies for the business point of view. So, for example, presidential appointees to the National Labor Relations Board, beginning under Reagan and continuing during the administrations of George H. W. Bush and his son George W. Bush, ruled repeatedly in favor of businesses charged with unfair labor practices, or they at least tolerated long procedural delays that effectively stymied union organizing and collective bargaining.
Rules were relaxed, left unenforced, or eliminated entirely in a host of specific industries, including airlines, trucking, telecommunications, and the financial sector. The lifting of rules governing the operations of savings and loan institutions resulted in the collapse of the whole industry in the late 1980s and the largest government bailout of private enterprise in American history, a record that stood until the massive rescue and partial nationalization of the country’s leading financial institutions during the meltdown of the financial system in 2008.
The drive to deregulate was less successful in the arena of non-industry-specific social regulation, exemplified by the Environmental Protection Agency and the Occupational Safety and Health Administration. Here powerful middle-class constituencies resisted the deregulatory impulse. Still, provisions of the Clean Air Act were weakened, the powers of the FTC reduced, and efforts to beef up consumer protections thwarted.
The Democrats put up little effective resistance to such changes. The influence of big business and finance within the Democratic Party had grown steadily after Reagan’s 1980 victory. By the 1990s, leading elements within the Democratic Party also favored cuts in the welfare state, deregulation, and monetary and fiscal policies that were designed to please the business community.
Still, the politics of restoration went only so far; modern business had come to rely on the government in certain crucial respects. Although both Ronald Reagan and George W. Bush paid lip service to the aim of a balanced budget, deficits grew under each, sometimes enormously. In both cases this had to do with a considerable expansion of the defense budget. Even before the Republican counterrevolution, military spending had become big business and a permanent part of the country’s political economy. Much of this military production is sustained by government-guaranteed, cost-plus contracts, insulated from the oscillations of the free market.
However, the assault on the New Deal order had accomplished a great deal. Substantial credit for its success must go to the remarkable ideological discipline, self-organization, and political savvy of the business community itself. Beginning in the 1970s, new organizations like the Business Roundtable (founded in 1972 and made up of the country’s top CEOs) worked with established ones like the USCC and the NAM to influence not only public policy but also the underlying values that shaped such policy. Together with such well-funded think tanks as the Heritage Foundation and the American Enterprise Institute, they denounced the profligacy of “tax and spend” liberalism, exposed the “culture of dependency” encouraged by the welfare state, and extolled the virtues of the free market. If the New Deal had been premised on the quest for economic security and a sense of social obligation, the business-sponsored literature of the Reagan and Bush eras helped create a large constituency for older values of individualism, risk taking, and self-reliance.
The heightened activities of corporate lobbyists and think tanks were supplemented by an equally spectacular growth of business involvement in elections. In its 1976 decision Buckley v. Valeo, the Supreme Court held that spending money to influence elections is a form of protected speech, striking down limits on campaign expenditures even as it upheld caps on direct contributions to candidates. Increasingly, the viability of candidates for public office came to depend on how much money they could raise to wage their campaigns. No one had more money to offer than big business. Even without engaging in corrupt or illegal practices, the machinery of American democracy came to be more and more lubricated by cold cash.
The relationship between capitalism and democracy in American history has been a tumultuous one. It began in mutual suspicion, but then passed through a period when the governing classes considered it their mission to promote the development of business and the national market. The dilemmas of late nineteenth-century free market capitalism led to widespread demands for reform, ushering in the next phase of that relationship in which the government took up the responsibility of regulating business, culminating in the New Deal order. At least until the presidency of Barack Obama, the latest stage shifted the balance of power back in favor of the private sector, but the contemporary business system relies on government intervention and support far more than its nineteenth-century predecessor. Arguably, this latest phase in the relationship between capitalism and democracy was less a return to some imaginary laissez-faire utopia than it was a return to the kind of elitist financial system that Alexander Hamilton proposed at the birth of the republic.
See also banking policy; economy and politics; labor movement and politics.
FURTHER READING. Edward Berkowitz and Kim McQuaid, Creating the Welfare State, 1988; Thomas Cochran and William Miller, The Age of Enterprise, 1942; Robert Collins, The Business Response to Keynes, 1929–1964, 1981; Steve Fraser, Every Man a Speculator: A History of Wall Street in American Life, 2005; John Kenneth Galbraith, The Great Crash: 1929, 1997; Charles Lindblom, Politics and Markets: The World’s Political-Economic Systems, 1977; Drew McCoy, The Elusive Republic: Political Economy in Jeffersonian America, 1982; C. Wright Mills, The Power Elite, 1956; Kevin Phillips, Wealth and Democracy: A Political History, 2002; David Vogel, Fluctuating Fortunes: The Political Power of Business in America, 1989; Robert Wiebe, Businessmen and Reform: A Study of the Progressive Movement, 1962.
STEVE FRASER