5

FDR Creates Satellite States

On March 2, 1930, a state governor took to the radio to warn citizens that politicians in Washington were intent on destroying the “individual sovereignty of our States.” The national government’s desire to seize “practically all authority and control,” predicted the governor, would cause the United States “to drift insensibly toward . . . autocracy.”

The most outrageous thing, according to this governor, was Congress’s usurping of the traditional police powers of the state. Under the Constitution, Congress had no right to legislate on “the conduct of public utilities, of banks, of insurance, of business, of agriculture, of education, of social welfare and of a dozen other important features. In these, Washington must not be encouraged to interfere.”

Who was this archconservative, constitutional fundamentalist? Actually, it was Franklin Delano Roosevelt—then the chief executive of New York State—and his target was the supposedly nationalistic tendencies of Republicans in Congress and in the Herbert Hoover administration. In the same radio address, FDR argued that Congress had clearly exceeded its enumerated powers by building a federal government that cost taxpayers $3.5 billion annually. Unless Americans acted to “halt this steady process of building commissions and regulatory bodies and special legislation . . . we shall soon be spending many billions more.”1

Uncanny: How did FDR know? Oh, wait, it’s because he was the one who ended up spending “many billions more.” Two and a half years after his states’ rights speech, FDR would win the presidency by promising to use the power of the federal government to lift Americans out of the Great Depression. In the first fiscal year of his administration, FDR—the guy who had complained about a $3.5 billion federal government—would spend $2 billion on antipoverty programs alone. In his administration’s first seven years, federal government outlays exceeded Hoover-era spending by over $21 billion—and that was before America’s entry into World War II. President Roosevelt and his allies would deploy that money in an effort to wipe out the very state sovereignty that had been so precious to Governor Roosevelt.

Even judged by the low standards of politics, Roosevelt’s about-face on states’ rights reveals a staggering degree of hypocrisy. As president, FDR would work to tear down every constitutional limitation on the power of the national government—he would mock those who adhered to the Constitution as reactionaries, stuck in the “horse and buggy” era. The “individual sovereignty of our States” would become, under Roosevelt, a dead letter.

A Highly Successful Flop

Most historians insist that FDR had no choice but to expand the federal government because the Depression required national action. The same American history study guide that we saw in the previous chapter declares that in the 1930s “it became apparent that no entity except the federal government had the resources to address the profound suffering of the Great Depression.” Note the passive voice: it wasn’t that anyone made a conscious decision to jettison the Tenth Amendment, it just “became apparent” that it didn’t apply anymore. In reality, the federal government’s role was hotly debated. Even the progressive jurist Louis Brandeis reportedly told FDR’s aides in 1935, “I want you to go back and tell the president that we’re not going to let this government centralize everything.”2 Beyond glossing over the disagreements among FDR’s contemporaries, the standard history is wrong on many fronts.

First, the Great Depression was primarily caused by the federal government. What made the Depression “great” was a one-third contraction in the nation’s money supply between 1929 and 1933, a phenomenon driven largely by wrongheaded policies of the Federal Reserve: the central bank created by the Progressives in that fateful year of 1913. The initial federal response to the crisis—which included massive tax hikes and protectionist tariffs—compounded the economic contraction.

Second, New Deal policies did not bring about an economic recovery. Although mainstream accounts, like Ken Burns’s worshipful 2014 documentary The Roosevelts, credit FDR with “expertly managing” the Depression, FDR’s policies in fact prolonged the crisis for a decade—longer than any economic crisis before or since.3 Under the New Deal, economic output languished—briefly reaching pre-Depression levels in 1936, only to slump again in 1938. Real recovery would not come until World War II. Unemployment averaged 18.6 percent from 1933 to 1940, and it never went below 14 percent.4

In previous depressions (or “panics,” as they used to be called), economic recovery came swiftly without federal intervention. In 1921, for example, economic output fell by 9 percent and unemployment soared to 11.7 percent. Rather than shredding the Constitution, the Harding administration allowed the market to work. Within a year, unemployment was plummeting; within two years, it was down to 2.4 percent.

The Panic of 1893, likewise, was on a par with the Great Depression. A quarter of the nation’s railroads went bankrupt, and unemployment among industrial workers exceeded 20 percent in some cities. Nobody in Congress considered it “apparent” that the federal government should attempt to tackle unemployment.5 Within five years, the economy was back to its earlier prosperity. By comparison, in 1938—after five years of New Deal programs—unemployment was still near 20 percent.

And while market forces did their thing, government stood idly by, allowing widows and orphans to starve, right? Actually, in America, relief for the poor had always been a joint effort of charitable organizations and government—local government. “Counties, cities, and townships administered relief to their residents—usually through their township supervisors or superintendents of the poor—using local tax revenue,” according to the historian Susan Stein-Roggenbuck.6

Economically, the New Deal was a flop. Politically, it was a roaring success. It ushered in a massive expansion of government, aggrandizing politicians not only at the federal level but at the state level as well. That’s right: state governments grew during the New Deal. The New Deal did not shrink state governments; it made them larger but more subservient. Size is measured in dollars, sovereignty in autonomy, and the two don’t always go together. Like the satellite states of the Soviet era, the American states of the 1930s were getting bigger, but they were losing their autonomy.

FDR and Il Duce

The intellectual framework for the New Deal—particularly the contempt for constitutional limitations—had already been erected by the Progressives. The New Dealers shared the Progressives’ faith in government by experts, even though FDR had once scoffed at Hoover’s reliance on “master minds” (as he derisively referred to them), arguing that truly objective government experts do not exist. But lo, just two years later, with FDR in the White House, we find the president surrounded by his own collection of masterminds, the “Brain Trust,” drawn largely from academia.

FDR’s experts were not content to rely exclusively on American progressivism, which, after thirty-odd years, was beginning to look a little frumpy and Midwestern. Instead, as the historian Ira Katznelson observes, “the core policymakers in this initial phase of the New Deal . . . were drawn to Mussolini’s Italy.”7 In 1933, Roosevelt expressed his admiration for Benito Mussolini in a letter to an American envoy and confided that “I am keeping in fairly close touch with the admirable Italian gentleman.”8 Yes, bringing fascism to America was the explicit goal of many of the early New Dealers. In those years before Pearl Harbor, “fascism”—in short, an authoritarian system of centralized national control over business and labor—was not yet a bad word. To the contrary, it had positive connotations for most Progressives, promising an efficient integration of government and private sector.

On March 4, 1933, three years almost to the day after his paean to states’ rights, FDR’s first inaugural address foreshadowed the authoritarian impulses behind the New Deal. Roosevelt described the president’s job as leading “this great army of our people”—shades of Edward Bellamy’s “industrial army”—“dedicated to a disciplined attack upon our common problems.”9 The president kindly paid his respects to the Constitution, but then breezily observed that the economic crisis “may call for a temporary departure” from constitutional procedures. In particular, if Congress failed to do his bidding, FDR declared that he would seek “broad Executive power to wage a war against the Emergency.”

This was war, all right, but the enemy was not “our common problems.” The enemy was the entrepreneur and the consumer. In June 1933, FDR signed into law the National Industrial Recovery Act (NIRA), under which the president could unilaterally impose “codes of fair competition” on virtually every American industry. The codes would force businesses to conform to top-down mandates on wages, hours, and countless other details of business conduct. When he was briefed on NIRA, Mussolini reportedly exclaimed “Ecco un dittatore!”—“Behold a dictator!”10 Within two years, that admirable American gentleman FDR had ordered 557 basic codes and 189 supplementary codes designed by the National Recovery Administration and backed by over ten thousand pages of administrative orders.11

And that was just the tip of the iceberg. During his tenure in office, Roosevelt issued 3,723 executive orders, more than all subsequent presidents from Truman to Obama combined. Not only did he impose business “codes” by executive fiat, but he used executive orders to seize private businesses from their owners during strikes. In 1933, FDR ordered Americans to surrender any personally owned gold to the federal government. During World War II, he would order 112,000 Japanese Americans to be put into “relocation centers,” which, as Supreme Court Justice Owen Roberts explained, is “a euphemism for concentration camps.”12

NIRA was just one part of a package of “reforms” pushed through during FDR’s first hundred days. This package also included the Agricultural Adjustment Act, which was, essentially, a mini-NIRA for the agricultural sector. Under the AAA, the federal government restricted farm output in order to keep agricultural prices high. As the Cato Institute’s Chris Edwards points out, “While millions of Americans were out of work and going hungry, the federal government plowed under 10 million acres of crops, slaughtered 6 million pigs, and left fruit to rot.”13 Yes, only the federal government could take on such a big job.

Another early New Deal program was the Federal Emergency Relief Administration (FERA)—actually, an expanded version of a Hoover experiment—which made grants to state governments. In theory, the states were free to spend the money as they saw fit. In reality, FERA’s enabling legislation gave the agency’s administrator discretion to withhold up to half of the grant money unless states conformed their relief efforts to federal dictates. When a state refused to cooperate, FERA could simply take over that state’s relief programs. On at least seven occasions, FERA’s administrator, Harry Hopkins, did in fact “federalize” state relief efforts and—according to Professor James Patterson—would have federalized many other state programs had he not been anxious to avoid the “appearance of federal dictation.”14

Intellectuals cheered the consolidation of power in Washington as the first step to a fascist makeover of the United States. One of the chief architects of NIRA, Hugh Johnson (also Time magazine’s man of the year for 1933), kept a portrait of Mussolini hanging in his office and distributed fascist literature to his colleagues in the cabinet.15 In 1934, Lorena Hickok, a former journalist who toured the country for FERA, wrote wistfully to Hopkins that “if I were twenty years younger . . . I think I’d start out to be the Joan of Arc of the Fascist movement in the United States.”16 As late as 1936, FDR’s Committee on Administrative Management sent two academics to Italy to study Mussolini’s “modern” administrative methods. This was a year after Italy’s bloody conquest of Ethiopia.

And the states? They were obsolete, according to opinion leaders like the New York Times Magazine, which, in early 1935, reported on a “growing sentiment . . . among certain members of Congress with advanced social views” to abolish the states altogether. As evidence, the Times pointed to a proposed plan—but without naming any of its proponents—to replace the forty-eight states with nine administrative “departments.” Each department would have its own elected governor, but he or she would be answerable to the president because each department would simply be a subdivision of the national government. “There would still be that system of [constitutional] checks,” explained the Times, “the whole federal process remaining the same, except that State governments as such would cease to exist.”17

FDR vs. the Constitution

With its first hundred days behind it, the Roosevelt administration got down to the hard work of arresting and prosecuting citizens who dared to work long hours or produce abundant crops. There was just one problem: the Constitution. The federal government had no enumerated power to regulate business practices, or agricultural production, or labor practices; therefore, it clearly had no business imposing fines or jail time on people who failed to follow the new mandates.

In 1935, the US Supreme Court killed NIRA in the case of A.L.A. Schechter Poultry Corp. v. United States. In Schechter, lawyers from the National Recovery Administration sought to put some Brooklyn-based chicken butchers in jail for violating the “Live Poultry Code.” The butchers were convicted, but the Supreme Court overturned the convictions, holding that the code, and therefore NIRA itself, exceeded Congress’s power under the commerce clause. As in the Sugar Trust case, the court reaffirmed the original understanding of “commerce” as referring to trade across state lines, and not as an all-purpose synonym for economic activity. Chief Justice Charles Evans Hughes observed that the code did not actually regulate the interstate sale of chickens, but instead regulated the business practices of butchers who might handle chickens that had crossed state lines. The court was willing to allow regulation of intrastate activities that had a “direct effect” on interstate commerce, but here the connection to interstate commerce was too tenuous.

Hughes conceded that there would be cases where the line between “direct” and “indirect” effects was ambiguous—courts exist to deal with gray areas—but that was no reason to abandon the distinction. Without some line between interstate and intrastate affairs, “there would be virtually no limit to the federal power and for all practical purposes we should have a completely centralized government.”

FDR immediately convened a White House press conference in which he compared the Schechter decision to the proslavery ruling in Dred Scott. It was in this presser that Roosevelt lambasted the court for its “horse and buggy” understanding of interstate commerce. He stoutly announced that the administration would find some way to invest the federal government with “the powers which exist in the national governments of every nation of the world.” Like Italy!

The following year, 1936, the Supreme Court struck down a separate New Deal law that had established yet another industry code: the Bituminous Coal Code (“bituminous” being a fancy word for black coal). The country was divided into twenty-three “coal districts,” each with its own unelected board responsible for implementing the code. The boards were empowered to determine the maximum hours and wages in coal mines, and to set prices for the local coal industry. Labor disputes would be resolved by a panel of arbitrators appointed by the president. Coal producers that failed to subscribe to the supposedly voluntary code would have to pay a punitive 15 percent tax on their gross sales.

The court held that Congress had no constitutional authority to turn the coal industry into a government utility (Carter v. Carter Coal Co.). Such a power, if it existed, was reserved to the states and never delegated to the national government. Writing for the majority, Justice George Sutherland demolished the argument that the state governments had authorized the federal power grab by their failure to object. The right to “complete and unimpaired state self-government” belongs to the people, not state officials. In other words, state politicians cannot “abdicate” their powers, even if doing so would provide welcome relief from their responsibilities.

In a somewhat curious detour, Sutherland stressed that “the Constitution itself is, in every real sense, a law.” The fact that he felt compelled to point this out reveals less about Sutherland than it does about the New Dealers who treated the Constitution as an archaic cultural tradition—sort of like quilting—that can be put aside when there is important work to be done.

This was a tough year for the New Dealers. A few months earlier, the court had also struck down the AAA in United States v. Butler. The court’s decision rested on the fairly obvious—one would think—fact that none of Congress’s enumerated powers includes the power to coerce farmers into producing less food. One of FDR’s top advisers, Edward Corwin, a professor and leading Progressive scholar, proposed that the president silence his critics on the bench by adding six new justices who would guarantee pro-administration decisions.18

In February 1937, Roosevelt unveiled his infamous court-packing scheme, provoking a national debate. The scheme was never enacted, but the threat worked. Later in 1937, the court blessed the National Labor Relations Act (NLRA) as a proper exercise of Congress’s commerce power, even though the act regulated purely intrastate labor practices (NLRB v. Jones and Laughlin Steel Corp.). Not only did NLRA invade states’ rights, but it did so unnecessarily. The centerpiece of the act—the promotion of collective bargaining—had already been embraced by thirty-two states before Congress got involved. As we saw in the last chapter, states consistently beat the federal government to the punch when it came to progressive legislation.

But in Jones and Laughlin, Justice Roberts, who had previously opposed the New Deal’s expansion of federal power, now abandoned the Sugar Trust distinction between commerce and manufacturing. Instead, he joined a new majority that resurrected the Shreveport Rate case’s holding that Congress’s power over commerce extends to “all matters” having a “close and substantial” relation to interstate commerce.

In theory, the “close and substantial” test means that certain activities must, logically, have such a “distant and insubstantial” relation to interstate commerce that Congress could not legitimately assert power over them. But for the remainder of the New Deal, and for decades thereafter, the court would uphold every congressional assertion of commerce power no matter how tenuous the connection to interstate commerce. In two cases from the 1940s, the court upheld Congress’s power to regulate purely intrastate activities of building maintenance workers and elevator operators. Why? Because they worked in buildings where some of the tenants produced goods destined for interstate commerce.

In 1942, the court upheld Congress’s power to dictate how much food a private citizen could grow on his own land for his own consumption. The case of Wickard v. Filburn involved an Ohio farmer (Roscoe Filburn) who cultivated eleven acres of wheat more than was allowed under the (now revised) AAA. Mr. Filburn did not sell the excess wheat; rather, he used it on his own farm, to feed livestock and make flour for his family. As punishment for growing too much food, the Department of Agriculture fined Filburn forty-nine cents a bushel. Instead of paying the fine, however, Filburn filed suit in federal court, arguing that his constitutional rights were being infringed.

After Filburn won a partial victory in the district court, which reduced the amount of his fine, the secretary of agriculture, Claude Wickard, appealed to the Supreme Court, seeking to vindicate the federal government’s power to control agricultural production—even when the produce never crosses state lines. The court obliged. Applying “the principles first enunciated by Chief Justice Marshall in Gibbons v. Ogden,” the court held that the commerce power “is not confined in its exercise to the regulation of commerce among the States”—even though that’s precisely what the text says (“to regulate Commerce . . . among the several States”). The invocation of John Marshall, as we saw in the previous chapter, was a reliable indicator that the court was playing fast and loose with the Constitution’s language.

New Deal lawyers were a new breed who portrayed their constitutional infidelity as a positive virtue. They weren’t disregarding the text; they were expounding the “Living Constitution”—a theory that had been promoted by Progressive politicians and academics since the 1920s.19 According to this doctrine—which still holds sway in most law schools—you don’t need the consent of three-fourths of the states to amend the Constitution, as the text says. Instead, judges and politicians can creatively reinterpret the text to remove any constitutional barriers to their preferred policies.

As a practical matter, the Living Constitution is a disaster for states’ rights. The Constitution and the Bill of Rights are devoted largely to defining the limits of the central government’s power. Because the Living Constitution liberates judges to disregard those parchment barriers, it naturally leads to federal judges’ aggrandizing the federal government at the expense of the states.

Wickard was a breathtaking example of this federal power grab. One can read the Constitution backward and forward but one will never find a provision empowering Congress to regulate agricultural production at all, much less to dictate the amount of wheat a farmer can grow for his own consumption. For a state to pass such a law would be foolish and counterproductive, but it would be constitutional. For Congress to do so—and for the Supreme Court to bless it—is essentially a declaration that the Constitution is dead. Long live the Living Constitution!

To justify the result in Wickard, the court applied its “living” interpretation not only to the commerce clause but also to the necessary and proper clause of Article I, Section 8. As previously noted, the latter provision empowers Congress to “make all laws which shall be necessary and proper for carrying into Execution the [enumerated] Powers.” The assumption in cases like Wickard—and now the conventional wisdom—is that the necessary and proper clause “is an enlargement . . . of the powers expressly granted to Congress,” to quote a website devoted to legal education.20 The truth is the reverse—the necessary and proper clause was inserted to emphasize Congress’s duty to adhere strictly to its enumerated powers.21 But from the earliest days, the Anti-Federalists had predicted that the clause would be exploited to usurp power from the states. That is what Congress and the court were now doing.

The Second New Deal

Before he had completely subdued the Supreme Court, FDR did make superficial concessions to states’ rights by retreating from the extreme centralization of his first hundred days. In 1935, Roosevelt launched what came to be known as the “Second New Deal,” a policy shift that would ostensibly involve paying more respect to the states.

Declaring that the central government “must and will quit this business of relief,” FDR proposed a scheme of unemployment insurance that would be administered by the states, although funded by federal tax credits. He also unveiled a new system of matching grants to states that provided federally recognized “categorical assistance”—that is, welfare relief for certain categories of needy persons, such as widows or the blind. “Returning relief to the states” was the administration’s theme for 1935—much to the chagrin of a professional social work establishment that was just getting used to the convenience of one huge money spigot in DC.22

Roosevelt had not rediscovered the virtues of states’ rights. Even in the midst of the “Second New Deal,” he centralized policies when he could get away with it. Old-age pensions, for example, emerged in 1935 as an entirely national program with no state participation. That same year, FDR secured passage of the Emergency Relief Appropriation Act, which gave the president unilateral power to disburse $4.8 billion of federal money. Among other things, FDR used the money to create the Works Progress Administration (WPA), of which he put Harry Hopkins in charge.

But FDR did begin to harness state governments in 1935, partly because of the need to give local bosses patronage, but also because of the “lack of bureaucratic and administrative capacity at the federal level,” as the historian Gary Gerstle has observed.23 In other words, the state governments had the knowledge and the manpower needed to implement FDR’s grandiose schemes. But the central government did have the uncommonly useful power to print money and then spread it around.

Consider, for example, how the New Deal Congress co-opted the states with respect to unemployment insurance. Part of the Social Security Act imposed a federal tax—rising to 3 percent—on the gross payroll of every company with eight or more employees. Companies based in states that had adopted federally approved unemployment insurance laws would get a 90 percent rebate on the tax. Any state politician who failed to secure that 90 percent rebate was asking for trouble. Not surprisingly, the states fell in line.
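To see the arithmetic behind that pressure, consider a minimal sketch. The Python below is purely illustrative: the $1 million payroll is a hypothetical figure, and the calculation models only the federal tax as described above (3 percent of gross payroll, with a 90 percent rebate for employers in states that adopted federally approved unemployment insurance laws).

```python
# Illustrative sketch of the Social Security Act's unemployment tax incentive,
# using only the figures described in the text: a 3 percent federal payroll tax
# and a 90 percent rebate for employers in states with federally approved
# unemployment insurance laws. The example payroll is hypothetical.

FEDERAL_TAX_RATE = 0.03  # federal tax on gross payroll (firms with 8+ employees)
REBATE_SHARE = 0.90      # share of the federal tax rebated in compliant states

def net_federal_tax(annual_payroll: float, state_compliant: bool) -> float:
    """Federal payroll tax owed after any rebate."""
    gross_tax = annual_payroll * FEDERAL_TAX_RATE
    if state_compliant:
        return gross_tax * (1 - REBATE_SHARE)
    return gross_tax

payroll = 1_000_000.0  # hypothetical employer with a $1 million annual payroll
print(net_federal_tax(payroll, state_compliant=True))   # 3000.0
print(net_federal_tax(payroll, state_compliant=False))  # 30000.0
```

On these assumptions, an employer in a noncompliant state owes ten times as much to Washington, with nothing flowing back to its own state’s relief fund; no governor wanted to explain that arithmetic to local businesses.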

Was the tax-and-rebate scheme constitutional? The issue came before the Supreme Court when an Alabama corporation challenged the law as (among other things) an invasion of state sovereignty. From 1789 to 1935, relief of the unemployed was one of those topics squarely within the police powers of the state—that is, it was one of those powers reserved to states under the Ninth and Tenth Amendments. After the passage of the Social Security Act, each state not only had to enact an unemployment insurance program, but the program had to be approved by the unelected members of the Social Security Board.

Unfortunately, in Charles Steward Machine Co. v. Davis (1937), five justices of the Supreme Court rejected the Alabama corporation’s challenge. Writing for the majority, Justice Cardozo insisted that Congress had to have the power to nationalize unemployment relief, not because the Constitution says so, but because it is “too late today for the argument to be heard with tolerance that in a crisis so extreme the use of the moneys of the nation to relieve the unemployed and their dependents” is beyond the powers of Congress. This is the argument—as old as the Alien and Sedition Acts—that constitutional limitations can be disregarded whenever Congress decides there is an “emergency.” Why not just say that the Constitution applies except when it doesn’t? Cardozo then rhetorically asked, “Who then is coerced through operation of this statute? . . . Not the state,” he answered. “For all appearances,” said Cardozo, “she is satisfied with her choice, and would be sorely disappointed if it were now to be annulled.”

This is what’s known as pulling yourself up by your own bootstraps: since Alabama enacted a federally compliant law and hasn’t seen fit to repeal it, the state must be acting voluntarily. Ergo, no federal coercion! “Alabama is still free” to repeal its unemployment law at any time, declared Cardozo. The fact that Alabama could not exercise its “freedom” without subjecting its citizens to a punitive tax was simply ignored.

But even if Cardozo’s ivory tower fantasy were correct—even if Alabama’s politicians were perfectly content to go along with the federal mandates—it doesn’t change the constitutional equation. As James Madison observed during Virginia’s struggle against the Sedition Act, the word “state” stands for a political community; it is not a shorthand expression for the politicians who happen to hold state office at any given time. By equating “state” with “state politicians,” Cardozo rejected the key insight that the court had recognized just one year earlier in the Carter Coal case: the right to local self-government is not something that belongs to state politicians to waive at their convenience.

On the same day as the Charles Steward decision, the court also upheld the Social Security Act’s old-age pension scheme. In Helvering v. Davis, the court held that the creation of a national pension plan—although beyond any of Article I’s enumerated powers—was nonetheless authorized under the general welfare clause. Justice Cardozo announced that “the conception of the spending power advocated by Hamilton and strongly reinforced by [Justice] Story has prevailed over that of Madison.”

Well, there you have it. It’s not as though Cardozo himself were rewriting the Constitution. It’s just that the whole Hamilton vs. Madison thing had gone into double overtime, and Hamilton had scored the tie-breaking goal, allowing him to posthumously define the scope of the “spending power” (as Cardozo described it) under the general welfare clause. Except that there is no spending power in the general welfare clause. That clause, which is at the very beginning of Article I, Section 8, provides that “the Congress shall have Power to lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and General Welfare of the United States”; enumerated powers follow.

The entire provision is about taxing, not spending. It gives Congress the power to tax—a controversial power in 1789, and one that Congress had lacked under the Articles of Confederation. But the clause restrains this fearful power by specifying that any taxes must be justified in the name of (1) paying down the debt; (2) defense; or (3) the “general welfare.” As originally understood, the general welfare clause was not a grant of extra powers to Congress; rather, it was an additional constraint on those powers. Congress was to be limited to its enumerated powers—that is the main thrust of Article I—but Congress also had to ensure that it did not use its taxing power to favor any particular region or faction. That is why the Constitution mandates that tax revenues be used only to pay the debt (which is owed by the entire union), or to provide for the common defense or general welfare.

With Helvering, Cardozo and his allies turned the general welfare clause into a loophole giving Congress nearly complete discretion to spend the money that it collects without regard to enumerated powers. Imagine the Constitution’s framers painstakingly spelling out the specific powers of Congress—and then deciding that Congress could ignore those limitations as long as it was spending taxpayer money. What else does Congress do?

Good-bye, Tenth Amendment

Having abdicated its responsibility to set limits on congressional power, the Supreme Court also abandoned any attempt to protect states’ rights. After the disastrous decisions of 1937—Jones and Laughlin, Charles Steward, and Helvering—it would be thirty years before any litigant could successfully invoke either the Ninth or Tenth Amendment in any federal court. Here at last was the full flowering of the changes that had begun to take hold in 1913: a radically transformed Union in which the states are subservient to the central government. By 1941, the Supreme Court would brush aside objections to the federalization of employment law—long the exclusive province of state law—asserting that the Tenth Amendment is “but a truism” that does not in any way limit the scope of Congress’s powers (United States v. Darby).

The new measure of federal power was to be the consent of the states. Not real consent, but the phony consent of the Charles Steward opinion, and of the Massachusetts v. Mellon opinion of the previous generation. States would be deemed to consent to national policies provided they accept whatever carrots, and avoid whatever sticks, Congress may brandish. The old idea of “dual federalism” was officially dead. In its place, as the law professor Frank Strong observed in 1938, there was a “new brand of federalism” that involved “expanding federal power through the implied consent on the part of the states” (emphasis mine).24

This new brand of federalism was called “cooperative federalism,” a feel-good label that obscures the coercive reality of federal power. “Cooperation” was not hard to come by, now that Congress had unfettered “general welfare” power to—how shall I put this?—persuade the states to sign up for national programs. Under the cooperative system, the national government grew by throwing money at the states. Between 1932 and 1940, federal outlays for cooperative programs grew from $250 million a year to nearly $4 billion.

This infusion of government money “created the possibility of unprecedented political patronage for the politicians in control of the money.”25 Mainstream commentators—eager to preserve FDR’s iconic status—would like us to believe that Roosevelt and his allies resisted the temptation to use relief money for political purposes. The New York Times’ Paul Krugman, for example, wrote in 2008 that FDR had made big government “clean” by steering clear of pork barrel legislation.26 Krugman’s version of history is pure fantasy. In reality, the New Deal led to political corruption on a vast scale.

Throughout the New Deal, the Roosevelt administration systematically diverted funds from Southern states—which were the poorest, but also safely Democratic—to Northern “swing states” that FDR needed in order to win reelection. Under Hopkins’s leadership, the WPA channeled money to projects that would benefit pro-FDR politicians. According to an analysis by Gavin Wright, an economic historian at Stanford University, roughly 80 percent of the state-by-state variation in per-person New Deal spending can be explained by politics rather than real differences in need.27 At one point, FDR threatened to cut off every penny of infrastructure spending for New York City unless Mayor Fiorello La Guardia fired Robert Moses, a municipal planner who happened to be a political foe of Roosevelt’s.28

None of this should come as a surprise to progressives who consistently rail against politicians being “bought and paid for.” If money from Wall Street can corrupt a politician, why not money from Washington?

Good for States, Bad for People

Thanks largely to cooperative fiscal programs, which often required states not only to comply with federal conditions but also to match federal grants, state governments grew during the New Deal. In 1913, the states accounted for 9.3 percent of all government expenditures in the United States. By 1940, that figure had grown to 17.5 percent—nearly double. Moreover, the states were allowed to exercise regulatory powers concurrently with the federal government. That had not been the case under the system of “dual federalism,” in which state and federal powers were mutually exclusive. The nice thing about that system—aside from being faithful to the Constitution—was that any particular activity was regulated by, at most, one level of government. But in the New Deal, as Michael S. Greve, a professor at George Mason University School of Law, observes, “not a single [federal] regulatory regime unambiguously trumped or displaced the states.”29 For example, Congress created the Securities and Exchange Commission to regulate stock and bond trading—but without eliminating the state regulators who covered the same turf. As political scientists often put it, dual federalism is like a layer cake, with clearly delineated levels of government, while cooperative federalism is a marble cake, with overlapping state and federal roles.

The growth of state government does not equal an expansion of states’ rights. Remember: states’ rights belong to the people of each state, not to the politicians. Since 1789, American federalism had maximized individual liberty by setting autonomous states in a virtuous competition for citizens and tax revenues. For politicians, however, the New Deal grant programs offered a tempting way out of the rigors of competition. Federal grants allowed them to take credit for expanded services “without resort to the politically embarrassing expedient of state taxation,” as New Dealer Jane Perry Clark conceded in 1938.30 The money came with strings attached, but those strings would apply to all states, and thus would not put any particular state at a competitive disadvantage. State politicians, in short, sacrificed autonomy for money and protection from competition.31

As a result of federal-state collusion, state politicians took on redistributive programs far beyond anything their constituents had actually asked for—a fact that eventually caught up with the politicians. In Michigan, for example, a popular referendum in 1936 overturned the state legislature’s attempt to create a centralized welfare system so that the state could qualify for Social Security funds.32 That’s right: in the midst of the Depression, voters in Michigan opted to retain local control over poor relief, even though that meant sacrificing federal dollars. Two years later, the 1938 midterm elections delivered a national repudiation of the New Deal. Republicans and anti–New Deal Democrats captured governorships across the country, even in progressive bastions like Minnesota and Wisconsin. A Republican won the Pennsylvania gubernatorial race partly on the strength of his campaign pledge to burn all three thousand pages of regulatory legislation signed by the Democratic incumbent. The GOP picked up eighty-one seats in the House of Representatives and six in the Senate.33 Politically, 1938 might have been the beginning of the end for FDR, but he would soon be reborn as a wartime leader.

The Death of Local Government

The growth of state governments did not come at the expense of the federal government, which also exploded during the 1930s and 1940s. Rather, the big loser in the New Deal was local government. In 1913, local governments represented over 60 percent of total government expenditures. By 1952, that figure was 20 percent, and it has never since exceeded 30 percent. Local government—rule by people who know their constituents personally, who are readily accessible, and who are directly accountable at the ballot box—had been the basis of American civics before the New Deal.

But local governments were largely powerless to arrest the centralizing tendencies of the New Deal. Although voters occasionally rebelled against federal-state collusion—as occurred in Michigan’s 1936 referendum—the only institution that could have enforced the Constitution’s structural provisions was the Supreme Court. Unfortunately, the New Deal court abdicated its traditional role as a “structure court”—an enforcer of the Constitution’s organizing principles—and instead became a “rights court.”34 After the late 1930s, Congress would be allowed to centralize as much power as it wanted, unless it intruded on certain rights that the court deemed worthy of protection (local self-government not being one of those rights).

As for state laws, the court had no hesitation about striking them down under the “substantive due process” technique we saw in the previous chapter. What was changing, however, was the scope of substantive due process. Since the Progressive Era, the Fourteenth Amendment had been expanded to “incorporate” certain parts of the Bill of Rights—so that, under the guise of “due process,” federal courts could now veto any state law for violating provisions that were originally intended to serve as a restraint on the federal government. And even that was not enough: the court went on to assert that the Fourteenth Amendment gave it authority to punish states for violating any other right that the court deems to be “so rooted in the traditions and conscience of our people” that it must be enforced by the judiciary (Snyder v. Massachusetts, 1934).

The Supreme Court had now set itself up as the national arbiter of acceptable policy. It would take a couple of decades and some ill-considered judicial nominations, but the Supreme Court would come to pose the greatest threat to states’ rights during the postwar era, as we will see in the next chapter.