IN APRIL 1786, a year before the Constitution’s authors gathered in Pennsylvania, an unfortunate tax collector found himself surrounded by a mob near the state’s western frontier. The official had come to collect new state taxes on liquor. The protesters had come to teach the official a lesson about the limits of government authority. They cut the hair off one side of his head, dressed him up in embarrassing attire, and then forced him to ride through three counties, with regular stops at local stills to sample the wares. One dismayed observer called it the “most audacious and accomplished piece of outrageous and unprovoked insult that was ever offered to a government.”1
Audacious and accomplished it was, but not unprecedented. After all, the case for the American Revolution rested heavily on resistance to taxation, as memorialized in the far more audacious Boston Tea Party. Yet the American revolutionaries decried “taxation without representation.” The Pennsylvania mob had representation. They were resisting authority. To those who would assemble in Philadelphia a year later, it was such resistance—not just by citizens but also by the states themselves—that was the problem.
The leaders of the movement to write a new constitution shared a common diagnosis: The Articles of Confederation were dangerously weak. The federal government needed coercive authority, including authority to tax. As the political scientist David Brian Robertson explains their thinking, “Taxes were the lifeblood of the new national government.”2 The framers refused, for example, to put any constraint on what taxation could be used for, and they explicitly defeated proposals to limit direct taxes on property, sales, and income. Eleven years after the Declaration of Independence denounced Britain for “imposing taxes on us without our Consent,” the architects of the Constitution gave their envisioned national government taxing authority even greater than that held by the Crown. As Alexander Hamilton explained in “The Federalist No. 30,”
Money is, with propriety, considered as the vital principle of the body politic; as that which sustains its life and motion, and enables it to perform its most essential functions. A complete power, therefore, to procure a regular and adequate supply of it, as far as the resources of the community will permit, may be regarded as an indispensable ingredient in every constitution. From a deficiency in this particular, one of two evils must ensue; either the people must be subjected to continual plunder, as a substitute for a more eligible mode of supplying the public wants, or the government must sink into a fatal atrophy, and, in a short course of time, perish.3
If the rallying cry of the nation’s first revolution was “No taxation without representation,” the rallying cry of the second could very well have been “Yes taxation with representation.”
Less than a decade later, this “vital principle” came into sharp focus when western Pennsylvanians again showed their displeasure with new taxes on liquor: this time a national tax passed by Congress in 1791 at the insistence of Hamilton, now secretary of the Treasury. In the conflict that followed, known by historians as the Whiskey Rebellion, President George Washington led a force roughly as large as the Continental army to put down the insurgency and establish federal authority. The Whiskey Rebellion was “the single largest example of armed resistance to a law of the United States between the ratification of the Constitution and the Civil War,” writes the historian Thomas Slaughter.4 If nothing else, the huge federal response showed that when the Founders said they believed in taxation with representation, they really meant it.
Today the federal government faces another antitax rebellion: a destructive campaign of resistance to the federal government’s authority to collect taxes owed under law. Only this time it’s not led by frontier outlaws. It’s led by federal officials.
Meet Senator Ron Johnson: Since his election in 2010 as a Tea Party darling backed by Koch Industries, the Wisconsin Republican has led a successful crusade against taxation with representation, crippling the capacity of our modern tax collectors to carry out their jobs. Johnson has done nothing so crude as shredding documents or dressing up inspectors. He’s adopted a much more effective strategy: slashing the IRS’s budget.
In 2014 the independent IRS Oversight Board recommended a budget of $13.6 billion for the agency. Congress, led by Johnson and his House GOP counterpart, Florida’s Ander Crenshaw, gave it less than $11 billion. That might seem like a modest difference between two big numbers. It’s not. With its funding declining since the 2010 midterm, when Republicans captured the House, the IRS has grown more beleaguered and less capable. Even as it administers an ever more complex tax code for a growing population, its employees have struggled with staff shortages, huge cuts in training, stagnant salaries, and increasingly archaic technology.5
The biggest costs, however, are borne by those who must deal with the IRS. At some branches, reported Bloomberg News in 2015, “Lines of taxpayers seeking assistance have looped around the block. . . . Waiting times have stretched into hours. An estimated six in ten callers to the agency’s toll-free lines haven’t been able to get through, and those who have could get help with only the most basic questions.” The head of the IRS’s Taxpayer Advocate Service (TAS), Nina Olson, alerted Congress to a “devastating erosion of taxpayer service, harming taxpayers individually and collectively” and creating “compliance barriers and significant inconvenience for millions of taxpayers.”6
If $11 billion seems like a lot to spend on tax administration, keep in mind that the IRS collects around $3 trillion in taxes. More important, every $1 spent on IRS enforcement yields $6 in recovered taxes, as well as at least three times more in indirect gains due to the deterrent effect on tax evasion.7 (Targeted enforcement efforts that focus on high-income taxpayers produce more than $47 in recovered taxes for every $1 spent.)8 Still, the IRS estimates that one in six tax dollars goes unpaid—a loss of nearly $450 billion (in 2015 dollars) each year.9
Most people pay their taxes not because they fear an audit but because they believe taxes are legitimate and others are also paying their fair share. (For families that rely mostly on wages, automatic withholding also makes tax evasion inherently difficult.) Audits are essential, however, to send a message that taxes are being paid—and especially that they are being paid by businesses and wealthy Americans whose income is mostly exempt from withholding and whose incentive to avoid taxes is greatest. Yet audits are at their lowest level in a decade, with large drops for returns filed by corporations and by taxpayers earning more than $1 million.10 IRS experts and leaders worry that the United States might be reaching a tipping point beyond which voluntary compliance will start to fall. “The IRS will never be a beloved federal agency because it is the face of government’s power to tax and collect,” explained TAS’s Olson. “But it should be a respected agency.”
Respect isn’t exactly what the IRS is getting these days. The Founders saw the power to lay and collect taxes as the hallmark of the nation’s new democracy. Now those who claim to speak in their name don’t just criticize the laying of taxes but also seek to undermine government’s capacity to collect the taxes authorized by popular rule. In an interview with the Atlas Society—dedicated to promoting the philosophy of Ayn Rand—Senator Johnson said, “We’re all suffering collectively from Stockholm syndrome. That’s where people who have been kidnapped are grateful to their captors when they just show them a little bit of mercy.” In case it wasn’t clear who those captors were, Johnson explained that “the root cause” of the nation’s problems was “the size, the scope, the resulting cost of the government.”11
Johnson is an extreme example. He helped erect a statue honoring Atlas Shrugged in his hometown, criticized CEOs who called for higher taxes to reduce the deficit as similar to the corporate pawns who enabled government “looting” in Rand’s novel, and called the Affordable Care Act “the greatest assault on freedom in our lifetimes”—which might surprise African Americans grappling with Jim Crow when the sixty-year-old Johnson was growing up. When Johnson was asked about the differences between his ideas and those of Rand, he replied, “I’m not sure there are too many differences.”12 The businessman-turned-politician even compared himself to the novel’s Hank Rearden: the brilliant steel baron who joins the strike of the productive class led by John Galt.
As we have seen, however, Johnson is hardly alone in embracing Randian beliefs. Nor is he alone in decrying the IRS. In 2012 Maine’s Republican governor, Paul LePage, called the IRS “the new Gestapo.” Asked to apologize, he clarified, “What I am trying to say is the Holocaust was a horrific crime against humanity, and, frankly, I would never want to see that repeated. Maybe the IRS is not quite as bad—yet.”13 In South Carolina the next year, the Republican Party sent out a fund-raising appeal that described the agency as “Obama’s Gestapo.”14 The legacy of Pierce Butler, the South Carolina delegate to the constitutional convention who described taxes as “distinguished marks of sovereignty,” has apparently faded in the Palmetto State.
To be sure, not everyone compares the IRS to the Gestapo. The moderate view, however, isn’t much friendlier. Since the mid-1990s, when Republicans went on the warpath against the agency, the IRS has been portrayed as a sort of evil idiot savant—at once horrendously incompetent and devilishly cunning. When, in 2013, the inspector general of the IRS alleged that lower-level staff had applied special scrutiny to conservative political groups seeking tax-exempt status, the cacophony of contempt was deafening. “This is tyranny,” said Joe Scarborough, the House Republican turned MSNBC TV host. ABC News correspondent Terry Moran described the affair as “a truly Nixonian abuse of power by the Obama administration.”15 The most colorful complaints, however, came from the comedian Jon Stewart on The Daily Show, who described the incident as “removing the last arrow” in President Obama’s “progovernance quiver.” “Congratulations, Barack Obama,” proclaimed Stewart. “You’ve managed to show that when the government wants to do good things, your managerial competence falls somewhere between David Brent [the horrible boss on British TV’s The Office] and a cat chasing a laser pointer. But when government wants to flex its more malevolent muscles, you’re fucking Iron Man.”16
The takedown was humorous, but it wasn’t accurate. Subsequent investigations showed that the frontline IRS officials who had questioned the nonprofit designation were acting substantially on their own. Moreover, their net of scrutiny had ensnared liberal as well as conservative groups; in fact, the only group flagged by the IRS that ultimately had its tax-exempt status denied or revoked was a left-leaning one.17 After the inspector general report, the officials responsible were reprimanded and the federal supervisor of their division compelled to resign. (President Obama forced the IRS’s head to step down as well, even though he had not been in charge at the time; a Bush appointee had been.)
What’s more, the underlying issue was complicated. The Supreme Court’s Citizens United ruling had opened the door to new organizations that claimed tax-free status yet acted much like traditional political groups that are taxed. A year later, a Republican-appointed federal judge dismissed all the lawsuits brought against the IRS. What was portrayed as a witch hunt carried out by a partisan agency turned out to be mostly an ill-conceived screening process developed by a short-staffed IRS grappling with ambiguous law. (Thanks to the GOP attacks, however, the IRS would essentially give up policing the increasingly murky lines between charitable nonprofits and those dedicated to political advocacy.) If this showed the IRS’s malevolent muscles, Iron Man had nothing to worry about.
Nor was 2013 the first time Republicans had gone after the IRS. In 1997, GOP leaders launched a sophisticated campaign against the agency. Following the advice of Republican strategists, the GOP chair of the Senate Finance Committee held two years of hearings on the IRS’s “Gestapo-like” (Senator Trent Lott) and “out of control” (Senator Don Nickles) behavior. As veteran tax reporter David Cay Johnston described the spectacle in his 2005 book Perfectly Legal,
Over six days in the fall and spring the television networks gave mostly breathless accounts of a rogue agency ruining lives with abandon. . . . The story: Unnamed IRS agents falsely making unnamed people pay taxes they did not owe; dozens of criminal investigation agents, brandishing guns, entering peaceful offices and homes as if they expected armed drug dealers; agents issuing subpoenas for no purpose except to embarrass people. There was even testimony that an IRS agent held a gun on a girl, caught in a raid on her parents’ home, and forced her to change her clothes while he watched. There was one problem: Most of it wasn’t true.18
No matter. The IRS was a “frightening concept,” explained GOP word guru Frank Luntz. “Perception is reality,” he went on. “People are afraid, whether they should be or not. As a child, you are afraid of monsters in the closet. To my knowledge, there has never been a monster found in the closet of a five-year-old, but how many five-year-olds are afraid of monsters? Same thing here.”19 In time, investigative reporting, court proceedings, and an independent government examination all concluded that the most serious charges were false, and that the biggest problems with the agency concerned insufficient staff, inadequate training, and a focus on the easiest cases rather than the biggest or most egregious ones. By then, however, Congress had passed the Internal Revenue Service Restructuring and Reform Act of 1998—which upended the IRS, requiring that the agency jump significant new hurdles without significant new funding or personnel. In the next fifteen years, the IRS would lose twenty-four thousand employees.
If the ongoing GOP assault hasn’t illuminated the IRS’s real challenges, it does say a lot about how effective governance has been eroded. The cycle of ginned-up scandals and destructive policies follows a recurrent pattern: Antigovernment critics make inflated or fabricated charges. Fox News and other conservative sources sensationalize them. Most of the rest of the news media follow—until the charges are shown to be false, at which point the mainstream press loses interest. Leading Democrats who might have counseled balance stay quiet. No one stands up for a government that works, and so government comes to work less and less well.
For antigovernment forces, this is a win-win proposition. With their two-decade-plus campaign against the IRS, Republicans have mastered the self-fulfilling critique: Say the government isn’t doing its job, make it harder for the government to do its job, repeat. Far from paying a price for their exaggerated or false claims, Republicans have used the issue to gain valuable political ground. And from that ground, they have hobbled the enforcement powers of the IRS—particularly when it comes to businesses and higher-income taxpayers. Under Ronald Reagan’s IRS, one out of every nine returns was audited. By 1997, it was one in sixty-six, and after the 1997–98 hearings and legislation, it fell to less than one in a hundred.20
Revealingly, there was one group for which audits didn’t fall: the working poor. After Republicans captured Congress in 1994, a major agenda item was cutting the Earned Income Tax Credit, which GOP leaders claimed was rife with malfeasance—“the federal government’s fastest growing and most fraud-prone welfare program,” as Senator Nickles put it. In response to congressional pressure, audits for the EITC rose to record highs for the poor even as they fell to record lows for the rich.
Americans understand that taxes are a civic obligation.21 Their biggest complaint is not that federal taxes are illegitimate or excessive, but that wealthy Americans don’t pay their fair share. Yet Republicans generated enormous support for a crusade that offered increased benefits to affluent tax evaders and increased hassles for ordinary taxpayers, especially the most disadvantaged. After all, everyone distrusts the government.
The successful attack on the IRS is not an isolated story. It is emblematic of a destructive assault on the capacity of government to carry out its most basic tasks. American government does have a big problem: It’s increasingly incapable of doing what it needs to do to ensure a prosperous and safe society. Across nearly every area of domestic government—managing the nation’s finances, regulating the market, protecting the environment, delivering basic services, investing in the future—we see decay and destruction, stalemate and subversion, efforts not just to reduce government’s capacity but also to undermine the legitimacy of (and hence voluntary compliance with) federal law itself. The tragic irony is that this assault is being waged in the name of a Constitution that was designed with precisely the opposite intent.
As taxpayers snaked around the block at IRS offices, theatergoers jostled to see the “hottest ticket in New York City”: Hamilton, the story of the nation’s first Treasury secretary, told through contemporary music and dance, with a largely nonwhite cast. The musical dwelled less on Hamilton’s thinking than on his modest roots and outsized ambition. Still, there was something incongruous about hordes of fans going to see a show with a hero whose resolutely progovernment stance ran so against the grain of contemporary debate. (At times, the incongruity was hard to ignore: After seeing the musical, Fox News founder Rupert Murdoch tweeted, “Fabulous show! Musical of Alexander Hamilton, great acting, dancing, singing. Historically accurate with lives of Washington, Jefferson, etc.”)
It’s understandable why a popular musical would give Hamilton’s philosophy of government less attention than his illegitimate origins or spectacular demise. Yet we should not forget that, from his first days in public life until his ill-fated duel with Aaron Burr, Hamilton fought for a vigorous, effective national government. Liberty was essential, Hamilton believed, but it could thrive only in the context of effective authority. As he declared in his closing address at the convention,
In the commencement of a revolution which received its birth from the usurpations of tyranny, nothing was more natural than that the public mind should be influenced by an extreme spirit of jealousy. To resist these encroachments and to nourish this spirit was the great object of all our public and private institutions. The zeal for liberty became predominant and excessive. In forming our Confederation this passion alone seemed to actuate us, and we appear to have had no other view than to secure ourselves from despotism. The object certainly was a valuable one, and deserved our utmost attention. But, sir, there is another object equally important and which our enthusiasm rendered us little capable of regarding; I mean a principle of strength and stability in the organization of our government, and vigor in its operations.22
More than any other leader of his day, Hamilton worked to construct a modern administrative state, with the ability to impose order on society. The Whiskey Rebellion was a direct outgrowth of his plan to create a functioning fiscal system that could manage intranational and international commerce and borrow to finance military actions and other pressing needs. The crisis of the articles, Hamilton understood, was a fiscal crisis that threatened the very capacity of the fledgling nation to survive. Taking over the debts of the states, creating a common currency, issuing bonds—these were what would make the United States capable of promoting commerce and fighting war. What’s more, they would bind the financial elite (what Hamilton called “the moneyed interests”) to the federal government rather than to the states. Today’s chorus of conservative criticism of active monetary policy—Ron Paul demanding an end to the Fed; former Texas governor Rick Perry accusing the GOP-appointed Fed chair of “treason”; leading Republicans pressing for legislation to “audit the Fed”—turns all these Hamiltonian notions on their head.
Of course, the authors of the Constitution had no way to grasp how dramatically governance would change in the centuries to come. And no matter how many books (or musicals) about these men get produced, it is impossible for us to comprehend fully the task they saw before them in Philadelphia. But one thing is clear: The conservative commentators and antigovernment politicians who call themselves “constitutional conservatives” are espousing a vision of government that those who wrote the Constitution were dedicated to defeating. Tea Partiers are fond of quoting The Federalist Papers—without acknowledging that these were propaganda pieces that put a good face on messy compromises and played down their authors’ enthusiasm for a strong national government. But if they could somehow be transported back to 1787, they would not be Federalists. They would be anti-Federalists. They would not be the people who wrote the Constitution. They would be the very people whom the authors of the Constitution hoped to marginalize.
Hamilton, of course, has been the bugbear of small-government conservatives for more than two centuries. Of the key drafters of the Constitution, it’s James Madison they embrace. “Madison was the most profound thinker among the Founders,” writes George F. Will, criticizing today’s active national government as “hostile to Madison’s Constitution.”23
In a different sort of celebration, the libertarian social scientist Charles Murray has recently proposed creating an activist organization in Madison’s name to finance litigation and public relations campaigns against government rules. The goal of the Madison Fund, as detailed in Murray’s 2015 book By the People: Rebuilding Liberty Without Permission, is to refuse voluntary compliance with federal laws so as to make it excessively costly for government to enforce them. “I want to pour sugar into the regulatory state’s gas tank,” Murray explains. Regulators will sometimes win the resulting battles, he acknowledges, but “Goliath cannot afford to make good on that threat against hundreds of Davids.” To finance his fund honoring the father of the Constitution, Murray is looking for a few of the many “billionaires,” “centimillionaires,” or even “mere millionaires” who are “principled advocates of limited government.”24
Yet on the key questions of public authority, Madison was aligned fully with Hamilton. To Madison, the precondition of effective government was a single center of authority that could compel obedience. The “fatal omission” in the articles was their lack of coercive power. The James Madison who described government as “an institution to make people do their duty” does not deserve to be associated with Murray’s crusade.
Nor were Madison’s emphatic words mere rhetoric. He came to Philadelphia with a proposal to replace the Articles of Confederation: the Virginia Plan. The plan would have given the federal government an absolute veto over state laws, a power that Madison saw as “indefinite,” available “in all cases whatsoever,” and “absolutely necessary to a perfect system.” Lest those at the convention doubt its centrality, he compared it to gravity within the universe: “the great pervading principle that must control the centrifugal tendency of the states, which without it will continually fly out of their proper orbits and destroy the whole harmony of the political system.”25
Indeed, Madison advocated a governing structure much closer to a parliamentary democracy than to our current system. In the Virginia Plan, Congress chose the executive, and during the convention, Madison proposed that members of Congress be able to serve in the executive branch—an outline for British-style cabinet government that lost by the narrowest of margins. More important, the Virginia Plan established an upper chamber that, like the House, was based on population, with its members selected by the House rather than by the states. Just as he insisted on federal supremacy over all state laws, Madison never wavered from his conviction that the Senate should mirror the House in representing people, not states. Madison’s plan shattered on the rocks of fierce opposition from the less populous states. Only after the population-based Senate was scuttled did Madison swing toward a stronger executive to check state power in the Senate and formulate his famous defense of the separation of powers and federalism.
In other words, the Constitution that many scholars describe as “Madisonian” was hardly a direct translation of Madison’s ideas into institutional design. It was a compromise in which Madison had to give up the stronger and more streamlined national government he wanted. No less than Hamilton, Madison believed that states should be subordinate to the federal government and that coercive national authority embodied “the great vital principles” of an effective political arrangement. Today’s self-appointed defenders of the Constitution—or “the Constitution in Exile,” as constitutional conservatives sometimes call it—hold views that are anything but Madisonian. Instead, their views look a lot like those of the Constitution’s original opponents.
Madison, Hamilton, and the others who sought a new constitutional democracy wanted a government that could act. Their immediate fear was not, to recall the quote of James Wilson of Pennsylvania during the convention, a government that “governed overmuch” but one that “governed too little.”26 For all the adjustments in context and content that a journey across three centuries requires, our present crisis bears an unfortunate resemblance to that which the Founders endeavored to overcome: chronic stalemate, eroding government capacity, weakened accountability, declining trust, and increasing accommodation of the narrow interests that flourish when effective public authority withers.
In August 2011 Standard & Poor’s downgraded the credit rating of the United States for the first time in the agency’s more than 150-year history. “The political brinksmanship of recent months,” the nation’s oldest ratings agency wrote, suggested that American governance had become “less stable, less effective, and less predictable.” Everyone knew what S&P meant by “political brinksmanship”: Republicans had used an obscure and once routine requirement that Congress periodically raise the upper limit on federal borrowing as leverage to pressure President Obama to accept steep cuts in spending.27 They did so even though failing to increase the so-called debt ceiling could bring about catastrophic economic effects. In essence, Republicans had taken the US economy hostage, demanding as ransom huge budget cuts that they couldn’t otherwise achieve.
The strategy was straight out of the Gingrich-McConnell playbook: Find a leverage point no one had dared use before and exploit it ruthlessly, whatever the toxic effects. At the height of the debt-ceiling fight, McConnell argued, “The Constitution must be amended to keep the government in check. We’ve tried persuasion. We’ve tried negotiations. We’ve tried elections. Nothing has worked.”28 McConnell knew a default would be terrible, but he was willing to threaten it—and many of the GOP rank and file looked more than willing to go through with it. As McConnell explained to the Washington Post at the end of the ugly fight, “I think some of our members may have thought the default issue was a hostage you might take a chance at shooting. Most of us didn’t think that. What we did learn is this: It’s a hostage that’s worth ransoming.”29
The ransom was high. Republicans insisted that federal spending be slashed—even as they insisted the still-sputtering economy was too fragile to increase taxes. As consumer confidence fell and job growth ground to a halt, congressional leaders struggled to forge a deal. Yet talks foundered on the shoals of Republican resistance to any increase in taxes. Democrats floated the idea of reducing subsidies for oil corporations and corporate jets as a small part of a deficit-reduction package. Republicans abandoned the negotiations.
In what he would later admit was a major miscalculation, Obama tried desperately to reach a “grand bargain.” With billionaire spending hawk Pete Peterson’s antideficit empire cheering him on, the president appeared willing to accept major cuts in Social Security and Medicare. Yet he insisted that higher taxes on the wealthy be part of the bargain. With just days remaining, House Speaker John Boehner made clear the GOP stance: “The American people will not accept, and the House cannot pass, a bill that raises taxes on job creators.”30 Ultimately, the president agreed to cuts in discretionary spending so deep that he believed they would force Republicans back to the table to reduce the hit on the discretionary defense budget. The strategy didn’t work. Dreams of a grand bargain evaporated.
There was once a bipartisan formula for deficit reduction: when the economy was on stable ground, a mix of tax increases and spending cuts, with an emphasis on weeding out rents paid to robber barons. That formula, however, was no longer viable. Now grand bargains meant that only one party was going to bargain. It wasn’t just that taxes were off the table. Restricting the rent seeking that drove up public spending with little public benefit also seemed to be off-limits.
To make the situation even more vexing, it wasn’t clear that GOP leaders knew how to keep their most conservative members from “shooting” the hostage. Major struggles within the party between the hard right and the harder right portended more crises to come. And indeed, the government shut down its nonessential operations for more than two weeks in October 2013 when House Republicans insisted (futilely) that the budget cut funding for the Affordable Care Act. The shutdown ended up being just five days shorter than the epic struggle between Newt Gingrich and Bill Clinton in the mid-1990s that marked the ascendance of today’s hyperpolarized politics.31
Shutdowns and debt-ceiling crises are just the highest-profile examples of how difficult it has become to get anything done in Washington. It’s always hard to do hard things. Now it’s hard to do easy things. Our nation’s infrastructure lies in disrepair even as the interest rates we would pay to finance those investments are near record lows.32 Discretionary spending caps have ravaged R&D investment and starved health care research that could produce the next great treatments. Federal Pell Grants for low-income students now cover only a modest fraction of college costs, when they once defrayed most of the expense.33 Federal agencies struggle to carry out their essential tasks for want of funding, personnel, and congressional direction.
Judged by the number of laws passed, the 112th and 113th Congresses—spanning 2011 through 2014—were far and away the least productive since World War II. The “do-nothing” 80th Congress that Harry Truman ran against in 1948 passed more than nine hundred laws. The 112th and 113th averaged fewer than three hundred.34 The best that defenders could muster is that Congress had spent a record amount of time developing and debating bills with no chance of getting signed, including more than fifty repealing, in whole or part, the Affordable Care Act. Playing off the popular children’s book, the journalist Ezra Klein called the 112th the “very bad, no good, terrible Congress.”35 And he was probably more positive than most Americans, who gave Congress the lowest ratings in the history of modern polling. Among the things more popular than Congress in opinion surveys: cockroaches, zombies, and making the United States a communist state. (For the time being, Congress remains more popular than serial killers.)36
The budget process—central to effective governing—is so dysfunctional that the parties in Congress no longer bother to offer plans with a chance of passage. Instead, Congress relies mostly on what are called “continuing resolutions” to limp along from one year to the next. Failure to pass timely, long-term, or comprehensive budgets is not costless. Contracts and grants go unwritten or unfulfilled, federal hiring languishes, agencies are left in limbo, and our military and civilian personnel waste time scrambling to get by when they should be doing their work.
On the other hand, limping along is often better than what Congress eventually does. Unable to agree on broader budget priorities, lawmakers have repeatedly imposed harsh austerity on discretionary spending for nondefense aims—the small but vital part of the federal budget that finances science and energy investments, health care research, education and training, law enforcement, and all the other essentials of day-to-day governance. In the late 1970s and early 1980s, such spending equaled 5 percent of the economy. Today it’s around 3 percent—the lowest level since such data became available in 1962. Nondefense discretionary spending is slated to fall to just over 2 percent by the 2020s.37
These numbers are so catastrophic that commentators often have a hard time believing anyone would let them occur. Writing about the GOP’s latest budget plan in 2015, former OMB director Peter Orszag scoffed that it envisioned “a path so unrealistic, no serious person could defend it with a straight face.”38 As Orszag noted, the plan proposed that the federal government spend the same amount on all discretionary functions in 2025 as it had in 2008—before adjusting for inflation. That does seem unrealistic: not a dollar more for the FBI, the National Institutes of Health, the FDA, the Transportation Security Administration (TSA), and most other government agencies over a period in which prices will likely rise between 30 percent and 40 percent. Yet when Orszag was writing, Republican leaders were defending this horrific outcome with very straight faces.
Orszag’s reaction, however, is typical of many centrist commentators. Among seasoned Washington watchers, the tendency is to see extreme positions as mostly position-taking: At some point, the “adults in the room” will step in and do what needs to be done. But those adults are becoming less numerous and even less influential. And while the sanguine centrists are busy reassuring us, they are not consistently calling out extremism, much less articulating the positive case for government that extremists attack. When, for example, Washington mandarins celebrated the 2015 budget deal that partially undid the sequester’s devastating cuts, few stopped to note that even with the deal nondefense discretionary spending would be almost one-eighth lower in 2016 than it had been in 2010—after adjusting for inflation and despite population and economic growth. By 2017, under the deal, nondefense discretionary spending would fall to its lowest level as a share of the economy ever recorded (with data going back to 1962).39
To see what this budget dysfunction means, just look down the street—or, more accurately, down at the street. As we saw in chapter 1, American infrastructure is crumbling. According to the American Society of Civil Engineers, bringing US infrastructure up to acceptable levels by 2020 would require $3.6 trillion in additional spending.40 A startling one in nine of the nation’s roughly 610,000 bridges is structurally deficient.41 Yet for years, Congress has been kicking the infrastructure can down the road. Since 2009, it has passed a dozen stopgap measures, including one that lasted just a week, making it all but impossible for states and localities and their private-sector partners to pursue the long-term planning that serious infrastructure projects require. (In 2015, Congress finally passed a five-year transportation bill that modestly increased anemic funding levels for highways and transit systems. As big a political lift as this was for the contemporary Congress, the legislation fell well short of estimated spending needs, relied mostly on one-time sources of revenue and budget gimmicks, and did nothing to create permanent funding sources for long-term investment.)42 The Federal Highway Trust Fund is vastly underfunded relative to historical standards or future needs. The reason: It’s financed by a tax on gasoline that is pegged neither to inflation nor to gas prices. The last time it was raised was 1993—a year before antitax Republicans took control of the House.43
The consequences aren’t just economic. When infrastructure fails, people get hurt. We hear about the high-profile tragedies: a deadly bridge collapse in Minnesota, a horrifying Amtrak derailment. But the biggest effects are mostly invisible. Poor public transportation means more cars on the road, which means poorer air quality and more traffic accidents. Congestion steals people’s time as well as their health—a huge uncounted cost. Failing to upgrade infrastructure also means forgoing the chance to employ the latest insights about how to design roads to reduce auto accidents, including accidents involving pedestrians. The United States used to have the lowest rate of auto accidents per capita of any rich nation. But while America has improved its safety record since the 1970s, other affluent democracies have continued to make the necessary investments and have brought down accident rates much faster. Now the United States and South Korea enjoy the dubious distinction of being the most dangerous countries for drivers in the advanced industrial world.44 According to the US Department of Transportation, obsolete road designs and poorly maintained roads contribute to fourteen thousand traffic fatalities a year. Another study estimated the medical costs of injuries from poor road conditions at $11.4 billion in 2013.45
What’s happened with infrastructure has happened with other pillars of American prosperity, too. Federal investment in medical research, digital innovation, and other breakthrough technologies, for instance, has plummeted. The fallout is hard to see, because R&D spending takes years, even decades, to pay off. But the fallout is real.
The fragmentation of American political authority has always made governance challenging. But the challenges have multiplied as our traditional checks and balances have collided with the growing antigovernment intensity of the GOP. Our political institutions were not built to handle a highly polarized struggle in which one side is openly hostile to the system itself. No less important, many of the broader social institutions that once reduced the inherent centrifugal tendencies of American politics—a labor movement spanning a third of the private workforce; a dense fabric of mass membership organizations; high levels of civic trust and participation—have fragmented and crumbled, too.46 What remains is a thin shell of governance around a thickening core of dysfunction.
We can see this in many areas: Programs and agencies hobbled and undermined; elected officials with limited incentive or capacity to achieve positive-sum compromises; and narrow interests filling the vacuum. But we should think first about what we can’t see: the toxins in our food that are getting more dangerous even as the government we need to protect us is increasingly hamstrung.
Each year, roughly 48 million people are sickened by foodborne illness—one in six Americans. Of this total, approximately 128,000 end up in the hospital and 3,000 die. According to the CDC, roughly 1 million Americans a year suffer from chronic illness as a result of food poisoning, including heart and vascular disease, neural and neuromuscular dysfunctions, kidney and thyroid diseases, and reactive arthritis. And these are effects related just to food poisoning. Other dangerous substances in our food—pesticides, chemicals, mercury—also pose risks, though the extent is much less clear.
What is clear is that the main agency charged with protecting us against these risks is overwhelmed. Congress created the FDA in the wake of Upton Sinclair’s 1906 novel The Jungle, with its grim depiction of meat factories churning vermin and filth into food. Inspection remains one of the agency’s principal and most demanding responsibilities. Over the last generation, the FDA has been required to inspect more and more facilities—more and more of which are overseas. But while the need for inspections and the number of foodborne-disease outbreaks have increased, the FDA’s capacity has not. Between 1972 and 2007, domestic inspections conducted by the agency fell by 81 percent. In 2008, according to the FDA’s own data, fewer than one-quarter of food facilities under its jurisdiction were inspected. Indeed, more than half of facilities were not inspected at all over a five-year period. At US borders, across which more and more food transits to Americans’ tables, only 1 percent to 2 percent of imports are examined.47
By 2010, the problem had grown so bad that Congress passed a law updating the FDA’s enforcement strategy. Unfortunately, the funding to implement the new law was slashed. Fierce lobbying by the food industry meant that user fees proposed by the FDA were a political nonstarter. Meanwhile, Congress appropriated less than half the total that the Congressional Budget Office said the FDA needed. “We have good plans for moving forward,” said a top FDA official in 2015. “The problem is we don’t have the money.”48 Nor does it have the manpower. Nearly half of the job openings in its crucial overseas offices remain unfilled. And while the number of workers in domestic food safety offices has risen from its nadir in 2007, it remains well below historical levels.
As grim as the FDA story is, it’s actually sunny by the standards of federal agencies today. Since the 1970s, the federal workforce has declined dramatically relative to the American population. When Eisenhower took office, there was one federal worker for every 78 Americans. By 1989, the ratio was one for every 110. Now it hovers around one federal worker for every 150 Americans.49 The biggest decline has occurred in defense-related employment. Yet the drop has been sharp in domestically oriented agencies as well. In a country where the population and economy keep on expanding, the federal workforce has been caught in a time warp.
Assume that the number of federal workers had risen in line with the US population since the late 1970s. By 2009, in this alternative reality, the Agriculture Department would have employed 83 percent more workers than it did. The US Department of Health and Human Services—the agency responsible for two of the nation’s fastest growing programs, Medicare and Medicaid—would have employed 60 percent more. Treasury (which certainly has its hands full, too) would have employed 39 percent more.50 All told, to get the employment-population ratio back to its pre-1980 levels, the federal government would need to increase its workforce by around 80 percent over the next twenty years.
Yet the loudest voices in Washington are calling for the opposite. After the 2010 midterm, GOP leaders vowed to cut the federal workforce by at least a tenth within five years.51 Not content with that goal, two Republicans in the House introduced the Federal Workforce Reduction Through Attrition Act, which would limit new federal hires to one worker for every three who left government. “Real, productive job creation takes place on Main Street America, not in the bloated federal government,” declared one of the bill’s authors, Wyoming Republican Cynthia Lummis.52 On the campaign trail in 2015, the Republicans’ most moderate major candidate for president, Jeb Bush, endorsed a similar plan.
Recent GOP budget plans have also demanded sharply higher contributions for federal workers’ health insurance and retirement benefits—on top of a three-year freeze in federal pay agreed to in the 2011 deal to raise the debt ceiling. Though it is widely believed that federal workers are overpaid, careful studies indicate that the most educated and skilled workers—who make up a much larger share of the federal workforce than the private-sector workforce—receive substantially less than they would in comparable private jobs.53 Not surprisingly, federal employee satisfaction has fallen dramatically, relative both to the recent past and to private-sector workers.54
The dirty little secret is that essential government responsibilities don’t disappear when federal workers do. They just get farmed out to private contractors or pushed down to lower levels of government. Study after study has concluded that excessive reliance on outside contractors magnifies complexity, reduces performance, and impairs accountability.55 For the federal government’s most complicated tasks—guiding scientific inquiry, managing medical payments, overseeing complex financial transactions—talented public workers are vital. “Today’s federal civil service is not bloated,” concludes John DiIulio, the public administration scholar who worked on George W. Bush’s faith-based initiatives. “It is overloaded.”56 To be effective, he argues, the federal government needs to hire one million new workers—a 50 percent increase in its workforce—in the next twenty years. DiIulio might still be working on faith-based initiatives: It will take divine intervention to achieve that goal.
Those whose job it is to serve the public have more to do and less with which to do it. The same is not true of the private industries they are regulating. Over the last generation, as the capacity of government has eroded, the organizational and financial capacity of narrow private interests has exploded. The typical public worker lives in a world of scarcity. The typical lobbyist lives in a world of abundance: lavish salaries, PR wizards, mercenary experts who can provide just the favorable finding or legislative language needed. No wonder the federal government has hemorrhaged talent.
Once, it was rare for those working in government to move into paid advocacy. Today more than half of lobbyists have crossed through the proverbial revolving door between government service and professional advocacy.57 And no wonder: The median congressional staffer makes around $50,000. Even top congressional staff are in the $100,000 range. By contrast, the average lobbyist is making around three times that. Moreover, those working in Congress and the executive branch are facing greater pressures than ever before.58 The number of congressional staff has declined roughly in tandem with the shrinking of the federal civil service. Many of Congress’s nonpartisan sources of expertise—the Office of Technology Assessment (OTA), the US Government Accountability Office (GAO), the Congressional Research Service (CRS)—have been cut back sharply or canned altogether. (The OTA was an early victim of the “Gingrich revolution.”) The House Committee on Science, Space, and Technology—created in 1958 to oversee rapidly expanding federal R&D spending, particularly in astronautics—has gone from writing and overseeing the laws that foster America’s scientific leadership to spending much of its time investigating scientists and scientific institutions for their alleged biases (especially the biases that make them think that global warming is serious and caused by humans). Stagnant pay, dwindling in-house expertise, greater outside demands, fewer fellow staff, less respect for evidence-based policy making—these aren’t ideal conditions for holding on to workers.
The problem isn’t just that talented personnel pass through the revolving door. The prospect of outside employment shapes what people do even before they start the door spinning. Sometimes, the effect is salutary: A government lawyer who wins cases is going to be more sought after than one who loses them. But there’s nothing guaranteeing this happy alignment and, indeed, lots of reason to believe that the main effect is to make public servants too solicitous of outside interests. The revolving door creates a modified Upton Sinclair rule: It’s hard to get a man to believe something when his future salary depends on his not believing it. No less important, the revolving door creates a network of personal connections between those within government and those outside pursuing narrow ends. For example, former congressional staff who become lobbyists experience major drops in their lobbying income when their former boss resigns or goes down to defeat.59 Their connections are suddenly worth much less.
The root of the problem is simple: a growing mismatch between the enormous outside pressures on government—more and more organizations in Washington spending more and more to shape policy—and the weakened capacity of government to channel and check those pressures. “More than three decades of disinvesting in government’s capacity to keep up with skyrocketing numbers of lobbyists and policy institutes, well-organized partisans, and an increasingly complex social and legal context,” argue the political scientists Lee Drutman and Steven Teles, have created “a power asymmetry crisis.”60
In other words, weak government doesn’t mean efficient government. It means government that gives away the store to the rent seekers far too often. It also means government that must confront the most vital challenges facing it—facing us—with one hand tied behind its back.
Ebola is scary. The virus, discovered in 1976 in Central Africa, has an average fatality rate of around 50 percent. Its symptoms include vomiting, diarrhea, organ failure, and internal hemorrhaging—though not, in most cases, the external bleeding that makes the disease so vivid in the public mind. There is no proven treatment and, as late as January 2016, no approved vaccine.
So it was big news when, in September 2014, Thomas Eric Duncan, a Liberian visitor to the United States, was admitted to Texas Health Presbyterian Hospital in Dallas with symptoms of the deadly virus and died shortly thereafter. A few weeks later, another case came to light: a New York doctor named Craig Spencer, who had traveled to Africa to treat Ebola patients stricken by a major ongoing outbreak in Liberia, Guinea, and Sierra Leone.61 What followed was a case study in how American politics has gone bad: a combination of shallow, alarmist news coverage, ugly right-wing recrimination, and GOP sabotage that suggested the body politic might be even sicker than the Manhattan doctor.
Let’s start with the media frenzy. As frightening as Ebola is, it’s difficult to transmit: Infection requires direct contact with the body fluids of a person exhibiting symptoms, which is why medical personnel are at such elevated risk. And while the 2014 outbreak in West Africa was the most serious since Ebola’s discovery, within the United States only a handful of cases and just one death, Duncan’s, had come to light by mid-October.
You wouldn’t have known that, however, from watching the news. Hysterical stories suggested that Ebola was a grave and present danger, playing off, and feeding, public fears about immigration and terrorism. Calming words from experts confronted skepticism from news anchors and outright derision from talking heads. Among the many self-styled authorities who appeared on cable programs: Donald Trump, the novelist Dr. Robin Cook, and Gene Simmons of the rock band Kiss. (To be fair, Simmons does have some experience spitting blood.) One longtime media observer summed up the coverage as follows: “You will personally eject blood from your anus and eye sockets! RUUUUUN!”62
The spasm of media attention produced lots of anxiety but little understanding. In October Ebola made it onto the Gallup poll’s list of top health concerns, beating out obesity and cancer. Even more striking, it debuted on Gallup’s list of the “most important problem facing the nation,” ahead of terrorism, poverty, race relations, and crime.63 Yet most Americans were unaware of even basic facts about the disease. Nearly 60 percent believed it “very likely” that Ebola could be caught from a sneeze or cough; 75 percent said that sufferers were infectious before they began to show symptoms.64 Not surprisingly, given these false beliefs, a quarter of Americans considered Ebola a “major” public health threat, and even more said they were avoiding air and train travel for safety. Revealingly, those following the story most closely had the lowest levels of knowledge: The more they watched, the less they knew.
And then, suddenly, it was over. For a few weeks after the Texas patient died, evening broadcast and cable news couldn’t cover Ebola enough, running nearly a thousand segments in just four weeks. Once the threat had passed, and it was clear that federal officials had been right to urge calm, Ebola disappeared from the news. When Gallup asked the public about the nation’s most important problem in early 2015, Ebola didn’t register at all.65
At least one American, however, remembered the hysterical coverage. At the 2015 White House Correspondents’ Dinner—an annual ritual in which the president entertains the nation’s top reporters with lighthearted shtick—President Obama brought his “anger translator” onto the stage to convey what the famously cool president was really thinking. For those who haven’t seen the “anger translator” routine on the comedy TV show Key & Peele, the translator is named Luther (played by Keegan-Michael Key), and his job is to put into words what Obama is too restrained to actually say.
Here is what Obama said to the gathered journalists: “We won’t always see eye to eye.” And here is Luther’s translation:
And CNN, thank you so much for the wall-to-wall Ebola coverage. For two whole weeks, we were one step away from The Walking Dead. Then y’all got up and just moved on to the next day. That was awesome.
Oh, and by the way, if you haven’t noticed: You don’t have Ebola!66
The wall-to-wall Ebola coverage didn’t convey much information. But it did carry at least one clear message: Government can’t save you. “Ebola is a serious threat,” wrote Ron Fournier, a senior columnist and the editorial director at the National Journal, “but it’s not the disease that scares me. What scares me is the fact that we can’t trust the institutions that deal with such threats, and we can’t trust the people who run them.”67 The columnist and Fox talking head Charles Krauthammer, writing in the Washington Post, opined that “Ebola has crystallized the collapse of trust in state authorities.”68
And these were the moderate voices: Within hours of Thomas Duncan’s diagnosis, talk radio and Fox News filled up with right-wing commentators spouting ever more outrageous conspiracy theories. Glenn Beck suggested that Dallas, Texas, was the first US city to experience the disease because it leaned Republican. According to Rush Limbaugh, Obama and the left “have this attitude, ‘Well, if they have it in Africa, by God, we deserve to get it because they’re in Africa because of us and because of slavery.’ ”69 Not to be outdone, Michael Savage—whose show, The Savage Nation, has over five million listeners—said Obama’s actions regarding Ebola rose “to levels of treason; it actually exceeds any level of treason I’ve ever seen. . . .” “Obama wants equality, he wants fairness,” Savage continued. “It’s only fair that America have a nice epidemic, or two or three or four, in order to really feel what it’s like to be in the Third World.”70
Conservative celebrities are in the outrage industry, of course. But it was a strikingly short distance from their apoplectic warnings to the criticisms lobbed by prominent GOP politicians. With the 2014 midterm looming, leading Republicans—including presidential hopeful Rand Paul (a doctor) and former Massachusetts senator Scott Brown, who was locked in a tight Senate race in New Hampshire—warned that a disease centered in West Africa and never seen before in Latin America would soon cross the Mexican border. Paul and other top GOP politicians also claimed that President Obama’s decision to provide US military support for efforts to fight the disease in Africa would lead to mass infection of American troops. Raising the specter of a “whole shipload of soldiers” infected with Ebola, Paul suggested that the CDC had “understate[d] the transmissibility” of Ebola and that “political correctness” was standing in the way of “sound, rational, scientific decisions.”71 Yet political correctness apparently did not stand in the way of widespread calls by Republicans for travel bans on affected countries—an unworkable policy that, experts warned, would not only prevent foreign medical personnel from reaching patients but also increase the incentive for those infected to evade screenings or lie to health officials.
Perhaps it’s no surprise, then, that in the 2015 Gallup poll that showed Ebola disappearing from the public’s list of important problems, the problem offered most often by survey respondents was “dissatisfaction with government.”72
The biggest and most telling tragedy of the Ebola scare is that it might well have been avoidable. Over the prior decade, funding for the National Institutes of Health declined by $5 billion after inflation—a drop of almost a fifth in the NIH’s budget.73 Even more striking, the CDC’s budget for disaster preparedness, always small, was slashed in half over the same period. Amid the economic downturn, these cuts contributed to more than forty-five thousand job losses within state and local health departments between 2008 and 2012 alone.74 And the cuts continued after the Ebola scare: The so-called sequestration legislation that ended the debt-ceiling standoff put in place tough automatic cuts in discretionary spending (again, the kind of spending that finances many of the most vital investments in America’s future, such as infrastructure, medical research, and education), further squeezing the NIH and the CDC.75
Bill Gates has argued recently, with justification, that the most dangerous “global threat” today is not terrorism or nuclear weapons but a global pandemic. You might think that a billionaire philanthropist would be unworried about budget cuts. But Gates has publicly opposed sequestration, calling its effect a “crisis” that undermines the effective coordination of public expenditures and private philanthropy. And he has scoffed at those who think the market will step in, arguing that “the flaw in the pure capitalistic approach” is to think that profit-seeking firms will invest sufficiently in cures for dread diseases with inadequate consumer demand, such as malaria.76
In retrospect, the most shortsighted cuts were those to funding for a vaccine to prevent Ebola (which, at the start of 2016, looked finally to be on the horizon). Since 2010, the NIH’s spending to develop a vaccine had dropped by half. Indeed, Francis Collins, the head of the NIH, claimed that were it not for the “ten-year slide in research support, we probably would have had a vaccine in time for this that would’ve gone through clinical trials and would have been ready.” Collins said that other treatments, too, “were on a slower track than would’ve been ideal, or than would have happened if we had been on a stable research support trajectory. We would have been a year or two ahead of where we are, which would have made all the difference.”77
In the end, the measures pushed by federal authorities—airport screening, self-reporting, specially equipped hospitals, and support for action in Africa—worked. The two people who became sick in the United States recovered, and no further cases of the disease occurred here. As the health reporter Jonathan Cohn noted, “The Ebola response turned out to be a clear public health success—a model for effective, responsive government action.”78 Nonetheless, that response would have been much more effective and perhaps even unnecessary had the capacity of the federal government not been so eroded over previous decades. And yet nobody paid a political price for this erosion. No one paid a price for alarmism or extremism. Indeed, when it became clear that government policy had proved remarkably successful despite these obstacles, no one seemed to notice.
If you were a fan of the comic strip Calvin and Hobbes, you know about “Calvinball”: the chaotic game of ever-changing rules that six-year-old Calvin plays with his stuffed tiger Hobbes. The only consistent rule in Calvinball is that you can never play it the same way twice. But the essence of the game is simple: Calvin makes up the rules as he goes along in whatever way advantages him most. If that means players must wear a Lone Ranger mask and hit a badminton shuttlecock with a croquet mallet, that’s what the rules say—until, that is, Calvin changes them again.
Washington has its own version of Calvinball. As we saw in chapter 8, politicians follow two sorts of rules: a set of formal rules that prohibit certain moves and a set of informal, long-observed boundaries that politicians feel compelled to respect. In the decades after World War II, these hazier boundaries loomed large. Opponents didn’t filibuster every piece of legislation that came up in the Senate—not because the formal rules prohibited it but because it simply wasn’t done. Nominations by the president to fill court vacancies were generally approved quickly and only occasionally with significant dispute—not because Congress couldn’t tie them up for months or turn every candidate into a defining ideological fight but because it simply wasn’t done outside of unusual circumstances. Congressional deference was even greater for nominations to the executive branch, which the president has the greatest authority to oversee. And when big legislative fights were over, they were over. Sure, sometimes the losers would head for the courts or start mobilizing to rewrite the law. But if the courts didn’t come to the losers’ defense—and the norm was for them to stay out of big legislative fights—the losers had to lick their wounds, regroup, and head back through the legislative gauntlet. That was how the system worked.79
Or that was how the system once worked. In Capitol Hill Calvinball, the formal rules remain the same, but it’s still possible never to play the game quite the same way twice. The informal rules—the norms once policed by the Washington establishment—turn out to allow plenty of leeway to make up the rules as you go along. And as the gap between the parties has increased, the incentives to play Capitol Hill Calvinball have, too.
In previous chapters, we have examined the transformation of the filibuster from a rare parliamentary tactic into a routine weapon. Republicans were the innovators here: first in the early 1990s, as Gingrich’s Senate allies pushed for a more confrontational stance, and then in the late 2000s under Mitch McConnell. Republicans filibustered popular as well as controversial legislation—bills that would pass unanimously as well as those that would just squeak through. Sight unseen, the GOP response was obstruction.
One way to measure the prevalence of filibusters is to count the number of cloture motions to end them. From 1917 until 1970, there was a grand total of 58 cloture motions—roughly 1 per year. From 1970 until 1990, there were 365—approximately 18 per year. But then the new GOP strategy kicked in. Between 1991 and 2006, there were 563 such motions—just over 35 per year. That might have seemed an upper limit, but when Democrats took back control of both houses of Congress in 2006, the GOP Senate minority upped the ante. From 2007 through 2014 (when Democrats lost the Senate), there were 644 cloture motions—or more than 80 per year. When Republicans have raised the stakes, Democrats have stayed in the game. After Republicans captured the Senate in 2014, for example, Democrats staged more filibusters than they did the last time they were in the minority. But it is Republicans who have consistently rewritten the rules.80
So normal has this sharp historical change come to seem that the news media barely report on filibusters anymore, making them even less costly for those who launch them. During President Obama’s first two years in office, news stories focused relentlessly on the president’s ability to attract the support of the last few Democratic senators who constituted the party’s fleeting sixty-vote majority. Left unsaid was that these senators’ votes were needed only because of the unprecedented obstruction of the GOP. Now it was just taken for granted that the Senate operated under the “rule of sixty.”
The filibuster proved beneficial to Republicans not just because it allowed them to block or shape legislation but also because it made the Democrats look so bad. Serial filibusters stymied Congress and took up valuable time that otherwise might have been used to advance the majority’s agenda. The tortured efforts of Democratic leaders to rally the troops made them look disorganized and ineffectual. The need for sixty votes also created incentives for individual Democrats to hold out for special deals—which, when granted, simultaneously weakened legislation and gave Republicans fresh material for their attacks.
Gridlock, in short, wasn’t just a policy win. It was a political win as well. With their committed campaigns of obstruction, Republicans severely compromised both Clinton’s and Obama’s early legislative drives. Yet far from paying a price for their intransigence, in 2014 they ended four years of minority obstruction in the Senate with majority control of both houses of Congress.
If gridlock has made Congress less important, it has made the federal courts more important. When Congress is silent, courts become louder voices in lawmaking—not just because political actors turn to the courts to resolve disputes that Congress can’t or won’t, but also because those courts are more likely to have the last word.
Which means that who is on the courts matters more, too. Federal judges are nominated by the president with the “advice and consent” of the Senate. For much of the twentieth century, presidents consulted with key senators and then made nominations that were approved quickly. As recently as the early 1980s, virtually all presidential nominees to the federal courts were confirmed, usually within a matter of weeks.
That has changed. The big shift came after Bill Clinton brought to an end twelve years of Republican control of the presidency. Confirmation rates for judicial nominees fell to under 70 percent in the mid-1990s, with the average time from nomination to confirmation increasing to around six months toward the end of the Clinton presidency.81 Although Clinton tried to accommodate the GOP by nominating moderates, Republicans barraged the White House with an unprecedented number of filibusters and “holds” (when individual senators delay consideration of a nominee with the implicit threat of a filibuster—another informal practice once used rarely). Once again Capitol Hill Calvinball had a new set of rules.
While both parties played by the new rules, rewriting them was mostly a Republican project. This became apparent when Obama took office in 2009. Presidents typically have greater success with their nominees when they control the Senate. Yet Obama’s Democratic majority did little to temper GOP resistance. In Ronald Reagan’s first two years in office, he had a GOP Senate, and around 95 percent of his nominees were confirmed. In Obama’s first two years in the White House, he had a Democratic Senate—and only around 55 percent of his nominees were confirmed.82 In Reagan’s first term, the median time from nomination to confirmation was 28 days. In Obama’s, it was more than 215.83 These numbers are all the more notable because, when Obama entered office, the federal courts had an unprecedented number of vacancies—including a record forty-nine openings considered “emergencies” by the Administrative Office of the US Courts.84
That GOP obstruction was the problem became undeniable when frustrated Democrats changed the rules governing nominations in 2013. Using their authority as the majority party, Democrats voted to allow confirmation of federal judicial nominees by a simple majority, except in the case of nominees to the Supreme Court. Many moderate Democrats had long resisted this rule change, fearing they would lose the leverage the filibuster gave them. Moreover, all Democrats worried about what would happen if and when Republicans regained control of the Senate. But the breakdown of the confirmation process overrode these doubts, and after November 2013, appeals and district court nominees could no longer be blocked by the GOP minority. In 2014 the Senate confirmed 89 of Obama’s judicial nominees. Federal vacancies fell from record highs to historic lows, with only 7 of 179 seats open on the federal appeals courts and 31 of 677 in the federal district courts.85 Apparently, the main obstacle standing in the way of fully staffed courts was congressional Republicans.
The final example of Capitol Hill Calvinball might be the most troubling—because it strikes so centrally at the legitimacy of democratic governance. In their 2012 book It’s Even Worse Than It Looks, Thomas Mann and Norman Ornstein describe a set of obstructionist strategies they call “the new nullification.” The phrase comes from the ugly history of state resistance to federal laws. Yet Mann and Ornstein apply the term to a range of contemporary tactics that opponents use to cripple democratically enacted laws. These include coordinated assaults on those laws’ constitutionality, denying funds necessary for their implementation, and “blocking nominations, even while acknowledging the competence and integrity of the nominees, to prevent the legitimate implementation of laws on the books.”86
Consider the fight over the Consumer Financial Protection Bureau, an agency designed to put consumers on a slightly more level playing field with those offering complicated financial services. When the CFPB was included in President Obama’s financial reform legislation, the Chamber of Commerce and Wall Street tried desperately to kill it. The head of the Chamber’s Center for Capital Markets Competitiveness vowed to “spend whatever it takes” to defeat the proposal.87 Despite all the spending, however, the opponents lost—which would usually mean the agency would go into operation. Yet Republicans refused to confirm a director for the CFPB unless President Obama agreed to changes that would weaken it greatly. When Obama refused, Republicans threatened a filibuster to block the intellectual architect of the bureau, Harvard law professor Elizabeth Warren, from becoming the agency’s first director. (They might have been better off letting her go through, since she would go on to knock off Massachusetts’s GOP senator, Scott Brown.) Obama then nominated Richard Cordray, a former Ohio attorney general. Republicans acknowledged that he was qualified, but they did not budge from their nullification stance: no changes in the agency, no head of the agency.88 Obama finally installed Cordray in January 2012 through a recess appointment, made while the Senate was holding pro forma sessions expressly intended to prevent such appointments.
It is worth pausing here to consider just how audacious this demand was. The CFPB was law. Republicans didn’t have the votes to repeal it. Yet the GOP still felt it could win the war by refusing to carry out its constitutional role of advice and consent. Nor was this fight the only example of the new nullification at work. The other appointments that Obama made during the Senate’s sort-of-recess were to the National Labor Relations Board, an eighty-year-old agency with around 1,700 employees charged with adjudicating private-sector labor disputes. When Obama entered office, the five-member board had only two members—too few to issue binding rulings. Though Obama followed tradition and nominated Republicans alongside Democrats, the GOP threatened to filibuster several of his nominees and refused to act on any of them. It took a first set of recess appointments in March 2010 (one Democrat and one Republican) to get the board running again.89 But with the GOP vowing to block future nominations to fill pending vacancies, the agency’s viability remained in doubt.
Again, the NLRB was established by law to play a specific role. Again, business lobbies and Republicans disagreed with that role. And again, unable or unwilling to change the law through the normal democratic process, they threatened to make the agency dysfunctional. After a complaint against Boeing in 2011 by the NLRB’s general counsel (who acts independently of the five-member board), Senator Lindsey Graham of South Carolina vowed to block all nominations to the board. “The NLRB as inoperable could be considered progress,” he declared.90 The complaint was dropped, but GOP resistance continued—even after Obama’s reelection. In July 2013, Mitch McConnell made clear the basis of GOP objections with a remarkable demand: He would support Obama’s nominees only if the president agreed to a board with a Republican majority. Nullification or capitulation—those were the choices.
A similar intransigence has crippled the Federal Election Commission. Created in the wake of Watergate, the FEC is charged with policing violations of campaign finance law. With money pouring into politics from a growing range of legally questionable organizations, policing seems like it should be a priority. For roughly a decade, however, the FEC hasn’t been able to do much at all: Between 2008 and 2012, the number of FEC enforcement cases collapsed from 612 to 135.91 The commission is required to have three Democrats and three Republicans and can act only with four votes. But Republican commissioners have refused to act on all but the most egregious violations—and maybe not even then: In 2008, in what seemed a simple case, a wealthy friend of Mitt Romney’s spent $150,000 to fly Romney campaign workers to a fund-raiser. The legal limit on such “in-kind” donations is $2,600. The commission deadlocked 3 to 3.92 The Democratic chair of the commission, Ann Ravel, admitted in 2015, “The likelihood of the laws being enforced is slim. . . . People think the FEC is dysfunctional. It’s worse than dysfunctional.”93
Even when nullification isn’t the goal, the contentious, drawn-out confirmation process impairs governance. During Obama’s first year in office, fewer than 65 percent of his nominees to the executive branch were confirmed, compared with just over 80 percent in George W. Bush’s first year.94 The forced vacancies included scores of high-ranking officials whose job was to assist the Treasury secretary in dealing with the financial crisis. A host of critical offices went without heads for months, from US Customs and Border Protection to the National Highway Traffic Safety Administration to the Centers for Medicare and Medicaid Services (which by 2012 had not had a confirmed head in six years). Over three national elections, from 2010 through 2014, the US Election Assistance Commission—created to help voters after the contested 2000 election—lacked a single commissioner.95 Not surprisingly, Obama ended his first term with a higher share of vacancies in Senate-confirmed positions than either Bill Clinton or George W. Bush.
Some effects are easy to see: It took nearly a half year for President Obama’s 2014 nominee for US attorney general, Loretta Lynch, to receive Senate confirmation.96 Yet as Mann and Ornstein note, the most profound effects are more hidden: “Citizens offering to serve their country, often at significant personal and financial cost, are forced to put their personal lives on hold for many months. With the stress this puts on their careers, marriages, and children, will really talented people remain willing to subject themselves to such indignity? The government that we want to be more effective is crippled.”97
Mann and Ornstein meant their critique as a good-government salvo against the breakdown of American politics—one that spoke the truth that too many Washington insiders were afraid to say. As they wrote in the book’s most powerful paragraph:
However awkward it may be for the traditional press and nonpartisan analysts to acknowledge, one of the two major parties, the Republican Party, has become an insurgent outlier—ideologically extreme; contemptuous of the inherited social and economic policy regime; scornful of compromise; unpersuaded by conventional understanding of facts, evidence, and science; and dismissive of the legitimacy of its political opposition. When one party moves this far from the center of American politics, it is extremely difficult to enact policies responsive to the country’s most pressing challenges.
The book was a broadside against Capitol Hill Calvinball. “The argument we’re making is that our politics will never really get better until the Republican Party gets back into the game, instead of playing a new one,” Mann said after the book came out. “We want a strong, conservative Republican Party—but one with some connection with reality.”
What they got instead was a sharp rebuff from the media that had once made Mann and Ornstein go-to political analysts in Washington. After the book’s publication, they were no longer quoted in mainstream news stories or invited to the public affairs shows on which they had appeared regularly or, for that matter, discussed by the parts of the news empire set up to examine its own practices. “What the fuck is an ombudsman doing if he’s not writing about this?” Ornstein complained to Huffington Post in late 2012. Mann lamented, “I can no longer be a source in a news story in the Wall Street Journal or the Times or the Post because people now think I’ve made the case for the Democrats, and therefore I’ll have to be balanced with a Republican.”98
To Mann and Ornstein, the problem was the continuing insistence that both Republicans and Democrats were equally to blame for government dysfunction. Mann marveled at the degree to which the well-funded campaign by Pete Peterson and others to elevate the deficit into the nation’s number one concern had led many journalists to conclude that American government was overextended—and that Democrats, in denial about this alleged reality, were at least as complicit as Republicans in the failure to address the problem. “The Peterson world, I think, has given journalists the material to keep doing what they’re doing,” said Mann.
Seeing both parties as equally at fault seems hardheaded and superficially suggests objectivity, but it’s an abdication of responsibility. “If voters are going to be able to hold accountable political figures, they’ve got to know what’s going on,” explained Ornstein. “And if the story that you’re telling repeatedly is that they’re all to blame—they’re all equally to blame—then you’re really doing a disservice to voters, and not doing what journalism is supposed to do.” This responsibility is not journalists’ alone. But as other sources of truth telling have eroded or been sucked into the partisan vortex, the abdication of the news media has become more damaging. There is no bipartisan Washington establishment anymore, nor the dense network of mass-membership organizations that once helped citizens figure out what Washington is up to, nor much deference to scientific authority that conflicts with what partisans or powerful private interests insist is true. If one side is tearing down government, it’s a deep problem when those writing about American politics are convinced that the mess is thoroughly bipartisan.
The breakdown of American governing authority involves a vicious cycle. Those who cripple governance pay little price. To the contrary, they gain when trust in government falls. They gain when government can’t regulate or tax the narrow private interests they support. And they gain when it becomes harder and harder to make the case for the effective public authority that Hamilton and Madison both embraced. If we are to break the vicious cycle, we need to understand who is to blame. We also need to know what to do—the focus of our final chapter.