4

Manacling the Octopus

PROLOGUE: BRIGHT LIGHTS AND RUBBER HOSES

Vincent Ragosta Jr. hadn’t been out of law school long when, late on a Friday night, the phone rang in the small apartment he shared with his wife in Providence. A colleague was calling with an urgent request from a longtime client: her husband had been arrested outside a bar downtown—disorderly conduct—and someone would have to bail him out. As a young associate at his family’s law firm during the 1970s, Ragosta was the next man up. He had, by that point in the evening, already begun to unwind. But, perhaps because he was so green, he felt a rush of excitement. The young lawyer had never been drafted to do anything like this. So he threw on a suit, grabbed his briefcase, and headed down to the Fountain Street police station.

Even though Fountain Street served as the department’s city-wide headquarters, the station house felt sleepy, if not relaxed, when Ragosta arrived—it was, after all, a routine weekend evening, with not much doing. So he was caught off guard when, as he sat on a wooden bench, the room suddenly came alive. When the young lawyer looked up, he saw that Providence’s chief of police, who would rarely have made an appearance during the evening shift, had swept into the building. Ragosta sensed that everyone was suddenly on edge. Officers behind the desk snapped to attention, each attempting to wear a studious expression while paging through piles of suddenly important papers. Everyone wanted to appear busy.

To this day, Ragosta isn’t sure why the chief came downtown that night. In the mid-1970s, Providence was, admittedly, a rough town. As in other working-class cities in New England, the decline of manufacturing—most notably, in Rhode Island, the textile and costume jewelry industries—had left working-class communities restive and angry. Beyond that, Mayor Buddy Cianci’s Providence was haunted by the barely hidden menace of organized crime. Against that backdrop, policing Rhode Island’s rusting capital was a challenge. To do the job well, you needed to cut a fearsome profile—to elicit discipline in the ranks. And so it was no wonder that when the chief walked in, the cops on duty bristled in fear. You didn’t trifle with department brass, and you certainly didn’t want to be seen disrespecting the chief.

Unfortunately, at the moment the chief arrived, one poor slob of an officer wasn’t able to get himself together—he was too far gone. Had this Friday evening been like most others—a lazy parade of inebriated bar patrons being booked for minor offenses—the young cop wouldn’t have had any reason to stand on ceremony. From his posture alone, you could tell he was just biding time until the end of the shift. He was fat—“corporeal,” as Ragosta described it, years later. He was slouching. He wasn’t wearing his hat, as department policy required. Perhaps worse, his shirttails were out. He looked a mess, and when the chief walked past, the young cop was caught utterly flat-footed. It was like a scene out of a sitcom—but without the humor.

Describing what happened next more than forty years later, Ragosta still professes shock. The chief stopped, gave the officer a quick up-and-down, cocked his arm, and slapped him with an open hand. As the younger man recoiled, the chief began barking commands. “Tuck in your shirt. Where’s your fucking hat? Get your act together!” The disheveled cop, humiliated, scurried into the back of the station to clean himself up. And Ragosta, embarrassed enough to look away, suddenly gleaned a much clearer understanding of what it was like to serve in the ranks of Providence’s police department.1 However fearsome the city’s officers might have appeared while walking the beat, they were themselves vulnerable to violence from above. And as would soon become clear, the dynamic on display that evening in Providence was indicative of the norm across the country.2

At the time, that dynamic was only beginning to change. Amid the tumult of the 1960s and 1970s, America’s perceptions of law enforcement had begun to evolve. Previously, during the period when authority figures had been more readily venerated, cops had often enjoyed broad public esteem. If not every community was entirely satisfied with the ways officers comported themselves—Black communities were regularly subject to disproportionately harsh treatment—the men and women charged with upholding law and order were typically shown a great deal of deference, even among progressives. But as the movement’s cultural aversion to power had taken hold, cops had become subject to more intense scrutiny. Victims of misconduct and brutality had begun bringing suits against the blue. And in many cases, the Supreme Court responded by issuing rulings that required officers to raise their standard of conduct.

Mapp v. Ohio (1961) had directed courts to exclude evidence that police had collected through unconstitutional searches. Escobedo v. Illinois (1964) barred statements elicited after police denied suspects’ requests for a lawyer. Miranda v. Arizona (1966) required officers to advise suspects of their rights.3 By the late 1960s, the message had been made clear—officers had, in the past, too frequently abused their authority. Their wings now needed to be clipped. By the time news cameras caught cops whaling on protestors outside the 1968 Democratic National Convention, progressive attitudes had broadly shifted. Time to clamp down. Whatever fly-by-night practices had previously been permitted would need to be stamped out moving forward.

That change elicited a variety of responses. Some officers, of course, accepted the new procedural regime without protest. Some resented the new strictures, irritated that, in their view, a bunch of bleeding-heart lawyers were hog-tying the brave men holding a thin blue line. Still others began ignoring the rules, imagining themselves as Dirty Harry types—vigilantes in uniform. But what few outside the police community seemed to realize was that there was another reaction as well: insofar as lawless hippies, smelly vagrants, and out-and-out criminals were now being protected from abusive police behavior, officers themselves remained vulnerable. As would be made clear to Vin Ragosta during his visit to Providence’s Fountain Street station, many of the men and women charged with keeping the streets safe were more vulnerable in the station house than some criminals were in the wild. And that disparity spurred varying degrees of resentment and bewilderment in the ranks. Who, at long last, was going to protect the officers from the chiefs?

The question fit into a broader context. By the 1970s, many Americans had come to embrace a certain idea about the abuse of power. It was all too easy, many had concluded, to compel ordinary people to do terribly depraved things. Mao had convinced millions to torture their neighbors during the Cultural Revolution. The Vietcong had convinced their adherents to inflict untold horrors on American captives in the jungles of Southeast Asia. And while few, if any, would make the comparisons explicit, it was easy to imagine something similar might have happened before Chicago’s police had beaten protestors outside the DNC, or before members of the National Guard had murdered students at Kent State. Not that the perpetrators weren’t themselves responsible on some level—but perhaps they’d been cowed into their demented behavior by the enforcers of some cruel Establishment order. Perhaps, in fact, members of the rank-and-file felt as if they’d had no choice.4

It was in this context that, decades later, Vin Ragosta used the phrase “bright lights and rubber hoses” to describe the way chiefs had taken to interrogating rank-and-file officers during that period. Providence’s police chief had made no bones about slapping a disheveled officer late on a Friday night. But that was just a peek behind the curtain. There was nothing to stop a chief from throwing a wayward officer into an interrogation room and torturing him until he confessed to running a racket, or placating a mobster, or, for that matter, not running a racket or not doing favors for the local mob. Many officers presumed that the only way for them to advance up the ranks, or avoid a hazing, or even keep their jobs, was to be entirely subservient to the brass. Put simply: individual officers were powerless against the System.

At the time, few would have been confused about who exactly constituted that system—in the popular imagination, they were all of a type. Chicago mayor Richard Daley. Birmingham, Alabama, public safety commissioner Bull Connor. General George Patton. In Baltimore, the System was personified by Donald Pomerleau, a former marine who would serve as the city’s police commissioner from 1966 to 1981. Pomerleau relished his role as Charm City’s chief enforcer. And while he was credited near the outset of his tenure with bringing order to a flailing department, his detractors came to view him as a kind of municipal dictator.5 When asked at one point if he’d directed his undercover officers to keep tabs on political figures, he’d brashly replied, “Just the blacks. Just the blacks. Just the blacks.” He explicitly intimidated other public officials in the style of J. Edgar Hoover—compiling dossiers and hinting ominously at the leverage he wielded.6 He was a man whom many in Maryland feared. And as it turned out, some among those who feared him the most were members of his own department.

Early in his tenure, Pomerleau had been rumored to have assembled Baltimore’s entire command staff into a single room and defied anyone to deny that he was the “sole boss.”7 He’d instituted a practice of subjecting officers to lie detector tests—and firing them if they failed.8 Some claimed he had often taken to placing officers in a “sweatbox,” questioning them for hours until they finally admitted to a crime, real or contrived.9 And as the nation’s judges had begun clamping down on police abuse of ordinary citizens, the chasm between suspect rights and officer rights had only appeared to widen. Pomerleau was doing to his rank-and-file what the rank-and-file were precluded from doing to suspected perps (even if, in some cases, they did anyway). It didn’t seem fair.

In Baltimore, leaders of the local police union were quick to highlight the discrepancy. One complained that, while criminal suspects were sure to be released if they hadn’t been read their rights, “a policeman is never advised of his rights because he has no rights.” Officers argued that the city’s beat cops operated perpetually under a “black cloud,” fearful that they might at any time be subject to suspensions or ruined careers without even facing any incriminating evidence.10 In the wake of news that 1,250 bags of heroin worth $100,000 had been taken from the department’s evidence control unit, the strong suspicion was that higher-ups—perhaps some of the Commissioner’s favorites—had been involved. But Pomerleau was connected to the Nixon-appointed US attorney who purportedly gave him a heads-up. And when a local prosecutor threatened an investigation, Pomerleau sent a message by withdrawing two officers from the prosecutor’s office.11 You were either with the Commissioner or against him—and if you were against him, Katy bar the door.

Progressives had cheered when the judiciary began requiring cops to adhere to more rigorous standards of professionalism. And now, many wanted to apply the same approach to another circumstance defined by vast disparities of power. The problem here wasn’t just Pomerleau’s penchant for abuse—it was Pomerleau’s system. By the 1970s, steeped in the culture of the moment, reformers wanted to do something that many of their peers would have considered anathema during the 1950s and early 1960s: rein in the authority of the purportedly wise men running powerful institutions. The Baltimore chief’s shenanigans weren’t stand-alone failures; they were just the latest illustrations of an Establishment gone wild. And so, for progressive reformers, the solution wasn’t simply to replace one abusive chief with another. It was to impose checks that would prevent power from corrupting men and women of goodwill. Here was an opportunity for the movement’s Jeffersonian impulse to take flight.

In what would seem, just a few decades later, like a strange-bedfellows moment, unions were on the side of reform.12 Beyond banning corporal punishment, organized labor also wanted to institute procedural reforms to ensure that officers were given opportunities to correct bad behavior before being let go—no more summary firings. To many progressives looking on from the outside, that just seemed fair: if tenured professors at public universities enjoyed job security, officers putting their lives on the line should get something similarly robust.13 And it just went from there. The details of the various state laws passed as Law Enforcement Officers’ Bills of Rights varied by jurisdiction: in some places, chiefs were prohibited from interrogating officers except under certain supervised circumstances; in others, the brass needed to warn officers before subjecting them to discipline.14 In some places, complaints had to be kept confidential, and in others disputes had to be settled by “neutral” arbiters. Across the board, however, the intent was largely the same: Pomerleau-like reigns of terror inside police departments would have to end.

Decades later, many progressives coming late to the scene would presume that these guardrails were the handiwork of archconservative villains—legislators eager to carry water for the police unions in exchange for campaign donations and endorsements. In the 2010s and 2020s, as demonstrators filled streets across America in protest of the police killings of Michael Brown, Tamir Rice, Breonna Taylor, and, most pointedly, George Floyd, progressives typically derided LEOBORs. Onerous due process protections made it impossible, in many cases, to get wayward officers fired from the force. Bad cops appeared free to wreak havoc without repercussion. Racist cops patrolled with impunity. Similarly frustrating, LEOBOR-connected confidentiality provisions prevented chiefs from commenting on accusations of misconduct, giving communities the impression that their complaints were falling on deaf ears—and, in some cases, they certainly were.

Lost amid the frustration, however, was a realization that, upon their inception, LEOBORs had frequently been viewed as progressive victories—the means to pare back the power of devious figures like Commissioner Pomerleau. Working off the assumption that the line cops would be kinder and gentler if they were liberated from the nefarious influence of the bad chiefs—that cops on the beat would uphold a standard of morality if they did not work in fear of the brass—LEOBORs had been framed as a great leap forward. With the same sorts of protections that guaranteed university professors the ability to say no if a dean demanded they adopt a racist curriculum, cops would now be able to reject a chief’s demand that they beat up protestors, or plant evidence, or gratuitously pull over Black motorists. LEOBORs would protect Black cops from being harassed by white chiefs. Or so many reformers at the time had initially presumed.

Decades later that perception would change. If progressive reformers and police unions had been allied against the chiefs during the 1970s, by the late 2010s the chiefs and reformers were aligned against the rank-and-file. By then, both rued LEOBORs for protecting abusive officers, wrapping disciplinary measures in an absurd tangle of red tape, and precluding responsible supervisors from maintaining proper compliance. The effect had been perverse: to exact a minimal level of discipline, chiefs often chose to punish more serious misbehavior with slaps on the wrist, lest they get wrapped up in the vortex of a LEOBOR review. But then, if the officer fell out of line again, the first offense could not be used as precedent for a firing.15 When reformers became aware of this racket, they were rightly incensed. But chiefs, even while facing the public’s wrath, believed their hands were tied—had they pursued more punitive justice in the first instance, they might not have been able to impose any penalty whatsoever.

In the years following his eye-opening experience on Fountain Street, Vin Ragosta built a practice as the lawyer many of Rhode Island’s cities, towns, and police departments turned to when seeking to discipline errant officers. And his exposure to the System eventually convinced him that Rhode Island’s LEOBOR had overcorrected for the abuses of the postwar era. A political independent, Ragosta hailed from a Democratic family. His heart had gone out to that “corporeal” officer slapped years earlier by Providence’s imperious police chief. But post-LEOBOR, he’d seen too many unfit officers retained for the wrong reasons. And he’d come to conclude that nearly everyone was suffering as a result. Citizens were wary of the police. Officers faced skepticism from the people they had sworn to protect. And police departments found it increasingly difficult to recruit good new cops. A vicious cycle.

Nationally publicized instances of police brutality—most notably Derek Chauvin’s videotaped 2020 murder of George Floyd—sparked demonstrations and spurred a renewed interest in LEOBOR reform. But more mundane episodes of impunity were born of the same dynamic. In 2021, the City of North Providence, Rhode Island, tried to fire Scott Feeley, a police sergeant charged with a whole rash of violations. He’d been insubordinate to his superiors. He’d lied in the course of an internal affairs investigation. He’d failed to call in traffic stops. In all, Chief Alfredo Ruggiero Jr. brought ninety-seven different charges against him, and the LEOBOR tribunal empaneled to evaluate those charges found him guilty of seventy-nine of them.16 But the panelists decided, in the end, not to strip him of his badge, as the city demanded. Instead, he was simply demoted to patrolman after serving a forty-five-day suspension. North Providence’s mayor, Charles Lombardi, responded in outrage: “What do we do with this guy? He was found guilty of failing to obey, truthfulness.… This is insulting.”17 And yet there was nothing the mayor could do to keep him off the beat.18 Feeley returned to service.

The stories differed from place to place, but the underlying dynamics were rarely distinct. Chauvin, the officer eventually found guilty of murdering George Floyd, had previously been subject to twenty-two complaints and internal investigations as a Minneapolis police officer.19 Whatever his reputation for brutality, he’d remained on the force, protected in no small part by reforms often championed by progressives decades before. Those protections now appeared like vestiges of Jim Crow—invitations to abuse. In fact, however, they had been born in large part from the reform movement’s Jeffersonian impulse. Feeley’s and Chauvin’s impunity, like that for so many officers whose misconduct remained shrouded by process, had been nurtured by the cultural aversion to power. The ugliness of Hamiltonian power had spurred progressives to pursue reforms that invited a different form of abuse. The movement had undermined itself—and this wouldn’t be the only time.

THE RIGHTS REVOLUTION

As we saw in the last chapter, the tumult of the 1960s marked a narrative turning point for the progressive movement—its zeitgeist was changed substantively by the upheaval. Downstream of that philosophical shift was the movement’s policy agenda—the nuts-and-bolts approach progressives would propose to address various challenges. If, after the change, the movement was more driven to push power down than up—if, beyond losing faith in the Establishment, it was bent on prying off the octopus’s tentacles—progressivism’s practical approach would also have to evolve. As we’ll see, the substantive thrusts of an explicitly Jeffersonian agenda would turn out to be very different from those born from the movement’s now more muted Hamiltonian impulse. The shift wasn’t merely narrative—it was practical as well.

Before tackling how, exactly, the cultural aversion to power rippled out across the progressive agenda, we need to recall the context. Decades earlier, during the movement’s formative years, reformers had frequently been consumed by their frustration with the courts. Time and again, nineteenth-century judicial doctrine had frustrated the reform community’s designs for improving working conditions, raising wages, and much more. What is now remembered as the Lochner era was defined by a jurisprudence determined to thwart government interference in the private economy, thereby enfeebling centralized government. Progressives ranging from Theodore Roosevelt to Robert La Follette to Franklin Roosevelt proposed at various times a range of reforms designed to diminish the guardrails judges erected around executive action, from judicial overrides to court packing. Time and again, they failed. If progressives were going to overcome the judicial barriers to centralization, the change would have to emerge from within the courts themselves.

And it did come—and with it a great sense of relief. Lochner’s repudiation in the late 1930s made a relic of the old progressive desire to push past the nation’s judiciary. Suddenly flush with Roosevelt appointees, the Supreme Court appeared less like a menace than an opportunity. Perhaps, beyond merely deferring to the New Deal’s newly expanded administrative state, the courts could be jerry-rigged into a progressive force for good.20 With liberals now in control, reformers began to consider a previously unasked question: Where now could judges serve as the tip of the progressive spear?

The intellectual journey that followed began with what many legal scholars refer to simply as “the Footnote.” Written by Justice Harlan Fiske Stone and appended to the Supreme Court’s 1938 Carolene Products decision—a ruling that effectively curtailed the scope of the judiciary’s review of most economic regulation—the Footnote suggested a series of issues where courts, in Stone’s view, should take a more proactive role.21 As sketched out in the justice’s brief delineation, and then colored in through subsequent jurisprudence, the judiciary reoriented itself to take a more aggressive role in guaranteeing democratic fundamentals (like voting rights), protecting minorities who had been victimized by prejudice, and upholding other rights conferred explicitly by the Constitution.22 Quietly, this represented a watershed: by shedding the court’s traditional role as a bulwark against the movement’s Hamiltonian impulses, Stone was pointing the judiciary to become an agent for Jeffersonianism.

This seemingly subtle shift in American jurisprudence, begun in the late 1930s, did not spark a wholesale change overnight—it was not the lightning rod Brown v. Board of Education would prove to be. But the Footnote did plant a flag or, perhaps more accurately, lay the seedbed for changes that would flower several decades later. In a decision designed broadly to pare back judicial interference—Carolene Products is largely remembered for the fact that it was one among many decisions green-lighting big, centralized, Hamiltonian regulation—Stone was specifying areas where the court could take a more proactive role. But in the late 1930s and right through the era when progressivism was more focused on stability, exercising the “rights” Stone delineated was not nearly so high on the movement’s agenda. The New Left, maximum feasible participation, Eldridge Cleaver’s “one giant octopus,” and the cultural aversion to power were still a quarter century in the future; Brown v. Board of Education was a ways off as well. Jeffersonianism had, at that point, only a fraction of the same purchase on progressive minds and hearts. But come the late 1960s and early 1970s, Stone’s tools remained there for the taking. And it was only a matter of time before those wanting to chip away at the Establishment would find ways to use them.

To be clear, these two separate shifts—the evolving culture examined in the last chapter and the changing jurisprudence mentioned here—weren’t entirely unrelated. But for the most part, they evolved in parallel. In the immediate aftermath of the New Deal, various elements of the administrative state had license to operate almost entirely without guardrails. The National Labor Relations Board and the Securities and Exchange Commission, for example, were initially empowered to implement rules without any substantive review, and it wasn’t clear when or if someone who believed the rules were unfair could complain, let alone challenge them in court. Hoping to establish a thoughtful mechanism for weighing competing concerns, a nascent movement among legal scholars, the Legal Process School, emerged in favor of regularizing judicial review—ensuring that those under the thumb of regulatory power had recourse if the regulators overstepped their bounds.23 This was, by some measure, the gospel of balance applied to the worlds of bureaucracy and regulation.24

The scholars and jurists who composed the Legal Process School weren’t alone in fearing Hamiltonian progressivism gone wild. Other progressives had simultaneously begun to worry about the “threat of capture”—namely the concern that regulators might become tools for the industries they were supposed to oversee (as many would complain during the Eisenhower years). The ACLU, which had, to that point, largely viewed the courts as bastions of moneyed interests, began in the post-Footnote era to conceive of the judiciary more as an institution uniquely equipped to serve as a backstop against government abuse. While less iconic than the organization’s work protecting free speech rights, progressive civil libertarians were determined to rein in the administrative state for much the same reason they were interested in defending individualized expression: having unleashed the power of big government—the same power that, in darker hands, Benito Mussolini had infamously used to keep the trains running on time—they worried the centralized bureaucracies progressives had lionized might eventually be used for ill.

The touchstone of this rearguard action became the Administrative Procedure Act (APA) of 1946.25 Less celebrated than other iconic progressive achievements, the APA mandated a certain cadence of rulemaking within federal bureaucracies, requiring them, with certain exceptions, to publish proposed rules, to make accommodation for public comment, and then to consider the reaction.26 The law precluded regulators from adopting rules that were “arbitrary and capricious” or “unsupported by substantial evidence.”27 And it stated explicitly that regulatory procedures were subject to judicial review—put another way, it specified that outsiders could sue if they believed action by an administrative agency wasn’t just or fair.

These new protections were important—but many among those watching the amoeba of the administrative state grow in the aftermath of the Second World War began nevertheless to articulate a growing sense of alarm. And this is where cultural predilections and judicial doctrine began to intermingle. Five years after Truman signed the APA into law, a Roosevelt appointee to the court stated in clear and concise terms why checks on the bureaucracy were so important. In a scathing dissent to a court opinion blessing a determination made by the Interstate Commerce Commission, Justice William O. Douglas wrote: “Expertise, the strength of modern government, can become a monster which rules with no practical limits on its discretion. Absolute discretion, like corruption, marks the beginning of the end of liberty.”28 Here, more than a decade before the Port Huron Statement, Douglas was articulating some of the same underlying concerns, namely that the centralized institutions that previous waves of progressives had fought to empower were liable to go rogue.

Through the 1950s, even as these concerns grew, the new “rights” created by the APA were rarely invoked. Without a lot of friction, the government proceeded with big projects reminiscent of the still-thriving Tennessee Valley Authority—projects in the mold of the Hoover and Grand Coulee Dams, plus a raft of massive nuclear power plants. Even when a Democratic Congress passed (and President Eisenhower signed) the law providing a mechanism to fund the vast highway program Roosevelt had conceived years earlier, there was little discussion of whether ordinary people should be guaranteed some opportunity to question government decision-making.29 Whatever checks the APA provided, deference to authority was the unwritten rule.

In the years that followed, Jeffersonian efforts to corral the administrative state weren’t pursued in a vacuum. Progressives during these years were kicking the tires on new uses for the judiciary in a whole range of contexts, none more famous than Brown v. Board of Education (1954), which employed judicial authority to do what the elected branches of government had failed to get right—namely to desegregate the nation’s public schools. Often considered separately as landmarks of entirely different realms of the law, these initiatives were born nevertheless from the same underlying notion that deliverance would come not by amalgamating authority, but by pushing it down to individuals. The proposition that judges could be the essential ally of those victimized by centralized power had been laughable during the Lochner era—judges had then shielded private interests from the demands of what many elites viewed as the democratic mob. Now, with the judiciary having a much more liberal bent, the courts appeared poised to be a crucial bulwark for the beleaguered outsider.

The same year that Congress passed the Civil Rights Act, a professor at Yale Law School wrote a law review article that would take this notion a crucial step further. Charles Reich’s “The New Property” framed the bureaucracies progressivism had once celebrated less as salves than as scourges. In Reich’s view, the administrative state had become such an omnipresent force in everyday life that, by the 1960s, a bureaucrat’s decision to deprive an individual of a given service was tantamount to stealing that citizen’s personal property. People needed to drive their cars to work, for example, so if a bureaucrat withdrew an individual’s driver’s license without cause, or for some reason that might be deemed “arbitrary and capricious,” the government was essentially robbing that citizen of their income. Reich argued that administrative agencies should have no more discretion to deprive someone of a routine privilege than they had authority to summarily lay claim to someone’s home or business.

Reich went on to provide a litany of the various ways in which the “gigantic syphon” of government improperly stole personal property. New Jersey’s director of the Division of Motor Vehicles had suspended a citizen’s driver’s license even after he had been acquitted of committing any underlying crime. New York State had rescinded a citizen’s welfare benefits when he refused to stop sleeping in a dirty barn that wasn’t up to code. After a man was deported, the Social Security Administration had denied his wife the benefits due to her for the years he had paid into the system. These were, in Reich’s view, violations of individual rights, and so he proposed a salve designed to protect the individual against the Establishment: “The denial of any form of privilege or benefit on the basis of undisclosed reasons should no longer be tolerated. Nor should the same person sit as legislator, prosecutor, judge and jury, combining all the functions of government in such a way as to make fairness virtually impossible.”30 And as important as the substance, that principle was the subtext. Reich was translating the New Left’s antipathy for power into a specific agenda.31

Without overstating the influence of any single law review article, "The New Property" drew its significance from the grist it added to the new progressive zeitgeist.32 Reich wasn't just arguing on behalf of a guy sleeping in a barn, or an exonerated driver deprived of his ID. What became known to some as the "rights revolution" broadly encompassed Jeffersonian progressivism's strategy of pushing authority down by investing individuals with new rights. Not only were progressives now averse to power—they were conceiving of specific ways to chisel at the authority Robert Moses types had come to wield. Culture had shaped the boomers' politics, and now their sensibilities were prompting a new policy agenda. Jeffersonian means to Jeffersonian ends. In ways that would have been inconceivable just a few years earlier, individual rights, protected by liberal judges, would become the tools reformers used to beat the Establishment into submission.33

SALT IN THE WOUND OF POVERTY

Perhaps nowhere in the public policy lexicon has the clash of progressivism's Hamiltonian and Jeffersonian sensibilities come into clearer relief than in the realm of public assistance—what some pejoratively call "the dole." More than a quarter century after the Clinton administration's overhaul of America's welfare system, two different, if familiar, narratives still frame the debate.34 Progressives tend to view recipients as deserving victims of systemic inequality. Conservatives are more likely to frame the same population as "welfare queens." Lost in this dichotomy are the basics of a once-heated debate within progressivism. If, as most reformers have long believed, those in need of public assistance are fundamentally worthy, progressives have frequently been of two minds about how best to help them. And that internecine policy dispute is derived from the movement's divided heart.

This oft-overlooked disagreement centers on what, exactly, welfare is supposed to accomplish—or, perhaps more pointedly, what the public can reasonably expect of those receiving taxpayer-funded subsidies. In some instances, and in certain periods, the movement had explicitly wanted welfare to be paternalistic—or, perhaps put more accurately, maternalistic.35 Progressives in the early decades of the twentieth century wanted social workers to do for young women receiving public assistance what a doting parent might do for a struggling adult child—help them up and on their way.36 But since the 1960s, many progressives have come to see that maternalism as a patronizing burden on families already mired in poverty. As a result, as in other realms, reformers became more inclined to manacle the octopus.

The program most commonly understood as "welfare" was established during the New Deal and known originally as Aid to Dependent Children. To that point, the federal government had played only a marginal role in caring for the nation's poor, leaving primary responsibility to families, private charities, local businesses, and, in some cases, states.37 But in a progressive leap spurred largely by the ravages of the Great Depression, Democrats conceived a system designed primarily to aid single mothers—and in their prevailing conception, widows. During an era when the pervasive expectation was that, in a proper family, a father would serve as the breadwinner and a mother would raise the kids, reformers were bent on keeping the traditional roles intact—that is, on keeping mothers from having to work outside the home.38

Decades later, as conservatives attacked welfare for incentivizing indolence, that original intent would be overlooked. But if the progressives who originally shaped the program had hoped to preclude single mothers from having to leave their children in order to earn an income, the government’s largesse came explicitly with strings attached. Aid to Families with Dependent Children (AFDC), as the program would later be renamed, may have been funded overwhelmingly by the federal government, but it was administered through state agencies and local bureaucracies dominated by professional social workers. Informed by progressivism’s embrace of scientific expertise, these social workers were credentialed professionals, many steeped in the legacy of the old progressive “settlement house” movement. Like the settlement house employees determined to mold the immigrant classes into responsible American adults, those dispensing welfare saw it as their charge to mold the poor into responsible, middle-class citizens.

True to form, the army of social workers tasked with serving as something akin to surrogate mothers was not content simply to dispense checks. They saw it as their charge to help single mothers stay on the straight and narrow, raise their children right, and avoid the temptations of single life: indolence, substance abuse, alcoholism, promiscuity.39 Public assistance, for the social workers who controlled it, was a carrot that could quickly morph into a stick—a tool that middle-class marms could use to keep poor women "moral."40 And that maternalistic notion defined the program's ethos from its inception in the mid-1930s through the early 1960s: armies of social workers were generally granted wide latitude to mentor, mother, and cajole single mothers as they individually saw fit.

Some among the nation’s social workers were gentle and nurturing—they took care to cultivate a loving touch. Others, however, led with a heavier hand, treating their charges in ways that many viewed as cruel. Regardless, there was no mistaking who wielded power in any given relationship. No one in the early years could deny that, at root, the program was a scheme to compel working-class and poor women to adopt middle-class values even if, as increasingly became the case, many resented the imposition. The bulk of progressives, and certainly the social workers themselves, dismissed those sorts of objections: Being infantilized by an expert seemed to them a mild penance to pay for relying on the public dole. If, as many conservatives would later argue, these women had wanted a different life, they should have chosen a different path.

Decades on, it may be hard to appreciate how it might have felt to have the Establishment hover over you as these social workers loomed over their cases. But, in many cases, the dynamic was not just infantilizing—it was oppressive. Social workers often took it upon themselves to police the role men played in recipients' lives. On the one hand, they figured, government shouldn't be cutting checks to women who were otherwise being supported by male breadwinners. On the other, they didn't want taxpayers subsidizing promiscuity. And so, in some instances, welfare agencies adopted "man-in-the-house" policies designed to smoke out mothers shacking up with their boyfriends. That left many recipients to live in fear of what some called "midnight raids"—unannounced visits seeking to unearth who was present inside a given home during the late-night hours.41 If someone discovered a man hiding in the closet, the recipient's checks might be cut off entirely.

This was, almost by any measure, Hamiltonian progressivism at its very worst. In reaction, reformers in the 1960s began mobilizing, as on so many other fronts, to pull the whole scaffolding down. While authority, in this realm, wasn't wielded by a singular bureaucracy, the social workers essentially acted as little Robert Moseses in the recipients' lives—personalized Nurse Ratcheds. The other social movements of the era—against the war, for civil rights, in search of a flower-powered counterculture—were not directly akin to what became a collective uprising against the infantilization of poor, single mothers. But they all drew from the same cultural desire to liberate the oppressed from a heavy hand of one sort or another.42 The question was how to manacle these particular tentacles.43 And here, as in other realms, it soon became clear that the most expeditious route was to draw on Charles Reich's conception of the New Property.

From AFDC’s inception, benefits had been used as a cudgel—social workers threatened to curtail payments if recipients failed to fall into line. Reformers, in turn, viewing this dynamic as a scourge, began lobbying governors, state legislators, program administrators, and anyone else wielding influence to reconceptualize the benefits as a right. Their thinking was clear: if social service agencies were compelled to cut checks regardless of whether a single mother followed a social worker’s strictures, recipients could be brought out from under the government’s thumb.44 Reframing welfare not as a benefit but as an entitlement was an explicit swipe at the Establishment. It took authority from state-empowered matriarchs and transferred it to women who, from the reformers’ perspective, were doing their best to raise children in dire circumstances.

The exact mechanism of this Jeffersonian reform varied from place to place. But the judiciary struck one universal blow when, in 1970, the Supreme Court established in Goldberg v. Kelly that welfare recipients were entitled to a hearing before the government could terminate a benefit—thus narrowing the discretion of individual social workers.45 Yet the crusade to push power down to recipients didn't end there. Under pressure, some states updated their policy manuals to make payments automatic. And the impact of that sort of mundane change could often be profound: instead of being subjected to a social worker's prying questions, an applicant now simply had to proffer a few facts to determine whether, by formula, she qualified for benefits, and at what amount. How poor you were and how many kids you had determined your benefit; you needn't worry about the presence of a boyfriend, about a breakup, or about a more casual relationship. The system was less personalized, and more routine.46

To be sure, recipients weren't powerful in this system—but neither were they any longer at the mercy of the marms. Nevertheless, there were trade-offs. The ordinary kindnesses that had once been within the purview of a social worker's discretion were now off the table—there was no one to grant a struggling mother an extra few taxpayer dollars if her car needed an unexpected repair, for example. And that was for the same reason that it was now harder for racists to shave down an applicant's benefit: the tasks that had once been handled by social workers were now being done by clerical caseworkers. As the paradigm shifted and the bureaucrats running social welfare agencies bought into the new depersonalized approach, they began seeking out employees capable of resisting the urge to interfere. As one senior Massachusetts official explained: "We've been trying to get the people who think like social workers out and the people who think like bank tellers in."47

Depending on your perspective, progressivism’s new Jeffersonian approach represented something of a great leap forward. As a result of the change in orientation, many more poor people both applied and qualified for assistance. From 1966 to 1971, the percentage of eligible families not receiving welfare benefits fell from a third to a tenth.48 In New York City alone, welfare rolls grew from 240,000 people in 1959 to 1,165,000 in 1971, and the costs to the federal government nationally grew more than five times over.49 But if the new regime was more generous, it was, by progressive design, colder. Reformers were intent on getting bureaucrats out of the practice of orchestrating a woman’s climb up the socioeconomic ladder. Massachusetts abandoned its practice of sorting cases by geography explicitly to ensure that caseworkers would not develop a working knowledge of a neighborhood’s social service agencies. Instead, the Bay State began assigning cases directly by alphabetical order.

The shift to depersonalization had profound impacts—and not entirely for the good. Recipients who once bristled at being patronized now lived in perpetual fear of a bureaucratic snafu. In the old model, the social workers dispensing aid were (theoretically) rooting for a recipient’s success. Caseworkers, by contrast, were charged merely with adhering to the letter of the law—and they risked trouble of their own if they bent the rules.50 Here it’s easy enough to understand why so many bureaucrats embraced what some called an “attitude of impersonality.” If anything, their incentive was to deny or limit benefits lest their supervisors accuse them of misconduct.51 And that was understandably infuriating for recipients: the last thing a young mother struggling to raise her children needs is the added burden of tracking down a letter to prove her family’s eligibility for a little plus-up in their monthly check.52 Salt in the wound of poverty.

What was perhaps most remarkable about the shift was the alacrity with which advocates went from ruing midnight raids to castigating the miserliness of the new impersonal regime. A broad range of organizations—the National Welfare Rights Organization, the Children's Defense Fund, the Legal Aid Society—fought vociferously to prevent impersonal bureaucracies from denying benefits to the recipients who were legally entitled to them. Some even embraced an audacious strategy to deluge the program with recipients such that the entire enterprise would go bankrupt, presuming that a lack of resources would force the government, eventually, to create a guaranteed income.53 But the growing feeling was one of frustration.54 The public was given the impression of a system in turmoil. And that impression was often accurate. With images of disinterested caseworkers swimming in seas of paperwork, few were given to imagining that the system was defined by competence and efficiency.

It's worth taking note of the narrative arc here because it tracks a pattern that will be made evident again and again in the chapters that follow. A policy conceived before the 1960s, imbued with an explicitly Hamiltonian ethos, becomes the object of Jeffersonian ire. Well-intentioned reformers then begin to conceive of ways to push that centralized power down and out—to disrupt the Establishment's hold on its victims. In order to manacle the octopus, reformers work to endow the victims of the old regime with new rights—leverage of the kind Charles Reich conjured in "The New Property." Finally, amid generalized frustration with the end result, government is made to appear incompetent, both in general and in the particulars. Notably, nearly every element of the story can be explained without incorporating the (generally nettlesome) influence of conservatism. Progressives serve at the forefront of every successive change. And yet there was no doubting who benefited politically from the change: the right.

In this case, the figure who first took advantage of the corner progressives had painted themselves into was Richard Nixon. Machiavellian as he was, Nixon understood how more conservative elements of the New Deal coalition would react to the notion that their tax dollars were being promised to unappreciative recipients by an overwhelmed bureaucracy. With racial prejudice woven into the subtext, conservatives replaced the public's sympathy for widowed mothers with the notion that "welfare queens" were bleeding the system. A former New York Times editorial board member recalled a comment one mother on welfare made to New York City mayor John Lindsay at a legislative hearing: "It's my job to have kids, and your job, Mr. Mayor, to take care of them."55 Seeing the opportunity to drive a wedge, Nixon proposed what amounted to a guaranteed income for those at the bottom of the income scale, making himself appear sympathetic to the poor while castigating the same system that progressives were also prone to flay. While the Family Assistance Plan died on the vine, Nixon won by letting progressives walk into a trap of their own creation.56 The nation's welfare system, by the movement's own testimony, was now perceived to be a fatuous mess.

The ultimate irony was that progressivism's effort to preclude social workers from prying into people's personal lives created what felt like more government. Absent a system where individual social workers could exercise personalized discretion, policymakers were compelled to lay down increasingly specific ground rules, guardrails, limits, and operating instructions. How, exactly, would someone qualify for assistance? What would exclude them from eligibility? How might they be reinstated? What proof did they need to provide to maintain their benefits? As in other realms of policy, decisions once made by experts were now decided by rule. In the decade that followed Nixon's inauguration, the federal register of regulations quadrupled in length, with decisions—including how welfare was administered—now subject not only to layers of bureaucracy, but to a new regime of judicial oversight as well.57

In the decades that followed, the welfare discourse reverted to the old tropes, with conservatives arguing with Nixonian fervor that big government was incentivizing indolence, and progressives arguing that miserly government was failing to serve those most in need. Some would try, on occasion, to break out of that vise, as happened among progressive supporters of welfare reform in the mid-1990s. Lost amid that tension was any serious attempt to grapple with the reality that the system was a mess largely by progressive design. The old Hamiltonian conception of the dole had been plagued with problems—but the Jeffersonian alternative was deeply flawed as well. In the end, the movement's cultural aversion to power had left progressives with the worst of both worlds: a program that was cold on the one hand, and insufficient on the other.

And then there was the role welfare had come to play in the public discourse. Here, the political implications come into clearer view. A program established during the New Deal to help mothers in need had become, a half century on, a poster child for bureaucratic incompetence. It served as Exhibit A for those wanting to sap confidence in government more generally. In the decades to follow, whatever progressives subsequently proposed to help the nation's poor—however they sought to strengthen the social safety net—they were forced perpetually to swim against a tide of public cynicism. Progressives had teed it up, and conservatives hammered it home. Government, observers were induced to believe, was indelibly incompetent, and only a fool would presume that public bureaucracy could work effectively in the public interest. Even if that wasn't true—and, to be clear, in many cases it wasn't—few could have come away with the impression that progressivism was poised to deliver on its broader promise.58

A NEW AND DIFFERENT AGENDA

The cultural aversion to power served not just to reframe progressivism’s approach to combating police brutality, or awarding driver’s licenses, or provisioning welfare to the poor. The impulse to push power down, combined with the policy tools born from endowing individuals with new “rights,” proved particularly potent in the realm of environmental protection. At the turn of the twentieth century, America had been riven by a fear that the country was running out of space, driving progressives, most notably Theodore Roosevelt, to charge centralized authorities with keeping the nation’s great expanses wild.59 By the 1960s, however, the nation’s natural resources appeared to face a different threat. Beyond conservation, a burgeoning new environmental movement wanted to combat other forms of degradation. That new mission would require new public policy tools—and activists would discover them within the Jeffersonian tradition.

The problems, at that point, were both dire and obvious. The postwar economic machine had made no bones about plying the landscape with chemicals, toxins, and filth.60 Rachel Carson’s Silent Spring served as the touchstone of a new back-to-the-earth environmentalism that aimed not only at preserving nature but also at combating industrialization. Carson wasn’t simply indicting human selfishness—her ire was targeted at centralized power more generally. As she argued in a speech delivered a year after the book’s publication: “The fundamental wrong is the authoritarian control that has been vested in the agricultural agencies” that put farming interests ahead of the underlying ecology.61 This, she contended, was part of a larger problem: wherever the preservation of the environment came up against the Establishment, the public interest appeared destined to lose. The only potential salve was to push power down and out, from the centralized nodes to the otherwise helpless victims.

The Establishment's scourge extended everywhere, and its tentacles reached far beyond the agricultural machines poisoning the nation's farmland. Highway planners and engineers, six years into constructing the interstate system at the time of Silent Spring's publication, were selecting routes that often laid needless waste to the underlying flora and fauna. Highway engineers may not have been purposefully destroying the environment. But their mandate was to enhance mobility at minimal cost.62 That perversely incentivized ecological destruction: Not only were pristine natural patches of land fair game when selecting routes, but they often marked the path of least resistance. Better, Establishment figures concluded, to disturb a forest than a neighborhood. Better to displace a bird habitat than add a few minutes to a suburban commuter's drive downtown.63

Regardless of whether progressive leaders explicitly understood how the movement’s zeitgeist was changing, they were quick to pick up on the public’s growing antipathy. In 1964, the Bureau of Public Roads, later to become the Federal Highway Administration, directed state engineers to consider not only convenience and price, but also the social, economic, and environmental costs of any given route. Ralph Nader, a young lawyer then emerging on the scene, viewed highway engineers much as Carson had viewed the bureaucrats at the Department of Agriculture—indifferent, at best, to the public interest, and more likely in cahoots with the profiteers (farming and construction companies) making fortunes off the march to progress. The alternative, in his mind, was obvious and in line with other reformers of his generation: empower the public to advocate for itself against the System.64

It's worth pausing here, once again, to appreciate just how revolutionary this turn was within the world of progressive policymaking. The old Hamiltonian zeitgeist—the one that favored centralization—had cast the central tension in public life as the push-and-pull between public and private interests. Roosevelt's "boys with their hair ablaze" presumed that the government was out for the greater good, while businesses were out for themselves. Their agenda was erected upon that narrative foundation. Public officials of all stripes—the august figures of the Georgetown Set, the city fathers managing local affairs, the experts fanned out across all the various bureaucracies—were presumed to have good intentions.

But now, as articulated by the likes of Carson, Nader, and others, those same big public bureaucracies weren't balanced against private interests—they were co-conspirators. The public interest had been captured, corrupted, and worse. Even Louis Jaffe, a New Dealer who became a luminary of administrative law, was by 1965 writing about "the most monstrous expression of administrative power."65 And that meant, logically, that the only real way to pursue the greater good was to reverse course—to push power down and out, to give ordinary people the opportunity to advocate for themselves, to establish outside watchdogs designed to keep the system honest.66 It wasn't enough simply to throw your body on the machine. Per the new zeitgeist, law and policy would give citizens the tools they needed to manacle the octopus.

The ideological whiplash was rife in nearly every realm.67 Between 1965 and 1977, Congress passed a rash of environmental bills, some of which—the Clean Air Act of 1970 and the Clean Water Act of 1972 among them—compelled strong, centralized, executive agencies to take tougher lines against polluters.68 Rather than accept an industry’s explanation for why it couldn’t wean itself from whatever harmful environmental practice it had embraced to date—smokestack emissions, toxic runoff—Congress set explicit standards and timetables. These were “command-and-control” regulatory regimes: the demands came from government, and industry was compelled to fall into line. Power was vacuumed into the hands of bureaucrats who would fix problems from above.69

But at the same time progressives were championing these command-and-control solutions, they were also pursuing an agenda that pointed power in the other direction. Late in 1969 Congress passed, and in early 1970 President Nixon signed, what some have called the "Magna Carta" of environmental legislation—the National Environmental Policy Act (NEPA).70 In fairness, the bill's authors and champions in Congress weren't aware of how drastically it would reshape environmental law or, for that matter, government itself. But NEPA was remarkable in that its aim wasn't to pull power up and into a centralized bureaucracy—it was to force big, imperious government bureaucracies to pay attention, at long last, to the damage they were inflicting on the nation's landscape. The bill merely required that the bureaucracies sponsoring big projects review potential drawbacks. A mandate that the bureaucrats draft "environmental impact statements" was designed primarily to prompt more internal dialogue about potential degradation. Look before you leap.

At the time, few thought seriously to object to the proposed bill, largely because few predicted much would change upon its passage. When NEPA’s House champion, the automobile industry–friendly Representative John Dingell (D-MI), was asked in a public hearing whether the new bureaucracy created by the legislation, the Council on Environmental Quality, would now be involved in reviewing every project, the answer was unequivocally no: “The conferees did not view NEPA as implying a project-by-project review and commentary on Federal programs.… Rather it is intended that the Council will periodically examine the general direction and impact on Federal Programs in relation to environmental trends and problems and recommend general changes in direction or supplementation of such programs when they appear to be appropriate.”71 Indeed, most everyone viewed the bill as fairly innocuous—a statement of “good principles” that would cost little more than $1 million in staff work a year.72

But if NEPA had been created largely in the Hamiltonian tradition—if it was conceived to induce those within the behemoth agencies to be more ecologically sensitive while reshaping the nation’s natural landscape—it inadvertently gave outsiders new Jeffersonian leverage. Those determined to drive power away from centralized nodes realized that the process for drafting environmental impact statements was subject to the Administrative Procedure Act’s requirement that decision-making not be “arbitrary and capricious.” If judges could be convinced that a project’s environmental impact would, in actuality, be different from what an agency projected, ordinary citizens now had a Reichian cudgel to bring the bureaucracy to heel. And while the legislative language within NEPA never explicitly contemplated that litigants would be empowered to challenge the quality or thoroughness of environmental impact statements, a new phalanx of lawyers saw an opening.73

The impact was almost immediate. Mere weeks after Nixon signed NEPA into law, a newly formed progressive nonprofit, the Center for Law and Social Policy (CLASP), filed suit on behalf of the Wilderness Society, Friends of the Earth, and the Environmental Defense Fund against the secretary of the interior demanding a halt to construction of the trans-Alaska oil pipeline. Their contention was that the government had not properly accounted for the project's environmental impacts. A judge issued an injunction. And while, in this case, the pipeline was eventually built, the judiciary's willingness to consider the case established a precedent that expanded well beyond NEPA.74 Now, when an executive agency was assigned responsibility for protecting the environment, outsiders contending that the agency was doing an insufficient, incomplete, or incompetent job weighing various concerns could enlist the judiciary to hold those executive agencies to account.

Federal bureaucrats quickly began to appreciate how the old system’s familiar rhythms were under assault. Two years on, a Federal Highway Administration official attending a conference in Wisconsin on environmental impacts complained that a federal judge had halted a highway project for lack of an environmental study. The plaintiffs, he explained, had been able to file suit without posting bond—they weren’t being required to cover the small fortune their lawsuit would cost taxpayers if the government prevailed and the project was eventually green-lit. Worse, this was part of a trend. In 1971, twenty-four similar suits had been filed across the country.75 The Federal Highway Administration then managed a $5 billion budget with a mere five thousand employees. The agency’s bureaucrats and lawyers were already overwhelmed, and the Nixon White House was trying to reduce the size of the federal workforce. If the government had to do this sort of study on every project, something would have to give.76 And something would give—but it wouldn’t be the environmentalists.

Over the next several years, states and cities began using NEPA as a model for similar bills, some of which came to mandate protections not only for governmental action, but for private projects as well. And before long, what had appeared initially as an innocuous requirement that federal bureaucracies simply look before they leap had reshaped not only the process for building things in America, but the very contours of power.77 No longer could a figure like David Lilienthal swoop in from above and remake an entire river valley, as the TVA had done during the 1930s. Now, an agency would make a decision, a public-interest group would file a lawsuit, there would be a tussle in the press, and ultimately in a courtroom.78 The parties might come to a compromise “consent decree.”79 But the core dynamics were unmistakably different: centralized power was finally being held to individual account.

As in the realm of welfare policy, the political implications of progressivism’s shift were also profound. Ahead of his 1972 reelection campaign, Nixon would growl to his chief of staff: “We’ve had enough social programs: forced integration, education, housing.”80 His grousing was typical for any anti-Establishment conservative. Nixon had long made a sport of skewering what Donald Trump would, a half century later, label the “deep state.” What was odd was that Nixon’s Democratic opponent that year, Senator George McGovern, embraced much the same attitude. He promised at one point to combat the “empty decaying void” of an “establishment center… that commands neither the confidence” of most Americans “or their love.”81 Conservatism’s antipathy for government was narratively resonant with progressivism’s aversion to power.82 Nixon’s promise to fight for America’s “silent majority” against the institutional elite wasn’t so different from cynicism typified by C. Wright Mills, Abbie Hoffman, Eldridge Cleaver, Rachel Carson, and Ralph Nader. Both ends against the middle—both against public power.

Today, many may scoff at the notion that progressivism was somehow in sync with Richard Nixon, or vice versa. And, indeed, during this period, progressives remained supportive, in theory, of a whole range of more Hamiltonian proposals embraced by the Democratic mainstream at the time. (To repeat, progressivism has always embraced elements of both its Hamiltonian and Jeffersonian impulses—even if the balance is often in flux.) The congruence between the likes of Richard Nixon and Ralph Nader wasn't that they shared some broader dream of social justice—it was that they both sought ways to hack at nodes of centralized authority. And by the 1970s, the evidence of the Establishment's rot was so pervasive that, in many cases, Hamiltonian approaches to problem-solving were beyond the pale.

The evidence of bureaucratic incompetence and malfeasance was, at the time, almost overwhelming. When Saigon fell, few doubted that the Pentagon had gotten in over its head. The oil crises undermined faith in America’s economic machine.83 Cities that had been rioting in the late 1960s appeared to be decaying through the 1970s—with New York, in particular, edging toward bankruptcy. In the years that preceded what would be (mis)remembered as Jimmy Carter’s “malaise” speech, the public shared a notion that the country, led by tired institutions, was in decline.84 Even the great exponents of the Hamiltonian New Deal were demanding a change. Arthur Schlesinger, who had venerated Franklin Roosevelt in an epic trilogy before serving as special assistant to John F. Kennedy, now argued against the centralized power of executive agencies.85 And a new generation of Democrats, many of them elected in 1974 as “Watergate babies,” saw curbing power as their cri de coeur. That year, a young Democrat named Gary Hart won a Senate seat in Colorado by explicitly flaying the Establishment. “The time is now for all of us to rise up and take our country back from the power-hungry corrupters, the fat cats and… the comfortable, complacent, backward-looking old men.”86

Through the 1970s, the Jeffersonian impulse came to define more and more of progressivism’s legislative agenda. Congress passed the War Powers Act, limiting the president’s ability to take the country to war without explicit authorization. Legislators created the Church Committee in 1975 to investigate the abuses of the intelligence community. Legislators passed the Developmentally Disabled Assistance and Bill of Rights Act to protect people with developmental disabilities from abuse and neglect. Congress passed the Ethics in Government Act in 1978 to attack conflicts of interest.87 The list of laws pushing power down and endowing new groups with rights just seemed to grow. Hamiltonian ideas sometimes garnered lip service, but they were rarely center stage. And the effects of these various Jeffersonian triumphs, from NEPA on down, would become increasingly clear in the years and decades to come.

A PUPPET CONTROLLED BY WARRING HANDS

Throughout Hamiltonianism’s long reign as progressivism’s predominant impulse, municipal reformers tended to castigate ordinary city agencies as lumbering creatures of corrupt political machines. To bypass the bosses—to ensure true experts were in charge of building a new water system or managing a complex transit network—the reform community frequently favored transferring control to what were innocuously termed “public authorities.” Governed by appointed boards of directors, public authorities were conceived to be free of corrupt entanglements—more apt to be “businesslike” and “nonpolitical” than the bureaucracies with line authority from City Hall. They represented a kind of Hamiltonian ideal—a centralized bureaucracy capable of combining private and public sensibilities for the common good.88

During the 1970s, however, public authorities lost their luster. If Eldridge Cleaver’s octopus had ever come to life, its most fearsome tentacles would almost certainly have taken their form. Designed explicitly to be insulated from politics, their executives seemed like distilled emanations of the Establishment. Their boards were filled with business elites, their priorities aligned with upper-middle-class sensibilities, and their finances tied to Wall Street. To build that new sewer system, authorities would hire financiers to sell tax-exempt municipal bonds to institutional investors who would siphon interest from the water bills paid by ordinary taxpayers. To maintain the system, authority executives often appeared more interested in maintaining their bond ratings (lest they be denied subsequent opportunities to borrow) than in fulfilling their public mandates. For reformers, these were the archetypal examples of the Establishment gone wrong. And the evidence of abuse was, in many cases, jaw-dropping.

The Chesapeake Bay Bridge and Tunnel Commission, for example, had issued bonds to enhance and maintain the various bridges and tunnels on the Eastern Seaboard. Wall Street banks earned sizable commissions selling the bonds to investors who then expected to earn steady returns. When, however, toll revenue wasn’t sufficient to pay the bondholders, it wasn’t entirely clear who would be forced to eat the loss. The authority, the banks, and the bondholders all viewed the underlying terms of the deal as nonnegotiable, thus requiring taxpayers to make up the shortfall. And there was no one, save a few outside critics, positioned to raise a stink. Here was the Establishment at its most devious—a public enterprise, designed to pursue the greater good, crafting a bait and switch that benefited the wealthy and powerful. This was the octopus at work.

The problem was bigger than the financing. Public authorities often took to pursuing projects ordinary citizens abhorred even while neglecting improvements that the public might have deemed more worthwhile.89 This was part of the rapacity that Robert Caro exposed in The Power Broker—Robert Moses used his power atop the Triborough Bridge Authority, among other bureaucracies, to engage in projects that served his predilections rather than the public good.90 And it wasn’t just Moses. Austin Tobin, running the Port Authority at much the same time, resisted entreaties to subsidize the woebegone railroad ferrying residents between New Jersey and Manhattan for fear that the cost might preclude the Port from engaging in projects more to his liking. He would eventually lose that fight—the PATH system was born as a result. But here was yet more evidence to prove that the Establishment had its own agenda and that its priorities weren’t necessarily aligned with the public interest.91

But what to do to rein public authorities in? By the time of The Power Broker’s release in 1974, the New Left had dissipated, and SDS was defunct. Many of the organizations that had sounded the alarm had, it seemed, vanished from the scene. But if the whirlwind of protest had ended, the cultural aversion to power remained in full force. Not only were many veterans of the New Left now elected to office; the Jeffersonian ethos had also come to own a much bigger slice of progressive discourse. To boot, by the 1970s, those eager to push power down and out had at their fingertips a whole playbook of moves—new oversight boards, new reporting requirements, a new spirit of litigiousness.92 Finally, girding all of these various blows against centralized executive power was a judiciary that was willing and eager to join the fight.

This judicial element was, by some measure, the coup de grâce of the whole transition. It wasn’t just that progressivism had changed culturally. It wasn’t just that the movement had developed new tools to hack away at the Establishment. By the 1970s, a new generation of judges eager to cut Robert Moses types down to size had arrived on the scene. One particularly influential figure, Judge David L. Bazelon of the DC Circuit Court of Appeals, wrote approvingly of a new day when courts would “insist on strict judicial scrutiny of administrative action.”93 And so a branch of government progressives had once been so desperate to sideline began providing “injunctive” relief against government plans in some cases, and taking over executive departments in others.94 Here was the thrust of Justice Harlan Fiske Stone’s famous Footnote in full flower.

In 1967, a mere three lawsuits had been filed to enforce federal statutes for every one hundred thousand Americans. That figure would quadruple by 1976, and grow seven times over by 1986. And for all the attention paid post-Watergate to the threat of an imperial presidency, the judiciary had perhaps a more powerful effect in hobbling the same administrative agencies the Establishment had once lionized. A full half of the major regulations issued by the Environmental Protection Agency during the 1980s were blocked at one point or another—as were a host of Forest Service management plans, port dredging proposals, and vehicle safety regulations. In 2000, at least thirty of the nation’s welfare agencies were operating under court orders demanding improvements. Even in 2016, the child welfare systems in twenty states were operating under consent decrees, and the courts were processing ten thousand appeals each year for benefits from government agencies.95

To read those statistics today, many progressives may be tempted to cheer, if only because it’s so easy to imagine a Jane Jacobs type taking a Robert Moses–like figure to court, or a Ralph Nader acolyte convincing a judge to prevent a Republican appointee from leasing publicly owned forests to a corporation controlled by a GOP donor. Moreover, if a city’s child welfare bureaucracy isn’t protecting vulnerable children, why wouldn’t progressives sue to force caseworkers to do their jobs? If an environmental regulator appears inclined to green-light pollution, why not enjoin her to uphold the public interest? If a police chief is firing good cops to hire incompetent good ol’ boys, why not give upstanding officers more protection? Why not, in the end, use every possible mechanism to ensure the real public interest comes first?

In some cases, there is no good riposte to those questions—progressives are right to hold authority to account. But there’s a balance to be struck if only because, by limiting the discretion public officials have to do bad, the Jeffersonian agenda also narrows the path for other public officials to do good. By curtailing opportunities for centralized power brokers to wreak havoc, reformers risk immobilizing the public sphere, rendering the big, hulking bureaucracies that were once the apple of progressivism’s eye incompetent. In controlling for the vices of Hamiltonianism, progressivism loses its virtues as well. The scholar Jonathan Rauch, describing the New Deal, illustrated how nimble the government had been during the era when progressivism celebrated centralized power:

On the seventeenth day of his administration, [President Franklin Roosevelt] proposed the Civilian Conservation Corps; three weeks later, it was law. On November 2, 1933, he was given a proposal for a Civil Works Administration employing people to repair streets and dig sewers; by November 23, the program employed 800,000 people; five weeks later, it employed 4.25 million people or 8 percent (!) of the American workforce. Just as important, Roosevelt was able to get rid of programs. He ordered the Civil Works Administration shut down at winter’s end; its total lifespan was only a few months. Similarly, the Civilian Conservation Corps went away in 1942, once the war made it superfluous.96

Few of the items on Roosevelt’s agenda would have struck a public chord in the absence of a Great Depression. But by the late 1970s, they would not even have been possible. Scholars and analysts have long debated why faith in government has declined since Watergate. Some of the impetus, no doubt, was born from Nixon’s villainy. Some of the subsequent decline has undoubtedly been due to the fact that the purported “party of government” has never boasted the filibuster-proof majorities that made Roosevelt’s flexibility possible in the mid-1930s. Some of the public’s frustration has surely stemmed from the role entrenched interests play in frustrating efforts at reform.97

But beneath and beside these explanations is another omnipresent reality: government today suffers from an endemic diffusion of authority. The ability of presidents, police chiefs, social workers, public authority executives, and others to exercise discretion has been severely curtailed. In many cases, too many disparate voices now wield a proverbial veto.

This new dynamic was not prompted by any single shift. It wasn’t just progressivism’s cultural aversion to power that sparked the transformation. It wasn’t just Charles Reich’s new conception of personal property, or the subsequent development of a jurisprudence investing new rights in those affected by decisions above. It wasn’t just the judiciary’s new willingness to interfere more directly in the machinations of executive branch decision-making. But braided together, these shifts swung a great balance of power from central nodes of government to individuals on the periphery. Power that previous generations had pulled up and in has subsequently been pushed down and out. Subjected to legislative demands, judicial injunctions, and executive orders, the administrative state has become, in many cases, a puppet controlled by warring hands. And the effect, in many cases, has been dysfunction.

From a progressive point of view, this new constellation of power at first appears advantageous. When nefarious characters like Richard Nixon or, say, Donald Trump head the octopus, their administrations are now bound to be more limited in the damage they can do—they can’t so haphazardly dismantle the programs progressives care about most. If a municipal welfare agency is operating under a consent decree enforced by a judge, presidents, governors, mayors, and others have considerably less opportunity to curtail the benefits it affords the poor. Even if a law gives regulators flexibility in setting smokestack pollution standards, bureaucrats captured by industry now have a tougher time giving polluters a pass, if only because outside groups can sue the executive agency in the public interest—and win. All this is seemingly for the good.

But the downsides have been profound. It’s not just that LEOBOR-like statutes sometimes have the perverse effect of making it harder to do good things—like get rid of bad cops. As we’ll see in subsequent chapters, the new regime has also made it more difficult to build affordable housing, and to construct new high-speed rail lines, and to site the transmission lines required to deliver renewable energy to places eager to overcome their addiction to oil and gas. But the most profound detriment of the system Jeffersonian progressivism has created is that it has made government, in many realms, authentically incompetent. That, as we’ll see, has now become progressivism’s great political burden. Who in their right mind would seek to give more power to this sort of dysfunctional, do-the-least-possible version of government?98 Is it any wonder that many Americans who might typically have found a home in the old New Deal coalition have ventured so far outside—even to the point of lionizing a man who rants against the “deep state”?99

A generation ago, when Ronald Reagan famously quipped that government would inevitably screw up a two-car parade, progressives bristled. But the Gipper’s critique wasn’t really directed at the government incarnated by the New Deal, the centralized system where, in the eminent legal scholar James M. Landis’s words, experts were given “grants of power with which to act decisively.”100 Rather, Reagan and his conservative allies were indicting this new, perverse, impenetrable, Kafkaesque alternative. And when progressives, not entirely registering the distinction, fought back by claiming that government was somehow good, or worthwhile, or productive, they came to look untethered to the reality they themselves had helped to create. And yet the distinction was real.

In the decades that followed the 1960s, progressivism became a movement defined by the parks department unable to rebuild ice skating rinks, and the railroad unable to build a train station—a movement of social justice activists unable to act against brutal cops, of climate activists incapable of delivering clean energy, and of housing activists incapable of erecting new homes. By tipping the scales too far from Hamiltonianism, and too far toward Jeffersonianism, progressivism became, in short, a movement of do-gooders unable to do enough good. The diffusion of power hasn’t just undermined government—it has short-sheeted progressivism’s political appeal. And that is perhaps the most important lesson of the last several decades. Absent a progressivism that works, what reformers get is a progressivism left vulnerable to demagoguery. When government appears incompetent, voters turn to figures like Donald Trump.
