Chapter Three
GOVERNING
Michael Barone, America’s foremost political analyst, wonders why America produces so many incompetent eighteen-year-olds but remarkably competent thirty-year-olds. The answer is in his new book, Hard America, Soft America: Competition vs. Coddling and the Battle for the Nation’s Future. It illuminates the two sensibilities that sustain today’s party rivalry.
One answer to Barone’s question is: schools. In 1900, only 10 percent of high-school-age Americans went to high school. Subsequently, schooling became universal and then schools became emblematic of Soft America, suffused with “progressive” values—banning dodgeball and other games deemed too competitive, attempting personality adjustment, promoting self-esteem and almost anyone with a pulse.
In contrast, Barone says, “Hard America plays for keeps: The private sector fires people when profits fall and the military trains under live fire.” Soft America depends on the productivity, creativity, and competence of Hard America, which protects the country and pays its bills.
For a while, Soft America, consisting of those sectors where there is little competition and accountability, threatened to extinguish Hard America. By 1950, America had what Barone calls a Big Unit economy—big business and big labor, with big government often mediating between them. This economy was, Barone says, “inherently soft.” Security, a concept not relevant to the Hard America of the novel Sister Carrie (1900), was central to the corporatist world of conformity—the world of the novel The Man in the Gray Flannel Suit (1955).
With a novelist’s eye for the telling detail, Barone notes that the Labor Department building, constructed in the 1960s, had two conference rooms adjacent to the secretary of labor’s office, one for management, one for labor, so the secretary could shuttle between them. Together, the three big units would work out agreements, passing the costs of them along to consumers in an era much less competitive than today’s deregulated and globalized era.
Between 1947 and 1968, big business got bigger: the share of assets owned by the two hundred largest industrial companies rose from 47 percent to 61 percent. Then came a hardening. Deregulation ended soft niches (e.g., airlines, trucking) protected by government-sponsored cartelization. The Interstate Commerce Commission, which encouraged cartelization, was abolished.
New financial instruments (e.g., junk bonds) fueled hostile takeovers. Capital gains taxes were cut, stimulating entrepreneurship. Between 1970 and 1990, the rate at which companies fell from the Fortune 500 quadrupled. The portion of the gross national product accounted for by the one hundred largest industrial corporations fell from 36 percent in 1974 to 17 percent in 1998.
In 1957, the Soviet Sputnik provoked some hardening of America’s schools—with more science and advanced placement courses, and consolidation of rural schools. President Kennedy’s vow to reach the moon by the end of the 1960s was an inherently hard goal, with a hard deadline measuring success or failure.
But the second half of the 1960s brought the Great Softening—in schools and welfare policies, in an emphasis on redistribution rather than production of wealth and in the criminal justice system. The number of violent crimes per 100,000 people rose from 1,126 in 1960 to 2,747 in 1970 while the prison population declined from 212,000 in 1960 to 196,000 in 1970. In 2000, after the swing toward hardening, there were 1.3 million prisoners.
Barone says racial preferences, which were born in the 1960s and 1970s, fence some blacks off from Hard America, insulating them in “a Soft America where lack of achievement will nonetheless be rewarded.”
The Detroit riot of 1967 lasted six nights before twenty-seven hundred federal troops restored order. In 1992, after the 1980s turn toward hardness, the Los Angeles riots lasted eighteen hours, ending six hours after twenty-five thousand federal troops were dispatched.
In the Soft America of 1970, the tapestry of welfare benefits had a cash value greater than that of a minimum-wage job. In the Harder America of 1996, welfare reform repealed Aid to Families with Dependent Children, a lifetime entitlement to welfare. And in the 1990s, welfare dependency—and crime—were cut in half. A harder, self-disciplined America is a safer America.
What institution is consistently rated most trustworthy by Americans? The institution that ended its reliance on conscription, that has no racial preferences and has rigorous life-and-death rules and standards: the military.
Barone believes that promotion of competition and accountability—hardness—is the shared theme of President Bush’s policies of educational standards, individual health accounts, Social Security investment accounts, and lower tax rates to increase self-reliance in the marketplace. Barone’s book is a guide to electoral map reading: the blue and red states have, respectively, softer and harder sensibilities.
[MAY 9, 2004]
MILWAUKEE—Angela Jobe, thirty-eight, is a grandmother who has lived most of her adult life at ground zero of the struggle to “end welfare as we know it.” At about the time candidate Bill Clinton was promising to do that—in autumn 1991—she boarded a bus in Chicago, heading for Milwaukee, lured by Wisconsin’s larger benefits and lower rents. Unmarried, uneducated, and unemployed, she already had three children and eight years on welfare.
Today, she is in her ninth year of employment in a nursing home, earning $10.50 an hour. How she left welfare, and how her life did and did not change, is one of the entwined stories in Jason DeParle’s riveting new book, American Dream: Three Women, Ten Kids and a Nation’s Drive to End Welfare, the fruit of DeParle’s seven years of immersion in Jobe’s world.
His subject is the attempt of welfare reformers, in Wisconsin and then Washington, to end the intergenerational transmission of poverty in the chaotic lives of fractured families. His book reads more like a searing novel of urban realism—Theodore Dreiser comes to Milwaukee—than a policy tract. His reporting refutes the 1930s paradigm of poverty—the idea that the perennially poor are strivers like everyone else but are blocked by barriers unrelated to their behavior. Angie Jobe is not Tom Joad.
After the liberalization of welfare in the mid-1960s, the percentage of black children born to unmarried mothers reached 50 by 1976 (it is almost 70 today), and within a generation the welfare rolls quadrupled. But DeParle says it was mistakenly assumed that people like Jobe were organizing their lives around having babies to get a check. Actually, he says, their lives were too disorganized for that.
What can help organize lives—at least those that are organizable—is work. The requirements of work—mundane matters such as punctuality, politeness, and hygiene—are essential to the culture of freedom. The 1996 reform replaced a lifetime entitlement to welfare with a five-year limit, and called for states to experiment with work requirements. Welfare rolls have since declined more than 60 percent. DeParle writes:
“In Creek County, Oklahoma, the rolls fell 30 percent even as the Legislature was still debating the law, a decline officials largely attributed to the mere rumors of what was coming…. The late 1990s can be thought of as a bookend to the 1960s. One era, branding welfare a right, sent the rolls to sudden highs; the other, deeming welfare wrong, shrank them equally fast.”
The mass movement from welfare rolls to employment rolls is progress. But DeParle’s unsentimental reporting offers scant confirmation of the welfare reformers’ highest hope, that when former welfare mothers go to work, their example will transform the culture of their homes, breaking the chain of behaviors that passes poverty down the generations. On the street where Jobe lives today, almost every house is the home of a working mother with children but no husband.
When Jobe was thirteen, her parents divorced and she went to live with her father, who let her roam Chicago’s South Side streets. The father of the child she had at seventeen is serving a sixty-five-year prison sentence. And the wheel turns: Jobe’s daughter Kesha got pregnant at sixteen. Kesha told the fourteen-year-old father at his eighth-grade graduation, and hardly heard from him again. Kesha, a checkout clerk at a grocery store, has two children and lives with a boyfriend.
Jobe’s house teems with life. During a recent visit, there were two infants in diapers, and the seventeen-year-old girlfriend who lives there with Jobe’s eighteen-year-old son.
Milwaukee’s mandatory self-esteem classes were part of the “hassle factor” designed to diminish welfare’s appeal. But, says Jobe, “There’s nothing wrong with my self-esteem,” the timbre of her voice validating the assertion. She is a four-foot-nine geyser of pluck, humor, and compassion for her nursing home patients. She has no sense of entitlement. DeParle says of her and the other two women whose story he tells, “When welfare was there for the taking, they got on the bus and took it; when it wasn’t, they made other plans.”
What of her future? Today, she says, “I don’t think much about tomorrow.” Complete absorption in the present is both a cause and a consequence of living a precarious and disorganized life, but so far her postwelfare story illustrates two truisms. First, people respond to strong social cues, as she did when she got on the bus, and later when she got off welfare. Second, poor people are more resilient—and more resistant to fundamental behavior modification—than their various would-be improvers suppose.
[DECEMBER 30, 2004]
If by the dawn’s early light of November 3 George W. Bush stands victorious, seven of ten presidential elections will have been won by Southern Californians and Texans, all Republicans. The other three were won by Democrats—a Georgian and an Arkansan.
This rise of the Sunbelt is both a cause and a consequence of conservatism’s rise, which began in 1964 with, paradoxically, the landslide loss of the second post–Civil War major-party presidential nominee from that region—Arizona’s Barry Goldwater, four years after the first, Richard Nixon. Goldwater’s campaign was the first stirring of a mass movement: Nixon’s 1960 campaign attracted 50,000 individual contributors; Goldwater’s attracted 650,000.
Conservatism’s forty-year climb to dominance receives an examination worthy of its complexity in The Right Nation, the best political book in years. Its British authors, John Micklethwait and Adrian Wooldridge of the Economist, demonstrate that conservative power derives from two sources—its congruence with American values, especially the nation’s anomalous religiosity, and the elaborate infrastructure of think tanks and other institutions that stresses that congruence.
Liberals, now tardily trying to replicate that infrastructure, thought they did not need it because they had academia and the major media. But the former marginalized itself with its silliness, and the latter have been marginalized by their insularity and by competitors born of new technologies.
Liberals complacently believed that the phrase “conservative thinker” was an oxymoron. For years—generations, really—the prestige of the liberal label was such that Herbert Hoover called himself a “true liberal” and Dwight Eisenhower said that cutting federal spending on education would offend “every liberal—including me.”
Liberalism’s apogee came with Lyndon Johnson, who while campaigning against Goldwater proclaimed, “We’re in favor of a lot of things and we’re against mighty few.” Johnson’s landslide win produced a ruinous opportunity—a large liberal majority in Congress, and incontinent legislating. Forty years later, only one-third of Democrats call themselves liberal, whereas two-thirds of Republicans call themselves conservative. Which explains this Micklethwait and Wooldridge observation on the Clinton presidency:
“Left-wing America was given the answer to all its prayers—the most talented politician in a generation, a long period of peace and prosperity, and a series of Republican blunders—and the agenda was still set by the right. Clinton’s big achievements—welfare reform, a balanced budget, a booming stock market and cutting 350,000 people from the federal payroll—would have delighted Ronald Reagan. Whenever Clinton veered to the left—over gays in the military, over health care—he was slapped down.”
Micklethwait and Wooldridge endorse Sir Lewis Namier’s doctrine: “What matters most about political ideas is the underlying emotions, the music to which ideas are a mere libretto, often of very inferior quality.” The emotions underlying conservatism’s long rise include a visceral individualism with religious roots and antistatist consequences.
Europe, postreligious and statist, is puzzled—and alarmed—by a nation where grace is said at half the family dinner tables. But religiosity, say Micklethwait and Wooldridge, “predisposes Americans to see the world in terms of individual virtue rather than in terms of the vast social forces that so preoccupy Europeans.” And: “The percentage of Americans who believe that success is determined by forces outside their control has fallen from 41 percent in 1988 to 32 percent today; by contrast, the percentage of Germans who believe it has risen from 59 percent in 1991 to 68 percent today.” In America, conservatives much more than liberals reject the presumption of individual vulnerability and incompetence that gives rise to liberal statism.
Conservatism rose in the aftermath of Johnson’s Great Society, but skepticism about government is in the nation’s genetic code. Micklethwait and Wooldridge note that in September 1935, during the Depression, Gallup polling found that twice as many Americans said FDR’s administration was spending too much as said it was spending the right amount, and barely one person in ten said it was spending too little.
After FDR’s 1936 reelection, half of all Democrats polled said they wanted FDR’s second term to be more conservative. Only 19 percent wanted it more liberal. In 1980, when Ronald Reagan won while excoriating “big government,” America had lower taxes, a smaller deficit as a percentage of GDP, and a less-enveloping welfare state than any other industrialized Western nation.
America, say Micklethwait and Wooldridge, is among the oldest countries in the sense that it has one of the oldest constitutional regimes. Yet it is “the only developed country in the world never to have had a left-wing government.” And given the country’s broad and deep conservatism, it will not soon have one.
[OCTOBER 10, 2004]
In the 1920s and 1930s, the American left was riven by multiple factions furiously representing different flavors of socialism, each accusing the others of revisionism and deviationism. Leftists comforted themselves with the thought that “you can’t split rotten wood.”
But you can. And the health of a political persuasion can be inversely proportional to the amount of time its adherents spend expelling heretics from the one true (and steadily smaller) church. Today’s arguments about conservatism are, however, evidence of healthy introspection.
The most recent reformer to nail his purifying theses to the door of conservatism’s cathedral is Michael Gerson, a former speechwriter for the current president, and now a syndicated columnist. He advocates Heroic Conservatism in a new book with that trumpet-blast of a title.
His task of vivifying his concept by concrete examples is simplified by the fact that he thinks the Bush administration has been heroically conservative while expanding the welfare state and trying to export democracy. His task of making such conservatism attractive is complicated by the fact that…well, it is not just the Twenty-second Amendment that is preventing the president from seeking a third term.
Gerson, an evangelical Christian, makes “compassion” the defining attribute of political heroism. But compassion is a personal feeling, not a public agenda. To act compassionately is to act to prevent or ameliorate pain and distress. But if there is, as Gerson suggests, a categorical imperative to do so, two things follow. First, politics is reduced to right-mindedness—to having good intentions arising from noble sentiments—and has an attenuated connection with results. Second, limited government must be considered uncompassionate, because the ways to prevent or reduce distress are unlimited.
“We have a responsibility,” Bush said on Labor Day 2003, “that when somebody hurts, government has got to move.” That is less a compassionate thought than a flaunting of sentiment to avoid thinking about government’s limited capacities and unlimited confidence.
Conservatism is a political philosophy concerned with collective aspirations and actions. But conservatism teaches that benevolent government is not always a benefactor. Conservatism’s task is to distinguish between what government can and cannot do, and between what it can do but should not.
Gerson’s call for “idealism” is not an informative exhortation: Huey Long and Calvin Coolidge both had ideals. Gerson’s “heroic conservatism” is, however, a variant of what has been called “national greatness conservatism.” The very name suggests that America will be great if it undertakes this or that great exertion abroad. This grates on conservatives who think America is great, not least because it rarely and usually reluctantly conscripts people into vast collective undertakings.
Most Republican presidential candidates express admiration for Theodore Roosevelt. A real national greatness guy (“I have been hoping and working ardently to bring about our interference in Cuba”), he lamented that America lacked “the stomach for empire.”
He pioneered the practice of governing aggressively by executive orders. Jim Powell, author of Bully Boy, an unenthralled assessment of TR, says that in the forty years from Abraham Lincoln through TR’s predecessor, William McKinley, presidents issued 158 executive orders. In seven years, TR issued 1,007. Only two presidents have issued more—TR’s nemesis Woodrow Wilson (1,791) and TR’s cousin Franklin Roosevelt (3,723).
“I don’t think,” TR said, “that any harm comes from the concentration of power in one man’s hands.” That sort of executive swagger is precisely what Washington does not need more of. It needs more conservatives such as David Keene, chairman of the American Conservative Union for twenty-three years and Southern political director of Ronald Reagan’s 1976 presidential campaign. Writing on “The Conservative Continuum” in the September/October issue of the National Interest, Keene says of Reagan:
“He resorted to military force far less often than many of those who came before him or who have since occupied the Oval Office…. After the [1983] assault on the Marine barracks in Lebanon, it was questioning the wisdom of U.S. involvement that led Reagan to withdraw our troops rather than dig in. He found no good strategic reason to give our regional enemies inviting U.S. targets. Can one imagine one of today’s neoconservative absolutists backing away from any fight anywhere?”
It is a pity that TR built the Panama Canal. If he had not, “national greatness” and “heroic” conservatives could invest their overflowing energies and vaulting ambitions into building it, and other conservatives—call them mere realists—could continue seeking limited government, grounded in cognizance of government’s limited competences. That is an idealism consonant with the nation’s actual greatness.
[NOVEMBER 25, 2007]
In this winter of their discontents, nostalgia for Ronald Reagan has become for many conservatives a substitute for thinking. This mental paralysis—gratitude decaying into idolatry—is sterile: Neither the man nor his moment will recur. Conservatives should face the fact that Reaganism cannot define conservatism.
That is one lesson of John Patrick Diggins’s new book, Ronald Reagan: Fate, Freedom, and the Making of History. Diggins, a historian at the City University of New York, treats Reagan respectfully as an important subject in American intellectual history. The 1980s, he says, thoroughly joined politics to political theory. But he notes that Reagan’s theory was radically unlike that of Edmund Burke, the founder of modern conservatism, and very like that of Burke’s nemesis, Thomas Paine. Burke believed that the past is prescriptive because tradition is a repository of moral wisdom. Reagan frequently quoted Paine’s preposterous cry that “we have it in our power to begin the world over again.”
Diggins’s thesis is that the 1980s were America’s “Emersonian moment” because Reagan, a “political romantic” from the Midwest and West, echoed New England’s Ralph Waldo Emerson. “Emerson was right,” Reagan said several times of the man who wrote, “No law can be sacred to me but that of my nature.” Hence Reagan’s unique, and perhaps oxymoronic, doctrine—conservatism without anxieties. Reagan’s preternatural serenity derived from his conception of the supernatural.
Diggins says Reagan imbibed his mother’s form of Christianity, a strand of nineteenth-century Unitarianism from which Reagan took a foundational belief that he expressed in a 1951 letter: “God couldn’t create evil so the desires he planted in us are good.” This logic—God is good, therefore so are God-given desires—leads to the Emersonian faith that we please God by pleasing ourselves. Therefore there is no need for the people to discipline their desires. So, no leader needs to suggest that the public has shortcomings and should engage in critical self-examination.
Diggins thinks that Reagan’s religion “enables us to forget religion” because it banishes the idea of “a God of judgment and punishment.” Reagan’s popularity was largely the result of “his blaming government for problems that are inherent in democracy itself.” To Reagan, the idea of problems inherent in democracy was unintelligible because it implied that there were inherent problems with the demos—the people. There was nothing—nothing—in Reagan’s thinking akin to Lincoln’s melancholy fatalism, his belief (see his second inaugural) that the failings of the people on both sides of the Civil War were the reasons why “the war came.”
As Diggins says, Reagan’s “theory of government has little reference to the principles of the American founding.” To the Founders, and especially to the wisest of them, James Madison, government’s principal function is to resist, modulate, and even frustrate the public’s unruly passions, which arise from desires.
“The true conservatives, the founders,” Diggins rightly says, constructed a government full of blocking mechanisms—separations of powers, a bicameral legislature, and other checks and balances—in order “to check the demands of the people.” Madison’s Constitution responds to the problem of human nature. “Reagan,” says Diggins, “let human nature off the hook.”
“An unmentionable irony,” writes Diggins, is that big-government conservatism is an inevitable result of Reaganism. “Under Reagan, Americans could live off government and hate it at the same time. Americans blamed government for their dependence upon it.” Unless people have a bad conscience about demanding big government—a dispenser of unending entitlements—they will get ever-larger government. But how can people have a bad conscience after being told (in Reagan’s first inaugural) that they are all heroes? And after being assured that all their desires, which inevitably include desires for government-supplied entitlements, are good?
Similarly, Reagan said that the people never start wars, only governments do. But the Balkans reached a bloody boil because of the absence of effective government. Which describes Iraq today.
Because of Reagan’s role in the dissolution of the Soviet Union, Diggins ranks him among the “three great liberators in American history”—the others being Lincoln and Franklin Roosevelt—and among America’s three or four greatest presidents. But, says Diggins, an Emersonian president who tells us our desires are necessarily good leaves much to be desired.
If the defining doctrine of the Republican Party is limited government, the party must move up from nostalgia and leaven its reverence for Reagan with respect for Madison. As Diggins says, Reaganism tells people comforting and flattering things that they want to hear; the Madisonian persuasion tells them sobering truths that they need to know.
[FEBRUARY 11, 2007]
It has come to this: The crux of the political left’s complaint about Americans is that they are insufficiently materialistic.
For a century, the left has largely failed to enact its agenda for redistributing wealth. What the left has achieved is a rich literature of disappointment, explaining the mystery, as the left sees it, of why most Americans are impervious to the left’s appeal.
An interesting addition to this canon is What’s the Matter with Kansas?: How Conservatives Won the Heart of America. Its author, Thomas Frank, argues that his native Kansas—like the nation, only more so—votes self-destructively, meaning conservatively, because social issues such as abortion distract it from economic self-interest, as the left understands that.
Frank is a formidable controversialist—imagine Michael Moore with a trained brain and an intellectual conscience. Frank has a coherent theory of contemporary politics and expresses it with a verve born of indignation. His carelessness about facts is mild by contemporary standards, or lack thereof, concerning the ethics of controversy.
He says “the preeminent question of our times” is why people misunderstand “their fundamental interests.” But Frank ignores this question: Why does the left disparage what everyday people consider their fundamental interests?
He says the left has been battered by “the Great Backlash” of people of modest means against their obvious benefactor and wise definer of their interests, the Democratic Party. The cultural backlash has been, he believes, craftily manufactured by rich people with the only motives the left understands—money motives. The aim of the rich is to manipulate people of modest means, making them angry about abortion and other social issues so that they will vote for Republicans who will cut taxes on the rich.
Such fevered thinking is a staple of what historian Richard Hofstadter called “the paranoid style in American politics,” a style practiced, even pioneered, a century ago by prairie populists. You will hear its echo in John Edwards’s lament about the “two Americas”—the few rich victimizing the powerless many.
Frank frequently lapses into the cartoon politics of today’s enraged left, as when he says Kansas is a place of “implacable bitterness” and America resembles “a panorama of madness and delusion worthy of Hieronymus Bosch.” Yet he wonders why a majority of Kansans and Americans are put off by people like him who depict their society like that.
He says, delusionally, that conservatives have “smashed the welfare state.” Actually, it was waxing even before George W. Bush’s prescription-drug entitlement. He says, falsely, that the inheritance tax has been “abolished.” He includes the required—by the left’s current catechism—blame of Wal-Mart for destroying the sweetness of Main Street shopping. “Capitalism” is his succinct, if uninformative, explanation of a worldwide phenomenon of the past century—the declining portion of people in agricultural employment—which he seems to regret.
If you believe, as Frank does, that opposing abortion is inexplicably silly, and if you make no more attempt than Frank does to empathize with people who care deeply about it, then of course you, like Frank, will consider scores of millions of your fellow citizens lunatics. Because conservatives have, as Frank says, achieved little cultural change in recent decades, he considers their persistence either absurd or part of a sinister plot to create “cultural turmoil” in order to continue “the erasure of the economic” from politics.
Frank regrets that Bill Clinton’s “triangulation” strategy—minimizing Democrats’ economic differences with Republicans—contributed to the erasure. Politics would indeed be simpler, and more to the liking of liberals, if each citizen were homo economicus, relentlessly calculating his or her economic advantage, and concluding that liberalism serves it. But politics never has been like that, and is becoming even less so.
When the Cold War ended, Pat Moynihan warned, with characteristic prescience, that it would be, like all blessings, a mixed one, because passions—ethnic and religious—that were long frozen would come to a boil. There has been an analogous development in America’s domestic politics.
The economic problem, as understood during two centuries of industrialization, has been solved. We can reliably produce economic growth and have moderated business cycles. Hence many people, emancipated from material concerns, can pour political passions into other—some would say higher—concerns. These include the condition of the culture, as measured by such indexes as the content of popular culture, the agendas of public education, and the prevalence of abortion.
So, what’s the matter with Kansas? Not much, other than that it has not measured up—down, actually—to the left’s hope for a more materialistic politics.
[JULY 8, 2004]
PHILADELPHIA—At one end of Independence Mall, at the historic center of this city where so much of America’s foundational history was made with parchment and ink, stands the brick and mortar of Independence Hall. Built between 1732 and 1756, this model of what is called the Georgian style of architecture is where independence was voted and declared, and where, eleven years later, the Constitution was drafted.
At the other end of the mall sparkles a modernist jewel of America’s civic life, the National Constitution Center, a nongovernment institution that opened July 4, 2003, and already has received more than 2 million visitors. It is built of gray Indiana limestone—it is possible, even in Philadelphia, to have a surfeit of red brick—and lots of glass. The strikingly different, yet compatible, styles of the eighteenth-century building where the Constitution was drafted and the twenty-first-century building where it is explicated and studied in its third century are an architectural bow to the fact that a constitution ratified by a mostly rural nation of 4 million persons, most of whom lived within twenty miles of Atlantic tidewater, still suits an urban nation that extends twenty-five hundred miles into the Pacific.
The center is a marvel of exhibits, many of them interactive. For example, it uses newspapers and film to give immediacy to such episodes as the Supreme Court holding in 1952 that President Truman exceeded his constitutional powers—what a thought: there are limits on the commander in chief’s powers—when he seized the nation’s steel mills to prevent a labor dispute from disrupting war production. And it shows President Eisenhower, thirteen years after sending paratroopers into Normandy, sending them to Central High School in Little Rock.
Throughout, the center illustrates what Professor Felix Frankfurter—before he became Justice Frankfurter—was trying to express more than seventy years ago when he said, “If the Thames is ‘liquid history,’ the Constitution of the United States is most significantly not a document but a stream of history.” But it is, first and always, a document that is to be understood, as the greatest American jurist, John Marshall, said, “chiefly from its words.”
Those words—which, by the way, do not include “federal” or “democracy”—comprise a subtle, complicated structure that nourishes various aims and virtues. So it would be wonderful if some of the liberal groups now gearing up for a histrionic meltdown over the coming debate about the confirmation of John Roberts could spend a few hours at the National Constitution Center. Judging by the river of rhetoric that has flowed in response to the Supreme Court vacancy, contemporary liberalism’s narrative of American constitutional history goes something like this:
On the night of April 18, 1775, Paul Revere galloped through the Massachusetts countryside, and to every Middlesex village and farm went his famous cry of alarm, “The British are coming! The British are coming to menace the ancient British right to abortion!” The next morning, by the rude bridge that arched the flood, their flag to April’s breeze unfurled, the embattled farmers stood and fired the shot heard round the world in defense of the right to abortion. The Articles of Confederation, ratified near the end of the Revolutionary War to Defend Abortion Rights, proved unsatisfactory, so in the summer of 1787, fifty-five framers gathered here to draft a Constitution. Even though this city was sweltering, the Framers kept the windows of Independence Hall closed. Some say that was to keep out the horseflies. Actually, it was to preserve secrecy conducive to calm deliberations about how to craft a more perfect abortion right. The Constitution was ratified after the state conventions vigorously debated the right to abortion. But seventy-four years later, a great Civil War had to be fought to defend the Constitution against states that would secede from the Union rather than acknowledge that a privacy right to abortion is an emanation loitering in the penumbra of other rights. And so on.
The exhibits at the National Constitution Center can correct the monomania of some liberals by reminding them that the Constitution expresses the philosophy of natural rights: People have various rights, including and especially the right to property and self-government. These rights are not created by government, which exists to balance and protect the rights in their variety.
And the center can remind conservatives of an awkward—to some of them—fact: The Constitution was written to correct the defects of the Articles of Confederation. That is, to strengthen the federal government.
[AUGUST 14, 2005]
Using arguments “that range from the unpersuasive to the offensive” (says the Washington Post), Senate Democrats are filibustering the nomination of Miguel Estrada to the U.S. Court of Appeals for the D.C. Circuit because he is conservative. Those senators should examine two recent writings by Judge J. Harvie Wilkinson of the U.S. Court of Appeals for the Fourth Circuit.
In “Is There a Distinctive Conservative Jurisprudence?” (University of Colorado Law Review, Fall 2002), he refutes the charge that there is no principled distinction between the “activism” of the Supreme Court under Chief Justice Rehnquist and that of the New Deal and Earl Warren courts. The Rehnquist Court has indeed invalidated many laws. However, Wilkinson says the earlier courts would “constitutionalize freely,” meaning “extend constitutional rights to a point that impaired the democratic process.” All judicial activism intrudes upon democratic processes, but many of the Rehnquist Court’s invalidations have “restructured democratic responsibilities,” partially restoring the Founders’ understanding of the proper allocation of responsibilities.
In the 1995 overturning of the federal Gun-Free School Zones Act, the Rehnquist Court acted, Wilkinson says, as “a structural referee, not an ideological combatant.” Congress justified preempting states and regulating guns near schools by citing the usual justification for extending its reach—its power to regulate interstate commerce. But, says Wilkinson, “the proposition that regulable commerce must mean something short of everything is hardly debatable.” And the Rehnquist Court’s ruling left states empowered to enact gun-free school zones democratically.
In the Rehnquist Court’s conservative jurisprudence of balancing, federalism does not always trump competing constitutional values. The states, says Wilkinson, are not the only important “mediative institutions” between the individual and the national government. When state and local governments have imposed intrusive regulations on the Boy Scouts (mandating that they accept gay scoutmasters) and political parties (mandating primaries open to persons not members of the party), the Rehnquist Court has restricted states’ powers in the name of the very principle that, in other contexts, caused the court to affirm states’ powers—to protect the intermediary institutions of civil society through which our communal, as opposed to our solitary, selves are expressed. Such protection has been, Wilkinson believes, scanted by the “binary” vision of liberal activism, which is committed to “sweeping and virtually limitless national power” and “the recognition of new individual rights,” but little in between.
In “Why Conservative Jurisprudence Is Compassionate” (to be published in the Virginia Law Review), Wilkinson argues that compassion in jurisprudence is more complicated than merely rendering judicial succor to those with poignant circumstances. Rather, compassion, as judges can properly consider it, begins by understanding this:
Rules—rules that restrict judges’ discretion to heed the promptings of poignancy—have considerable virtues. They give people advance notice of what is permitted and required, they produce uniform and consistent treatment of comparable cases, and they respect whatever democratic processes have produced the rule.
Wilkinson says liberals and conservatives differ about “the place of compassion in the democratic process.” The human condition features myriad misfortunes and devastating conditions. “Victims of social circumstance, however, are altogether distinct from victims of another’s violation of a specific legal duty. It is the job of the democratic process to ameliorate the effects of the former. It is the judiciary’s charge to rectify the latter.”
And “modesty”—for Wilkinson, the cardinal virtue—is required of judges by society’s complexity. Are rent controls compassionate, or do they create a shortage of rental units and a disincentive for landlords to spend on maintenance? Does bilingual education, compassionately intended, impede the mastery of English and upward mobility? Such vexing policy questions about applied compassion are quintessentially those to which democratic rather than judicial processes should provide answers.
For judges struggling with what Wilkinson calls “the inscrutability of compassion,” a guiding principle should be that individual plaintiffs are not the only focus of compassion. Collective entities often are instruments of society’s compassion. Judges must “personalize” social injury, understanding, for example, that inertial law enforcement has its victims. And that although supposedly compassionate malpractice and product-liability awards may increase patient and consumer safety, they also may drive up prices and prevent needed goods from reaching the market.
When next there is a Supreme Court vacancy, Wilkinson’s measured jurisprudence might make him the ideal nominee to silence those whose arguments against judicial conservatism range from the unpersuasive to the offensive.
[MARCH 3, 2003]
In contemporary American politics, as in earlier forms of vaudeville, it helps to have had an easy act to follow. Gerald Reynolds certainly did.
The U.S. Commission on Civil Rights’ new chairman follows Mary Frances Berry, whose seedy career—twenty-four years on the commission, eleven of them as chairman—mixed tawdry peculation, boorish behavior, and absurd rhetoric. Because Reynolds represents such a bracing change, it is tempting to just enjoy the new six-to-two conservative ascendancy on the commission and forgo asking a pertinent question: Why not retire the commission?
Its $9 million budget—about sixty employees and six field offices—is, as Washington reckons these things, negligible. So even Berry’s flamboyant mismanagement of it—several Government Accountability Office reports have said federal guidelines were ignored during her tenure; another report is coming—was small beer, even when including the hundreds of thousands of dollars a year paid to the public relations firm that mediated her relations with the media. But although the monetary savings from closing the commission would be small, two prudential reasons for doing so are large.
One is that someday Democrats will again control the executive branch and may again stock the commission with extremists—Berry celebrated Communist China’s educational system in 1977, when she was assistant secretary of education; she made unsubstantiated charges of vast “disenfranchisement” of Florida voters in 2000—from the wilder shores of racial politics. The second reason for terminating the commission is that civil rights rhetoric has become a crashing bore and, worse, a cause of confusion: Almost everything designated a “civil rights” problem isn’t.
The commission has no enforcement powers, only the power to be, Reynolds says, a “bully pulpit.” And if someone must be preaching from it, by all means let it be Reynolds. Born in the South Bronx, the son of a New York City policeman, he is no stranger to the moral muggings routinely administered to African-American conservatives. But he says, “If you think I’m conservative, you should come with me to a black barbershop. I’m usually the most liberal person there,” where cultural conservatism—on crime, welfare, abortion, schools—flourishes.
After working in some conservative think tanks, he became head of the Department of Education’s Office for Civil Rights in the administration of the first President Bush. He is currently a corporate lawyer in Kansas City, where he has witnessed the handiwork of an imperial judge who, running the school system, ordered the spending of nearly $2 billion in a spectacular, if redundant, proof that increased financial inputs often do not correlate with increased cognitive outputs.
But about this commission as bully pulpit: Does anyone really think America suffers from an insufficiency of talk about race? What is in scarce supply is talk about the meaning of the phrase “civil rights.” Not every need is a right, and if the adjective is a modifier that modifies, not every right is a civil right—one central to participation in civic life.
Reynolds, forty-one, says that the core function of civil rights laws is to prevent discrimination, meaning “the distribution of benefits and burdens on the basis of race.” But if so, today a—perhaps the—principal discriminator is government, with racial preferences and the rest of the reparations system that flows from the assumption that disparities in social outcomes must be caused by discrimination, and should be remedied by government transfers of wealth.
Reynolds rightly says that the core function of the civil rights laws, which required “a lot of heavy lifting by the federal government,” was to dismantle a caste system maintained by law. But that has been accomplished.
It is, as Reynolds says, scandalous that so few black seventeen-year-old males read at grade level; that so many black teenagers are not mentored to think about college as a possibility and of SAT tests as important; that many young blacks—68.2 percent are now born out of wedlock—are enveloped in the culture that appalls Bill Cosby, a culture that disparages academic seriousness as “acting white” and celebrates destructive behaviors. Reynolds is right that much of this can be traced far back to discriminatory events or contexts.
But this is a problem of class, one that is both cause and effect of a cultural crisis. It is rooted in needs, such as functional families and good schools, that are not rights in the sense of enforceable claims. Civil rights laws and enforcement agencies are barely relevant. Proper pulpits—perhaps including barbershops—are relevant. Government pulpits are not.
[MARCH 10, 2005]
Americans are not losing their minds, but they are afraid of using their minds. They are afraid to exercise judgment—afraid of being sued.
In 1924, Will Rogers said Americans thought they were getting smarter because “they’re letting lawyers instead of their conscience be their guide.” Rogers was from Oologah, Oklahoma, where in 1995 a child suffered minor injuries when playing unattended on the slide in the town park. The parents sued the town, which subsequently dismantled the slide.
Products come plastered with imbecilic warnings (on a baby stroller: “Remove child before folding.”) for the same reason seesaws and swings are endangered species of playground equipment: fear of liability. A federal handbook morosely warns: “Seesaw use is quite complex.” So seesaws are being replaced with spring-driven devices used by only one child at a time. Swings? Gracious, suppose a child falls on the—imagine this—ground. The federal handbook again: “Earth surfaces such as soils and hard-packed dirt are not recommended because they have poor shock-absorbing properties.” No wonder a Southern California school district has banned running on the playground.
The early twentieth-century playground movement aimed to acquaint children with mild risks. In 1917, a movement leader said: “It is reasonably evident that if a boy climbs on a swing frame and falls off, the school board is no more responsible for his action than if he climbed into a tree or upon the school building and falls. There can be no more reason for taking out play equipment on account of such an accident than there would be for the removal of the trees or the school building.” Today, New York City cuts branches off trees so children will not be tempted to climb.
Today, when a patient complains of a headache, a doctor, even when knowing that an aspirin is almost certainly the right treatment, may nevertheless order an expensive CAT scan. You cannot be too careful in a country in which six Mississippians have been awarded $150 million not because they are sick, but because they fear that they someday may become sick from asbestos-related illnesses.
Michael Freedman reports in Forbes magazine that 42 percent of obstetricians are leaving the Las Vegas area now that 76 percent of that city’s obstetricians have been sued—40 percent of them three or more times. Pharmaceutical companies are limiting research on “orphan drugs” that treat serious but rare diseases because tort liability is so disproportionate to possible return on investment.
“Dismissing a tenured teacher,” says a California official, “is not a process, it’s a career.” Which is why in a recent five-year period only 62 of California’s 220,000 tenured teachers were dismissed. The multiplication of due-process protections has turned jobs into a property right, undoing the progressive movement’s dream that a civil service would end the tradition of treating public jobs as private property. In 1998, Pennsylvania reported that in the preceding forty years only thirteen teachers had been removed for incompetence. In New York State, terminating a teacher costs an average of $194,000 in legal bills—the cost in time and energy of school officials is extra. Termination is a seven-year process in Detroit.
By the mid-1970s, writes Philip K. Howard, due process “had become a kind of legal airbag inflating instantly” to protect individuals aggrieved about any adverse encounter with authority. Howard’s book, The Collapse of the Common Good: How America’s Lawsuit Culture Undermines Our Freedom, is a compendium of the social havoc caused by the flight from making commonsense judgments. Americans now “tiptoe through the day,” fearful that an angry individual with a lawyer will extort money from society while imposing irrational rules on society.
Oliver Wendell Holmes defined law as “prophecies of what the courts will do.” But Howard rightly says that “nobody has any idea what a court will do.” A Delaware River canoe rental company is found liable on the theory that it should have stationed lifeguards along miles of riverbank. After a church was sued—unsuccessfully—because a parishioner committed suicide, many churches began discouraging counseling by ministers. When any harmful event can give rise to a lawsuit, the result is “law a la carte,” changeable from jury to jury.
The cost of this—in money, health, lives, self-government, and individual liberty—is staggering. Howard, a thinking person’s Quixote, has founded (with Shelby Steele, Mary Ann Glendon, John Silber, George McGovern, Newt Gingrich, and others) Common Good, an organization “to lead a new legal revolution to restore human judgment and values at every level of society.”
To the barricades! The address of the barricades is: ourcommongood.com.
[JUNE 2, 2002]
Some illiberal liberals are trying to restore the luridly misnamed Fairness Doctrine, which until 1987 required broadcasters to devote a reasonable amount of time to presenting fairly each side of a controversial issue. The government was empowered to decide how many sides there were, how much time was reasonable, and what was fair.
By trying to again empower the government to regulate broadcasting, illiberals reveal their lack of confidence in their ability to compete in the marketplace of ideas, and their disdain for consumer sovereignty—and hence for the public.
The illiberals’ transparent, and often proclaimed, objective is to silence talk radio. Liberals strenuously and unsuccessfully attempted to compete in that medium—witness the anemia of their Air America. Talk radio barely existed in 1980, when there were fewer than one hundred talk shows nationwide. The Fairness Doctrine was scrapped in 1987, and today more than fourteen hundred stations are entirely devoted to talk formats. Conservatives dominate talk radio—although no more thoroughly than liberals dominate Hollywood, academia, and much of the mainstream media.
Beginning in 1927, the government, concerned about the scarcity of radio-spectrum access, began regulating the content of broadcasts. In 1928, it decided that the programming of New York’s WEVD, which was owned by the Socialist Party, was not in the public interest. The station’s license was renewed after a warning to show “due regard for the opinions of others.” What was “due”? Who knew?
In 1929, the government refused the Chicago Federation of Labor’s attempt to buy a station because, spectrum space being limited, all stations “should cater to the general public.” A decade later, the government conditioned the renewal of a station’s license on the station’s promise to broadcast no more anti-FDR editorials.
In 1969, the Supreme Court rejected the argument that the Fairness Doctrine violated the First Amendment protection of free speech, saying the doctrine enhanced free speech. The court did not know how the Kennedy administration, anticipating a 1964 race against Barry Goldwater, had wielded the doctrine against stations broadcasting conservative programming. The Democratic Party paid people to monitor conservative broadcasts and coached liberals in how to demand equal time. This campaign burdened stations with litigation costs and won 1,678 hours of free airtime.
Bill Ruder, a member of Kennedy’s subcabinet, said: “Our massive strategy was to use the Fairness Doctrine to challenge and harass right-wing broadcasters in the hope that the challenges would be so costly to them that they would be inhibited and decide it was too expensive to continue.” The Nixon administration frequently threatened the three networks and individual stations with expensive license challenges under the Fairness Doctrine.
In 1973, Supreme Court justice and liberal icon William Douglas said: “The Fairness Doctrine has no place in our First Amendment regime. It puts the head of the camel inside the tent and enables administration after administration to toy with TV and radio.” The Reagan administration scrapped the doctrine because of its chilling effect on controversial speech, and because the scarcity rationale was becoming absurd.
Adam Thierer, writing in the City Journal, notes that today’s “media cornucopia” has made America “as information-rich as any society in history.” In addition to the Internet’s uncountable sources of information, there are fourteen thousand radio stations—twice as many as in 1970—and satellite radio has nearly 14 million subscribers. Eighty-seven percent of households have either cable or satellite television with more than five hundred channels to choose from. There are more than nineteen thousand magazines (up more than five thousand since 1993). Thierer says, consider a black lesbian feminist who hunts and likes country music:
“Would the ‘mainstream media’ of 25 years ago have represented any of her interests? Unlikely. Today, though, this woman can program her TiVo to record her favorite shows on Black Entertainment Television, Logo (a gay/lesbian-oriented cable channel), Oxygen (female-targeted programming), the Outdoor Life Network and Country Music Television.”
Some of today’s illiberals say that media abundance, not scarcity, justifies the Fairness Doctrine: Americans, the poor dears, are bewildered by too many choices. And the plenitude of information sources disperses “the national campfire,” the cozy communitarian experience of the good old days (for liberals), when everyone gathered around—and was dependent on—ABC, NBC, and CBS.
“I believe we need to reregulate the media,” says Howard Dean. Such illiberals argue that the paucity of liberal successes in today’s radio competition—and the success of Fox News—somehow represent “market failure.” That is the regularly recurring, all-purpose rationale for government intervention in markets. Market failure is defined as consumers’ not buying what liberals are selling.
[MAY 7, 2007]
Marriage is the foundation of the natural family and sustains family values. That sentence is inflammatory, perhaps even a hate crime.
At least it is in Oakland, California. That city’s government says the words “natural family,” “marriage,” and “family values” constitute something akin to hate speech, and can be proscribed from the government’s open e-mail system and employee bulletin board.
When the McCain-Feingold law empowered government to regulate the quantity, content, and timing of political campaign speech about government, it was predictable that the right of free speech would increasingly be sacrificed to various social objectives that free speech supposedly impedes. And it was predictable that speech suppression would become an instrument of cultural combat, used to settle ideological scores and advance political agendas by silencing adversaries.
That has happened in Oakland. And, predictably, the ineffable Ninth U.S. Circuit Court of Appeals has ratified this abridgment of First Amendment protections. Fortunately, overturning the Ninth Circuit is steady work for the U.S. Supreme Court.
Some African-American Christian women working for Oakland’s government organized the Good News Employee Association (GNEA), which they announced with a flier describing their group as “a forum for people of Faith to express their views on the contemporary issues of the day. With respect for the Natural Family, Marriage and Family Values.”
The flier was distributed after other employees’ groups, including those advocating gay rights, had advertised their political views and activities on the city’s e-mail system and bulletin board. When the GNEA asked for equal opportunity to communicate by that system and that board, they were denied. Furthermore, the flier they posted was taken down and destroyed by city officials, who declared it “homophobic” and disruptive.
The city government said the flier was “determined” to promote harassment based on sexual orientation. The city warned that the flier and communications like it could result in disciplinary action “up to and including termination.”
Effectively, the city has proscribed any speech that even one person might say questioned the gay rights agenda and therefore created what that person felt was a “hostile environment.” This, even though gay rights advocates used the city’s communication system to advertise “Happy Coming Out Day.” Yet the terms natural family, marriage, and family values are considered intolerably inflammatory.
The treatment of GNEA illustrates one technique by which America’s growing ranks of self-appointed speech police expand their reach: They wait until groups they disagree with, such as GNEA, are provoked to respond to them in public debates, then they persecute them for annoying those to whom they are responding. In Oakland, this dialectic of censorship proceeded on a reasonable premise joined to a preposterous theory.
The premise is that city officials are entitled to maintain workplace order and decorum. The theory is that government supervisors have such unbridled power of prior restraint on speech in the name of protecting order and decorum that they can nullify the First Amendment by declaring that even the mild text of the GNEA flier is inherently disruptive.
The flier supposedly violated the city regulation prohibiting “discrimination and/or harassment based on sexual orientation.” The only cited disruption was one lesbian’s complaint that the flier made her feel “targeted” and “excluded.” So anyone has the power to be a censor just by saying someone’s speech has hurt his or her feelings.
Unless the speech is “progressive.” If GNEA claimed it felt “excluded” by advocacy of the gay rights agenda, would that advocacy have been suppressed? Of course not—although GNEA’s members could plausibly argue that the city’s speech police have created a “hostile workplace environment” against them.
A district court affirmed the city’s right to impose speech regulations that are patently not content neutral. It said the GNEA’s speech interest—the flier—is “vanishingly small.” GNEA, in its brief asking the U.S. Supreme Court to intervene, responds that some of the high court’s seminal First Amendment rulings have concerned small matters, such as the wearing of a T-shirt, standing on a soapbox, holding a picket sign, and “other simple forms of expression.”
Congress is currently trying to enact yet another “hate crime” law that would authorize enhanced punishments for crimes committed because of, among other things, sexual orientation. A coalition of African-American clergy, the High Impact Leadership Coalition, opposes this, fearing it might be used “to muzzle the church.” The clergy argue that in our “litigation prone society” the legislation would result in lawsuits having “a chilling effect” on speech and religious liberty. As the Oakland case demonstrates, that, too, is predictable.
[JUNE 24, 2007]
MINNEAPOLIS—The campaign to deny Luis Paucar his right to economic liberty illustrates the ingenuity people will invest in concocting perverse arguments for novel entitlements. This city’s taxi cartel is offering an audacious new rationalization for corporate welfare, asserting a right—a constitutional right, in perpetuity—to revenues it would have received if Minneapolis’s City Council had not ended the cartel that never should have existed.
Paucar, thirty-seven, embodies the best qualities of American immigrants. He is a splendidly self-sufficient entrepreneur. And he is wielding American principles against some Americans who, in their decadent addiction to government assistance, are trying to litigate themselves to prosperity at the expense of Paucar and the public.
Seventeen years ago, Paucar came to America from Ecuador, and for five years drove a taxi in New York City. Because that city has long been liberalism’s laboratory, many taxi drivers there are akin to, as an economist has said, “modern urban sharecroppers.”
In 1937, New York City, full of liberalism’s itch to regulate everything, knew, just knew, how many taxicab permits there should be. For seventy years, the number (about twelve thousand) has not been significantly changed, so rising prices have been powerless to create new suppliers of taxi services. Under this government-created scarcity, a permit (“medallion”) now costs about $500,000. Most people wealthy enough to buy medallions do not drive cabs, any more than plantation owners picked cotton. They lease their medallions at exorbitant rates to people like Paucar who drive, often for less than $15 an hour, for long days.
Attracted by Minneapolis–St. Paul’s vibrant Hispanic community, now 130,000 strong, Paucar moved here, assuming that economic liberty would be more spacious than in New York. Unfortunately, Minnesota has a “progressive,” meaning statist, tradition that can impede the progress of people like Paucar, at least those who lack his knack for fighting back.
The regulatory impulse came to the upper Midwest with immigrants from northern Europe, many of whom carried the too-much-government traditions of “social democracy.” In the 1940s, under a mayor who soon would take his New Deal liberalism to Washington—Hubert Humphrey—the city capped entry into the taxi business.
By the time Paucar got here in 1999, 343 taxis were permitted. He wanted to launch a fleet of 15. That would have required him to find 15 incumbent license holders willing to sell their licenses for up to $25,000 apiece.
As a by-product of government intervention, a secondary market arose in which government-conferred benefits were traded by the cartel. In 2006, Minneapolis had only one cab for every one thousand residents (compared to three times as many in St. Louis and Boston), which was especially punishing to the poor who lack cars.
That fact—and Paucar’s determination and, eventually, litigiousness; he is a real American—helped persuade the City Council members, liberals all (twelve members of the Democratic Farmer-Labor Party, one member of the Green Party), to vote to allow forty-five new cabs per year until 2010, at which point the cap will disappear. In response, the cartel is asking a federal court to say the cartel’s constitutional rights have been violated. It says the cap—a barrier to entry into the taxi business—constituted an entitlement to profits that now are being “taken” by government action.
The Constitution’s Fifth Amendment says no property shall be “taken” without just compensation. The concept of an injury through “regulatory taking” is familiar and defensible: Such an injury occurs when a government regulation reduces the value of property by restricting its use. But the taxi cartel is claiming a deregulatory taking: It wants compensation because it now faces unanticipated competition.
When the incumbent taxi industry inveigled the city government into creating the cartel, this was a textbook example of rent seeking—getting government to confer advantages on an economic faction in order to disadvantage actual or potential competitors. If the cartel’s argument about a “deregulatory taking” were to prevail, modern government—the regulatory state—would be controlled by a leftward-clicking ratchet: Governments could never deregulate, never undo the damage that they enable rent seekers to do.
By challenging his adopted country to honor its principles of economic liberty and limited government, Paucar, assisted by the local chapter of the libertarian Institute for Justice, is giving a timely demonstration of this fact: Some immigrants, with their acute understanding of why America beckons, refresh our national vigor. It would be wonderful if every time someone like Paucar comes to America, a native-born American rent seeker who has been corrupted by today’s entitlement mentality would leave.
[MAY 27, 2007]
CHICAGO—Thirty-five summers ago, in the angriest year of a boiling era, the forces of peace, love, and understanding—they fancied themselves “flower power”—clashed violently at the Democratic Convention with the police of Mayor Richard Daley. Today, the mayor of this famously muscular city—big-shouldered hog butcher and stacker of wheat—practices flower power. His name is Richard Daley.
The son is in his fourth four-year term. Chicago has been governed by him or his father for thirty-five of the forty-eight years since 1955. Chicago’s name derives from an American Indian word meaning “wild onion,” and the city’s motto Urbs in Horto means “city in a garden.” Daley’s green thumb has produced a city chock-full of gardens.
Including one on the roof of city hall. It has twenty thousand plants of more than 150 species. And three beehives, which produce sixty pounds of honey a year. The hives are emblematic of the intensely practical nature of Daley’s passion to prove that “nature can exist in an urban environment.”
A Daley aide calls him “the Martha Stewart of mayors,” then quickly decides there must be a more felicitous encomium to celebrate his attentions to fine touches conducive to gracious urban living. Daley explains his passion for prettification by recalling that shortly after being elected in 1989, he took a train from Washington to New York, and was struck by the ugliness travelers saw as trains entered and left cities along the way. So today the river of traffic flowing from O’Hare airport to downtown on the Kennedy Expressway passes between, as it were, landscaped riverbanks.
Within most cities, Daley says, people experience “a canyon effect of steel and concrete.” But cities need not mean just “steel, concrete and dirt.” Flowers, he says, are one way “to change the perception of cities.” He has created median strips on some major streets and planted them with flowers “to slow down traffic.” And, he says, “Flowers calm people down.”
This apostle of calming influences is the son of the man called—it is the title of a biography of him—an “American pharaoh.” His father would stop his limousine to pick up littered newspapers, but his passion was not for sensory pleasure but for tidiness. The father considered even Republicans litter: In his last three elections combined, he carried 148 of a possible 150 wards.
The pharaoh was a builder, but his most famous pyramids were not what he wanted. He favored low-rise public housing, but federal pressures forced him to build high-rises that became crime-infested pillars of poverty. The son has been dismantling them. The son, too, is a builder—witness Chicago’s sparkling skyscrapers. But he is primarily a planter.
Since 1989, he has presided over the planting of more than three hundred thousand trees, which he says not only please the eye but “reduce noise, air pollution and summer heat.” Twenty-one underutilized acres around the city have been turned into seventy-two community gardens and parks. The renovation of Soldier Field on the lakefront will include seventeen acres of new parkland. The largest park project, the Calumet Open Space Reserve on the far Southeast Side, is four thousand acres of prairies, wetlands, and forests.
The city contracts with an organization called the Christian Industrial League, which hires many down-and-out persons to wash streets and water plants. And the organization is building a greenhouse that will sell flowers in winter.
The great Chicago architect Louis Sullivan (1856–1924) relished the city’s “intoxicating rawness,” which was still abundant when the first Mayor Daley won the first of his six elections. He became perhaps America’s most important twentieth-century mayor. Alan Ehrenhalt, executive editor of Governing magazine, notes that Chicago could have become a Detroit, a symbol of urban failure.
If the son is a twenty-first-century model mayor, it is because he senses the importance of the senses: Human beings respond to aesthetic values. Daley is not given to flights of theory, so he may not realize the extent to which he is continuing a project begun on this city’s South Side 110 years ago.
Chicago successfully competed against New York, Philadelphia, and Washington for the right to host the World’s Columbian Exposition of 1893, celebrating the four hundredth anniversary of Columbus’s voyage. The White City, as the exposition was called, received 27 million visitors and gave rise to the City Beautiful movement, the premise of which was that cities do not need to be grimly utilitarian, and that improvement of a city’s material environment would be conducive to the moral improvement of its residents. Coarse environments would coarsen people; refined environments would help ameliorate what reformers considered the moral deficiencies of people struggling to adapt to urban life.
A century ago, reforming elites thought of beautification in terms of social control: it would tame the lower orders. Daley, a Chicago chauvinist, primarily just wants his city to be second to none—Second City, indeed. But he also aims at mild social control: He hopes his flowers will calm down everyone.
[AUGUST 4, 2003]
The taxing power of government must be used to provide revenues for legitimate government purposes. It must not be used to regulate the economy or bring about social change.
—PRESIDENT RONALD REAGAN, State of the Union message, February 18, 1981
(b) no portion of the proceeds of such issue is to be used to provide (including the provision of land for) any private or commercial golf course, country club, massage parlor, hot tub facility, suntan facility, racetrack or other facility used for gambling, or any store the principal business of which is the sale of alcoholic beverages for consumption off premises.
—Title 26, Internal Revenue Code (tax exemption requirements for qualified redevelopment bonds)
Well, yes, certainly no massage parlors. Or hot tubs, of course; one shudders to think what happens in those. And tanning facilities, too, are the Devil’s playgrounds. As for racetracks, although state governments promoting their lotteries are America’s most energetic advocates of gambling, government should err on the side of caution when protecting whatever this tax provision protects by frowning on racetracks, hot tubs, and other things.
This peculiar wrinkle in the tax code, first approved the year after President Reagan said the tax code should not be used to leverage social change, makes certain projects ineligible to be financed by industrial redevelopment bonds that are subsidized by preferential tax treatment. This provision recently popped back into the news, thanks to Katrina.
That ill wind blew some (barely) offshore casinos onto the shores of the Gulf Coast. As part of the plan to “rebuild,” as the saying goes, the damaged coast, such bonds are going to be issued. But not promiscuously. Some legislators do not want tax-subsidized bonds to finance the rebuilding of casinos.
Not that the casinos need help: They are rebounding briskly, even expanding. Still, government has a sorry record of dispensing billions in corporate welfare for flourishing businesses.
It is mysterious why states or localities that want casinos operating nearby—and providing jobs and tax revenues—also want them afloat, a few feet from a riverbank or ocean shore. (Mississippi has just decided to let them come ashore.) Does the narrow band of water provide prophylactic protection against sin? The communities already have weighed the sin against the jobs and revenues and found the sin congenial.
But such awkward questions arise when government begins moralizing, especially about the minutiae of life, such as hot tubs. Which brings us to Reagan’s 1981 statement about inappropriate uses of the tax code.
He disliked government using the code to conduct industrial policy, picking commercial winners and losers, which is a recipe for what is called “lemon socialism”—tax subsidies for failing businesses that the market says should fail. Regarding the second part of Reagan’s statement, any tax code is going to shape society. But he opposed manipulating the tax code to stigmatize this or that consumer preference. Which is what the code’s anti-hot-tub provision does.
One wonders: Why did the social improvers who used the code to put the government, in its majesty, on record against hot tubs and tanning facilities not extend their list of disapproved choices? Their list looks morally lax.
Really stern social conservatives probably favor explicitly proscribing government assistance to lots of things, most of them somehow involving sex. Government could preen about being too moral to subsidize, with tax-preferred bonds, economic projects that include bookstores that sell Judy Blume novels, or hotels that offer in-room pornography. And wouldn’t it be fun to find the words lap dance in the nation’s tax code?
As strongly as social conservatives deplore commercialized sex, liberals deplore cigarettes, Big Macs, firearms, fur coats, SUVs, pornography not printed on recycled paper, pornographic movies produced by nonunion studios, holiday trees provocatively labeled “Christmas trees,” and much more.
But do we really want to march down this path paved with moral pronouncements? When government uses subsidies to moralize, as with tax preferences for bonds that can be used to finance this but not that, government is speaking. It is expressing opinions about what is and is not wholesome. And once government starts venting such opinions, how does it stop?
Government could spare itself the stress of moralizing about so many things if it decided that the choices people make with their money are their, not its, business. And government could avoid having opinions about so many things if it would quit subsidizing so many things.
When, for example, the valuation and allocation of money through bonds is left to the market, government can be reticent. And reticent government sounds wonderful.
[JANUARY 8, 2006]
On the north bank of the Ohio River sits Evansville, Indiana, home of David Williams, fifty-two, and of a riverboat casino. During several years of gambling in that casino, Williams, a state auditor earning $35,000 a year, lost approximately $175,000. He had never gambled before the casino sent him a coupon for $20 worth of gambling.
He visited the casino, lost the $20, and left. On his second visit, he lost $800. The casino issued to him, as a good customer, a “Fun Card,” which when used in the casino earns points for meals and drinks, and enables the casino to track the user’s gambling activities. For Williams, those activities became what he calls “electronic morphine.”
By the time he had lost $5,000, he said to himself that if he could get back to even, he would quit. One night he won $5,500, but he did not quit. In 1997, he lost $21,000 to one slot machine in two days. In March 1997, he lost $72,186. He sometimes played two slot machines at a time, all night, until the boat docked at 5 a.m., then went back aboard when the casino opened at 9 a.m. Now he is suing the casino, charging that it should have refused his patronage because it knew he was addicted. It did know he had a problem.
In March 1998, a friend of Williams’s got him involuntarily confined to a treatment center for addictions, and wrote to inform the casino of Williams’s gambling problem. The casino included a photo of Williams among those of banned gamblers, and wrote to him a “cease admissions” letter. Noting the “medical/psychological” nature of problem gambling behavior, the letter said that before being readmitted to the casino he would have to present medical/psychological information demonstrating that patronizing the casino would pose no threat to his safety or well-being.
Although no such evidence was presented, the casino’s marketing department continued to pepper him with mailings. And he entered the casino and used his Fun Card without being detected.
The Wall Street Journal reports that the casino has twenty-four signs warning: “Enjoy the fun…and always bet with your head, not over it.” Every entrance ticket lists a toll-free number for counseling from the Indiana Department of Mental Health. Nevertheless, Williams’s suit charges that the casino, knowing he was “helplessly addicted to gambling,” intentionally worked to “lure” him to “engage in conduct against his will.” Well.
It is unclear what luring was required, given his compulsive behavior. And in what sense was his will operative? The fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) says “pathological gambling” involves persistent, recurring, and uncontrollable pursuit less of money than of the euphoric state of taking risks in quest of a windfall. Pathological gamblers often exhibit distorted thinking (denial, superstition, overconfidence). They lie to friends and family to conceal their behavior, resort to theft or fraud to finance it, and succumb to “chasing”—ever more risky and high-stakes gambling in attempts to recoup losses.
It is worrisome that society is medicalizing more and more behavioral problems, often defining as addictions what earlier, sterner generations explained as weakness of will. Prodded by science, or what purports to be science, society is reclassifying what once were considered character flaws or moral failings as personality disorders akin to physical disabilities.
However, at least several million Americans do have a disposition—a “mental disorder”? a “compulsive disease”?—that seems to make them as unable to gamble responsibly as an alcoholic is unable to drink responsibly. This is a small portion of the nation’s population, but a large pool of misery for themselves and loved ones.
Gambling has been a common feature of American life forever, but for a long time it was broadly considered a sin, or a social disease. Now it is a social policy: The most important and aggressive promoter of gambling in America is government.
Forty-four states have lotteries, twenty-nine have casinos, and most of these states are to varying degrees dependent on—you might say addicted to—revenues from wagering. And since the first Internet gambling site was created in 1995, competition for gamblers’ dollars has become intense. The October 28 issue of Newsweek reported that 2 million gamblers patronize eighteen hundred virtual casinos every week. With $3.5 billion being lost on Internet wagers this year, gambling has passed pornography as the Web’s most lucrative business.
The anonymous, lonely, undistracted nature of online gambling is especially conducive to compulsive behavior. But even if government knew how to move against Internet gambling, what would be its rationale for doing so? Government curbs on private-sector gambling enterprises look like attempts to cripple the competition—to prevent others from poaching on the population of gamblers that government has done so much to enlarge.
David Williams’s suit should trouble this gambling nation. But don’t bet on it.
[NOVEMBER 25, 2002]
Perhaps Prohibition II is being launched because Prohibition I worked so well at getting rid of gin. Or maybe the point is to reassure social conservatives that Republicans remain resolved to purify Americans’ behavior. Incorrigible cynics will say Prohibition II is being undertaken because someone stands to make money from interfering with other people making money.
For whatever reason, last Friday, the president signed into law Prohibition II. You almost have to admire the government’s plucky refusal to heed history’s warnings about the probable futility of this adventure. This time the government is prohibiting Internet gambling by making it illegal for banks or credit-card companies to process payments to online gambling operations on a list the government will prepare.
Last year, about 12 million Americans wagered $6 billion online. But after Congress, thirty-two minutes before adjourning, passed its ban, the stock of the largest online-gambling business, Gibraltar-based PartyGaming, which gets 85 percent of its $1 billion annual revenue from Americans, declined 58 percent in one day, wiping out about $5 billion in market value. The stock of a British company, World Gaming PLC, which gets about 95 percent of its revenue from Americans, plunged 88 percent. The industry, which has some twenty-three hundred websites and did half of its business last year with Americans, has lost $8 billion in market value because of the new law. And you thought the 109th Congress did not accomplish anything.
Supporters of the new law say it merely strengthens enforcement; they claim that Internet gambling is illegal under the Wire Act enacted in 1961, before Al Gore, who was then thirteen, had invented the Internet. But not all courts agree. Supporters of the new law say online gambling sends billions of dollars overseas. But the way to keep the money here is to decriminalize the activity.
The number of online American gamblers, although just one-sixth the number of Americans who visit real casinos annually, doubled in the last year. This competition alarms the nation’s biggest gambling interests—state governments.
It is an iron law: When government uses laws, tariffs, and regulations to restrict the choices of Americans, ostensibly for their own good, someone is going to make money from the paternalism. One of the big winners from the government’s action against online gambling will be the state governments that are America’s most relentless promoters of gambling. Forty-eight states (all but Hawaii and Utah) have some form of legalized gambling. Forty-two states have lottery monopolies. Thirty-four states rake in part of the take from casino gambling, slot machines, or video poker.
The new law actually legalizes online betting on horse racing, Internet state lotteries, and some fantasy sports. The horse-racing industry is a powerful interest. The solidarity of the political class prevents federal officials from interfering with state officials’ lucrative gambling. And woe unto the politicians who get between a sports fan and his fun.
In the private sector, where realism prevails, casino operators are not hot for criminalizing Internet gambling. This is so for two reasons: It is not in their interest for government to wax censorious. And online gambling might whet the appetites of millions for the real casino experience.
Granted, some people gamble too much. And some people eat too many cheeseburgers. But who wants to live in a society that protects the weak-willed by criminalizing cheeseburgers? Besides, the problems—frequently exaggerated—of criminal involvement in gambling, and of underage and addictive gamblers, can be best dealt with by legalization and regulation utilizing new software solutions. Furthermore, taxation of online poker and other gambling could generate billions for governments.
Prohibition I was a porous wall between Americans and their martinis, giving rise to bad gin supplied by bad people. Prohibition II will provoke imaginative evasions as the market supplies what gamblers will demand—payment methods beyond the reach of Congress.
But governments and sundry busybodies seem affronted by the Internet, as they are by any unregulated sphere of life. The speech police are itching to bring bloggers under campaign-finance laws that control the quantity, content, and timing of political discourse. And now, by banning a particular behavior—the entertainment some people choose, using their own money—government has advanced its mother-hen agenda of putting a saddle and bridle on the Internet.
Gambling is, however, as American as the Gold Rush or, for that matter, Wall Street. George Washington deplored the rampant gambling at Valley Forge, but lotteries helped fund his army as well as Harvard, Princeton, and Dartmouth. And Washington endorsed the lottery that helped fund construction of the city that now bears his name, and from which has come a stern—but interestingly selective—disapproval of gambling.
[OCTOBER 23, 2006]
If you have an average-size dinner table, four feet by six feet, put a dime on the edge of it. Think of the surface of the table as the Arctic National Wildlife Refuge in Alaska. The dime is larger than the piece of the coastal plain that would have been opened to drilling for oil and natural gas. The House of Representatives voted for drilling, but the Senate voted against access to what Senator John Kerry, Massachusetts Democrat and presidential aspirant, calls “a few drops of oil.” ANWR could produce, for twenty-five years, at least as much oil as America currently imports from Saudi Arabia.
Six weeks of desultory Senate debate about the energy bill reached an almost comic culmination in…yet another agriculture subsidy. The subsidy is a requirement that will triple the amount of ethanol, which is made from corn, that must be put in gasoline, ostensibly to clean America’s air, actually to buy farmers’ votes.
Over the last three decades, energy use has risen about 30 percent. But so has population, which means per capita energy use is unchanged. And per capita GDP has risen substantially, so we are using 40 percent less energy per dollar output. Which is one reason there is no energy crisis, at least none as most Americans understand such things—a shortage of, and therefore high prices of, gasoline for cars, heating oil for furnaces, and electricity for air conditioners.
In the absence of a crisis to concentrate the attention of the inattentive American majority, an intense faction—full-time environmentalists—goes to work. Spencer Abraham, the secretary of energy, says “the previous administration…simply drew up a list of fuels it didn’t like—nuclear energy, coal, hydropower, and oil—which together account for 73 percent of America’s energy supply.” Well, there are always windmills.
Sometimes lofty environmentalism is a cover for crude politics. The United States has the world’s largest proven reserves of coal. But Mike Oliver, a retired physicist and engineer, and John Hospers, professor emeritus of philosophy at USC, note that in 1996 President Clinton put 68 billion tons of America’s cleanest-burning coal, located in Utah, off-limits for mining, ostensibly for environmental reasons. If every existing U.S. electric power plant burned coal, the 68 billion tons could fuel them for forty-five years at the current rate of consumption. Now power companies must import clean-burning coal, some from mines owned by Indonesia’s Lippo Group, the heavy contributor to Clinton, whose decision about Utah’s coal vastly increased the value of Lippo’s coal.
The United States has just 2.14 percent of the world’s proven reserves of oil, so some people say it is pointless to drill in places like ANWR because “energy independence” is a chimera. Indeed it is. But domestic supplies can provide important insurance against uncertain foreign supplies. And domestic supplies can mean exporting hundreds of billions of dollars less to oil-producing nations, such as Iraq.
Besides, when considering proven reserves, note the adjective. In 1930, the United States had proven reserves of 13 billion barrels. We then fought the Second World War and fueled the most fabulous economic expansion in human history, including the electricity-driven “New Economy.” (Manufacturing and running computers consume 15 percent of U.S. electricity. Internet use alone accounts for half of the growth in demand for electricity.) So by 1990 proven reserves were…17 billion barrels, not counting any in Alaska or Hawaii.
In 1975, proven reserves in the Persian Gulf were 74 billion barrels. In 1993, they were 663 billion, a ninefold increase. At the current rate of consumption, today’s proven reserves would last 150 years. New discoveries will be made, some by vastly improved techniques of deepwater drilling. But environmental policies will define opportunities. The government estimates that beneath the U.S. outer continental shelf, which the government owns, there are at least 46 billion barrels of oil. But only 2 percent of the shelf has been leased for energy development.
Opponents of increased energy production usually argue for decreased consumption. But they flinch from conservation measures. A new $1 gasoline tax would dampen demand for gasoline, but it would stimulate demands for the heads of the tax increasers. After all, Americans get irritable when impersonal market forces add twenty-five cents to the cost of a gallon. Tougher fuel-efficiency requirements for vehicles would save a lot of energy. But who would save the legislators who passed those requirements? Beware the wrath of Americans who like to drive, and autoworkers who like to make, cars that are large, heavy, and safer than the gasoline sippers that environmentalists prefer.
Some environmentalism is a feel-good indulgence for an era of energy abundance, which means an era of avoided choices. Or ignored choices—ignored because if acknowledged, they would not make the choosers feel good. Karl Zinsmeister, editor in chief of the American Enterprise magazine, imagines an oh-so-green environmentalist enjoying the most politically correct product on the planet—Ben & Jerry’s ice cream. Made in a factory that depends on electricity-guzzling refrigeration, a gallon of ice cream requires four gallons of milk. While making that much milk, a cow produces eight gallons of manure, plus flatulence containing another eight gallons of methane, a potent “greenhouse” gas. And the cow consumes lots of water plus three pounds of grain and hay, which is produced with tractor fuel, chemical fertilizers, herbicides, and insecticides, and is transported with truck or train fuel:
“So every time he digs into his Cherry Garcia, the conscientious environmentalist should visualize (in addition to world peace) a pile of grain, water, farm chemicals, and energy inputs much bigger than his ice cream bowl on one side of the table, and, on the other side of the table, a mound of manure eight times the size of his bowl, plus a balloon of methane that would barely fit under the dining room table.”
Cherry Garcia. It’s a choice. Bon appétit.
[MAY 6, 2002]
What good is happiness? It can’t buy money.
—HENNY YOUNGMAN
Social hypochondria is the national disease of the most successful nation. By most indexes, life has improved beyond the dreams of even very recent generations. Yet many Americans, impervious to abundant data and personal experiences, insist that progress is a chimera.
Gregg Easterbrook’s impressive new book, The Progress Paradox: How Life Gets Better While People Feel Worse, explains this perversity. Easterbrook, a Washington journalist and fellow of the Brookings Institution, assaults readers with good news:
American life expectancy has dramatically increased in a century, from forty-seven to seventy-seven years. Our great-great-grandparents all knew someone who died of some disease we never fear. (As recently as 1952, polio killed 3,300 Americans.) Our largest public health problems arise from unlimited supplies of affordable food. The typical American has twice the purchasing power his mother or father had in 1960. A third of America’s families own at least three cars. In 2001, Americans spent $25 billion—more than North Korea’s GDP—on recreational watercraft. Factor out immigration—a huge benefit to the immigrants—and statistical evidence of widening income inequality disappears. The statistic that household incomes are only moderately higher than twenty-five years ago is misleading: Households today average fewer people, so real-dollar incomes in middle-class households are about 50 percent higher today. Since 1970, the number of cars has increased 68 percent and the number of miles driven has increased even more, yet smog has declined by a third and traffic fatalities have declined from 52,627 to 42,815 last year. In 2003, we spend much wealth on things unavailable in 1953—a cleaner environment, reduced mortality through new medical marvels ($5.2 billion a year just for artificial knees, which did not exist a generation ago), the ability to fly anywhere or talk to anyone anywhere. The incidence of heart disease, stroke, and cancer, adjusted for population growth, is declining. The rate of child poverty is down in a decade. America soon will be the first society in which a majority of adults are college graduates.
And so it goes. But such, Easterbrook says, is today’s “discontinuity between prosperity and happiness” that the “surge of national good news” scares people, vexes the news media, and does not even nudge up measurements of happiness. Easterbrook’s explanations include:
• “The tyranny of the small picture.” The preference for bad news produces a focus on smaller remaining problems after larger ones are ameliorated. Ersatz bad news serves the fund-raising of “gloom interest groups.” It also inflates the self-importance of elites, who lose status when society is functioning well. Media elites, especially, have a stake in “headline-amplified anxiety.”
• “Evolution has conditioned us to believe the worst.” In Darwinian natural selection, pessimism, wariness, suspicion, and discontent may be survival traits. Perhaps our relaxed and cheerful progenitors were eaten by saber-toothed tigers. Only the anxiety-prone gene pool prospered.
• “Catalogue-induced anxiety” and “the revenge of the plastic” both cause material abundance to increase unhappiness. The more we can order and charge, the more we are aware of what we do not possess. The “modern tyranny of choice” causes consumers perpetual restlessness and regret.
• The “latest model syndrome” abets the “tyranny of the unnecessary,” which leads to the “ten-hammer syndrome.” We have piled up mountains of marginally improved stuff, in the chaos of which we cannot find any of our nine hammers, so we buy a tenth, and the pile grows higher. Thus does the victor belong to the spoils.
• The cultivation—even celebration—of victimhood by intellectuals, tort lawyers, politicians, and the media is both cause and effect of today’s culture of complaint.
Easterbrook, while arguing that happiness should be let off its leash, is far from complacent. He is scandalized by corporate corruption and poverty in the midst of so much abundance. And he has many commonsensical thoughts on how to redress the imbalance many people feel between their abundance of material things and the scarcity of meaning that they feel in their lives. The gist of his advice is that we should pull up our socks, spiritually, and make meaning by doing good while living well.
His book arrives as the nation enters an election year, when the opposition, like all parties out of power, will try to sow despondency by pointing to lead linings on all silver clouds. His timely warning is that Americans are becoming colorblind, if only to the color silver.
[JANUARY 11, 2004]
Why did we run? Well, those who didn’t run are there yet.
—AN OHIO SOLDIER
CHANCELLORSVILLE, VIRGINIA—The twelve-mile march on May 2, 1863, took Stonewall Jackson from the clearing in the woods where he conferred for the last time with Robert E. Lee, to a spot from which Jackson and thirty thousand troops surveyed the rear of the Union forces. Those forces, commanded by a blowhard, Joe Hooker (“May God have mercy on General Lee, for I shall have none”), were about to experience one of the nastiest shocks of the Civil War.
Two hours before dusk, Federal soldiers were elated when deer, turkeys, and rabbits came pelting out of the woods into their lines. It was, however, not dinner but death approaching. By nightfall, Federal forces were scattered. When the fighting subsided four days later, Lee was emboldened to try to win the war with an invasion of Pennsylvania. The invasion’s high-water mark came at the crossroads town of Gettysburg.
One hundred and thirty-nine years after the battle here, a more protracted struggle is under way. In 1863, the nation’s survival was at stake. Today, only the nation’s memory is at stake. “Only”? Without memory, the reservoir of reverence, what of the nation survives?
Hence the urgency of the people opposing a proposal to build, on acreage over which the struggle surged, 2,350 houses and 2.4 million square feet of commercial and office space. All this would bring a huge increase in traffic, wider highways, and the further submergence of irrecoverable history into a perpetually churned present.
Northern Virginia, beginning about halfway between Richmond and Washington, is a humming marvel of energy and entrepreneurship, an urbanizing swirl of commerce and technology utterly unlike the static rural society favored by Virginia’s favorite social philosopher, Jefferson. Chancellorsville is in an east-west rectangle of terrain about fifteen miles long and ten miles wide, now divided by Interstate 95, that saw four great battles—Fredericksburg, Chancellorsville, Spotsylvania, the Wilderness—involving one hundred thousand killed, wounded, or missing.
Where a slavocracy once existed, Northern dynamism prevails. But Northern Virginia has ample acreage for development, without erasing the landscapes where the Army of Northern Virginia spent its valor. As for the Federals’ side, it is a scandal that the federal government’s cheese-paring parsimony has prevented the purchase of historically significant land—twenty thousand acres, maximum—at Civil War battlefields from Maryland to Mississippi.
Just $10 million annually for a decade—a rounding error for many Washington bureaucracies—would preserve much important battlefield land still outside National Park Service boundaries. The government’s neglect can be only partially rectified by the private work of the Civil War Preservation Trust, just three years old. (You can enlist at www.civilwar.org. Also check www.chancellorsville.org.)
CWPT’s president James Lighthizer, a temperate, grown-up realist, stresses that CWPT’s members are “not whacked-out tree-huggers” who hate development and want to preserve “every piece of ground where Lee’s horse pooped.” But regarding commemorations, Americans today seem inclined to build where they ought not, and not to build where they should, as at the site of the World Trade Center.
In New York City, many people who are antigrowth commerce despisers want to exploit Ground Zero for grinding their old ideological axes. They favor making all or most of the sixteen-acre parcel a cemetery without remains, a place of perpetual mourning—what Richard Brookhiser disapprovingly calls a “deathopolis” in the midst of urban striving.
But most who died at Ground Zero were going about their private pursuits of happiness, murdered by people who detest that American striving. The murderers crashed planes into the Twin Towers, Brookhiser says, “in the same spirit in which a brat kicks a beehive. They will be stung, and the bees will repair the hive.” Let the site have new towers, teeming with renewed striving.
But a battlefield is different. A battlefield is hallowed ground because those who there gave the last full measure of devotion went there because they were devoted unto death to certain things.
Those who clashed at Chancellorsville did so in a war that arose from a clash of large ideas. Some ideas were noble, some were not. But there is ample and stirring evidence that many of the young men caught in the war’s whirlwind could articulate what the fight was about, on both sides. See James M. McPherson’s For Cause and Comrades: Why Men Fought in the Civil War.
Local government here can stop misplaced development from trampling out the contours of the Confederacy’s greatest victory. A Jeffersonian solution.
[SEPTEMBER 22, 2002]
In most movies made to convey dread, the tension flows from uncertainty about what will happen. In United 93, terror comes from knowing exactly what will happen. People who associate cinematic menace with maniacs wielding chain saws will find that there can be an almost unbearable menace in the quotidian—in the small talk of passengers waiting in the boarding area with those who will murder them, in the routine shutting of the plane’s door prior to push-back from the gate at Newark airport on September 11.
But two uncertainties surrounded United 93: Would it find an audience? Should it?
It has found one, which is remarkable, given that in 2005 most moviegoers—57 percent—were persons twelve to twenty-nine years old. Twenty-nine percent were persons twelve to twenty-four. These age cohorts do not seek shattering, saddening experiences to go with their popcorn. In its first weekend, United 93 was the second most watched movie, with the top average gross per theater among major releases. It was on 1,795 screens, and 71 percent of viewers were thirty or older.
To the long list of Britain’s contributions to American cinema—Charles Chaplin, Bob Hope, Cary Grant, Stan Laurel, Deborah Kerr, Vivien Leigh, Maureen O’Hara, Ronald Colman, David Niven, Boris Karloff, Alfred Hitchcock, and others—add Paul Greengrass, writer and director of United 93. He imported into Hollywood the commodity most foreign to it: good taste. This is especially shown in the ensemble of unknown character actors, and nonactors who play roles they know—a real pilot plays the pilot, a former flight attendant plays the head flight attendant—and several persons who play on screen the roles they played on 9/11.
Greengrass’s scrupulosity is evident in the movie’s conscientious, minimal, and minimally speculative departures from the facts about the flight that were painstakingly assembled for The 9/11 Commission Report. This is emphatically not a “docudrama” such as Oliver Stone’s execrable JFK, which was “history” as a form of literary looting in which the filmmaker used just enough facts to lend a patina of specious authenticity to tendentious political ax grinding.
A New York Times story on the “politics of heroism” deals with the question of whether the movie is “inclusive.” Well, perhaps United 93 did violate some egalitarian nicety by suggesting that probably not all the passengers were equally heroic. Amazingly, no one has faulted the movie for ethnic profiling: All the hijackers are portrayed as young, fervently devout Islamic males. Report Greengrass to the U.S. Commission on Civil Rights.
In a movie as spare and restrained as its title, the only excess is the suggestion, itself oblique, that the government responded even more confusedly that morning than was to be expected. Most government people, like the rest of us, were in the process of having their sense of the possible abruptly and radically enlarged.
Going to see United 93 is a civic duty because Samuel Johnson was right: People more often need to be reminded than informed. After an astonishing fifty-six months without a second terrorist attack, this nation perhaps has become dangerously immune to astonishment. The movie may quicken our appreciation of the measures and successes—many of which must remain secret—that have kept would-be killers at bay.
The editors of National Review were wise to view United 93 in the dazzling light still cast by a Memorial Day address, “The Soldier’s Faith,” delivered in 1895 by a veteran of Ball’s Bluff, Antietam, and other Civil War battles. Oliver Wendell Holmes Jr. said why understanding that faith is important:
“In this snug, over-safe corner of the world…we may realize that our comfortable routine is no eternal necessity of things, but merely a little space of calm in the midst of the tempestuous untamed streaming of the world, and in order that we may be ready for danger…. Out of heroism grows faith in the worth of heroism.”
The message of the movie is: We are all potential soldiers. And we all may be, at any moment, at the war’s front, because in this war the front can be anywhere.
The hinge on which the movie turns is thirteen words that a passenger speaks, without histrionics, as he and others prepare to rush the cockpit, shortly before the plane plunges into a Pennsylvania field. The words are: “No one is going to help us. We’ve got to do it ourselves.” Those words not only summarize this nation’s situation in today’s war, but also express a citizen’s general responsibilities in a free society.
[MAY 7, 2006]
Before the dust from the collapsed towers had settled, conventional wisdom had congealed: “Everything has changed.” But what about what matters most, the public’s sensibility?
It has taken five years for 9/11 to receive a novelist’s subtle and satisfying treatment, but it was worth the wait for Claire Messud’s The Emperor’s Children. Her intimation of the mark the attacks made on the American mind is convincing because in her comedy of manners, as in the nation’s life, that horrific event is, oddly, both pivotal and tangential.
Messud’s Manhattan story revolves around two women and a gay man who met as classmates at Brown University and who, as they turn thirty in 2001, vaguely yearn to do something “important” and “serious.” Vagueness—lack of definition—is their defining characteristic. Which may be because—or perhaps why—all three are in the media. All are earnest auditors and aspiring improvers of the nation’s sensibility.
Marina is a glamorous child of privilege because she is the child of Murray, a famous liberal commentator given to saying things such as, to a seminar on Resistance in Postwar America, “once upon a time, poetry did matter.” A former intern at Vogue, Marina lives with her parents, on an allowance from them, on Central Park West. She is having trouble finishing her book on “how complex and profound cultural truths—our mores entire—could be derived from” analysis of changing fashions in children’s clothes. “I want to make a difference.” But get a job? “I worry that will make me ordinary, like everybody else.” She is, her father recognizes, “stymied, now, by the very lack of smallness” in her life, “by the absence of any limitations against which to rebel.”
Danielle, from Ohio, is a producer of documentaries who hopes to “articulate” an “ethos” into a “movement.” Her current project, to raise “questions about integrity and authenticity,” concerns women who had bad experiences with liposuction.
Julius, from Michigan, is an independent book and film reviewer “with a youthful certainty that attitude would carry him.” His “life of Wildean excess and insouciance seemed an accomplishment in itself.” He is “an inchoate ball of ambition,” and is intermittently aware that at thirty “some actual sustained endeavor might be in order.” That might, however, be difficult, given his belief that “regularity was bourgeois.”
The problem the three share is not that their achievements, if there ever are any, will be ephemeral, but that their intentions to achieve them are ephemeral. Not solid, like those of the Australian who comes to New York “to foment revolution.” With a new magazine.
Murray’s nephew, Bootie, a morose autodidact—imagine Holden Caulfield with his nose in a book of Emerson’s essays—rounds out Messud’s central cast, each illustrating Messud’s acute understanding of the Peter Pan complex now rampant among young adults who feel entitled to be extraordinary: “To be your own person, to find your own style—these were the quests of adolescence and young adulthood, pushed, in a youth-obsessed culture, well into middle age.”
Not until page 370 of Messud’s delicious depiction of the quintet’s tangled lives, “torn between Big Ideas and a party,” do the planes hit the towers. Bootie—it could have been any of these people preoccupied with manufacturing interpretations of fashions and fashions of interpretations—has “a fearful thought: you could make something inside your head, as huge and devastating as this, and spill it out into reality, make it really happen.” Imagine that.
Before 9/11, Messud began writing a Manhattan novel about young adults living in the media hall of mirrors. After 9/11, she abandoned it. Then returned to it. Asked if she thought she had written a “9/11 novel,” she demurs: “I wrote an August 1914 novel.” Meaning, “The world I had set out to describe in 2001 had become historical.”
But what had changed? The party, scheduled for 9/11, to launch the Australian’s magazine and the revolution—Renée Zellweger had accepted; Susan Sontag was a maybe—was canceled, as was the magazine. Murray “formulated a reasoned middle ground”: America did not deserve the attacks, but remember the West Bank. “He wasn’t opposed to the invasion of Afghanistan, but qualified about its methods.” Danielle decides to proceed with her liposuction documentary.
Nothing changes everything. And even huge events that, as Messud says, make “certain things seem particularly frivolous” leave most of our enveloping normality largely unscathed. That truth and a heightened sense of the frivolous are conducive to national poise five years into a long war.
[SEPTEMBER 10, 2006]