Few decades in American history are as fabled as the 1960s. They were years of great hope as activists, lawyers, and governmental officials translated American liberal ideas into civil rights laws and Great Society social reforms. They were also years of dashed dreams as the full benefit of those liberal achievements was undercut by the Vietnam War, which drained resources from American cities to ramp up the war effort in Vietnamese provincial towns, remote villages, and jungles. The military campaigns may have taken place halfway around the world, but the battlefronts extended into American homes, dividing loyalties between parents and their children, and into city streets as angry protesters clashed with riot police. It was a decade in which a number of visionary and inspiring political leaders, including Martin Luther King Jr., John F. Kennedy, Malcolm X, and Robert Kennedy, rallied millions to their cause. But it was also the decade in which their lives—and much of the hope they inspired—were cut short by assassins’ bullets.
If “the sixties” evoke images of a decade in which high hopes and daring dreams spiraled into protest and violence, they are also remembered as a period in which these political and social conflicts generated emancipatory ideas of all kinds. The standard historical narrative tells how these contestations produced visions, some realized and others deferred, of the liberation of blacks from white oppression, women from male dominance, gay pride from homophobia, Afros from straightened hair, breasts from bras, the environment from pesticides and overdevelopment, and the younger generation’s life paths from the older generation’s best-laid plans.
Though the sixties were, no doubt, a period of intense cultural ferment, the traditional explanation that the political and social tumult of the period produced radical ideas reverses cause and effect: it treats American politics and society as the conditions that produced radical ideas, rather than recognizing how earlier ideas helped create those very conditions. Many of the radical ideas that convulsed the period were produced early in the decade (before the Big Bang version of the period typically commences), and these are the revolutionary claims that went on to realign American intellectual life for the remainder of the century. To be sure, parsing cause from effect is one of the most difficult challenges in understanding historical change. But the case of “the sixties” suggests that ideas have been historical stimuli, not merely symptoms. Another way to put this is that “the sixties”—and the dramatic decades that followed—started in ideas.
The year 1962 does not announce itself as a significant turning point in history. It does not have the convenient clarity to mark a beginning like 1607, when the first English colonists arrived in America, or 1776 with the Declaration of Independence. Nor is it an 1848, a year when the Mexican American War ended; the California gold rush began; the quest for women’s rights rocked Seneca Falls, New York; and liberal revolutions shook Europe. The year 1962 is not even closely associated with a traumatic event like 1929, when the American stock market crashed so spectacularly that its devastating ripple effects were felt around the world.
The year 1962 does not rival these other years as exceptionally momentous in American political and social history. As with any calendar year in history, some important events did take place. It was the year of the Cuban Missile Crisis, John Glenn’s orbit around Earth, and Jackie Robinson’s election to the Baseball Hall of Fame (the first for an African American). It was also the year when Cesar Chavez founded the United Farm Workers; Kmart opened its first store in Garden City, Michigan; and the animated series about a futurist family, The Jetsons, premiered on television. These are not insignificant milestones. But on the whole, 1962 is not one of those years packed with larger social and political significance.
But from the vantage point of American intellectual history, 1962 (or thereabouts) is crowded with major publications and events that realigned the paths of American thought and culture leading to today’s America. It is the year when Rachel Carson published her exposé Silent Spring, revealing the detrimental effects of synthetic pesticides and launching the modern environmental movement. It is the year when Michael Harrington’s The Other America, a groundbreaking study of poverty in America, became a surprise bestseller that shaped the Great Society programs and debates about the “culture of poverty” for decades to come. It is the year when Ken Kesey’s One Flew Over the Cuckoo’s Nest hit on the theme of how those marked “insane” prove to be less sick than the society that labels them as such, foreshadowing a mode of biting cultural criticism in the dog days of the Vietnam War. And it is the year when Andy Warhol first exhibited his thirty-two Campbell’s Soup Cans canvases, stacked one on top of the other and packed side by side, mimicking all the stately beauty and refinement of a grocery store shelf.
What is remarkable about the intellectual works of 1962 or thereabouts is not only the perspicacity with which they addressed pressing issues but also the ways in which they foreshadowed, and even influenced, the central preoccupations in American thought for decades to come. For example, Marshall McLuhan’s The Gutenberg Galaxy: The Making of Typographic Man, which explores how communication technologies fundamentally transformed the human beings who used them, helped establish the fields of communication and media studies by showing how, as he later put it, “the medium is the message.”1 “Language is metaphor in the sense that it not only stores but translates experience from one mode into another,” McLuhan wrote. “The principle of exchange and translation, or metaphor, is in our rational power to translate all of our senses into one another. This we do every instant of our lives. But the price we pay . . . is that these massive extensions of sense constitute closed systems.”2 McLuhan’s warning about the flattening tendencies of mass society broke open the discursive space for Herbert Marcuse’s One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society (1964), while his critique of the alienating effects of technology would help set the terms for Robert Pirsig’s Zen and the Art of Motorcycle Maintenance (1974). The sentiments expressed in The Gutenberg Galaxy would help to make the related logic of deconstruction, and with it Jacques Derrida’s proclamation a decade later that “there is nothing outside the text,” seem a little less implausible.3
It may be hard to imagine that the leftist Students for a Democratic Society’s (SDS) Port Huron Statement and the staunchly libertarian economist Milton Friedman’s Capitalism and Freedom came out in the same year, but it shows how concerns about the effects and possibilities of postwar American prosperity captured the attention of critical observers in the early 1960s. In June 1962, fifty-nine members of the SDS, inspired by the civil rights movement and focused similarly on issues of racial equality, economic justice, and peace, came together at a labor union resort in Michigan to draft a manifesto for their movement. In it, they laid out the contours of “participatory democracy”—“the art of collectively creating an acceptable pattern of social relations.”4
These young leftists could not have used words more anathema to University of Chicago economics professor Milton Friedman’s ideal vision of a democratic society if they had tried. For Friedman, the most prominent face of the “Chicago School” of economics and its most vigorous critic of Keynesian (or state-interventionist) economics, the best thing a democratic government could do was to get out of its citizens’ way. Translating his technical expertise into accessible and engaging prose for a general audience, Friedman maintained in his wildly popular Capitalism and Freedom that governmental regulations were strangling freedom not just out of the market but out of American political life as well. He asserted that “the invisible hand has been more potent for progress than the visible hand for retrogression.”5 The vision of progress he laid out in rather jaunty, upbeat prose included abolishing the minimum wage, the licensing of doctors, and fair employment laws, as well as introducing vouchers for public schools. While the Port Huron Statement helped re-establish a Progressive-era style of yoking cultural and political criticism to enable a more holistic way of talking about a democratic society, Capitalism and Freedom helped exalt “free choice” and “the market” as indisputable moral goods. An odd couple if there ever was one, they showed how the most persuasive claims for a welfare state and the most alluring arguments for laissez-faire principles simultaneously secured the prominent place they have in Americans’ notions about the economy today.
Less than a year later, two transformative texts would add to this lineup, breaking apart conventional ideas of the proper social “place” for women and people of color while helping to strengthen their respective liberation movements: Betty Friedan’s The Feminine Mystique and Martin Luther King Jr.’s “Letter from Birmingham Jail.” Both are forms of prison writing—Friedan writing from the gilded cage of her suburban home, and King from an Alabama jail cell, where he was held in solitary confinement for violating a court injunction against public protests.
Friedan was a wife, the mother of three, and a labor activist turned journalist for women’s magazines before coming to terms with the fact that the magazines she wrote for contributed to white, middle-class women’s “problem that has no name.” She argued that the “comfortable concentration camp” of the midcentury cult of motherhood and femininity kept women out of the workforce, beholden to their husbands, and tethered to diaper bags, grocery shopping carts, and PTA meetings.6 Friedan drew on Simone de Beauvoir’s The Second Sex (1949) in lamenting that women still regarded men as the measure of what it is to be human, while taking Freudian analysis to task for perpetuating myths about women’s sexuality that gave scientific respectability to outmoded gender ideals. She invoked Mary Wollstonecraft, Angelina Grimké, Elizabeth Cady Stanton, and Margaret Fuller as the first wave of feminist striving, thereby helping to launch what would come to be known as second-wave feminism.
King wrote his letter in response to a “Call for Unity” issued by eight white fellow clergymen, who encouraged African Americans to abandon their protests and leave it to the courts to work out the terms of their fate as American citizens. His audience, however, was broader than that. In his letter, he sought to grab moderate whites by their consciences and shake them free of their muddled arguments for incrementalism. King wanted to redirect their concern from the effects of protests to the myriad causes for them. The letter recognized sympathetic whites’ desire for racial equality while chastising their reluctance to act now to achieve it. “We have waited for more than 340 years for our constitutional and God-given rights,” King wrote before cataloging the abuses visited upon African Americans on a daily basis and emphasizing that only just laws, not unjust ones, demanded assent: “We should never forget that everything Adolf Hitler did in Germany was ‘legal’ and everything the Hungarian freedom fighters did in Hungary was ‘illegal.’ ” He presented a vigorous case for direct nonviolent action suffused with love as a necessary and legitimate “force” to negotiate a future for African Americans. And he reminded his readers of the obvious: “Freedom is never voluntarily given by the oppressor; it must be demanded by the oppressed.”7
Though exploring different struggles and addressing themselves to different audiences, Friedan’s and King’s texts did similar intellectual work in the early 1960s. They provided women and African Americans, respectively, with a trenchant critique of contemporary thought, customs, and law, while offering them, in turn, a vision of equality, justice, and human flourishing.
There is one more crucial text of this moment that was neither an overnight bestseller like Friedan’s book nor widely discussed in the popular press like King’s letter. Nevertheless, it had subtle but penetrating effects that would sneak up on the natural sciences, the human sciences, and ethical thought in the late twentieth century. Thomas Kuhn’s The Structure of Scientific Revolutions (1962) fundamentally challenged long-standing notions of science as a purely objective enterprise. The central idea of the book is that scientific investigation typically takes place in “normal” periods of conventional practice, shared values, and intellectual consensus—what Kuhn called a “paradigm.” But a crisis emerges when a paradigm’s tools and language are not able to deal effectively with evidentiary “anomalies.” In Kuhn’s account, scientific revolutions occur when a rival paradigm comes forth to successfully account for the anomaly, thereby superseding the old one.
This could all seem perfectly uncontroversial but for the fact that, according to Kuhn, scientific knowledge from the earlier paradigm is “incommensurable” with that of the later paradigm. This meant, in effect, that there is no common measure for different scientific paradigms, no airtight way for a paradigm to apprehend an absolute or external “reality” independent of its means of apprehension, and no clear progress in the move from one paradigm to another.
The shocking and controversial implications of Kuhn’s argument radiated out into virtually all fields of scientific and humanistic inquiry. If science does not provide objective measures that transcend all paradigms, how can we decide between contradictory claims of truth if they are wholly relative to one’s particular paradigmatic point of view? Does “incommensurability” mean that reason, rational inquiry, and indisputable evidence are not actually the ways we adjudicate different claims of truth? If science is the study of an external reality by means of social constructs, does that mean that all scientific knowledge is merely a social construct too? Structure became one of the most cited academic texts of the second half of the century, primarily because the answers to these questions had such high stakes in virtually every domain of scholarly inquiry. By dismantling science as a domain of absolute truth, Structure sparked an intellectual revolution of its own, accrediting antifoundationalist ideas that would become the opening salvo of late twentieth-century American postmodernism.
Kuhn never identified himself as a postmodernist, and, indeed, it was quite some time before American thinkers would claim him to be one. This was so because, for much of the late 1960s and 1970s, postmodernism was seen as a French import.
In the late 1960s, a growing number of (mostly) French philosophers began to advance a style of thought that would come to be known as postmodernism (and its variants poststructuralism and deconstruction) while launching their careers as celebrities in American academic literary circles. Some, like Gilles Deleuze, Pierre Klossowski, and Luce Irigaray, came only periodically to participate in a conference, give a lecture, or promote an English translation of one of their books. Others, such as Paul de Man, Jacques Derrida, Jean-François Lyotard, and Michel Foucault, came to hold professorships or make extended stays at American universities, thus securing a more direct and sustained impact on their American readers. The mystique of their ideas, the charisma of their personas, and the dramas and controversies that followed them and their theories made it seem to many observers that these European intellectual superstars were bringing a radically new—and even reckless—mode of anti-universalist thinking to America, and with it a host of moral and epistemological messes for Americans to clean up after them.
Though the term postmodernism first gained traction in intellectual discourses as a way to characterize dramatic changes in 1960s aesthetics, it quickly became associated with an emerging intellectual style in philosophy, literary theory, and social criticism. A way of thinking that insists that the intellectual center of all claims to knowledge does not hold, postmodernism is thus quite difficult to characterize. There are, nevertheless, shared assumptions, commitments, and methods that help identify it. Postmodernism is skeptical of all binary oppositions, rejecting the distinction between “objectivity” and “subjectivity” most vociferously. Once “objectivity” is out, so too is the notion that one’s perception links up to an absolute reality. Many postmodern theorists thus pushed for a McLuhan-like approach to language by demonstrating how language constructs reality rather than simply representing it transparently. They also challenged beliefs or practices claimed to be rooted in nature, arguing that all truths are shaped by human will, desire, and habits. They gave a new name to an old Jamesian way of thinking, calling it “anti-essentialism,” whereas William James had described it as a recognition that “the trail of the human serpent is . . . over everything.” In addition, they took umbrage at the universalist assumptions of something they referred to as “the Enlightenment Project,” which was also a new formulation at this time. Jean-François Lyotard helped to give all these various strains of radical skepticism some coherence in The Postmodern Condition: A Report on Knowledge (1979) when he characterized their common stance as “incredulity toward metanarratives.”8
Figure 8.1 Rejecting what he saw as the stale formalism of modernist architecture, architect Frank Gehry helped usher in a postmodern aesthetic, which rejected rectilinear shapes and jettisoned a visual center. “Postmodernism” first gained traction in American thought through its association with a new aesthetic in architecture before it was enlisted to describe movements in philosophy and literature. Weisman Art Museum (completed 1993), University of Minnesota; Justin Ladia/Flickr
The philosopher and historian Michel Foucault was the thinker who made the most damning criticisms of the Enlightenment project. Of all the postmodernists, he concerned himself most explicitly, and most forcefully, with questions about “power” once divine and even natural bases for it are called into question. How are intellectual, psychological, and social forms of power, authority, and domination expressed, and where do they come from? How are they justified and maintained? To answer these questions, Foucault developed what he called “archaeological” and “genealogical” approaches to the study of social configurations of power as they manifest themselves in worldviews, institutions, and cultural practices. In Discipline and Punish (1975), Foucault extended his investigation of the genealogy of moral codes by looking into the development of modern, more “humane” and “enlightened” approaches to punishing social deviants. These reforms aimed, in his words, “to punish less, but to punish better.”9 He then broadened his study of prisons to understand how new modes of policing, control, surveillance, and reform extended into modern factories, hospitals, schools, and the military, and, in doing so, created a modern system of disciplinary power. Foucault thus helped popularize, like Ken Kesey before him, a skepticism about the “helping professions” and seemingly benign forms of social education and control, asserting that modern notions of health and well-being were themselves quite sick.
Had so many American academics in the humanities in the 1970s, 1980s, and 1990s not found the postmodernists’ ideas about the contingency of truth, the manipulations of language, and the ambiguity of moral regimes so persuasive, the vogue of French theorists might have just been another fleeting intellectual fashion. But because a vocal minority of academics began working with postmodern ideas and interpretive strategies, even turning their spokespeople into adjectives for this way of thinking (“Foucauldian,” “Derridean”), worried observers sounded the alarm that a “foreign invasion” was infecting the academy and, by extension, the tender minds of impressionable students.
University of Chicago classics professor Allan Bloom did the most to popularize the intellectual transformations of the 1970s and 1980s as a foreign invasion, and with his clever and persuasive histrionics turned himself into an equally high-wattage intellectual superstar, a status he held until his death in 1992. In his runaway bestseller The Closing of the American Mind: How Higher Education Has Failed Democracy and Impoverished the Souls of Today’s Students (1987), Bloom cautioned that these rampant antifoundational ideas had foreign pedigrees. However, in his retelling they stemmed not from postwar French thought but rather from late nineteenth-century German philosophy, in particular Nietzsche’s moral relativism. According to Bloom, when Americans went for “no fault divorces,” “conflict resolution,” and “political correctness” instead of adjudicating the truth, they were simply putting into practice the nonsense being taught in the college classroom. He was not belittling Nietzsche and the European thinkers who followed him. He was simply arguing that Americans had become “Philistines” after the convulsions of the 1960s, and by becoming too distrustful of moral authority they were deploying a dumbed-down “value relativism” from Europe. “All awareness of foreignness disappears,” Bloom repined. Little did college students and their baby boomer professors and parents realize that these dark assaults on moral universalism had no place “on enchanted American ground.”10
Though American practitioners of antifoundationalism certainly did not agree with Bloom’s diagnosis, many would have assumed he was right in his assessment that ideas trafficking under the rubric of “postmodernism” had a foreign provenance. Indeed, for some, that was their appeal. Thus, when a historian would later cite American postmodernism as an “exemplary adventure in intellectual dissemination,” he made a claim many assumed to be true.11 What is missing in this account (as only a very few astute observers at the time noticed) is that these varieties of European antifoundationalism were making such big strides in late twentieth-century American life primarily because they were tapping into intellectual habits and commitments long present in American thought.
But just two years after the publication of Bloom’s jeremiad, a young Princeton University professor of religion named Cornel West sought to set the record straight. In The American Evasion of Philosophy: A Genealogy of Pragmatism (1989), West argued that the “evasion of epistemology-centered philosophy” was something bequeathed to late twentieth-century Americans by their own “indigenous” Emerson. He had no objection to calling this “postmodernism,” so long as it was clear to his American readers that this philosophy was their own native tradition of pragmatism coming back dressed in a new linguistic garb. Some other neo-pragmatists, a group of thinkers intent on refocusing attention on the writings of the founders of American pragmatism, were uncomfortable with this comparison. For West, “American pragmatism is less a philosophical tradition putting forward solutions to perennial problems in the Western philosophical conversation . . . and more a continuous cultural commentary or set of interpretations that attempt to explain America to itself at a particular historical moment.” It was a way of thinking particularly well fitted to an American landscape with a “confused populace caught in the whirlwinds of societal crisis, the cross fires of ideological polemics, and the storms of class, racial, and gender conflicts.”12 Emerson understood this. So too did James and Dewey after him. And so too, at the end of the century, were Americans once again welcoming an intellectual “evasiveness” (West’s term for the move away from epistemological foundations and toward situational, contextual problem solving). As West understood it, the vogue of postmodernism was nothing more than Americans recognizing their own rejected thoughts coming back to them with a certain alienated majesty.
If Bloom fired the first shot of the “culture wars” with his Closing of the American Mind, he soon got reinforcement from high-profile pundits and politicians, who treated these postmodern endorsements of intellectual fragmentation and subversion with disdain. Most were variations on a theme charging that the assaults on universals were creating a pernicious gospel of “multiculturalism” in the academy, which was just another means to relativistic ends. Historian Arthur M. Schlesinger Jr. sounded the alarm with his 1991 The Disuniting of America: Reflections on a Multicultural Society. He argued that academics had degenerated into ethnic activists and that advocates were proselytizing to students a way of thinking about America that “belittles unum and glorifies pluribus.”13
Throughout much of the 1970s and early 1980s, one fault line contributing to a sense of a disunited state of America was one long familiar to observers: individualism. There is no period in American history when thinkers have not wrestled with the appropriate balance of power between self-interest and social obligation (think back to republicanism, transcendentalism, and progressivism). But it was at this time that a number of writers offered up new ways of conceptualizing America’s cultural premium on an atomistic, even antisocial, conception of the self. In a 1976 New York magazine article, “The ‘Me’ Decade and the Third Great Awakening,” satirist Tom Wolfe ridiculed what he saw as the profligacy and excesses of the burgeoning self-help movement of the period: “The old alchemical dream was changing base metals into gold. The new alchemical dream is: changing one’s personality—remaking, remodeling, elevating, and polishing one’s very self . . . and observing, studying, and doting on it. (Me!).”14
The historian and cultural critic Christopher Lasch chimed in with an even more devastating critique, The Culture of Narcissism: American Life in an Age of Diminishing Expectations (1979). Lasch turned the word narcissism—which up until then had been a specific clinical term of Freud’s for diagnosing a pathology in an individual—into a way of characterizing the American personality. The narcissist, Lasch insisted, was not someone puffed up on her or his own self-worth. Rather, the narcissist had a frail ego, was choked with rage and self-loathing, and therefore turned to others to satisfy her or his insatiable need for love and confirmation. For Lasch, a culture of narcissists has no concept of—or longing for—larger structures of meaning and obligation once provided by religious and even political commitments. Lasch’s alarm reached the ears of President Jimmy Carter, who invited him to the White House, and Carter later drew on his ideas as he formulated his “crisis of confidence” speech addressing the energy crisis of 1979. Carter warned that the problem facing Americans was not a shortage of gas but a shortage of public spiritedness: a “path that leads to fragmentation and self-interest. Down that road lies a mistaken idea of freedom.”15
Schlesinger, however, was less concerned about atomistic individualism breaking up public spiritedness and more upset by the ethnic, racial, and other group identity rifts running through academic departments on university campuses. In 1968, the first black studies and Chicano studies programs were founded. In 1970 came the first women’s studies and Native American studies programs. Within a few years, these programs had multiplied exponentially across the country. All of them were born of liberation struggles—black studies from the Black Power movement, women’s studies from radical feminism, and so on. The courses of study, like the political movements they grew out of, were explicitly and unapologetically based on group identity. For American students and their professors who recognized themselves as members of oppressed groups, the purpose of identity-based curricula was to demand recognition in relevant fields of study and on the college campus. “Our foremost plight is our transparency,” wrote Sioux lawyer and activist Vine Deloria Jr. in Custer Died for Your Sins: An Indian Manifesto (1969). “The more we try to be ourselves the more we are forced to defend what we have never been. . . . To be an Indian in modern American society is in a very real sense to be unreal and ahistorical.”16 The various “studies” programs thus sought to reverse this long-standing erasure from the historical record of American knowledge production, to reclaim the lost history and knowledge of their group, and to use organic epistemologies to enable victims of oppression to challenge white, male, heterosexual standards and truth claims.
Not long after these different “studies” programs began making a significant impact on college curricula and campuses, the intellectual effects of postmodernism were radiating out in such a way as to call them into question. If there was a governing logic to American thought in the 1980s, it was to add fault lines to fault lines. It first came in the form of theorists complicating such categories of racial and ethnic identity by showing their “intersectionality” with class, gender, sexual orientation, religion, and region. The poet Audre Lorde provides a glimpse into why some insisted that categories of identity and belonging needed to be rethought. “As a forty-nine-year-old Black lesbian feminist socialist mother of two, including one boy, and a member of an interracial couple, I usually find myself a part of some group defined as other, deviant, inferior, or just plain wrong.”17 It was not long after these identitarian claims posed serious challenges to conventional academic disciplines and practices that postmodernism applied pressure to their assumptions by stressing that they were built on the crumbling foundations of essentialism. Postmodernism thus encouraged scholars specializing in race, ethnicity, and gender to rethink the integrity and stability of their most basic terms of self-description.
Judith Butler, professor of comparative literature and rhetoric at the University of California at Berkeley, was one of the most powerful theorists, and certainly the most popular, to question the assumptions and categorical thinking of fellow feminists. In her groundbreaking book Gender Trouble: Feminism and the Subversion of Identity (1990), Butler called into question the “foundationalist fictions” of the sex/gender divide, which viewed sex as something rooted in biology, and gender as constructed by culture. Neither, in her view, was a product of nature or of necessity; both were byproducts of a long heteronormative history, with language—and the ability to name—a site of power. But she went further than that to argue that even the category of “woman” itself is a fiction and a “site of contested meanings” produced by a culture invested in regulating the “subjects” created by those fictions. Butler employed Nietzsche’s and Foucault’s genealogical method of unraveling the historical modes of domination and power knotted up in late twentieth-century categories of thought. Our language for gender, she argued, is merely the performance of outmoded regulatory regimes. Butler, with a flair for difficult phrasing that seemed at once unnecessarily inelegant and somehow oracular, pronounced: “there is no gender identity behind the expressions of gender; . . . identity is performatively constituted by the very ‘expressions’ that are said to be its results.”18
Figure 8.2 American artist Barbara Kruger’s Untitled (You Are Not Yourself) (1981) visually rendered the postmodern death of the subject, while speaking to various forms of female oppression—both externally imposed and internally safeguarded. Kruger became best known for her black-and-white photographs with staccato texts meant to unsettle the viewer’s assumptions about herself, and about American politics and culture. Mary Boone Galleries
The concept of “race” came under similar scrutiny in the 1980s and 1990s, thereby challenging the most basic assumptions of black and ethnic liberation campaigns. Those movements had aimed to release “people of color” (itself an invented category, which surged in popularity in this period) from the constraints of prejudice; now that scrutiny filtered into the arena of language, where even the most basic categories of racial and ethnic thought (“black,” “Hispanic,” and “race” itself) were thrown into question. Philosopher and professor of Afro-American studies at Harvard Kwame Anthony Appiah argued that the latest research challenging the biological basis of contemporary ideas about race demanded the rethinking of racial categories altogether. In his magisterial study In My Father’s House: Africa in the Philosophy of Culture (1992), he interrogated the “metaphysical or mythical unity” of conceptions of “Africa” in Pan-Africanism, African and African American studies, and Western philosophy more broadly, while trying to underscore “the illusions of race.” “In a sense,” observed Appiah, “trying to classify people into a few races is like trying to classify books in a library: you may use a single property—size, say—but you will get a useless classification. . . . No one—not even the most compulsive librarian!—thinks that book classifications reflect deep facts about books. . . . And nobody thinks that a library classification can settle which books we should value.”19 At the same time, discussions of a “postracial” and “postethnic” America called into question not just the integrity but also the usefulness of racial and ethnic essentialism.
Having long been a student of historical notions of American cosmopolitanism, the University of California–Berkeley intellectual historian David Hollinger updated Randolph Bourne’s criticism of the pluralistic schema of a “melting pot” multiculturalism by arguing instead for fluid notions of identity, belonging, and solidarity more in line with late twentieth-century American realities. “Fewer and fewer Americans believe in the biological reality of races,” Hollinger observed, “but they are remarkably willing to live with an officially sanctioned system of demographic classification [in the census] that replicates precisely the crude, colloquial categories, black, yellow, white, red, and brown.”20
As Appiah and Hollinger tried to imagine new forms of cosmopolitan affiliation without essentials, they were not naïve about the consequences of anti-essentialist ideas for American minorities’ experiences. Appiah understood that jettisoning “the biological category of the Negro” should “leave nothing for racists to have an attitude toward,” but they would find a way.21 Indeed, works like Heritage Foundation commentator Dinesh D’Souza’s The End of Racism (1995) demonstrated how readily conservative critics moved in to adopt a similar challenge to the biological basis of race—not to overcome racism, but to show that black people’s cultural deficiencies and their white liberal enablers were to blame. Accepting that race had no biological basis, but not the cultural relativism that could accompany such an acknowledgment, D’Souza lamented that “relativism has now imprisoned liberals in an iron cage that prevents them from acknowledging black pathology.”22 Unsurprisingly, culture war conservatives found D’Souza’s bold argument persuasive and hoped more claims of this sort would help bring an end to the politics of racial grievance and government interventions on behalf of minorities. But African American commentators, who like D’Souza utilized anti-essentialism’s challenge to racial categories, strenuously disagreed that the end of race in any way meant the end of racism. As comedian Chris Rock put it in one of his stand-up routines in the late 1990s: “There ain’t a white man in this room who’d change places with me. None of ya! And I’m rich!”23
Challenging the essentialist assumptions of race was particularly fraught for black intellectuals, who sought to define themselves by affective bonds deeper and more absolute than the shared experience of systematic oppression. As a result, the most demanding analyses of racial and ethnic anti-essentialism came not from conservative culture warriors, but rather from the very minorities intended to benefit from it. One of the most chastening observations came from the black feminist philosopher bell hooks, who warily observed: “We should indeed [be] suspicious of postmodern critiques of the ‘subject’ when they surface at a historical moment when many subjugated people feel themselves coming to voice for the first time.” Though hooks felt herself to be “on the outside of the discourse [of ‘white male intellectuals and/or academic elites’] looking in,” she nevertheless agreed with Appiah that if “such a critique [of essentialism] allows us to affirm multiple black identities, varied black experience,” and challenge “colonial imperialist paradigms of black identity,” which she felt worked to “reinforce and sustain white supremacy,” then the potential payoffs for postmodernism might be worth the risk of forging “blackness” without absolutes.24
Wherever late twentieth-century antifoundationalist thinkers found a category of analysis, a concept, or a truth claiming to be absolute, they called it out as an artifact of human quests for power rather than a window onto reality. Their efforts aroused their interlocutors’ and readers’ enthusiasm, gratitude, concern, and outrage, as well as plenty of ridicule, but rarely indifference. But while the challenge to universalism generated passionate responses on every front, nowhere were the reactions as volatile and the stakes as high as when its logic rippled out to theories about civil and, most especially, human rights.
Only the most perspicacious observers noticed the apparent inconsonance of what Rice University historian Thomas Haskell in 1987 referred to as “The Curious Persistence of Rights Talk in the ‘Age of Interpretation.’ ”25 Soon after, prominent scholars and activists engaged in symposia and attended conferences about whether it was possible to be against universalism and for human rights. They wanted to figure out: Should a thoroughgoing “incredulity toward metanarratives” recuse itself when it brushes up against universalist notions, such as human rights, that carry vital and demonstrable benefits for life in a civil society? Is there any way for antifoundationalism and rights discourse to find common ground or negotiate a peace agreement?
In 1992, Amnesty International held its first Oxford Amnesty Lecture Series to address this question. As the organization put it in a letter of invitation to speakers, including literary theorists Barbara Johnson, Edward Said, Wayne Booth, and Terry Eagleton: “Our lecturers are being asked to consider the consequences of the deconstruction of the self for the liberal tradition. Does the self as construed by the liberal tradition still exist? If not, whose human rights are we defending?”26 To its critics, the notion that antifoundationalism could muster enough intellectual firepower to take down rights was a sign that the American mind had not only closed but been sealed shut against any hope of reason and sobriety. To its theorists and defenders, the same prospect was one of the most pressing issues they needed to reckon with. The problems posed by antifoundationalism were then—and remain today—varied and complex. But while the discourse for so many felt so new and so disorienting, it grew out of intellectual concerns and habits of thinking long familiar in American intellectual life. The debates that ensued were yet another iteration of the persistent question of how to build bridges over abysses in moral viewpoints, of how to create common ground when there are no shared moral grounds to find. This is the question American observers would ask as they found themselves in an age of globalization.