Chapter Eight
Tin-Foil Mortarboards: Conspiracism’s Ivy League Enablers

I have heard it confidently stated . . . that the American troops had been brought to Europe not to fight the Germans but to crush an English revolution. One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.

—George Orwell, 1945

Jacques Derrida’s Hall of Mirrors

Though his name will forever be associated with “deconstruction” and other avant-garde literary doctrines, Paul de Man was an anachronism—a throwback to an age when gentleman scholars with scant formal credentials could be catapulted into the loftiest reaches of the Ivy League on the strength of charm, native intelligence, and personal connections.

After fleeing postwar Belgium in his late twenties, de Man began life in American letters at the bottom rung—serving customers at a Grand Central Station bookshop. But he managed to make friends at Partisan Review, and a letter of introduction from Mary McCarthy landed him a teaching position at Bard College. That in turn led to postings at Harvard’s Society of Fellows (where he overlapped in the 1950s with an up-and-coming linguist named Avram Noam Chomsky), Cornell and, finally, Yale. There, he bestrode the comparative literature program until his death in 1983, preaching austere sermons to graduate students about the “limitations of textual authority.” Conventional literary criticism, he argued, is a sham—a naïve and romantic project aimed at extracting intrinsic meaning from written words that, by their nature, are mere chicken scratches on paper.

As the fad for such ideas crested in the 1970s and 1980s, and scholars made increasingly radical claims about their capacity to reimagine language, de Man became a sort of secular prophet. At his university memorial service, recalls David Mikics, a Yale PhD who later went on to teach English at the University of Houston, onlookers were “struck by the fervent devotion, almost religious in tone, shown to the dead de Man by his disciples. They would carry his work on, in his memory; he had shown the way for all future reading.” Jacques Derrida himself, the father of the deconstructionist creed, and de Man’s close friend, spoke at the event, praising “the ever so gentle force of his thought.”

It is only in this context that one can understand the traumatic impact of what happened three years after his death, when a researcher discovered that de Man had written collaborationist articles during his years as a journalist in Nazi-occupied Belgium. In the most notorious essay, “The Jews and Contemporary Literature,” de Man described the Jews as a “foreign force,” and expressed relief that European cultural forms had not been “enjuivées.” It was published next to a crude anti-Semitic caricature of two elderly Jews, with a caption reading: “May Jehovah confound the Gentiles!” “By keeping, in spite of Semitic interference in all aspects of European life, an intact originality and character, [our civilization] has shown that its basic character is healthy,” the article concluded—this, at a time when Belgium recently had passed laws excluding Jews from a variety of professions, including de Man’s own, journalism. To those who took deconstructionism as their religion, these revelations constituted the secular equivalent of a sex-abuse scandal implicating the Vatican’s most powerful cardinal.

Derrida, in particular, became obsessed with the issue—and published a lengthy essay about it in 1988. But astonishingly, this giant of textual analysis insisted on pretending that de Man’s words signified the exact opposite of their plain meaning, and that the “scandal” over his alleged collaboration was nothing but a conspiracy hatched by malevolent journalists, whose campaign resembled nothing so much as the Nazis’ own “exterminating gesture” against the Jews. Applying his deconstructionist art to “The Jews and Contemporary Literature,” Derrida performed a series of logical back-flips in arguing that de Man had in fact offered an “indictment” of anti-Semitism, not to mention an “uncompromising critique” of the Nazis. At the climax of this fantasy, de Man is claimed to be actually praising the Jews: “The manner in which he describes the ‘Jewish spirit’ remains unquestionably positive.”

Given deconstructionism’s unworkably bleak character (“existentialism at its height, without the existentialist’s belief in human heroism,” as Mikics concisely describes it), its fall from academic fashion was inevitable. But the descent was given a solid push by the de Man revelations. As one Boston University professor told a Newsweek journalist, the creed suddenly seemed like “a vast amnesty project for the politics of collaboration during World War II.”

Perhaps more than de Man’s words themselves, Derrida’s defense of them highlighted the true problem with deconstructionism: Pregnant within the view that words have no stable meaning outside their existence as symbols—Il n’y a pas de hors-texte—is the suggestion that they can mean anything, even their apparent opposite, depending on the perspective of the person communicating or interpreting them. In the political arena, in particular, deconstructionists often have fallen back on Michel Foucault’s maxim that all knowledge—including historical knowledge—is merely a pretext for justifying existing power relationships. There was no “truth,” Foucault declared—only a “regime of truth” that shifted day by day.

“The world begins to seem a realm of illusion, where we have tricked ourselves into supposing that we are real,” wrote Mikics in his 2009 book-length meditation on the subject, Who Was Jacques Derrida? An Intellectual Biography. “The whole history of ideas seemed to him to be a debate, carried on between the lines of great philosophical texts, between the masterful coherence of metaphysics and its deconstructionist opponent, skepticism.”

Out of this view came the idea that destroying the conventional, bourgeois construct of objective truth was not merely a tool of literary analysis, but a sacred intellectual duty to the world’s oppressed. The deconstructionist approach “revealed the volatile core of instability and indeterminacy lurking underneath every philosophical assertion, every scientific method, every work of literature,” literary critic Judith Shulevitz wrote in her reminiscence about studying at Yale under de Man. “Nothing we’d learned (we learned) meant what it claimed to mean. All texts were allegories of their own blindness . . . All this gave me an unusually palpable sense of purpose. I was a mole burrowing under the foundations of the tottering edifice of Knowledge.”

Contrary to caricature, Derrida did not inhabit a universe of pure subjectivity: In many cases, he argued, facts did matter. (Holocaust denial, for instance, was something he found quite troubling.) But the great thinker never found any coherent way to harmonize that concern with his insistence that truth is just a yarn that we spin at each other. In any event, as Derrida himself unwittingly demonstrated in the de Man affair, deconstructionism was the ideal smokescreen for scholars and activists peddling counterfactual interpretations of the world. Even after 9/11, just a few years before his own death, the ur-deconstructionist was still at it, babbling vapidly about the phrase “September 11”: “The telegram of this metonymy—a name, a number—points out the unqualifiable by recognizing that we do not recognize or even cognize that we do not know how to qualify, that we do not know what we are talking about.” On the broader question of political violence, he served up to the same interviewer the usual left-wing digressions into state terrorism and Western imperialism, but also a turgid disquisition about the very unknowability of terrorism: “Semantic instability, irreducible trouble spots on the borders between concepts, indecision in the very concept of the border: All this must not only be analyzed as a speculative disorder, a conceptual chaos or zone of passing turbulence in public or political language.” By the popularization of such bafflegab to legions of impressionable modern-languages students, the great French scholar became the conspiracy theorist’s polite Ivy League cousin—a famous name, and a set of impressive-sounding terms of philosophical art, to be trotted out whenever the blurring of black into white requires a scholarly footnote.

In this regard, deconstructionism dovetailed with a separate intellectual trend that had been underway since the 1960s: modern identity politics, which involved the reconstruction (and in some cases, the wholesale invention) of history according to the viewpoint of women, blacks, gays, and other minorities—a project that replaced the historian’s once-unquestioned goal of objective truth with an explicitly political, Marxist-leaning agenda aimed at empowerment and solidarity-building.

While all good historical scholarship relies, to some degree, on challenging received wisdom about the past, many radicalized New Left historians took this approach to an extreme, romanticizing any historical narrative, however counterfactual or even conspiracist, that challenged dominant attitudes. Scholarship became a species of “resistance”—even to this day, the term appears everywhere in radicalized scholarship and activism—suggesting an analogy to warfare and its maidservant, propaganda. Many faculty-lounge guerrillas took their cue from Frantz Fanon’s 1961 opus, The Wretched of the Earth, which sanctioned any tactic (even wanton murder, as Jean-Paul Sartre emphasized in his famously nihilistic preface) in the service of anti-colonialism. Committed intellectuals, Fanon declared, must create “combat literature” to inspire the coming revolution. The question of objective “truth,” as most people would understand the term, was, of course, secondary. As Peter Novick put it in his extraordinary 1988 book, That Noble Dream: The “Objectivity Question” and the American Historical Profession: “Most leftist historians agreed with Barrington Moore’s observation that ‘in any society the dominant groups are the ones with the most to hide about the way society works,’ and that to the extent that radicals took a jaundiced view of dominant ideology they were more likely to penetrate to the truth, to resemble Mannheim’s ‘free floating intelligentsia.’ ”

Some scholars went further and argued that no single set of truths about the world could be said even to exist for all peoples—since blacks, women, “queers,” and other oppressed groups all have inherently different cognitive approaches. Following on Paulo Freire’s Pedagogy of the Oppressed, some academics turned the podium around, and made students the stars of the classroom: Since challenging oppression was the main goal of education, why should the theoretical ramblings of an educator be privileged over the more “authentic” life lessons related by female, black, and gay students?

In certain fields, entire areas of academic research and inquiry were declared off-limits. In my native Canada, for instance, it has become impossible to have any sort of intelligent debate about the relationship between the continent’s white European settlers, and the aboriginals whose ancestors first migrated from Asia at the end of the last glacial period. The historical truth about first contact between seafaring European explorers and North America’s animist hunter-gatherers—that it was a meeting between two peoples at vastly different stages of technological development—was progressively phased out in favor of a narrative that suggests a meeting of two equal “nations.” (Thus the rebranding of small, scattered aboriginal tribes as “First Nations” in the politically correct Canadian lexicon.) In the same vein, academic curricula were revised according to the fiction that our Western intellectual tradition had been built on the sayings and customs of wise old Indian chiefs.

In one trendy book, for instance, Aboriginal Education: Fulfilling the Promise, Brenda Tsioniaon LaFrance argued that science students should study “units of the Haudenosaunee teachings of the Four Winds, Thunder, Lightning and Sun, along with overall notions of conservation and ideas stemming from Western science.” The study of math should focus on “a survey of aboriginal number systems [as well as] the limits of counting.” In 2008, Canadian philosopher John Ralston Saul went further, arguing in his book A Fair Country: Telling Truths About Canada that Canada is “a Métis civilization” that owes all it has (except for the nasty racist bits) to “Aboriginal inspiration.” The question of how, exactly, groups of occasionally warring, preliterate aboriginal hunter-gatherer societies can claim credit for the creation of a modern, democratic, capitalist, industrial powerhouse built entirely in a European image never gets resolved. But most readers probably didn’t notice: For decades, the unspoken agreement in Canadian academia has been that scholars can peddle any sort of historical nonsense they like about aboriginals—so long as it functions to enhance their dignity. In 1997, no less an authority than the Supreme Court of Canada even declared that aboriginal “oral traditions” stood on a par with written documents as a form of legally admissible evidence.

Such doctrines typically have come packaged with a reductionist, militantly anti-Western view of history that draws a straight line from slavery and imperialism to such modern, “neoimperial” phenomena as globalization, free trade, counter-terrorism, humanitarian military intervention, and nation-building. Even the language we speak to one another became a weapon in the culture war against dead white males: Armed with deconstructionism and related theories, scholars began teasing out the hidden racist, sexist, and heterosexist messages encoded in everything from the Iliad to the Archie Comics to the SAT. University administrators created Black Studies and Women’s Studies departments, laboratories in which society’s bigotries could be diagnosed, and perhaps even cured. In the process, Novick notes, much historical scholarship was transformed into a form of abstruse cheerleading along a set of motifs preapproved by radicalized activists: “overcoming historical neglect; stressing the contributions of the group; an emphasis on oppression, with its troublesome complement, victimization and damage; a search for foreparents in protest and resistance; finally, a celebration of an at least semiautonomous separate cultural realm, with distinctive values and institutions.” It was in these “realms”—detached from the bourgeois conventions of objective truth-seeking, and insulated from mainstream criticism by a cult of political correctness—that deconstructionism and identity politics combined to produce a climate in which university professors felt entitled to spout historical fantasies (Afrocentrism being the most prominent example) and full-blown conspiracism so long as they were cast as doctrines of empowerment.

Consider, for instance, the manner in which men were portrayed by self-described “radical lesbian feminist” and “ecofeminist” Mary Daly, who taught courses in theology and “patriarchy” at Boston College from 1967 to 1999 (at which point she was fired for refusing to admit men into her classes). According to Daly, the human race was divided between “necrophilic” men and “biophilic” (life-loving) women. Christianity and other organized religions also were anathema to the pagan Daly—since she regarded them as interchangeably patriarchal. She even refused to identify herself as a “human being,” since this category was infected by the neurotoxin of maleness: “I hate the ‘human species’—look at it!” she told an interviewer in 1999. “I hate what it is doing to this earth: the invasion of everything. The last two frontiers are the genetic wilderness and the space wilderness; they’ve colonized everything else. It’s a totally invasive mentality—rapist. That is alien, and insofar as I’ve internalized any of that, I’m sorry. I’m contaminated by it. We all are.”

Like an early Zionist, Daly sought to create a geographical “homeland”—one reserved for “women who identify as women.” In her book Quintessence, she rhapsodized that such an all-female, “gynocentric” society wouldn’t need men: reproduction would be accomplished through parthenogenesis. When magazine interviewer Susan Bridle asked Daly what she thought about a related proposal put forward by another radical lesbian feminist, Sally Miller Gearhart, that “the proportion of men must be reduced to and maintained at approximately 10% of the human race,” Daly responded: “I think it’s not a bad idea at all. If life is to survive on this planet, there must be a decontamination of the Earth. I think this will be accompanied by an evolutionary process that will result in a drastic reduction of the population of males. People are afraid to say that kind of stuff anymore.”

The “stuff” Daly referred to here was, of course, eugenics—the “improvement” of humankind through the selective extermination of “undesirable” elements within the population. As the quotation in the paragraph above illustrates, Daly often spoke of men the way the Nazis spoke of Jews. Any modern scholar who made similar remarks about breeding out, say, blacks, or Jews, or gays would instantly become an object of disgrace. Yet Daly continued to be a cult hero among radical feminists until her death in 2010: According to Bridle, she was “one of the most revered visionaries of the contemporary women’s liberation movement” and “the grande dame of feminist theology”—not to mention “a demolition derbyist of patriarchal ‘mindbindings.’ ”

On a basic level, Daly’s hyperpolitically correct conspiracism can be viewed simply as radical populism stood on its head: Like the militant fringe of the Tea Party movement, Daly believed that American society was locked in an ideological war between effeminate, left-wing, pagan ecopacifists and traditionally minded, star-spangled Christian culture warriors. The only point of disagreement is which utopia we should be rooting for—the “necrophilic” America of yesteryear, or the “biophilic” cloud city of the future.

But in one very profound way, Daly’s left-wing campus conspiracism is actually more radical than the sensational YouTube fare served up even by the most delusional New World Order types. That’s because, for all their paranoia, men such as Alex Jones, Michael Ruppert, David Ray Griffin, Richard Gage, and Joseph Farah truly do believe that the facts of history matter—that there is a central, objective, historical truth out there, and that their own investigations are crucial for finding out what it is. Daly, on the other hand, freely admitted that her historical reveries about a utopian “pre-patriarchal” stage of human history were based on romantic invention. But as she told Bridle, this shouldn’t bother anyone:

What is the risk? I mean, we live in hell. This is called hell. H-E-L-L—patriarchy. . . . Is it romantic to try to remember something better than that? There’s a reality gap here. How can I make it clearer? We’re living in hell and [a critic is] talking about a danger of romanticism in imagining something that is a hope for something better in the future? I think that the question comes from not looking deeply enough at the horror of phallocracy. . . . If you experience the horror of what is happening to women all the time, it is almost unbearable, right? All the time! . . . Then, when you are acutely aware of that and desire to exorcise it, the exorcism welcomes, requires, some kind of dream.

All this, I believe, helps explain why there is such a paucity of academic research in the field of conspiracy theories. The tone of the available papers suggests the reason: Most researchers seem hesitant to suggest that any view of the world—no matter how preposterous—is unambiguously wrong. The guiding notion, echoing the plot of Thomas Pynchon’s influential blockbuster, Gravity’s Rainbow, was that the institutionalized conspiracy woven into the fabric of corporate capitalism is more sinister than any narrative concocted by the likes of the Truth movement.

In some cases, I found, full-blown conspiracy theories have even made their way into seemingly mainstream university programs. In late 2010, for instance, the University of Lethbridge, in the Canadian province of Alberta, announced that it was awarding a $7,714 scholarship to conspiracy theorist Joshua Blakeney so that he could pursue his 9/11 Truth research under the direction of Globalization Studies professor Anthony Hall, a fellow Truther.

In 2006, the peer-reviewed Administrative Theory & Praxis, a prestigious quarterly devoted to “critical, normative, and theoretical dialogue in public administration,” and supported by the School of Public Affairs at Arizona State University, published an article by tenured Florida State University professor Lance deHaven-Smith, lamenting that “citizens of the United States continue to be victimized by suspicious incidents that benefit top public officials, and yet Americans have no way of knowing whether the incidents are unavoidable events or, instead, crimes initiated or facilitated by the officials themselves.” DeHaven-Smith’s bill of particulars includes “the defense failures on September 11, 2001 (9/11); the anthrax attacks on U.S. Senators a month later; and the series of terror alerts issued on the basis of flimsy evidence.”

As it turned out, deHaven-Smith was just getting started. In 2008, he coauthored an academic paper detailing the machinations of a “criminal, militaristic/fiscal cabal” operating at the highest echelons of the U.S. government—a group of super-secret James Bond–like agents that he calls “SCAD-Net.” (The acronym stands for State Crimes against Democracy, a term deHaven-Smith proposes as an alternative for “conspiracy theory,” which he complains is “associated with paranoia and hare-brained speculation.”) While the paper begins with the usual array of rarefied academic jargon and dense footnotes, it quickly morphs into a freeform conspiracist meditation on Skull and Bones, Malcolm X, and dozens of other conspiracist obsessions. In particular, we learn, JFK’s death was “probably” the work of J. Edgar Hoover, [CIA official] Richard Helms, Lyndon Johnson, [US Air Force Chief of Staff] Curtis LeMay, and (“almost certainly”) Richard Nixon. The Warren Commission was a “cover-up.” SCAD-Net also likely murdered Robert Kennedy, stole the 2000 presidential election for Bush and Cheney, and perpetrated the 9/11 attacks. And more plots are on the way: One section is entitled “SCAD-Net is likely to strike again in 2012–2013, probably with a ‘dirty’ bomb at a sporting event.” Two years later, in 2010, deHaven-Smith again found a respectable scholarly home for his SCAD-Net conspiracism, this time in American Behavioral Scientist, which devoted its entire February 2010 issue to the subject—much of it consisting of full-blown 9/11 conspiracism.

According to one of my correspondents, who was present when deHaven-Smith and his coauthors presented their 2008 paper at Virginia Commonwealth University, no one in the crowd seemed distressed that their conference had become a forum for conspiracist fantasies. This did not altogether surprise me: Modern academics tend to romanticize the conspiracy theorist (at least in his nonracist manifestation), imagining him to be a source of “countercultural opposition,” “narratives of resistance,” or (as Foucault called them) “subjugated knowledge.” Many, like deHaven-Smith, refuse even to use the term “conspiracy theory.” Writing in the Journal of Black Studies, for instance, Denison University scholar Anita Waters argued instead for the term “ethnosociology,” and urged that we “reserve opinion” about the truth of such theories. By way of example, she cited AIDS conspiracy theories, which might be seen as “a logical outcome of the process by which ‘urban African Americans are struggling to conceptualize the threatening ecological and social decay’ that surrounds them.”

In his introduction to the 2002 book Conspiracy Nation: The Politics of Paranoia in Postwar America, University of Manchester professor Peter Knight assures his readers that “the essays in this collection refuse instantly to dismiss [conspiracism] as the product of narrow-minded crackpot paranoia or the intellectual slumming of those who should know better.” Rutgers University media-studies professor Jack Bratich, another prominent commentator on conspiracism, criticizes the “expertism” of those who would dismiss AIDS conspiracy theories out of hand, and instead seeks a “nuanced approach” that “looks for their origins in social, cultural and economic conditions.” Skip Willman of the University of South Dakota (formerly of the Georgia Institute of Technology) applauds conspiracy theories as “an oppositional political culture in the shadow of the marketplace and its attendant consumerism.” And then there’s Eithne Quinn, one of a long line of university academics to build her career on the literary analysis of rap-music lyrics. In her essay “All Eyez On Me: The Paranoid Style of Tupac Shakur,” she gushes that Tupac’s violent, obscene conspiracism offers “profound connections between the personal and the political, the psychic and the social, the individual and the larger relations of power. Such critical thinking is of course essential to the production of political consciousness.”

Among the Antiracists

Those who authentically commit themselves to the people must re-examine themselves constantly . . . Conversion to the people requires a profound rebirth. Those who undergo it must take on a new form of existence; they can no longer remain as they were. Only through comradeship with the oppressed can the converts understand their characteristic ways of living and behaving, which in diverse moments reflect the structure of domination.

—Paulo Freire, Pedagogy of the Oppressed

Sandy, Jim, and Karen work at a downtown community center where they help low-income residents apply for rental housing. Sandy has a bad feeling about Jim: She notices that when black clients come in, he tends to drift to the back of the office. Sandy suspects racism. (She and Jim are both white.) On the other hand, she also notices that Jim seems to get along well with Karen, who is black. As the weeks go by, Sandy becomes more uncomfortable with the situation. But she feels uncertain about how to handle it. Test question: What should Sandy do?

If you answered that Sandy’s first move should be to talk to Karen, and ask how Jim’s behavior made her feel, you are apparently a better antiracist than I am: That, for what it’s worth, was the preferred solution offered by my instructor at Thinking About Whiteness and Doing Anti-Racism, a four-part evening workshop for community activists, presented in early 2010 at the Toronto Women’s Bookstore.

My own answer, announced aloud in class, was that Sandy should approach Jim discreetly, explaining to him how others in the office might perceive his actions. Or perhaps the manager of the community center could be asked to give a generic presentation about the need to treat clients in a color-blind manner, on a no-names basis.

The problem with my approach, the instructor indicated, lay in the fact that I was primarily concerned with the feelings of my fellow Caucasian, Jim. I wasn’t treating Karen like a “full human being” who might have thoughts and feelings at variance with her superficially friendly workplace attitude.

Moreover, I was guilty of “democratic racism”—by which we apply ostensibly race-neutral principles, such as “due process,” constantly demanding clear “evidence” of wrongdoing, rather than confronting prima facie instances of racism head-on. “It seems we’re always looking for more proof,” said the instructor, an energetic thirtysomething left-wing activist named Sheila Wilmot who’s been teaching this course for several years. “When it comes to racism, you have to trust your gut.”

I felt the urge to pipe up at this. Racism is either a serious charge or it’s not. And if it is, as everyone in this room clearly believed, then it cannot be flung around casually without giving the accused a chance to explain his actions. But I said nothing, and nodded my head along with everyone else. I’d come to this class not to impose my democratic racism on people, but to observe.

Most of the other thirteen students were grad student types in their twenties—too young to remember the late 1980s and early 1990s, when political correctness first took root on college campuses. The jargon I heard at the Women’s Bookstore took me back to that age—albeit with a few odd variations. “Allyship” has replaced “solidarity” in the antiracist lexicon, for instance, when speaking about interracial activist partnerships. I also heard one student say she rejected the term “gender-neutral” as sexist, and instead preferred “gender-fluid.” One did not “have” a gender or sexual orientation, moreover. The operative word is “perform”—as in, “Sally performs her queerness in a very femme way.”

Wilmot’s Cold War–era Marxist jargon added to the retro intellectual vibe. Like just about everyone in the class, she took it for granted that racism is an outgrowth of capitalism, and that fighting one necessarily means fighting the other. At one point, she asked us to critique a case study about “Cecilia,” a community activist who spread a happy message of tolerance and mutual respect in her neighborhood. Cecilia’s approach was incomplete, the instructor informed us, because she neglected to sound the message that “classism is a form of oppression.” The real problem faced by visible minorities in our capitalist society isn’t a lack of understanding; “it’s the fundamentally inequitable nature of wage labor.”

The central theme of the course was that this twinned combination of capitalism and racism has produced a cult of “white privilege,” which permeates every aspect of our lives. “Canada is a white supremacist country, so I assume that I’m racist,” one of the male students said matter-of-factly during our first session. “It’s not about not being racist. Because I know I am. It’s about becoming less racist.” At this, a woman told the class: “I hate when people tell me they’re color-blind. That is the most overt kind of racism. When people say, ‘I don’t see your race,’ I know that’s wrong. To ignore race is to be more racist than to acknowledge race. I call it neo-racism.”

All of the students were white (to my eyes, anyway), and most said they’d come so they could integrate antiracism into their activism and community outreach efforts. A good deal of the course consisted of them unburdening themselves of their racist guilt. The instructor set the tone, describing an episode in which she lectured a junior black colleague about his job. “When I realized what I was doing, I approached him afterward and apologized,” she told the class. “I said to him, ‘I’m so sorry! I’m unloading so much whiteness on you right now.’ ”

Another woman, an activist with an expertise in media arts, took the floor to describe her torment when a friend asked her to give a presentation to a group of black students—an exercise that would have made a spectacle of her white privilege. “Should I say yes? Or is it my responsibility to say no?” she said, quite literally wringing her hands with apprehension. “But then he may say, ‘I want you to do it—because you have a particular approach . . .’

“But wait! Could it be that the reason I have that particular approach is that I’ve been raised to think that I could have that particular approach, that I have the ability, that I am able to access education in a particular way? All these things are in my head, in my heart, not really knowing how to respond. On the other hand, I also recognize that the person asking me has the agency to decide that I’m the right person . . . so I say yes! . . . But then I’m still thinking, ‘I don’t know if I did the right thing.’ I still struggle with this all the time . . .”

An especially telling moment came when someone raised the subject of Filipino nannies who immigrate to Canada under government-sponsored caregiver programs. The instructor told the class that the practice was inherently “superexploitative” (a Marxist term that, according to Wikipedia, means “exploitation that goes beyond the normal standards of exploitation prevalent in capitalist society”). She also pointed us to an article included in the week’s reading, “Black Women and Work,” in which Canadian author Dionne Brand argues that cynical employers use appeals such as, “You know that you’re part of the family,” to emotionally blackmail nannies, housekeepers, and elder-care workers into the continuation of abusive work relationships.

A community activist—I’ll call her Kelly—interjected, apologetically. While she was all on board with the general thrust of the Brand article, she couldn’t help but confess that her own family had employed a Filipino nanny who truly did seem “part of the family.” Kelly had been a flower girl at the nanny’s wedding, and became close friends with the nanny’s own children, who’d spent much of their lives in Kelly’s own house.

This little speech from the heart—one of the rare instances in which someone had actually stepped outside the dogmas of antiracism and told a story in real, human language—caused a ripple of discomfort in the class. One woman suggested that the nanny had adopted a “coping mechanism” to deal with her subordinate situation. This led to a discussion about how we must recognize the nanny’s “agency”—a popular buzzword signifying that minority members must not be seen as passive victims. The instructor listened attentively—but couldn’t offer much beyond observing that the example demonstrated the “contradictoriness” of antiracism studies. We moved on while Kelly just sat there, looking somewhat confused. I felt sorry for her.

In fact, I felt sympathy for just about everyone in that class. Like communist die-hards confessing their counterrevolutionary thought-crimes at a Soviet workers’ council, or devout Catholics on their knees in the confessional, they were consumed by their sin, seeming to regard their pallor as a sort of moral leprosy. Their guilt was never far from the surface: Even basic communication with friends and fellow activists, I observed, was a plodding agony of self-censorship, in which every syllable was scrutinized for subconscious racist connotations as it was leaving their mouths. While politically correct campus activists often come across as smug and single-minded, their intellectual life might more accurately be described as bipolar—combining an ecstatic self-conception as high priestesses who pronounce upon the racist sins of our fallen society with extravagant self-mortification in regard to their own fallen state.

As cultural critics have been arguing for decades, the mindset I am describing betrays many of the hallmarks of totalitarianism: humorlessness, Orwellian neologisms, promiscuous accusations of thought-crimes, and the sanctification of doctrinal purity over candid emotional expression. But I also found it interesting to observe how closely this militant critique of society hewed to a traditional conspiracist narrative, which divides society between an elite, all-controlling oppressor class and everyone else. Or as Wilmot tells it: “In the blinding whiteness that controls our society—who gets what jobs, who is running the governments and business, who controls the media—the lives of people are generally erased.”

Since Wilmot’s dogmas are rooted in communist logic, this aspect should not be surprising: As noted elsewhere in this book, Marxism is, in its broad contours, itself a form of conspiracy theory that pits evil industrialists against the common workingman. But the rise of militant antiracism in the 1980s and 1990s injected a fresh element into conspiracist culture: the notion that the evildoers aren’t Jews, or communists, or Bilderbergers, or aliens, or even capitalists—but rather ourselves. Under this politically correct, profoundly antipopulist form of conspiracism—which has found expression in not just radical antiracism, feminism, and anticapitalism, but a slew of more esoteric academic disciplines, such as postcolonial studies and queer theory—the very fabric of our society represents an invidious plot against anyone who doesn’t happen to be rich, white, male, and straight.

The great irony is that all this was set in motion not by any of the epic crimes perpetrated by the white patriarchy (of which, let it be said, history records many), but by the civil rights movement and the legal and political victories that came in its wake. Once Western liberals launched themselves on the noble hunt for bald-faced bigotry, they expected to find their quarry everywhere—even once the hated creature had become virtually extinct.

Lessons from the Yale Law Journal

Yale University is a diverse place. When it comes to admissions, educational programs, and employment, the school claims in its official policy statements that it does not discriminate for or against any individual on the basis of race, color, or ethnic origin. But it’s widely known that Yale officials take whatever informal measures are required to increase the representation of minorities on campus. During my time at the law school in the mid-1990s, just under 10 percent of each year’s slots were assigned to African Americans—this, despite the fact that, as at other highly ranked U.S. law schools, few black applicants meet the school’s general standard for undergraduate grade-point averages and scores on the Law School Admission Test (LSAT).

If the benefits of “diversity” are to be reaped anywhere, Yale Law School is the place. On some campuses, the ugliest aspect of affirmative action is that, by bringing in a population of less qualified students, it inevitably generates a racially stratified hierarchy of academic performance. At Yale Law School, on the other hand, classes are pass/fail affairs that, in practice, everybody passes. While it’s possible for students to earn an “honors” grade in their course work, the school’s culture actively discourages Paper Chase–style competition. An aphorism frequently recited among the faculty in my day was that, once admitted to Yale, students were “off the treadmill.”

But as my first year of law school wore on, it became clear that this was not quite true. Though grades count for less at Yale than at most schools, extracurricular activities count for more. The typical student has high ambitions. He does not merely dream of passing the local bar exam and joining a firm but rather aspires to become a judge, an academic, or a federal prosecutor—goals requiring, as a first step, the steady accumulation of accolades during one’s term at law school itself. These include, most notably, membership on the editorial staff of the prestigious Yale Law Journal.

From the point of view of race relations, the Journal presents a problem. Under the rules in place during my time, applicants were required to complete a forty-eight-hour take-home exam testing their abilities in writing, editing, and the formatting of legal footnotes. The identities of the test-takers were unknown to the graders, and no accommodation was made for “underrepresented minorities.” Out of eighty-four white applicants in my year, fifty-two made the cut, as did five out of twelve Asians. Out of the seven black applicants, none was successful.

This was not a one-time phenomenon. In the previous year’s competition, eleven blacks had applied, of whom only one was accepted. The result was that, overall, the editorial membership of the Journal was overwhelmingly white and Asian. Out of 113 members, only two were black.

When these numbers were released, a scandal erupted. Journal officials convened a public meeting to discuss the problem, filling one of the law school’s biggest classrooms with a standing-room-only crowd that stayed for three hours. It was an angry meeting—and also an awkward one. The problem was that no one dared mention the most obvious explanation for the racial imbalance that everybody decried. To refer, even obliquely, to the race-tagged stratification of talent at the school would have been humiliating for black students. So instead we censored ourselves and invoked esoteric theories of racial exclusion. The most popular of these was that black applicants approached the writing component of the Journal exam with a special “black” style that was routinely and unfairly marked down by the test’s administrators. Some speakers argued that the test itself, like other such standardized exercises, amounted to a collection of culturally biased riddles. At the meeting, and in other campus discussions of the issue, many of my classmates folded their criticisms into a more general argument: the will of black applicants had been sapped by the “institutional racism” that allegedly pervaded Yale Law School.

It was around this time that I began noticing a broadening social estrangement at the school along racial lines. Since the only way to explain the racial gap at the Law Journal while simultaneously preserving the academic dignity of black students was to endorse various theories of alienation, black students were encouraged to see signs of such alienation in the neo-Gothic law school’s every frieze and stained-glass medallion. A great deal was made of the absence of black “role models” on campus—especially black female role models. One of my fellow students argued in a public complaint that “in this environment, women students of color must fashion their professional personas out of thin air, because almost none of their professional mentors look anything like them.” Another lamented: “How can I think that my ideas are respected here when people who are just like me—black women—aren’t considered ‘good enough’ to teach here as full professors?”

Much grist was provided by small incidents. When a study group ejected one of its members, a black student whose contributions apparently were subpar, the spurned member posted a J’accuse manifesto charging racism. In another supposed manifestation of “micro-racism” (as such phenomena would later be described by antiracism advocates), one of my classmates complained in an essay that she’d been “excluded and alienated from the classroom environment” by her criminal-law professor, who had unconscionably confined discussion about race to a three-week segment of the semester.

In the classroom, certainly, the promised educational benefits of diversity rarely materialized. By promoting the idea that blacks thought and wrote in a special black style, the fallout from the Law Journal scandal reinforced the conceit that blacks and whites inhabit mutually impenetrable ideological worlds. Whites became increasingly reluctant to offer any comment that might be interpreted as threatening to blacks, while classroom comments by black students on any race-charged issue would almost always go unchallenged. Among my white peers, there was a feeling that sentiments expressed by black students had to be treated, in some abstract sense, as correct for blacks, and therefore immune from refutation. In general, most students were terrified of being accused of racism; when a subject connected to race came up, they either uttered platitudes or kept their mouths shut.

What helped me understand the benign origins of this surreal, emperor-has-no-clothes hunt for racist phantasms was the actual substance of my coursework—and, in particular, my classes on constitutional law—in which much of the material focused on the great victories against the very real racism embedded in America’s legal framework until well into the 1960s, and arguably beyond that. (For instance, Loving v. Virginia, the U.S. Supreme Court case striking down Virginia’s antimiscegenation statute, was not decided until 1967.) The 1954 case of Brown v. Board of Education, in particular, was taken as the ultimate touchstone of America’s moral redemption; just as its racist 1896 precedent, Plessy v. Ferguson—upholding the doctrine of “separate but equal”—was taken as a byword for racist hypocrisy. Like everyone else in my class, I remember being genuinely moved by Brown and similar cases, and by the back-stories of the litigants who’d fought them. While most of us knew we were destined to become anonymous corporate lawyers and litigators in large law firms, the civil rights crusade launched by our professional forebears filled us with lawyerly pride. Thanks to them, blacks were living Martin Luther King’s dream of full racial equality, and overt racism had been pushed to the margins of American society. Armed with the same individual rights as the rest of us, we hoped and expected, blacks would quickly rise to the same level as their white neighbors.

Except, three decades after King’s death, they hadn’t—not where it counted, anyway: in jobs, education, housing, earning power, crime, or any other index of socioeconomic success. The complex reasons for this lie beyond the scope of this book (and, in any case, are so commonly catalogued by America’s race-obsessed talking heads that I doubt anyone needs to hear them repeated, even in capsule form). But for the young and the idealistic, the fine points of gang culture, welfare-trap economics, and single parenthood were beside the point: Racism, we’d all learned in school, was America’s congenital disease. And the fact that blacks had not yet achieved full, practical equality meant it hadn’t yet been fully treated. It was not a question of whether America was racist—that question had been answered by the data. It was a question of how. And if the answer couldn’t be found in the plain language of laws and policy statements, it must somehow be lodged in hidden, even invisible, places, such as our own minds and words.

It was this benign but misguided instinct, not any inherently totalitarian urge to control others, that gave birth to political correctness and the associated, increasingly conspiratorial witch hunt for racist phantasms within our souls. As illiberal as it seems, this conspiracist spirit is precious to those infused by it: Once one surrenders to it, all of the inequities in our society—between men and women, blacks and whites—can be chalked up to the familiar, reassuringly simplistic bogeyman of bigotry.

Political correctness and radical identity politics have subsided slightly since their high-water mark in the early 1990s, in large part thanks to a backlash by right-wing culture warriors allied with principled leftist free-speechers. But like Marxism, they have left behind a toxic ideological residue on our intellectual coastline: a vague but powerful baseline belief among educated liberals that mainstream society is divided into victims and oppressors—and that the latter are largely white, straight, middle-aged men who look a lot like George W. Bush and Donald Rumsfeld. After a few years spent wandering this coastline, the belief that these people might fly planes into the World Trade Center doesn’t follow automatically, but it certainly becomes a lot easier to assimilate.