7. IDENTITY
We must have but one flag. We must also have but one language. That must be the language of the Declaration of Independence, of Washington’s Farewell address, of Lincoln’s Gettysburg speech and second inauguration.
—THEODORE ROOSEVELT, 1917
I hope very much that I’m the last president in American history who can’t speak Spanish.
—BILL CLINTON, 20001
One day, a few years ago, when my girls were small, I read them Mary Poppins, the notably bizarre but durably beloved 1934 fantasy by P. L. Travers. Things were going along just fine, with Mary arriving on the east wind to kick off a series of magical adventures for her charges. Then we got to chapter six—“Bad Tuesday.”
This chapter revolves around Mary’s remarkable compass, which not only points the way to north, south, east, and west, but also sends Mary and the kids there. Mary barks “North!” and they find themselves at the North Pole conversing with a polar bear. The command of “South!” lands them in a steamy jungle where they eat bananas with a hyacinth macaw. “East!” takes them to China and a panda bear, while “West!” brings them to a beach where they encounter a seaweed-serving dolphin.
It was the dolphin that did it. Maybe the exotic specificity of “hyacinth macaw” should have made me wonder, but it was the dolphin, environmentalism’s poster mammal, that jerked my gearshift from doting mother to PC-detector. Was it possible, was it plausible, that P. L. Travers—a British subject born at the end of the reign of Queen Victoria, the high point of the British empire—would choose a polar bear, a hyacinth macaw, a panda bear, and a dolphin to represent the four corners of the earth? Not bloody likely. (Later, I learned that the panda didn’t even appear in the West until Ruth Harkness brought a cub named Su-Lin out of China in 1936, two years after Mary Poppins was first published.) Another hand, contemporary and clumsy, was at work, as indicated by the note to be found in the 1997 edition’s table of contents: “Chapter Six (Revised version).” A quick dip into the local library fished up a suitably old and unreconstructed copy of Mary Poppins, which revealed what drove modern-day editors to rewrite the thing.2
Turns out, Mary’s original spin around the globe took her and the children not to visit animals of different species, but human beings of different races. To the north, in the original chapter six, Mary & Co. rub noses with “an eskimo man … his round brown face surrounded by a bonnet of white fur.” This, of course, was not a face Inuit rights advocates were going to love. His “eskimo wife” goes on to make an offer whose generosity would be lost on PETA: “Let me get you some fur coats. We’ve just been skinning a couple of Polar Bears.” In a southern desert, a black-skinned family offers, gulp, watermelon to the parched travelers—not only ballistically “incorrect,” but also botanically improbable. (“My, but dem’s very white babies,” the mother, her tiny “picaninny” in her arms, tells Mary.) To the east, they encounter a punctilious Chinese Mandarin, whom P. L. Travers has dressed in a kimono—which, of course, is a Japanese costume, not to mention a fashion don’t for Asian activists. To the west, they meet Chief Sun-at-Noonday: “‘My wigwam awaits you,’ he said in a grave, friendly voice. ‘We are just frying a reindeer for supper.’”
As benign, as hokey, as these travels are, they also hit just about every stereotype on race and environment there is. The significance of the modern-day editorial rewrite, however, goes beyond a contemporary compulsion not to offend. Lost in the shuffle between old and new is the author’s point of view: the thoroughly Western, if specifically British, perspective from which Miss Travers, dimly enough, conceived of the world and its peoples. More than anything else, it is her singular vantage point (originally illustrated by a drawing of Mary Poppins and the children at the center of the world and its peoples) that has now become unacceptable, and hence defenseless prey to “revision.”
Sometime between Mary Poppins’s world tour in 1934, and her editorially forced march in 1997, a radical shift in outlook took place. As a society, we—“we” meaning we who are born of the West—no longer look through anything like a Western lens onto the rest of the world. Indeed, there really is no “West” and no “rest” by now, or so we are taught.3 As we learn, beginning in preschool, there is now a “diversity” of “cultures,” “voices,” and “global perspectives.” This shift has yanked us—again, we who are born of the West—from a reflexively Western viewpoint to a self-consciously multicultural one.
What was the impetus for this change? Taking into account the unprecedented movements, both legal and illegal, of nonwhite, non-English-speaking peoples into the United States (and also Western Europe) over the past four decades, it may seem logical to guess that the multicultural perspective reflects these new demographics. In America, these new demographics were the direct result of sweeping new immigration laws that Congress passed in 1965; they introduced entry criteria that effectively favored non-European over European immigrants, and weighted “family reunification” concerns over a potential immigrant’s skills.4 In Europe, meanwhile, as historian Bat Ye’or has documented, the past forty years have seen unprecedented immigration from Muslim countries into Western Europe, particularly following the oil crisis of 1973, and the subsequent creation of the Euro-Arab Dialogue, a political entity created by the European Economic Community at the behest of France and the Arab League to foster a kind of Euro-Arab civilization: a cultural and political “convergence between Europe and the Islamic states of North Africa and the Middle East.”5
It would seem, then, that the multicultural outlook emerged as a reflection of multicultural demographics. To a great extent, however, the new outlook preceded and anticipated such population shifts. It turns out that there was something else that made us blink and redirect our cultural sights, something that has a lot to do with the same erosion of certitude, of cultural affinity and confidence, that led to the death of the grown-up.
As the authority of the adult diminished, so, too, did his totems of authority, from the “obvious” primacy of Shakespeare, to the “undoubted” superiority of the Bill of Rights, to the happily-ever-afterhood of the house in the suburbs—where, not incidentally, those “universal truths” of Shakespeare et al. could be found in the inevitable set of encyclopedias in the equally inevitable family room. (Had Bob Dylan forty years ago revealed his secret longing for just such a house, complete with “picket fence,” history might well have been a little different.6) Once there was no longer anything singularly precious about Socrates, Beethoven, or sliced bread, once these underpinnings that supported the Western hierarchy of cultures buckled, the leveling impact of cultural relativism became the order of the day. Maybe it was French philosopher Claude Levi-Strauss who first sounded the call to arms to “fight against cultural differences hierarchically” in the 1950s;7 by the 1980s, with a resounding multiculturalist victory in the so-called culture wars, this leveling mission was accomplished.
Amid a purposeful blur of cultural perspectives—among which only the anti-Western is deemed superior—the principles of multiculturalism have flourished and spread, undermining both our attachment to and confidence in Western culture across the land. Hence, Columbus’s remarkable voyage of discovery to the western hemisphere is taught to fourth-graders through the eyes of a mocking West Indian girl (Chevy Chase, Maryland); the father of our country is deconstructed into a slaveholder unfit to name elementary schools after (New Orleans),8 or hang portraits of in government offices (Brooklyn)9; and on the National Mall in Washington, D.C., the National Museum of American History showcases permanent exhibitions that emphasize American slavery, slaughter, and strife. (Just next door, interestingly enough, a permanent exhibition at the Natural History Museum called African Voices omits mention of African slavery, slaughter, and strife.)10 In such a world, in such a culture, Mary Poppins and her compass just had to go. Under a multicultural sun, no one refracts humanity through a Western prism, not even a quirky character in a children’s fantasy—maybe especially not a quirky character in a children’s fantasy, given the impressionability of young readers.
The symbolism is striking. Like Mary Poppins, we have been blinded to our own perspective. Conditioned by our academic, cultural, and political elites, we no longer regard the “rest of the world” as anything like the “Other.” Having simultaneously embraced diversity and denied difference, we now find it, on the whole, just easier to save the dolphin. This is why sociologist Nathan Glazer could declare, “We are all multiculturalists now.” Certainly, we are supposed to be. It can’t be just a coincidence that this acquiescence to a state of cultural negation coincides with the cultural practice of nipping maturity in the bud. In other words, the loss of identity would seem to be linked to the loss of maturity. At the very least, the easy retreat from history and tradition reveals the kind of callow inconstancy and lack of confidence that smacks of immaturity as much as anything else. It seems that just as we have stopped “growing up,” we have forgotten “who” it was we were supposed to grow up into.
In Alien Nation: Common Sense about America’s Immigration Disaster, a bracingly unequivocal assessment of the cultural and political shambles that make up U.S. immigration policy—the basis of sovereignty—author Peter Brimelow opens his preface with a provocative statement.
There is a sense in which the current immigration policy is Adolph Hitler’s posthumous revenge on America. The U.S. political elite emerged from the war passionately concerned to cleanse itself from all taints of racism and xenophobia. Eventually, it enacted the epochal Immigration Act (technically, the Immigration and Nationality Act Amendments) of 1965. And this, quite accidentally, triggered a renewed mass immigration, so huge and systematically different from anything that had gone before as to transform—and ultimately, perhaps, even to destroy—the one unquestioned victor of World War II: the American nation, as it had evolved by the middle of the twentieth century.11
Brimelow doesn’t elaborate on Hitler’s revenge, but further consideration is illuminating. It’s easy to imagine that in its revulsion at Adolf Hitler’s genocidal anti-Semitism and obsession with Aryan racial purity, the U.S. political elite wanted to put as much distance as possible between itself and any policy or practice smacking of the evils of the Third Reich. Ditto for the Nazi regime’s rigid, if buffoonish, authoritarianism. Remember the Hechingers, with their astute observation that postwar American culture expressed an instinctive animus toward the autocratic classroom, its pedagogical authority, and the blind obedience of rote memorization. This old-fashioned model wasn’t, as they observed, going to fly in the new postwar day. Having just triumphed over a German dictatorship and a Japanese divine monarchy, American culture was in a decidedly democratic mood; this, as the Hechingers demonstrated, played out in the widespread receptivity to new, nonauthoritarian, child-directed education theories, and a growing emphasis on self-expression.
Brimelow has picked up on another aspect of the postwar mood—the passionate concern of the political elite “to cleanse itself from all taints of racism and xenophobia.” This, he maintains, culminated in the Immigration Act of 1965. By reconstituting the immigrant pool to accommodate non-Europeans and nonwhite peoples, this new legislation codified a policy of non-racism (“racism” understood as discrimination against nonwhites) within an official American embrace of non-Western cultures. The practical impact of this landmark legislation still hasn’t been acknowledged; the emotional effect on proponents, however, was undoubtedly instantaneous as warm waves of self-satisfaction foamed with newly proven purity—not purity of race, of course, but rather purity of intentions.
Such idealistic trends, the one cited by the Hechingers, the other by Brimelow, were at heart emotional trends—part of the same national mood swing of postwar exuberance. The “democratic” classroom that no longer saluted authority embodied the difference between the heil-Hitler bad guys and the power-to-the-people good guys; so, too, did “democratic” immigration legislation (“a national, emotional spasm”12) that sent Western European émigrés toward the back of the line for American entry. Just as we were now inclined to bridle at the traditional hierarchy in the classroom, we were also ready to reject the traditional hierarchy of cultures. This would ultimately, however, call into question our own place on top.
And therein lies Hitler’s revenge—the cultural leveling that either emerged from, or was, in some crucial way, accentuated by natural outrage over the crimes against humanity committed by the Third Reich. Hitler, of course, was totally defeated, along with his tyrannical notions of cultural (Germanic) and racial (Aryan) “supremacy.” But so, too, perhaps, were all notions of Western primacy regarding culture and race (which I take here to include nationhood)—even ones that supported, not supremacy in a murderous form, but judgment in a rational form. Grounded by notions of sovereignty and cultural affinity, such judgment determines the kinds of attitudes and choices—on everything from religion to law to literature—that are expressed in cultural identity. In the case of the United States and its European allies, these attitudes and choices derive from a specifically Judeo-Christian identity forged in fire, ink, and steel by those whom our modern-day multiculturalists insultingly deride as “dead, white men.”
Having failed to destroy the democracies by making Nazi war, then, Hitler may have unwittingly managed to destroy the democracies by effecting a post-Nazi peace in which the act of pledging allegiance to the flag itself, for example, would practically become an act of nationalist supremacism—racism, even; bigotry, too. Quite suddenly, it didn’t matter whether the culture in question led to a reign of terror, or to liberty and justice for all. The act of maintaining or defending the culture, or, ultimately, even defining it—whether through unabashed opposition to communist expansionism, purposefully selective immigration practices, or even sticking to the Western canon—became confused with and condemned as an exclusionary and, therefore, evil chauvinism. In this way, having won the great victory, the Allies lost the will to survive. Writer Lawrence Auster has explored this theme.
Having defined the ultimate evil of Nazism, not as the ultimate violation of the moral law as traditionally understood, but as the violation of liberal tolerance, postwar liberalism then set about dismantling all the existing ordinary particularisms of our own society (including, in the case of the EU, nationhood itself) in the name of preventing a resurgence of Nazi-like evil. This was the birth of political correctness, which sees any failure on our part to be completely open to and accepting of the Other—and thus any normal attachment to our own ways and our own society—as the equivalent of Nazism.13
Openness and acceptance on every and any level—from personal to national, from sexual to religious—are the highest possible virtues of the postmodern Westerner. This makes boundaries and taboos, limits and definition—anything that closes the door on anything else—the lowest possible sins. Judgment, no matter how judicious, is tarred as “prejudice” and, therefore, a neobarbarous act to be repressed and ultimately suspended. Patriotism has been caricatured out of polite society as boorish warmongering. Western civilization itself, which may be taken as the product of both judgment and patriotism, has been roundly condemned for being both prejudiced and warmongering. The overall effect has been to sap the culture’s confidence in its own traditions, even—especially—in the classical liberal tradition that stiffened our spines against Hitler in the first place. The cultural anemia that began to take hold long ago has passively accepted the transformation of America the Western into America the Multicultural (and Western Europe into Multicultural Europe) as a good, or necessary, or even just inevitable thing. And thus—with the practical disappearance of the nation, or, perhaps better, the culture, that defeated him—Hitler’s revenge.
Maybe this is the World War II Syndrome we never knew we had. As a touchstone in the twenty-first century, World War II evinces a nostalgia for unity in the face of adversity; such unity, however, is now irreclaimable for its having vanquished that specifically Hitlerian adversity that made unity itself—nationalism, nationhood, and culture—verboten.
The führer’s revenge takes other forms. Consider the noxious “Bush=Hitler” equation. Senseless but pernicious, this comparison is a crude attack on the president’s attempt to draw lines between Good Us and Evil Them. “You’re either with us or you’re against us,” was the way Bush began his so-called war on terror—a declaration that is the very essence of national or cultural definition, and as such a multicultural no-no. It’s necessarily “divisive,” and thus, not “diverse”; it’s by definition noninclusive, and therefore “intolerant.” Never mind that it’s noninclusive and intolerant of diverse people who singlemindedly want to kill you. Wasn’t Hitler also noninclusive and intolerant? Ergo, “Bush=Hitler.”
Seized upon by the antiwar movement, this slogan took hold in a culture without a core, inspiring proponents from former Vice President Al Gore, who repeatedly slandered President Bush’s Internet supporters as “digital brownshirts,” to ex-astronaut and former Sen. John Glenn, who tarred the Bush agenda as “the old Hitler business,” to Sen. Robert Byrd (D-WV), who equated the Bush era with Nazi Germany—as did German Justice Minister Herta Däubler-Gmelin. Billionaire leftist George Soros, former UN arms inspector Scott Ritter, cartoonist Ted Rall, singer Linda Ronstadt, author and radio personality Garrison Keillor, and Nobel Prize–winning playwright Harold Pinter have all played the same Dadaesque, if malicious, game.14 The fact is—all the facts are—there is no historic or moral basis on which to make the comparison, period. Historian Victor Davis Hanson took the trouble to explain why.
At first glance, all this wild rhetoric is preposterous. Hitler hijacked an elected government and turned it into a fascist tyranny. He destroyed European democracy. His minions persecuted Christians, gassed over six million Jews, and created an entire fascistic creed predicated on anti-Semitism and the myth of the superior Aryan race.
Whatever one thinks of Bush’s Iraqi campaign, the president obtained congressional approval to invade and pledged $87 billion to rebuild the country. He freely weathered mass street demonstrations and a hostile global media, successfully defended his Afghan and Iraq reconstructions through a grueling [reelection] campaign and three presidential debates, and won a national plebiscite on his tenure.15
Hanson went on also to note Bush’s friendship with Israel—“in a world almost uniformly hostile to the democratic Jewish state”—and his efforts to introduce democracy to the Middle East, “with no guarantee that such elected governments will not be anti-American.” In total contrast to Hitlerian policy, Hanson added, “No president has been more adamantly against cloning, euthanasia, abortion or anything that smacks of the use of science to predetermine supergenes or to do away with the elderly, feeble or unborn.”16
Okay, so Bush does not equal Hitler. Where Hanson pegs the “Bush=Hitler” phenomenon to ignorance, arrogance, and even deflection (Senator Byrd, a onetime Ku Klux Klan member, and Däubler-Gmelin, a post-Nazi German justice minister, may be seeking some kind of cover or kudos by attributing Hitlerian evil to others), there is something else to consider. Or, rather, there is the lack of something else to consider, something missing from the wider society. What’s missing is a connection to our own identity (dare I say cultural self-esteem?), that autonomic understanding that should tell us, unless we missed the coup d’état, that no American president—checked and balanced according to the U.S. Constitution, monitored and dissected by a free press—is going to be the hellish twin of Adolf Hitler, and that unless we demand better, more decent ways to express dissent, our society won’t remain civil, much less democratic. Lacking such a connection, lacking such confidence, lacking such understanding, lacking such cultural self-esteem, society is ill-equipped to rein in or laugh out the debased absurdity of “Bush=Hitler” in the first place.
Thus, the bogeyman of Hitler & Co.—I hesitate to make light, but the promiscuous invocation of Hitlerism and other racisms to shut down debate has created a ridiculous, extra-historical context—pops up in the most serious deliberations concerning sovereignty issues, bringing rational discourse to a screeching halt. A (thankfully failed) proposal to enable illegal immigrants in California to drive legally with a special driver’s license is likened, by state senator Gil Cedillo, to “a scarlet letter that would invite discrimination, much like the star of David on Jewish people in Nazi Germany.”17 The Minutemen Project, the civilian group that patrols stretches of border to call attention to federal inaction on illegal immigration, is equated to the KKK by a U.S. Congressman, Rep. Lloyd Doggett (D-TX)18—and called “vigilantes,” by the way, by President Bush. In Virginia, the proposals of a gubernatorial candidate, Republican Jerry W. Kilgore, to make the nation’s immigration laws more enforceable are “tinged with nativism,” according to the editorial page of The Washington Post.19 Over There, in Europe, Margot Wallström, a senior official of the European Union (EU), took the occasion of VE-Day 2005 to condemn the concept of nationalism itself—not Hitler, not the democracies’ appeasement of Hitler—for causing the outbreak of World War II. She accused Europeans reluctant to cede their sovereignty—or, as she put it, their “nationalistic pride”—to the supranational and antidemocratic EU bureaucracy of risking a return to Nazism and the Holocaust.20 It’s not just Bush that equals Hitler; national identity equals Hitler, too.
It’s a simplistic, even babyish, argument, but that’s no coincidence. Five or six decades of nonjudgmentalism and multiculturalism have taken their toll on education and knowledge. Mary Poppins aside, children’s literature, along with educational material, is a festering hot spot for the syndrome. I say that not only having read such books as Sandra Stotsky’s Losing Our Language: How Multicultural Classroom Instruction Is Undermining Our Children’s Ability to Read, Write, and Reason (the subhead says it all), but also having watched, firsthand, the multicultural education process in my own children’s classrooms.
Long before my kids went to school, the horrors of the Western world, according to politically correct indoctrination, were familiar past the point of cliché: Columbus was a genocidal germ-carrier out-eviled only by plantation-owner George Washington and his oppressive band of white patriarchs, whose wigs we’d be powdering to this day—were it not for Molly Pitcher, the Iroquois Confederacy, George Washington Carver, and Yoshiko Uchida. I knew this drill going into the game; what I didn’t know was the strategy.
In other words, it came as no shock to learn, for example, that in our public elementary school in Westchester County, New York, third-graders devote a hefty part of a semester to studying Kenya. They don’t know who discovered the Hudson River, who is buried in Grant’s Tomb, or where the Battle of White Plains was fought, but they come home with plaudits for Kenya’s health care system—which, incredible as it may seem, I had never heard of. This, I recognized, was par for the PC course, as were the stories that came home about cow’s-blood cuisine and earlobe enhancement, which the kids found relishingly disgusting.
But the kids also came home with stories of how the teacher admonished them to modify their feelings about such barbarities; indeed, to coin a phrase, to shut up. Teacher says: “Who are we to say that supping on cow’s blood is ‘gross’? That’s their culture.” This is the pattern that has repeated itself in other years and other schools. In a public elementary school in Chevy Chase, Maryland, for example, fourth-graders study the Plains Indians. “Who are we to say that using a buffalo tongue for a brush is ‘gross’?” the teacher asked the class. “That’s their culture.” What finally struck me is that teaching children to internalize these reactions to one tribal custom or another is not a lesson in etiquette, akin, say, to table manners that teach us not to spit out unfamiliar food at a dinner party to spare the hostess’s feelings. It teaches children to sublimate the traditions and teachings of their own civilization—those that tend to regard buffalo-tongue brushes, for example, as being revolting or unsanitary. The repetition of this kind of instruction—who are we to say anything about anything?—impresses upon young minds the crucial need to adopt an attitude of painstaking neutrality when regarding other (read: less developed) cultures. In other words, it teaches children to suspend their judgment. This is an elementary lesson in cultural relativism that American and other Western children never forget.
Along the lines of Pavlov’s dog, then, we learn as students to react to primitive customs or barbarous practices by reflexively suspending judgment. In the beginning, it’s just weird cuisine and bizarre customs that get labeled neither weird nor bizarre. “That’s their culture” becomes the mantra of accepting the Other. But it also becomes the mantra of denying the Self. And in learning to turn off the assessment process, in learning to stymie the gut reaction, we have learned to shut it down entirely. Such self-abnegation may be fine, sort of, in dealing with the more superficial practices of diet or dress—don’t blanch at body “art”; do appreciate pendulous earlobes—but what happens in the face of less benign cultural phenomena, from censorship and religious repression to female genital mutilation (FGM), forced marriage, so-called honor killing, and suicide bombings?
First of all, words fail. Literally. After a 2005 terrorist attack on a Baghdad school in which five teachers were lined up against a schoolroom wall and shot to death, columnist Clifford D. May pointed out that leading newspapers such as The Washington Post and The New York Times described the terrorists as “gunmen” and “armed men.”21 Such euphemistic restraint is typical. Middle East expert Daniel Pipes compiled a list of twenty euphemisms used by the news media to avoid using the term “terrorist” in stories describing the, well, terrorists who murdered 331 civilians in Beslan, in Russia’s North Ossetia, 186 of them children.22 The word “militant,” he observed, usually serves as the media’s “default term” for terrorists everywhere, although it would seem that the heretofore unimagined heinousness of the atrocities committed at Beslan School No. 1 challenged the media’s collective imagination. From “assailants” (National Public Radio), to “fighters” (Washington Post), to “perpetrators” (New York Times) to—Pipes’s fave—“activists” (Pakistani Times), the euphemisms, he noted, betrayed the kind of frantic thesaurus-thumbing on the part of journalists that may actually induce calluses. Pipes explains the quest for synonymity this way: “The origins of this unwillingness to name terrorists seem to lie in the Arab-Israeli conflict, prompted by an odd combination of sympathy in the press for the Palestinian Arabs and intimidation by them.” Worth considering also is the idea that such sympathy and/or intimidation may be something we learn in school, a way of thinking linked to a widespread moral paralysis born of the conditioned response to suspend judgment.
In the case of the Beslan atrocity, for example, we’re talking about the ski-masked, explosive-belted bastards who seized more than one thousand innocent people, children, parents, and teachers who came to school to celebrate the beginning of a new year, not to be imprisoned in a school gymnasium booby-trapped with mines and enmeshed in trip wires. Terrorists, no? Not so fast, according to the fourth estate. They were “militants,” or “rebels.” It’s not that journalists were necessarily endorsing the Beslan terrorists and the jihadist cause of their leader, Shamil Basayev, to establish an Islamic caliphate in the Caucasus. (Basayev, killed by Russian forces in 2006, also made noises about reviving the Islamic caliphate with the capital city in “Al Kudsa” [Jerusalem].23) But the media’s studied nonjudgmentalism on this and other atrocities gives jihadist terrorists a perpetual benefit of the doubt. Such doubts—raised in the language of “neutrality”—reserve a crucial moral space for the possibility of sympathetic judgment, enforcing the notion that blamelessness for terrorism is just as possible as blame. This implies that terrorism is not beyond the pale, which, in a civilized society, is no longer exactly a “neutral” position to take. Treating terrorism with an evenhandedness accorded to competing tax plans creates an atmosphere that is amoral to a point of immorality. Besides staving off condemnation and leaving room for approval, the act of suspending judgment—and this is what may be most significant—delivers terrorism and terrorists from the nether realm that all civilizations reserve for taboo, anathema, and abomination. This begins to explain why the practice is so dangerous.
Reuters (Beslan terrorists=gunmen) says it doesn’t “characterize the subjects of news stories, but instead [reports] their actions.” To do so, the wire service follows “a policy to avoid the use of emotive words.” But “terrorism,” which may usually be defined as attacks on noncombatants in civilian settings, is employed by terrorists specifically to create emotion—terror—for political or strategic ends. “Terrorism,” then, is not only an emotive word, it is an emotive practice. Repressing the word not only mischaracterizes the action, it also serves to suppress, and even numb, society’s natural reaction of salutary abhorrence.
Not that most media would agree. The Chicago Tribune (Beslan terrorists=“militants”) defends eschewing the “t” word as a way to provide “unbiased” coverage. “No intellectually honest person can deny that ‘terrorist’ is a word freighted with negative judgment and bias,” writes Don Wycliff, the newspaper’s ombudsman. “So we sought terms that carried no such judgment” (emphasis added). This is as good an introduction as any to that hoary cliché: One man’s freedom fighter is another man’s terrorist. Or, as The Boston Globe’s ombudsman Christine Chinlund put it with a gratuitously feminist twist: “One person’s terrorist is another’s freedom fighter.”24 The real question is, if the killer is a terrorist to “Us,” who cares if he’s a freedom fighter to “Them”? But I am forgetting: We are supposed to have learned that there is no “Us” just as there is no “Them.” Even in the face of terrorism, there can be no such consensus—“no such judgment,” as the Tribune editor put it, no such Western-based belief in the sanctity of pizza parlors, commuter trains, and the first day of school. This shattered consensus helps explain why, when the first raw shock of the latest terrorist barbarity fades away—discos, skyscrapers, hospitals, yawn—so, too, does sympathy for the victim. Or at least it must now jockey for space with sympathy for the terrorist.
On some level, this is the latest incarnation of the age-old encounter between the West and the rest—specifically, the non-Western Other encountered during various periods of Western exploration, conquest, and colonization. Age of Exploration Europeans liked to talk about the “noble savage,” acknowledging or projecting a nobility onto the primitive peoples of the New World that canceled out, or compensated for, their obvious savagery. A striking parallel seems to exist in contemporary analysis of the Islamic terrorist—sorry, “militant”—and his assault on heretofore Western civilization. Just as apologists have seen in white man–scalping redskins the desperation of the primitive in the face of an advanced and encroaching civilization, apologists today see in the suicide bomber a similar desperation, a plight in which a terrorist’s life and limbs are his only weapons against a technologically superior and encroaching civilization. What sounds like an apology for Islamic terrorism against Israeli, American, and other Western targets also sounds like a variation on the traditional theme: enlightened society meets primal scream. And who’s to say … whatever?
We can trace the first flowering, more or less, of this relationship with the Other to the sixteenth century, writes Islamic history scholar Ibn Warraq. That was when the noble savage theme emerged as a rhetorical device European writers used to critique their own societies. Warraq points to the writings of Peter Martyr Anglerius (1459–1525), who contrasted the greed and cruelty of Spanish conquistadors with the “happier” Indians who peopled an Edenic paradise “free from money, laws, treacherous judges, deceiving books, and the anxiety of an uncertain future.”25 Under Martyr’s influence, Montaigne (1533–1592) would go on to “develop the first full-length portrait of the ‘noble savage’ in his celebrated essay ‘On Cannibals.’” Warraq pinpoints this essay as being “the source of the idea of cultural relativism.” (In a nutshell, Montaigne lays it all out in his assessment of Brazilian Indians: “I am not so anxious that we should note the horrible savagery of these acts as concerned that, whilst judging their faults so correctly, we should be so blind to our own.… [We] surpass them in every kind of barbarity.”) The noble savage myth endured through the centuries, taking in not just indigenous peoples of the Americas, but also other non-European peoples, including Muslims in the Ottoman East.26 By the eighteenth century, Warraq tells us:
The noble savage was simply a device to criticize and comment on the follies of one’s own civilization.… By emphasizing the corruption, vice, and degradation of the Europeans, eighteenth-century writers exaggerated the putative superiority of the alien culture, the wisdom of the Chinese or Persian or Peruvian moralist and commentator. They were not really interested in other cultures for their own sake; in fact, they had very little knowledge of these civilizations.27
In this era of the ideal Other, it was the antimaterialist, anticlerical likes of Voltaire, Gibbon, and Carlyle who used the Other like a goad to poke and prod their own societies. “Europe has always needed a myth for purposes of comparison and castigation,” historian Bernard Lewis writes. As Lewis tells it, after Europeans became disillusioned with the stereotypically “wise and urbane” Chinese, “there was a vacancy for an Oriental myth. Islam was in many ways suitable.”28
And even necessary, although not simply to energize what Lewis aptly calls “Western intellectual shadowplay.” Historian Bat Ye’or sees the myth of Islamic tolerance as a piece of propaganda born of nineteenth-century political expediency. Such a myth was engineered, she says, to maintain the great powers’ balance of power in Europe, which was at the time anchored in the East by (Muslim) Turkey’s block against a (Christian) Russian advance to the Mediterranean. She credits the British with primary authorship: “To justify the maintenance of the [Muslim] Turkish yoke on the [Christian] Slavs, this yoke had to be presented to public opinion as a just government. The [Muslim] Ottoman Empire was painted by Turkophiles as a model for a multi-[ethnic], multi-religious empire.”29
But a tolerant Turkey was always a fraudulent model, given its foundation in sharia (Islamic law), which codified harsh inequities between men and women, Muslims and non-Muslims. These didn’t go unnoticed. Despite the machinations of geopolitical mythmaking, the work of such nineteenth- and twentieth-century scholars of Islam as Sir William Muir, David S. Margoliouth, Thomas Patrick, Arthur Jeffrey, and many others cataloged copious evidence of Islamic repression and intolerance of non-Muslims. Still, the myth survived. Besides taking its place in the narrative of European diplomacy, the myth was increasingly perpetuated, perhaps paradoxically, by devout Christians. Their Islamic knowledge was deeper than that of the intellectual apologists of a century or two before, but, as priests and missionaries, their motives were different because their times were different. Ibn Warraq explains: Rather than use Islam “as a weapon against [European] intolerance, cruelty, dogma, clergy, and Christianity,” as seventeenth- and eighteenth-century intellectuals had done, these Christians of the nineteenth and twentieth centuries increasingly believed “that Christianity and Islam stood or fell together.” In an age of rationalism, skepticism, and atheism, Christian scholars took it upon themselves to protect Islam and its totalitarian dogma from deconstruction, just as they protected Christianity from undergoing similar deconstruction. “Thus, by the end of the twentieth century,” Warraq writes, “Christian scholars of Islam had become the unwitting guardians and perpetrators of the myth of Islamic tolerance.”30
The above is a brief explanation of how the myth of Islamic tolerance—noble Islam?—became lodged in the mind of the West. But there’s a crucial difference in the contemporary incarnation of this Other. Where the Other used to live, vividly imagined if dimly understood, in the Western imagination, the Other now lives, quite literally, in the West itself. If the Other was once a remote star by which the West liked to see its own failings, proximity has changed the light completely as a massive demographic shift has brought Islam, chief among clashing civilizations, deep into Europe. The Other is still vividly imagined, if dimly understood; but where he once provided intellectuals with a theoretical foil against modernity, the Other—in this century, in the guise of Islam—now manifests itself as a concrete bloc. The Other-inspired tradition of self-criticism is no longer adequate in these circumstances. Instead, the Other demands and receives a kind of cultural accommodation that is nothing short of revolutionary. In the real-life endgame of multicultural “inclusion,” then, this would seem to make the West’s dismantlement inevitable.
The story of France and the hijab offers an inkling as to how this is taking place. In 2003, when the French government determined that Muslim girls, draped in the hijab, or head scarf, were inserting religion into the state-run and avowedly secular French classroom, it passed a law. The new law barred Muslim dress in the public schools. This ban on the hijab—a form of dress that, like Muslims themselves, is relatively new to France—came at a very high, Judeo-Christian price. Also banned by law were the star of David, the yarmulke (Jewish skullcap), and “large” crucifixes, along with the turban of the Sikhs. In other words, all these religious symbols, which, in modern times, had coexisted as easily in France as their religions had, were suddenly stripped and hidden away from the public square. Why? The reason was to save Islam’s face: to make it appear as though the hijab hadn’t been singled out as an offending symbol, despite the fact that it was. And why was it so singled out? The answer has something to do with the fact that the hijab—unlike the star of David, the yarmulke, the cross, and the Sikh’s turban—symbolizes a Muslim way of life that makes sharia the law of the land, any land. Allowing the head scarf, goes the argument, creates a climate hospitable to other special, extra-Western demands, from the insistence of Muslim men that their wives be treated by female doctors, to a refusal to tolerate certain Western texts in the classroom, to the institution of sharia-compliant (no-interest) loans, forced marriage, and polygamy, to the toleration of jihadist treason in the mosque, to, the Islamic hope goes, universal submission to sharia in a global caliphate. No other religious symbol on earth packs this totalitarian punch. But France—and this has happened elsewhere, including Germany, where school hijab bans have also stripped nuns of their habits—has decided to pretend otherwise. Thus, for the government to bar a symbol of religious oppression, all other symbols of religion were judged oppressive also. In the name of tolerance, they were deemed equally provocative; in the name of inclusion, they were all banned.
In such a way is traditional (pre-Islamic) society dismantled, symbol by symbol, law by law. Are all religious symbols, and thus all religions, equally prone to incite trouble, if not terrorism? And are all religious symbols, and thus all religions, equally imperialistic, and thus incompatible with an ecumenically based secular democracy? Of course not. But for France to admit Islam’s violent past, present, and, to date, unreformed future, is to advance a case for discrimination—in this example, to justify a ban on the hijab of resurgent Islam, while justifying the acceptance of the cross of quiescent Christianity, the star of David of beleaguered Judaism, and the turban of nonbelligerent Sikhism. Such a judgment is a multicultural impossibility. Rather than resist the bigotry of the hijab, France (and by extension, the West), without even the courtesy of a show trial, will always plead guilty, admitting to the catchall culpability of itself and its symbols—and hence, its beliefs.
This could only happen in an era of Western identity-decline, a time in which cultural relativism has wedged itself between the West and those original and defining beliefs. The extent of the estrangement comes into focus in a story of a brief but intense identity fight that broke out at Harvard in 2002 over what constituted the acceptable bounds of a specifically Christian identity on the multicultural campus. The point of contention was whether the college—founded in 1636, let’s recall, as an institution for Puritan ministers—would renew its official recognition of a tiny campus Christian group, the Harvard Radcliffe Christian Fellowship. Here was the deal: While any student could join the Harvard Radcliffe Christian Fellowship, the group drew its leadership from among candidates who actually believed in the Holy Spirit and the resurrection of Jesus Christ.
This was unacceptable discrimination, according to the brave, new editors at the Harvard Crimson student newspaper; indeed, they argued, it violated Harvard’s antidiscrimination policy. As one editorial put it, Harvard was “in error for not demanding that the club remove its discriminatory policy from its constitution. All students should be free to participate in College activities without being discriminated against because of belief.”31
Zounds. Given that a Christian fellowship is one big college activity uniquely dependent on belief, it would seem that Muffy and Jason and Savonarola went a little far this time; ultimately, the College thought so, too, because it continued extending recognition to the Christian group. But the case laid out by the opposition made clear not only that basic Christian identity was nothing sacred, but also that the manifestation of its very existence was open to “liberal” censure. According to this next generation of sensitivity trainers and diversity consultants, any student who wanted to lead the Christian Fellowship “should not be excluded because of a reluctance to accept certain tenets.” Moreover, they wrote, Fellowship members should be “forced” by Harvard to eliminate these “certain tenets”—you know, Christ, the Holy Spirit—from their leadership requirements “or lose College recognition.”32
In their totalitarian zeal, these best and brightest types helpfully crystallized the obvious threat and the apparent contradiction at the core of multicultural groupthink: in the name of inclusiveness, elimination; in the name of diversity, conformity. But the apparent contradiction is an illusion. In opposing college recognition for the Christian-ness of the Christian Fellowship—and, in so doing, provoking no discernible outrage on or off campus—the students were only applying the familiar lessons of multiculturalism. These lessons taught them, simultaneously, to embrace “diversity,” since we are all different, and to deny distinction, since we are all the same. In embracing diversity and denying distinction, then, the students set out to stop a Christian group from being led by believing Christians—and it all made perfect multicultural sense. That is, there’s no contradiction when the “diversity” being embraced is non-Western, and the distinction being denied is Western.
The result? This multiculti bear hug has left society in a state of moral, cultural, and political paralysis, which the following report should illustrate. It comes from Steven Vincent, the American journalist who in 2005 was kidnapped and murdered in Iraq. In his final Internet post, this one on corruption in Basra, he interviewed a Gary Cooperesque U.S. Air Force captain in charge of handing out contracting jobs worth up to $1 million to the locals. The captain explained his modus operandi.
“I want to have a positive effect on this country’s future,” the Captain averred. “For example, whenever I learn of a contracting firm run by women, I put it at the top of my list for businesses I want to consider for future projects.” I [Steven Vincent] felt proud of my countryman; you couldn’t ask for a more sincere guy.
Layla [Vincent’s translator], however, flashed a tight, cynical smile. “How do you know,” she began, “that the religious parties haven’t put a woman’s name on a company letterhead to win a bid? Maybe you are just funneling money to extremists posing as contractors.” Pause. The Captain looked confused. “Religious parties? Extremists?” …
Layla and I gave him a quick tutorial about the militant Shiites who have transformed once free-wheeling Basra into something resembling Savonarola’s Florence.
The Captain seemed taken aback.… “I’ll have to take this into consideration … I certainly hope none of these contracts are going to the wrong people.” … Collecting himself, “But should we really get involved in choosing one political group over another?” the Captain countered. “I mean, I’ve always believed that we shouldn’t project American values onto other cultures—that we should let them be. Who is to say we are right and they are wrong?” [Emphasis added.]33
Et tu, Captain America?
As Vincent observed, “And there it was, the familiar Cultural-Values-Are-Relative argument, surprising though it was to hear it from a military man.” “Surprising” isn’t the word. It’s one thing to get this mindless mantra from a Maryland public school teacher with rings on his toes and multiculturalism on his agenda, quite another to hear it from a twenty-first-century Gary Cooper–type. But there it was: wings on his chest and nonjudgmentalism in his heart. Layla, for one, “would have none of it,” Vincent continued, describing a scene of pathos reminiscent of Cervantes via Broadway, as when in Man of La Mancha, Dulcinea and Sancho Panza desperately try to make Don Quixote remember his quest.
“No, believe me!” she exclaimed sitting forward on her stool. “These religious parties are wrong! Look at them, their corruption, their incompetence, their stupidity! Look at the way they treat women! How can you say you cannot judge them? Why shouldn’t you apply your own cultural values?”34
The question is excellent, the answer depressing. It’s not that our “cultural values” are fungible, exactly; in the case of the Air Force officer in Basra, such values came down to equal rights before the law, and maybe just “law” generally—as distinct from the law of the jungle (desert). Such precepts should be easy to stick to. But, as the Air Force officer said, “Who is to say we are right and they are wrong?” It is as widely believed now as gospel once was that Western civilization has contributed to the world just one set of cultural values among many sets of cultural values—even in the eyes of those who fight to preserve them; and even in comparison, in this case, to the lawless Basra barbarism born of totalitarian Islam. That’s not gross; that’s their culture. If such sets of values are truly interchangeable, there is no compelling reason why Captain America should prize, apply, or even prefer, his own. What the American journalist and his Iraqi friend were looking at was the end of the multicultural line; the saturation point. “I want to have a positive effect on this country,” the captain said. It is difficult to have a positive effect when Captain America is a liberator who brings nothing but liberty.
There is a hollowness to the whole enterprise that is embodied by the captain’s relativism, a barren chamber where the empty slogan “war on terror” echoes on without meaning. That is, terror is a tactic. You don’t make war on a tactic; you make war on the people who use it. Imagine if FDR had declared “the war on sneak attack” or the “war on blitzkrieg.” It doesn’t make sense and neither does “war on terror.” And not only does it not make sense, it also uncovers our biggest handicap going in: that perilous lack of cultural confidence, that empty core at our heart. Where an empty core has nothing with which to refute the absurdity of Bush=Hitler, an empty core has nothing with which to define “a war on terror.” Who are we to say … who we are fighting … and why?
This is not a flippant tag. In this war for survival, we don’t know who we are; little wonder the identity of the enemy—his history, his teachings, his goals—has remained in many ways anonymous. This identity crisis is profound. But maybe such a phase is inevitable in the life of an adolescent culture like ours. Coined by Erik Erikson in 1970, the now-familiar term “identity crisis” describes that stretch of adolescence, usually marked by “a loss of the sense of sameness and historical continuity of one’s self.” Culturally, we see that loss in spades. Also, by definition, there may be “confusion over values”—such confusion is universal—or “an inability to accept the role the individual perceives as being expected of him by society.” This last symptom is also familiar, given our split personality as both world policeman and world villain. Lacking parental guidance, this adolescent culture of ours may be doomed not only to perpetual adolescence, but also perpetual identity crisis. The glib pop terminology doesn’t begin to conjure up the ravages of a culture that has lost its core. Lawrence Auster has written extensively on this condition.
Under the reign of multiculturalism, Americans have been undergoing for decades, as if in slow motion, what the historian Thomas Molnar once described as “the collapse of the old order, the sudden realization that the universe of a given community has lost its center.” Molnar, a refugee from Communist Hungary, calls this phenomenon “verbal terror.” The actual terrorist, by blowing up actual human beings, makes the members of a society feel that every assumption that has constituted their world, the very ability to walk down a street or ride in a bus, is vanishing, and thus weakens their will to resist the terrorists’ political demands. The verbal terrorist, by smearing everything great and ordinary about a people and their institutions, makes them feel that nothing about themselves is legitimate.35
Put into such terms, our renunciation of cultural paternity begins to make sense; it’s a natural consequence of believing in our own illegitimacy. No wonder, then, that when we do say something to define or defend all those many “great and ordinary” things about ourselves and our institutions, it invariably comes up short and mumbling, the crucial point missing, withheld by a reticence born of shame. In the debates of the day, we (who are so inclined) rail against hanging obscene art in the museums—Mapplethorpe’s bullwhip-bottom studies, for instance, or Serrano’s Piss Christ—only because such art is helped along by taxpayer dollars, never because it menaces the public life of our culture. We (who are so inclined) rail against teen promiscuity because it’s a health risk, never because it’s an affront to the probity of our culture. We (who are so inclined) rail against the explosion in illegal immigration because it breaks the law (changeable, after all), never because it’s killing our culture. These are off-kilter arguments, timidly oblique, half-hearted—and so very common. Which suggests the total victory the war of “verbal” terror has won. By chastened consensus, Western culture, the expression of Western peoples, is deemed narrow, bigoted, and scrap-heap ready. Only the “inclusion” of non-Western cultures, so the accepted thinking goes, can endow it with the legitimacy of “diversity.”
Hence, the cacophony of voices, lumping together sublime William Shakespeare with ridiculous Rigoberta Menchú, that passes for new-and-improved academia; hence, the hodgepodge of cultures that passes for the new and improved nation state. But to what end? The decision by the city of London to erect a statue to Nelson Mandela in Trafalgar Square translates the answer into simple, stone symbols anyone can understand.
Spearheaded by the radical mayor of London, Ken Livingstone, this decision of symbolism and statuary constitutes an “inclusive” act. That’s because, in multicultural patois, “inclusive” means introducing a non-Western element (South Africa’s antiapartheid hero) into a Western milieu (London’s most famous square). Dedicated to the 1805 naval battle that ended Napoleon’s plans to invade England but killed its victorious commander, Horatio, Lord Nelson, Trafalgar Square might seem to have been adequately, even admirably adorned by Nelson’s towering stone image topping a 185-foot column anchored by a brass capital made from captured French guns. But not according to multiculti theory. Lo, these many decades, it seems, the square has only managed to express a noninclusive monoculture. Introducing a bronze sculpture of Nelson Mandela fixes everything because it introduces a vital “diversity” into the oppressive Britishness of it all.
This is really nothing new in London, where in 1998, for example, Westminster Abbey—storied British site of coronations, royal weddings, and state funerals with monuments and plaques to famed British monarchs, famed British writers, and famed British political leaders galore—spruced up its façade by filling the ten empty statuary niches over its massive main entrance with ten Christian martyrs. Notably enough, they were all foreigners—and the first foreigners to be so honored. Whatever good these worthies may have done—from American civil rights leader Martin Luther King, Jr., to the liberal Salvadoran Archbishop Óscar Arnulfo Romero, with lesser known martyrs from Poland, South Africa, Uganda, Russia, Germany, India, China, and Papua New Guinea in between—they were neither British, nor were they acting in British interests.36 What they represented, besides Christian goodness, was “diversity.”
What goes unnoticed is that such “diversity” actually brings more-of-the-sameness: every place becomes like any other. Or, rather, every Western place becomes like any other; i.e., every Westminster Abbey becomes a mini–United Nations. The example of the city schoolroom offers a good illustration: When 43 percent of New York City schoolchildren speak one of nearly 170 languages other than English at home,37 and between a quarter and more than half of London schoolchildren speak one of three hundred languages other than English at home,38 both cities have achieved an indistinguishable “diversity.” No longer singularly American or singularly British, they are interchangeably global. Grouping Nelson Mandela with Lord Nelson and the several other British war heroes in Trafalgar Square has the same, if symbolic, effect. No longer will Trafalgar Square evoke the quintessence of British culture. It will be, as London’s Mayor Ken Livingstone puts it, a “world square.”39
And that is the point: a world square, not a British one; a global identity, not a Western one. It is surely a paradox that the rest of the world—meaning the nations of the non-Western world about which the Western world is so assiduously “inclusive”—remains strikingly, immovably, and unapologetically nondiverse, uniform even, in every way: ethnically, religiously, and culturally. For example, students may speak Urdu, Arabic, Pashto, and Turkish in British, French, Dutch, and German schools; they don’t, however, speak English, French, Dutch, and German in Pakistani, Arab, Afghan, and Turkish schools. Mosque construction breaks ground all over Europe and the United States, but churches and synagogues do not rise in the Islamic world. The president of the United States adds a Koran to the White House library for Ramadan; Bibles are confiscated and destroyed by the Saudi Arabian government. Born in Benin, Achille Acakpo teaches traditional African dance and percussion in Vienna; who born in Vienna is teaching Strauss waltzes in Benin?
In some way, these dramatic acts of inclusiveness—unrequited elsewhere in the world—may be an extension of the intellectual inquisitiveness that, as Ibn Warraq writes, “is one of the hallmarks of Western civilization.” That is, maybe the same curiosity that has driven exploration—global, scientific, and artistic—is a factor in our acceptance of cultural novelty. Warraq quotes J. M. Roberts on the subject of Western curiosity.
The massive indifference of some civilizations and their lack of curiosity about other worlds is a vast subject. Why, until very recently, did Islamic scholars show no wish to translate Latin or western European texts into Arabic? Why, when the English poet Dryden could confidently write a play focused on the succession in Delhi after the death of the Mogul emperor Aurungzebe, is it a safe guess that no Indian writer ever thought of a play about the equally dramatic politics of the English seventeenth-century court? It is clear that an explanation of European inquisitiveness and adventurousness must lie deeper than economics, important though they may have been. It was not just greed which made Europeans feel they could go out and take the world. The love of gain is confined to no particular people or culture. It was shared in the fifteenth century by many an Arab, Gujarati, or Chinese merchant. Some Europeans wanted more. They wanted to explore.40
Having explored the world, having colonized huge swaths of it, having returned again to its little homelands, the West, not the world it conquered, has been transformed. That is, it is the West that has become “inclusive” and “multicultural”; not the “rest.” Mayor Livingstone can declare that erecting a Mandela statue in Trafalgar Square signifies “the peaceful transition” from British empire as symbolized by Lord Nelson “to a multiracial and multicultural world,” but what he’s really talking about is the British transition to a multiracial and multicultural London—a fait accompli—where no one can tell Nelson from Nelson. Opposition to the mayor’s plan was largely aesthetic, limited to bickering over placement of the statue, not over whether a bronze likeness of Nelson Mandela—the long-suffering South African apartheid-buster with a distressing fondness for despots from Castro to Mugabe to Arafat to Qadafi—constitutes a singular expression of British identity. Indeed, it became apparent that there were no British cultural or historical imperatives at issue here because there were no British cultural or historical imperatives, period. The only cultural objection was an oblique outburst saying the statue represented a “major and awkward change in the narrative of the square.”41 Exactly why it was major or awkward, or what the narrative of the square was to begin with, went undefined, and thus undefended under Admiral Nelson’s distant, one-eyed gaze of carved Scottish stone.
It’s the story of our civilization—undefined and undefended. As we learn to get along, eyes closed to our own identity, we also turn a blind eye toward everyone else’s. Often literally. In the summer of 2005, in the jittery wake of jihadist attacks on the London Underground (notable for being the first Islamic terrorist attacks on a Western country carried out by homegrown terrorists), The New York Times editorialized on the timely topic of commuter safety. The aspect under consideration was searching buses and subways for bombs, something we do in post-identity America.
The police officers must be careful not to give the impression that every rider who looks Arab or South Asian is automatically a subject of suspicion.… Those who are selected simply because they are carrying packages should be chosen in a way that does not raise fears of racial profiling—by, for example, searching every fifth or twelfth person, with the exact sequence chosen at random.42
What is most scary about public safety à la The New York Times—its absurdity, or the purpose of its absurdity? The point here, according to The Times, was not to avoid death by murder-bomber (melted flesh and mangled steel) at the potential cost of racial profiling (hurt feelings); but rather to avoid raising “fears of racial profiling” (hurt feelings) at the potential cost of death by murder-bomber (melted flesh and mangled steel). And mark the creed the enlightened ones urged all good citizens to follow: In the Exact Sequence Chosen at Random We Trust. It makes perfect sense. Since we have denied our own identity so long, the Other—even the Other who may at any moment explode, driving nuts and bolts into the burning flesh of unarmed innocents—has also ceased to matter. Which may or may not seem to amount to much when it comes to the adventures of Mary Poppins, but is it any way to fight a war?