“There is nothing outside the text”
—or, fear of the proliferation of meaning
One of the larger canards in the received wisdom about poststructuralism and postmodernism is that their proponents don’t believe in “meaning,” don’t think it’s possible for anyone ever to “mean” anything at all. Both modes of theoretical writing have been branded as “trendy nihilisms” that deny life, language, or literature any significance whatsoever. But this charge of “nihilism” rather badly misses its mark, for, as we will learn in this lesson, poststructuralist and postmodernist writers actually stop well short of affirming that life, language, and literature have “no meaning.” Rather, such writers examine our fear that human reality generates far too many meanings, produces way too much interpretation. They trace and engage—but attempt never to assuage—our pervasive anxieties about semiotic excess.
For to return to a key figure whom we haven’t mentioned in a while, these writers have read their Nietzsche, who thought that we should neither “wish to divest existence of its rich ambiguity” nor ever “reject the possibility” that the world “may include infinite interpretations” (1887/2006: 378, 379). Post-Nietzschean writers hope to preserve and enhance this exceedingly “rich ambiguity,” but they also attempt, as did Nietzsche at his diagnostic best, to bring out into the light our persistent “metaphysical” wish to “divest” ourselves of interpretive overabundance; they examine our “imperialist” tendency to reject multiple possibilities and to suppress alternative intelligibilities, our desire to control and contain difference and alterity in “others” and in ourselves. Far from espousing some lame “disbelief in meaning,” then, these writers attempt to expose all the “ideological figure[s] by which one marks the manner in which we fear the proliferation of meaning” (Foucault 1969/1998: 222).
Chief among those fear-based “ideological figures” is arguably “meaning” itself, a word that has been used quite routinely in the “history of metaphysics” to police rowdy proliferation, an interpretive police-action allowing certain readers to imagine that post-Nietzschean infidels “don’t believe” in any “meaning” simply because they don’t buy any one “fixed” interpretation of what “meaning” means. But other looming figures of fixity include reason, order, origin, essence, presence, unity, universality, purpose, being, identity, totality, God, man, center, truth, concept, science, enlightenment, history, progress, modernity, author, structure, the unconscious, and so on. The general poststructuralist/postmodernist argument is that these figures—each of which has been purported or relied upon to guarantee “meaning,” to enable and enrich (or at least secure and stabilize) our understanding—have also worked to impede and impoverish creative thought, to limit “the play of signification” (Derrida 1966/1978: 280), to constrict the “liberation of symbolic energy” (Barthes 1971/1977: 158), to curtail “the free circulation, the free manipulation, the free composition, decomposition, and recomposition of fiction” (Foucault 1969/1998: 221), to deny, restrict, or otherwise de-vivify what Nietzsche himself might have simply called “art.”1
The more specifically postcolonial inflection of this argument (for our lesson here will also concern postcolonial theory) is that these master tropes of Western metaphysics have enriched and empowered themselves at the considerable expense of others—not simply other tropes, but also, and often brutally, other people—all those who have been “othered” (colonized, subordinated, abjected, marginalized, exoticized, silenced, exploited, enslaved) by the dominant “first-world” orders of knowledge, power, and truth. Thus, Homi Bhabha begins his essay “The Other Question” by asserting that “an important feature of colonial discourse is its dependence on the concept of ‘fixity’ in the ideological construction of otherness” (1983/1996: 37).
We can’t adequately demonstrate here how each and every one of the figures listed above has participated in “the violence of metaphysics” or operated in the service of ideology and empire. But we can note, for example, how the appearance on our list of the words “essence,” “God,” and “man” underscores the anti-essentialist, anti-theological, and anti-humanist dispositions of theoretical writing that we first encountered back in our introductory chapter. The fact that the word “truth” gets listed here would seem to indicate that post-Nietzschean theory is also anti-veridical, “against truth.” And that’s actually a true story, for these interpretive strategies do in fact follow Nietzsche in holding that there are no facts, only interpretations, and in experimentally calling into question the actual value of what gets called “truth.” The distinction between “truth” and “what gets called truth” is crucial here, and for post-Nietzscheans, the operative theoretical assumption is that there’s actually none of the former outside of the latter—not that there are no truths at all, but rather that there are no truths outside of particular truth-claims, which are always rendered in language, always put into words. In other words, and again, “theory begins . . . at the moment it is realized that thought is linguistic . . . and that concepts cannot exist independently of their linguistic expression” (Jameson 2004: 403).
Or, in still other words, “there is nothing outside the text” (Derrida 1967/1997: 158).
Now, an indispensable guide to understanding the paradox of “anti-veridical truth-claims” would be Nietzsche’s “On Truth and Lies in an Extra-moral Sense,” particularly the following bit of Q&A—
What then is truth? A moveable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, binding. Truths are illusions which we have forgotten are illusions; they are metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins. (1873/2006: 117)
Only by forgetting this primitive world of metaphor can one live with any repose, security, and consistency: only by means of the petrification and coagulation of a mass of images which originally streamed forth from the human imagination like a fiery liquid . . . in short, only by forgetting that he himself is an artistically creating subject, does man live with any repose, security, and consistency. (1873/2006: 117)
These lines should help us understand the “anti-veridical” bent of post-Nietzschean theoretical writing, to grasp why Nietzsche himself interprets “truth,” or what gets called truth, as the implacable enemy of “art.” For again, in all truth, what we call “truth” exists only in the form of statements, expressions, or truth-claims, which must be made in language, which is by nature fictional. “Truths” are those initially experimental fictions that have become so sedimented or fixed for “a people” as to seem metallically real, canonical, foundational. For Nietzsche, what “we the people” actually value in what we call “truth” is much less veracity than fixity, “repose,” the binding and comforting sense of security against fiction, against the wild proliferation of fiction, that “knowing the truth” would seem to provide. If (as Nietzsche’s story goes) the “eternal verities” could ever be honest about themselves, if “an honest truth” weren’t a contradiction in terms, then “truth” would just have to fess up to being fiction, simply another form of art, and certain pro-veridical disciplines—religion, philosophy, history, science—would have to acknowledge their own imaginative, rhetorical, performative, or “literary” statuses as well. Art can remain luxuriantly artistic while still being completely honest in and about its utter mendacity. But neither “truth” nor its attendant discourses can afford to be truthful about themselves without becoming truly other than themselves, strangers to themselves. And so, rather like the violent homophobe who attacks in the openly queer individual what he can’t admit or abide in himself, the ugly “truth” must maintain itself in its rancorous hostility to art, its “constitutive other” or beautiful semblable—hence the phrase “the violence of metaphysics.”
But just as “truth” operates aggressively against the “symbolic energy” animating the “free circulation” of art, so do concepts function repressively against the freeplay of differences, against Derridean différance.2 The key to understanding what Derrida means by “différance,” or, at least, a key to understanding why he insists that “différance” isn’t a concept, is found once again in “On Truth and Lies,” where Nietzsche asks that we critically “consider the formation of concepts.”
Every concept arises from the equation of unequal things. Just as it is certain that one leaf is never totally the same as another, so it is certain that the concept “leaf” is formed by arbitrarily discarding these individual differences and by forgetting the distinguishing aspects. This [forgetting] awakens the idea that, in addition to the leaves, there exists in nature the “leaf”: the original model according to which all the leaves were perhaps woven, sketched, measured, colored, curled, and painted—but by incompetent hands, so that no specimen has turned out to be a correct, trustworthy, and faithful likeness of the original model. (1873/2006: 117)
Here, Nietzsche takes aim at “conceptual” targets both political (the egalitarian democratic movements of his day, which he thought were forcing “the equation of unequal things”) and philosophical (the arch-idealist Plato, who believed that behind all multifariously existent material realities, like “leaves,” there stands a unitary and original form or truly ideal model for those realities—for example, “leaf”; that all material things are just so many sorry copies of the original model; and that mimetic representations of material things are merely copies of copies, many false steps removed from the “truth”).
By playing so anti-Platonically in these leaves, however, Nietzsche partially births deconstruction, “reversing and displacing” a foundational binary opposition of Western metaphysics—good original vs. bad copy.3 While in garden-variety Platonism, “leaf” would be held up as the first and original “cause” over and against those secondary effects called “leaves,” Nietzsche here suggests that the putative copies, the leaves—or better, the differences among them—actually “come first,” and that the concept “leaf” is formed only by virtue of specific operations of repression (arbitrarily discarding individual differences) and amnesia (forgetting distinguishing aspects). It is only after these operations are performed that their conceptual effect—“leaf”—is implanted in our idealist imaginary as the original cause of the infinitely multifarious leaves. In our ordinary “metaphysical” thinking, the singularly “good” origin must always causally precede the multiply “bad” copies. In Nietzsche’s “proto-deconstructionist” analysis, however, the copies precede and give birth to the origin, which turns out to be a hoary sham—more twilit idol than guiding light.
Before leaving Nietzsche’s leaves, however, let’s take a brief look at how he treats another prominent figure on the metaphysical list—“reason.” Nietzsche begins his critique of philosophical rationalism with his first writing, Birth of Tragedy, which diagnoses Greek drama’s decline from the “divine” representation of Apollonian and Dionysian “dreams and ecstasies” to the more “stage-manageable” realm of Socratic and Euripidean “ideas and feelings,” a more reasonable kingdom in which “art is overgrown by philosophical thought and forced to cling closely to the trunk of dialectic” (1872/2006: 68).4 But if Nietzsche begins his critique of rationalism with Birth of Tragedy, he brings it to a head in Genealogy of Morals, where we find perhaps his pithiest aphoristic truth-claim—“reasons relieve” (1887/1992: 576). To begin to understand Nietzsche’s reasons for making this claim, consider that in Nietzsche’s analysis we have long channeled our overabundant “symbolic energy” into a series of tense negotiations with the problem of suffering. As you’ll recall from the discussion in Lesson Four, Nietzsche really couldn’t argue with the Buddha’s first noble truth—life is painful. In Nietzsche’s view, we the living can endure quite a lot of pain for quite a long time, even to no end; but what we apparently cannot endure, not even for a New York minute, is pain to no purpose, suffering for no good reason. Thus, a large part of our imaginative activities involves creative rationalization, inventing all the very good reasons we can come up with, but necessarily forgetting our own acts of invention, pragmatically using these fabricated reasons to explain our suffering to ourselves, blessedly relieving ourselves of the evil of unexplainable sorrow while identifying “the reasonable” and “the relieving” both with each other and with “the good” in and of itself.
A rudimentary example: one aspect of reality that we tend to find insufferable is unfightable injustice—“life isn’t fair,” horrible things happen to wonderful but powerless people, despicable creeps get away with heinous crimes, and so on. Over time, we’ve devised our own systems of justice to try to deal with this problem, punishing the guilty and protecting the innocent whenever humanly possible. But because we know that the systems we created don’t work perfectly—and can’t, in any case, medicate the pain we experience when the innocent are slain by “acts of nature”—we must also fantasize perfect (if mysterious) systems of justice (God’s will, eternal compensations or penalties in the afterlife, the ironclad laws of karma, etc.), systems that we can’t know or acknowledge that we ourselves imagined, simply because if we did know, they wouldn’t work, wouldn’t provide “relief.” Nietzsche here joins Marx in thinking of religion as the oldest and most popular opiate in world history. Unlike actual drugs, however, which “work their magic” regardless of whatever their users might “believe” about them (crystal meth will have its way with me even if I discover that it wasn’t cooked up by elves), imaginary opiates typically fail to opiate humans who “come to believe” that merely human imaginations produced them—“reasons” cease to “relieve” the non-duped who figure out the real reasons behind them.
Hence “God is dead” for the utterly disenchanted, modern, secular, rationalist imagination, which has supposedly left religious fear and superstition behind in the dust of its progress. But Nietzsche’s whole argument “against reason” is that “reason” can operate just as narcotically as does religion, dialectically “relieving” its adherents of their sufferings, their pained experience of contradiction, assuaging their anxiety that scientific “truth” might relapse into the more ambiguous realities of myth and art. For Nietzsche, the “fundamental secret of science” is that it constitutively misunderstands its own teleological goal: the scientific “search for truth” has always been
accompanied by a profound delusion, which first came into the world in the person of Socrates—the unshakeable belief that, by following the guiding thread of causality, thought reaches into the deepest abysses of being and is capable not only of knowing but even of correcting being. This sublime metaphysical madness accompanies science as an instinct and leads it again and again to its limits, where it must transform itself into art: which is the real goal of this mechanism. (1872/2006: 71)
For Nietzsche, then, the “reason” worshipped by both modern and classical metaphysics is at root animated by anxiety, by our fear of “the dangerous and cancerous proliferation of significations” (Foucault 1969/1998: 222), our fear that metaphysical truth might metastasize into mad fiction. Like religion, “reason” operates analgesically, spreading the salve of coherence on the painful wound of contradiction.
And as always, coherence in contradiction expresses the force of a desire. The concept of centered structure is in fact the concept of a play based on a fundamental ground, a play constituted on the basis of a fundamental immobility and a reassuring certitude, which is itself beyond the reach of play. And on the basis of this certitude anxiety can be mastered. (Derrida 1966/1978: 279)
If, as Nietzsche teaches, “concepts” are the “graveyard of [differential] perceptions” (1873/2006: 121), if “concepts” per se express “the force of a desire” to repress or forget the play of différance, then, as Derrida, to whose text we’ve abruptly cut here, might say, “the concept of centered structure” has long expressed the coercively orthopedic heart of that desire. In Derrida’s heartbreaking estimation, such forceful expression/repression is the center’s structural function, or the structure’s central function, and has been for quite some time, for
the entire history of the concept of structure, before the rupture of which we are speaking, must be thought as a series of substitutions of center for center, as a linked chain of determinations of the center. Successively, and in a regulated fashion, the center receives different forms or names. The history of metaphysics, like the history of the West, is the history of these metaphors and metonymies. Its matrix . . . is the determination of Being as presence in all senses of this word. (1966/1978: 279)
Now, the “rupture” of which Derrida speaks in this passage would seem to involve the advent of structuralism, “the linguistic turn in the human sciences.” Before this turn, before this rupture, “the notion of a structure lacking any center represents the unthinkable itself” (1966/1978: 279). After the turn, after Nietzsche’s insight that the “web of concepts is torn by art” (1873/2006: 121), after Saussure’s insight that language is a differential structure without positive terms, the unthinkable materializes itself, tears us a new one. In other words, the unthinkable rupture occurs when it finally dawns upon certain thinkers that human reality is only ever put into writing and that writing neither contains nor emanates from a center. “Henceforth,” writes Derrida,
it was necessary to begin thinking that there was no center, that the center could not be thought in the form of a being-present, that the center had no natural site, that it was not a fixed locus but a function, a sort of nonlocus in which an infinite number of sign-substitutions came into play. This was the moment when language invaded the universal problematic, the moment when, in the absence of a center or origin, everything became discourse . . . that is to say, a system in which the central signified, the original or transcendental signified, is never absolutely present outside a system of differences. The absence of the transcendental signified extends the domain and the play of signification infinitely. (1966/1978: 280)
If it was structuralism that first alerted thinkers to this invasion, Derrida thinks that “modern” structuralism, still indentured to an ancient metaphysical dream of truth, misread its own differential significance. In “The Structural Study of Myth,” for example, Claude Lévi-Strauss asserts that “whatever emendations the original formulation may now call for, everybody will agree that the Saussurean principle of the arbitrary character of the linguistic sign was a prerequisite for the accession of linguistics to the scientific level” (1963/2007: 860). On the contrary, poststructuralists would agree that Saussure’s principles necessitated the demotion of almighty science to the merely linguistic level, the displacement of all scientific truth into figurative language. And this figural displacement carries with it not only “truth” but any “pro-veridical” discourse aspiring to operate on “the scientific level” or purporting to make “rigorous statements” about any “central” objects of analysis. These “centers” of science include, obviously, “structure” for structuralism, but also “history” for Marxist dialectics and “the unconscious” for psychoanalysis. In the essay “Freud and Lacan,” for example, Althusser avers that
Lacan’s first word is to say: in principle, Freud founded a science. A new science which was the science of a new object: the unconscious. A rigorous statement. If psycho-analysis is a science because it is the science of a distinct object it is also a science with the structure of all sciences. (1971/2001: 135)
For Derrida, however, this “rigorous” presumption of a distinct object as the prerequisite for any science represents the big problem with big science, betraying the metaphysical hangover afflicting the structure of all sciences as well as the “science-ism” of all existing structuralisms.
In Derrida’s critique of psychoanalysis, then, Lacan remains a facteur de la vérité, a scientific “purveyor of truth” (Derrida 1980/1987: 413) in cahoots with every ascetic idealist in the history of metaphysics from Plato and Hegel to Rousseau and Lévi-Strauss. Coupling, as you’ll recall, Saussure with Freud to argue that “the unconscious structured like a language” always involves a central lack, Lacan “deprivingly” dictates that “we must accept castration” because “castration” is the bedrock “truth” of desire and the veritable “centre of analytic experience” (Lacan 2008: 41, 53). Writing against the castrating Lacan, Derrida reads Saussure back through Nietzsche to argue that language “more radically” involves not a central lack, but a lack of center. Derrida thus more generously advocates a “joyous affirmation” that “determines the noncenter otherwise than as loss of center” (1966/1978: 292)—a “playful” affirmation that determines the noncenter otherwise than in the Oedipal terms of castration and without any guilt over “broken immediacy” (1966/1978: 292) with mother/nature or any nostalgia for some lost ontological homeland of the real. This “joyous affirmation,” writes Derrida, “plays without security.” This “active interpretation” of interpretation
affirms play and tries to pass beyond man and humanism, the name of man being the name of that being who, through the history of metaphysics or of ontotheology—in other words, throughout his entire history—has dreamed of full presence, the reassuring foundation, the origin and the end of play. (1966/1978: 292)
One could of course argue with Derrida about the “central significance” of Lacan’s writings. As I suggested at the end of the previous lesson, it’s possible to affirm Lacan otherwise than as a phallogocentric, heterosexist, straight-ahead structuralist blowhard. But as should be clear to anyone who might actually bother to read Derrida’s most basic writings, his poststructuralist affirmation of “the noncenter” hardly amounts to a nihilistic chucking of all “significance” tout court. Rather, Derrida’s Nietzschean “affirmation of life” traces what he calls an “overabundance of the signifier” (1966/1978: 290).5 This vital excess can only cause trouble in “a world where one is thrifty not only with one’s resources and riches but also with one’s discourses and their significations” (Foucault 1969/1998: 221); excessive différance is thus “obviously threatening and infallibly dreaded by everything within us that desires [such a world, such] a kingdom” (Derrida 1967/1982: 21–2). This overabundance of differential signification provokes “anxious” interpreters to circle the wagons around the “fundamental immobility” and “reassuring certitude” of a “center” that can “hold.” But this overabundance can also spur active interpreters to initiate new methods of paying “tenacious attention to the materiality of human signification” (Chow 2002/2007: 1910), to produce “new concepts to explain how meaning works” (Lucy 2004: 144), novel ways of reading and writing all the arts of being human after the linguistic turn and in the absence of any “transcendental signified.”6
Derrida, for his part, attempts to “affirm play” beyond “man and humanism,” after the closure of metaphysics. But he hardly imagines that, by virtue of this attempt, he or we can ever simply wash our hands of metaphysics, for any claim “against truth” is still inescapably a truth-claim, and any attempt to rinse oneself clean of the remains of the metaphysical remains, in itself, a metaphysical gesture.
There is no sense in doing without the concepts of metaphysics in order to shake metaphysics. We have no language—no syntax and no lexicon—which is foreign to this history; we can pronounce not a single destructive proposition which has not already had to slip into the form, the logic, and the implicit propositions of precisely what it had to contest. (Derrida 1966/1978: 280–1)
To “affirm play” thus doesn’t mean to imagine that one has completely shaken off the last drops of metaphysics. Rather, for Derrida, to “affirm play” means to let go of the idea that there’s ever going to be any really “reassuring foundation” for the signification of human reality, any natural or supernatural locus regulating the proliferation of meaning, any philosophical, political, theological, or poetical “center” that isn’t implicated in the all too human dream of full presence, the magical “image of perfectly self-present meaning” that is “the underlying ideal of Western culture” (Johnson 1981: ix).7
Once play is affirmed, however, Derrida does indeed attempt to extend its domain ad infinitum, releasing a swarm of new terms and phrases—différance being only one among so many others, like trace and supplement, that Derrida warrants his own personal dictionary (Lucy 2004)—all in the effort to examine “how meaning works” without foundationally fixing it or transporting it into some transcendental ether.8 These Derridean “figures without truth” (1967/1982: 18) cannot be absolutely foreign to metaphysics, but they can defamiliarize its history—they can never “truly” shake metaphysics off, but they can make its foundational assumptions tremble.
Derrida’s most infamously tremulous bit of “averidicality” is no doubt the axiom that forms the title of this lesson—il n’y a pas de hors-texte—there is no outside-text, or “there is nothing outside the text” (1967/1997: 158). This little zinger appears in the section of Of Grammatology called “The Exorbitant. Question of Method,” which concerns both Rousseau’s writings and Rousseau’s representative and dysgraphic anxieties about the exorbitances of writing in general. For Derrida, however, the methodological question is one “not only of Rousseau’s writing but also of our reading.” Any writer, writes Derrida,
writes in a language and in a logic whose proper system, laws, and life his discourse by definition cannot dominate absolutely. He uses them only by letting himself, after a fashion and up to a point, be governed by the system. And [our] reading must always aim at a certain relationship, unperceived by the writer, between what he commands and what he does not command of the patterns of language that he uses. This relationship is not a certain quantitative distribution of shadow and light, of weakness or of force, but a signifying structure that critical reading should produce. (1967/1997: 158)
Derrida attempts to “produce” a method or mode of “reading” (sometimes called deconstruction) which assumes that writers, even great writers, are never the absolutely dominative commanders of language; deconstructive reading tenaciously attends to the differences between command and noncommand that appear in the patterns of language in which all writers, even great writers, participate. As we’ll soon see, this Derridean production is not unrelated to Roland Barthes’ autopsy of “the Author” and Michel Foucault’s interrogation of that august figure, all of which (production, autopsy, and interrogation) were published within the same few tremulously anti-authoritarian years (1967, 1968, and 1969, respectively). But before attending those slightly later funerals for authorial authority, let’s consider one of Derrida’s explanatory comments about il n’y a pas de hors-texte:
The concept of text I propose is limited neither to the graphic, nor to the book, nor even to discourse, and even less to the semantic, representational, symbolic, ideal, or ideological sphere. What I call “text” implies all the structures called “real,” “economic,” “historical,” socio-institutional, in short, all possible referents. Another way of recalling, once again, that “there is nothing outside the text.” That does not mean that all referents are suspended, denied, or enclosed in a book, as people have claimed, or have been naïve enough to believe or have accused me of believing. But it does mean that every referent, all reality, has the structure of a differential trace, and that one cannot refer to this real except in an interpretive experience. The latter neither yields meaning nor assumes it except in a movement of differential referring. That’s all. (1988: 148)
Despite, however, this and other lucid explanations of his take on “the text,” Derrida is still construed by conventional wisdom to be an “abstruse” nihilist who thought that “all referents are suspended” and that no “interpretive experience” can ever “yield meaning” of any kind or of any value to any reader. Derrida is still understood to have “claimed that language, by its very nature, undermined any meaning it attempted to promote” (Eugenides 2011: 47). But Derrida actually rejects the inherited metaphysical logic that if there’s no “center” for everything there can never be any “point” or “meaning” to anything, that if “the center cannot hold” then all arguments must “fall apart.” There’s quite a significant difference, after all, between claiming that all “meaning” is always “undermined”—whatever that means—and promoting the view that no “meaning” ever escapes or transcends its constitutive involvement in “a movement of differential referring.” There’s a large and loudly honking difference between writing “There is no simple reference” (1972/1981: 206), as Derrida did, and asserting that “there is simply no reference,” as Derrida didn’t. It was always Derrida’s problem that certain people read (him) very selectively, if at all, and that certain readers have trouble envisioning any protocols of reading other than those that protect their own certainties. Reasons relieve, and so, often enough, does reading. In Of Grammatology, Derrida writes that a productive (rather than protective) way of reading
cannot consist of reproducing, by the effaced and respectful doubling of commentary, the conscious, voluntary, intentional relationship that the writer institutes in his exchanges with the history to which he belongs thanks to the element of language. This moment of doubling commentary should no doubt have its place in a critical reading. To recognize and respect all its classical exigencies is not easy and requires all the instruments of traditional criticism. Without this recognition and this respect, critical production would risk developing in any direction at all and authorize itself to say almost anything. But this indispensable guardrail has always only protected, it has never opened, a reading. (1967/1997: 158)
This passage, had it been carefully read, might have quieted certain academic and journalistic rumors that Derridean “freeplay” = “anything goes,” that deconstruction completely evacuates (itself on) “traditional criticism” while “playfully” authorizing itself to say almost anything, or that Derrida considers all “critical productions” equally valid or equally invalid and all “interpretive experience” simply a “meaningless” game. Coincidentally, the passage just quoted happens to appear on the very same page of Derrida’s text as il n’y a pas de hors-texte, the phrase that launched a thousand claims that Derrida believed all of reality to be “enclosed in a book.” Of course, one would have to have actually opened and read a book by Derrida to understand how many of the slings and arrows of academic outrage against deconstruction were quite beside his points. But then again, one would have to have read (and not just heard rumors about) Nietzsche to understand how and why valid points against validity, or truth-claims against truth, or reasonable arguments against reason, might be possible or desirable in the first place; one would need to have read a few key Nietzschean affirmations to understand how deconstruction can be “on the side of the yes, of the affirmation of life” (Derrida, cited in Benjamin 2006: 81), how “interpretive experience” can say “yes” to “life” by affirming no end of “figures without truth” (Derrida 1967/1982: 18); one would have to have read Nietzsche, as Derrida read Nietzsche, to understand how reading and writing can affirm “life” by experimentally calling the “value of truth” into question.
As for Barthes and Foucault, their respective titles—“The Death of the Author” and “What is an Author?”—seemed to distress the late twentieth-century literary cognoscenti even more than Nietzsche’s “God is dead” outraged his readers at the previous fin de siècle. A possible explanation for this difference in distress-levels: most contemporary intellectuals are comfortably atheist or agnostic and either don’t very much mind God’s being dead or never entertained the notion of one day becoming deities themselves. But many writers (the writer of this very sentence not, in all honesty, exempted) still hold on to the dream of ending up as respected authors or authorities in the dominative and commanding sense that Derrida, Barthes, and Foucault describe and deride. Aspiring masters of meaning may no longer believe in God, but “they still believe in truth,” as Nietzsche puts it; they still on some level want (every reader) to bow down before the powerful figure of “the author” as both producer and proprietor of “truth”; they still depend upon the idea of “the author” to grant them serenity, “repose, security, and consistency” (Nietzsche 1873/2006: 119).
Barthes’ move “to substitute language itself for the person who until [recently] had been supposed to be its owner” and his assertion that “it is language which speaks, not the author” (1968/1977: 143) both spell a kind of “death” for this patriarchal “Author-God” as proprietary commando, “the father and the owner of his work” (1971/1977: 160). But Barthes doesn’t thereby represent actual writers as mere ventriloquist’s dummies. The “Author-God” action-figure is arguably dead enough, but for Barthes, this demise hardly means the end of writing. For writing “can be read without the guarantee of its father” (1971/1977: 160); moreover, for Barthes, the writer’s actual power isn’t paternally procreative anyway. The writer’s actual power is not to originate but to mix.
The text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture . . . The writer can [thus] only imitate a gesture that is always anterior, never original. His only power is to mix writings, to counter the ones with the others . . . Did he wish to express himself, he ought at least to know that the inner ‘thing’ he thinks to ‘translate’ is itself only a ready-formed dictionary, its words only explainable through other words, and so on indefinitely. (1968/1977: 146)
In Barthes’ estimation, the actual purpose of this figure called “the Author” is to please and empower the critic, not the active writer or the performative reader. For
to give a text an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing. Such a conception suits criticism very well, the latter then allotting itself the important task of discovering the Author . . . beneath the work: when the Author has been found, the text is ‘explained’—victory to the critic. Hence there is no surprise in the fact that, historically, the reign of the Author has also been that of the Critic, nor again in the fact that criticism (be it new) is today undermined along with the Author. (1968/1977: 147)
In the place of “literature,” that once-sacred but now fatally compromised cow milked by Author-God and Victor-Critic alike, Barthes proposes “writing,” which
by refusing to assign a ‘secret’, an ultimate meaning, to the text (and to the world as text), liberates what may be called an anti-theological activity, an activity that is truly revolutionary since to refuse to fix meaning is, in the end, to refuse God and his hypostases—reason, science, law. (1968/1977: 147)
Barthes goes on to suggest that the “true place” of such “writing” is “reading,” that “a text’s unity lies not in its origin but in its destination”—namely, the reader, who “is simply that someone who holds together in a single field all the traces by which the written text is constituted.” Giving a big leg-up to reception theory and reader-response criticism, Barthes concludes, “Classic criticism has never paid any attention to the reader; for it, the writer is the only person in literature . . . We [however] know that to give writing its future, it is necessary to overthrow the myth; the birth of the reader must be at the cost of the death of the Author” (1968/1977: 148).9
Writing a year later than Barthes, Foucault finds nothing fresh in the news about “the disappearance—or death—of the author,” which he says “criticism and philosophy took note of . . . some time ago.” He suggests, however, that its “consequences . . . have not been sufficiently examined, nor has its import been accurately measured” (1969/1998: 207). As Foucault puts it, in a barb against Barthes and a dig at Derrida, “it is not enough . . . to repeat the empty affirmation that the author has disappeared . . . [or] to keep repeating that God and man have died a common death” (1969/1998: 209), nor is it enough to use “the notion of writing” to “transpose the empirical characteristics of the author into a transcendental anonymity” (1969/1998: 208). Instead, writes Foucault, “we must locate the space left empty by the author’s disappearance, follow the distribution of gaps and breaches, and watch for the openings this disappearance uncovers” (1969/1998: 209).
One important historical detail that Foucault uncovers is that “the author” hasn’t always represented everything it seems to stand for today, that the “author function” has functioned differently at various moments in the history of “our civilization.” “The author function,” writes Foucault, “does not affect all discourses in a universal and constant way.”
In our civilization, it has not always been the same types of texts that have required attribution to an author. There was a time when the types of texts we today call “literary” . . . were accepted, put into circulation, and valorized without any question about the identity of their author . . . On the other hand, those texts we now would call scientific . . . were accepted in the Middle Ages, and accepted as “true,” only when marked with the name of their author. “Hippocrates said,” “Pliny recounts” . . . (1969/1998: 212)
Today, of course, the situation is reversed—while we can’t tolerate the idea of a great literary work without some illustrious personage designated as its author, we routinely “impersonalize” scientific discourses in the very gesture of granting their authority (“evolutionary biology says,” “according to quantum physics,” etc.) without giving that grant a second thought.
Foucault’s most characteristic arguments here, however, involve the ideology of the “author function,” the relationship between, on the one hand, the circulation of discourses that are thought to have “authors” and, on the other, the coercively panoptical operations of knowledge and power. When, for example, we read at the beginning of Foucault’s essay that “the coming into being of the notion of the ‘author’ constitutes the privileged moment of individualization in the history of ideas, knowledge, literature, philosophy, and the sciences” (1969/1998: 205), we should recall that while for some of us “individualization” might sound like a sweet deal, for Foucault, it’s essentially a disciplinary process, a sour means of reproducing power relations. As he admonishes elsewhere, “Do not demand of politics that it restore the ‘rights’ of the individual, as philosophy has defined them,” for “the individual is the product of power” (Foucault 1972/1983: xiv). “Individualization” for Foucault is related to (albeit not completely identical with) ideological “interpellation” à la Althusser, which, as you’ll remember, involves turning “individuals” into docile bodies who “work all by themselves,” as if they were centers of rights and initiatives, as if they were free.10 If, historically, “discourses” have become unfree, have become “objects of appropriation” or ownership, then their “authors,” the “individuals” who can be held responsible for them—who “own” them or can be made to “own up” to them—aren’t exactly free either but are always already “subjects” of discipline and control. In other words, “authors” are brought into being so that discourses might be better brought into custody; or, discourses are attributed to “authors” so that the latter can more effectively be located, incarcerated, and/or killed.
Texts, books, and discourse really began to have authors . . . to the extent that authors became subject to punishment, that is, to the extent that discourses could be transgressive. In our culture (and doubtless in many others), discourse was not originally a product, a thing, a kind of goods; it was essentially an act—an act placed in the bipolar field of the sacred and the profane, the licit and the illicit, the religious and the blasphemous. Historically, it was a gesture fraught with risks before becoming goods caught up in a circuit of ownership. (1969/1998: 212)
Relating Foucault’s observation to relatively recent cultural clashes, we might ask against whom the Ayatollah Khomeini of Iran could have issued his famous fatwa if The Satanic Verses (1988) had been attributed only to “the anonymity of a murmur” (Foucault 1969/1998: 222) and not to the transgressively blasphemous Salman Rushdie.
Foucault of course begins and ends the essay called “What is an Author?” with a question attributed to Samuel Beckett—“What does it matter who is speaking?” (1969/1998: 205). Someone like Foucault might reply not simply that it never matters who’s speaking, that we should never take the question seriously at all, but rather that it is only the Ayatollahs of coercive culture—and culture is always to some degree coercive—who are duty-bound to take the question of “who is speaking” deadly seriously. For if we “keepers of the culture” can’t ascertain which particular “who” is in fact speaking, how can we know exactly which individual we should want to punish or silence or kill?
Foucault himself wasn’t into killing off “the author.” Nor was he interested in torturing and interrogating that figure, forcing it to reveal its inner “authenticity” or express its “deepest self” (1969/1998: 222). But Foucault did want to “change the subject,” to “reexamine the privileges of the subject” and call into question “the absolute character and founding role of the subject.” Foucault didn’t want to water-board “the author,” but he advocated “depriving the subject (or its substitute) of its role as originator” and favored “analyzing the subject as a variable and complex function of discourse” (1969/1998: 220–1). Because “the author” has long (but not always) functioned as one of the most highly privileged “substitutes” for “the subject” qua originator in “our civilization,” Foucault thinks that it’s high time to address “the ‘ideological’ status of the author” and “reverse the traditional idea” of the author function.
We are accustomed . . . to saying that the author is the genial creator of a work in which he deposits, with infinite wealth and generosity, an inexhaustible world of significations. We are used to thinking that the author is so different from all other men, and so transcendent with regard to all languages, that, as soon as he speaks, meaning begins to proliferate, to proliferate indefinitely.
The truth is quite the contrary: the author is not an indefinite source of significations that fill a work; the author does not precede the works; he is a certain functional principle by which, in our culture, one limits, excludes, and chooses; in short, by which one impedes the free circulation, the free manipulation, the free composition, decomposition, and recomposition of fiction. In fact, if we are accustomed to presenting the author as a genius, as a perpetual surging of invention, it is because, in reality, we make him function in exactly the opposite fashion . . . The author is . . . the ideological figure by which one marks the manner in which we fear the proliferation of meaning. (1969/1998: 221–2)
As this passage should make crystal clear, Foucault didn’t scoff at “meaning” or fear its proliferation. He obviously didn’t completely discount “the truth,” either, since he doesn’t seem to mind telling us what it is.11 And if it’s true that we have turned “the author” into an overly privileged “principle of thrift in the proliferation of meaning,” Foucault truly feels that we no longer have to be quite so “thrifty” with our “discourses and their significations” (1969/1998: 221), that “our civilization” can now afford to dethrone “author” and “subject” as original, eternal, transcendent, inexhaustible sources of signification, that it really wouldn’t kill us to begin thinking of these figures, and of ourselves, as variable and complex functions of discourse.
Foucault’s proposals are thus remarkably compatible with Derrida’s and Barthes’ writings about “a writing that can know no halt” (Barthes 1968/1977: 147). But maybe it’s that very compatibility that makes all their writings about all that writing threatening to and dreaded by all our inner Ayatollahs, by everything within us that fears semiotic excess and wants to fix meaning, to bring writing to a halt, everything within us that desires to be “the author” or to bow down before that seemingly generous and extravagant but actually quite austere figure. Just as our inherited metaphysical assumption has been that the center must hold in order for everything to hang together, our traditional literary assumption has been that in order to love language and appreciate great writing, to affirm fiction and value its meaning, we pretty much had to believe in “the author.” The truth, as we for some reason still call it, may be quite the contrary, and the deconstruction of these conventional assumptions might radically renew our appreciation, and our affirmation, of the fiction that we write, the fictions that we are.
II. “What are we calling postmodernity?”
Not that it should matter who’s speaking here, but it just happens to be Foucault, admitting or perhaps feigning ignorance about postmodernity in a 1983 interview—“What are we calling postmodernity? I’m not up to date” (Foucault 1983/1998: 447). Foucault’s interlocutor, one Gérard Raulet, thus finds himself in the ironic position of having to bring Foucault, reputedly one of postmodernism’s principal perpetrators, up to speed on the debate between Jürgen Habermas and Jean-François Lyotard about the viability of the so-called project of modernity and the emancipatory potential of what we’re still calling postmodernity.
For the German Habermas, the “project of modernity” begins with eighteenth-century “Enlightenment” rationalism; it involves “the belief, inspired by modern science, in the infinite progress of knowledge and in the infinite advance towards social and moral betterment” (Habermas 1980/2001: 1749). Though he recognizes that “the 20th century has shattered this optimism,” Habermas believes that we should still “try to hold on to the intentions of the Enlightenment, feeble as they may be” rather than “declare the entire project of modernity a lost cause” (1980/2001: 1754). Upholding the goal of a transparently “communicative rationality” operating within and governing “all spheres—cognitive, moral-practical, and expressive” (1980/2001: 1754), Habermas thinks that we should want to continue with the “progressive” modern project, which he considers “incomplete” but still completely worthwhile. He thus labels Derrida and Foucault as postmodern “young conservatives” (1980/2001: 1758) who have prematurely abandoned the progressive project out of an irrationalist Nietzschean aestheticist extravagance and a fetishistic investment in the notional expenditures of Georges Bataille.
On the French side of the debate, Lyotard also notes “the disappearance of this idea of progress within rationality and freedom . . . a sort of decay in the confidence placed by the last two centuries in the idea of progress.” For Lyotard, “this idea of progress as possible, probable or necessary was rooted in the certainty that the development of the arts, technology, knowledge and liberty would be profitable to mankind as a whole.” But Lyotard thinks that contemporary thinking has become deeply distrustful of the very idea of “mankind” as a unified “whole,” and rightly so. Lyotard finds it no longer salutary to sustain the modern “belief that enterprises, discoveries and institutions are legitimate only insofar as they contribute to the [total] emancipation of mankind” (1986/2001: 1612–13). Calling this modern faith in the complete emancipation of everybody a metanarrative—a grand or master narrative, overarching and monolithic—Lyotard famously characterizes postmodern skepticism as a radical “incredulity towards metanarratives” (1984: xxiv). For Lyotard, postmodern reality comprises incommensurable “language games,” a Humpty-Dumpty host of differential micro-narratives that the modernist metanarrative can no longer put back together again—“Only the transcendental illusion (that of Hegel [or of Sartre]) can hope to totalize [these language games] into a real unity” (1979/1984: 81).12 But Lyotard warns that “the price to pay for such an illusion is terror,” and he thus links the transcendental dream of “completing” the project of modernity to the totalizing and totalitarian schemes of modern history.
The nineteenth and twentieth centuries have given us as much terror [and as much totality] as we can take. We have paid a high enough price for the nostalgia of the whole and the one, for the reconciliation of the concept and the sensible, of the transparent and the communicable experience . . . [and] for the [attempted] realization of the fantasy to seize reality . . . Let us [thus] wage a war on totality; let us be witnesses to the unpresentable; let us activate the differences and save the honor of the name. (Lyotard 1979/1984: 82)
Having summarized this French–German debate, Raulet explains to Foucault that “Postmodernity is a breaking apart of reason . . . Postmodernity reveals, at least, that reason has only been one narrative among others in history; a grand narrative, certainly, but one among many, which can now be followed by other narratives” (in Foucault 1983/1998: 447). But Foucault surprisingly responds that he’s “never clearly understood what was meant in France by the word ‘modernity’ ” in the first place. Nor does he know “what Germans mean by modernity.”
Neither do I grasp the kind of problems intended by this term—or how they would be common to people thought of as being “postmodern.” While I see clearly that behind what was known as structuralism, there was a certain problem—broadly speaking, that of the subject and the recasting of the subject—[I] do not understand what kind of problem is common to the people we call “post modern” or “poststructuralist.” (1983/1998: 448)
Now, at this point, you might very well be thinking—if Michel Foucault himself didn’t quite get “modernity” or “postmodernity,” what fat chances for understanding have I? I would answer by saying that whatever is meant by “modernity” or “structure,” Foucault is essentially correct—they aren’t simply different words for the same set of problems. And so while postmodernism and poststructuralism are “problematically” related—both involve “following” Nietzsche, questioning science, calling truth’s bluff, changing “the subject,” interrogating the absolute primacy of “reason,” activating the play of differences, protecting the proliferation of meaning, and so on—they aren’t exactly the same theoretical phenomenon. We’ve read that poststructuralism takes the specific findings of structural linguistics (the arbitrary and differential nature of the linguistic sign) to their most “extreme” conclusions. Here, we’ll see how postmodernism involves a different but related set of dilemmas and extremities.
Let’s begin by considering three mutually implicated aspects of “modernity,” so as to better address the question of what the “post” in “postmodernity” might entail. Let’s say that these three aspects—let’s call them socioeconomic modernization, philosophical modernity, and aesthetic modernism—all involve new and different ways of coming to terms with the fact that the world must be made to mean. Socioeconomic modernization involves shifts in what a Marxist would call the mode of production—new ways of making wealth, goods, services, tools, machines, technologies, laws, wars, institutions, weapons, governments, nations, states, and empires, what Marx himself calls the “uninterrupted disturbance of social conditions” (1888/1978: 476). Philosophical modernity involves developing new modes of conceptualizing, rationalizing, critiquing and/or justifying the intellectual processes of making sense in and of modernization as uninterrupted social disturbance. And aesthetic modernism involves new ways of making and responding to the work of art within modernity/modernization.
Modernization is the “oldest” of these three aspects. Indeed, as Marshall Berman points out, “vast and increasing numbers of people have been going through it for close to five hundred years” (1988: 15).13 Berman writes that
The maelstrom of modern life has been fed from many sources: great discoveries in the physical sciences, changing our images of the universe and our place in it; the industrialization of production, which transforms scientific knowledge into technology, creates new human environments and destroys old ones, speeds up the whole tempo of life, generates new forms of corporate power and class struggle; immense demographic upheavals, severing millions of people from their ancestral habitats, hurtling them half-way across the world into new lives; rapid and often cataclysmic urban growth; systems of mass communication, dynamic in their development, enveloping and binding together the most diverse people and societies; increasingly powerful national states, bureaucratically structured and operated, constantly striving to expand their power; mass social movements of people, and peoples, challenging their political and economic rulers, striving to gain some control over their own lives; finally, bearing and driving all these people and institutions along, an ever-expanding, drastically fluctuating capitalist world market. In the twentieth century, the processes that bring this maelstrom into being, and keep it in a state of perpetual becoming, have come to be called “modernization.” (1988: 16, my emphases)
The “dynamic” words that I’ve emphasized in Berman’s description—changing, hurtling, striving, challenging, driving—can all be summed up in that last phrase, “a state of perpetual becoming.” And this state of perpetual(ly) becoming (modern) could be negatively compared to the sense of relatively “static being” or uninterrupted non-disturbance that we now associate (rightly or wrongly) with the pre-modern or medieval “life-world,” in which there really didn’t seem to be much happening, in which everything and everybody basically seemed to stay put—no great discoveries; no big changes in images of our place in the cosmos or one’s place in the “natural order”; no radical transformations in knowledge effected or even desired (particularly not by the church); no appreciable social mobility, much less mass demographic upheaval; no moveable type, printing presses, or mass communications; no particularly successful challenges to autocratic rulers; no acceleration, no movement, no change.
Or, in a word, no capitalism—Berman is right to say that it’s “finally” capitalist markets driving “the maelstrom of modern life” only in the sense that he lists the capitalist engine last. But it was arguably the transition in Western Europe from feudal agrarianism to mercantile capitalism that got this ball of “perpetual becoming” or “uninterrupted disturbance” rolling in the first place. It was arguably the shift from immovable to moveable capital, from arable land to investable money as the primary basis of wealth in Europe, which initiated all the increasingly rapid “movement and change” that we now associate with modernization. This shift helped precipitate the various revolutions (scientific, industrial, and sociopolitical) by virtue of which the rulers of the ancien régime (the titled monarchs of the landed aristocracy and the stony patriarchs of the crumbling church) were suddenly or gradually forced to cede power to the more secular and democratic mercantile bourgeoisie.
But more than political economy, more than an exchange of money and power, is at stake in the “modern” triumph of “movement and change” over feudal–medieval stability and stasis. There began to dawn, in the eighteenth and nineteenth centuries in Europe, the philosophical sense that perpetual “movement and change” were revolutionary values in themselves, inherently utopian, leading somewhere pretty good or even supremely good for everybody; in other words, there began to form the optimistic conviction that all these ever-accelerating upheavals were not just aleatory economic transitions profitable for the rising bourgeoisie alone, but morally progressive developments that would turn out to be “profitable to mankind as a whole” and would, in fact, lead to a final and total “emancipation of mankind.” In place of the relatively “frozen” or cyclical sense of time and history supported by feudal agrarianism (cyclical because still allegorizing seasonal cycles of planting and harvest), modern philosophers began to substitute a linear, dynamic, and dialectically progressive sense of human temporality and historicity. Moreover, in place of the anti-ameliorative ideology of “original sin” promulgated by a medieval church that condemned all talk of worldly self-improvement as hubristic heresy (no “redemption” for the fallen save through God’s mercy; no final happiness for select humans except in heaven, and so on), modern philosophy served up the purely secular idea of rational Enlightenment as mankind’s “original destiny” (Kant 1784/1996).14
We might note a sort of merger between “acquisitive” economic modernization and “inquisitive” philosophical modernity in a claim made by our old friend Hegel, one of the great promoters of perpetual becoming. In “The Positivity of the Christian Religion,” Hegel suggests that the principal imperative of Enlightenment rationality is to justify “the human possession of treasures formerly squandered on heaven” (1795/1948: 159). In the Enlightenment, that is, those who dared to think for themselves began thinking that we should start thinking of ourselves and should keep our most treasured thoughts to ourselves, in our own orbit, rather than squandering them on exorbitant fantasies like “God” and “heaven.” Enlightenment humanists thus attempted to give us all permission to start loving, helping, and believing in ourselves directly. Short-circuiting the old other-worldly route, Enlightenment humanist thinkers stopped projecting all the great powers of love and salvation onto the Deity and disinvested in the after-life as the only conceivable site for the final acquisition of happiness or the total accomplishment of our own ameliorative goals, all of which could be worked out in this world through the progressive use of Reason.15
And so began for Western Europe the languid and sinister blooming of the dream of a totally rational and totally organized human self-possession. Justifying our complete ownership of treasures once squandered on the divine, hoping to gain a conceptually controlling interest in “the maelstrom of modern life,” philosophical modernity attempts a total “realization of the fantasy to seize reality” (Lyotard 1979/1984: 82). From the postmodern perspective, however, the end results of this grab at reality’s fleeting ring have proven rather mixed. Not that Enlightenment humanism has produced only unmitigated disaster for humans; not that there hasn’t been some recognizable “progress within rationality and freedom” in the Western world in the last 200–500 years, but the twentieth century in particular has shattered the blithe assumption that our taking up the dare to think for ourselves would necessarily advance us all toward “social and moral betterment”; it has darkened the optimistic view of human history as the inevitably beneficent upward expansion of Man’s Reason.
For tooling along in the blind spot of Enlightenment’s Sunday morning drive is none other than our friend Thanatos, the good old-fashioned death instinct, which you don’t have to be a licensed psychoanalyst to discern busily at work in all teleological fantasies indentured to “nostalgia for the whole and the one,” whether the fantasies be sexual, secular, philosophical, or religious, harbored by political left or right. God knows there’s more than a touch of suicidal desire in the fantasy of sending oneself to heaven, else “the Everlasting” wouldn’t have “fix’d His canon ‘gainst self-slaughter,” as Shakespeare had Prince Hamlet complain in the “early modern” year 1603. And, as Walter Benjamin observed in 1936, “mankind” as the collective subject/object of mechanical modernization has reached such a degree of self-alienation “that it can experience [even] its own destruction as an aesthetic pleasure of the first order” (1936/1968: 242). But of course the pleasures of totally human destruction, subjective and objective, failed to remain merely aesthetic in the mid-twentieth century; indeed, only a few years after Benjamin’s suicide, these irresistible urges from “beyond the pleasure principle” became real in a substantially “new and different” way. If the modern metanarrative involves the fantasy of humanly (not humanely, but humanly) possessing all the treasures formerly squandered on heaven, and if one of the great powers humans had heretofore attributed to the Deity was the capacity to reduce the world to rubble and ash, then one developing plot-line of modernity’s big story reaches its climax in 1945, when we for the first time held the real power of world-destruction in our own trembling hands. Perhaps the postmodern condition really begins with the bombings of Hiroshima and Nagasaki. Or perhaps it begins somewhere in the “unpresentable” distance between Auschwitz as an industrial mode of genocide and Hiroshima as a technological form of mass destruction. 
To be sure, the new American petard was an inspired scientific advance over the old European ovens, but one wouldn’t exactly call it progress qua “social and moral betterment.”
So much, then, for the question of how we got to the “post” in philosophical postmodernity—few philosophers whole-heartedly believe in the modern metanarrative any longer, and incommensurable language (and war) games are still proceeding without morally progressing. If pro-modernist stalwarts complain that postmodernity involves all the social fragmentation and malaised alienation of “the maelstrom of modern life” but without the hope of total reunification and emancipation that alone makes it all bearable, the postmodernist rejoinder is that this totalizing “hope” is itself irredeemably implicated in various totalitarian daydreams of a unified “life-world” hygienically cleansed of all contaminating “others” (Jews, queers, capitalists, immigrants—name your poison). The darkest side of the dialectic of Enlightenment is purely instrumental reason, the racist/fascist “male warrior” fantasy of global purification in which freedom’s just another word for nothing left to kill.16 For philosophical postmodernists, then, the only “good war” left is Lyotard’s war against totality.17
But if the preceding explains the philosophical “postmodern turn,” how might we answer the question of “postmodernization”? How do we deal with the idea of human reality “after the end” of modernization when modernization clearly hasn’t ended? After all, there’s still a lot of “perpetual becoming” qua technological innovation going on in the world, so perhaps the term “postmodernization” is descriptive only in regard to certain “futuristic” fictions like George Miller’s 1981 film The Road Warrior, which depicts the coming exhaustion of petro-industrial society as a bloody struggle between nomadic hordes dueling over the last dribbles of fossil-fuel in a post-apocalyptic wasteland; or James Cameron’s 1984 film The Terminator, which suggests technology’s relentless continuation of its own “project” even after human civilization has ended; or David Foster Wallace’s 1996 novel Infinite Jest, which represents the consumer society of the very near future as being so fatally addicted to entertaining itself and so indifferent to a progressive or even linear conception of time and history that its calendar years are no longer consecutively numbered but corporately sponsored, named after illustrious commodities (Year of the Whopper, Year of Glad, Year of the Tuck’s Medicated Pad, etc.).
But, to think less speculatively about what “postmodernization” might mean for us today, we might think in terms of a particular paradigm shift from industrial mechanics to digital technology within the contemporary mode of production itself; we might consider, that is, the way technology seems to have superseded industry as the socioeconomic dominant of our global civilization; and we might ponder the changes in the experiential character of our present “life-world” consequent to this transition. The transition itself involves not only the specters of mass destruction (as in the Auschwitz to Hiroshima itinerary cited above) but the “indetermanences” of mass transportation and mass communication as well. Consider that while the paradigmatic contraption of “the modern age” is arguably the engine (steam, locomotive, automobile, jet), the paradigmatic conveyance of the postmodern condition is surely the screen (cinematic, televisual, digital, terminal). Consider as well that the shift from the former to the latter effectively and profoundly inverts and compresses human space/time relations. While the modern engine still serves to move bodies (and/as commodities) through space at ever-increasing speeds, the postmodern screen serves to bring commodified images of bodies and commodified information about commodities in ever-quickening tempos to increasingly stationary or stay-at-home bodies. While our engines might still take us to work, or play, or war, our screens bring all of that business back home to us in a hi-def 3D nanosecond. While we still have asphalt highways upon which to drive our fossil-fueled or hybrid automobiles, the “information superhighway” (to use a now rather dated phrase) is a much more important and culturally dominant thoroughfare. And while we may still want to drive our hybrids really fast, the speed of our hard-drives and search engines has become our infinitely more vital consideration.
Indeed, today, everything vital seems to have gone terminally virtual, which is why Jean Baudrillard considers “the postmodern” as the age of the simulacrum, “the desert of the real itself” (1983: 2).18 But because the “engine” driving both modern/industrial and postmodern/technological “movement and change” is still very much the production of wealth and power for the ruling/owning class—rather than, say, the positive annulment of private property and the dawn of a classless society—the Marxist Fredric Jameson designates and castigates postmodernism as “the cultural logic of late capitalism” and laments quite a number of its cultural turns. In addition to mourning “the death of the subject,” Jameson bemoans what he calls the “eclipse” of lively parody by dead-pan pastiche. Both are forms of stylistic mimicry, but while parody, says Jameson, “mocks the original” style in a satiric spirit of collectively normative judgment, casting “ridicule on the private nature of . . . stylistic mannerisms and their excessiveness and eccentricity with respect to the way people normally speak or write,” pastiche is spiritless “speech in a dead language.”
It is a neutral practice of . . . mimicry, without parody’s ulterior motive, without the satirical impulse, without laughter, without that still latent feeling that there exists something normal compared with which what is being imitated is rather comic. Pastiche is blank parody, parody that has lost its sense of humor. (1988/2007: 1957, 1958)
A particularly unamusing form of pastiche for Jameson is the “nostalgia film,” which displays its “pathological” indifference to developmental social transformation by transporting outdated cinematic styles into contemporary settings (as in Lawrence Kasdan’s 1981 film Body Heat, which Jameson takes as a “distant remake” of Billy Wilder’s 1944 film-noir classic Double Indemnity) or by beaming futuristic technologies into a mythic past, as in George Lucas’s heavily archetypified Star Wars saga (“Long ago, in a galaxy far, far away . . .”). To Jameson, it seems
exceedingly symptomatic to find the very style of nostalgia films invading and colonizing even those movies today which have contemporary settings, as though, for some reason, we were unable today to focus our own present, as though we had become incapable of achieving aesthetic representations of our own current experience. But if that is so, then it is a terrible indictment of consumer capitalism itself—or, at the very least, an alarming and pathological symptom of a society that has become incapable of dealing with time and history. (1988/2007: 1960)
Nor is Jameson amused by postmodern architecture, which, like the nostalgia film, tends to glom together different and incongruent historical styles without any sense of historical progression, and which, like pastiche in general, makes no normative judgments about its contextual urban surroundings and, worse, expresses no particular desire to transform them. For Jameson, the “great monuments of the International Style” that epitomized modernist architecture could be critically distinguished from their surrounding cities; moreover, “the act of disjunction was violent, visible, and had a very real symbolic significance,” for this stylistic gesture
radically separates the new utopian space of the modern [building] from the degraded and fallen city fabric, which it thereby explicitly repudiates (although the gamble of the modern was that this new utopian space . . . would fan out and transform [the whole urbanized world] eventually by the power of its new spatial language). (1988/2007: 1962)
The postmodern building, however, expresses neither critical judgment nor any ameliorative will to power beyond its own design parameters and is “content” to let the fallen city lie—“no further effects—no larger protopolitical utopian transformation—are either expected or desired” (1988/2007: 1962).
But if conditions are alarmingly bad with postmodern structures when considered from the outside, things get even worse, even more indifferent to utopian transformation, when you pass through the entrances into their bewildering interiors. Jameson, that is, has even less fun being lost in the funhouses of consumer capitalism than he does with postmodern pastiche, and he singles out as the worst architectural offender John Portman’s Los Angeles Bonaventure Hotel, a “mini-city” that “ideally ought not to have entrances at all (since the entryway is always the seam that links the building to the rest of the city that surrounds it), for it does not wish to be a part of the city” (1988/2007: 1962). The Bonaventure is a “postmodern hyperspace” in the lobby of which “it is quite impossible to get your bearings.” It is also the structure in which Jameson himself lost his bearings (as academic rumor has it) while trying to find his panel at a Modern Language Association convention being held there. Generalizing from his own interpretive experience of alienated dislocation, Jameson comes to his “principal point”:
that this latest mutation in space—postmodern hyperspace—has finally succeeded in transcending the capacities of the individual human body to locate itself, to organize its immediate surroundings perceptually, and to map cognitively its position in a mappable external world . . . This alarming disjunction between the body and its built environment . . . can itself stand as the symbol and analogue of that even sharper dilemma, which is the incapacity of our minds, at least at present, to map the great global, multinational and decentered communicational network in which we find ourselves caught as individual subjects. (1988/2007: 1963)
For Jameson’s money, then, we have splendid reasons to fear postmodern “proliferations of meaning” at every level, for they are all driven by the “ahistoricizing” logic of late capitalism, in the “perpetual present” of whose invisible hand we still find ourselves caught. For Jameson, the only intellectually valid way to bite the hand that feeds us postmodern culture is constantly to obey what he calls “the imperative of all dialectical thought” and to “always historicize!” (1981: 9)—a slogan that for minds less dialectically supple than Jameson’s (or for that matter Marx’s) seems to boil down to constantly diagnosing every cognitively mappable social ill as a symptom of “the global offensives of capital” (Ahmad 1996: 284).19
Of course, one might wonder if Jameson himself isn’t less “historicizing” here than overly generalizing about the current “incapacity of our minds.” After all, not every “individual human body” in the world gets hopelessly lost in the Bonaventure or its utopia-indifferent analogues, and some individual subjects (who are neither venture capitalists nor schizoid consumers) can cognitively map postmodern hyperspace reasonably well. One might think, moreover, that a Marxist with populist leanings (though that’s not exactly the sort of intellectual Jameson is) would smile upon certain aesthetic practices of postmodernism, practices which do their best to overturn given hierarchies and to subvert all the regnant “highnesses” of “elitist” culture. As Jameson notes, aesthetic postmodernisms “emerge as specific reactions against the established forms of high modernism . . . which conquered the university, the museum, the art gallery network and the foundations”; they efface “key boundaries or separations, most notably the older distinction between high culture and so-called mass or popular culture,” so that in postmodernism “the line between high art and commercial forms seems increasingly difficult to draw” (1988/2007: 1956, emphases added). This difficulty in drawing the line, however, which often energizes the populist academic left, seems only to distress Jameson, for insofar as he remains within the Frankfurt School tradition of profound suspicion toward mass culture (rather than the Birmingham tradition of cautiously celebrating the popular), Jameson of course wants art and thought to keep their critical distance from commerce. In other words, he would concur with Habermas that “when the containers of an autonomously developing cultural sphere are shattered, the contents get dispersed. Nothing remains from a desublimated meaning or a destructured form; an emancipatory effect does not follow” (1980/2001: 1756).20
But here, a question of cultural and intellectual authority emerges—who gets to decide what counts as a bona fide “emancipatory effect”? An “effect”—a discernible change in the interpretive experience of “the subject,” a particular activation of difference or liberation of symbolic energy or desedimenting shift in cultural innovation—that might well seem emancipatory, salutary, productive, or maybe just interesting to an Ihab Hassan or a Donna Haraway won’t cut much mustard in a strictly Marxist metanarrative that views any changes as “legitimate only insofar as they contribute to the [total] emancipation of mankind” (Lyotard 1986/2001: 1613) or only insofar as they help to bring about “the revolutionary transformation of social relations as a whole” (Jameson 1988: 53). Emancipatory effects in the fields of gender and sexuality, for example, will forever register as small potatoes for any Marxist meta-narrator in relation to the always much meatier dialectic of history as class antagonism. Indeed, for those who imagine themselves as the firmest adherents to the trunk of the Marxist dialectic, steadfast opposition to the predations of late capitalism and Western neo-imperialism sublates all “other” considerations, so that any new developments within postmodernity will be suspected as the free market’s latest ruse, and even the most retrograde and anti-modern practices of sexual oppression in non-Western or “Third World” regions (female genital mutilation, murderous persecution of gays and lesbians, fatwas against advocates of “gender mixing,” “honor killings” of young women, compulsory veiling of all women, and so on) can be countenanced, since the religious police or indigenous goon-squads who enforce these traditions can be viewed as defending their cultures against globalization, resisting colonization by the neoliberal West.21
We’ll return to this problem in the next section, which attends somewhat more closely to postcolonial theory. Let’s conclude this section, however, by taking up in greater detail the two aspects of aesthetic postmodernism that Jameson singles out—the reaction against “high modernism” and the effacement of the boundary between “high” and “mass” culture.
It’s easy enough to cognitively map “high modernism” in terms of periods and players—its time-frame stretches from just before World War I to just after World War II (with the greatest wave cresting in the period entre deux guerres). Its most prominent practitioners would include Picasso, Braque, Mondrian, Matisse, Rothko, and Pollock in the visual arts; Stravinsky, Schoenberg, Webern, and Berg in music; and Eliot, Joyce, Pound, Woolf, Faulkner, Stevens, and Hemingway, among others, in literature. What characterizes all these aesthetic practices at their heights is the relentless will to experimentation and innovation, the need to draw a line between current artistic procedures and those immediately preceding, the imperative (as per Ezra Pound’s famous slogan) to always “make it new.” Aesthetic modernism involves “the vertiginous work” of questioning all the given “rules of image and narration,” so that “all that has been received, if only yesterday . . . must be suspected” (Lyotard 1979/1984: 79). Aesthetic postmodernism thus involves the work that must be done when modernist practices themselves become the all too given, received, established, when formerly vertiginous work no longer provokes even the slightest unease, much less vertigo, in the viewer, listener, or reader. In other words, postmodernism “occurs” when the aesthetic value of experimentation is itself (experimentally) called into question, which is what Lyotard means when he “preposterously” says that modernism had to be postmodernist in order to stay modernist—“A work can become modern only if it is first postmodern. Postmodernism thus understood is not modernism at its end but in the nascent state, and this state is constant” (1979/1984: 79).
But let’s linger with the question of modernism’s nascence. Habermas is correct to say that the modernist “movement” in painting and literature began “in the mid-nineteenth century” when “color, lines, sounds and movement ceased to serve primarily the cause of representation,” when “the media of expression and the techniques of production became the aesthetic object” (1980/2001: 1755). But how do we account for this modernist non serviam to the cause of representation? In a sense, Western painting has always served that cause in some form or another, but what Habermas means is that the self-defining gesture of modernist art is to abjure verisimilitude, to decline “realistic” representation.
In terms of the history of painting in the West, we can say that the cause of representational realism was first taken up by Giotto in the fourteenth century, with his development of perspective, the specific technique that gives the viewer of a painting the “realistic” impression of three dimensions, of depth within the scene depicted in the frame. Before Giotto, European painting, however otherwise vivid, was noticeably “flat” in a number of senses—spatially two-dimensional (and so somewhat “cartoonish” from our perspective); temporally anachronistic (for the painter wasn’t expected to accurately “frame” any single moment of historical time); facially expressionless and thematically “monotonotheistic” (for the painter’s job was not to capture human emotion but to depict identifiable allegorical figures from Christian mythology). For some time after Giotto, Western art may have remained religiously themed, but it became ever more realistically framed. And European painting continued in its servitude to verisimilitude, obeying the rules of perspective and serving the cause of representation, even as it dropped religious content and joined the party trying to justify the human possession of treasures formerly squandered on heaven.
Painting continued to adhere to the rules of spatio-temporal realism, that is, until it entered the age of mechanical reproduction and confronted the new reality of the camera, a little mechanical invention capable of serving “the cause of representation” much more faithfully and meticulously than painting ever could. Painting, from then on, in order to serve the cause not of verisimilitude but of painting, had to abandon realistic representation, had to perform aesthetic feats of which the camera would be incapable; painting had to distinguish itself from mechanical photography by turning its own autonomy, its own specifically painterly “techniques of production,” into the very content of its self-presentation. What the modern “abstract” painting conveys to its viewer is not the artist’s power to serve up a slice of real life, but rather the essential “painterliness” of painting itself. And one of the first steps in establishing painting’s autonomy by breaking the rules of pictorial realism was the abolition of perspective (as in Gauguin), followed in short order by the conspicuous foregrounding of the brushstroke (Van Gogh, Cézanne), the flattening out of multiple perspectives effected by the cubists (Picasso and Braque), and finally the jettisoning of even minimally mimetic content (the pristine geometries of Mondrian, the color fields of Rothko, the pure action paintings of Pollock, and so on).
All of these painterly breaks with realism (and breaks with the immediately preceding breaks with realism) were precipitated by modernism’s flight from photography.22 It isn’t that modernism dismissed photography or cinema as art-forms in their own right; rather, modernist painting staked its autonomy as an art-form on its critical distance from “the cause of representation” as served by these new media. Given this steady rejection, however, it should be relatively easy to see what’s postmodern in the painterly embrace of photography represented by the “photorealism” of Chuck Close or some of the work of Gerhard Richter—in a sense, both painters paint their rejection of painting’s rejection of the photographic image. If, moreover, the “essential virtue” of high modernism was its “staying power” against the “spreading ooze of Mass Culture” (Macdonald 1957/1998: 35)—against advertising jingles, standardized Hollywood schlock, pulp fiction, kitsch, porn, comics, television, rock ’n’ roll, and so on—then it’s relatively easy to see what’s postmodern in “pop art,” in Andy Warhol’s promiscuously lithographed Campbell’s Soup cans, Elvis Presleys, and Marilyn Monroes, the replicated comic book panels of Roy Lichtenstein, or the porn-inspired statuary of Jeff Koons.
But the “relative ease” with which postmodern art can be seen, consumed, or “used” is, for some, the very heart of its problem. Jameson, again, writes that postmodernism’s effacement of the boundary between high and mass culture “is perhaps the most distressing development of all from an academic standpoint, which has traditionally had a vested interest in preserving a realm of high or elite culture . . . and in transmitting difficult and complex skills of reading, listening and seeing to its initiates” (1988/2007: 1956). And yet, from a radically different academic standpoint—that of the branch of contemporary critical inquiry known as cultural studies—it’s a mistake of the highest order to think that “difficult and complex skills of reading, listening and seeing” aren’t needed to negotiate with mass and/or popular culture or that the consumers of such culture are merely manipulated dupes who don’t know how to read, listen, or see. A decidedly postmodern academic phenomenon, cultural studies takes its cues rather indiscriminately from all manner of Marxist social theory (Frankfurtean, Birminghamian, Althusserian, Gramscian); from feminism and gender studies; from Derridean speculation on difference and Foucauldian analytics of power; and, particularly, from the early semiological acrobatics of Roland Barthes, who demonstrated back in his 1957 text Mythologies that “difficult and complex skills of reading” could be quite productively lavished on such items of contemporary French popular culture as professional wrestling, striptease, Citroëns, and soap-powders.
But here we can let Barthes be our bridge to the question of postcolonial theory as an “anti-Western” extension of European poststructuralism and postmodernism.23 For a major political reality informing Barthes’s writing in the 1950s is the French colonial presence in Indochina and Algeria. Indeed, in White Mythologies, Robert Young argues “that the historical roots of poststructuralism are to be found not in the crisis of European culture associated with the student revolts of 1968, but in the Algerian struggle against colonialism ten years earlier” (cited in Gikandi 2004: 99). So it isn’t exactly irrelevant that one of Barthes’s more dazzling semiotic performances in Mythologies involves deciphering the cover-image of a Paris-Match magazine showing a “Negro in a French uniform . . . saluting, with his eyes uplifted, probably fixed on the tricolor” (1957/1972: 116). The cover is of course operating “mythologically,” in Barthes’s sense, attempting to “turn history into nature” by imposing a “depoliticized” image of social reality upon the very reality of the social. And Barthes says that he sees “very well” what this mythic cover attempts to “depoliticizingly” signify:
that France is a great Empire, that all her sons, without any colour discrimination, faithfully serve under her flag, and that there is no better answer to the detractors of an alleged colonialism than the zeal shown by this Negro in serving his so-called oppressors. (1957/1972: 116)
Barthes understands full well that French colonialism is more than simply “alleged,” that French imperialism isn’t all that great and that the “so-called oppressors” are so called for excellent empirical reasons. He understands that Anglo-European colonialism and imperialism are real social structures, the actual socioeconomic sources of “the steady immiseration of the large majority of the world’s population” (Lazarus 2004b: 27). But later on in his performance, when Barthes insists that what this naturalizing image of the saluting African constitutively occludes is nothing but “the contingent, historical, in one word: fabricated, quality of colonialism” (1957/1972: 143), that “one word: fabricated” turns out to be a fighting word, adumbrating postcolonial studies as the site of some particularly difficult and complex struggles over the proliferation of meaning in a “Third World” that, like any other world, must be made to mean. As we’ll eventually see, the agon of postcolonial studies pits “Third World” intellectuals who “always historicize” from a critical position of epistemological realism against those purportedly less political theorists who take a woefully “cultural” approach to the fabrications of empire—and who thus, according to their realist adversaries, end up “endorsing the cultural claims of transnational capital itself” (Ahmad 1996: 285).
III. “something strange to me, although it is at the very heart of me”
We’ll begin with two quite different theoretical writers, both heavily influenced by Foucault, who describe their respective objects of inquiry in such remarkably similar terms that “the celebrated Foucauldian nexus between knowledge and power becomes clear in the arenas of both colonial relations and gender relations” (Bahri 2004: 205). The one, Edward Said, describes his object of analysis as “a logic governed not simply by empirical reality but by a battery of desires, repressions, investments and projections” (1978: 8); the other, Eve Sedgwick, describes hers as “an array of acts, expectations, narratives, pleasures, identity-formations, and knowledges” (1990: 29). Said is of course describing Orientalism, “the imaginative examination of things Oriental . . . based more or less exclusively upon a sovereign Western consciousness out of whose unchallenged centrality an Oriental world emerged” (1978: 8), while Sedgwick is examining “something legitimately called sex or sexuality,” something that “is all over the experiential and conceptual map” and which represents “the full spectrum of positions between the most intimate and the most social, the most predetermined and the most aleatory, the most physically rooted and the most symbolically infused, the most innate and the most learned, the most autonomous and the most relational traits of being” (1990: 29).
Both theorists, then, address a certain “something” that is not simply empirically real but is so constitutively “constructed” or “fabricated” as to require constant and complex mapping and remapping. Said assumes “that the Orient is not an inert fact of nature. It is not merely there, just as the Occident itself is not just there either. We must take seriously Vico’s great observation that men make their own history, that what they can know is what they have made, and extend it to geography” (1978: 4–5). Sedgwick follows Freud and Foucault in extending Vico’s great observation to human sexuality; Sedgwick assumes that “the distinctly sexual nature of human sexuality has to do precisely with its excess over or potential difference from the bare choreographies of procreation,” and she stresses that “the definitional narrowing-down in this century of sexuality as a whole to a binarized calculus of homo- or heterosexuality is a weighty fact but an entirely historical one” (1990: 29, 31). Following Said and Sedgwick, then, we can note that neither Orientalism nor sexuality, neither geographical nor sexual “orientation,” is merely empirical, natural, or inevitable; all of our orientations are inextricably caught up in the graphic, the rhetorical, the fabricated, “the constructed, the variable, the representational” (Sedgwick 1990: 29); all are inscribed in the sociohistorical nexus of asymmetrical “knowledge and power” relations.
As Deepika Bahri points out, “the power of representation as an ideological tool” is such that “those with the power to represent and describe others clearly control how those others will be seen” (2004: 205). Hence, for Said, Orientalism is a powerful representational/ideological tool.
Orientalism can be discussed and analyzed as the corporate institution for dealing with the Orient—dealing with it by making statements about it, authorizing views of it, describing it, by teaching it, settling it, ruling over it: in short Orientalism is a Western style for dominating, restructuring and having authority over the Orient. I have found it useful here to employ Michel Foucault’s notion of a discourse . . . to identify Orientalism. My contention is that without examining Orientalism as a discourse one cannot possibly understand the enormously systematic discipline by which European culture was able to manage—and even produce—the Orient politically, sociologically, militarily, ideologically, scientifically, and imaginatively during the post-Enlightenment period. (1978: 3)
Sedgwick similarly employs Foucault’s notions to consider “sex/sexuality” discursively, as a corporate institution for representing and dealing with bodies and pleasures both “normal” and “perverse,” both within and beyond the “bare choreographies of procreation.” For Foucault, human sexuality is not a timeless natural/instinctual force that can be repressed but a historico-discursive deployment that can be systematically managed or even produced. And for Foucault, sex has been produced, particularly “during the post-Enlightenment period,” as “an especially dense transfer point for relations of power” (1976/1990). For Foucault and Sedgwick, then, sexual identities or orientations are always social representations rather than merely empirical facts—like the Occident and the Orient, “heterosexuality” and “homosexuality” are no more “merely there” than is the “binarized calculus” that produces and reduces them.
Now, the point of this mutual articulation of the postcolonial critic Said with the queer theorist Sedgwick is that, precisely in being transfer points for relations of power, colonial and sexual relations are also particularly dense transfer points for each other, and that all these power transfers can be facilitated and contested, analyzed and discussed, in the cultural and political arenas of representation/fabrication. Arguably, all the dominant fictions to date have attempted to ensure that Orientalism—as “a distribution of geopolitical awareness” and an “elaboration” of a “basic geographical distinction” in which “the world is made up of two unequal halves, Orient and Occident” (Said 1978: 12)—is wedded to institutional heterosexism and misogyny. In other words, we can observe what Donna Haraway calls “the close ties of sexuality and instrumentality” (1985/2008: 340) in the ongoing work of culturally constructing both colonial and sexual relations. And we have only to glance at a few scenes from classical Hollywood cinema to see with what success the Occident fabricates itself in and as the sovereign hetero-masculine “hero” and constructs the Oriental “other” as the passively feminized, the criminally abject, and/or the treacherously queer.
Consider, for example, the entrance of dandy criminal Joel Cairo (Peter Lorre)—announced by strains of Levantine “snake-charming” music simultaneously whimsical and sinister—into the office of detective Sam Spade (Humphrey Bogart) in John Huston’s 1941 film The Maltese Falcon. Cairo’s calling card reeks of gardenia, while his cane-handling antics none too subtly suggest oral/anal receptivity to penetration. He carries multiple “false” passports, and hence has no single “true” national origin or identity, but there’s no mistaking the various global and sexual “regions” we should suppose Mr Cairo to represent. Nor should we doubt that the violence our Western hero and straight arrow Sam Spade inflicts against those “regions”—Middle Eastern but fully nether—is justified, if not desired: when Cairo angrily objects to being struck by Spade, the detective coolly responds, “when you’re slapped, you’ll take it and like it.”
Or consider Howard Hawks’ 1946 film The Big Sleep. Here, detective Philip Marlowe (Bogart again) finds himself having to snoop into a rare bookstore that is actually a front for a criminal ring of blackmailing pornographers. To prepare for this reconnoiter, Marlowe conducts research in the Hollywood Public Library, arming himself with knowledge about a “Chevalier Audubon 1840” and a Ben Hur 1860 “with an erratum on page one-sixteen.” When the young bespectacled female librarian tells Marlowe that he doesn’t “look like a man who would be interested in first editions,” Marlowe asserts his hard-boiled private dick-iness with the retort that he also “collects blondes in bottles.” But when he’s just about to enter his target, Geiger’s Rare Books, Marlowe realizes that to play his part convincingly he really should look rare and bookish himself, so he pushes up the brim of his hat and pulls his sunglasses down his nose and begins behaving in the mincing, effeminate, and bitchy way that codes him as queer as per the standard performative conventions of 1940s Hollywood film.
It’s an apt disguise, for it turns out that Geiger is not exactly a “real man” himself but rather a homosexual with a “shadow” (a young male consort and “gunsel” named Carol), not to mention a glass eye and a “Charlie Chan moustache.” But that last detail is only one of the very many that serve to “Orientalize” Geiger and his enterprises, for his bookstore is positively saturated with Asian artifacts and decorations. He’s got Buddhas out the wazoo, so to speak, and all these Oriental motifs are brought into even stronger relief when Marlowe pulls out of Geiger’s and trots across the street to the opposing and conspicuously Occidentalized “Acme Bookstore.” Here, our hero drops his queer act, straightforwardly reveals himself as “a private dick on a case,” and so gets some straight information and (we infer) some straight sex from the knowledgeable and accommodating proprietress (Dorothy Malone)—and all with a presidential seal of approval, for there’s a legitimizing portrait of, not Buddha, but FDR himself, looking down on these upstanding heterosexual citizens from the Acme Bookstore’s wall.
In the counterfeit presentment of two bookstores, then, we behold a spectacularly “binarized calculus,” an active distribution of both geopolitical and eroticized awareness—on the Acme side of the street, we find a stronghold of knowledge and power; we find truth, justice, and the American way (of having sex); while on the Orientalized side, we find only criminal deception, perversion, artifice, and ignorance (the “girl in Geiger’s bookstore” doesn’t know anything about books, while glass-eyed Geiger reportedly “affects a knowledge of antiques and hasn’t any”). If “Acme” is the pinnacle, the very top, then Geiger, like Cairo, is clearly a bottom. Thus does Hollywood at its heights put the ass in Asiatic, insert itself and its powers of representation into every open orifice in the Oriental market, a colonizing gesture if there ever was one.24
If, however, you were to stop me here with the suggestion that I “get real”; if you were to insist upon firmly distinguishing cultural or merely representational colonization from “the real thing”; if you were to point out that no actual Asians were harmed in the making of these films, whereas untold numbers are steadily immiserated by the unfabricated onslaughts of capitalism, then you would be missing Bahri’s point about “the power of representation as an ideological tool.” But you might well complain about my using these or any other cinematic examples anyway, citing them as sorry signs of the misbegotten “culturalist emphasis in postcolonial studies” (Lazarus 2004a: 9). You might dislike the way I’ve chosen to frame this discussion, taking my insistence upon serving up Said and Sedgwick side by side as symptomatic of the standard bourgeois Western male intellectual’s incapacity to think of “the Orient” or “the Other” except in the “exotic” terms of sex, or the sexy terms of culture. After all, isn’t it just like a postmodernist/cultural studies/queer theory type to revert, in what should be a serious discussion of postcolonialism, to the relative safety of campy close readings of Humphrey Bogart films to the exclusion of any consideration of history, social context, political economy, or “the international division of labor” (Bahri 2004: 201)? And isn’t it all too predictably Eurocentric to keep employing the Frenchman Foucault to critically limn Orientalism when that perpetrator of non-Marxist historicism might very well have been not only a “young conservative,” as Habermas calls him, but even a “new Orientalist,” as per the analysis of Ian Almond (2007)?
These questions stem from the serious reservations certain Marxist critics hold about some of the most prominent postcolonial theorists, who are perceived as being overly indentured to the poststructuralist/postmodernist idea “that language (in the broad sense) is not only world-disclosing but also world-constituting” (Lazarus 2004a: 11). Said himself is even a bit suspect for ever having employed the discursive theories of Michel “I have never been a Marxist” Foucault. But the main culprits here seem to be Gayatri Spivak and Homi Bhabha—Spivak, the translator of Derrida whose difficult representations of the unrepresentability of subaltern speech “come close to fetishizing difference under the rubric of incommensurability” (Lazarus 2004a: 10), and Bhabha, whose own “postcolonial perspective resists the attempt at holistic forms of social explanation” (Bhabha 1994: 173)—a resistance considered by Marxists to be “constitutively anti-Marxist” (Lazarus 2004a: 4)—and whose dense ruminations on hybridity and liminality are thus, according to his adversaries, really only consumerist celebrations complicit with the global offensives of late capitalism.25 Aijaz Ahmad writes that
the entire logic of the kind of cultural ‘hybridity’ that Bhabha celebrates presumes the intermingling of Europe and non-Europe in a context already determined by advanced capital, in the aftermath of colonialism . . . The underlying logic of this celebratory mode is that of the limitless freedom of a globalized marketplace that pretends that all consumers are equally resourceful and in which all cultures are equally available for consumption, in any combination that the consumer desires . . . This playful ‘hybridity’ conceals the fact that commodified cultures are equal only to the extent of their commodification. At the deepest level, however, the stripping of all cultures of their historicity and density . . . produces not a universal equality of all cultures but the unified culture of a Late Imperial marketplace that subordinates cultures, consumers and critics alike to a form of untethering and moral loneliness that wallows in the depthlessness and whimsicality of postmodernism—the cultural logic of Late Capitalism, in Jameson’s superb phrase. (1996: 290)
For Marxists like Ahmad, however, the main problem with the hybrid intermingling of poststructuralism, postmodernism, and postcolonialism is that this theoretical mash-up seems to demolish the very possibility of intellectual critique in the sense that Marxism inherits from the Enlightenment tradition. Faithfully representing that tradition, Neil Lazarus writes that “our methodological assumption would be that it is always in principle (and indeed in practice) possible to stand outside any given problematic in order to subject its claims to scrutiny. This, of course, is the classical notion of critique as encountered in Immanuel Kant and exemplified most significantly for radical scholarship in Karl Marx’s various critiques” (2004a: 12). Also privileging radical scholarly exteriority, Ahmad critiques the following formulation from Spivak’s Outside in the Teaching Machine—“This impossible ‘no’ to a structure which one critiques, yet inhabits intimately, is the deconstructive philosophical position, and the everyday here and now of ‘postcoloniality’ is a case of it” (1993: 281)—by describing Spivak’s variance from Said, for whom “the line of demarcation between the so-called colonial and postcolonial intellectuals was that the ‘colonial’ ones spoke from positions imbibed within metropolitan culture while ‘postcolonial’ ones spoke from outside those positions” (1996: 277, 278). Now, since “the deconstructive philosophical position” that Spivak promotes does, in principle and in practice, question all lines of demarcation and all resulting positions or dispositions, deconstruction and the general “consent to theoretical postmodernity” (Ahmad 1996: 283) would indeed seem to disturb, if not destroy, the Enlightenment ideal of a pure critical exteriority, the traditional scholarly ideal of speaking “truth”—even “truth to power”—from some absolutely objective outside.
But do deconstruction and postmodernism in their exceedingly Nietzschean inheritance truly throw the kill switch on “critique” altogether? Must “critique” always establish its Enlightenment bona fides, its pure exteriority to its problematic, to count as having any resistant or transformative value, any potential for generating any emancipatory effects whatsoever? Can scholars not attempt to critique particular structures that they could never help but intimately inhabit? Is there nothing but untethered moral loneliness to be gained from an “extimate” critique of (but still in) postmodern indetermanence? The postmodern/poststructuralist answer to these questions is that there’s no compelling reason, after all, why the lack of pure exteriority, the “interpretive experience” of liminal hybridity, or the actually lived “coincidence of utter alterity with absolute proximity” (Žižek 1999/2008: 368) should stop anyone from addressing a problematic, subjecting competing truth-claims to scrutiny, or exposing a particular logic as being “governed not simply by empirical reality but by a battery of desires, repressions, investments and projections” (Said 1978: 8).
But perhaps, in a spirit of postmodern modesty, a responsible scholar in and of “the everyday here and now” really should stop short of imagining that he or she addresses any problematic from some Archimedean point purely exterior to it, much less that the “subject position” or cognitive encampment from which one launches one’s critique is itself anything other than an all-too-human battery of desires, repressions, investments and projections, “something strange to me, although it is at the very heart of me” (Lacan 1986/1992: 71). In other words, in the interests of “getting real,” of being responsive to (neither completely outside of nor utterly complicit with) our times, one might cease dreaming that one can finally hoist one’s critical fabrications up the long flagpole of transcendence and into the immaculate ether of some purely exterior “truth.” One might call time-out (or even game-over) on this metaphysical dream, this fantasy of truly seizing reality, without thereby sacrificing theoretical militancy, without admitting surrender, capitulation, or defeat. Perhaps deconstruction as radically “extimate” critique—beginning “from a refusal of the authority or determining power of every ‘is’ ” (Lucy 2004: 11), committed to the cause of “dislocating, displacing, disarticulating, disjoining, putting ‘out of joint’ the authority of [any] ‘is’ ” (Derrida 1995: 25)—is “constitutively anti-Marxist” or an exercise in “apocalyptic anti-Marxism,” to repeat the words of Lazarus and Ahmad. But such a critique could never hope to remain proliferatively deconstructive while at the same time totally opposing the emancipatory project of modernity or absolutely dispossessing itself of what Derrida calls the “spiritual inheritance” of Marx.26
Such, one might say, would be the “anti-metanarrative” lesson of deconstruction, the “extra-moral” moral of the postmodern story. And “hence”—as Foucault did say at the end of an essay called “Truth and Power”—“the [ongoing] importance of Nietzsche” (1977/2000: 133).
Coming to Terms
Critical Keywords encountered in Lesson Nine:
Difference, deconstruction, binary opposition, speech/writing, reception theory/reader response, project of modernity, metanarrative, simulacrum, parody/pastiche, mass/popular culture, Frankfurt/Birmingham Schools, globalization, cultural studies, Orientalism, subaltern, hybridity, liminality
Notes
1 Because Derrida can be playful when writing about play, the play of his writing is frequently misinterpreted. Niall Lucy writes that “When Derrida writes about ‘play’, he doesn’t mean ‘freeplay’ or wanton ‘playfulness’. He doesn’t mean, ‘playing around with—for the heck of it.’ ” Rather, writes Lucy, Derrida “makes it clear that ‘play’ means something like ‘give’ or ‘tolerance’ . . . which works against ideas of self-sufficiency or absolute completion” (2004: 95). But Lucy also contends that some “US literary critics” offer wrong-headed readings of Derrida “based on a misinterpretation of Derrida’s ‘play’ as ‘freeplay’ or a kind of quasi-Nietzschean ‘creativity’ ” (2004: 94–5). Now, by associating Derrida’s “play” with Nietzsche’s “art,” as I have above, I would seem to be guilty of just such a misreading as Lucy describes; I insist upon this overly “free association” anyway, mainly because, despite Lucy’s correction, I remain persuaded that Derrida’s “play” would not have been possible, or givable, or tolerable, without Nietzsche’s “quasi-Nietzschean creativity,” or at least without what Derrida himself calls the “Nietzschean affirmation . . . the joyous affirmation of a world of signs without fault, without truth, and without origin which is offered to an active interpretation” (1966/1978: 292).
2 “Perhaps unhelpfully,” write Malpas and Wake, “Derrida claims . . . that différance is ‘literally neither a word nor a concept’ and that it ‘has neither existence nor essence’. What is clear, however, is that différance derives from the Latin verb differre and the French différer, which in English have given rise to two distinct verbs: to defer and to differ. Différance incorporates both of these meanings and thus serves to emphasize two key Derridean concerns: with absence rather than presence (full meaning is never present, but is instead constantly deferred because of the différance characteristic of language); and with difference rather than identity . . . In describing différance as the ‘systematic play of differences’ which is built into language . . . Derrida carries Saussure’s theory of language as a system of differences to its most extreme conclusion” (2006: 173). Niall Lucy adds that “the ongoing movement of différance disturbs the idea of difference meaning ‘a fixed difference’ . . . [T]he disturbance caused by différance [puts] the entire history of metaphysics . . . at risk . . . because difference . . . dislodges the security or self-sufficiency of concepts like truth, presence and identity” (2004: 26).
3 Niall Lucy writes that while deconstruction “is impossibly difficult to define, the impossibility has less to do with the adoption of a position or the assertion of a choice on deconstruction’s part than with the impossibility of every ‘is’ as such. Deconstruction begins, as it were, from a refusal of the authority or determining power of every ‘is’, or simply from a refusal of authority in general . . . Or, as Derrida puts it in one of many approximations of a definition of deconstruction, to say that deconstruction consists of anything would be to say it consists of ‘deconstructing, dislocating, displacing, disarticulating, disjoining, putting “out of joint” the authority of the “is” ’ [Derrida 1995: 25]” (2004: 11–12). To “deconstruct” is thus “to open or unsettle the seeming imperviousness of a concept of essence or identity in general, concerning fixed ideas of politics, being, truth, and so on” (Lucy 2004: 12). As for the binary oppositions that deconstruction tends to have its way with, note how each of the privileged “master tropes” on our metaphysical list tends to stand over and against its “other” in a hierarchical relationship of dominance: reason/madness, order/chaos, purpose/chance, presence/absence, identity/difference, being/nothingness, god/devil, man/woman, center/margin, truth/error, etc. Derrida argues that Western metaphysics has always depended on maintaining these and other hierarchical binaries. He is principally concerned with the binary pair speech/writing, with the way Western metaphysics since Plato has privileged the spoken word, which seems to guarantee the speaker’s living presence both to himself and his auditors, over the written trace, which seems to imply absence, spacing, difference, and death. For Derrida, however, speech is always already infected by every bad thing that writing seems to represent (including the “graphic violence” of the representational itself). 
Derrida thus reads the metaphysical privileging of speech not as a neutral preference but as a defensive reaction, a secondary effect of this basic fear of writing and of the absence, difference, and death that the written trace seems to represent.
4 I emphasize the phrase reasonable kingdom here to pave the way for the following “regicidal” passage from Derrida: “Différance is . . . not a present being, however excellent, unique, principal, or transcendent. It governs nothing, reigns over nothing, and nowhere exercises any authority . . . Not only is there no kingdom of différance, but différance instigates the subversion of every kingdom. Which makes it obviously threatening and infallibly dreaded by everything within us that desires a kingdom, the past or future presence of a kingdom” (1967/1982: 21–2). The “nostalgic” part of us that “desires a kingdom” is, arguably, the part that dreads différance, that fears the proliferation of meaning, and so wants above all else the stability of fixed signification, a.k.a. “truth.” The “other” part of us is drawn toward what Derrida calls the “Nietzschean affirmation” of “active interpretation” (1966/1978: 292). Derrida writes that this “active interpretation . . . substitutes incessant deciphering for the unveiling of truth as a presentation of the thing itself in its presence, etc.” What results from this incessant deciphering are “figures without truth, or at least a system of figures that is not dominated by the value of truth . . . Thus, différance is the name we might give to the ‘active,’ moving discord of different forces, and of differences between forces, that Nietzsche sets up against the entire system of metaphysical grammar, wherever that system governs culture, philosophy, and science” (1967/1982: 18).
5 “Deconstruction,” writes Derrida, “is on the side of the yes, of the affirmation of life” (cited in Benjamin 2006: 81).
6 Rather than seeming to support an unproductive “us” versus “them” interpretation of interpretation, I hope to have suggested here that the “anxious” and the “active” modes of interpretation can operate simultaneously within the same subject’s “interpretive experience.” In so suggesting, I am echoing not only points made in note 4 above but also Nietzsche’s argument in Beyond Good and Evil that “master moralities and slave moralities” aren’t necessarily parceled out to “masters” and “slaves” respectively, but can be internally juxtaposed in one individual subject’s psyche—“even in the same person, within one single breast” (1886/2006: 356).
7 Speaking of poetical “centers,” one might say that the line from Yeats’s “The Second Coming” to which I’ve been alluding—“Things fall apart; the center cannot hold” (1920/1983: 187)—rests on the assumption that the very coherence of things depends upon the center’s absolutely holding. Yeats assumes that for most of his readers it will just make sense—it will be sense itself, as Derrida might say—that if the center cannot hold, things will fall apart. In affirming the noncenter and the absence of the transcendental signified, however, Derrida is no slouching beast; he is not trying to make things fall apart or let all the falcons fly away, not letting loose mere anarchy upon the world or drowning the ceremony of innocence in a blood-dimmed tide. Rather, carrying “Saussure’s theory of language as a system of differences to its most extreme conclusion” (Malpas and Wake 2006: 173), Derrida simply proposes an extremely different model of coherence, a radically different way for things to hold together, than that presupposed by “the underlying ideal of Western culture” and by the centered structure of Yeats’s poem.
8 Unlike the Saussurean “sign”—which presupposes a “unity” of signifier and signified and the maintenance of an “active–passive” binary relation between those two components—Derrida’s trace “functions to unsettle the sign’s metaphysical determination” (Lucy 2004: 144). “Although referred to in the affirmative, the trace is actually a lack, the presence of an absence or the absence of presence, the antithesis of the sign” (Malpas and Wake 2006: 261). The supplement is not unrelated: “In ordinary language, a supplement is something added to an already complete whole. The possibility of something being added, however, reveals a lack in the original it is meant to complete . . . Derrida extends the contradictory logic of the word ‘supplement’ in order to interrogate the conventional Western idea that speech, as the original form of language, is merely represented by writing. Derrida argues that the structure of writing is not secondary to, but inextricable from, that of speech itself. This challenges the supposed ‘originality’ of speech in relation to writing” (Malpas and Wake 2006: 258).
9 Elsewhere, Barthes writes that overthrowing the myth of the Author “requires that one try to abolish . . . the distance between writing and reading . . . by joining them in a single signifying practice.” He compares this joining to a moment in “the history of music”—before the age of mechanical reproduction and hence of music’s passive consumption—“when ‘playing’ and ‘listening’ formed a scarcely differentiated activity” because “practicing amateurs” (1971/1977: 162) had to be able to read and play the music on an instrument to be able to listen to it. As for reception theory and reader-response criticism, these are “concerned with both the aesthetic and the historical aspects of reading, i.e., the ways in which readers use texts for pleasure, and how readings alter and shift through history” (Malpas and Wake 2006: 245).
10 While there’s no room here for a full explication of the tension between Foucault’s analytics of power and Althusser’s theory of ideology, suffice it to say that Foucault associates Althusser with just the sort of “Freudian-structuralist-Marxism” from which he wants to free himself: “I have,” he proclaims, “never been a Freudian, I have never been a Marxist, and I have never been a structuralist” (1983/1998: 437).
11 Though Foucault necessarily speaks of “the truth” in a phrase like “the truth is quite the contrary,” his thinking about truth remains Nietzschean, which is to say that his thinking remains quite contrary to the idea that the truth exists, that there can ever be one truth for good and for all. For as he insists in “An Aesthetics of Existence,” “I believe too much in truth not to suppose that there are different truths and different ways of speaking the truth” (1984/1988: 51).
12 In his 1960 Critique of Dialectical Reason, Sartre stated that his philosophical goal was “to establish that there is one human history, with one truth and one intelligibility—not by considering the material content of this history, but by demonstrating that a practical multiplicity, whatever it may be, must unceasingly totalize itself through interiorizing its multiplicity at all levels” (1960/1976: 69).
13 We can avoid undue befuddlement about the word “postmodern” by not mistaking “the modern” for the contemporary, the present day, or even the twentieth century. Western culture has been in “the modern” for quite a while. Shakespeare, for example, is an “early modern” writer.
14 In 1784, Immanuel Kant defined Enlightenment as “man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own understanding without the guidance of another. This immaturity is self-incurred if its cause is not lack of understanding, but lack of resolution and courage to use it without the guidance of another. The motto of enlightenment is therefore: Sapere aude [from Horace: ‘dare to be wise’]. Have the courage to use your own understanding” (1784/1996: 51). When Kant goes on to say that “One age cannot enter into an alliance on oath to put the next age in a position where it would be impossible for it to extend and correct its knowledge . . . or to make any progress whatsoever in enlightenment [for] this would be a crime against human nature, whose original destiny lies precisely in such progress” (1784/1996: 54), the phrase “original destiny” seems a rather pointed jab against the doctrine of original sin and against anyone still immature enough to fall for it.
15 Compare Marx—“The criticism of religion disillusions man so that he will think, act, and fashion his reality as a man who has lost his illusions and regained his reason; so that he will revolve about himself as his own true sun. Religion is only the illusory sun about which man revolves so long as he does not revolve about himself” (1844/1978: 54).
16 See Klaus Theweleit (1987). Or read Freud, who in 1930 observed that it was not “an unaccountable chance that the dream of German world-dominion called for anti-semitism as its complement; and it is intelligible that the attempt to establish a new, communist civilization in Russia should find its psychological support in the persecution of the bourgeois. One only wonders, with concern, what the Soviets will do after they have wiped out their bourgeois” (1930/1989: 752).
17 Bellicosely inscribing “an ironic political myth faithful to feminism, socialism, and materialism,” Donna Haraway writes in her “Cyborg Manifesto” that postmodern feminists “do not need a totality in order to work well. The feminist dream of a common language, like all dreams for a perfectly true language, of perfectly faithful naming of experience, is a totalizing and imperialist one. In that sense, dialectics too is a dream language, longing to resolve contradiction” (1985/2008: 324, 342).
18 A simulacrum is a copy for which there is no original. The term is as old as Plato. But while for Plato, the simulacrum is an aberration, for Baudrillard, it’s the order and general rule of the day, for in postmodernity simulation “is the generation by models of a real without origin or reality: a hyperreal” (1983: 2).
19 In The Political Unconscious, where he designates “always historicizing” as “the imperative of all dialectical thought” (1981: 9), Jameson also writes that “to think dialectically is to invent a space from which to think . . . two identical yet antagonistic features together all at once . . . to identify [the] twin negative [or reactionary/ideological] and positive [or progressive/utopian] features of [any] given phenomenon” (1981: 224)—even, presumably, the phenomenon of global capitalism itself, for in this description of dialectical thinking Jameson is following and lauding Marx, who, in the Communist Manifesto, identified both the positive/progressive/utopian and the negative/ideological/reactionary aspects of the mercantile bourgeoisie’s ascent. Though he, of course, emphasizes the negative, Marx doesn’t fail to mention the positive. For example, railing against early capitalism’s already global/colonial offensives, Marx writes that “the bourgeoisie, by the rapid improvement of all instruments of production, by the immensely facilitated means of communication, draws all, even the most barbarian, nations into civilization . . . It compels all nations, on pain of extinction, to adopt the bourgeois mode of production; it compels them to introduce what it calls civilization into their midst, i.e., to become bourgeois themselves. In one word, it creates a world after its own image” (1888/1978: 477). But that Marx sees this compulsory creation as simultaneously negative/reactionary/ideological and positive/progressive/utopian is made clear in the very next passage, where Marx writes that “the bourgeoisie has subjected the country to the rule of the towns. It has created enormous cities, has greatly increased the urban population as compared with the rural, and has thus rescued a considerable part of the population from the idiocy of rural life” (1888/1978: 477, emphasis added). 
To think dialectically with Marx here is to see that Marx is simultaneously critiquing and endorsing this anti-idiotic rescue operation—to think dialectically is to hold on to the condemnation of all “the offensives of global capital”—including bourgeois imperialism/colonialism—while not losing sight of the fact that Marx actually does prefer civilization, even bourgeois civilization, to feudal barbarity, urbanity to idiocy, science to superstition, and so on; in other words, while he frequently expresses reverence for an earlier “artisanal” (as opposed to industrial) mode of production, Marx just isn’t all that nostalgic for “the feudal relations of property” that have been “burst asunder” or the “ancient and venerable prejudices” that have been “swept away” by the “colossal productive forces” unleashed by capitalism’s “uninterrupted disturbance of all social conditions” (1888/1978: 476). To think undialectically, on the other hand, is to imagine that “Marxism” always and everywhere equals an unequivocating knee-jerk “anti-capitalism”; to think undialectically is to think that if Marx were alive today, he would whole-heartedly endorse the preservation of certain contemporary superstitions, barbarisms, and idiocies on the grounds that the idiots in question are not just being idiots but heroically “resisting Western hegemony” and fighting back against “the offensives of global capital.”
20 In general, popular culture can be understood as culture that is actually produced by “the people” and which expresses their “authentic” desires. Mass culture, by contrast, is commodified stuff that is mass-produced for “the people” by the “culture industry,” which reifies and exploits their desires. We can distinguish mass from popular culture by considering the different attitudes that the Frankfurt and Birmingham Schools take toward them. The name “Frankfurt School” refers to the Institute for Social Research founded in Frankfurt in 1923. Key members include Walter Benjamin, Theodor Adorno, Max Horkheimer, Herbert Marcuse, and, in a second generation, Jürgen Habermas. In the Frankfurt School view, writes John Fiske, “the industrialization of culture and the development of the mass media had destroyed all traces of authentic popular or folk culture . . . The culture industries . . . were crucial in enabling capitalism to saturate people’s experiences and consciousness so thoroughly as to leave no space in which to experience a noncapitalist identity or consciousness”—the consciousness of being anything other than a consumer. “The culture industries, then, were the means by which capitalism could erase any possibility of opposition and thus social change . . . They commodified people by erasing their consciousness of all needs or desires except those that could be satisfied by commodities” (1995: 324). Cultural theorists in the Birmingham School tradition (associated with Birmingham University’s Centre for Contemporary Cultural Studies and with the work of Richard Hoggart, E. P. 
Thompson, Raymond Williams, and Stuart Hall) view the Frankfurt School’s “critical pessimism” as “ultimately elitist because it saw people as the helpless, passive victims of the system, and denied them any agency of their own.” The Birmingham “school of thought agrees with all the criticism of industrial capitalism” launched from Frankfurt “but disagrees with the claimed totality of their effectiveness.” The Birmingham tradition “rejects the assumption that the people have no resources of their own from which to derive their coping strategies, their resistances, and their own culture.” For Fiske, contemporary popular culture is unproductive but still creative—it “is typically bound up with the products and technologies of mass culture, but its creativity consists in its ways of using these products and technologies, not in producing them” (Fiske 1995: 325). For Jameson’s nuanced and dialectical reading of the high/mass culture divide, see his “Reification and Utopia in Mass Culture” (1979), in which he argues that any given cultural phenomenon, high or mass, negotiates social anxieties by simultaneously staging antagonistic (reifying vs. utopian) desires and by representing imaginary resolutions to real contradictions.
21 Globalization is “a term drawn from economics to refer to the dominant model of contemporary manufacture, consumption and political systems within capitalist societies. Rather than focusing upon the needs of a local or national market, the globalized approach considers the world or ‘global village’ as its end user. Because such an audience encompasses a wide range of peoples and values, globalized practices inevitably use models of ‘best fit’. Many times these values reflect a corporation’s Western origin, with the result that some critics accuse globalization of favouring Western interests and norms” (Malpas and Wake 2006: 195). As for “honor killings,” these “are widely reported in the Middle East and South Asia, but in recent years they have taken place in Italy, Sweden, Brazil, and Britain. According to Navi Pillay, the United Nations High Commissioner for Human Rights, there are 5,000 instances annually when women and girls are shot, stoned, burned, buried alive, strangled, smothered and knifed to death by fathers, brothers, sons, uncles, even mothers in the name of preserving family ‘honor’ ” (New York Times, 13 July 2010, A22). Given such figures, I, for one, have to confess, in full knowledge that I will not in certain corners of theory-world ever be forgiven, that I read, say, Aimé Césaire’s “searing” critique of colonialism—his discourse “about societies drained of their essence, cultures trampled underfoot, institutions undermined . . . religions smashed . . . [and] extraordinary possibilities wiped out” (1955/1972: 21) by colonialism—with somewhat less sympathy than I otherwise might. 
Given the figures on “honor killings” and other atrocities, I confess to wanting to play a self-consciously Western-devil’s advocate and ask about the extraordinary possibilities wiped out by these very societies, cultures, institutions, and religions themselves, to ask why certain traditionally vicious, lethal, and misogynist practices deserve not to be undermined, trampled, and smashed, even if the tramplers are the imperial forces of modernization, colonization, globalization, “Western interests and norms,” “hegemonic Western feminism,” and so on. I ask this unforgivable question, frankly, out of a profound fatigue with the resolutely undialectical sort of anticapitalist postcolonial theorist (see note 19 above) for whom to “always historicize” means always to blame only Western capitalism/imperialism/militarism—never indigenous patriarchal religions, never Hindu tradition, never Judeo-Christianity, never Islam—for retrograde misogynist and homophobic violence in and out of the so-called Third World. Sara Suleri, for example, in an article published in Critical Inquiry, describes some “murderous and even obscenely ludicrous” punishments administered against young women under so-called Hudood Ordinances in Pakistan in the 1980s and then provides the standard “historicizing” explanation—“It is not the terrors of Islam that have unleashed the Hudood Ordinances on Pakistan,” she concludes, “but more probably the US government’s economic and ideological support of [General Mohammad Zia-ul-Haq’s] military regime” (1992: 768). Just to be clear, though: I have no desire to absolve the US government from “probable” involvement in the unleashing of the idiotic ordinances in question; I’m only saying that I consider quite undialectical the claim that Islamic tradition had nothing to do with their issuance.
Or let’s say that I find Suleri and some others (like Chandra Mohanty, to whom we owe the phrase “hegemonic Western feminism” and whose critique of that phenomenon we’ll consider in the next chapter) guilty of what Slavoj Žižek calls “over-rapid historicization,” which “makes us blind to the real kernel which returns as the same through diverse historicizations/symbolizations” (1989: 50). In this case, the “real kernel” is global misogynist violence, which “returns as the same” in a number of seemingly diverse cultural contexts, and which “always historicizing” qua always blaming capitalism never seems to adequately explain or contain.
22 I rehearse here arguments about modern art first made by Clement Greenberg. See Clark (1982).
23 In Postcolonialism: An Historical Introduction, Robert Young presents postcolonial theory “as an extension of anticolonial movements in the ‘Third World,’ arguing that poststructuralism developed as an anti-Western strategy ‘directed against the hierarchal cultural and racial assumptions of European thought’ ” (Gikandi 2004: 99; Young 2001: 67).
24 For more on The Big Sleep in particular and Hollywood Orientalism in general, see White (1988).
25 The term subaltern “designates non-elite or subordinated social groups. It problematises humanist concepts of the sovereign, autonomous subject, since the subaltern has been overlooked in the accounts of and by the elite. The subaltern emerges not as a positive identity complete with a sovereign self-consciousness, but as the product of a network of differential, potentially contradictory identities” (Woods 2009: 49). Hybridity and liminality are terms Bhabha uses to “stress the mutual interdependence and construction of selfhood that exists between a colonizer and a colonized person.” For Bhabha, hybridity “refers to a ‘third space’ or ‘in-between space’ which emerges from a blend of two diverse cultures or traditions, like the colonial power and the colonized culture” (Woods 2009: 51), though Bhabha insists that hybridity “is not a third term that resolves the tension between two cultures” (Bhabha 1994: 113), that its purpose is to intervene “in the exercise of authority not merely to indicate the impossibility of identity but to represent the unpredictability of its presence” (1994: 114), and to terrorize authority “with the ruse of recognition, its mimicry, its mockery” (1994: 115). Liminality, writes Woods, “derives from the Latin word ‘limen’ meaning ‘threshold’, and like ‘hybridity’ refers to an ‘in-between space’ . . . of symbolic interaction, which is distinguished from the more definite notion of a ‘limit.’ ” Woods also comments that “Bhabha’s concept of hybridity fits the poststructuralist attack on totalities and essentialisms” (2009: 52); for Ahmad, however, this “fit” links Bhabha’s postcolonialism to an “apocalyptic anti-Marxism” that “playfully” abolishes “nationalism, collective historical subjects and revolutionary possibility as such” (Ahmad 1996: 283).
26 Derrida speaks complexly but affirmatively of this inheritance throughout Specters of Marx (1994). Marxists of various stripes speak complexly but not always affirmatively of Derrida in Ghostly Demarcations (Sprinker, ed., 1999).