2 The Age of the World Program

The Convergence of Technics and Media

SPIEGEL: And now what or who takes the place of philosophy? Heidegger: Cybernetics.

—Martin Heidegger, “‘Only a God Can Save Us Now': The Spiegel Interview” (108)

 

 

These puzzles where one is asked to separate rigid bodies are in a way like the “puzzle” of trying to undo a tangle, or more generally, trying to turn one knot into another without cutting the string.

—Alan Turing, “Solvable and Unsolvable Problems” (181)

 

It might be said that a number of the central contentions of this book—notably that the impact of technology on culture over the past half-century or so goes far beyond the actual “use” of technology, and that the best way to frame such changes is to retrace, in some way, the separation of technicity from knowledge and language in early Greek culture—are largely a continuation of Martin Heidegger's writings on technology. Indeed, one might say that these concerns became the dominant themes of Heidegger's work, beginning from at least as early as the 1938 essay “The Age of the World Picture” and continuing into the 1950s, wherein the “threat” posed by technology to a number of cultural processes becomes a central part of Heidegger's reflections on the possible futures of democracy, philosophy, and of “humankind” as a whole. Although critiques of Heidegger's work on these subjects have often positioned his conflation of material technologies with conceptual or interpretive frameworks—that there is something like a “technological worldview” that is in excess of our material use of technologies—as a weakness, we might today instead see this as its strength: Heidegger's insistence that “the essence of technology” transcends its “mere external forms” to radically reorganize capacities for human communication, the synchronicity or co-temporality of global culture, and humans' ability to destroy their natural environment can look quite prescient when viewed retrospectively from an era marked by the ubiquity of “information technology,” economic and cultural globalization, and an increasing dread over the ecological devastation caused by human industry (“What” 112).

There are, however, at least two other reasons we might be suspicious that Heidegger's critique of technology or technics, important as it may be as an anticipation of the contemporary challenges posed by these phenomena, might not be the most useful tool for responding to them today. Perhaps most obviously, the fundamental technologies of Heidegger's day are hardly the same as those that are presently at the center of contemporary culture. Although one might presume, simply by virtue of the speed of technological change since the mid-twentieth century, that any of his references to specific technologies might unavoidably seem quaint when read decades after the fact, what is far more problematic are the ways in which such changes have largely diverged from or even inverted the tendencies that were central to Heidegger's analysis. Thus, for instance, while it might have already seemed somewhat of a stretch to identify, as Heidegger does, the ability for people across Europe to simultaneously listen to a concert broadcast in London as exemplifying the ontological condition of mid-twentieth-century individuals—the “frenzy for nearness” [eine Tollheit auf Nähe] that drives one's interest in the broadcast evoking, for Heidegger, Dasein's fear of confronting its own finitude and own “time” by embracing the generic temporality of a worldwide community—it is, of course, very difficult to imagine how this would match up with the trend in contemporary media toward asynchronicity, niche audiences, and a general “individualization” of time and temporality on the level of (media) consumption (History 312). While it would be hard to argue the reverse, that the increasing asynchronicity of media now stages a kind of confrontation with one's own “finitude” or individual temporality—there are perhaps a number of subjective effects produced by watching an endless stream of YouTube videos or by answering work e-mails in the middle of the night, but these hardly seem to be likely ones—there are, as I will attempt to show below, a wide range of constitutive divergences between the kinds of media and their concomitant representational tendencies referenced by Heidegger and those that are common today. To anticipate a bit, though, I will suggest here that Heidegger's own reference to “the new fundamental science that is called cybernetics” as the culmination of a long historical sequence of both metaphysical thought and of technical evolution more or less suggests the same: the intervening decades have very much proven that cybernetics is perhaps better grasped as the start of a radically new stage in these processes, rather than the predictable “completion” of others (“End” 58).

Second, even if we discount criticism of Heidegger's conflation of material technologies and hermeneutic structures, we might still say that his approach to unpacking the “meaning” or “essence” of technology is limited by his more specific suturing of this investigation to the history of Western metaphysical reason as a whole. Recall that, for Heidegger, the interrogation of technology is very much bound up with a parallel, if broader, investigation into “historical ontology,” the various ways in which humans have attempted to ground their access to and understanding of phenomena of any type at the most basic level. As he tells us early in “Age of the World Picture,” “metaphysics grounds an age, in that through a specific interpretation of what is and through a specific comprehension of truth it gives to that age the basis upon which it is essentially formed” (115). Heidegger's suspicion of the various ways in which such a metaphysical grounding tends to identify a central force as the guarantor of truth is largely responsible for the ambivalent positioning of metaphysics in his work. On the one hand, metaphysics is shown to be much more than an esoteric concern of interest only to philosophers. Though the fairly elite discourse of philosophy remains the best resource for accessing the ontological source positioned as central in a given historical moment, it represents a much broader cultural understanding of modes of experience and self-reflection; as Iain Thomson writes, for Heidegger, “by giving shape to our historical understanding of ‘what is,' metaphysics determines the most basic presuppositions of what anything is, including ourselves” (8). On the other hand, while this would presumably give metaphysics a fairly obvious pride of place in any attempt to consider dominant modes of perceiving and thinking, Heidegger tends to read the history of what most would consider the entirety of the Western metaphysical tradition—from Plato to the philosophy of Heidegger's own contemporaries—as largely a series of compound errors.

Thus, while Plato is typically invoked as the genesis figure for metaphysical questioning, Heidegger instead reads Platonic thought as the start of a gradual forgetting of the original question “of Being,” the start of a long process by which one or another specific force (“ideas” or “ideal forms” for Plato, “God” for medieval philosophy) is offered to answer the most essential question of philosophical thought—“why are there beings at all instead of nothing?”—until the question itself is largely forgotten, or made a purely subjective one, insofar as human beings are positioned as grounding their own access to phenomenal reality. In this sense, then, while metaphysics is the name given to the long history of establishing the grounds of human perception and understanding, it is at the same time the history of growing further away from both the question of and our access to these categories as they were experienced by the pre-Socratics.

It is at the end of this kind of regressive sequence or history of decline that Heidegger places the “essence” of technology. By invoking the dual meanings of various terms designating the “end” or “conclusion” of something, Heidegger identifies the technoscientific domain of cybernetics as the final stage of this process: both the end of philosophy or metaphysics as the several-centuries-long attempt to answer the fundamental questions of human existence as well as the “fulfillment” or highest stage of the degraded path that this effort has taken from Plato onward. To quickly summarize a fairly complex argument that we will have the opportunity to unpack in more detail below, for Heidegger the “world picture” offered by technoscience shows an absolute disregard for questions of purpose, direction, and phenomenological or veridical grounding, and thus serves as the full or quotidian emergence of a certain kind of operational nihilism, or absolute subjectivity of experience, that was only implicit in the “anti-metaphysical” thinking of figures like Marx and Nietzsche. The essence of technology, then, appears to us as the final triumph of a purely “calculative” mode of thinking and social interaction over the kind of “meditative thinking” that Heidegger considers as historically constituting “man's essential nature” (Discourse 56). Technology thus becomes not only a symptom of the new phase in “human nature” but also something like a compensatory mechanism for the same cultural transformations it has helped midwife; technology, Heidegger tells us, is not only the “essence” that organizes contemporary experience as well as the physical matter of a wide variety of the world, but also “the organization of a lack,” a way of “keeping things moving,” in a world where purpose or a sense of destination or completion seem like antiquated concepts (“Overcoming” 107). Thus, to mention only one of Heidegger's examples, socioeconomic exchange continues apace, but only via a “circularity of consumption for the sake of consumption” rather than as part of some process of larger meaning or identity.

And it is these kinds of conclusions, I take it, that are likely very familiar to contemporary ears. Though they are certainly not typically phrased in the terms of Heidegger's fairly rarified language, it is only a small step from Heidegger's positing of contemporary technology as a kind of simultaneous “shallowing” and “speeding up” of contemporary culture to our default conceptions of the social impact of communicational media and information technology today. Indeed, it could be claimed that Heidegger's writings on technology are immensely influential if largely unacknowledged sources for a wide variety of both academic and popular critiques of the effects of technology on contemporary culture, from the contention that it encourages a kind of heedless and unreflective participation in some of the worst tendencies of contemporary capitalist consumerism, to the feeling that our connections to “reality” and each other are increasingly mediated through impersonal and informatic exchanges of various types.

There is much to be said for such critiques, and I do not necessarily disagree with many of them. It would be immensely difficult, for instance, to suggest that contemporary communication media and information technology encourage the kind of “slow” or meditative thinking prized by Heidegger (and, more broadly, by a certain kind of cultural humanism at large). What I am interested in, however, is troubling the frame of reference that makes such comparisons seem inevitable, as well as the not unrelated question of what that comparison, and other possible frames of reference, might mean for us today, particularly in the field that we might take as having the largest overlap with what Heidegger codes as philosophy—and the one in which he has had his most outsize posthumous impact: contemporary critical theory in the humanities and social sciences. While it may be true that “meditative thought” has been on the downswing over the last several decades—which, to be fair, would only be the continuation of a process that, in Heidegger's account, has been in something of a downward spiral since the fourth century BCE—it does not seem to me that the “calculative,” or similar descriptions of the inhuman or purely ends-driven logic of political economy, remains dominant today; indeed it would seem that if such a regime ever did exist, it would be more accurate to locate it in the mid-century context in which Heidegger began his critique of technology, that moment in which it at least appeared that the rolling yield of the assembly line and crossed columns of statistical tables threatened to capture us all, making us either the victims of or the mantle-holders for the “organization man” or “man in the grey flannel suit” that we heard so much about in the 1950s. All of which is to say: although I think it is certainly true that whatever we might call the dominant logic of culture today is fundamentally shaped by contemporary technology, it is hardly one that has assumed the principles of calculation, reductionism, or standardization that Heidegger, among many others, has associated with the increased ubiquity of technology in social life.

However, while I take it that Heidegger's description of sociotechnics is largely a moribund one, I do think we might have much to learn from repeating or updating his historical take on the evolution of technology and ontology, and I will perform such a reading in this chapter, with a few key differences. More specifically, I am interested, on the one hand, in refracting or skewing Heidegger's indexing of the growing centrality of technology in social life to the decline of metaphysics from Plato onward, and in suggesting instead that we might more productively read this historical sequence as evincing the “return” of elements of rhetorical thought and praxis that were largely crowded out by Platonic thought. On the other hand, I am ultimately not so much interested in this particular redrawing of the boundaries or exchanges between particular disciplines as I am in the ways in which such a reconsideration might offer us a different attitude toward and concrete strategies for responding to the “technologics” of today.

While the next three chapters of this book will take up these strategies around more specific questions relating to politics or social power, the contemporary status of persuasion or motivation, and ethics, I think that in addition to doing some important ground-clearing in advance of these other questions, thematizing or theorizing the dominant cultural intersection of technics and media will have much to teach us in and of itself, including some insight into the status and function of “theory” in the past and present.

HEIDEGGERIAN DETOUR: HISTORICAL ONTOLOGY AND THE HISTORY OF TECHNIQUES

But before we begin, a brief detour through Heidegger's approach to historical analysis; insofar as this chapter attempts a certain kind of rereading of and response to Heidegger's project and what I take to be its influences on contemporary thinking about technology and culture, it may be necessary to provide at least a little more detail, however briefly, about that process and these effects in addition to what was sketched above. The first attempt to thematize “historical ontology”—to trace the ways that human conceptual processes and patterns of self-understanding have changed over time—is likely that undertaken by Hegel in his Lectures on the History of Philosophy. Early in his introduction to that work, Hegel emphasizes the apparent “inner contradiction” of that text's titular objective. “Philosophy,” Hegel writes, “aims at understanding what is unchangeable, eternal, in and of itself,” but “history tells us of that which has at one time existed, at another time has vanished, having been expelled by something else” (7–8). Thus, though one might produce something like a chronological rendering of the different propositions or methodological systems developed by philosophers, it would not seem to have the same temporal or developmental import we usually attribute to historical analysis. Hegel's unique remedy to this situation would be to combine the transhistorical or universal characteristic of philosophy with a conception of its maturation as being similar to that of an individual's learning process. Thus Western philosophy's development could be thought of as like that of “a universal Mind [Geist] presenting itself in the history of the world”; philosophy can be traced progressively while maintaining its “universal” quality, as long as one considers its advances as proceeding in the same way that self-consciousness and maturation occur in individual minds, but in this case distributed across different civilizations and communities (33). Hegel would further approach this question from the opposite angle in the undertaking that, appropriately enough, transposes the subject and domain of analysis of The History of Philosophy; he writes in The Introduction to the Philosophy of History that philosophy allows us to chart the coming to self-consciousness of the universal Geist, which is itself the self-consciousness of human freedom, and thus “world history is the progress in the consciousness of freedom” (22).

Perhaps what is most striking about Heidegger's particular return to this question is the way in which his own execution of a “philosophy of history” or historical ontology largely inverts both the premise and conclusion of Hegel's attempt; if Hegel's histories map a slow progression toward greater awareness and freedom, Heidegger's narrate the increasing abstraction or overmediation of human self-understanding and experience, culminating in a kind of “freedom” that is only possible in the absence of human purpose and responsibility in general. Thus Heidegger's division and analysis of historical ontology via five primary “epochs”—the pre-Socratic, Platonic, medieval, modern, and late-modern (Heidegger's present) stages of ontological self-understanding—is one in which the possibility of an unmediated and positive relation to ourselves and to “reality,” one that seems at least implicit in the writings of the pre-Socratics, becomes increasingly distant as Being becomes successively grounded around the central forces of the Platonic forms, God (in the medieval epoch), the human subject (modern), and the spirit of technology or pure operationality (the present). If the sequence leading from the Platonic to the present is largely a series of errors in a too-narrow and too-easy grounding of Being that culminates in a problematic “self-grounding,” the contemporary epoch that Heidegger associates with the spirit of technology expresses something like the total abandonment of the enterprise altogether. Technoscience, Heidegger tells us, takes over the traditional task of philosophy, that of determining “the ontologies of the various regions of beings,” and retools it, focusing only on superficially theorizing “the necessary structural concepts of the coordinated areas of investigation,” an examination of merely those structural vectors that have an apparent operational impact on function or performance (“End” 58). Thus, Heidegger concludes, “‘Theory' means now: supposition of the categories which are allowed only a cybernetical function, but denied any ontological meaning.”

For Heidegger, this bleeding over of the practical or utilitarian vectors of technology or technics into the realm of episteme, the former space for reflecting on such processes, subsequently results in a certain kind of “technologization” of humans themselves. While Heidegger could find recuperative tendencies in even Nietzsche's “will to power”—this reference point for Heidegger's modern “epoch” is still suggestive of a certain transformational purpose for individuals—what Heidegger calls the “will to will” that dominates human existence in the age of high technology seems to have little to no redemptive potential. Rather than a will that drives one toward the goal or the fulfillment of a particular process, even one that is destructive or self-defeating, a will to or for something, Heidegger suggests the contemporary age is dominated by what we might call a will to or for anything, an unguided and chaotic acting out of contemporary cultural forms without any particular end other than the continuation of the existing process. Thus, while the essence of technology produces what Heidegger calls an “unconditional uniformity” of humankind, it does not do so around the insistence on a particular identity or ontological “grounding,” but rather via the absence of stable identities and the retreat of any ontological foundation for human existence. As Heidegger concisely summarizes this sequence in the essay “Overcoming Metaphysics,” “The unconditional uniformity of all kinds of humanity of the earth under the rule of the will to will makes clear the meaninglessness of human action which has been posited absolutely” (110). Rather than being cathected to an overly rigid or narrow arbiter of truth and self-understanding, modern individuals remain in a kind of existential holding pattern, a frantic and aleatory movement within a determined space or system.

I am not so much interested in contesting the accuracy of what Heidegger identifies here as the challenges posed, or negative effects caused, by the movement of technology or technics to the center of the cultural field; indeed, as I will suggest below, there are many reasons to believe that they have become, if anything, even more urgent or “dangerous” given changes in technology and media since the time of Heidegger's writings. I am, however, interested in contesting the modes of rectification or amelioration promoted by Heidegger in response to these challenges. In other words, it is not really the problems foregrounded by Heidegger in his reading of technology that seem necessarily moribund from the view of the present as much as it is the solutions proposed for them, ones that I take to be not only fairly representative of contemporary thinking about technology but, as we shall see, of a wide variety of both popular and academic social theory today.

“Solutions” may be too strong a word, given Heidegger's preference in his late work for a kind of hesitant and provisional response to contemporary culture—indeed in many ways it is “more meditation” that itself appears to be Heidegger's most common suggestion for remedying the negative social effects of the will to will, or the culture of nihilism growing around the dominance of calculative reason. What we might, however, at least call Heidegger's “strategies” might be divided into three categories. First, there is an attempt to recover the relation to Being that Heidegger sees in pre-Platonic thought, a common concern of Heidegger's late writings. The suggestion that only a “god can save us” in the title of the source for our epigraph above is, for example, a reference to such a possible new grounding; elsewhere Heidegger holds out hope that we might “stand and endure in a world of technology without being imperiled by it” through retrieving our “rootedness” or seeking “a new autochthony which someday even might be able to recapture the old and now rapidly disappearing autochthony in a changed form” (Discourse 55). Second, there is the very process of retracing the history of error in metaphysical reasoning that occupies so much of Heidegger's late work. If the dominance of technology today is only the culmination of a long history of the mistaken “double grounding” of ontology onto a theological or primary force, then a “double reading” that allows us to see how that process takes place and to contextualize it within that history might provide some purchase on its power or some insight into its deceptive qualities. Finally, there is Heidegger's suggestion that we need to rethink our standards of valuation, particularly those of how we judge the importance of meditative thought or philosophy proper. If the world is dominated by the “will to will” or a calculative reasoning that focuses on only immediate results, and this is at least partially what allows something like cybernetics to take over the role normally given philosophy, then we must, in some sense, start to value that which is not at least immediately effective, or not effective by the same standards dominant in contemporary culture. As one might expect, philosophy itself becomes a key vector in this process, which Heidegger tells us “is not a kind of knowledge which one could acquire directly, like vocational and technical expertise, and which, like economic and professional knowledge in general, one could apply directly and evaluate according to its usefulness in each case” (Introduction 9). Indeed, as Heidegger goes on to argue, whenever it appears that philosophy has found “a direct resonance in its [own] time,” that is our signal that it is not really philosophy, or it is a philosophy being misused, and those who make such claims for it are making the mistake of thinking philosophy could be evaluated “according to everyday standards that one would otherwise employ to judge the utility of bicycles or the effectiveness of mineral baths” (9, 13). In other words, insofar as the cult of pure functionality is the sign of the absence of “real” self-understanding, philosophy is most apparent to us in the ways in which it does not “work” or “function” along these pragmatic or utilitarian lines.

And it is here, in Heidegger's three strategies for responding to contemporary technology, that I think we might detect a large number of correspondences with contemporary thinking about the problems of (today's) modernity. Perhaps most obviously, we might find some fairly clear parallels between, on the one hand, Heidegger's counterposing of the hypermediated and decadent present with an original or “rooted” centering of human life and society and, on the other, a wide variety of conservative calls for a “return” to a more grounded or simpler way of life. Indeed, the very symptoms that mark the degradation of the human spirit under the reign of technology for Heidegger in “The Age of the World Picture”—“the flight of the gods” and secularization of the world, the reduction of all community bonds to those of “mere culture,” the triumph of hollow consumerism and the abandonment of tradition—read much like the laundry list of complaints issued by a variety of what we might call contemporary “retro-fundamentalisms,” from religious radicalisms and ethnic chauvinisms to the (comparably benign?) calls for a return to “simpler” styles of life and value systems. While it may go too far to suggest that a large spectrum of contemporary conservative ideologues are kissing cousins of Heidegger, there is more than a mere family resemblance between his more elegiac or nostalgic mode and recent “back to X” movements in the present political scene.

At the ostensible other end of the continuum, we might find a significant legacy of Heidegger's “double reading” strategy, his zeroing in on the transitory nature of what has historically served as an ostensibly transhistorical arbiter of truth and reality, in left-oriented social and political theory. Here, of course, there is a particularly direct connection between Heidegger's thinking and the leftist cultural theory that has drawn some of its most powerful concepts from continental philosophy over the last few decades. Indeed, what we might reasonably consider the two greatest content providers for this domain, Jacques Derrida and Michel Foucault, both seem to have drawn appreciable influence from Heidegger's “ontological history.” As numerous commentators, including Derrida himself, have pointed out, Derrida's immensely influential concept/practice of deconstruction is something of a belated continuation of Heidegger's planned destruktion or “the task of destroying the history of ontology.”1 While Foucault's genealogical work on the history of knowledge owed much clearer debts to Nietzsche, he would state in his last interview that Heidegger had always been for him “the most essential philosopher” and that his “entire philosophical development was determined” by his reading of Heidegger's works (“Return” 250). And indeed, Foucault's investigation into what he himself has called a “historical ontology of ourselves” would follow a path—from the history of European thought back to Greek antiquity—and a methodology similar to Heidegger's, even if its upshot for present-day readers, one Foucault described as showing people that they are “much freer than they feel,” was almost entirely the opposite of the one offered by Heidegger in his writings on the present state of ontology (“What” 20).

More generally, it is certainly also worth mentioning the degree to which Heidegger's prizing of “authentic” [eigentlich] modes of self-relation and action has cast a long shadow over contemporary thinking about related modes of “self-actualization” or “resistance” to dominant modes of social subjectivity or behavior. While the term “authenticity” itself may be long out of fashion in much left-oriented academic discourse, the qualities associated with the concept have largely returned under new names. As Jeffrey T. Nealon argues, for instance, what we might call the “agency question” in contemporary cultural theory is really a disguised “question about ‘authenticity'”; when the question of agency is raised in critiquing a theory of power as either too totalizing or too naively voluntarist, “agency is a code word for a subject performing an action that matters, something that changes one's own life or the lives of others … doing something freely, subversively, not as a mere effect programmed or sanctioned by constraining social norms” (Foucault 102). In this context and several others—the turn to affect in social theory discussed previously in this book would also be apposite here, for instance—what Adorno once called the “jargon of authenticity” in Heidegger's work remains, in many ways, the lingua franca of theory in the social sciences and humanities, particularly any that take up questions of power and politics explicitly.

Finally, we might see a legacy of Heidegger's emphasis on the “nonoperational” character of philosophic thought—and in a sense also gather together both the left- and right-leaning pseudo-Heideggerianisms sketched above—in the context of the “death of theory” debate we had occasion to reference in the last chapter. Insofar as this debate has revolved around presumptions that theory has become (or always was?) “useless,” and/or that its central tenets have become misused by the kinds of entities and institutions it was originally designed to critique, I take it to be one very much bound up with Heidegger's earlier concerns about the misuse of philosophy and the need for theory to be a meditative and not immediately impactful discourse—the counterpoint to a social realm that too quickly fetishizes the immediately useful and effective over everything else.

At the same time, however, these more recent reflections seem to ask somewhat different questions about the viability of “impractical” theory as a counterpoint to contemporary cultural paradigms. Consider, for instance, Critical Inquiry editor W. J. T. Mitchell's response to the New York Times article on the journal's 2003 symposium that in many ways set the ball rolling for the “death of theory” discourse. In what Mitchell himself calls “an embarrassingly earnest letter” he sent to the Times in reply to the article, Mitchell tells the Times editors that their original article title (“The New Theory Is That Theory Doesn't Matter”) is not so much inaccurate as it is lacking a crucial qualifier: “immediately.” While the Times coverage accurately described the symposium participants' negative response to the question of whether cultural theory could immediately intervene in the Iraq War taking place at the time, it failed to acknowledge that cultural theories “take time to percolate down to practical application” (328). Offering a particularly appropriate example, given the context, Mitchell goes on to suggest that “those who think theory doesn't matter should note that the present war in Iraq is the long-term consequence of political theories (hatched at the University of Chicago, amongst others) that are now heavily represented by key intellectuals in the Bush administration.”

What is striking to me about Mitchell's response here is how it hews toward Heidegger's consistent emphasis on critical thought as a preparatory and largely anticipatory activity, one whose practical effects, if any, are largely only apparent long after the fact, as well as how it also introduces, at least by proxy, one of the other major planks in the “theory is dead” platform: that left-oriented cultural theory is in decline precisely because of the availability of its central tools for appropriation by actors of almost any type, that theory has indeed become itself as operational or pragmatic as the forces it was designed to counterpoint and slow down. In this sense, then, critical thinking seems to be itself very much a contemporary technic, or part of the “essence of technology” today, just as much as the aleatory consumerism and proliferation of spectacle that Heidegger railed against in his own time. In this sense, turning the crank on the history of ontology another notch, we might say that even Heidegger's critique of the essence of technology and its legacies today look more like a symptom of, rather than a strategic response to, the contemporary regime of technics, or, to put it a little more pointedly, Heidegger's solutions to the calculative regime of sociotechnical systems seem to be the problems or challenges that are posed by its current incarnation.

In any case, it is with that ironic situation in mind that I proceed with my own history of sorts in the remainder of this chapter. Although I will be surveying largely the same terrain that Heidegger covers in defining his “epochs” of ontological history, I want to shift ground somewhat in deciding precisely what convergence of forces should take priority. Despite the fact that Heidegger's work in and around the history of metaphysics is in one way all about mediation—the ways in which our access to the original pre-Platonic question of Being has been undermined by the mediation of history, the ways in which our present-day experience and self-understanding suffers from a certain overmediation of cultural and technological forces—it is, in another way, almost entirely blind to questions of media; in other words, while Heidegger may attend to a broad contour of the evolution of technology and its impact on cultural systems of (self)representation (from his famous examples of hand tools in Being and Time to his writing about early computation technologies and cybernetics), there is very little discussion of what today we might take to be a major vector of contemporary technology: the ways in which it has become intertwined with a variety of representational media. Thus my own intervention or refraction here is by way of studying what might be more appropriately considered a “history of techniques,” a tracing of the convergences between media and technics or technology, as opposed to the combined history of ontology and technics favored by Heidegger. Such a shift in focus, it seems to me, has much to teach us about rethinking our responses to the contemporary intersection of media and technology and its cultural impact. More precisely, if an ontological history results in telling us something about the progression or regression of epistemic frameworks and its effects on our contemporary modes of self-understanding and ontological possibility—Hegel's tracing of the coming-to-consciousness of human freedom, Heidegger's exposure of the “nihilistic freedom” of mid-twentieth-century culture, or Foucault's mission to show people they are “freer” than they feel—then the alternative offered here, while still touching on how what we might call “human freedom” in a generic sense is managed in contemporary cultural formations, will have a more sustained focus on the particular techniques dominant today, how they function and might be made to function otherwise. Considered from a different angle, this sequence might also be viewed as replacing Heidegger's narrative of the historical decline of philosophy and critical thought as an organizing principle for culture with one that indexes the increasing centrality of persuasion, charting not the decline in the force of philosophical reason within culture, but the return of rhetorical forces that were originally crowded out in the formalization of Western philosophical thought.

More programmatically, in place of Heidegger's five epochs, this chapter's periodizing schema will focus on three key historical shifts in the arrangement of media and technics.

From the Measurement of Transhistorical Qualities to the Measurement of Universal Quantities

As numerous historians of science have noted, the emergence in mid-thirteenth-century Europe of diverse new systems and tools for the quantification of phenomena of various types inaugurated a fairly radical break with the legacies of Platonic and Aristotelian metaphysics and early Greek science. According to the classicist F. M. Cornford, the so-called Greek miracle, the emergence of Greek science and of “rational” thought and the retreat of a mythological worldview, was motivated by a new desire to formulate “an intelligible representation or account (logos) of the world, rather than the laws of the sequence of causes and effects in time—a logos to take the place of mythoi” (141). The resultant focus on transhistorical substances or qualities would dominate science and intellectual thought in Western Europe for many centuries, until the proliferation of techniques for perceiving and representing phenomena as variable quantities that found its start around 1250 CE. The historian Alfred W. Crosby, appropriating a term that goes back to at least the mid-fifteenth century, has referred to this era as one marked by an urge toward “pantometry,” the systematic measurement of all things. Key to this sea change would be the creation of a variety of instruments (from accounting tables to public clocks) and representational techniques (the integration of geometric perspective into illustration and the refinement of cartographic projection) that troubled the barriers between phenomena that represent information and machines that perform technical processes.

From the Determination of Quantitative Measurements in the Present to the Calculation of Probable Relationships in the Future

Beginning in the seventeenth century, roughly coinciding with the formalization of the mathematical calculus by Newton and Leibniz, we see the emergence of a large number of new systems and practices that move beyond the measurement and relation of discrete quantities to determining, on the one hand, more flexible relationships of correspondence, compossibility, or adequality, and, on the other, to conceiving of the relationships between phenomena as a series of processes unfolding in time, sequences whose probable results could be calculated if enough of their variables could be determined in advance. The statistical information and data relationships of pantometric techniques here become operationalized in a series of social processes and management styles for political economy that might be taken to have peaked in the early to mid-twentieth century.

From the Calculation of Probable Relationships to the Creation of Algorithmic Processes for Managing Associations

This contemporary moment, one largely midwifed by the cybernetics movement in the mid-twentieth century, is marked by a shift toward the integration of a variety of flexible systems in which the pursuit of specific goals is secondary to maintaining a systemic equilibrium that permits a spectrum of “acceptable” results to take place. This particular shift, I will argue, is at the heart of a number of seemingly paradoxical changes in contemporary culture and political economy that were addressed in the previous chapter—from the “standardization” of niche marketing, to the loosening of prohibitive strictures as a control mechanism within social power.

In what follows, I address the first of these sequences in reference to its importance as a generative context for thinking about media, technics, and culture in Western intellectual thought, and the second because I believe it to be the regime that largely remains the focus of contemporary criticism of technology. However, as a reader might expect, my primary concern will be the final sequence, the one that I take to be the dominant cultural paradigm for the convergence of media and technics today.

PANTOMETRICS

Although it may appear counterintuitive, the convergence of media and technics in the period with the greatest historical distance from the present may also be the one that is most familiar to modern minds. While the methods for standardizing measurements and their representation that were the driving force of immense industrial, scientific, and artistic achievements at this time—from mechanical clocks, to accounting tables, to the introduction of geometric perspective into visual representation—may have been spectacular and novel applications of human-made frameworks to natural phenomena, they have become so ubiquitous and proletarianized in the present as to seem entirely natural, if not invisible. Similarly, the conceptual process that helped transform the prevailing mentalité of Western society beginning in the thirteenth century and that was embodied in such devices is now largely second nature. As Crosby describes it, the spread of universal measurement required mastering only a simple three-step process: “reduce what you are trying to think about to the minimum required by its definition; visualize it on paper, or at least in your mind, be it the fluctuation of wool prices at the Champagne fairs or the course of Mars through the heavens, and divide it, either in fact or in imagination, into equal quanta” (228). For these reasons, perhaps the only way to mark the paradigmatic nature of such a shift—and for our purposes here, to begin tracing a recurring process that we might map onto successive regimes of technics and media—is to consider its distance from the opposing emphasis on qualification that was inherited from Greek intellectual culture.2

Insofar as the predominant vector of scientific and industrial representation in the late Middle Ages and early modernity focused on the presumed universality of measurement schemes, this was a radical break with the very concept of “universality” inherited from Greek antiquity, as well as from the ways the Greeks themselves measured the relative importance of disciplinary domains of knowledge. While using the term “qualification” to describe Greek approaches to metrics has the advantage of contrasting nicely with the emphasis on “quantification” that later came to dominate scientific thought, it is perhaps more enlightening to consider how Greek intellectual thought's investment in determining the essential qualities of phenomena led to a dominant concern with absolutes and transhistorical essences. Plato's concentration on determining the necessary qualities of both physical objects and mental concepts is, of course, a key vector of his dialectical method, one often linked with his equally consistent argument that “the truest of all kinds of knowledge” was produced only by disciplines “concerned with being and with what is really and forever in every way eternally and self-same” (Philebus 58a). Thus in the Philebus, amongst other dialogues, arts and sciences that study variable phenomena or historical mutability itself are relegated to an inferior position, with rhetoric—that discipline that, on Plato's reading, pretends to universal application via a counterfeiting of philosophical methods—singled out as a particular danger.

Against this backdrop, the sea change initiated by quantitative reasoning and the popularization of pantometric media (geometrically based illustration and cartography, musical notation) and devices (calculating boards and accounting tables, public clocks) is perhaps most striking in how it disturbed this rigid discrimination of episteme from techne that was inherited from Greek culture; these phenomena seemed to simultaneously perform or in some ways combine functions that used to be exclusive to representational media—particularly those that would record or present established knowledge or logical concepts and relations—with those of physical technologies that produce actions in material reality. In this sense, period technologies such as calculating tables were both means of representation and tools for the actual performance of measuring and relating different quantities. For their part, more recognizably static signifying media such as uniform musical notation and reliable cartographic projections were emblematic of an increasingly “executable” form of representational media, one whose representational capacity was useful only insofar as it was operational for its users. Both types would serve the dual purpose of being representational media and technologies for establishing relationships between individuals and items, mediating economic exchanges, and establishing cultural norms.

We might take the installation of large mechanical clocks as a synecdoche for the pantometric conflation of media and technics in at least three significant ways. First, the standardization of shared time that took place via the installation of public clocks demonstrates a clear break with the purely representational or static mimetic capacities of previous media. Importantly, such a break would come through a modeling of dynamic organic processes within a mechanical realm. The presumption of a constancy of “time” that would inspire the making of something like a mechanical clock is, of course, based on a particular appropriation of natural processes (the division of time by the rotation and orbit of the earth as well as by the “body clock” of circadian rhythm). The “automaticity” of the mechanical clock designed around its approximation of biological function would be both an early avatar of the later modeling of living systems within technical realms and an inspiration for what E. J. Dijksterhuis would call the “mechanization of the world picture” in science and philosophy of the sixteenth and seventeenth centuries, a way of representing the world or various natural and social systems within it as a discrete entity, but one composed of moving parts changing over time (both Descartes and Leibniz, for example, would come to use mechanical clocks as key symbols for life and consciousness in their metaphysical writings).

Second, public clocks of this time are also one of our best examples of the ways in which pantometric devices created a socialization and a secularization of forces and concepts that were previously “held” privately or under the aegis of religious rituals and authorities. Time was of course already “shared” socially before the installation of public clocks, via a variety of mechanisms used to broadcast the hours kept by private timepieces, but public clocks both regularized this process and for the most part removed the need for constant human intervention in the keeping of public time.3 Thus the introduction of public clocks in the Middle Ages proletarianized a function that used to be largely regulated by religious authorities; as the historian Jacques Le Goff writes, the introduction of public clocks led to “the great revolution of the communal movement in the time domain,” as merchants and artisans “began replacing this Church time with a more accurately measured time useful for profane and secular tasks, clock time” (35). More generally, this phenomenon led to what the philosophers of science Isabelle Stengers and Didier Gille call “the autonomization of social time,” a sociotechnical event that would play a key role in prompting further investigations into the best ways to establish “universal” systems of chronological and spatial projection that can be shared inside and across communities of various types, a “second nature” of temporality that could take the place of less reliable systems of measuring astronomical time with sundials and other instruments whose accuracy was dependent on the skill of their individual users (179).

Finally, the popularity and ubiquity of socialized time embodied in public clocks provides perhaps our best demonstration of the ways in which pantometrics shifted the economies of quality and quantity as conceived in Greek antiquity, as well as helped augur the return of the metron techniques of early rhetorical and sophistic thought opposed by Platonic metaphysics. The abstraction made possible by quantification became prized for its functional, rather than epistemic, accuracy; while systems for producing and relating quantities of various types were obviously “artificial” in the sense that they did not represent any “inherent” or “essential” property of the phenomena being measured, this is what formed, as Henri Bergson would later suggest, “the quality of quantity” that would allow one to “form the idea of quantity without quality,” or a measuring system based solely on effectivity rather than epistemology (Time 123). While in Heidegger's schema we might retrospectively see such a shift in epistemic and pragmatic prioritization as a precursor to the “essence of technology” that takes root even earlier than Heidegger may have considered (the shared time of public clocks being an early version of the rejection of “individual time” that Heidegger attributes to radio broadcasts?), it is perhaps best to consider the withdrawal of concern for ontological properties in favor of pure function as an early step in the sequence through which ontology will increasingly be itself put to work, a process through which human conceptual power and the prevailing mentalité of a community is actively integrated within cultural processes.

PROGNOMETRICS

Near the start of the seventeenth century, the pantometric techniques and the large variety of devices and systems based on quantification began to give way to an entirely new system of techniques, technologies, and models of mental operation based instead on calculation. Indeed, the end of Crosby's periodizing scheme corresponds roughly to the formalization of calculation proper in mathematics through the co-invention of the infinitesimal calculus by Newton and Leibniz in the mid-seventeenth century. We might take one of the key concepts of that system, adequality, or the concept of “proximal equality,” as a suturing point for a large range of subsequent cultural changes based on the assumption of “likely” relationships between phenomena or the results of particular processes. The emergence of a number of techniques at this time that we might describe with the portmanteau word “prognometrics” (“prognosticate” or predict + “measure”) would demonstrate a turn away from methods for quantifying phenomena in the present and toward procedures for predicting the probable future states and effects of natural, mechanical, and social systems of various types. The effects of phenomena occupying a state between mediation and technology here, and their impact on societal change during modernity, might be found, for instance, in the increase in stock-exchange systems and early speculative markets, as well as in a series of new techniques and technologies, spectacles and sign systems, used to define social roles and modulate the future behavior of individuals that perhaps found their most influential study in Foucault's Discipline and Punish.

In this sense, prognometric techniques might be taken as a break with pantometrics, but one that intensified the latter's initial movement away from traditional metaphysics. As with the shift from the Greek emphasis on qualification to the quantification refined over the three previous centuries, the emergence of prognometrics in the seventeenth century would mark a further remove from the consideration of ontology as a “first philosophy” that could account for the eternal nature of things, and a turn toward the strategic leveraging of human sensible and ideational capacities as an operational component of industrial, aesthetic, and political design. Beyond its important role in the history of mathematics proper, the development of the calculus might be taken as a conceptual linchpin for an entire host of practices and design strategies that may have found their roots in the rarified world of metaphysics, but would soon spread into and beyond art and industry to become a quilting point of social life in the subsequent century.

As the French mathematician René Thom suggests, the controversies between the followers of the physics of Descartes and Newton that were reaching their apex at the end of the seventeenth century were ones largely arranged around the question of whether the ontological concerns of traditional scientific theory or the more immediately pragmatic forms of abstraction based on calculation should be given precedence: “Descartes, with his vortices, his hooked atoms and the like, explained everything and calculated nothing; Newton, with the inverse square law of gravitation, calculated everything and explained nothing” (5). Insofar as pantometric methods and technologies overcame the Greek obsession with transhistorical absolutes and the unchanging properties of substances through time, prognometrics would accelerate this process. Most notably, the quantifiable regularities of both the natural world and the operations of social systems that became the common coin of sixteenth-century science would be turned into mathematical variables that would not only represent the common behavior of these systems, but also predict their likely future behavior under specific conditions.

As with the switch from the commonplaces of Greek metaphysics and mentalité to the natural philosophy and conceptual tools of pantometrics, this shift might also be discovered at least provisionally in changes in formal scientific methods of the time, and then subsequently traced through the innovations in art and aesthetics as well as the material technologies of the same moment. If the Aristotelian legacy within medieval science could be seen most strongly in a disaffection for observational methods and physical experimentation, and the pantometric era could be defined around the direct measurement and manipulation of imaginary units of quanta and the use of specialized tools for manipulating such units, prognometric methods and techniques would, on the one hand, demonstrate an internal accounting for human variability in the use of measuring technologies (and eventually its full-scale automatization), and, on the other, display an increase in the forecasting abilities of such measurements to not only record past and present behavior but provide oversight on future planning. The former would largely take place through a greater formalization of standards for calculating and extrapolating conclusions from vast amounts of quantitative data—the emergence of statistics as a recognizable discipline—the data itself in many ways only newly enabled by the massive increase in material records made possible by the popularization of printing technologies.

Such a process pushed the pantometric desire for “universal measurement” to another level—now there would be not only universal units of measurement, but also universal standards for how one would aggregate and extrapolate conclusions from such measurements. As many philosophers and historians of science have noted, such mechanisms emphasize a growing awareness of the subjective element of human observation and record collection—an attempt to acknowledge and account for the potential errors on the part of individuals collecting such data. At the same time, though, corrective protocols appeared based on the trust that revised methodologies could ameliorate such problems, pursuing what the historian Zeno G. Swijtink has called “observation without an observing subject” (280). There was also an increased concentration on methods for dealing with outliers and unusual results; the concern for accounting for such irregularities was in many ways a result of the increase in quantitative data available, but it was such increases in the size of data that made mathematical methods for dealing with irregularities a viable strategy. The result is what we might call an increase in the “operationality” of quantitative information of this type. For instance, although vital listings were kept in much earlier times, by the 1600s “bills of mortality” would be made publicly available so that, as the historian Richard H. Shryock writes, “emerging dangers could be detected and that the genteel would ‘know when to leave town'” (225).

The ability to manipulate quantified information would lend a different type of “operationality” to aesthetics and art proper of the time, one that would intensify the tendencies toward the simulation of living systems within inorganic realms as well as the increasing conflation of the domains of technics and media generally that found its formal start in pantometric techniques. This is perhaps most clear in Baroque art; indeed, the key defining traits of Baroque art and architecture—an abundance of fine details, an emphasis on lighting effects, the use of trompe-l'oeil and other visual illusions simulating dimensionality and movement, and images approximating multiple generic forms simultaneously—speak to the use of formal (mathematical) and informal (strategic) techniques of calculation far beyond the introduction of perspective into images, concomitantly increasing capacities to introduce movement and “liveliness” into static material works. Heinrich Wölfflin would summarize these tendencies in his early and influential treatment of the Baroque around a displacement of traditional geometric forms by ones that spectators would experience as flickering between different states, or appearing to move despite their solidity:

Line was abolished; this meant in terms of sculpture that corners were rounded off, so that the boundaries between light and dark, which had formerly been clearly defined, now formed a quivering transition. The contour ceased to be a contiguous line; the eye was no longer to glide down the sides of a figure, as it could on one composed of flat planes … Just as no need was felt to make the lines of contours continuous, so there was no urge to treat the surface in a simpler way; on the contrary, the clearly defined surfaces of the old style were purposely broken up with ‘accidental’ effects to give them greater vitality. (35)

Such tendencies might be taken as synecdoches for two ways that calculation became a conceptual operating principle for a wide variety of techniques and endeavors of various types that newly emerged in this period. On the one hand, there is the explicit use of mathematical calculation in the design and creation of these architectural features; on the other, there is also an emphasis on calculating the experiential effects of the created design on its future viewers, an introduction of the physiological sequence of perception and reaction on the part of the viewer into the design process. As Wölfflin writes, because the viewer of the detail-packed and abstract contrasts of Baroque visual works “cannot possibly absorb every single thing in the picture” they are “left with the impression that it has unlimited potentialities, and his imagination is kept constantly in action” (34). If pantometric techniques of exact and “usable” measurement helped bring into existence a new type of art that could more precisely approximate the surface and contours of organic objects, the introduction of more advanced mathematical methods and the general interest in calculating procedural effects into the design of Baroque works would provide a way to simulate organic movement, and to more precisely presume and dictate the organic process of visual perception and affective response to such works.

Through the movement in Baroque architecture in which decorative elements became more specifically communicative, and through which Baroque artists of various types integrated the presumed sequence of perceptual processes that would be prompted by such materials, we might recognize the distinct ways in which what I have been calling prognometric techniques demonstrate a merging of technics and media that would be a step beyond what was achieved in the pantometric era. This conflation, and the procedural character of Baroque representational media more generally, might also serve as the principle that connects it closely with coeval developments in science and metaphysics of the time.4 The calculating and tracking of the mental processes of the viewer would find a complement, perhaps most obviously, in the Associationist School of psychology and philosophy; as some historians of science have suggested, concepts of the human mind and perceptual process as described by such thinkers as John Locke, David Hume, and David Hartley are closely linked to more general intellectual concerns with probability and the forecasting of possible effects: both considered “the mind a kind of counting machine that automatically tallied frequencies of past events and scaled degrees of belief in their recurrence accordingly” (Gigerenzer 9). In a more methodological vein, Dalibor Vesely suggests that the abstract visual forms produced by the Baroque luminary Giovanni Battista Piranesi presume an operation of mental abstraction and processing on the part of all viewers that might be compared to that of the “thought experiment” now common in empirical science; much as a thought experiment presumes that a participant can imagine an idealized version of reality, the distinctive qualities of Baroque visual art rely on a similar power of abstraction and real-time cogitation by its viewers (257, 443n49). The spread of such techniques into the general social field demonstrated a further movement in the ways the ontological conditions of the era became available as strategic resources for unique techniques and technologies of this moment.

FROM PROGNOMETRICS TO PARAMETRICS

I spent a considerable amount of time above tracing the emergence of statistical methods in social governance and the origins of “calculative” economies for a very specific purpose. While the use of statistics in the management of political economy goes back at least as far as the early seventeenth century and “machinic” perspectives on social management were largely composed in the late eighteenth century, I take it that it is these types of theories and practices—ones in which quantifiable data and mechanical calculation are the coin of the realm—that are typically marshaled as evidence in discourses that identify contemporary (information) technology and a reliance on quantitative research methods as, if not the outright cause, then at least adroitly symbolic of a society taken to be increasingly inhumane or mechanistic in its logic. In other words, in addition to Heidegger, it seems that the vast majority of critiques of this kind are premised on the notion that we have abandoned some socially ameliorative and humanistic investment in critical judgment in favor of a world system run on brute calculations.5

Perhaps a touchstone for this critique is Adorno and Horkheimer's Dialectic of Enlightenment, a text that takes the contemporary domination of calculation or programmatic reason as exigence for tracing its beginnings back to Enlightenment thought. For Adorno and Horkheimer, when a philosopher like Kant, the figure who perhaps more than any other of his time made the concept of judgment central to philosophical inquiry, ends up “confirming the scientific system as the embodiment of truth,” this marks the beginning of the end of judgment as it might be conceived in any ideal or even positive sense; the self-reflective capacities crucial to judgment disappear and “thought sets the seal on its own insignificance,” emulating scientific reason and thus becoming “a technical operation, as far removed from reflection on its own objectives as is any other form of labor under the pressure of the system” (66). While tracing this tendency back to the eighteenth century when advances in mathematics and formal logic “offered Enlightenment thinkers a schema for making the world calculable,” it might be taken to be approaching its apex in the postwar period, as shown in Adorno and Horkheimer's famous descriptions of the impact of positivist methodologies on social analysis and government planning and that of “the Culture Industry” in shaping human motivation and the attribution of value (4). Today's world, or at least the world of early postwar America, is one in which “chance itself is planned”: “For the planners it serves as an alibi, giving the impression that the web of transactions and measure into which life has been transformed still leaves room for spontaneous, immediate relationships between human beings” (117).

However, as I have already had occasion to suggest in the introduction to this book, it might equally be said that the wartime period—such a central symbol to Adorno and Horkheimer in Dialectic of Enlightenment as it was, sometimes for very different reasons, for the late work of Heidegger—was also the birthplace of a variety of prototypical techniques that would radically revolutionize not only contemporary technology and media, but also a wide variety of social and economic structures. These new techniques would follow the pattern of change and intensification that I have been sketching above, but would alter the logics of calculation, or what I have been calling prognometrics, in such a fundamental manner that it would be very hard to say that the targets of either Heidegger's or Adorno and Horkheimer's analyses remain dominant in the present moment. I map the occurrence of these parametric techniques throughout a variety of domains later in this chapter, but first I will try to unpack the difference between critical conceptions of the “calculative” or “reductionist” impact of technology on culture and what I take to be the more dynamically processual or algorithmic logic of society after cybernetics.

We might find a particularly resilient example text for such an objective in a work composed just a few years after the publication of Adorno and Horkheimer's masterpiece. Readers of Alan Turing's 1950 essay “Computing Machinery and Intelligence” can be forgiven for presuming that they were about to read an argument by one of the century's preeminent logicians about how the second subject of the title (“intelligence”) might be realized in the realm of the first (“computing machinery”). Though computer science research had certainly yet to produce a machine that could rival human intelligence at the time of Turing's writing, human expectation—if not technology—had reached science fiction proportions. However, in the essay Turing immediately brackets the question “can machines think?”—a query he immediately indicts as “absurd,” possessing a “dangerous attitude,” entailing an infinite regress of definitional obligations, and “too meaningless to deserve discussion”—and attempts to convert the thrust behind this query from an abstract concern into a performance containing multiple practices of human-machine interaction, mediation, and response (42–43). In place of a traditionally epistemological concern with the nature of “thinking” Turing suggests a retrofitting of the popular party pastime, the “imitation game.” The game is normally played with a man, a woman, and an “interrogator” of either gender. The interrogator asks the other two participants a series of questions, their responses are mediated through a third party or preferably transcribed or typed, and then the interrogator attempts to determine based on these responses alone the gender of each participant. Turing hypothesizes that the substitution of a “thinking machine” for one of the respondents in this game would create a more productive environment for engaging the significance behind the original question of “can machines think?”
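
For readers who find the procedural character of the game easier to grasp in executable form, the following is a minimal, purely illustrative Python sketch of the test's structure; the respondent classes, their canned replies, and the random interrogator are my own placeholders, not anything specified in Turing's essay.

```python
import random

class HumanRespondent:
    """Stands in for the hidden human participant; the reply is a placeholder."""
    def reply(self, question: str) -> str:
        return "I would rather not say anything about my appearance."

class MachineRespondent:
    """Stands in for the 'thinking machine' substituted into the game."""
    def reply(self, question: str) -> str:
        # A real program would generate a response calculated to pass as human.
        return "Count me out on this one. I never could write poetry."

def imitation_game(interrogator_guess, questions):
    """Mediate a text-only exchange and report whether the interrogator
    correctly identifies which hidden respondent is the machine."""
    respondents = {"A": HumanRespondent(), "B": MachineRespondent()}
    transcript = []
    for q in questions:
        for label, r in respondents.items():
            # All responses are passed along as typed text; the interrogator
            # never sees or hears the participants themselves.
            transcript.append((label, q, r.reply(q)))
    guess = interrogator_guess(transcript)   # the interrogator names "A" or "B"
    return guess == "B"                      # was the machine unmasked?

# An interrogator guessing at random; Turing's 70 percent figure treats success
# as a matter of probability over many runs rather than of definition.
print(imitation_game(lambda transcript: random.choice(["A", "B"]),
                     ["Will X please tell me the length of his or her hair?"]))
```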

Rather than attempting to “prove” the identity of human and machinic thought by recourse to formal logical or definitional mechanisms, or to his experience as a leading mathematician and logician of the era, Turing's retrofitted imitation game relocates this process to an interactive experiment that requires the reader's (virtual) participation. As a game, Turing's test constitutes a series of algorithmically complex rhetorical practices of interaction and persuasion that hinge more on human response to machinery than any philosophical inquiry into the possibility of a “thinking” machine: “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” (442). The original query (“can machines think?”) immediately hails questions of agency and cognition traditionally proper to philosophical reflection, an inquiry Turing pejoratively describes as one “best answered by a Gallup poll” (433). His rendering of it into a game depends no less on human response and no less on the quantification appropriate to polling—“I believe that in about fifty years time … an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning”—but these effects emerge out of the material practices between human and machine when put into circuit with one another and assay not the identity between the silicon and flesh participants but the negotiation of their alterity (442).

Turing's essay was the virtual rehearsal for an experiment in the clinical sense of the word and entailed all that goes with such a process, such as reproducibility and modification. As such, the original Turing Test comprises only the first iteration of a process that has been relentlessly duplicated and modified during the past half-century in actual attempts to assay artificial intelligence programs. In addition to this physical repetition, the Turing Test is often reiterated textually as a point of departure for gauging rapid advancements in automated computing technologies and the related rise of informatics in the life sciences. In the latter endeavor, the Turing Test, in both its technological and cultural import, has been popularly thematized as a pivotal event in not only the modeling of machines on the basis of human behavior, but also the reflexive erasure of embodiment in theories of human cognition and subjectivity: if a machine can think, or at least fool an interrogator in the very pragmatic performance of the Turing Test, then surely, many critical interpretations suggest, the difference between the two can be reduced to so much vitalist hand-waving, and intelligence and consciousness have nothing to do with the very human quality of embodiment and the very humanist quality of self-sovereignty. Hayles, for instance, positions the Turing Test as a ground zero for cyberneticists' supposed dismissal of embodiment from considerations of consciousness: “If you cannot tell the intelligent machine from the intelligent human, your failure proves, Turing argued, that machines can think. Here at the inaugural moment of the computer age, the erasure of embodiment is performed so that ‘intelligence’ becomes a property of the forceful manipulation of symbols rather than enaction in the human lifeworld” (Posthuman xi). For Hayles, the dislocation of the question “can machines think?” from a definitional query to the operative process of his revised imitation game does not so much shift the grounds of his engagement as compel the reader to accept deceptive assumptions: “Think of the Turing Test as a magic trick. Like all good magic tricks, the test relies on getting you to accept at a certain stage assumptions that will determine how you interpret what you see later.” Similarly, in a slightly more nuanced reading of the role of embodiment in Turing's test, Elizabeth A. Wilson argues that the “attentive constraint and management of corporeal effects” in the test produces the “projection of a certain fantasy of embodiment—specifically, the possibility of noncognitive body, or (in what amounts to the same thing), the possibility of a cognition unencumbered by the body” (111, 120).

At first blush, all of this—the association of Turing's test with the perceived disembodiment of human intelligence or human subjectivity, its positioning as a key moment in the history of science and technology fields that would come under fire by humanists and social scientists in the twentieth century as reductionist and politically naïve—makes a lot of sense. Turing does seem to be suggesting that something like “thought” or “intelligence” can be captured in written form (the pieces of paper passed back and forth from the “test subject” and its hidden interlocutor), he does go to great pains to restrict any test of a computing machine that would require it to have a (visible) body, and, as his appropriation of the “imitation game” based on guessing gender implies, the true test of whether an observer will presume they are interacting with another person is very much reliant on how well the machine is able to exhibit what will be accepted as (stereo)typical behavior for a human.

On a second read, however, one that pays closer attention to the specific parameters of Turing's Test and its processual design, there is also good reason to be surprised that cultural critics, particularly those well versed in contemporary theories of the performativity of identity categories such as gender, would find Turing's experiment so grating. After all, the exact notion that our perception of human identity is premised on a series of repeated performances of social typologies and behaviors—from Nietzsche's famous suggestion that “there is no ‘being’ behind the deed … ‘the doer’ is invented as an afterthought” (Genealogy 26), to the work on iterability and the “practices” of identity in Derrida, Foucault, and Judith Butler's groundbreaking writings—is pretty much the conclusion that put the “post” in postmodern theory.

In other words, there might be solid ground for such critiques if Turing were making a strong argument for an explicit and comprehensive definition of “actually existing human intelligence” in his essay; given, however, that Turing's interest is in testing how well a machine might approximate or reproduce such culturally conditioned factors and commonplaces in order to fool an interlocutor into accepting an incorrect identity, it seems that Turing's test is itself more a conscious acknowledgement of and programmatic response to postmodern notions of performativity. Indeed, we might even go further and state that Turing's test was not so much the start of reductionist, “disembodying,” and conservative notions of human subjectivity that might insist on rigid conceptions of human identity and appropriate human behavior, but rather the beginning of a process in which the wide range of possibilities in these realms are integrated into systems of communication and interaction within and between human and technological realms.

For these reasons I perform my own restaging of the Turing Test here in an attempt to remobilize the dynamics of Turing's engagement somewhat differently than is common in analyses of its cultural legacy; specifically I am interested in considering these vectors not as an epistemological endeavor but as a rhetorical ecology, if we can take rhetoric, as I suggested earlier in this text, to name not just economies of persuasion in general but a variety of techniques based on flexible formulas that eschew logics of identity and difference in favor of “as if” and “if then” directives and procedures. Even more precisely, we might say that Turing's schema here is not a philosophical argument that hypostatizes what “counts” as cognition, denigrates the importance of physiological bodies in this process, or anticipates the appearance of human intelligence in machine domains. Instead it is a strategic description of particular forces in interaction that anticipates much of contemporary culture, one in which the distribution of agency and subjectivity into organic and machinic realms is becoming increasingly ubiquitous and the dominant logic of virtually every important system of interaction—political, economic, cultural—seems to follow the procedural operations of the Turing Test far more than its presumed reduction of human intelligence to operating symbols. In other words, I want to invert the relationship above where rhetoric is taken as a secondary operation of argumentative arrangement that eases the acceptance of an epistemological conclusion, and instead position as primary the rhetorical arrangement of capacities to affect and to be affected. More specifically, I want to trace a movement of technological evolution in the realms of mediation and automation, one in which the prognometric focus on the accurate predicting and provoking of a spectrum of possible results becomes formalized into an entirely new system of processes. These processes, I take it, are the more relevant, one might even say more urgent, legacy of the Turing Test today than its common associations with “reductionist” views of (disembodied) human intelligence and outsize claims for the powers of computing technology. More importantly, however, they might also be used to gain some purchase on understanding the ways that a wide variety of social processes function in the present.

We might coordinate the import of Turing's test in relation to contemporary arrangements of technology, media, and persuasion around three central points. The first is what we might call a new ontological framework of interaction anticipated by Turing. As Lev Manovich has argued, insofar as we can claim that there is such a thing as an “ontology of the world according to a computer,” a unique logic for identifying entities and their relationships, it is one composed of two parts: data structures and algorithms (Language 223). Although there is a variety of ways particular computer programs may couple or associate these two structures, in rough parlance, any entity identified by a computer—whether it be the kind of explicit quantitative units we typically associate with data or any object whatsoever, including such phenomena as changes in objects or processes—is considered a data structure. Any routine or task involving data structures is encoded in an algorithmic format, a series of commands that make associations between data structures and execute actions in response to rules-based directions for particular scenarios. Although one might see reductionism in the data structure component of this system—insofar as the representation of something by some other symbol is always a reduction of the “totality” of the original in some way—the potential complexity of the algorithmic component is perhaps the more notable or unique component of computing in the present moment. The creation of a variety of subroutines and nested commands within computer algorithms, as well as the ability to arrange commands around “if then” scenarios or threshold events, marshals a kind of complexity that in many ways compensates for the simplicity of the data structure as a mechanism. In other words, although our popular image of the kind of activity that takes place in contemporary computing is likely one of a variety of phenomena being reduced to a series of symbols (such as those inexorable “ones” and “zeroes” that became identified with digital code), the really striking progression of computing technologies has been the increase of algorithmic complexity in the pursuit of making ever more flexible responses to tiny changes in data structures and their relationships.
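
To make this two-part ontology concrete, consider a minimal Python sketch built around a deliberately trivial example of my own devising (a temperature reading and a rule-based response): the phenomenon is encoded as a data structure, and an algorithm of “if then” directives and threshold events acts on small changes in it.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """A data structure: a phenomenon, here a change in temperature,
    represented as a structured unit the program can operate on."""
    sensor_id: str
    temperature: float
    previous: float

def respond(reading: Reading) -> str:
    """An algorithm: commands that associate data structures with actions
    via rule-based, 'if then' directives for particular scenarios."""
    delta = reading.temperature - reading.previous
    if reading.temperature > 30.0:     # threshold event
        return "trigger cooling"
    elif delta > 2.0:                  # respond to a small change, not an absolute state
        return "log rapid rise and re-check in one minute"
    elif delta < -2.0:
        return "log rapid fall and re-check in one minute"
    else:
        return "no action"

print(respond(Reading("kitchen", 31.2, 28.9)))   # -> "trigger cooling"
```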

Perhaps a simpler way of putting this would be to state that while our understanding of the cultural impact of computing technologies, or perhaps more pointedly, our fears about its potentially dehumanizing or rigidly mechanical logic of reduction and quantification seeping into social life as a whole, have tended to focus on the “dematerializing” or decontextualizing principles of data structures, the actual social impact of computers and information technology as we have come to experience it over the past several decades has instead been parallel to the algorithmic component of computational processes. It is not the ostensibly binary logic of digital coding and the reductive symbolization of data structures that has become mirrored in contemporary culture, but rather the increasingly flexible, niche-oriented, and generically goal-oriented logic of the algorithm. Consider, for instance, some of the most famous and influential algorithms of the present: Google's PageRank and AdWords/AdSense systems. The results of a Google search, based on Google's PageRank algorithm, are determined by a wide variety of relationships. These include what we might call “generic” elements such as (1) a complex parsing of the relevance of an online site to a particular search request, (2) that site's traffic and the number of links to it from other sites, and (3) the evaluation of the “quality” of the site based on a number of factors. However, the algorithm also takes into account a variety of factors specific to data available about the user; although such information as the geographical location identified by network address has long been part of such calculations, from 2005 onward the search algorithm also started to take into account the previous search history of the user and other available information about the user based on their activity using Google and (more recently) other sites and services, including known relationships between the user and other individuals (as derived from their connections on the Google+ social network). When used in conjunction with Google's AdWords and AdSense algorithms, the primary revenue source of the company, advertising is similarly directed toward the user based on such specific and detailed factors.
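
By way of illustration only, the following toy scoring function sketches the kind of weighting just described, combining “generic” site factors with user-specific signals; every field name and weight is invented for the example and bears no relation to Google's actual, proprietary algorithms.

```python
def score_result(site: dict, user: dict) -> float:
    """Toy ranking: combine generic site factors with user-specific signals.
    All weights are arbitrary illustrations."""
    generic = (
        0.5 * site["query_relevance"]     # how well the page matches the search terms
        + 0.3 * site["inbound_links"]     # traffic and links from other sites
        + 0.2 * site["quality_score"]     # composite "quality" evaluation
    )
    personal = 0.0
    if site["region"] == user["region"]:                          # geographic signal
        personal += 0.1
    personal += 0.05 * user["past_clicks"].count(site["domain"])  # search history signal
    return generic + personal

site = {"query_relevance": 0.8, "inbound_links": 0.6, "quality_score": 0.7,
        "region": "US", "domain": "example.org"}
user = {"region": "US", "past_clicks": ["example.org", "example.org", "news.example.com"]}
print(round(score_result(site, user), 3))   # 0.72 generic + 0.2 personal -> 0.92
```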

What is perhaps most notable about this progression in algorithmic complexity is how it demonstrates a shift toward strategies based on the achievement of more general goals by appealing to more specific audiences. For example, in Google AdWords and AdSense, the purchase of any product or service, rather than of a particular product or service, is made possible by the complexity of the algorithm used to sort and source marketing media and the ability to tailor such media based on the ever more specific preferences of ever smaller numbers of people.

Turing did groundbreaking work in the theory of mathematical algorithms throughout his career, but perhaps the fairly simple modeling of such a sequence in “Computing Machinery” is the best example of this kind of algorithmic process, particularly in how this structure in computing proper has bled into a variety of cultural domains. Turing's model of computing, in which the contribution of its interlocutor, whatever it may be, is assessed and triggers a response tailored to move the process closer to a desired general objective, may not be an adequate model of “human intelligence,” but it is one that looks increasingly like the dominant logic of a wide variety of economic, political, and otherwise cultural systems in which the efficient ability to produce flexible responses to an increasingly diverse and wide-ranging array of variables or real individuals is very much, as we shall see below, the ruling logic of the day.

Perhaps this distinction might be clearer if considered in relation to our second point—the Turing Test's restriction of representation and its replacement of a logic dominated by direct representation or identification with one dominated by pure process. As we saw above, Turing's dislocation of the question of human intelligence from definition to process relies not so much on any belief in stable classifications of human perception or conception as on definitive lapses in these categories: an epistemological “blindspot” created by the inability of human thought to definitively conceive of “itself,” and a practical disruption of human vision, the conceit in Turing's game (as well as in the imitation game he is adapting) wherein a participant is not allowed to “see” her interlocutor. The inability of a participant to see the subject being investigated was of course crucial to Turing's test; this elision both created the parameters for what capacities Turing was attempting to survey—“We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane”—and served as a delimiting function on how the experiment would be staged both imaginatively and materially (435). Although feminist critics of Turing's work have pointed out that such a formulation risks eclipsing embodiment from a working definition of intelligence, it is also the component of the test that makes it transparently an exercise in determining the processes through which something like intelligence might be mapped rather than an attempt to generate a definitive conception of it in some kind of static form.

Turing's decision to model intelligence, or more accurately, what would be perceived as intelligence, by way of an (algorithmic) process runs very much against the grain of not only traditional representational strategies that might encapsulate this category but also a very long history of how “intellectual” knowledge itself, specialized or meta-level knowledge “about” something, is conceptualized and expressed. Namely, Turing's focus on the restriction of vision and representation is very much inverse to a centuries-old Western tradition reliant on foregrounding some variation on (visual) perception/representation. This is true from ancient Greek philosophy, in which the capacity to have special insight into phenomena became known as theoria (“looking at” or “gazing at”), to what Martin Jay calls the “ocularcentric” tradition of Western intellectual thought, one in which the correct apprehension of reality is equated with “seeing clearly” and being able to represent or diagram in some fashion the relationships between often invisible or obscure substances and forces.6 This tendency continues, somewhat paradoxically, even in what Jay dubs the “antiocularcentric” tendencies of critical or anti-foundational approaches to metaphysics and philosophy; from Marx and Engels' famous reference to ideology working like a camera obscura that makes “men and their circumstances appear upside down” to more recent metaphors of blindness (Paul de Man), parallax (Slavoj Žižek), and the more general interest in the (sublime) “unrepresentational” qualities of diverse areas of aesthetics or experience (Lyotard, Levinas) (14).7

This is not to say, of course, that “Computing Machinery” does not “represent” anything, or that it uses some radically strange form of communication or expression, but what it presents is not so much a description of qualities or figurations as a series of formulas and techniques. In this sense, then, Turing's essay, particularly given the subject matter under review within it—the relationship between computing machinery and human perception and cogitation—is a particularly salient anticipation of new economies of media and mentalité that would become dominant in the era of ubiquitous information technology; perhaps more importantly, it might also be taken as an avatar of contemporary systems that rely far less on the management of particular representational schemas and the control of what “counts” as accurate representation or veridical truth (a control that serves to exclude what does not), and more on the effective management of a variety of processes of political-economic interaction that attempt to include as many divergent beliefs, values, and modes of decision making as operationally possible.

Third and perhaps, at this point, most obviously, Turing's test can be seen as marking the new regime of two interrelated processes that we have been tracking throughout this essay: the conflation of technics and media and the modeling of natural systems in inorganic realms. While Turing's essay is often read as an early document in theories of artificial intelligence (an artificial agent that might make goal- and self-oriented decisions) as opposed to artificial life (the modeling of life systems in artificial environments), Turing's anticipation of human-machinic interaction in “Computing Machinery” best demonstrates what would appear to be a third category, what we might call for lack of a better term the inorganic ecologies or artificial liveliness that were central subjects of much cybernetics research. Here the endeavor is not so much the formal pursuit of simulating something that might fit an ideal definition of “intelligence” or “life,” but rather the design of objects, environments, and spaces of interactivity that mimic particular elements of life systems, such as the maintenance of equilibrium and structural adaptation.

PARAMETRIC CULTURES

We might more formally define the parametric as a conceptual paradigm distributed across a variety of contemporary technological and cultural domains—the spread of the new arrangements between techne and logos or technics and media anticipated by Turing's test beyond the realm of information technology proper—around four core principles; each of these might be read in contrast to both the earlier regimes of techniques described in this chapter as well as more general critiques of the ostensibly dehumanizing or disembodying effects of digital culture and information technologies (which, as I suggest above, may more appropriately describe the prognometric or “calculative” techniques now being eclipsed). First, powers and purposes that used to be attributed to processes of direct representation, whether of an indexical or ideational nature, are transferred to more dynamic processes that work to maintain relationships between a variety of elements. Second, whatever we might call social conditioning or “social power” works primarily through logics of inclusion rather than exclusion, an attempt to integrate and draw value from as many heterogeneous identities or behaviors as possible. Third, in control systems of any type (technological or social), the pursuit of specific goals is secondary to maintaining a systematic equilibrium wherein a spectrum of “acceptable” results will likely take place. Fourth, communities or collectivities based on sustained identification or commitment give way to temporary associations based on shared actions; as a result, social strategies based on the creation of sustained subjective investment or strong dispositions are largely outclassed by those more efficiently focused on producing specific effects at specific moments. Collectively, all four of these tendencies might be taken as demonstrative of the moment when the dominant “logic” of culture begins to take on algorithmic or parametric forms.

In many ways, we have already encountered manifestations of several of these principles in this text. Our discussion of changes in scientific epistemology and practice, in which the historical role of science as the privileged discourse of epistemology and representational validity largely takes a back seat to its pragmatic applications, is certainly apposite. Similarly, our analysis of what we coded the technologic of contemporary capitalism, particularly in the ways that production and consumption have been connected in virtual feedback loops, would seem to fit fairly easily into this schema. We might additionally, however, detect parametric tendencies in a wide variety of other domains.

Architecture

Indeed both “parametric” and “algorithmic” have become established terms in architecture, used to denote new design methods in which software programs and computing power are leveraged extensively to produce a kind of dynamic “program,” as opposed to a static blueprint, for the construction of a physical structure. Most commonly, such programs work to maintain the integrity of a designed structure by automatically adjusting all elements of a design when another element is manually changed, but they are also used to generate multiple patterns and structural forms from defined criteria. If the central challenges of the practical nature of design—the literal production of a representational model on which a design will be based—used to be accuracy, fidelity, and iterability, algorithmic design methods call on architects to instead create ranges and a spectrum of acceptable effects (not only material “feasibility” but sizes, shapes, and accordance with other limitations of a design site or its environments). In this sense, such techniques instigate a radical shift in the representational economy of structural design, one in which, in the words of architectural historian Mario Carpo, architects are “leaving behind a universe of forms determined by exactly repeatable, visible imprints and moving toward a new visual environment dominated by exactly transmissible but invisible algorithms” (100). Much as advances in the ability to quantify spatial and visual units were key to the increased fidelity between the design and execution stages of an architectural structure, and increased capacities in calculating the dimensions of irregular structures and the “adequality” or interstitial space between unlike elements were key to the rise of the Baroque style, the contemporary use of parametrics might be taken as inaugurating a change of similar magnitude. Using computer-aided algorithmic techniques, it is possible not only to realize unusual visual or material forms in concrete structures, but also to set into motion the process of designing aleatory forms that are, sometimes quite literally, unknown or even inconceivable by the architect before being produced by a program.8
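
A crude sketch, in Python, of what “parametric” means in this context: the design exists as a set of relationships, so changing one driving parameter recomputes the dependent elements and checks them against a range of acceptable values. The particular parameters and limits below are hypothetical.

```python
class FacadeModel:
    """A toy parametric model: panel dimensions are derived from a few driving
    parameters rather than drawn as fixed values."""
    def __init__(self, total_width: float, panel_count: int, max_panel_width: float = 1.5):
        self.total_width = total_width
        self.panel_count = panel_count
        self.max_panel_width = max_panel_width

    @property
    def panel_width(self) -> float:
        # Dependent element: recomputed whenever a driving parameter changes.
        return self.total_width / self.panel_count

    def within_constraints(self) -> bool:
        # The design is acceptable anywhere inside a range, not at a single value.
        return 0.4 <= self.panel_width <= self.max_panel_width

facade = FacadeModel(total_width=24.0, panel_count=20)
print(facade.panel_width, facade.within_constraints())   # 1.2 True

facade.total_width = 36.0   # change one element of the design...
print(facade.panel_width, facade.within_constraints())   # ...the rest adjusts: 1.8 False
```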

Here, again, it is worth noting how the introduction of automated techniques or the essential use of computing technologies into this field has had the opposite of the effects one might initially assume. In a strange sequence, the overtly mechanical or minimalist forms that we might intuitively associate with computer-aided architectural works are probably better represented by structures that predate them, the iconic buildings of modernist architecture and what cultural sociologist Mauro F. Guillén calls their “Taylorized beauty,” a style evocative of both industrial environments and the scientific management of spatial efficiency. Algorithmic architecture, on the other hand, has most consistently been directed toward the realization of organic forms and ecological relationships, best represented not by clean geometries, but by the “blobitecture” of creators like Greg Lynn, who uses a variety of computer-aided techniques to mimic what he calls an “anorganic vitalism” in both the design process and physical appearance of his works.

Still further, we might remark on the counterintuitive ways that the availability and ubiquity of massive computing power have also led to a scenario inverse to the supposed denigration or elimination of space and structure by networked communication technologies. Around the turn of the twenty-first century, what William J. Mitchell calls the “indispensable representational role” played by buildings seemed to be in the process of being gradually replaced by a variety of digital and virtual sites: company web presences for communication and e-commerce take over the representational role once played by iconic headquarters; elegantly organized databases take over from well-designed stacks in libraries; theatres give way to content collections of recordings and live streams (84). However, in the last decade or so, at least, we have seen a return of sorts to the production of iconic and monumental structures ostensibly made redundant by digital culture—even as many of them have been made possible by the same advances in computing power and availability. The difference, such as it is, might be that the symbolic value of such sites, as well as their role in indexing advances in technology and aesthetics, is based not on the industrial might or intrinsic artistic inventiveness of their creators and sponsors, but on the progression of computing power and algorithmic schemes for realizing them.9

(Mass) Media

The most visible impact of contemporary information technology and its attendant techniques on mass media, as a nonstop parade of “death of the newspaper” features has told us over the past several years, seems to have been one of displacement and fragmentation. If, as Jürgen Habermas among others has detailed, the progression of mass print media from the early 1800s onward has been one of consolidation—local and parochial print organs being absorbed or bought by national, and then international, firms—and a resulting homogeneity of content and distancing from their core audience, movements from the late twentieth century onward might be seen as a total reversal of this trend.10 We have, on the one hand, a disaggregation of the functions of centralized media like newspapers (whether in print or online) to a variety of new, typically localized or “local-centric” networked media—Craigslist takes over as the major avenue for classified ads, hyperlocal blogs take over local news coverage, and social media feeds become our new go-to resources for breaking news and trend-spotting. On the other hand, we have an integration of the formerly “passive” reader or viewer into the production or content of media itself; the carefully curated and relatively rare “letter to the editor” gives way to the infinity of unfiltered comments on online news and blog features, and by-now quaint features like having audience members report the temperature in their backyard or submit on-air questions for television news anchors to field are crowded out by the introduction of scrolling social media feeds and user-submitted video and audio of newsworthy events. Perhaps most notably, we have in many cases the total replacement of the professional journalist or commentator by “citizen journalists” or the crowdsourced coverage of events.

It is also worth remarking on how the parametric qualities of niche media have, perhaps paradoxically, been useful in maintaining particular equilibriums of opinion and disposition despite the ostensibly huge increase in perspectives and arguments of all stripes in the mediasphere. If the primary negative effect of the centralization of media into large firms was that it tended both to restrict interaction between its producers and consumers and to limit the range of acceptable positions or opinions to those that were unlikely to offend a large audience—the qualities that, for good or ill, made media mass media—then it would seem that the vast proliferation of media sites and opportunities for interactivity would correct these problems, and thus perhaps even help inaugurate a new and more ameliorative modality of civil society or the public sphere. However, as you need go no further than the nearest comment thread on a controversial news story or partisan blog post to see, something like the opposite has been the case. Indeed, the unparalleled ability for audiences to search and access media created by individuals with whom they already share political dispositions, as well as for aggregative technologies and media portals to “push” such media based on data collected about their users, has instead likely resulted in a decline both in an individual's exposure to opposing viewpoints and in “rational-critical” communicative interaction between individuals across partisan divisions. Indeed, the realm of contemporary niche media looks much less like a global village or universal agora, and much more like an ever more intense balkanization of our political or ideological landscape.

Education

It might be fair to say that education, at least university-level education of the past three to four decades, has been at the forefront of the integration of parametric techniques into praxis. Indeed, since the 1960s we have seen the introduction of a wide variety of techniques that orient pedagogical goals away from the teaching of discrete information and toward a variety of more generic capacities, as well as the creation of a multitude of practical processes that attempt to microtarget and adapt to individual students' learning strategies and deficits, and/or coordinate the educational experience around collaborative relationships between students. The emergence of process pedagogy in writing instruction, for instance, modeled a shift away from placing priority on a student's finished projects (the “product”) and the evaluative and teaching methods appropriate to that approach, replacing it with a focus on teaching the discrete processes that form the smaller components of effective writing, and, subsequently, the evaluation of a student's work as a progression from the beginning to the end of the class (portfolio evaluation), and the integration of self-reflection on the part of students as a method for making their writing processes apparent to both themselves and their instructors. To give just one more example, the increase in “problem-based learning,” particularly in medical and engineering fields, served to redefine learning as the gaining of flexible capacities to analyze and provide responses to loosely defined problems for which multiple solutions are likely effective. All of these processes are often performed in some way collaboratively between student groups, joining a variety of other practices—peer learning and peer evaluations—that have reconfigured teaching as the arrangement of relationships effective to learning rather than the explicitly hierarchical “transmission” of knowledge and skills from instructor to students.

To these examples we might note, yet again, the ways in which the actual introduction of information technology has had effects far different from initial predictions. The big “innovation” for education promised by networked communication technologies and online environments was supposed to be, at least for teachers of an earlier vintage, the denigration of the importance of the school as a physical site and the related enrichment and streamlining of existing capabilities for “distance education.” The concern about these developments was that they would lead to a kind of formulaic approach to teaching, in many ways the very occlusion of the “best practices” referenced above that developed around student-centered “individualized” learning. Fast-forward to the present, and the fastest-growing area of technology-in-the-classroom initiatives is the creation and use of “media rich” and “flexibly responsive” online environments—the variety of online portals through which students' performance on diagnostic exercises and other available data are used to custom-design sequences of activities and exercises targeted at their particular strengths and weaknesses. If these systems sometimes still have the patina of what used to be called “skill and drill” approaches to teaching, they are also ones in which both sides of the equation—the measurement of a student's skills and the exercises they are asked to complete—are constantly being reconfigured in relation to one another and always around the individual student.
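
A minimal sketch of such a “flexibly responsive” loop, with hypothetical skill names and a deliberately simple selection rule (target the weakest measured skill), not drawn from any particular learning platform:

```python
def next_exercise(diagnostics: dict, exercise_bank: dict) -> str:
    """Pick the next exercise for an individual student by targeting the skill
    with the lowest current score; both sides of the loop are revised each round."""
    weakest_skill = min(diagnostics, key=diagnostics.get)
    return exercise_bank[weakest_skill]

diagnostics = {"thesis statements": 0.8, "paragraph transitions": 0.4, "citation format": 0.6}
exercise_bank = {"thesis statements": "revise three sample theses",
                 "paragraph transitions": "reorder a scrambled essay",
                 "citation format": "correct a flawed works-cited page"}

print(next_exercise(diagnostics, exercise_bank))   # -> "reorder a scrambled essay"

# After the student completes the exercise, the measurement itself is updated,
# and the next selection shifts accordingly.
diagnostics["paragraph transitions"] = 0.7
print(next_exercise(diagnostics, exercise_bank))   # -> "correct a flawed works-cited page"
```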

Finance

The dominance of financial speculation (investment driven not by analyses of existing underlying value but by presumed future sale prices) and the increased “interconnectivity” of various stock markets and market sectors are often referenced as the defining factors of contemporary capitalism, with some intersection of the two often cited as the key cause behind the recent global financial crisis. These two attributes themselves, however, are hardly new techniques in capital's history: indeed, one might say they are more or less what “made” capitalism into a recognizable economic system. Giovanni Arrighi has detailed in his sprawling historical work The Long Twentieth Century how speculation's appearance at the center of investment strategies signals the start of something like a Nietzschean “eternal return” for state and then global capitalism since at least the fourteenth century, an indicator of systematic crisis that heralds the beginning of a new cycle of accumulation, expansion, and investment (214–238). Likewise, market interconnectivity, we might say, is a prerequisite for what any living person today would recognize as capitalism. We have records going back to at least the mid-1700s of the ways in which market failures and weaknesses led to immediate changes in price and trading behavior in geographically separate trading spaces, and the increased connectivity of markets could be seen as one of the most reliable and consistent features of capitalism's history.11

Rather, I take it that the truly novel characteristics of finance today, the changes that people are actually trying to refer to when “speculation” or “connectivity” are discussed in this fashion, are the increases in the magnitude and variety of market relationships and in the subjects of market speculation, as well as the ways these phenomena are themselves parametrically or algorithmically connected in contemporary financial exchanges. We can see this characteristic formally in the ways that modern trading is based not so much on calculations of underlying value or even presumed future value, but on the present and future state of a vast variety of associations between commodities, prices, and currencies—ones largely defined by the mechanisms of formal exchange itself. For one, there is a vast expansion of the kinds of things one can “speculate” on, including, in many ways, others' (or one's own) speculations on the value of commodities; all investment trends toward being a virtual form of arbitrage, exchanges based on the relative values of different commodities and financial instruments. If stock exchange regulators of the late nineteenth century were disturbed by the idea that investors might be involved in speculation around agricultural products they do not actually plan to physically transfer, we can only imagine their absolute bewilderment at the vast expansion of “financial products” in the late twentieth century to include percentages of the future profits of a city's parking meters or the bundling of mortgages into securities.12

As Daniel Beunza and David Stark write in their ethnographic study of an arbitrage firm, the modern-day trading room “is an engine for generating equivalences”: “traders locate value by making associations between particular properties or qualities of one security and those of other previously unrelated or tenuously related securities” (373, 376). As they go on to argue, arbitrage follows a logic and methodology far different from the “momentum trading” of the go-go 1990s, and in many ways the inverse of the corporate raiding strategies of the 1980s. While raiding strategies are based on carving up a company into assets and selling it piecemeal, arbitrageurs “carve up abstract qualities of a security”:

For example, they do not see Boeing Co. as a monolithic asset or property but as having several properties (traits, qualities) such as being a technology stock, an aviation stock, a consumer-travel stock, an American stock, a stock that is included in a given index, and so on. Even more abstractionist, they attempt to isolate such qualities as the volatility of a security, or its liquidity, its convertibility, its indexability, and so on. (376)

In other words, in the realm of arbitrage, what matters is not the ability to break down an asset into component parts that might somehow be more valuable than their sum, but the ability to identify and manipulate a wide variety of its connections and co-implications with other phenomena, the flexible management of associations rather than the efficient calculation of an underlying value.
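
Schematically, and with placeholder tickers and traits rather than actual trading data, the “carving up” Beunza and Stark describe might be sketched as follows: each security is represented as a bag of abstract properties, and candidate associations are generated from overlaps between otherwise unrelated securities.

```python
def property_overlap(security_a: dict, security_b: dict) -> set:
    """Generate candidate equivalences by comparing the abstract qualities
    of two securities rather than any 'underlying' value."""
    return security_a["traits"] & security_b["traits"]

boeing = {"ticker": "BA",
          "traits": {"technology", "aviation", "consumer travel", "US-listed", "index member"}}
airline = {"ticker": "XYZ",   # hypothetical security
           "traits": {"aviation", "consumer travel", "index member", "high volatility"}}

shared = property_overlap(boeing, airline)
print(shared)   # the associations a desk might trade around, e.g. {'aviation', ...}

# The trade is on the relationship: if the shared traits imply the two should move
# together and their prices diverge, the arbitrageur bets on the spread closing.
```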

Democracy

Or at least that aspect of democracy that is most immediate and “direct” to most of the world's population: voting and the vast apparatuses of campaigning and persuasive messaging that surround it. As briefly touched on in the last chapter, there has been something of a merging or recursion of techniques surrounding political campaigning and more traditional or explicit forms of marketing around consumer products and services. The disappointing contention that our exposure to the ideas and actions of politicians and candidates has looked more and more like entertainment media and advertising—the increased use of televisual and later media increasing politicians' resemblance to actors on stage, and political ads taking on the kind of vulgar appeals once reserved for selling used cars and high-end electronics—is by now a familiar one; perhaps what is new about this intersection is how both increasingly traffic in data demography and the niche-driven targeting of smaller groups with more specific similarities.

Although voters have long been targeted in line with their presumed allegiance to particular “interest groups” (those invested in, for instance, income tax decreases, or against expanded international trade), in the last decade or so data-aggregating companies working for campaigns and political parties have combined computing power and the increasing availability of demographic and consumer information to provide their clients with a seemingly infinite number of cross-indexed voter niches. Such categories are not so much discovered as they are created by cross-referencing and compiling “psychographic” characteristics not necessarily associated with each other. One then develops ways to reach out to such groups to demonstrate how a candidate's (old or new) values mesh with theirs. As veteran Republican campaigner Dan Schnur explains, although the general public might intuitively presume that campaign personnel develop a message and then test it to see how different segments of the electorate will react, the opposite is true of modern-day electioneering, which works by “identifying the various communities within the electorate and then shaping a message (or messages) that can reach each of these groups in the most beneficial manner” (360).
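
A toy rendering of this niche-construction, in which every field, group, and message is invented for illustration: voter records are cross-indexed into ad hoc groups, and the message is shaped to the group rather than the group discovered for a fixed message.

```python
voters = [
    {"name": "Voter 1", "owns_hybrid": True,  "subscribes_outdoor_magazine": True,  "age": 44},
    {"name": "Voter 2", "owns_hybrid": False, "subscribes_outdoor_magazine": True,  "age": 67},
    {"name": "Voter 3", "owns_hybrid": True,  "subscribes_outdoor_magazine": False, "age": 29},
]

# Niches are created by cross-referencing traits that need not "naturally" belong together.
niches = {
    "conservation-minded commuters":
        lambda v: v["owns_hybrid"] and v["subscribes_outdoor_magazine"],
    "younger hybrid owners":
        lambda v: v["owns_hybrid"] and v["age"] < 35,
}

messages = {
    "conservation-minded commuters": "candidate's record on trail funding and fuel standards",
    "younger hybrid owners": "candidate's plan for transit and student debt",
}

# Each constructed group receives its own tailored message.
for niche, test in niches.items():
    targets = [v["name"] for v in voters if test(v)]
    print(niche, "->", targets, "|", messages[niche])
```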

All of these phenomena, I take it, are manifestations of a parametric logic that has been spreading across contemporary culture over the last few decades. Perhaps what is most striking about them, as alluded to above, is the ways in which they seem to have had quite different, even oppositional, effects from those most commonly associated with information technology and the statistical-analytical and data-mining techniques associated with computing. If our default imagination of both—particularly as they began to become ubiquitous aspects of social life—has trended toward fears of their excluding, homogenizing, or depersonalizing tendencies, recent history has largely proven the opposite to be the case. Insofar as such techniques and technologies form a crucial part of contemporary political economy and cultural life, they have done so by maximizing the opportunities for inclusion, heterogeneity, and the “personalization” of everything from shopping recommendations, to medical treatment, to political messaging. Far from reducing everything through some inhuman logic of reduction or standardization, we might say in fact that contemporary technics has been very much premised on what Nietzsche called those “human, all too human” capacities toward “the overturning of habitual evaluations and valued habits,” the constant transformation and creation of new conceptual categories and associations that occupies a wide range of economic, aesthetic, and otherwise social life today (Human 5).

All of this, of course, still leaves us with a series of pressing questions, many of which might be taken as amplified versions of the same monitory concerns that drove Heidegger's late-career writings on technics and the dissolution of metaphysics into cybernetics. Even if the spirit of technology today is very much the inverse of our usual “mechanistic” or reductionist understandings of the overcoding of the natural by the artificial, does that not suggest a level of appropriation within technical systems far more extreme than even Heidegger feared, one in which the “nature of nature” and of organic life is itself subsumed and simulated by media and technology? Is the tendency of capitalism and social power to integrate and draw value from a much broader range of identities and behaviors merely a symptom of its total saturation of cultural realms, this logic of inclusion much harder to disrupt than earlier forms of power that functioned via the exclusion or isolation of the “abnormal” and abject? Does this latter point, in particular, testify to the obsolescence of what we came to call “theory,” the “intellectual” component of progressive or resistant social movements? In short, it is tempting to conclude that the eclipse of representation “itself” as a dominant mode of cultural production and contemporary social life is, if anything, only a more pointed or painful symptom of Heidegger's “darkening of the world”—and one even less available to be illuminated by traditional modes of critique, or by comparisons with an earlier era where experience and individuality could be seen as more immediate or “authentic” (Introduction 47).

There is much to support such a conclusion, particularly as long as we continue to presume that our best methods for understanding and intervening within social power come by positioning ourselves as “oppositional” to its functions or “above” or outside its structures of reason or representation. However, allow me to give the hermeneutic circle one more spin here and suggest a different conclusion: that the convergence of technics and media today, the gradual transition of dominant logics of culture from the form of logos and metaphysics to structures and arrangements more traditionally restricted to realms of techne and rhetoric, makes such distinctions beside the point. In other words, if the previously separate logics of the organic and artificial, the authentic and the ideological, of domination and of resistance, have reached a kind of null parity in the saturation of culture by cybernetics or parametrics, then we will have to do some very hard thinking about what possibilities exist within this new realm—how it might be oriented to produce different effects, and how we might fashion effective rhetorical, political, and ethical strategies in response to them. In large part, the rest of this book is dedicated to precisely these questions—diagramming the ways in which our understanding and practice of persuasion and motivation, power, and ethics could and should be rethought in response to this new mode of culture. Right now though, and by way of concluding this chapter, I want to answer what I will argue might actually be the easiest of the questions listed above—the status and potential of “theory” today.

THEORIES OF TECHNIQUES, TECHNIQUES OF THEORY

To reprise one of the situations with which this chapter began, recall that many of the recent participants in the “death of theory” discourse of the past decade or so read theory's presumed demise as a portent of, or synecdoche for, a greater shift in contemporary social power. What we might call the theory of “the death of theory” is that its failure to be a vital force in influencing contemporary culture is the canary in the coal mine that signals more far-reaching and dangerous cultural trends, namely, the morbidity of “critique” as a viable political tool and the co-option of theory's traditional methods of complication, contextualization, and skepticism by precisely the kind of retrogressive social actors that used to be the subject of such analyses.

Thus we might say that the questioning of the validity of critical theory is not an attempt to retroactively indict the operation as it has been practiced across the history of intellectual and academic thought, but rather to acknowledge its present collusion with, or dissolution within, the broader context of social power, its having become of a piece with the forces of domination or subjection that it was intended to combat. Such a contention, however, might itself prompt a certain limited rereading of the recent history of critical theory or of its necessary or constitutive qualities. We might, as Hardt and Negri suggest, take the popularity of postmodern theories of difference inside and beyond the domain of critical theory as merely a “symptom of passage” into the times in which flexibility in identity and behavior becomes a crucial vector of value production under international capitalism. Or we might alternately simply presume that such contemporary conditions deprive critical theory of what Wendy Brown (channeling Nietzsche) refers to as its constitutive quality of being “untimely”—as a hesitating or oppositional response to current conditions and crises, one that seeks to produce “a rupture of temporal continuity, which is at the same time a rupture in the political imaginary, a rupture in a collective self-understanding dependent on the continuity of certain practices” (“Untimeliness” 7). In the final analysis, though, whether we read it diagnostically as a symptom of passage or as just a sign of the times, the conclusion seems to be the same: critical theory has lost its power as its methods have become at home within the cultural and political systems that are supposed to be its “object” of critique.

However, and perhaps in homage to the traditional conception of critical theory as a force that “slows down” lockstep conceptual linkages or even inverts (in the style of Marx and Adorno) conventional wisdom, I will suggest that this is too hasty a conclusion, and that, in fact, the opposite may be the case. We might consider this along two related points.

First, it seems rather odd, at least to me, that the absorption of the traditional tools of critical theory into social power “itself” would be read as the sign of its failure. Indeed, it would seem that, instead, the integration of critical-theoretical touchstones—the identification of potential biases in logic or representation, the challenging of binary or reductionist categories, and, perhaps most importantly, the emphasis on the importance of culture in shaping human reason, disposition, or self-identification—is a ringing endorsement of critical theory's success. More specifically, if one of the original notions of critical theory was its thematizing of capitalism and institutional governance as prizing effectivity and efficiency over everything else, then it would seem axiomatic that the appropriation of critical theory into these domains is an undeniable testimony to the “power” of critical theory.

Perhaps it is easier to see things this way from the perspective of the appropriator rather than the appropriated. Consider, for instance, a recent interview with Andrew Breitbart, the publisher and multimedia mogul who—from his pioneering of social and “user-driven” media for conservative causes, to his role in fomenting the so-called Tea Party—has been involved with virtually every version of the “theft” of critical-theoretical practices by conservatives. Speaking with the New Yorker, Breitbart goes into great detail about the initial confusion he experienced when exposed to critical theory in his American Studies courses as a Tulane undergrad—“What the fuck are these people talking about? I don't understand what this deconstructive semiotic bullshit is. Who the fuck is Michel Foucault?”—before he realized the “real” lesson of critical theory (Mead). Arguing for a fairly straight line running from the emigration of Frankfurt School intellectuals to the U.S. in the 1930s to the election of the “radical” Barack Obama to the U.S. presidency over a half-century later, Breitbart emphasizes the revelatory effects of understanding the central subtext of critical theory—that culture can shape politics and opinion—as well as of the by then noticeable legacy of critical theory as a pedagogical or pragmatic undertaking, the circulation of theories about culture into culture, education, and the arts. For Breitbart, the American left has historically been more capable of using this strategy to their advantage, putting American conservatives at an extreme disadvantage:

The left is smart enough to understand that the way to change a political system is through its cultural systems. So you look at the conservative movement—working the levers of power, creating think tanks, and trying to get people elected in different places—while the left is taking over Hollywood, the music industry, the churches. They did it through academia; they did it with K-12.

Whether or not one wants to read Breitbart's comments as part of a larger narrative in populist conservatism that seeks to flip the script of liberal critiques of power by positioning “liberal elites” as the true taste-makers and ideological programmers of present-day society, I take it the effect is largely the same: whether driven by the “actual” success of such strategies in reshaping political economy, or by the performance of their effectiveness in being appropriated by right-wing pundits such as Breitbart, the strategies of critical theory are far from moribund ones in contemporary culture.

Additionally, and to move on to our second point, I take it that if rehearsing the reading of a historical ontology by reference to changes in techniques rather than reason, representation, or technology as abstracted properties has anything to teach us, it is this: that (critical) theory has always been present or effectively operated as a technique, as a persuasive force for changing individuals' dispositions and conceptual frameworks, rather than a negative “exposure” or demystification of reason or representation or the revelation of the “reality” hidden behind social conditioning. Indeed, even a cursory rereading of the history of critical theory along the lines of what it does rather than what it claims would reveal that this has always been what we might call the “function” of critical theory.13 To somehow disparage the enterprise because it seems to have been successful in this function is to stubbornly cling to the Heideggerian separation of thought and technics, the insistence that even attempting to measure critical thought's value in practical terms is paradoxical, that the idea that one might evaluate it “according to the standards that one would otherwise employ to judge the utility of bicycles” is a blasphemy against its very nature.

Finally, then, we might be able to give a more affirmative reading to two of the memorable phrases about the value of critical theory, and the state of culture after cybernetics, with which we began this chapter. Foucault's contention that theory can work to “show people that they are freer than they feel” might be more accurate than ever, as long as we alter the first part to read that theory's aim is to convince people that they are freer than they really are, not to somehow “free them” through demystifying the ideological frameworks or “social constructions” that are somehow limiting their behavior, but to actively persuade individuals to practice or perform actions and effects that they may have believed to be inaccessible. And in this sense, then, Heidegger may indeed be right that today's age is marked by a “will to will,” but we would also have to reevaluate or reinterpret that force as a very important one, indeed. To convince someone to will something, to persuade individuals that they can have an intense investment or disposition toward particular goals or results is no small feat. And in both of these endeavors, critical theory has proven to be a quite effective technique.