Introduction

If inhabitants of the twentieth century were tyrannized by images—cinematic, televisual, advertising—we in the early decades of the twenty-first are surely ruled by data. Our whereabouts, health, habits, interests, affinities, social connections, and communications are logged, tagged, located, recognized, interpreted, stored, and circulated in corporate and governmental databases. Photographic images, which were once tasked merely with capturing reflected light on bodies in space, now bear responsibility for a degree of self-knowledge that was unthinkable for the first 150 years of the medium—where they were taken, when and by whom, to say nothing of volumetric data and information about the technologies and contexts in which they are captured, rendered, saved, or shared. It may be something of an overstatement to say that data and images are “at war,” but taking the evolution of their relationship seriously means unraveling an entangled history that illuminates the cultural preoccupations, limits, hopes, and anxieties that attend the visual culture of our own age.

A generation ago, Susan Sontag argued that images got in the way of experiencing reality; that photography came to stand in for both the immediacy of experience and the memories that could be called forth later. In today’s world, the capture and analysis of data do not yet constitute the same kind of barrier between individuals and their perception of the world, but data has an undisputed impact on the organization of our ways of thinking and what it is possible to know. Sontag’s description of photography as co-implicated with systems of power, however, is directly relevant to the discussion of data: “To photograph is to appropriate the thing photographed. It means putting oneself into a certain relation to the world that feels like knowledge—and, therefore, like power.”1 Writing in the early 1970s, Sontag found much to be melancholic about when analyzing the role of photography in politics, memory, and history. Her description of Nixon-era America as “a sad, frightened time” seems no less applicable today.2 But where Sontag diagnosed the proliferation of photographic images as an attempt—whether conscious or unconscious—to record the disappearance of the premodern world, today’s mass extinctions and environmental destruction far outpace the capacity for mere images to remember, even with billions of lenses carried in pockets around the globe.

Despite their elegiac tone, Sontag’s words are hardly the stuff of naïve romanticism. She saw with painful clarity the instrumentalization of photography in the interest of suspect ideologies and unreflective power. “Cameras began duplicating the world at that moment when the human landscape started to undergo a vertiginous rate of change: while an untold number of forms of biological and social life are being destroyed in a brief span of time, a device is available to record what is disappearing.”3 Such words ring hauntingly in times that have been described as the early or late stages—does it really matter which?—of the Anthropocene, a reframing in finite terms of humanity’s planetary reign. To trace our descent toward global destruction, one needs a different system of documentation, one capable of tracking in real time the gasping downward spiral of a billion points of data. The talismanic function of photography that Sontag describes in service of sentimentality is not available to large-scale data systems. We may feel helpless at the sight of melting glaciers and ice floes, but how does one conjure sentimentality for a data flow? On the contrary: the presence of data to describe epochal patterns of decline mendaciously suggests the inevitability of an institutional response. If such events are well known enough to have their data captured, graphed, and visualized, it is tempting to believe that surely someone must be doing something about it.

Images—and the methods we have developed to study them—have much to teach about today’s digital culture, but first we must speak with precision about the evolving relationship of images and data. Although it is early in the game, allow me to put all my cards on the table: this book proceeds from the supposition that no great divide separates pre- and postdigital culture. I put this bluntly as a strategic maneuver, not to disavow the many exceptions to its rule. It is certainly more seductive to argue the opposite, that digital culture offers an opportunity to rethink timeworn cultural and critical paradigms. New critical models and objects of study certainly help illuminate the specificities of digital media, but we should not forget the lessons of the past regarding social consequences and the fact that today’s imaging and data systems are just as ideologically inscribed as their predecessors. The best of visual culture theory was never solely about representation. It always engaged systems of power and knowledge alongside the pleasures and politics of perception—all of which remain centrally relevant to the study of digital culture. For the time being, then, we disserve the study of data if we suppose that it requires a whole new set of critical or subdisciplinary paradigms.

I say this advisedly, knowing that some of my most respected colleagues would and do argue otherwise. Subfields such as software studies, code studies, and platform studies have achieved broad recognition and influence in recent years. But these paradigms offer little that is digestible to historians of visual culture. Equally, the branch of art history that focuses on visual media in the digital era remains ill equipped to address the specifics of hardware and software necessary to produce its objects of study. The benefits of treating this emergent area of investigation as a “new field” are many, including the impetus to create specialized academic apparatuses, such as book series, journal publications, faculty lines, and curricular revisions. These are all predicated on the need to assert the specificity of digital media in order to articulate the new methods and competencies required to properly study them. The danger of this model lies in creating an insular field of discourse that is alienating to those who do not write or understand computer code. A new kind of “digital divide” thereby threatens to exclude many people, especially in the arts and humanities, who might otherwise introduce useful perspectives or alternative voices in the evolution of these fields.

While I remain sympathetic to the many passionate calls for universal code literacy—and I confess that every academic program I have been involved with for the last fifteen years has actively promoted coding as a foundational twenty-first-century literacy—I believe it is ultimately more productive to imagine the critical study of hardware, software, and the media objects they produce as coextensive with critical models developed around analog media and cultural studies. In part, the benefit of such continuity lies in the persistence of hard-won advances in areas such as feminism, critical race theory, and models linking popular culture and technology to issues of class, sexuality, and politics. Starting over with a newly defined subfield allows certain critical perspectives—especially those of the technologically adept—to flourish, but we should also be mindful of voices that may be thereby excluded.

Technoculture in Transition

As I write this, a camera lens is pointed in my direction from the top margin of my laptop screen. My previous computer required a cylinder-shaped camera to be plugged into a USB port, which could then be clamped onto the screen or pointed in any direction. The external camera was admittedly less convenient, but I was never tempted to forget it was there. Now, the tiny lens at the top of my screen has an even tinier green indicator light that goes on, ostensibly, every time the camera is in use. Similar lenses on the tablet and phone devices in the room with me do not. It would be easy to assume that they are simply on all the time, or worse, to forget they are there at all. The lens built into the laptop also points in only one direction, in a fixed relation to the screen; in cinematic parlance, it is capable of a crude tilt, but it doesn’t pan or zoom. It doesn’t understand the size or shape of the room I’m sitting in, and the software running behind it does not bother to recognize my face, notice where my eyes are looking, or infer my emotional state. The next machine I own may very well do all these things, either on its own or in conjunction with other devices, software, or electromagnetic signals in the room. These abilities will come with certain minor conveniences that may dissuade me from learning how to turn them off somewhere deep in a settings menu. They will begin to seem as natural as the lens that is currently pointed in the direction of my face, and I will begin to assume that they are on all the time. These shifts occur gradually. We may be tranquilized by the seeming inexorability of technological change, or we may work to decode the assumptions and implications of each microstep in its evolution.

My basic supposition is that we are witnessing—and participating actively in—a remarkable transition in visual culture, the root of which lies in the evolving relationship between data and images. I am not referring to the ontological shift from analog to digital photography that sparked a minor crisis among theorists of visual culture some twenty-five years ago, nor even to the epochal transition from atoms to bits declared by Nicholas Negroponte in 1995.4 Images have never stopped lying at any point during the last 150 years. Computers were not needed to invent ways to deceive the eye, but this is not the issue. Let us leave it to our art historians and journalism ethicists to lament the crises of referentiality occasioned by digital imaging. Instead, I would ask, what happens to images when they become computational? An image that is computational is no longer strictly concerned with mimesis, nor even with signification. Computational images may serve as interfaces, carriers, or surface renderings, the real importance of which lies in their underlying processes or data structures.

This book would not be necessary if it were true—as I am sometimes told—that the difference between images and data no longer exists. We accept such equivocation at our peril if the goal is to think clearly about all that happens when images become as computable as they are representational and data seems incomplete if not visualized. Of concern here may not be the war between data and images so much as the war between computability and mimesis. “Computability” in this context refers to the extent to which computers can act on or process visual information; “mimesis” simply means the extent to which images resemble phenomena in the physical world. Sidestepping contested binaries such as “real” and “virtual,” I occasionally observe distinctions between “lens-based” and “computer-generated” imagery, but we should remember that the sharp, intuitive boundary that we currently perceive between them is another transient artifact of our present state of imaging technology. I am not much concerned with remarking on the evolution of digital image making as it moves with alleged inexorability toward an asymptotic “photorealism,” but I am interested in mapping the ways in which our vision of the world is differently inflected by the competing historical lineages of data and image.

At stake in this investigation are nothing less than the future cycles of media production and consumption and the terms of analysis for film, TV, games, and media art. A nuanced understanding of the actively evolving relationship between data and images offers a key to thinking critically about the effects of media and technology in the politics of everyday life. In the end, data and images should not be taken as a fixed or truly oppositional binary. The two are complementary—and at times functionally congruent—existing in a dynamic interplay that drives technological development and media making both in and out of the entertainment industries. The implications of this entanglement suggest diverse consequences related to the status of documentary and materiality, the politics of large-scale systems of surveillance and information, and contested issues of space, time, and subjectivity. This book privileges media practices that complicate the ideology driving much technological development and entertainment industry production, but my intent is to avoid outdated models of resistance. In many cases, in the chapters that follow, I seek to map the limit cases or internal contradictions that define areas of practice within a broad spectrum of media arts.

Not long ago, denouncing Hollywood’s privileging of emotional identification and illusionistic representation seemed to strike a blow against the bad object of bourgeois culture in favor of radical, underground, or subversive cinema. The data systems we now face are orders of magnitude more encompassing than the emotional plenitude offered by classical Hollywood’s three-act structure. They also pose a more direct threat to individual freedom. I believe it is no longer sufficient to favor media that subverts or denies the logics of commercial entertainment. The same systems that capture consumer metrics tell us the number of “views,” “plays,” “hits,” “likes,” “shares,” and so forth that have been garnered by even the most oppositional works of digital art.

My selection of key media works, technologies, and creators favors reflective practices that contrast the digital allure of realism, seamlessness, immateriality, totality, ubiquity, and convergence with an embrace of the pleasures and complexities—computational, algorithmic, ludic—unique to digital culture. Under consideration is a deliberately eclectic array of examples, albeit one focused on the technologies and practices of media art and entertainment in North America during the last two decades. These geographic and historical boundaries remain permeable, but they are motivated by a practical need to limit the project’s scope and should not be mistaken for an implicit argument that excluded works are less worthy of attention. Objects of analysis range from the obscurity of experimental films, digital media, and software art to commercial productions of the entertainment and technology industries, as well as data analytics, social networks, and government databases. To begin, I attempt to articulate a “politics of data” that is informed by—and in dialogue with—the “politics of images” forged by media and visual culture studies.

Politics of Data

Thanks to rapidly proliferating technologies of vision and their counterparts in large-scale data systems, what is at stake in seeing and being seen is much different today than it was only a decade ago. The once voyeuristic gaze of cinema has given way to power relations articulated through computational systems rather than through ocular regimes predicated on reflected light and bodies in space. The academic field that emerged to theorize visual culture in the twentieth century provides a rich critical framework that offers productive continuities, but it is, nonetheless, insufficient for understanding the computational turn of the twenty-first. In the visual realm, we see and are seen by others. This dual status as both viewer and viewed has been rigorously theorized to account for the power of looking, cultural inscriptions of the gaze, and the nuanced differentials of public and private, rich and poor. In the realm of data, we both sense and are sensed, track and are tracked; our data is mined by others, and we voraciously consume the insights gleaned from data aggregators, visualizers, and evaluators.

To speak productively about the “politics of data,” we must direct our attention to an informed discussion of four areas: defining, sensing, processing, and knowing. Before data can be considered data, it exists as phenomena in the world—often the results of human activities that are registered by server logs and information sensors. Before data can be captured, it must be desired, identified, and described; correctly sized and formatted repositories must be created that are suited to its capture, storage, and processing. In this way, each stage in the treatment of data implies others in the circuit. In no event, as Lisa Gitelman argues in the essay collection she edited, “Raw Data” Is an Oxymoron, can data be said to exist in a “raw” state—a form that preexists its definition and the creation of a system for doing things with it.5

With this requisite definition in place, the politics of data becomes a politics of sensing, which bears a close resemblance to the politics of looking. What we choose to look at, who is empowered to look, and who gets looked at—all have analogs in the realm of data. Sensors are invented and deployed for specific purposes. They gather certain kinds of information while neglecting others. They belong to a system that is part ideological, part technical, and part social. Above all, they are intentional—that is, imbued with intention. Unlike the discourse of camera lenses, which suggests direct comparison with human eyes, sensors may address specific aspects of the built or lived environment that are not, in themselves, particularly meaningful. They make sense only when collated and integrated with interpretive systems. Thus, we may say that the politics of data is also the politics of processing.

To be processed into legibility, data must first be prepared—outlying samples and anomalies are routinely eliminated or bracketed to exclude them from analysis. This process is often called “cleaning” the data. At scale, data processing focuses on dominant patterns and broad contours. The promise of “big data” is to extract meaning from patterns that are too large to perceive except through intense computation. This data must then be translated into systems of signification that are legible to humans. The importance of this part of the process cannot be overstated, and it is among the driving questions for this book’s chapter devoted to data visualization (chapter 1). Here, complex apparatuses of visual semiotics and representation come to the fore; decisions about how data should be rendered visually require conscious choices within the rhetoric of display, computation, and communication.
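As a purely illustrative aside, the sketch below shows one common form that the “cleaning” described above can take: outlying samples are bracketed before aggregate patterns are computed. The interquartile-range filter, the 1.5× threshold, and the toy measurements are my own assumptions, not a description of any particular system discussed in this book.

```python
# A minimal sketch of one common meaning of "cleaning": a simple
# interquartile-range (IQR) filter that brackets outlying samples before
# aggregate patterns are computed. The toy data and the 1.5x threshold are
# illustrative assumptions only.
import statistics

def clean(samples, k=1.5):
    """Return (kept, excluded) after bracketing values outside k * IQR."""
    q = statistics.quantiles(samples, n=4)
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    kept = [s for s in samples if lo <= s <= hi]
    excluded = [s for s in samples if s < lo or s > hi]
    return kept, excluded

if __name__ == "__main__":
    raw = [4.1, 3.9, 4.3, 4.0, 97.0, 4.2, 3.8, -50.0, 4.1]
    kept, excluded = clean(raw)
    print("kept:", kept)
    print("excluded (the 'anomalies'):", excluded)
    print("mean before:", round(statistics.mean(raw), 2),
          "mean after:", round(statistics.mean(kept), 2))
```

Even this trivial example makes the point that what counts as an “anomaly” is a decision built into the processing pipeline, not a property of the data itself.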

As a politics of knowing, data becomes most deeply entangled with systems of ideology. Many efforts have been made to schematize the relationship between data and knowledge. Among the most commonly used models is the DIKW pyramid, from the field of information science, which proposes a hierarchy rising from a large base of discrete and decontextualized “data” on the bottom, to contextualized “information” on the next tier, followed by “knowledge” that has been interpreted or operationalized, culminating in “wisdom,” presumably denoting especially useful knowledge that occupies a comparatively tiny space at the apex of the pyramid.

In his afterword to “Raw Data” Is an Oxymoron, Geoffrey Bowker references a graph generated by Google’s Ngram Viewer, which is used to track the frequency with which particular words or combinations of words appear in millions of books published over a period of years, decades, or centuries. Bowker used the Ngram Viewer to illustrate the decades-long statistical decline of the terms “knowledge” and “wisdom” alongside the coincident ascendance of the terms “data” and “information.”6 Bowker’s wry humor invites readers to conclude that data was vanquishing wisdom, as certain sectors of the humanities have long feared. But this is neither the time nor the place to address the differences between causation and correlation. Google’s Ngram Viewer is no different from any other computational system in requiring a second order of analysis. An animating precept behind Gitelman’s book is that we must always “look under” the data in question to ascertain how, why, and by whom it was defined, acquired, processed, and interpreted. This book extends that logic to argue that we must also “look under” the systems and assumptions by which data and images are positioned in relation to each other.

Johanna Drucker has argued that usage of the term “data” is itself misleading, in part because of its etymological origins, which assume the preexistence of phenomena independent of definition or observation. Drucker instead advocates use of the term “capta,” which emphasizes that data must be actively taken and does not exist in a given or “natural” state.7 For other writers, the proliferation of data—regardless of whether it is given or taken—is sufficient to confer meaning and value. In 2008, Wired editor Chris Anderson proclaimed that big data would bring about the end of theory as a means of explaining the functioning of the world and its societies. Anderson argued that models on the scale that a young company called Google was imagining would displace virtually all theories of human behavior.

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.8

Anderson’s uncritical epistemology of data has been rightly eviscerated by social scientists,9 but the sentiment remains all too common in the hyperbolic discourse of big data in industry. Pinboard founder Maciej Cegłowski, in a keynote address at Strata + Hadoop World, an industry conference devoted to big data, offered a rare critique of data science, titled “Haunted by Data,” from the perspective of a technologist and an entrepreneur:

The promise is that enough data will give you insight. Retain data indefinitely, maybe waterboard it a little, and it will spill all its secrets. There’s a little bit of a con going on here. On the data side, they tell you to collect all the data you can, because they have magic algorithms to help you make sense of it. On the algorithms side, where I live, they tell us not to worry too much about our models, because they have magical data. We can train on it without caring how the process works. The data collectors put their faith in the algorithms, and the programmers put their faith in the data. At no point in this process is there any understanding, or wisdom. There’s not even domain knowledge. Data science is the universal answer, no matter the question.10

Cegłowski’s contrarian response to the prevailing celebration of data illustrates the gap between promise and reality in the field of data analytics. From my perspective, paying attention to the scale on which data is acquired and processed is of secondary importance. This allows me to sidestep the bandwagon syndrome as many current theories about data are revised to become theories of big data. I do not mean to be flippant about the importance of understanding how data is differently processed on a massive scale; it is simply not a central concern of this book. In fact, I would compare the phenomenon and functioning of big data to the parallel emergence of noncomputational high-density image formats seen in ultrahigh-resolution and high-dynamic-range photography and ultrahigh-frame-rate cinema. On one hand, these formats change everything about the capturing and postprocessing of lens-based images; on the other hand, these formats and the discourse that often surrounds them uncritically reinforce much about the status quo in visual culture. If a lot is good, a huge amount must be better. All this leads me to suspect, as I explore in chapter 2, that our era may be aptly described as caught up in a “frenzy of the digital.”

In Minima Moralia, Theodor Adorno famously revised Georg W. F. Hegel’s declaration “the true is the whole” to “the whole is the false.”11 If we have learned nothing else from large-scale data analytics, surely we now know that “the whole” is a theoretical impossibility, even if the encyclopedic affordances of digital systems suggest it as an alluring ideal. Although Adorno did not write directly about computation, he worked on Minima Moralia in Germany and in the United States, homes of the two most vibrant computing industries of the 1940s. In both countries, computing machines had already been deployed on entire populations, whether in the form of census calculators in the United States or the tabulating machines deployed in Nazi Germany’s pursuit of genocide. It is tempting to say that data is uniquely susceptible to the logic of totality. Data processing often relies on samples that are meant to represent a whole. Humans designed computers partly to maximize efficiency and economy. But the logic of totality—the “encyclopedic” affordance of digital technology identified by Janet Murray—represents only one possible application.12 I would argue instead that although data is readily mobilized in the interests of totality, it is just as likely to be deployed for ends that are nontotalizing. Totalization represents one possible outgrowth of the logic of computation, but it is far from the only one.13

Even the largest data sets must be understood as inevitably situated, motivated, and partial. Routine processes of sampling, translating, and interpreting data—to say nothing of “cleaning” it—are not, even on a massive scale, synonymous with the philosophical whole imagined by either Hegel or Adorno. Indeed, Hegel’s notion of “the universal” necessarily took account of the particular, thereby forming part of the basis for the logic of dialectics. In the context of data analysis, a similar tension plays out between the situated specificity of individual data “points” and the much larger “sets” into which they are integrated. But the promise of interpreting data at scale is precisely a generalizable form of knowledge—a degree of certainty by which one could map a corporate strategy or shape a candidate’s platform to maximize profits or electability. Within such a system, particularity does not effectively assert itself dialectically against the universal; it is taken into account statistically even if its effect is infinitesimal.

In Gitelman’s book, “data” is assiduously—and somewhat obtrusively—treated as a plural noun. Her introduction notes, “Data’s odd suspension between the singular and the plural reminds us of what aggregation means. If a central philosophical paradox of the Enlightenment was the relation between the particular and the universal, then the imagination of data marks a way of thinking in which those principles of logic are either deferred or held at bay.”14 For my purposes, data is neither singular nor plural; I am tempted to speak of a “data effect,” which would obviate such distinctions and allow us to focus on its implication in broader systems of power, knowledge, and representation. Following Allan Sekula’s interrogation of photography’s role in the politics of documentary, we might ask, “What type of exchange is implied” by the capturing of data?15 Is it an exchange between equals? Does encouraging amateur visualization (e.g., of one’s social network) imply a mendacious pluralism in the possible uses of tracking data? Do local, recreational visualizations make more totalizing data systems appear misleadingly domestic and benign?

An emphasis on function also reminds us that data and its interpretations require a class of specialists as well as a complex technological apparatus. Ironically, data analysis systems simultaneously appear to be endlessly open and adaptive as well as (drawing the ire of critical theorists) self-contained systems of knowledge and belief. The more we learn about data, the more directly we confront the uncomfortable truth that, in the eyes of marketers and government agents, we are the data we generate: metadata, records, traces, trajectories, profiles. As we will see in the following discussion of the nineteenth century’s competition between photography and biometrics in the science of criminology, the abstractions necessitated by computation contrast sharply with the particularities of the photographic record for identification. Perhaps this desire to recapture one’s capacity for self-representation is precisely what lies behind the current trend of self-documentation online; the arm’s-length composition of digital self-portraits constitutes a reassertion of the visible self as a gesture of defiance at having one’s identity reduced to abstract metadata.

I would also compare this awareness of how data functions in market analytics with the TV-era realization that we (viewers) are the real objects sold by broadcasters and bought by advertisers, that our identities as viewers have relative value based on the income and shopping habits of our demographic. An even more insidious and indigestible realization attends our transformation into a digital-era aggregation of habits, keystrokes, clicks, scrolls, searches, views, likes, location data, browsing history, and so on. The comparatively gross definition of demographics is replaced by aggregated specificities in an age of data mining. As TV viewers were defined as split subjects by their capacity to function as consumers and as citizens, digital consumers are also split, knowable in their particularities if one assembles the available metadata, but more commonly reduced to an infinitesimal point within an aggregated flow. The data effect is thus always hybrid—we perceive it through channels and semiotic protocols that are inflected with meaning, even as they purport to represent the detachment of scientific instrumentation or applied mathematics.

Finally, we should remember that data is defined, collected, and interpreted within a social hierarchy that privileges large, technologically endowed corporate interests. In systems that offer both paid and free access, ironically, those who refuse to pay are more likely to be tracked, while those who presumably have greater resources—a bigger prize for companies harvesting consumer data—can afford to opt out of tracking systems. For all its bigness, big data is not all-inclusive. Kate Crawford notes that “with every big data set, we need to ask which people are excluded. Which places are less visible? What happens if you live in the shadow of big datasets?”16 Crawford goes on to argue against the continuing fetishization of big data in favor of developing an ethic she calls “data with depth”;17 others have argued convincingly on behalf of “thick data,” following the model of deeply contextualized cultural analysis promoted by anthropologist Clifford Geertz.18 The topic of this subsection—the politics of data—in turn should not be reducible to an easy formula. We need not ask whether data has politics, only how deep we want to dig to reveal its secrets.

When, in 1971, Michel Foucault contrasted the longue-durée trajectories of macrohistory with the focus on disruption and dislocation within fields that might be regarded as “applied history” (that is, histories of literature, science, philosophy, etc.), he was describing the very intervention in modes of human thought that attended the development of the mainframe computer. Although computation as such is not dealt with directly in The Archaeology of Knowledge, issues of scale, precision, totality, recall, disruption, categorization, and codification lie at the center of Foucault’s historical models.19 What questions are more insistently provoked by the invention of a machine for large-scale storage and analysis of data than those of historiography and epistemology? In each instance of the data image we find evidence of the ideological, historical, and technical conditions in which it was produced. Hence, understanding the politics of data is only possible within a context that accounts for both technological development and cultural meaning.

Certain themes persist across the decades of computing’s rise. Computers are electronic, mechanical, automatic; they work with numbers, calculations, digits; their utility may be universal or specific. The status of “data,” which was once a synonym for quantitative information, has been downgraded over time to denote instead an unprocessed intermediate state in the production of information. The regime of data challenged, then quickly surpassed, the visual positivism that had driven scientific imaging technologies and practices of the late nineteenth and early twentieth centuries. Although the critique of positivism was well established in the human sciences, the critique of photography’s unique buttressing of visual positivism was still on a distant horizon when data became ensconced in the scientific establishment as the epitome of accuracy, verifiability, and objectivity. Data did not so much inherit the mantle of photographic empiricism as it forestalled the eventual demise of photographic mimesis.

Allan Sekula was at some pains in his writings about photography to insist on the medium’s implication in a history that included instrumental applications—scientific, military, cadastral, criminological—alongside its more favored formalist, pictorialist, and illusionist siblings. For Sekula, the goal was to rethink the history of photography as enmeshed in a tangled web of ideology and state power as much as in the eye of the beholder. Data has never had this problem. Data has always been shamelessly instrumental and is readily characterized as such. Even such dubious instrumentalities as online tracking data—the logging and aggregating of our search terms, clickstreams, likes, interests, and associations, as well as keywords appearing in private communications—are presented as shopping aids and time savers for users of e-mail and search engines. Data, we are assured repeatedly in the context of social media and networked communication, is a friend whose friendship grows more rewarding the more we share.20

Online tracking is only one example of how the logics of neoliberalism have broadly infused everyday technologies. “Information technology,” according to David Harvey, “is the privileged technology of neoliberalism” because of its role in creating and optimizing a framework where the logic of markets can prevail.21 Under neoliberalism, politics and contestation are subsumed by efficiency and rationality; political interest groups and ideologically aligned collectives are replaced with the notion of “stakeholders” and collaborative participation in processes of problem solving. The horizontal networks—both technological and human—celebrated by evangelists of digital culture are easily digested as part of neoliberalism’s shift from hierarchical modes of governance to rational problem solving. Neoliberalism’s focus on market exchange as the model for all human action “requires technologies of information creation and capacities to accumulate, store, transfer, analyze, and use massive databases to guide decisions in the global marketplace.”22 Is the shift from optics to computation, then, simply an escape from the frying pan into the fire?

Echoing Harvey, political scientist Wendy Brown differentiates neoliberalism from other diagnostic models for critiquing the deleterious effects of capitalism by emphasizing its radical extension of the logics of “the market” into virtually all aspects of everyday life:

Importantly, this is not simply a matter of extending commodification and monetization everywhere—that’s the old Marxist depiction of capital’s transformation of everyday life. Neoliberalism construes even non-wealth generating spheres—such as learning, dating, or exercising—in market terms, submits them to market metrics, and governs them with market techniques and practices. Above all, it casts people as human capital who must constantly tend to their own present and future value.23

Although Brown does not focus on technology in her critique of neoliberalism, it would be difficult to miss how large-scale informatics systems are implicated in the logic of neoliberal society and economics that she describes. If the industrial age brought an increase in the power of machines and the demotion of humans to the role of operators and consumers, the digital age has further lowered the status of consumers from humans needing to be seduced into buying particular goods to data points within trackable patterns of consumption. In data systems, we are no longer the individuals we once believed ourselves to be, but this realization brings neither the anxiety of modernism nor the nihilism of postmodernism. A properly trained neoliberal subject instead wants a piece of the action, to shape data flows and profiles consciously to profit from the insights and access they provide.

The term “neoliberalism” is all the more insidious in preemptively dashing the traces of hope that were once inscribed in the term “late capitalism.” Describing capitalism as “late” suggested that its end was, if not near, then at least conceivable. “Neoliberalism” carries with it a hint of despair, a concession that the once redemptive quality of liberalism has become just as co-optable as the conservatism once invoked by its pejorative predecessor, “neocon.” The “neo” in neoliberalism suggests that we are not at a stage of lateness nearing relief, but at a point of renewal in a cycle that shows fewer signs than ever of letting up.

Perhaps the deepest and most destructive paradox of the role of data in consumer culture is the extent to which it mobilizes the power of multiplicity across a population, while individuals whose actions are logged are deprived of any type of collective agency. Whether through passive or active engagement with data collection and analysis, we occupy a subject position that is individuated: every choice, search, and purchase “matters” as part of a larger market-tracking system. At the same time, however, our ability to act is restricted to severely constrained individual choices: what brand to buy, not what political or environmental priorities should be supported by our representatives. In this sense, we are never meaningfully disaggregated from our demographic, nor are we capable of directly steering how our actions are interpreted. We are monads in a vast leviathan of traced actions, yet we experience no lasting sense of individual agency or collective interest. Giorgio Agamben diagnosed this insistence on “desubjectifying” the individual as a key to constructing “the most docile and cowardly social body that has ever existed in human history.”24 We are, if anything, tempted toward fatalistic resignation, cynicism, or apathy by the statistical insignificance of any single gesture. Ironically, the only viable alternative lies in collective action undertaken at a scale that can be realized solely through digital networks.

What would it take to transform large-scale data systems into effective tools for social change? First, technologies of data would need to be somehow disaggregated from the larger apparatuses of neoliberal economics and ideology from which they emerged. No small task! In addition to its seductive cultural rhetoric, neoliberalism aligns with a daunting range of existing political power structures embedded in mainstream party politics, economic infrastructures, legislative reforms, and judicial decisions. For those of us who hold out hope for the socially liberatory potentials of digital networks, many of the tenets of neoliberalism have been rallying cries against the imaginary threat of monolithic cultural regimes that were hierarchical, centralized, and prescriptive—the hegemony of late capitalism and state bureaucracy. The affordances of networked computing aligned neatly with alternative movements that favor collaboration over competition; horizontal networking over vertical hierarchy; best practices over regulation; stakeholders over interest groups; consensus over doctrine—but these terms all turn out to be drawn directly from the playbook of neoliberal governance!

The result, according to Brown, is the displacement of power as the focal point of political critique. In its place, neoliberalism promotes cooperation without collectivization, facilitating connection and communication among disparate groups, yes, but without the sense of shared interests that might drive and sustain a political movement.25 All of this, Brown aligns with what Foucault calls modern government’s technique of “omnes et singulatim” (all and each), by which a population may be simultaneously gathered but isolated; amassed and distinguished; massified and individualized.26 A key factor is neoliberalism’s emphasis on rational self-interest, in which individuals are encouraged to act, within certain bounds, as individuated participants in a more or less rational market economy. As market actors, we are variously aware that each action, decision, or movement, however passively it may be undertaken, feeds data into an overall information system that tracks these activities and generates meaning. Unless we are targeted for specific investigation—say, by agents of law enforcement, immigration, or intelligence—no single action may be said to have much significance on its own, but increasingly, neoliberal subjects come to understand their selves to be aggregated statistically. Individual particularities remain latent in data systems, lying dormant until called forth in the service of an externally imposed narrative.

There is no small irony in the fact that the systems seeking most actively to undermine the hegemony of data operate through multiplicity and overproliferation, often mixing “good” data with “bad” to obfuscate the reality of one’s inter/actions in digital space. The Tor browser offers a case in point, along with the technology underlying the WikiLeaks platform. Both systems operate by exploiting the limited capacity of computational data mining to distinguish relevant from irrelevant data. This tactic is presumably of transient utility, given the rapid pace of developments in artificial intelligence and data processing. Rather than merely concealing or encrypting data intended to remain private, multiplicity depends on limitations in the capacity of computers to intelligently sift through an avalanche of data. Such operations may well prove trivial to circumvent in the future, but for now, the tactic of resistance through proliferation marks a revealing limit case for the function of data in systems of knowledge and control. When speaking of the “politics of data,” I don’t know whether it is ironic or simply appropriate to paraphrase Jean-Luc Godard when concluding that the issue, ultimately, may not be to make political data but to make data politically.
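To make the tactic of resistance through proliferation more concrete, the following sketch shows its general shape: a genuine query is buried among plausible decoys so that a logging system cannot easily separate relevant from irrelevant data. It is offered in the spirit of browser add-ons such as TrackMeNot rather than as an account of how Tor or WikiLeaks actually works; the decoy vocabulary and the mixing ratio are assumptions made for the example.

```python
# A minimal sketch of obfuscation by proliferation: one genuine query is hidden
# inside a stream of plausible decoys so that an observer logging the stream
# cannot easily tell the "relevant" datum from the "irrelevant" ones. This
# illustrates the general tactic only; it is not how Tor or WikiLeaks operates.
import random

DECOY_TERMS = [
    "weather tomorrow", "pasta recipe", "used bicycles", "movie times",
    "train schedule", "garden pests", "battery recycling", "local news",
]

def chaff_stream(real_query, decoys_per_real=7, rng=None):
    """Return a shuffled list containing one real query and N decoys."""
    rng = rng or random.Random()
    queries = [real_query] + rng.sample(DECOY_TERMS, decoys_per_real)
    rng.shuffle(queries)
    return queries

if __name__ == "__main__":
    stream = chaff_stream("query I actually care about", decoys_per_real=7)
    for q in stream:
        print(q)  # the log records eight equally weighted queries
```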

Visible Culture

In Programmed Visions, Wendy Hui Kyong Chun highlights one of the challenges associated with parsing the relationship between visual and digital culture in a chapter subtitled “Invisibly Visible; Visibly Invisible”:

Computers have fostered both a decline in and frenzy of visual knowledge. Opaque yet transparent, incomprehensible yet logical, they reveal that the less we know the more we show (or are shown). Two phenomena encapsulate this nicely: the proliferation of digital images (new media as “visual culture”) and “total information” systems (new media as “transparent”).27

Our ability to talk about the conjunction of visual-digital culture is founded on an internal contradiction, or at least the divergent extremes of visibility and invisibility. Chun’s paradox suggests that what is at stake exists not primarily in the realm of the visible, but in the nonmimetic domain of computation, and she reminds us that the artifacts and practices of today’s digital technologies are all too readily embraced by the instrumentality that underpins neoliberal economics and culture. In Sean Cubitt’s introduction to The Practice of Light, he notes, “Historically, much of visual culture has concerned itself with making the invisible visible. … The universe of numbers generated by an informational society draws on this history of visualizing to make itself visible, and in turn the particular organizational modes typical of informatics begin to impinge on modes of visualization.”28 Cubitt’s analysis of the political embeddedness of technologies of vision within culture is exemplary, illuminating a wide swath of political and theoretical implications. For both Chun and Cubitt, the invisibility of digital processes is functionally analogous to the operation of ideology—the more widely it is disavowed, repressed, or denied, the more it is clearly determinative of cultural beliefs and actions. Correspondingly, in Chun’s model, one’s ability to decode the operations of the most abstract and invisible digital processes is dependent on the sophistication of one’s understanding of how these systems operate.

A similar privileging of technical knowledge occurs in Benjamin H. Bratton’s The Stack. In Bratton’s terms, the deeper we are able to drill critically into the “accidental megastructure” of “The Stack,” the more complete our understanding of its functioning will be. Bratton’s Stack encompasses digital culture as a vertically integrated system that includes all the particularities of hardware and software along with their interfaces and governing protocols, but also those infrastructures that subtend them (mineral sourcing, electrical grids) and those that operate them (humans, robots). Bratton’s model is both philosophical and practical, offering a far-reaching exploration worthy of its object of study, “the digital.” Bratton’s work is also intended as a practical intervention in design, refusing to settle for articulating a new theoretical model. In The Stack, Bratton operationalizes his theoretical principles, laying out a roadmap for designers and architects as well as systems engineers and cultural theorists.29

The aspirations of this book are more modest. I seek neither a philosophical nor a deeply technological understanding of the origins or essence of digital culture. My aim is rather to address the functioning of visual-digital culture in some particularity, observing overarching historical trends, yes, but with a focus on specific instances and aberrations. My goal is not to redraw the map of visual culture, but I would like to demonstrate the benefit of revisiting received historical models that continue to shape our thinking about the technological present.

The method of this book is partly inspired by media archaeology, particularly as it derives from the historiographical models—fractious, nonteleological, eccentric—advanced by Foucault and others. The scope of the project is a discontinuous, but still in-depth, exploration of the “technocultural imaginary” and its manifestations at particular moments. The investigation is thus not about technology; rather, I analyze how specific technologies serve as historically situated prisms for exposing, expressing, or engaging our most pressing cultural concerns. Digital media are often symptomatic of the processes by which technologies have become banal, invisible, and accepted, even in their most troubling manifestations. My goal is to expose the systems of power by which we govern our own behavior: the technological, corporate, and governmental elites in whom we place (or from whom we withhold) our trust; the premises and constraints we accept; and the myriad microcapitulations by which power maximizes its efficiency in an increasingly technologized world.

The proliferation of theoretical writings about “new media” during the 1990s seemed to parallel the pace of late twentieth-century technology industries. As the world hurtled toward the end of the millennium, this period was dubbed—as if in some urgent computer shorthand—Y2K. It was widely perceived and described as an era marked by radical disjunctions, paradigm shifts, and potential catastrophes, all of which afforded enticing opportunities for reinventing the terms of its theoretical understanding. Fetishists of the “new”—often unapologetically inspired by the futuristic literary genre of cyberpunk fiction—were subsequently criticized by a countervailing critical genre skeptical about the proclaimed “newness” of new media, pointing to numerous instances in which categories of “old” and “new” were rhetorically and historiographically bankrupt.30 Some of these challenges deployed a form of historical parallax, drawing attention to the euphoric discourses that attended previous generations of technology, implicitly reminding readers just how silly those exceptionalist proclamations sound in retrospect. The model of historical parallax, practiced with rare virtuosity by Anne Friedberg and Lisa Gitelman, in which technologies from radically different moments in time are placed in critical juxtaposition, provided inspiration for the primary method of this book. Outlined in some detail below, my adaptation of the parallax concept hinges not on historical anachrony but on drawing attention to the convergent/divergent relations that are expressed between the realms of data and images.

My goal in writing this book is not to convince readers of the inevitability of any particular future but to produce a critical subjectivity that is both historically aware and technologically informed. Hopefully, the lessons of 1990s cyberculture have been articulated and internalized to the point where we no longer need to march through the usual litany of denunciations: technological determinism; utopia/dystopia; euphoric proclamations/moral panic; and so on. However, I will selectively mine the history of this body of critical literature for evidence of the interconnectedness of cultural criticism and cultural production. Media archaeology offers a useful model for investigating the media technologies and critical literature of the late twentieth century. Over the course of the two decades straddling the millennial turn, roughly 1990–2010, citizens of technologically privileged societies were retrained to experience the world in terms of procedural logic governed by interoperable systems. The subfield new media studies emerged to address these shifts and developed a sometimes awkward vocabulary for speaking about digitalness across various domains of culture. Turning this body of not-so-distant theoretical writing into objects of archaeological study allows us to examine not so much the correctness of any given work of scholarship, but its presuppositions, limits, and thresholds. In other words, we may learn as much by examining this work in terms of what it does not say as from what it does say.

The archaeological approach attempts to find patterns of discourse through an examination of material conditions and the systems governing them. This model is further predicated on excavating complex systems, not in their totality or as components of a cohesive historical narrative, but as fragments from which to extrapolate and propose insights into human behavior and systems of thought. I hope it is not immodest to set the sights of this project on such a large span of media and technologies. Faced with an overwhelming data set, stretched across decades of complex evolution, we have no choice but to be selective with our objects of study. By establishing a critical conceit that is specifically oriented to address the resonances and conflicts of data and images across a range of media platforms, this project will privilege cases that are in some way liminal, hybrid, or uncontainable by one category or the other.

Finally, this investigation is bounded by the methods and concerns of visual culture studies on one side, technology studies on another, and (“new”) media studies on a third. With visual culture I share concerns about how we make meaning out of the visible world and the critique of signification, representation, tropes, and metaphor, but I choose to confront—and even embrace—the disruptions occasioned by digital imaging. With technology studies I share a respect for material context in mapping the history of technological development. My aim here is not to recapitulate the history of apparatuses or inventions—and certainly not great men—but to pay attention to the reciprocal, sociocultural, economic, and political motivations that drive them. From media and new media studies, I build on longstanding concerns with the politics of seeing and being seen—perhaps I should say of sensing and being sensed?—the interrogation of systems of power and ideology, and an underlying critique of capitalism and its consequences.

Parallax Views

Data tells stories.31 It is also used to capture and convey information about the world. Like images, this information and its modes of conveyance are contested and nontrivial to decode. I base this book’s critical method on the data/images dyad not to propose a structural binary but to treat data and images as two components of a “parallax logic” that can be brought to bear on numerous moments throughout the history of technology. At the risk of overburdening the metaphor, I would explain this strategy as follows: as with the optical properties of parallax, information about a viewed object may be gleaned from the differences in how it appears from two separated viewpoints. In certain cases, data and images operate in close proximity, informing, cross-pollinating, and inspiring each other; at other times, their relationship seems to be philosophically quite distant, motivating antithetical pursuits, methods, and results. Parallax also allows the eyes—and thence the brain—to perceive and deduce spatial information from occlusions caused by one object getting in the way of another. In other words, we can derive information from things that are unseen, obstructed, or perhaps misrecognized, as well as from those that are perceived with clarity and wholeness.
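To give the optical metaphor a concrete footing, the textbook relation from stereo vision and binocular depth perception (offered here only as an illustration of the principle, not as part of the book’s argument) ties the depth of a point to the separation between two viewpoints and the measured difference between the two views:

```latex
% Standard stereo (binocular) parallax relation, assuming two parallel
% viewpoints separated by a baseline B, each with focal length f.
% The same point appears displaced by a disparity d between the two images;
% its depth Z follows as
\[
  Z = \frac{f\,B}{d}
\]
% Small measured differences between the two views thus yield information
% about how far away the object is, which is the sense in which parallax
% converts difference of viewpoint into knowledge.
```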

Images and data suggest differing knowledge regimes, the relative ascendance of which, at any given moment, correlates with ways in which we culturally process issues of political and social significance. What does it say about the epistemic regime of a historical moment if all efforts—cultural and technological—insist on concealing the differences between competing ways of knowing about the world? A critical dynamic that problematizes values such as realism, convergence, totality, automation, and ubiquity suggests a very different role for the developers and users of digital technology. Within industrial practice, an implicit commitment to the synthetic mode has long dominated the conventional wisdom about data and images. The accepted narrative maps a trajectory of linear progression toward ever more convincing approximations of the real world. Graphics engines used for 3D games become ever more photorealistic, for example, while film and television productions increasingly merge lens-based and computer-generated images at the stages of production, postproduction, and previsualization. While certain aspects of technological development indeed reflect this narrative, the reality of the situation is both more complicated and more interesting.

The idea that moving images can (and should) provide audiences with an asymptotic approximation of “reality” was fully articulated in writings by film theorists of the early twentieth century. André Bazin associated emerging cinematographic techniques, such as long takes and deep focus (which were themselves enabled by the increased light-gathering capacity of film emulsions), with a perceptual experience that resembled the real world.32 Bazin’s argument that this constituted cinema’s greatest potential is barely distinguishable from the realist discourse that attends the recent resurgence of virtual reality and contemporary visual effects such as motion capture. Outside the promotional context of industry marketing, it would be naïve to return to such a model, ignoring decades of film scholarship devoted to unraveling the assumptions underlying both Bazin’s telos and the deterministic view of technology that underlies it. This project instead acknowledges the persistence of a desire for “the real”—in this respect I agree with Brian Winston that we remain “addicted” to at least the idea of realism—but I would insist that the most provocative dialogue between data and images occurs when “realistic” representation is not the primary goal.33 My hope is that this taxonomy will help illuminate the operative strategies across a broad spectrum of media making, allowing us to defamiliarize some seemingly self-evident motivations in the media and technology industries, and ultimately to identify strategies for misusing or misappropriating these technologies for socially and creatively beneficial ends.

The notion of parallax also enables us to throw the relations between data and images into relief—to observe points of convergence when they occur without privileging them in a general sense. Parallax—as with its functioning in the ocular world—allows for the perception of both sameness and difference; it presupposes a split viewing position and the ability to process differences in perspective and perception. The concept of parallax further allows us to consider the active dynamics of the relationship between data and images at specific moments. This model emerges in part from the observation that particular media forms may exhibit either a convergence or a divergence of images and data. To think systematically about these dynamics, I propose the following taxonomy of modes to describe the functional relationships between data and images.

Supplemental

The supplemental mode may reasonably be considered the most straightforward and intuitive of the modes outlined here. In this mode, images and data relate to each other in an additive way. The supplemental mode allows deficiencies in the descriptive capacity of images to be offset by data, and vice versa. This model can be traced back to the nineteenth-century Bertillon card, a variation of which is still in use for driver’s licenses, passports, and criminal records. In this mode, neither regime (data or images) challenges the other epistemologically; instead they work together in a parallel process that demands no systemic reconciliation. One advantage of the supplemental mode is the potential to simply add new information as it becomes available. In the case of the Bertillon card, this may be seen in the addition of fingerprints to identification cards. Fingerprints, which were not part of Bertillon’s original system in the 1880s, were simply added to the bottom of existing cards once fingerprinting became a viable technology in the early 1900s. Thus, the photographic mug shot, biometric measurements, and fingerprints all function in parallel, with no crossover or effect on the adjacent systems of identification; each may be used to cross-reference or verify the others, but the systems themselves remain discrete.
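A minimal sketch in Python may make the additive, parallel character of this mode concrete; the record structure and field names are hypothetical illustrations, not a description of Bertillon’s actual card layout:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IdentificationCard:
    # Each identification system occupies its own field; none depends on
    # or transforms another, so new systems can be appended over time.
    mug_shot_frontal: str                                   # file reference to a photograph
    mug_shot_profile: str
    measurements_mm: dict = field(default_factory=dict)     # e.g., {"head_length": 194}
    fingerprints: Optional[str] = None                      # added later, other fields untouched

card = IdentificationCard("front.jpg", "profile.jpg", {"head_length": 194})
card.fingerprints = "prints.tif"   # supplemented after the fact; photos and measurements unchanged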

Multiperspectival

The multiperspectival mode has historical roots in Eadweard Muybridge’s multicamera array, used for the study of human and animal locomotion. The multiperspectival mode is fundamentally lens-based, and the conjunction of multiple images from multiple perspectives does nothing to disrupt the mimetic function of each individual lens and the image it renders. Images in the multiperspectival mode are not abstracted or translated into computable data; they continue to generate meaning through their indexical relationship to a profilmic body. In the multiperspectival mode, images captured from multiple points in space are used to compose a data set of visual information that remains rooted in the photographic. Meaning is then created through the juxtaposition or compositing of the images, which are unshackled from their original time base and location in space.
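As a rough sketch, assuming nothing more than a set of same-sized frames, the multiperspectival operation might be expressed in Python as simple juxtaposition, with every frame preserved as an image rather than abstracted into coordinates; the grid layout and values are illustrative:

import numpy as np

def juxtapose(views: list[np.ndarray], columns: int = 3) -> np.ndarray:
    # Tile same-sized frames into a grid; each frame remains an image throughout.
    rows = [np.hstack(views[i:i + columns]) for i in range(0, len(views), columns)]
    return np.vstack(rows)

# Six stand-in "camera views," each a flat gray frame of a different brightness.
views = [np.full((48, 64), fill_value=40 * i, dtype=np.uint8) for i in range(6)]
print(juxtapose(views).shape)   # (96, 192): two rows of three views, still photographic in kind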

Translational

The most common exemplars of the translational mode may be found in motion capture and facial recognition systems. These technologies use visual information captured photographically to generate an array of data points, which are then mapped onto a Cartesian plane or volume. This mode has its historical roots in the time-motion studies of Etienne-Jules Marey, which he dubbed “chronophotography.” In Marey’s system, photographic images were abstracted and converted into a combination of points and lines in space, ultimately effacing the photographic representation and original bodies entirely. Another historical example may be found in the grid-based tracing process enabled by the portable camera obscura. The translational mode values the photographically or optically perceived image, but only as a source of convertible data, not an end in itself. The translational mode also works in the opposite direction, beginning with data sets—including sonographic, radiographic, and backscatter data—that express meaning to humans only when translated to the visible register.
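A minimal sketch in Python of the translational gesture, under the simplifying assumption that markers appear as the brightest pixels in a grayscale frame; the threshold and array shapes are illustrative and not drawn from any real motion-capture or recognition system:

import numpy as np

def frame_to_points(frame: np.ndarray, threshold: int = 200) -> list[tuple[int, int]]:
    # Reduce a grayscale frame to the (x, y) positions of bright markers;
    # the photographic image is then discarded and only the coordinates remain.
    ys, xs = np.nonzero(frame >= threshold)
    return list(zip(xs.tolist(), ys.tolist()))

frame = np.zeros((4, 4), dtype=np.uint8)
frame[1, 2] = 255                      # a single reflective marker
print(frame_to_points(frame))          # -> [(2, 1)]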

Aspirational

Software has long been capable of generating “realistic-looking” images without any photographic referent at all. Here, designers create volumes, shapes, and surfaces through a combination of manual and procedural modeling. Textures may be created directly by software or by adapting prerendered patterns or samples from photographic originals. The environments of most contemporary 3D games are created this way, resulting in fully navigable and responsive environments that have no indexical relation to the physical world. In many cases, however, these artificially created worlds approximate photorealistic—or, more accurately, cinematically realistic—imagery. In such cases, the basic relationship between data-based imagery and lens-based imagery is that of iconic resemblance, not literal translation. Just as the multiperspectival mode articulates a relationship between image and image, the aspirational mode articulates a relationship between data and data. The goal of this work is a product that is entirely digitally generated but increasingly indistinguishable from lens-based images.
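The following Python sketch illustrates, in deliberately trivial form, an image produced entirely from arithmetic rather than from light striking a sensor; the checkerboard pattern and dimensions are my own illustrative choices:

import numpy as np

def procedural_checker(size: int = 8, tile: int = 2) -> np.ndarray:
    # Generate a checkerboard texture purely from arithmetic on pixel coordinates.
    ys, xs = np.mgrid[0:size, 0:size]
    return (((xs // tile) + (ys // tile)) % 2 * 255).astype(np.uint8)

texture = procedural_checker()
print(texture.shape)   # (8, 8): an image with no indexical relation to the world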

Synthetic

The synthetic mode is perhaps the most familiar because of its current dominance in the entertainment industries. The synthetic mode includes a cluster of technologies—virtual and augmented reality, holographic projection, previsualization, virtual cinematography—all of which deploy variations on the synthesis of data and images. The synthetic mode has evolved over many years to dissolve the hierarchy between image and data, or data and metadata. Workflow in the visual effects industry, for example, no longer involves “adding” metadata to filmed images; rather, images and data are captured via integrated systems that ascribe no necessary hierarchy to the two types of data. Increasingly, lens-based images and volumetric or positional data are captured in tandem and remain mutually interdependent throughout the postproduction and display process.
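A minimal sketch of such an integrated capture record follows; it assumes nothing about any particular vendor’s pipeline, and the field names and sample values are hypothetical:

from dataclasses import dataclass

@dataclass
class CaptureSample:
    # One moment of an integrated capture: the lens-based frame and the
    # positional data sit side by side, neither subordinate to the other.
    timestamp: float
    frame: bytes                 # encoded image data from the camera
    camera_position: tuple       # (x, y, z) in scene units
    camera_rotation: tuple       # (pitch, yaw, roll) in degrees

take = [
    CaptureSample(0.000, b"frame-0", (0.0, 1.6, 0.0), (0.0, 0.0, 0.0)),
    CaptureSample(0.042, b"frame-1", (0.1, 1.6, 0.0), (0.0, 2.5, 0.0)),
]
# Downstream tools can query either stream without privileging one over the other.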

Negotiated

I have termed the final mode “negotiated” to signify models of digital imaging that deploy selected aspects of the strategies described above to ends that are not adequately described by any single mode. The negotiated mode is so named in homage to the negotiated stance Stuart Hall describes in his reception theory of encoding/decoding. Here, it describes instances that negotiate the unstable relationship between data and image. At their most revealing, these negotiations include elements of self-reflexivity that invite reconsideration of the data/images binary in both general and specific terms. Instances of the negotiated mode often highlight internal contradictions of the data/images binary and make this a part of the work itself. Among the exemplars of this mode is Casey Reas, whose work, discussed in chapter 1, has consistently blurred the boundaries between referential and computational image making.

To expand on the parallax modes outlined above, each subsequent chapter of the book is devoted to a conceptual realm—space, visualization, surveillance—wherein various instances of these modes are deployed. While this introduction sketches a broad historical and conceptual framework for parsing relations between data and images, the chapters that follow operationalize this framework to analyze specific cases. I believe the parallax modes outlined here offer a useful vocabulary for critiquing the conjoined practices of visual-digital culture, but I would not want to suggest that this paradigm is either comprehensive or exclusive. Barely a generation ago, the technologies of vision through which the relationship between data and images was negotiated would not have sustained this type of investigation. Though today’s media, technology, and entertainment industries are dominated by a logic of synthesis that has come to seem increasingly natural and inevitable, this approach follows a long period of ascendance of the translational mode during which images were consciously reimagined as visual information and digitized into formats that could be acted on by computers. My aim throughout this book is to decenter naturalized paradigms of translation and synthesis to recognize the value of other modes that invite different kinds of experimentation and aberration.

Finally, parallax is part of a conceptual toolkit that enables us to make sense of the evolution of imaging technologies and to bring a demystified critical perspective to bear on both the practices of data/image making and the systems by which they gain cultural significance. In a practical sense, articulating these parallax modes frees us from dealing with the perpetual cascade of new or anticipated technologies. As history has taught repeatedly, no technology—and certainly no single corporation—should be mistaken for a permanent fixture of the technocultural landscape. I therefore strive, when possible, to examine the functioning of these modes in a general context, rather than anchor my observations to even the most promising new—or currently dominant—technology, trend, or corporate entity.

A Symbolic History of Data/Images

A symbolic point of origin for this book lies in the virtually concurrent conception in the early 1830s of mathematician Charles Babbage’s protocomputational difference engine and the precinematic phenakistoscope. The phenakistoscope shares a genealogy with several other optical/philosophical toys of the nineteenth century, but Babbage’s difference engine is unthinkable without the technical apparatus and mechanical sensibility of the industrial revolution. Both, as Anne Friedberg has noted, share the ability to modify units of meaning from one state to another and to create meaningful associations as a result, but the two could hardly be more different in terms of their material origins and philosophical precepts.34 In these two inventions—the phenakistoscope and the difference engine—images and data are aligned with entirely divergent epistemological regimes.

Along with numerous other optical toys related to cinema’s prehistory, the phenakistoscope constructed a visual experience that approximated the perception of movement experienced by the eye. Each device that prefigured cinematic motion deployed a slightly different strategy for alternately presenting images and the gaps between them, but all were predicated on sequences of varied images to create a composite illusion of movement. As with its sibling technology the zoetrope, the movement of the phenakistoscope’s disks was initiated by a highly variable gesture of the human hand, and the speed of the resulting animation typically went from too fast to too slow, with a sweet spot in the middle when things momentarily seemed about right. The experience, in other words, was fundamentally imprecise, analog, and variable, the very antithesis of the mechanical repeatability on which the conception of a calculating machine was necessarily based.

Figure 0.1 Simultaneous but oppositional regimes of knowledge, aligned with the logics of data and images, manifested in the phenakistoscope and the difference engine (ca. 1839).

Babbage, who devised the concept for the difference engine and its successor, the analytical engine, owned a silk portrait of Joseph Marie Jacquard, inventor of the Jacquard loom. Created in 1801, Jacquard’s loom produced woven graphical images of extraordinary intricacy. One history of computing goes so far as to describe the loom itself as a protocomputational apparatus. “In a sense, Jacquard’s loom was a kind of early digital graphics computer.”35 The portrait owned by Babbage, in fact, was created from more than twenty-four thousand punch cards, and the complexity of the graphical information they represented provided partial inspiration for the two computing machines designed—but never actually built—by Babbage. Quite literally, then, the concept for encoded data, and even the specific form of the punch card that would be deployed by the analytical engine, originated in the domain of images. This convenient origin myth aside, subsequent systems of representation and knowledge that emerged in the nineteenth century were more commonly characterized by an oppositional relationship between data and images.

Figure 0.2 A woven silk portrait of Joseph Marie Jacquard illustrates the use of punch cards for data storage (ca. 1839).

One such opposition can be found in the nearly concurrent development in the late 1830s of photography and telegraphy, arguably the two most important communication technologies of the nineteenth century. Samuel Morse’s system for telegraphic communication would not be considered computational by today’s standards, but it was nonetheless a binary data system on which the instrument of the telegraph was capable of acting. Furthermore, the symbolic expressions used in telegraph communication could be regarded as a precomputational system of algorithmically generated codes. N. Katherine Hayles makes this argument in her book How We Think: Digital Media and Contemporary Technogenesis (2012), noting that approximately 90 percent of all nineteenth-century telegraphic communication was first abstracted by operators using code books.36 This process of encoding/decoding might correctly be understood as an instance of human computation that closely prefigures the systems for electrical computation that would emerge in the decades to come.
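For illustration only, a fragment of that abstraction rendered in Python; the table is truncated to a few letters and the spacing conventions are simplified, so this is a sketch of the principle rather than a working telegraph code:

def encode_morse(message: str) -> str:
    # Translate letters into dot/dash sequences separated by spaces;
    # text becomes a two-state signal (mark and gap) before transmission.
    MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}   # deliberately partial table
    return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

print(encode_morse("SOS"))   # -> "... --- ..."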

From the beginning, photography was overtly linked with the objectivity of science, and the men who perfected it were technicians born of the Industrial Revolution. Photography’s acceleration of the transition within modern painting from representation to abstraction is well known, as is the transformative impact of mechanical reproducibility on the experience of artworks. The virtually concurrent development of photography and telegraphy benefited from the general milieu of nineteenth-century technological innovation, but the two otherwise shared virtually no technological affordances. More important, the two were separated by an epistemic divide that privileged, on one hand, an indexical, photochemical trace of visible phenomena in the world, and on the other, an entirely abstracted, electromechanical translation of symbolic information into pulses and gaps. The oppositional mental models represented by telegraphy and photography are further affirmed by the fact that it took more than forty years for the two technologies to meaningfully converge. Although a patent for sending graphical symbols—a device known as the pantelegraph—was issued within just a few years of the invention of telegraphy, the conversion of photographic images into a binary form that could be effectively transmitted by telegraph was not commonly practiced until early in the twentieth century, when it was found to be useful for law enforcement.37

A further historical bifurcation of data and images appears in the concurrent experiments in time-motion study undertaken in the United States by Eadweard Muybridge and in France by Etienne-Jules Marey. These examples are well known, and I will not recapitulate their contributions to the prehistory of cinema here. For the purposes of this discussion, these two projects are noteworthy because of their radically different uses of the shared technology of photography in service of competing commitments to mimesis and computability. Whereas Muybridge deployed multiple cameras to produce isolated frames that were displayed sequentially, Marey developed a system of motion analysis that captured multiple exposures within the frame of a single image. While Muybridge’s multiple camera arrays fragmented the smooth movements of bodies through space into discrete instants, Marey’s multiple exposures on a single plate were abstracted from their photographic specificity and reduced to the most relevant data points.

For Muybridge, human and animal bodies were very much at the center of his work, and the naked, unruly bodies he photographed were replete with idiosyncrasy and particularity. On several occasions, Muybridge appears in image sequences that might be considered a form of narcissistic self-portraiture. The scale of Muybridge’s project, which comprised hundreds of thousands of image sequences, was marked, as Thom Andersen observes in his film Eadweard Muybridge Zoopraxographer (1975), by an obsessive desire for totality that bordered on pseudoscientific mania. Over decades, Muybridge experimented with various formats for his image sequences, but they were most often presented in grid or sequential form. Although his subjects were frequently positioned in front of a grid, he did not systematically extrapolate from the visual records contained in each individual frame. That is, he did not attempt to apply statistical averaging or create translations of the photographic images into quantified data.

Figure 0.3 Motion studies by Eadweard Muybridge (pictured here) reveled in photographic specificity and bodily idiosyncrasy.

Marey, by contrast, actively suppressed the specificity of the bodies he photographed by having his subjects wear black suits and hoods that concealed facial features and bodily anomalies. His subjects were further outfitted with white dots and lines marking joints and limbs. Marey’s system of chronophotography captured multiple superimpositions on a single image plate, which emphasized the contrast of white lines and dots against hooded black bodies and dark backgrounds. Thus, Marey’s result maximized the computability—or at least the measurability—of bodily movements as they were translated into quantifiable points and lines. Marey’s “data points” were destined not for computation, as such, but for statistical analysis, which was, in turn, designed to maximize the efficiency and regularity of bodily movement. Marey’s work was funded by the French military, partly in hopes of bringing regimentation to soldiers’ movements in the field and presumably to impress upon participants in colonial uprisings its overwhelming mechanistic and industrial power. The bodies captured by Marey were like uniformed soldiers whose individuality had been erased, their images abstracted beyond recognition. In some cases, Marey even traced over his photographic originals, reducing them to pure points and lines on paper. In these tracings, bodies are relegated, along with photography itself, to the role of generating data.38

Figure 0.4 Etienne-Jules Marey’s single-plate chronophotography abstracted bodily movements and particularities into quantifiable points and lines.

A final historical instance of divergent uses of data and images is the work of the French criminologist Alphonse Bertillon and the English eugenicist Francis Galton. Both Bertillon and Galton were active users of photography in the late nineteenth century, and both viewed it as a scientifically valid means for social betterment. Although much has been written on these historical figures, I focus briefly on a structural parallel between Bertillon and Muybridge, on one hand, and Galton and Marey, on the other. Both Bertillon and Muybridge used strategies of fragmentation and multiplicity to generate knowledge about the human body, whereas Marey and Galton both made use of image composites within a single photographic frame to minimize the significance of differences between individual bodies. These divergent strategies may be understood as exemplifying the competing logics of data and images in operation at the end of the nineteenth century.

Like Marey, Galton was interested less in the particularities of his photographic subjects than in a process of statistical averaging, designed to suppress irregularities while highlighting shared visual attributes across multiple faces. In his primary investigation, Galton used the photographic equivalent of mathematical averages—that is, single-plate photocomposites—to identify the facial characteristics that could be associated with an essential “type.” Referring to these composites as “pictorial statistics,” Galton devised a method of multiple superimpositions using a specially designed photographic apparatus and a careful protocol for capturing the photographs. These images purported to distill the recognizable essence of a particular category of humans—not just criminals, but also the poor, unintelligent, immoral, ethnic, or infirm—by printing or projecting a series of precisely registered faces one on top of another. The more images that contributed to the composite, Galton theorized, the more accurate and revealing the visible essence would become, but Galton’s composites were generally produced from fewer than a dozen originals. So intoxicating was this conjunction of the still-developing science of statistics with the art of photography that Galton enthusiastically lectured and published on the subject for many years, sometimes staging elaborate illustrated presentations using multiple magic lanterns to create live superimpositions.
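The arithmetic behind the “pictorial statistics” can be sketched in modern terms as a per-pixel average of pre-registered portraits; the Python below is an analogy of my own devising, not a reconstruction of Galton’s optical apparatus:

import numpy as np

def composite(portraits: list[np.ndarray]) -> np.ndarray:
    # Average a stack of same-sized, pre-aligned grayscale images: features
    # shared across the stack persist, while individual idiosyncrasies wash out.
    stack = np.stack(portraits).astype(np.float64)
    return stack.mean(axis=0).astype(np.uint8)

# Eight stand-in "portraits" of random pixels, matching the scale of Galton's
# typical composite of fewer than a dozen originals.
faces = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(8)]
print(composite(faces).shape)   # (64, 64)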

In the end, Galton’s system of photocomposites remained a novelty that never gained much traction among criminologists. As a practical matter, police investigators and technicians were more interested in systems they could operationalize to catch and identify criminals than in Galton’s underlying goal of breeding undesirable humans out of existence. This stereotypical instance of nineteenth-century scientific racism was deeply informed by concurrent cultural discourses of phrenology, physiognomy, and eugenics, Galton’s own coinage. Although the idea of “pictorial statistics” resonated with the growing science of statistics, its resulting visual composites were more useful for the abstract purposes of proclaiming the inferiority of certain “types” than for actually preventing crime. The appeal of Galton’s photocomposites lay in their promise to make the complex mathematics behind statistical analysis readily understandable through visible means.

Figure 0.5 A Bertillon card featuring English eugenicist Francis Galton includes a photographic likeness but not biometric data.

Alphonse Bertillon, in contrast, privileged the realm of data over that of images in attempting to prevent criminal recidivism. His system of bodily measurement, known as anthropometrics, was supplemented by a codified system of photographic representation, the mug shot. Bertillon’s two-image (frontal and profile) protocol for police bookings is still standard practice in the American legal system today. Perhaps more significant was Bertillon’s system for linking these photographic representations to the bodily measurements that he regarded as a more accurate and mathematically verifiable form of identification. Bodily measurements also provided the primary means of storage and retrieval for each card, with photographs used only as secondary visual confirmation of an accused criminal’s identity. Once Bertillon’s system was implemented, the Parisian police quickly amassed a collection of more than one hundred thousand cards, leaving them with the logistical nightmare of reliably storing and retrieving the data on the cards. Bertillon’s filing system clearly anticipated the need—especially among law enforcement officials—for large-scale, high-speed, networked databases.
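A schematic sketch of measurement-keyed filing in Python follows; the bin sizes, measurement names, and card fields are hypothetical stand-ins rather than Bertillon’s actual classification rules:

cabinet: dict[tuple, list[dict]] = {}

def bin_key(head_length_mm: int, middle_finger_mm: int) -> tuple:
    # Coarsely bin two measurements into a drawer/sub-drawer key.
    return (head_length_mm // 5, middle_finger_mm // 5)

def file_card(card: dict) -> None:
    key = bin_key(card["head_length_mm"], card["middle_finger_mm"])
    cabinet.setdefault(key, []).append(card)

def retrieve(head_length_mm: int, middle_finger_mm: int) -> list[dict]:
    # Return candidate cards; the attached mug shot serves only as final confirmation.
    return cabinet.get(bin_key(head_length_mm, middle_finger_mm), [])

file_card({"head_length_mm": 194, "middle_finger_mm": 117, "mug_shot": "card_0001.jpg"})
print(len(retrieve(193, 116)))   # -> 1: slightly different measurements fall in the same bins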

Evidence that large-scale data generation long predates the existence of computational systems is easy enough to find. Bertillon’s filing system functioned as an analog human-powered database predicated on a standardized system for classification, ordering, and cross-referencing. This system accurately prefigured—or perhaps demanded—the invention of a computerized system for automated retrieval. Or maybe we only see the blueprint for modern computing in Bertillon’s system because we observe it through the overdetermining haze of twentieth-century history, which is unimaginable without the rise of computation. As evidenced by the concurrent rise of photography and biometrics, the use of statistical and numerical systems for documenting and understanding the world continued to run parallel to the pretensions of scientific photography until well into the twentieth century. For Bertillon and many who followed him, images and data functioned in parallel, without duplicating, converging with, or obviating each other.

Figure 0.6 Filing cabinets, which were central to French criminologist Alphonse Bertillon’s anthropometric system of criminal identification, anticipated the need for networked databases.

We could carry on selectively mining the nineteenth century for evidence of technologies that diverge in their approach to data and images. Even the Hollerith machine, which was used to perform the first mechanical tabulation of the U.S. census in 1890, might be placed next to the invention of cinema that same decade as evidence of a technological bifurcation that would drive competing industries centered in the northern and southern regions of California. But enough. It’s time now to shift discussion, in the chapters that follow, to more contemporary matters.

Examining the relationship between data and images at this point in time is revealing precisely because the outcome of their contestation is not yet determined. The conditions of technological possibility and cultural meaning remain productively unresolved. To be clear: I regard this uncertainty as a feature, not a bug. I have no particular interest in being “right” about what the future holds, and even less in ensuring that this examination never comes to seem dated in its interests or critical framing.39 I would likewise embrace the idiosyncrasy and transience of my own interests, which are themselves ephemeral artifacts of the historical moment in which they were conceived. When it comes to digital technologies, the concept of ephemerality has multiple, contradictory meanings.

Unlike the billions of lines of software code written for soon-to-be obsolete applications and operating systems, the material detritus and environmental destruction of the industrial age will not allow themselves to be forgotten anytime soon. For all this book’s talk about data and images, it is important to remember that the computers on which they depend require physical manufacturing, the mining of raw materials, the extrusion of various plastics, and the disposal of waste products of varying degrees of toxicity related to all of these processes. Computer operation, likewise, requires electricity—enlarging humanity’s collective carbon footprint in proportion to the number and speed of the processing cycles required. All of this comes at a measurable cost—both monetary and environmental. I am also thinking specifically of the ideological systems to which we tacitly and often innocently submit: what we perceive as our own limits of technical capability, and who is thought rightly to hold dominion over data and the tools with which it is organized and disseminated. The more we imagine ourselves to be marginal to such systems, the more disempowered we are in broader systems of social and economic power. I argue unambiguously in this book that the “othering” of technological elites is among the most severe forms of collateral damage of our era. The displacement of real anxieties with imaginary ones distracts from critical thinking about the role technology should play in everyday life and, more important, the role that a thoughtful, technologically empowered public can play in shaping the design, development, and regulation of our technological future.

As is probably already apparent, I am neither a futurist nor a historian. Yet, writing about technology requires one to take seriously both the lessons of the past and the implications for the future. I aim for this book to be a document of its time, neither a series of prescient pronouncements nor a monument to the forgotten. I do my best to resist the gravitational pull of futurism, where ever-quickening cycles of obsolescence and renewal make it difficult for sustained cultural critique to perform its primary function: thinking clearly about power-suffused infrastructures and suggesting ways to respond. There is much to learn from previous generations of technology, especially those that turned out to be more or less transient than their inventors expected. Historians will surely one day look back at the promises made on twenty-first-century Kickstarter pages with the same bemusement we bring to claims made in the technology pavilions at world’s fairs of the early twentieth century.

In Techniques of the Observer, Jonathan Crary looks back on the emergence of numerous technologies of vision in the nineteenth century in order to articulate a history that complicates the teleology of precinematic narratives. In Crary’s critique, the appearance of cinema at century’s end provided a narrative “conclusion” that foreclosed understandings of the alternative directions the investigation of imaging technologies otherwise might have taken. What links the disparate array of optical toys and technologies to which Crary attends—thaumatrope, phenakistoscope, zoetrope—is that they are “not yet cinema.”40

We do not yet have the benefit of hindsight in considering contemporary technologies of vision, but it is all too easy to recognize the patterns of historical forces that shape the questions we are able to ask. In this investigation, the great mass media of the previous century continue to cast a long shadow, but the epistemic shift suggested by the transition from radio, TV, and film to computational media offers hope for renewed insight. We seek not a definitive, singular, or nonteleological narrative, but rather the capacity to ask the right questions and to recognize when their answers support systems of power whose interests do not align with our own. In other words, our goal is to think critically and look unblinkingly at the interdetermining forces of media, technology, and power. It is to unpacking these relationships that this book is, finally, devoted.

Notes