
The New Life of the New Forms: American Literary Studies and the Digital Humanities

Matt Cohen

Passage to India!

Lo, soul! seest thou not God’s purpose from the first?

The earth to be spann’d, connected by net-work,

The people to become brothers and sisters,

The races, neighbors, to marry and be given in marriage,

The oceans to be cross’d, the distant brought near,

The lands to be welded together.

– Walt Whitman, “Passage to India” (1871–2)

Talk of something called “digital humanities” seems to be everywhere lately. What is the “digital humanities,” and what does it have to do with American literary studies? Some confusion about the relation is warranted. As recently as 2008, new media scholar Jay David Bolter claimed that literature departments seem “unreasonably resistant” to using digital technologies (qtd. in Hayles, How We Think), but Matthew Kirschenbaum tells us that “digital humanities has accumulated a robust professional apparatus which is probably more rooted in English than any other departmental home” (Kirschenbaum, “What Is”). Although most of us wouldn’t describe ourselves as working in the digital humanities, we’d probably all agree that the “digital” in digital humanities may soon seem redundant to our students. The objects we study in the academic humanities, after all, will be experienced in great part through digital mediation before long. Does digital humanities constitute a methodology or critical formation that speaks to ongoing and emerging scholarly conversations in the history and criticism of American literature? How does it relate to the way we’ve done things in the past?1

I begin with Walt Whitman’s evocation of the age-old dream of the “passage to India” – a mixture of orientalism, capitalism, and imperialism – because it might remind us of both the latest promises and potential perils of the digital and the “worlding” of American literary studies as an intellectual enterprise through the methodological and political commitments of transnationalism, cosmopolitanism, the circum-Atlantic, and the hemispheric. If what Lisa Lowe calls the “intimacies of four continents” are to be uncovered by research that looks beyond national boundaries and links hitherto obscured archives together, computers, the Internet, and the World Wide Web would seem to offer quick passage to the revelation of North America’s enmeshed histories (191). Whitman’s network fantasy in “Passage to India,” his language of utopia, freedom, and interconnection, often resonates in discussions of new media and digitization. Even critiques of corporatization and loss of net neutrality frequently take a form that envisions technology creating a unified, ethical society. There is something hauntingly familiar about this technological utopianism, and its narrative of progress has been critiqued – at times by senior scholars who perceive a generational rift that a focus on “digital humanities” might be opening (Darnton; Grafton; Poster). “Fluidity” and “democracy” are terms often associated with the Internet, but these characteristics are not intrinsic; an enormous computational and infrastructural effort is required to make Internet-based computer work appear fluid. And as Google’s move in and now out of China shows, the conjunctional politics of the Web, what kinds of affiliations it will allow us to find and to forge, fluctuate.

This chapter argues that debates surrounding the rise of “digital humanities” have on the one hand involved a sense of a radically different future for American literary studies and on the other hand partaken of many of the major shifts in American literary methodology described elsewhere in this volume. It outlines some of the competing definitions and constituents of digital humanities, but it does not attempt to define “digital humanities” as anything other than a set of works, practices, and institutions concerned with emergent questions about technology and humanity. I focus on methods, sketching a topography of four digital humanities domains that impact literary studies: electronic imaginative compositions, analytical approaches to digital humanities, archives, and institutional politics. In doing so, I confess to taking a certain part in Whitman’s paratactic urge, hoping for connectedness among literary humanists, in part by acknowledging that technology divides – sometimes excitingly – no less than it unifies.

Electronic Imaginative Compositions

I begin with digital literary production, because it is in this avant-garde realm that the need for traditional humanities scholars to embrace methods emerging out of computational environments is most clear. Writers, artists, and designers have for the past few decades been experimenting with what Whitman would call “the new life of the new forms” in the electronic realm, vectoring and transforming traditional artistic practices through the use of computation and networked media. Electronic imaginative compositions, including writing in digital formats, new media works, video games, and composition through social media, have all changed how writers imagine what is possible.

To analyze such works, as media theorist Lev Manovich has claimed, we still need “old media” tools. Film, for example, has in Manovich’s view shaped the visual and haptic structures of computational interfaces and even to an extent hardware and software logics, serving as the “language” in which designers have long created software and operating systems. Many electronic compositions are process oriented, reflecting on the conditions of their creation and intelligibility, and thus calling attention, as Manovich does, to the process by which we adjust to the kinds of signification that new media make possible. That’s a method that digital compositions share with their antecedents in print and manuscript. But in two ways they demand of scholars an adaptation in reading methods: in the number of their articulated modalities, and at the level of code. If poetry has always called attention to its embodiment as a part of its meaning making, the formal properties of electronic compositions often combine text, still and moving images, sound, and, in the case of interactive installations, touch and movement. Certainly literary scholars are good at reading interfaces; the recent upsurge in textual and visual studies continues a long history of thinking about how writing is delivered, traceable through thinkers like Gertrude Stein, William Morris, John Milton, Socrates, theologians, and writers of sacred texts. New media compositions often rely upon motion, particularly in the context of video games and video “mashups” (remixings of visual content from multiple, usually popular, sources and unified by a theme or often a song); in the case of “machinima,” the two are combined, as gamers take screen sequences from video games and recombine them into new narratives, often with operatic flair.

Traditional tools for analyzing rhetoric and more recent ones from film studies take us a long way in unpacking such works. But literary scholars will also need to understand something about code and its relation to the interface if we are to appreciate the way electronic imaginative compositions reflect upon process and, by extension, the ways they articulate the politics of the digital to the new forms emerging from its affordances. At root, electronic artists are doing the same old, excellent thing: theorizing and contesting culture through, and in relation to, a medium or media. But code introduces new twists: it isn’t quite language, it isn’t quite a genre, and it isn’t quite a medium. Code is the set of commands that programmers issue to a computer; those commands are converted into machine language in order to “execute” a program or command. Code is human readable and systematic, but so systematic as not to function with the flexibility of a human language; it would be hard to analyze it as literary in itself. Still, as with any technology, the use of code is a social act, no less than a formal practice; an inquiry into the structural relation of code to meaning making in “born-digital” formats is a key methodological contribution of the digital humanities.

Some light is shed on that relation by the history of what is called variously “electronic literature” or “new media poetics” by some of its influential critics (Hayles; Morris and Swiss). Computation, of course, influenced imaginative writing long before computers were common. And poets, perhaps constitutionally unafraid of the difficulties of distribution, began to use computers before the World Wide Web made getting digital works to audiences easier. In the Web’s early years, writers like Michael Joyce and Shelley Jackson used hyperlinks to reroute traditional narrative and, it would seem, to put narrative paths in the hands of users. “To its advocates,” Adalaide Morris observes, “hypertext appeared to materialize the still vibrant poststructuralist dream of processual, dynamic, multiple signifying structures activated by readers who were not consumers of fixed meanings but producers of their own compositions” (Morris and Swiss 12). But the paths in these early works are hard-wired: code like HTML (HyperText Markup Language) ensures that the links between different parts of a story are relatively stable. And certainly the different parts of the story are “authored” in a traditional sense, fixed in files called by the code whenever readers click on a link that references them.2 Here code functions more or less invisibly, and more or less rigidly, to structure the reading experience. Indeed, as Joyce was writing his famous afternoon, a story (1987), he was simultaneously involved in the very design of the compositional software used to create many such works, Storyspace. His afternoon, a story is made up of a group of small individual texts, or lexia, joined by hyperlinks. The lexia contain elements of a story, and the hyperlinks constitute a group of potential pathways through which the lexia can be read to develop a narrative. 
The hyperlinks are coded within the lexia; unlike a database, in which the relationships between different sets of information (or tables, as they are called) are determined at the level of the tables and particular kinds of data within them (i.e., in which the limitations imposed by code function at the level of structure), here code influences meaning at the level of transitions in the narrative.
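To make the hard-wired quality of early hypertext concrete, a toy model (not Storyspace's actual file format, and with placeholder text rather than Joyce's) might represent each lexia as a block of text whose outgoing links are fixed in advance; every reading, however various, is a walk along authored paths:

```python
# Toy model of hard-wired hypertext: each lexia's links are fixed in the
# data itself, so the reader chooses among paths the author coded in.
# (Illustrative only; not Storyspace's actual format.)

lexia = {
    "begin": {"text": "A beginning.", "links": ["left", "right"]},
    "left":  {"text": "One continuation.", "links": ["begin"]},
    "right": {"text": "Another continuation, an ending.", "links": []},
}

def follow(path):
    """Return the texts encountered along a sequence of link choices."""
    node = "begin"
    read = [lexia[node]["text"]]
    for choice in path:
        if choice not in lexia[node]["links"]:
            raise ValueError(f"no link from {node!r} to {choice!r}")
        node = choice
        read.append(lexia[node]["text"])
    return read
```

A reader who clicks an unauthored connection simply cannot take it; the constraint lives in the data, which is the sense in which code structures the reading experience invisibly but rigidly.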

Though he had an important role in its architecture, Joyce didn’t actually write the code for afternoon, a story, or even for Storyspace – Storyspace is software that, something like a word processor, makes it easy to build hypertextual structure into a narrative. But more recent artists and writers have written their own code. As the Web became more dynamic in the first decade of the twenty-first century, and as users of it have become more comfortable with Internet-based communication, writers and artists have responded both by creating structures that allow for user input of content and by highlighting the mediation of their creations by code itself. Writers like Loss Pequeño Glazier and Mary-Anne Breeze articulate together a variety of kinds of code, including scripting languages (programming codes, such as Perl and Python, that give commands to a computer’s operating system), Java, databases, and software such as Flash or QuickTime, to animate words, draw on an ever-changing set of source texts, create random links, and denaturalize the way a user interacts with a computer screen. One of the best and longest running of such code-based literary performances is jodi.org, an ever-shifting interface poem that reflects our Internet habits and aesthetics back to us, sometimes by simulating a system crash or a viral invasion; sometimes by seeming to proliferate pop-ups that furtively chase around the screen; sometimes by offering downloads of now-“ancient” video games – all the while using “digital retro” fonts and colors reminiscent of 1980s computer interfaces. 
Literary critics are used to reading works for the ways in which they call attention to their own underpinnings; the increasing numbers of digital humanists who analyze new media compositions call not only for a recounting of the history of such new forms, but also for a method that adds the flexible structuring element of code to the linguistic, material, and political analysis of digital media (see Fuller; Mackenzie; Montfort and Wardrip-Fruin).

This holds true even in the realms of games and social media. Video games are big business; if literature and film have a competitor, surely it’s games. Games too have followed an increasingly networked, user-participatory development path. There are many video games that call attention to their own conventions, such as Conker’s Bad Fur Day (2001), which features both an ironic avatar (the character “played” by the user) who critiques the player’s performance and “cut scenes” (motion picture scenes interspersed within the game play) that parody well-known films such as A Clockwork Orange. At times, to win such games, the user must resist or invert the customary kinetics used to make things happen in game play. A notorious example is a combat scene in Metal Gear Solid (1998), in which the player must unplug her own controller in order to win an important fight. In other games, such as Mass Effect 2 (2010), the player-avatar relationship is even more unstable, and the game is not clearly “won” or “lost,” but rather ends – in a way with which the player may or may not be happy – in one of a large number of possible outcomes brought about by the player’s choices. Game studies, or ludology, is a rapidly expanding field, and recently has taken an important turn toward the political.3

The study of social media, too, is burgeoning. Though social media networks such as Twitter, Facebook, and MySpace are rigidly structured by code (and controversially so, in the case of Facebook), they have become important to the imagination of the social, and have even been used as vehicles for creative work (“twitterature”). Novel and poetry writing via Twitter and blogs is a common practice; Jay Bushman, for example, refracts major American literary works in his Spoon River Metblog and The Good Captain (a Twitter novel based on Benito Cereno). Many social network-delivered texts have made it into print as well, though perhaps unsurprisingly the blog format seems to outpace Twitter as a starting point for traditional print remediation (Perez). Social networks are important to scholarly communication as well, focalizing the confrontation of traditional scholarly communications networks with the more open publication forms made possible by the Web. The 2009 Modern Language Association conference was a case in point. In addition to digital humanities panels that boasted heavy attendance (with frequent tweeting) during the conference, Brian Croxall of Emory University used social media to raise the question of academic politics. His scheduled paper was titled “The Absent Presence: Today’s Faculty.” Croxall, on the market for the third time, got no job interviews and so couldn’t come to the conference. He published the essay, a critique of the expense of attending MLA and its differential effects on contingent faculty, on his blog, where it was read by thousands and reviewed and discussed both online and in print. The convergence of social media and labor issues in the academy in the publication event that Croxall’s paper became, like the efflorescence of new media works, speaks to the way the digital enters the literary studies world from many angles, and to the many combinations of the digital and political.

Analysis

If electronic imaginative compositions ask us to analyze culture and computation through the interpretive process, other work in the digital humanities does so in a literary-critical mode. Some of this work takes digital culture itself as its subject – as in the case of N. Katherine Hayles’s work on subjectivity and on electronic literature – and some of it uses information technology to study literature.4 Computers and literature have a long relationship. Indeed, the essay widely considered to be the imaginative origin of today’s personal computer, “As We May Think” by Vannevar Bush, was published in a literary venue, Atlantic Monthly, in 1945, alongside poems, short stories, and advertisements for such works as Upton Sinclair’s Dragon Harvest and the collected poetry of W.H. Auden. The Nobel Prize-winning South African writer J.M. Coetzee wrote a dissertation in computational linguistic stylistics at the University of Texas at Austin in the 1960s, part of an early wave of a methodology still prominent today. Scholars of rhetoric and composition were also early adopters of computer-based techniques for teaching writing, hosting peer review, and archiving revision histories (Hockey).

Such endeavors anticipated the explosion of humanities tool building that is happening today and that is an important source of questions about what “new forms” literary scholarship might be taking. Most of us are familiar with keyword searching, which has accelerated work in the literary humanities even as it has made it possible to ask new questions. Text mining (or, more broadly, data mining) radically extends the field of semantic searching by automating the aggregation of raw linguistic (or, if you’re ambitious, formal) information about a text or sometimes an unprecedentedly large group of texts. Computational linguists are on the cutting edge of such work, having analyzed linguistic corpora for decades – and having long debated the scholarly standards for generating and studying them. In recent years, linguists and digital literary studies projects that use linguistics methods have turned to the vast data sources made available by Google’s book digitization project and by free access archival projects such as Project Gutenberg, or a host of more specialized sites. The MONK Project, for instance, describes itself as “a digital environment designed to help humanities scholars discover and analyze patterns in the texts they study”; its data sources include major electronic literary collections. The analytical tools freely available from MONK enable studies of a corpus that at the moment totals around 150 million words. Topic modeling, an approach that uses linguistics algorithms to find sets of words that tend to occur in proximity to each other, can be used to discover categories and themes automatically in large bodies of electronic text, regardless of genre and even language. Equally thriving at the moment are “geochron” visualizations, which use Geographic Information Systems (GIS) and cultural-historical data to create maps that show change over time and space.
The availability of linguistics and visualization tools to literary scholars – or, more precisely, their increasing ease of use as well as the growing availability of digital versions of the texts that literary critics study – marks an important moment of intermethodological potential. One could imagine, for example, a GIS-based digital companion to Jacob Riis’s How the Other Half Lives (1890) that offered user-manipulable maps of New York City linked to demographic, photographic, and legal data, including the Ellis Island and Port of New York records.
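A minimal sketch of the kind of aggregation that text mining automates, reduced here to raw word counts and sentence-level co-occurrence over a two-text corpus (projects like MONK operate on millions of words, with far more sophisticated linguistic processing than this):

```python
# Minimal text-mining sketch: tally word frequencies and the pairs of
# words that co-occur within a sentence. Proximity counts of this kind
# are the raw material from which topic-modeling algorithms infer
# thematic clusters. (A toy illustration, not any project's actual code.)
from collections import Counter
from itertools import combinations
import re

def mine(texts):
    """Aggregate word frequencies and sentence-level co-occurrence pairs."""
    freq, pairs = Counter(), Counter()
    for text in texts:
        for sentence in re.split(r"[.!?]+", text):
            words = sorted(set(re.findall(r"[a-z']+", sentence.lower())))
            freq.update(words)
            pairs.update(combinations(words, 2))
    return freq, pairs

corpus = [
    "The sea rolled. The dark sea rolled on.",
    "A dark shore faced the sea.",
]
freq, pairs = mine(corpus)
```

Even at this scale the method's character is visible: the computer counts exhaustively and indifferently, and interpretation begins only when a human decides which patterns in the counts are worth asking about.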

Still, the jury seems to be out on data mining and how it will inflect literary interpretation. Franco Moretti’s much-discussed concept of “distant reading” speaks to similar concerns. In effect, Moretti’s research group creates visualizations of the sort of data patterns that text mining can produce. Take, for example, all of the works of Henry James, and search through them for place names. After disambiguating the tricky ones (Florence, Kentucky, versus Florence, Italy, for instance), reference these names to a standard toponym locator list, and then map them on a Google Earth map. For Moretti, who often works with groups of novels much larger than this, such an exercise is capable of revealing surprising things about a set of texts, or even about what we have taken to be genres. But more importantly, he hopes, it will force us to ask new questions about literature and society – evoking the implicit excitement of Whitman’s “oceans to be crossed, the distant brought near” in a data-mining passage to India. Such work need not be done only on a large body of texts. Tanya Clement has analyzed Gertrude Stein’s The Making of Americans using computational linguistics tactics combined with readings of the novel’s form, which has enabled her to discern structures hitherto unnoticed by critics – and, by extension, to suggest a less postmodern interpretation of Stein’s literary goals. But here again, whether the data set is small or large, we encounter the problem of code: driving the mining of literary corpora are algorithms created by humans. Often the results we get are uninteresting, because when we give the computer constraints in order to clarify a query, we eliminate so much evidentiary “noise” that the answer seems predictable based on what we already knew. The challenge is, then, to create algorithms that facilitate surprise (Ramsay).
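The workflow just described, extracting toponyms, disambiguating them against a gazetteer, and emitting coordinates for mapping, can be sketched under simplifying assumptions; the tiny hand-made gazetteer and the crude "preferred country" disambiguation rule below stand in for the standard toponym locator list and the human judgment a real project would require:

```python
# Toy distant-reading pipeline: resolve place names against a gazetteer
# and emit coordinates ready for plotting. The gazetteer entries and the
# prefer-one-country disambiguation rule are illustrative assumptions.

GAZETTEER = {
    "Florence": [("Florence, Italy", 43.77, 11.26),
                 ("Florence, Kentucky", 38.99, -84.63)],
    "Paris":    [("Paris, France", 48.86, 2.35)],
}

def locate(names, prefer="Italy"):
    """Resolve each toponym to (label, lat, lon), preferring one country."""
    points = []
    for name in names:
        candidates = GAZETTEER.get(name, [])
        if not candidates:
            continue  # unrecognized names silently drop out of the map
        preferred = [c for c in candidates if prefer in c[0]]
        points.append((preferred or candidates)[0])
    return points
```

Note what the sketch makes visible: the disambiguation rule is an interpretive decision baked into the algorithm, which is precisely where the "problem of code" enters distant reading.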

Creating algorithms and building tools – understood as analogous to, say, devising a reading practice or producing a scholarly edition – are traditional literary studies activities. Having machines do the work of searching for key groups of terms or formal patterns introduces new difficulties, but revisits some old ones as well. American literary studies has played an important role in a broader humanistic interrogation of the ways in which tools and algorithms, even constructed with the best of intentions, can reinforce or reinvent hierarchies of power and visibility, and even interactional protocols, that work to the disadvantage of many people. The question of differential access to computing and networked resources – a problem in the United States, to be sure, but perhaps more of a problem globally – has had a lot of press. But other overlaps between the concerns of American literary critics and those of digital humanists also offer food for thought.

One scene of such overlaps might be found in the work of Kimberly Christen and of the online journal Vectors, in which Christen’s work has appeared. The shared ground here is the relationship between Native American studies and American literature as a field. These are concerns that inform my work in the colonial American context. I’ve been working for some time on understanding the history of the relations between colonialist and indigenous communications systems. Increasingly, my conversations with American Indian scholars and community leaders at conferences have turned to questions about tribal representation on the Web. For many indigenous people, the stakes of controlling access to cultural information are high, even as the need to present a public face for purposes of achieving or maintaining sovereignty calls for a certain amount of self-exposure. While the Native American Graves Protection and Repatriation Act provides guidance for the physical realm, allowing for the return of bodies and objects to tribes from anthropological collections and other holdings, the question of its applicability to the digital realm is still open.

Controlling access to information about indigenous cultural heritage is a problem that anthropologists of aboriginal Australia have been grappling with for years, using database technologies and interface design. Christen’s projects are designed to privilege the cultural restrictions of information sharing in indigenous contexts. Her Mukurtu Wumpurrarni-kari Archive uses Warumungu cultural protocols to restrict access to the database based on users’ information: by family, gender, status in the community, and country of origin. Within the archive, content is also organized according to Warumungu cultural categories. Christen and lead developer Craig Dietrich created and maintain the archive in an ongoing collaboration with Warumungu community members. With Chris Cooney, Christen published a conceptual version of the archive’s interface in the online journal Vectors; the interface makes the argument that Western users’ notions of free and open access to information are not universal. Google’s simple search box, which seems a window onto the world, represents and spurs an attitude about the free availability of information that may not serve everyone’s interests equally. Christen’s work raises questions about what sort of archival representation indigenous North American materials will have in the literature databases of the future. Should scholarly standards require that the works of, say, Samson Occom and William Apess be edited and archived in collaboration with indigenous communities? If so, then what about a case like A Narrative of the Life of Mrs. Mary Jemison, in which an adopted Seneca tells a story through a translator to a white writer? What implications does the “cultural protocols” approach have more broadly for building cultural heritage archives (considering that, among other things, “protocols” have a specific legal definition in Australia but not in the United States)?
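The logic of protocol-driven access, in which visibility is computed from a user's community relationships rather than flipped by a single open/closed switch, might be sketched as follows (a drastic simplification for illustration, not the Mukurtu platform's actual data model or protocols):

```python
# Sketch of protocol-based access control: each item carries cultural
# restrictions, and what a user can see is computed from their profile.
# (A drastic simplification, not the Mukurtu archive's actual schema.)

def visible(item, user):
    """An item is shown only if the user satisfies every restriction."""
    rules = item.get("restrictions", {})
    return all(user.get(field) == required for field, required in rules.items())

archive = [
    {"title": "Public photograph", "restrictions": {}},
    {"title": "Ceremony recording",
     "restrictions": {"community_status": "member", "gender": "female"}},
]

def browse(user):
    """List the titles this user's relationships entitle them to see."""
    return [item["title"] for item in archive if visible(item, user)]
```

The design point is that restriction is the default condition the interface argues for: the same query returns different archives to different users, inverting the Google-style assumption of one universally open window.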

Archives

In all of the projects published by Vectors, interface and data structure are the argument. Such an emphasis reminds us that whenever we use a resource like the Walt Whitman Archive, we must ask hard questions about not only what it contains, but also how it organizes and encodes that content, who participates in the design, what interactivity it allows (such as user searching and manipulating), and what interoperability it encourages with other data sources or software. These concerns emerge also out of the critiques of traditional editing and bibliography by Jerome McGann, D.F. McKenzie, and others, the embrace of which helped to catalyze literary archive building in the early 1990s.

Sandra Gustafson, in a 2005 address titled “The Emerging Media of Early America,” praised the potential of Internet-delivered research media for American literary and cultural studies. “The emerging media of today,” she wrote, “can help us to better understand and preserve the emerging media of early America, making visible the range of textual forms from wampum belts to staged readings” (249). In effect, we will be able to do history better, Gustafson argues, to see more. No doubt this is true: many of us have, in doing our own work, already experienced the impact of the databases that Gustafson praises, such as the Digital Evans and Early English Books Online. But for some digital humanists, the defining inquiry is not how new media and new delivery systems can answer old questions. Rather, does the digital platform promote more emergence? What are the new questions spurred by tracing the implications of multimedia history, or raised when literary scholars create new chapters in that history today by making digital tools, resources, or publications?

For Cathy Davidson, such questions, together with the possibility of opening humanities work to large communities beyond the academy, represent what she terms “Humanities 2.0.” “Web 1.0” was the old web – the hyperlink-based, comparatively static online experience described above as characterizing the first wave of electronic literature. “Web 2.0,” from which Davidson takes her inspiration, is the social media-driven, interactive, dynamic experience of the Internet of the moment: sites remember who you are, Gmail parses your emails to generate advertisements targeting the interests your keywords suggest, and customization and information exchangeability are widespread. For Davidson, Web 1.0 is old news, and the humanities must rethink itself along the same lines as Web 2.0 by making interactivity central and assuming that the world’s literature will soon be equally available to everyone online. But Meredith McGill and Andrew Parker have recently argued an opposing position: eschewing the notion that “every medium has its own epoch and every epoch its own medium,” they use digitized sources to emphasize that literature was always a multimedia phenomenon (960). “It makes less sense,” McGill and Parker conclude, “to ask whether new media demand new forms of literature and literary criticism than to ask how new media make possible apprehensions of a literary past that is different from what we had anticipated and that has yet to arrive” (966).

For Hayles, the question isn’t whether to transform literary criticism entirely or to turn back to old objects with new eyes. If the digital humanities approach advocated by McGill and Parker addresses itself to traditional humanists by extending “existing scholarship into the digital realm,” as Hayles puts it, the one Davidson represents, “by contrast, emphasizes new methodologies, new kinds of research questions, and the emergence of entirely new fields” (How We Think 17). For Hayles, both approaches can be productive, depending on institutional context, and they both operate under the same coupled risk in the larger context of today’s academy: “If the Traditional Humanities are at risk of becoming marginal to the main business of the contemporary academy and society,” she warns, “the Digital Humanities are at risk of becoming a trade practice held captive by the interests of corporate capitalism” (22).

The things “betwixt and between” emerge when one looks with such a perspective: does Google Books, for example – a massive data-generating project – really hold Web 2.0 promise? It could, but at least at the moment, it is still structured around producing advertising revenue and user metrics for Google, not an information exchange and manipulation realm for users. Google Books certainly didn’t finish Web 1.0 from a literary-archival standpoint. Most of the world’s manuscripts, much of its print and architecture, much of its sheet music and its ephemera remain undigitized. Much of what is in Google Books is still under copyright (for an extended critique of Google, see Vaidhyanathan). Davidson rightly surmises that “digitizing (with interoperability and universal access) the entire record of human expression and accomplishment would be as significant and as technologically challenging an accomplishment of the information age as sequencing the human genome or labeling every visible celestial object” (714). But those like Kim Christen who have digitized even small parts of that record would suggest that it’s not so simple. If cultural protocols offer one obstacle to the Western vision of a total record, technical ones lurk, too: substandard digitization, such as the often-unproofed Optical Character Recognition (OCR) that makes Google Books searchable, or completely untranscribed images or sound files, which may afford opportunities for mass analysis but are problematic for many literary humanities uses. Need the theory of Web 2.0 really be shared by all projects? We get a lot of cachet in the scholarly world for declaring revolutions, but the humanities has thrived in the past through a conjunctive analytics of the retro and the prophetic.

In thinking about the digital archives that are expanding American literary scholars’ reach and perhaps changing their methods, then, it’s important to consider the code – the architecture of such resources – alongside the longstanding questions about archival politics and the canon. American literature has been the site of some of the most important electronic archival developments, considered in terms of data generation or user interactivity. Ed Folsom and Kenneth Price’s Walt Whitman Archive has a weekly audience in the tens of thousands, has received a National Endowment for the Humanities Challenge Grant to create an endowment to support its work, and not only is freely accessible but also makes its encoded texts available to users for manipulation or analysis. Digital archives that focus on Emily Dickinson, Herman Melville, and Uncle Tom’s Cabin have been influential in creating models for further archival development. They have also often emphasized making materials available for researchers and students that are less commonly available than the literary ones: correspondence, marginal annotations, images, maps, and sound. Digitization projects for North American newspapers are proceeding across the country, at state and national levels, and sites like the Making of America and HarpWeek have created access to a long history of periodical literature.

Such sites are home to contest and experimentation, not just to a utopian dream of archival completeness. The Women Writers Project takes gender as its primary organizing rubric, in an environment in which the literary archives that tend to get the most press – Jerome McGann’s Rossetti Archive, the Whitman Archive, the Blake Archive, among them – are based on single, white, male authors.5 Edward Whitley’s The Vault at Pfaff’s archive also challenges the single-author archival paradigm, taking as its axis the literary and artistic community that used Pfaff’s Beer Cellar in nineteenth-century New York City. Radical Scatters, a digital archive of Emily Dickinson’s manuscripts edited by Marta Werner, reflects the rise of methodological interest in the materiality of texts and how bibliography and textual studies might be challenged to think about categories, such as gender and sexuality, that they have traditionally downplayed. We might also keep asking why writers with fascinating multimedia careers such as Frederick Douglass, Sojourner Truth, and William Wells Brown haven’t been among the leading subjects of scholarly electronic archival construction.6

The Our Americas Archive Partnership (OAAP), based at Rice University, is worth a close look for the way it addresses two major methodological concerns in recent American literary studies at the levels of content, organizational structure, and code. The first concern is hemispheric approaches to cultural studies, and the second, related one is the question of translation in American literary studies. In imagining a multilingual, multi-institutional archive, the creators of OAAP decided to hire a translator for the project, to collaborate with institutions in other countries where possible (beginning with Mexico’s Instituto Mora), and to federate already-existing archives instead of attempting to be a single data source.7 The software that runs the project periodically sends an electronic query to partner archives’ servers, based on the particular formats used at each archive, and then generates an updated searchable master list of citations with links. This coding architecture mirrors the federative logic of the OAAP’s scholarly scheme: create conversations among different fields, while observing the local conditions of intellectual production in each. When you find a resource through the OAAP’s search engine, it will take you out of the OAAP interface and into the original archive’s web space. Thus the archive can expand (or not) at each home institution – with the caveat that given the relationship to OAAP, each archive must maintain interoperability standards in order to participate in the larger, aggregated data source (OAAP).
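The federative logic described here can be sketched in code. The following Python fragment is a minimal, hypothetical illustration of the pattern, not the OAAP's actual software: every name, record format, and URL in it is invented for the example. Each partner archive exposes records in its own format; per-partner adapters normalize them into a shared citation schema; a periodic harvest rebuilds a searchable master list whose entries link back into each archive's own web space.

```python
# Hypothetical sketch of an OAAP-style federated harvest. All record
# formats, fetchers, and URLs below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Citation:
    title: str
    date: str
    url: str       # link back into the partner archive's own web space
    archive: str   # which federation partner holds the item

# Per-partner adapters: each knows the "particular format" its archive uses.
def from_rice(record):
    return Citation(record["title"], record["date"], record["link"], "Rice")

def from_mora(record):
    # Instituto Mora records imagined here as Spanish-keyed dictionaries
    return Citation(record["titulo"], record["fecha"], record["enlace"],
                    "Instituto Mora")

def harvest(partners):
    """Poll every partner and merge results into one master citation list."""
    master = []
    for fetch, normalize in partners:
        for record in fetch():   # in production, an HTTP query per partner
            master.append(normalize(record))
    return master

def search(master, term):
    """Naive full-text search over the aggregated citations."""
    term = term.lower()
    return [c for c in master if term in c.title.lower()]

# Stand-in fetchers; a real deployment would query each partner's server.
rice_fetch = lambda: [{"title": "Travel diary, 1846", "date": "1846",
                       "link": "http://example.org/rice/1"}]
mora_fetch = lambda: [{"titulo": "Carta sobre Texas", "fecha": "1836",
                       "enlace": "http://example.org/mora/7"}]

master = harvest([(rice_fetch, from_rice), (mora_fetch, from_mora)])
hits = search(master, "texas")
print(hits[0].archive)
```

Note how the design choice mirrors the scholarly one: the aggregator stores only citations and links, so each home institution retains control of (and responsibility for) its own materials, while the adapters absorb local differences in format.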

At the moment, at least, the OAAP’s website doesn’t foreground the way in which its technical infrastructure and its methodological goals are mutually constructed. But it should, because digital archives in this early moment of the development of such resources make interpretive claims that can be valuable not only in widening discussion of the resources we use to do literary studies but also in serving as models (or countermodels) for future projects. Matthew Kirschenbaum, in his book Mechanisms, emphasizes the methodological challenges that face us when we study the history of born-digital works, including electronic archives built by scholars. When we study digital creations, we must take into consideration the relations among their parts, hardware and software environments, interface changes, and other material factors, even as we consider the intellectual, political, or spiritual goals that inspired their makers.

Politics and Institutions

Kirschenbaum’s Mechanisms won, among other awards, the 2008 Modern Language Association First Book Prize. That a work focused on problems in the literary history of digital objects won this prize marks an important moment in the profession, acknowledging as it does the broader importance of digital humanities to literary studies. In a new essay (2010), Kirschenbaum emphasizes the political dimensions of digital humanities, including the use of the term as “a free floating signifier, one which increasingly serves to focus the anxiety and even outrage of individual scholars over their own lack of agency amid the turmoil in their institutions and profession” (6). Such anxieties can make an easy target of digital humanities work, for reasons worth looking at closely, and on the other hand can lead to a personal investment in digital humanities driven as much by frustration at institutional resource contests as by an interest in research problems.8

Institutional questions are particularly important for Americanists to consider, in part because American literary studies has been axial in the first major wave of institution building in the digital humanities. The Whitman Archive, the Valley of the Shadow project (which focuses on the Civil War), and Stephen Railton’s Uncle Tom’s Cabin site were fostered by the Institute for Advanced Technology in the Humanities (IATH) at the University of Virginia. Funding secured by those projects helped ground IATH’s reputation and influence. Martha Nell Smith, one of the editors of the Dickinson Electronic Archives, was founding director of the Maryland Institute for Technology in the Humanities, and Kenneth Price, coeditor of the Whitman Archive, helped found the University of Nebraska’s Center for Digital Research in the Humanities. These two institutes are thriving sites of experimentation, tool building, and archive building, but they also host workshops and provide consultation, fostering dozens of new projects in just the past few years. Cathy Davidson and David Theo Goldberg founded the Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC), an important communications center for work in digital humanities. The presence of leading American literature scholars on editorial boards and the willingness of some of its senior scholars to take up the field’s challenges have been key to the viability of digital humanities in literature programs generally.

As this roll call intimates, digital humanities garners a level of financial support unusual in the humanities. The National Endowment for the Humanities formed a special Office of Digital Humanities in 2006, and has elaborated a range of new funding initiatives under its jurisdiction, offering millions of dollars in grants to a wide range of projects, workshops, and planning activities. Major private foundations like the Mellon Foundation are also deeply invested in digital humanities development; Project Bamboo, a humanities infrastructure development project, is a multi-institutional Mellon initiative, for example. The question the project uses to describe its purpose – “How can we advance arts and humanities research through the development of shared technology services?” – has a corporate twang that may rankle, even as it seems deliberately broadly phrased. But Bamboo is driven by the insight that electronic research efforts in the humanities often reinvent the wheel when it comes to basic platforms, services, and software, in part because of insufficient communication and in part because proliferating digital initiatives local to particular universities and colleges encourage (often pedagogical) innovation but not integration with research projects worldwide.

I think both the rigorous peer review and the creative grant categories that have been created are advantageous to the humanities broadly. The Digital Humanities Start-Up Grants, in particular, explicitly describe themselves as encouraging innovation even if it means failure. The program encourages reporting to a broad public and information sharing among projects, in part by hosting a meeting of all awardees at the beginning of the funding period that was eye-opening for many of us in attendance. But it’s certainly true that the rapid increase in funding available to humanities projects in one particular category raises questions. Whether private or public, that money is not going to more traditional sources of literary studies support, or, say, to traditional print purchasing in libraries. Libraries have sometimes been eager partners in digital humanities projects because they are a way of addressing digitization demands while bringing in resources through the research areas of the university. For some libraries, the cheaper maintenance costs per page of digital collections may be an important source of opportunity; for others, there is a tension between the demands of maintaining renowned print collections and bringing those to a wider audience through digital preservation. But especially in the financial environment of the past few years, resources for physical purchasing in libraries have diminished, spending on digital resources has gone up, and cultural critics seem to insist on the importance of both.

We also still lack a broad culture of evaluation of digital humanities work – in many places, it receives no more than a nominal valorization – which raises questions about how we train our graduate students or mentor junior faculty with promising electronic projects. In the sciences, training grants for first-year students are common, which buys time for students to learn basic laboratory techniques and get a feel for the field before they enter full-time research in a particular lab. There is no such structure for graduate students in the humanities; the creation of one could do double duty in providing time for language training (a strong need in American literature) and, for students interested in digital humanities research, training in programming, scripting, database, or interface design.

What’s more (or, for many cultural studies scholars, what’s worse), corporate interests have long fueled the rise of digital humanities in the United States. Though digital humanities is a born-global field, having featured for years the kinds of international collaborations sought by literary scholars, technology development and Internet-based resource delivery are still dominant in the United States – and that should give us pause as we consider from what standpoint digital humanities work is done, what objects it focalizes, and what rewards it garners. IATH got its start from an IBM donation in the early 1990s. Apple is a regular contributor to higher education. The most recent striking example is Google’s million-dollar digital humanities research grants program. The political orientation of American literary studies is a crucial aspect of digital humanities: it’s a question of ideology among researchers and funders, not just the availability of technology or widespread access. In the debate about the future of the humanities, then, digital scholarship is one of several flashpoints. Should literary scholars embrace the possibilities of collaborating with corporations on digital projects to increase the visibility and perceived “relevance” of the humanities? Or should we insist, even as we reconfigure the academy itself by providing electronic collaboration beyond the university’s boundaries, on the peculiarly eccentric and non-profit-oriented qualities of humanistic work?

Conclusion: “The Lands to Be Welded Together”

There’s something eerie about Whitman’s choice of metaphor in this line, evoking an industrial technology whose power seems to be irrefutable and irreversible. Yet in certain ways, the dream of interconnectivity does entail a mutual reconfiguration from which we cannot return, and with whose consequences we must grapple. “The computerization of culture,” Lev Manovich writes, “not only leads to the emergence of new cultural forms such as computer games and virtual worlds; it redefines existing ones such as photography and cinema” (9). “Doing” digital humanities with American literary materials and contexts, then, is a reflexive exercise, one that thrives when it both engages the way in which it inherits older concerns and restages them, as Kirschenbaum’s work does, in electronic form (e.g., bibliography, problems of the archive, and close linguistic analysis of texts). Digital humanities as reflexive practice also offers the potential to revisit past stories about media in light of our experience of studying the new, as those who use social media or crowdsourcing or who build free-access archives do, and to transform traditional literary concerns or help invent new modes of analysis, as Moretti claims he does. The colonial desires and the dreams of a unified, divine humanity that structured Whitman’s refraction of the passage to India are the contested subjects of a digital humanities no less than are the questions of the reshaping of the human as a reciprocal function of the computational, or the transformation of a textual into a multimedia aesthetic.

ACKNOWLEDGMENTS

The author thanks Nicole Gray, Rob Nelson, Kate Hayles, Matt Kirschenbaum, Kenneth Price, Lars Hinrichs, Travis Brown, and Jason Baldridge for the assistance and advice they offered toward this chapter.

Notes

1. The term “digital humanities” was vectored into common usage by John Unsworth, founding director of the Institute for Advanced Technology in the Humanities, in the title of his edited collection with Ray Siemens and Susan Schreibman, A Companion to Digital Humanities, and by the National Endowment for the Humanities Office of Digital Humanities, founded in 2006. It replaced the term “humanities computing,” which seemed to limit the field unnecessarily to computational work (see Svensson; Hayles, How We Think; Kirschenbaum, “What Is”; McCarty; Schreibman et al.).

2. Landow was an early hypertext theorist; his work is critiqued in Aarseth; see also Glazier, Digital Poetics, and Glazier’s poetry, which often thematizes digital processing and delivery modes. The Web has also affected the archiving and distribution of poetry and, in particular, of lesser-known and avant-garde works; see UbuWeb at http://ubu.com.

3. See, for example, Dyer-Witheford and de Peuter. The author thanks Chris Ortiz y Prentice for conversations about ludology that informed this paragraph.

4. Other recent examples of the study of digital culture, or of the effects of the digital on the humanities, include Gitelman; Bolter and Grusin; Nakamura; and Montfort and Wardrip-Fruin.

5. See also the debate over the Whitman Archive in the October 2007 issue of PMLA. Here one of the key points of debate was whether or not databases can be understood as a genre (as Manovich has suggested). It seems to me that as with code, it is important to understand databases as social acts first, as constituting a form but not necessarily a genre.

6. Sources for these writers can be found in a variety of places, in varying states of annotation: at the Library of Congress, see, for example, “Sojourner Truth: Online Resources” (http://www.loc.gov/rr/program/bib/truth) and “The Frederick Douglass Papers” (http://rs5.loc.gov/ammem/doughtml/doughome.html); and the University of Detroit Mercy Libraries’ “Black Abolitionist Archive” (http://research.udmercy.edu/find/special_collections/digital/baa).

7. This architecture borrows from that of the Networked Infrastructure for Nineteenth-Century Electronic Scholarship (NINES), http://www.nines.org.

8. See for example the conversation at Ian Bogost’s blog (http://www.bogost.com/blog/the_turtlenecked_hairshirt.shtml), following from a post in response to Cathy Davidson’s musing on the humanities in the wake of the Brian Croxall paper controversy described above.

References and Further Reading

Aarseth, Espen. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press, 1997.

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 1999.

Breeze, Mary-Anne (Mez). “Solo Show at Java Museum.” 2002. http://www.javamuseum.org/mez/solo/index.htm

Bush, Vannevar. “As We May Think: A Scientist Looks at Tomorrow.” Atlantic Monthly, July 1945, 101–08.

Bushman, Jay. The Good Captain. The Loose-Fish Project. June 5, 2009. http://twitter.com/#!/goodcaptain

Bushman, Jay. Spoon River Metblog. The Loose-Fish Project. December 27, 2008. http://jaybushman.com/spoon-river-metblog/

Christen, Kimberly, and Chris Cooney. “Digital Dynamics across Cultures.” Vectors: Journal of Culture and Technology in a Dynamic Vernacular 2, no. 1 (2006). http://vectorsjournal.org/archive/

Christen, Kimberly, and Craig Dietrich. “Mukurtu: Wampurranrni-kari.” N.d. http://www.mukurtuarchive.org

Clement, Tanya. “ ‘A Thing Not Beginning and Not Ending’: Using Digital Tools to Distant-Read Gertrude Stein’s The Making of Americans.” Literary and Linguistic Computing 23, no. 3 (2008): 361–81.

Coetzee, J.M. “The English Fiction of Samuel Beckett: An Essay in Stylistic Analysis.” PhD dissertation, University of Texas at Austin, 1969.

Croxall, Brian. “The Absent Presence: Today’s Faculty.” December 28, 2009. http://www.briancroxall.net/2009/12/28/the-absent-presence-todays-faculty/

Darnton, Robert. “The Library in the New Age.” New York Review of Books, June 12, 2008, 72–9.

Davidson, Cathy N. “Humanities 2.0: Promise, Perils, Predictions.” PMLA 123, no. 3 (2008): 707–17.

Drucker, Johanna. SpecLab: Digital Aesthetics and Projects in Speculative Computing. Chicago: University of Chicago Press, 2009.

Dyer-Witheford, Nick, and Greig de Peuter. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press, 2009.

Folsom, Ed, and Kenneth Price. Eds. “The Walt Whitman Archive.” 2010. http://www.whitmanarchive.org

Fuller, Matthew. Ed. Software Studies: A Lexicon. Cambridge, MA: MIT Press, 2008.

Gitelman, Lisa. Always Already New: Media, History, and the Data of Culture. Cambridge, MA: MIT Press, 2006.

Glazier, Loss Pequeño. Digital Poetics: The Making of E-Poetries. Tuscaloosa: University of Alabama Press, 2002.

Grafton, Anthony. “Future Reading: Digitization and Its Discontents.” New Yorker, November 5, 2007, 4.

Gustafson, Sandra. “The Emerging Media of Early America.” Proceedings of the American Antiquarian Society 115 (2005): 205–50.

Hayles, N. Katherine. Electronic Literature: New Horizons for the Literary. Notre Dame, IN: University of Notre Dame Press, 2008.

Hayles, N. Katherine. How We Think: The Transforming Power of Digital Technologies. Chicago: University of Chicago Press, 2010.

Hockey, Susan. “The History of Humanities Computing.” In A Companion to Digital Humanities. Eds. Susan Schreibman et al. Oxford: Blackwell, 2004.

Jackson, Shelley. Patchwork Girl; or, a Modern Monster. Watertown, MA: Eastgate Systems, 1995.

Joyce, Michael. afternoon, a story. Watertown, MA: Eastgate Systems, 1990.

Kirschenbaum, Matthew. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: MIT Press, 2008.

Kirschenbaum, Matthew. “What Is Digital Humanities and What Is It Doing in English Departments?” ADE Bulletin, 2010. http://mkirschenbaum.files.wordpress.com/2011/03/ade-final.pdf

Landow, George. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins University Press, 1992.

Lowe, Lisa. “The Intimacies of Four Continents.” In Haunted by Empire: Geographies of Intimacy in North American History (pp. 191–212). Ed. Ann Laura Stoler. Durham, NC: Duke University Press, 2006.

Mackenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006.

Manovich, Lev. The Language of New Media. Cambridge, MA: MIT Press, 2002.

McCarty, Willard. Humanities Computing. New York: Palgrave Macmillan, 2005.

McGann, Jerome. Radiant Textuality: Literature after the World Wide Web. New York: Palgrave Macmillan, 2001.

McGill, Meredith, and Andrew Parker. “The Future of the Literary Past.” PMLA 125, no. 4 (October 2010): 959–67.

McKenzie, D.F. Bibliography and the Sociology of Texts. Cambridge: Cambridge University Press, 1999.

The MONK Project. New York: Andrew W. Mellon Foundation. http://www.monkproject.org

Montfort, Nick, and Noah Wardrip-Fruin. Eds. The NewMediaReader. Cambridge, MA: MIT Press, 2003.

Moretti, Franco. Graphs, Maps, Trees: Abstract Models for a Literary History. New York: Verso, 2005.

Morris, Adalaide, and Thomas Swiss. Eds. New Media Poetics: Contexts, Technotexts, and Theories. Cambridge, MA: MIT Press, 2006.

Nakamura, Lisa. Digitizing Race: Visual Cultures of the Internet. Minneapolis: University of Minnesota Press, 2007.

Our Americas Archive Partnership. October 26, 2010. http://oaap.rice.edu

Perez, Sarah. “Twitter Novels: Not Big Success Stories Yet.” ReadWriteWeb. September 2, 2008. http://www.readwriteweb.com/archives/twitter_novels_not_big_success_stories.php

Poster, Mark. Information Please: Culture and Politics in the Age of Digital Machines. Durham, NC: Duke University Press, 2006.

Ramsay, Stephen. “Toward an Algorithmic Criticism.” Literary and Linguistic Computing 18, no. 2 (2003): 167–74.

Schreibman, Susan, Raymond Siemens, and John Unsworth. “The Digital Humanities and Humanities Computing: An Introduction.” In A Companion to Digital Humanities. Eds. Susan Schreibman et al. Oxford: Blackwell, 2004.

Svensson, Patrik. “The Landscape of Digital Humanities.” Digital Humanities Quarterly 4, no 1 (2010). http://digitalhumanities.org/dhq/vol/4/1/000080/000080.html

Vaidhyanathan, Siva. The Googlization of Everything: How One Company Is Transforming Culture, Commerce, and Community, and Why We Should Worry. London: Profile Books, 2011.

Werner, Marta. Radical Scatters: Emily Dickinson’s Fragments and Related Texts, 1870–1886. Ann Arbor: University of Michigan Press, 1999.

Whitman, Walt. “Preface.” In Walt Whitman, Leaves of Grass. 1855. The Walt Whitman Archive. Eds. Ed Folsom and Kenneth M. Price. http://www.whitmanarchive.org