In 1726, Jonathan Swift imagined a writing machine whereby “the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.”1 He described a primitive grid-based machine with every word in the English language inscribed upon it. By cranking a few handles, the grid would shift slightly and random groups of half-sensible words would fall into place. Crank it again and the device would spit out another set of non sequiturs. The resulting broken sentences were jotted down by scribes into folios that, like pieces of a giant jigsaw puzzle, were intended to be fitted together in an effort to rebuild the English language from scratch, albeit written by machine. The Swiftian punchline, of course, is that the English language was fine as it was and the novelty of reconstructing it by machine wasn’t going to make it any better. It’s a pointed satire of our blinding belief in the transformative potential of technology, even if in many cases it’s sheer folly. Yet it’s also possible to view Swift’s proposition as an act of uncreative writing, particularly when placed in the context of Pierre Menard’s rewriting of Don Quixote or Simon Morris’s retyping of On the Road.
I can imagine someone today reconstructing Swift’s machine, rebuilding the English language from scratch, and publishing the book as a work of uncreative writing. It would be a rich project, something along the lines of an Oulipian exercise: “Reconstruct the English language from scratch using the 26 letters on a hand-cranked 20 × 20 grid.” Yet the lesson wouldn’t be that much different from Swift’s; in 2010 the English language still functions quite well as is. Would reconstructing it by hand really make it any better or would this be an exercise in nostalgia, hearkening back to the time when reproduction and mimesis were labor intensive? But in the end, we’d probably say, why bother when a computer can do it better?
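And a computer can, almost trivially. As a toy sketch only, Swift’s Lagado engine reduces to a few lines of code: a grid of words that shifts with each “crank” and yields a run of half-sensible words for the scribes to jot down. The vocabulary here is a small stand-in for Swift’s entire English lexicon, and the grid size is arbitrary.

```python
import random

# A toy sketch of Swift's writing engine: a word grid that, with each
# "crank" of the handles, shifts and emits a group of half-sensible words.
# The vocabulary below is a placeholder for the full English lexicon.
VOCABULARY = [
    "philosophy", "crank", "machine", "genius", "labour", "write",
    "random", "language", "grid", "folio", "scribe", "sentence",
    "broken", "reasonable", "charge", "bodily", "study", "theology",
]

def build_grid(words, size=4):
    """Lay the vocabulary out on a size-by-size grid, repeating as needed."""
    cells = [words[i % len(words)] for i in range(size * size)]
    return [cells[r * size:(r + 1) * size] for r in range(size)]

def crank(grid, rng):
    """One turn of the handles: shuffle every row, then read one row off."""
    for row in grid:
        rng.shuffle(row)
    return " ".join(rng.choice(grid))

rng = random.Random(1726)                     # seeded so each run repeats
grid = build_grid(VOCABULARY)
folio = [crank(grid, rng) for _ in range(3)]  # the scribes jot down 3 lines
for line in folio:
    print(line)
```

Each run produces non sequiturs of exactly the kind Swift lampooned: grammatically inert word groups awaiting a human to puzzle them into sense.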
In 1984 a computer programmer named Bill Chamberlain did try to do it better when he published The Policeman’s Beard Is Half Constructed, the first book in English that was penned entirely by a computer program named RACTER. Like Swift’s machine, RACTER reinvented a perfectly good wheel with less than impressive results. The rudimentary sentences RACTER came up with were stiff, fragmented, and surrealist-tinged: “Many enraged psychiatrists are inciting a weary butcher. The butcher is weary and tired because he has cut meat and steak and lamb for hours and weeks.”2
Or it spewed some light romantic cyberdoggerel: “I was thinking as you entered the room just now how slyly your requirements are manifested. Here we find ourselves, nose to nose as it were, considering things in spectacular ways, ways untold even by my private managers.”3
To be fair, to have a computer write somewhat coherent prose by itself is a remarkable accomplishment, regardless of the quality of the writing. Chamberlain explains how RACTER was programmed:
Racter, which was written in compiled BASIC … conjugates both regular and irregular verbs, prints the singular and the plural of both regular and irregular nouns, remembers the gender of nouns, and can assign variable status to randomly chosen “things.” These things can be individual words, clause or sentence forms, paragraph structures, indeed whole story forms.… The programmer is removed to a very great extent from the specific form of the system’s output. This output is no longer a preprogrammed form. Rather, the computer forms output on its own.4
In his introduction to the book, Chamberlain, sounding rather Swiftian, states, “The fact that a computer must somehow communicate its activities to us, and that frequently it does so by means of programmed directives in English, does suggest the possibility that we might be able to compose programming that would enable the computer to find its way around a common language ‘on its own’ as it were. The specifics of the communication in this instance would prove of less importance than the fact that the computer was in fact communicating something. In other words, what the computer says would be secondary to the fact that it says it correctly.”5
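Chamberlain’s description amounts to template-driven generation: sentence and clause forms whose slots are filled with randomly chosen “things,” plus machinery for conjugation and pluralization. A speculative miniature of that scheme (my own reconstruction, not RACTER’s actual compiled-BASIC source, which also handled irregular verbs, noun gender, and whole story forms) might look like this:

```python
import random

# A miniature, speculative reconstruction of RACTER-style generation:
# sentence templates whose slots are filled at random, with naive
# pluralization. RACTER's real code was far richer; none of its handling
# of irregular verbs, gender, or story forms is modeled here.
ADJECTIVES = ["weary", "sly", "spectacular", "tired"]
NOUNS = ["psychiatrist", "butcher", "poet", "machine"]

TEMPLATES = [
    "Many {adj} {nouns} are inciting a {adj2} {noun2}.",
    "The {noun} is {adj} and {adj2} because it has considered the {noun2}.",
]

def pluralize(noun):
    """Naive plural: handles only the regular -s / -es cases."""
    return noun + "es" if noun.endswith(("s", "ch", "sh")) else noun + "s"

def sentence(rng):
    """Pick a template and fill its slots with randomly chosen 'things'."""
    template = rng.choice(TEMPLATES)
    noun1, noun2 = rng.sample(NOUNS, 2)
    adj1, adj2 = rng.sample(ADJECTIVES, 2)
    return template.format(adj=adj1, adj2=adj2, noun=noun1,
                           nouns=pluralize(noun1), noun2=noun2)

rng = random.Random(1984)
print(sentence(rng))
```

Even this crude sketch yields the stiff, surrealist-tinged output quoted above; the program, as Chamberlain says, “forms output on its own,” but only within forms its programmer supplied.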
RACTER’s biggest problem was that it operated in a vacuum without any interaction or feedback. Chamberlain fed it punch cards and it spewed semicoherent nonsense. RACTER is what Marcel Duchamp would call a “bachelor machine,” a singular onanistic entity speaking only to itself, incapable of the reciprocal, reproductive, or even mimetic interaction with other users or machines that might help improve its literary output. Such was the state of the non-networked computer and the primitive science of programming in 1984. Today, of course, computers continually query and respond to each other over the Internet, assisting one another to become ever more intelligent and efficient. Although we tend to focus on the vast amount of human-to-human social networking being produced, much of the conversation across the networks is machines talking to other machines, spewing “dark data,” code that we never see. In August of 2010 a watershed occurred when, over the previous quarter, more nonhuman objects came online with AT&T and Verizon than did new human subscribers.6 This long-predicted situation sets the stage for the next phase of the Web, called “the Internet of things,” where machine-to-machine interaction far outpaces human-driven activity on the networks. For example, if your dryer is slightly off tilt, it wirelessly sends data to a server, which sends back a remedy, and the dryer fixes itself accordingly. Such data queries are being sent every few seconds, and, as a result, we’re about to experience yet another data explosion as billions of sensors and other data input and output devices upload exabytes of new data to the Web.7
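The feedback loop sketched above (device reports a reading, server returns a remedy, device applies it) can be made concrete in a few lines. Everything here is invented for illustration: the device name, the tilt threshold, and the remedy format are assumptions, not any real appliance protocol.

```python
# A minimal, purely illustrative sketch of the "Internet of things"
# feedback loop: a device reports a fault, a server returns a remedy,
# and the device corrects itself. All names and thresholds are invented.
def server_remedy(report):
    """Stand-in for the remote server: map a fault reading to a fix."""
    if abs(report["tilt_degrees"]) > 2.0:
        return {"action": "adjust_feet", "amount": -report["tilt_degrees"]}
    return {"action": "none"}

class Dryer:
    def __init__(self, tilt_degrees):
        self.tilt_degrees = tilt_degrees

    def report(self):
        """What the dryer sends over the wire every few seconds."""
        return {"device": "dryer", "tilt_degrees": self.tilt_degrees}

    def apply(self, remedy):
        """Act on the server's response."""
        if remedy["action"] == "adjust_feet":
            self.tilt_degrees += remedy["amount"]

dryer = Dryer(tilt_degrees=3.5)             # slightly off tilt
dryer.apply(server_remedy(dryer.report()))  # query, remedy, self-fix
print(dryer.tilt_degrees)                   # back to level: 0.0
```

Multiply this exchange by billions of sensors reporting every few seconds and the coming data explosion follows directly.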
At first glance, armies of refrigerators and dishwashers sending messages back and forth to servers might not have much bearing on literature, but when viewed through the lens of information management and uncreative writing—remember that those miles and miles of code are actually alphanumeric language, the identical material Shakespeare used—these machines are only steps away from being programmed for literary production, writing a type of literature readable only by other bots. And, as a result of networking with each other, their feedback mechanism will create an ever-evolving, sophisticated literary discourse, one that will not only be invisible to human eyes but bypass humans altogether. Christian Bök calls this Robopoetics, a condition where “the involvement of an author in the production of literature has henceforth become discretionary.” He asks, “Why hire a poet to write a poem when the poem can in fact write itself?”8 Science fiction is poised to become reality, enacting Bök’s prediction for the literary future:
We are probably the first generation of poets who can reasonably expect to write literature for a machinic audience of artificially intellectual peers. Is it not already evident by our presence at conferences on digital poetics that the poets of tomorrow are likely to resemble programmers, exalted, not because they can write great poems, but because they can build a small drone out of words to write great poems for us? If poetry already lacks any meaningful readership among our own anthropoid population, what have we to lose by writing poetry for a robotic culture that must inevitably succeed our own? If we want to commit an act of poetic innovation in an era of formal exhaustion, we may have to consider this heretofore unimagined, but nevertheless prohibited, option: writing poetry for inhuman readers, who do not yet exist, because such aliens, clones, or robots have not yet evolved to read it.9
It’s not just Bök who is decrying an end to human-produced literature. Susan Blackmore, the memeticist, paints an evolutionary scenario, telling us we’ve already been sidelined by machines and their ability to move information. She calls this new stage the third replicator, claiming that “the first replicator was the gene—the basis of biological evolution. The second was memes—the basis of cultural evolution. I believe that what we are now seeing, in a vast technological explosion, is the birth of a third evolutionary process.… There is a new kind of information: electronically processed binary information rather than memes. There is also a new kind of copying machinery: computers and servers rather than brains.”10 She calls these temes (technological memes), digital information that is stored, copied, and selected by machines. The future doesn’t look promising for us as creative entities. Blackmore says, “We humans like to think we are the designers, creators and controllers of this newly emerging world but really we are stepping stones from one replicator to the next.” Listening to these scenarios, it seems that every direction we turn has already been co-opted by machines, pushing us humans to the sidelines. But what of the reader? Once the human is taken out of the picture, the reader begins to assume a role identical to that of the uncreative writer: moving information from one place to another. Just think of the way you “read” the Web: you parse it, sort it, file it, forward it, channel it, tweet it and retweet it. You do more than simply “read” it. Finally, the long-theorized leveling of roles has been realized where the reader becomes the writer and vice versa.
But wait. Here I am, hammering out original thoughts on unoriginality to convey to you, another human, about the future of literature. Although this book might be available electronically, I can’t wait to wrap my hands around the paper version, making it “real” for me. Ironies abound. Much of what I’ve discussed in these pages, in comparison to Blackmore, Bök, or “the Internet of things,” seems folksy and human driven (humans retyping books, humans parsing grammar books, humans writing down everything they read for a year, etc.). Their predictions make me feel old-fashioned. I’m part of a bridge generation, raised on old media yet in love with and immersed in the new. A younger generation accepts these conditions as just another part of the world: they mix oil paint while Photoshopping and scour flea markets for vintage vinyl while listening to their iPods. They don’t feel the need to distinguish the way I do. I’m still blinded by the Web. I can hardly believe it exists. At worst, my cyberutopianism will sound as dated in a few years as jargon from the Summer of Love does today. We’re early in this game, and I don’t need to tell you how fast it’s evolving. Still, it’s impossible to predict where it’s all headed. But one thing is for certain: it’s not going away. Uncreative writing—the art of managing information and representing it as writing—is also a bridge, connecting the human-driven innovations of twentieth-century literature with the technology-soaked robopoetics of the twenty-first. These pages will inevitably contain references to soon-to-be-obsolete software, discarded operating systems, and abandoned social networking empires, but the change in thinking and in doing from an analog way of writing has been made, and there’s no turning back.