So many grand claims are now being made about digital technologies: how they are all radical, new, groundbreaking, earth-shattering. It is hard to separate hype from hope, and both of these from current reality. It is hard to see how, and in what ways, the “new” technologies are truly new and different, and do represent a radical break with the past; and how, and in what ways, they are continuous with the past, and in a sense just more of the same. Sorting this out is complicated by the fact that so much of the current discussion — in books and magazines; on television, radio, and the Internet — is highly technical and jargon-filled. If you don’t know what XML or ASCII is, or what T1 lines or ISPs are, you are immediately lost. But even if you do understand the ins and outs of standards and protocols and such, you risk getting so caught up in the intricate technicalities that you lose sight of what is most simple and straightforward.
So, in a spirit of inquiry, I want to look closely at the nature of digital documents, at their basic architecture. What are they made of? How are they structured and constructed? What dimension of reality do they inhabit? My hope is that by addressing these questions conceptually (with little recourse to technical jargon), we can begin to sort out what’s new and what is not.
As a starting point, I will make a broad claim of my own: Much of what is powerful, but also confusing and uncertain, about digital documents comes from their schizophrenic nature. Digital materials have undergone a kind of schizophrenic split, at least as compared with their counterparts on paper. A paper document is complete in itself, with the communicative marks inscribed directly on the writing surface — one or more sheets of paper. The ensemble is a self-contained, bounded object. It weighs a certain amount, feels a certain way, and is always located somewhere: on your desk, in a briefcase, on your refrigerator, or folded and stuffed in a pocket.
Its digital counterpart, however, has a divided existence; it lives a double life. On the one hand, you have a digital representation. This is the collection of bits stored on a floppy disk, on the hard drive embedded in your workstation, or on a fileserver. This is your Microsoft Word or WordPerfect file (for a mainly textual document), or your JPEG or TIFF file (for a scanned photograph), or your JavaScript file (for a piece of animated graphics), or your MP3 file (for recorded sound). While the digital representation is necessary, it is hardly sufficient. For the simple, and possibly profound, truth is that you can’t see the bits. You can’t see them, you can’t hear them, you can’t touch or smell them. They are completely inaccessible to the human senses. Which means that they can’t communicate with us, they can’t talk to us or for us. Not directly, anyway. This makes the digital representation, in and of itself, an extremely poor choice as a medium of communication.
But the digital representation is only half the story. It serves as a generator for other things that are directly accessible to the senses, that can speak to us and for us. From the Microsoft Word file a sequence of letterforms can be displayed on the screen or on paper. From a JPEG or TIFF file, an image can be similarly realized. From the JavaScript file, an animated sequence can be produced on a workstation screen, and from the MP3 file, voice or music can be made to ring forth. Digital materials are made up of both the digital representation and the perceptible forms produced from it.
The digital representation is a kind of “master,” a generator that allows you to make an indefinite number of copies. There are two different senses in which you might be said to make copies from a digital master, and I want to be careful not to confuse them. On the one hand, you can make copies of the representation itself, the bits. (You do this when you make a copy of a Microsoft Word file, for example.) But you could also be said to make copies from the digital representation: when you create perceptible forms, say, by printing your Microsoft Word file or displaying it on your computer screen. It is this second sense of copying I am particularly interested in here, because this is how you go from the bits to something you can read or hear.
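To make the distinction concrete, here is a minimal sketch in Python (the file names and the sample text are invented for illustration). Copying the file duplicates the bits; reading the file back and printing it “copies” in the second sense, manufacturing a perceptible form from the stored codes.

```python
import shutil

# Create a tiny digital representation: a sequence of character codes on disk.
# (The file name and text are made up for this example.)
with open("letter.txt", "w", encoding="utf-8") as f:
    f.write("Dear reader,\nThis is the document.\n")

# Sense 1: copy the representation itself -- a second, identical set of bits.
shutil.copyfile("letter.txt", "letter_copy.txt")

# Sense 2: "copy" from the representation -- generate a perceptible form
# (visible characters on the screen) from the stored codes.
with open("letter.txt", encoding="utf-8") as f:
    print(f.read())
```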
This method of making copies is actually quite ancient. For several thousand years at least, people have known how to create stamps, templates, or patterns from which a set of identical artifacts could be manufactured. Coins are one of the first instances of this. As long ago as the fifth century b.c.e., gold and silver coins bearing inscriptions were minted from bronze dies.1 The use of seals and signet rings to impress a “signature” (or some other identifying mark) is even older.2 The Louvre, for example, has in its collection the cylindrical seal of an Akkadian scribe that dates to the twenty-third century b.c.e.3 Block printing, which involved carving text and images on a wood block, then inking the block and transferring its images onto smooth surfaces (skins, fabric, or paper), was known in the East as early as the eighth century c.e.4 The same technique was widely available in the West by the fourteenth century.5
In all these cases, the stamp or pattern is a unitary thing. Letterforms or other images are carved into a single block of wood, for example. This makes it hard to correct a mistake. If you’ve misspelled a word, you may just have to start over again. It also means you’re unlikely to be able to reuse a portion of the text or image (as opposed to the whole thing) for some other purpose. The invention of movable type changed all of this. With movable type, each time you want to create a new pattern — for a page of text, say — you select the individual, previously cast stamps you need and arrange them to suit your current purposes. Mistakes are correctable — small ones easily, others less so — by replacing or interchanging the individual stamps. And when you’re done, all the individual stamps can be recovered and reused for new projects.
The Chinese are credited with first coming up with the idea. In the eleventh century c.e., a technique was developed whereby individual characters could be fashioned from earthenware, fired in a kiln to harden them, and assembled in an iron form.6 It never took off, however, apparently because of the huge number of Chinese characters that would have been needed. The idea reappeared in fifteenth-century Europe, independently invented by Gutenberg and his contemporaries. This time, the stamps (or type, as they are normally called) were cast in metal. And the smaller number of symbols needed to write Western languages made it a much more practical scheme.
In Gutenberg’s technique there are actually three separate manufacturing steps in which a template or pattern is produced and used. In the last of these, individual pieces of type are selected, arranged, and “locked up” in a metal frame called a chase to produce a forme from which multiple pages can be printed. For this scheme to work, however, you need to be able to produce lots of type, both to spell out all the words on the page — a typical page of text may have a hundred or more lowercase e’s — and to replace type that has become worn. Gutenberg’s solution, like the one found by the Chinese inventors before him, was to cast each individual piece of type from a pattern — or a matrix, as it is usually known. But how do you impress the shape of the character into the matrix so it can be used to cast the type? The answer — as Gutenberg, a jeweler by trade, developed it — is to cut a metal stamp, or punch. (Think of a rubber stamp for a letter, except smaller and made of metal.) This, when pounded into the soft metal of the matrix, leaves the concave impression of the character.
At first blush, there is something distinctly odd about this process. Why would you cut a punch to create a matrix to cast a piece of type? Why not just carve or cut the individual pieces of type directly, thereby eliminating two time-consuming steps in the process? The answer is simple: cutting either a punch or a piece of type is itself time-consuming. It would take an extremely long time to carve by hand all the pieces of type you’d need for printing — thousands of pieces.7 But whereas cutting a piece of type gives you, in the end, a single piece of type, cutting a single punch makes it possible to manufacture a great many pieces of type. From a single punch for the letter e, you can make many matrices, and from each matrix you can cast many pieces of type. The whole point is to make lots of copies.
This method of printing is called letterpress — a technique in which the raised, inked surfaces of the type are impressed directly onto the paper. In Gutenberg’s day, this was the only method available, and it was all done by hand: the punches cut, the matrices made, the type cast and composed (arranged), and the pages printed. From the nineteenth century on, however, in the attempt to meet the growing market for print publications, these steps were automated, and new techniques were developed, such as phototypesetting, that eliminated the need to cast pieces of metal type. If we can now compose and edit documents on computers with relative ease, it is because this stream of developments met up with another stream of innovation: the invention of the digital computer.
The idea of linking the printing press and computers actually precedes the modern era of computation. In the 1820s, Charles Babbage designed and partly implemented his “difference engine,” a computer intended to calculate tables of mathematical functions to the twentieth place. This machine, like its modern descendants, needed to do more than make correct calculations: it needed to present the results in a form a person could understand. Babbage took this into account and designed the difference engine so that its results could be printed from plates without a human intervening to typeset the results.
The first modern, digital computers date to the 1940s. (ENIAC, completed in 1945, was designed to help the war effort by computing artillery firing tables.) Over the course of the next two decades, punched cards, punched and magnetic tape, and paper were adopted as the output media on which both programs and data could be recorded. For the computer to cause a letter a, say, to be printed on a piece of paper, it needed to send a signal or code to the output device that amounted to the command “print a letter a.” Character codes — numerical codes standing for the letters of the alphabet, the numerals, and punctuation marks — had first been developed in conjunction with the telegraph.
The telegraph, invented in the early nineteenth century, is a device for sending electrical signals representing characters across a wire to a receiving device. The sender would translate a message (made up of letters, numerals, and punctuation) into a sequence of long and short taps on a telegraph key. In Samuel Morse’s first version of the 1830s, the electrical signals corresponding to these taps caused a device on the other end to emboss dots and dashes on a paper roll. Twenty years later the more familiar device was invented, which translated the received signals into audible sounds: dits and dahs. In either case, a human operator was needed to translate these codes back into a more comprehensible form: spoken or written language.
Later still, teleprinters were invented; these were essentially electric typewriters that could send an electrical signal when a key was depressed, and could print the corresponding character when its code was received. The best known of these devices is AT&T’s teletype, which was developed in the 1920s. The use of teleprinters effectively eliminated the need for human translation and interpretation of the character codes. Now, rather than translating the letter a into a sequence of dots and dashes, the operator simply pressed the a key on the teletype.
For those most directly involved in the invention of modern computing, the printing of a result was generally of secondary importance. The main act was the computation the computer was performing. Clearly, you needed to supply certain inputs to the computer (the data) and get back the results of the computation. But for the mathematicians and engineers designing the computers and the programs that ran on them, what went on under the hood held their greatest fascination: the design of the hardware and software, the working out of elegant algorithms to compute results. The minds of these early computer scientists were firmly fixed inside the box.
But sometime in the 1960s — when exactly, I’m not sure — a subtle shift began to take place, a partial reorientation of focus. Greater attention began to be paid to the inputs and outputs. The first step in this movement was the development of tools (software) that could help programmers write and modify computer programs. Sitting at a teletype, for example, a programmer could type in computer instructions line by line. The program would be saved in the computer’s memory, and the programmer could display lines of it and modify them. The purpose of the program, of course, was to get the computer to do something, to perform some kind of calculation. The written program, displayed on the teletype, was an instrument for something else, a computation.
By the mid-1960s, however, it had begun to dawn on programmers that tools for displaying and modifying computer programs could be used to modify and display other kinds of textual material — documents — intended solely for human consumption. Conceptually, this was a huge step. Now, instead of seeing text just as an input to the computer, as necessary but of secondary importance to the computation inside the machine, text in these new cases became the primary object of the user’s attention, with the computations inside the computer (the operations needed to edit and display the text, for example) taking a back seat. I don’t mean to suggest that this awareness was necessarily present in people’s minds as such, but with hindsight we can see that the effect of these developments was to give text a new, privileged status.8
In 1967, Peter Deutsch and Butler Lampson, two computer scientists who would play prominent roles at Xerox PARC in the next decade, wrote a journal article called simply “An Online Editor.” In it they argued for the value of a new kind of software, called an “editor,” which could be used to edit not only programs but “reports, messages, or visual representations,” and which, they claimed, was superior to a keypunch. Listen to them trying to explain distinctions that are now second nature to us:
One of the fundamental requirements for many computer users is some means of creating and altering text, i.e., strings of characters. Such texts are usually programs written in some symbolic language, but they may be reports, messages, or visual representations. The most common means of text handling in the last few years has been the punched card deck. Paper tape and magnetic tape have also been used as input, but the fact that individual cards can be inserted and deleted manually in the middle of a deck has made the keypunch the most convenient and popular device for editing text.
With the appearance of online systems, however, in which text is stored permanently within the system and never appears on external storage media, serious competition to the punched card has arisen. Since the online user has no access to his text except through his console and the computer, a program is needed, however minimal, to allow him to create and modify the text. Such a program is called an editor and presupposes a reasonably large and powerful machine equipped with disk or drum storage. The user therefore gains more convenience than even the most elaborate keypunch can provide.9
These early editors were quite primitive by today’s standards. Users were stuck with whatever typeface was available on the printer connected to their mainframe computer — typically a fairly primitive, fixed-width typeface, akin to a typewriter font. (In some cases, printers printed only in capital letters.) And if users wanted to do even the simplest arranging or formatting of their text, such as centering a title or creating a tabular format, they needed to do the work by hand — e.g., by inserting spaces or tabs themselves. The early text editors were a lot like typewriters, with one major exception: the computer maintained a memory of your text, and you could modify it.
At roughly the same time, programmers began to develop another kind of document preparation tool, called a formatter. This was a program that could take an unformatted text (produced by a text editor) as input, and could produce a formatted text as output; centering, underlining, and the setting and justification of margins were some of the early features. To make this work, the user had to supply formatting codes, embedded in the text, indicating, say, that a particular phrase should be centered or underlined.
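The idea is easy to sketch in Python. The directives below (“.center” and “.underline”) are invented for illustration; real formatters of the period, such as RUNOFF, had their own command vocabularies, but the principle of embedded codes driving the layout is the same.

```python
# A toy formatter: unformatted text goes in, formatted text comes out.
# Lines beginning with a period are treated as embedded formatting codes.
WIDTH = 40  # an arbitrary page width for this sketch

def format_lines(lines):
    out = []
    for line in lines:
        if line.startswith(".center "):
            out.append(line[len(".center "):].center(WIDTH))
        elif line.startswith(".underline "):
            text = line[len(".underline "):]
            out.append(text)
            out.append("-" * len(text))  # "underline" by ruling a line below
        else:
            out.append(line)
    return out

source = [
    ".center A Modest Report",
    "The body of the report begins here,",
    ".underline and this phrase is underlined.",
]
print("\n".join(format_lines(source)))
```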
So, by the late 1960s, the basic architecture of digital documents was in place. What you had was the ability to create and store the digital representation of a text. The elements of this representation were character codes, each code standing for a letter, a numeral, a punctuation mark, or a control command. You could also display the text so represented: the character codes could be translated into commands to a printer to display the corresponding visible symbols on paper. The digitally encoded text could be stored on external media: tapes, disks, and cards. And the text could be modified by the user and again printed or stored.
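A few lines of Python can stand in for this whole arrangement (the sample string is arbitrary): the stored text is nothing but a sequence of numeric character codes, and displaying it means translating each code back into a visible symbol.

```python
# A minimal sketch of the basic architecture: text stored as character codes,
# then "displayed" by translating each code back into a visible symbol.
text = "Hello."
codes = [ord(ch) for ch in text]       # the digital representation
print(codes)                           # [72, 101, 108, 108, 111, 46]
print("".join(chr(c) for c in codes))  # the perceptible form: Hello.
```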
One other innovation extended these capabilities in important ways: the use of display terminals as input and output devices. The first displays were made from cathode ray tubes (CRTs), still one of the main computer display technologies in use today. (Invented in the late nineteenth century, the CRT found use in early television in the 1930s and in radar during World War II.) The advantage of such a technology was obvious enough: marks on a screen, as opposed to those on paper, could be quickly — indeed instantly — changed.
Once CRTs were coupled to computers, programmers began experimenting with techniques to display and edit text on them, and some of the basic capabilities that are now second nature to computer users all over the world were invented. You could, for example, type on a keyboard, and as you typed, the characters would appear on the screen, much as with a typewriter. But unlike a typewriter, you could press the backspace (or rub-out) key, and the characters would be erased; they would simply disappear. You could also insert characters into a line of text and have the characters following the insertion point move to the right; or you could delete characters and the characters to the right would move left to fill in the gap.
By the early 1970s the following setup was common in well-equipped academic and industrial computer laboratories: If you wanted to write a paper, say, you could sit down at a display terminal connected to a time-shared computer. There you would type in your text, edit it, and save it in a file. You could print out the text directly on a line printer. Or, if you wanted a better-looking result, you could insert formatting codes into the file (codes to indent and right-justify paragraphs and to insert page numbers) and run the formatting program. The result could then be printed on the printer.
But in the mid-1970s, a new type of document preparation tool was developed that integrated editing and formatting capabilities, and that tried to eliminate (or at least minimize) the differences between marks on the screen and marks on paper. This type of editor was given the name WYSIWYG, an acronym standing for “what you see is what you get.” Using such an editor, you could create rich textual displays on the screen, and when you printed out this material on paper (using a laser printer), the result would look “the same” as it had appeared on the screen. You could type in text and see it immediately displayed on the screen. This much wasn’t new, but in addition you could change the “look” of selected portions of text on the screen by issuing commands with the keyboard and mouse. You could select a portion of text and italicize it, make it bold, or change the typeface. You could change the margins, the interline spacing, the paragraph indentation, and so on. And (this was the WYSIWYG part) no sooner had you issued these commands than the changes appeared on the screen. You could see the word italicized, the paragraph now indented, the space between lines increased. Not only that, but when you printed out the document, it looked exactly the same on paper as it appeared on the screen — typefaces, spacing, and all. This is what WYSIWYG meant: what you see on the screen is what you get on paper. If these abilities are obvious enough now, it is because the tools we use today are direct descendants of these early prototypes. Microsoft Word, for example, is a direct successor of the first WYSIWYG text editor, called Bravo, which was developed at Xerox PARC.
The way WYSIWYG works is conceptually quite simple. As you type, adding text to your document, each keypress causes an electronic code to be generated. This code, the character code for the key you’ve just pressed, is inserted by the editor into the template, the file, it is maintaining for your document. As each character code is received, the editor also issues a command causing the letterform corresponding to that character code to be displayed on your computer screen. In the days of the typewriter, the relationship between keypress and visible mark was of course much simpler, a single causal step. By pressing the key, you caused a hammer with a piece of type at the end of it to strike an inked ribbon lying against a piece of paper. Now the relationship between keypress and image is indirect: pressing the key first causes a character code to be recognized and saved; this character code is then used to generate a visible image.
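Here is a minimal sketch of that indirect path, assuming a toy editor class (the class and its method names are invented; a real editor’s buffer and display machinery are far more elaborate, but the three steps are the same).

```python
# Keypress -> character code -> stored in the file -> letterform on the screen.
class ToyEditor:
    def __init__(self):
        self.buffer = []          # the "file": a list of character codes

    def keypress(self, ch):
        code = ord(ch)            # step 1: the key generates a character code
        self.buffer.append(code)  # step 2: the editor saves the code in its file
        self.display(code)        # step 3: the code is used to draw a letterform

    def display(self, code):
        # Stand-in for drawing a glyph on the screen.
        print(chr(code), end="")

editor = ToyEditor()
for ch in "hello":
    editor.keypress(ch)
print()
```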
What happens, say, when you italicize a portion of your text? You typically do this by sweeping your mouse across a portion of text on the screen and issuing a command to italicize the selected portion. What actually happens takes multiple steps. Each character on the screen corresponds to a character code in the file. But the editor also keeps track of where the characters are on the screen and exactly which character code in the file each character corresponds to. So when you sweep the mouse across the screen (this is actually indirect too, since your rolling the mouse on the table causes a cursor on the screen to move), the editor figures out which character codes in the file are being selected. When you then issue the command to italicize, the editor in effect appends a note to the character codes in the file, indicating that they are now italic. It also issues commands to the screen to display the italicized versions of the characters. Here too, then, the operation proceeds indirectly: from characters on the screen to character codes in the file and back to characters on the screen.
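The same round trip can be sketched in a few lines of Python. The buffer of (code, attributes) pairs and the function names are invented for illustration; real editors keep far richer bookkeeping, but the principle is the one just described: annotate the stored codes, then redraw the screen from them.

```python
# Each stored character code carries a set of attributes; italicizing appends
# a note to the selected codes, and the screen is redrawn from the file.
# (Italic is shown here as UPPERCASE, a stand-in for a real italic typeface.)
buffer = [(ord(ch), set()) for ch in "plain and fancy"]

def italicize(start, end):
    """Mark the character codes from start to end (exclusive) as italic."""
    for _, attrs in buffer[start:end]:
        attrs.add("italic")

def redisplay():
    """Stand-in for redrawing the screen from the annotated codes."""
    return "".join(
        chr(code).upper() if "italic" in attrs else chr(code)
        for code, attrs in buffer
    )

italicize(10, 15)   # the user swept the mouse across "fancy" on the screen
print(redisplay())  # -> plain and FANCY
```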
What WYSIWYG does, however, is to hide all this indirection and backstage manipulation. It maintains the illusion that there is no separation or distinction between the digital representation and the marks on the screen. You’re meant to believe that there is just one thing, one unified thing: your document. And when you print “it” out, the result on paper is meant to look just like what appeared on the screen — as if there were no distinction here either, between marks on screen and marks on paper. But the truth is, no matter how masterful the illusion, there really are three different kinds of materials in use: the invisible digital representation, the visible marks on the screen, and the visible marks on paper.
Up to this point I have been focusing on the ability to create and manipulate text by digital means. But the brilliance of the digital architecture I have been describing is that it accommodates other communicative forms just as well. So long as you can develop the right codes (digital representations) and input/output devices capable of trafficking in these codes, you can display and edit these nontextual forms as well. And so, over the course of the last thirty years or so, tools and techniques have been developed to create and manipulate diagrams and other still images, moving images (both synthesized animation and recorded movement), and sound.
The groundbreaking work in manipulation of static graphics was Ivan Sutherland’s Sketchpad, a computer program developed at MIT in the early 1960s. Sketchpad allowed the user to construct complex line drawings. These could be displayed on the screen, and they could be modified on the screen as well. Using a lightpen as an input device, the user could edit drawings much as the user of a text editor edited text. With the lightpen, for example, the user could select a line segment by pointing to it on the screen, then issue a command to delete, move, or stretch it. Thirty-five years after Sutherland first demonstrated such capabilities, they have been absorbed into our regular cultural practices and it is hard to remember how remarkable they were. So it may be helpful to hear what Ted Nelson had to say, writing in his 1974 manifesto, Dream Machines:
If you have not seen interactive computer display, you have not lived.
Except for a few people who can imagine it. . . most people just don’t get it till they see it. They can’t imagine what it’s like to manipulate a picture. To have a diagram respond to you. To change one part of a picture, and watch the rest adapt. These are some of the things that can happen in interactive computer display. . . .
For some reason there are a lot of people who pooh-pooh computer display: they say it’s “not necessary,” or “not worth it,” or that “you can get just as good results other ways.”10 [Emphasis in original.]
It was Ted Nelson who first coined the word “hypertext.” Nelson and Douglas Engelbart are considered to be the fathers of computer-based hypertext, the ability to link fragments of text together via computer, allowing the reader to follow a link from one piece of text to another. (The more recent term “hypermedia” is a further generalization of hypertext, in which not only text but other media types, such as static graphics, animation, and sound, are linked together.) Vannevar Bush is generally credited with coming up with the idea of hypertext (but not the name); his Memex system — envisioned in a paper published in 1945 but never implemented — stored text fragments on microfilm. Yet the notion of non-linear webs of text is an ancient one — surely as old as annotation — and other hypertext-like designs precede Bush’s in the twentieth century.11
What makes computer-based hypertext and hypermedia possible is the basic architectural premise of digital documents: the separation of digital representation and perceptible form. One of the basic building blocks of modern computing since its early days has been the notion of a link or a reference: the ability to embed within a sequence of computer instructions a pointer to some other location in the computer’s memory and to the instructions or data there. At its heart, a computer program is a sequence of instructions, like a cookbook recipe: do this, then this, then this. Early on, however, programmers realized the importance of breaking the linear sequence of steps. At times you might want to “jump” to a different set of instructions, stored elsewhere in the computer’s memory.
Nelson and Engelbart (and no doubt others too) noticed an interesting parallel between a computer executing instructions and a person reading a text. In both cases, following a linear sequence was the norm; and in both cases there were times when you might want to break the sequence and jump somewhere else. And so, by embedding within a sequence of character codes a link or pointer to another sequence of character codes, you could let the reader decide whether to keep reading linearly or jump to the second piece of text.
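A toy version of such a link takes only a few lines of Python. The fragment store, the fragment names, and the ("link", ...) convention are all invented for illustration; the point is simply that a pointer to another stretch of text can sit inside a sequence of character codes, and the reader (here, a follow_links flag) decides whether to take the jump.

```python
# Fragments of text, one of which embeds a link (a pointer) to another.
fragments = {
    "intro": ["Documents used to be read straight through. ",
              ("link", "history"),   # a pointer embedded in the text
              "Or the reader can simply keep going."],
    "history": ["Annotation is an ancient, non-linear practice. "],
}

def read(name, follow_links=False):
    """Traverse a fragment, optionally jumping when a link is reached."""
    out = []
    for piece in fragments[name]:
        if isinstance(piece, tuple) and piece[0] == "link":
            if follow_links:
                out.append(read(piece[1], follow_links))  # jump to the target
            # otherwise keep reading linearly, ignoring the link
        else:
            out.append(piece)
    return "".join(out)

print(read("intro"))                      # linear reading, link ignored
print(read("intro", follow_links=True))   # the reader chooses to jump
```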
It would be remarkable enough if development had stopped with the creation of stand-alone computers operating, in effect, as digital presses, and with the expansion of linear sequences of text to webs of hyperlinked materials. But in parallel with many of the inventions I’ve been describing, computer networking capabilities were also being developed. The creation of the Arpanet (a precursor to the Internet) in the 1960s made it possible for people to exchange data rapidly among computers distributed around the country, and even around the world. E-mail predates the invention of the Arpanet — it had been developed as a communication technique among users of time-sharing systems — but the development of standards for the exchange of e-mail on the Arpanet allowed users to transcend the boundaries of their particular computer systems.
What e-mail did for point-to-point communication, the World Wide Web and Web browsers did for formatted documents and for hypertext. From the early days of the Arpanet, users could exchange data and text files using an application called ftp (file transfer protocol). You could “ftp” a formatted text file from some other computer to your own, and provided you had the right software, you could display, print, and edit this file. In the late 1980s and early 1990s, inspired by Nelson and Engelbart, Tim Berners-Lee developed a scheme for linking texts that were stored on different computers. And with the development of the Mosaic browser and its commercial successors (from Netscape and Microsoft), it became possible to display these texts (to translate the digital representations into perceptible forms) on your computer screen, no matter where in the world they had originated (were stored).
As a result of all these threads of invention and adaptation, a global infrastructure for the production, distribution, and consumption of digital documents is now emerging. Although outwardly quite complex, at its heart it is remarkably simple. And it is based on an ancient technique for manufacturing objects from templates or patterns. Putting it this way stresses the radical continuity of current developments with the past. There is nothing new under the sun, the author of Ecclesiastes observed several thousand years ago.
But this isn’t quite right, either. For although we have borrowed the architecture of the printing press (and its antecedent forms of manufacturing), we have improved upon it, creating a “just-in-time” manufacturing technique for written forms. When you ask to view a digital document — by opening a file on your local disk or downloading a document from the Web — you are essentially asking that it be manufactured for you on the spot. (The “it” here, of course, is the perceptible form.) If you’ve asked for the document to be printed, the whole thing will be manufactured for you on paper. If, on the other hand, you want to see it on the screen, then only that portion of it that will fit inside the current viewing window will be manufactured (displayed). And when you scroll forward or backward, just the new portion is manufactured for you.
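A crude pager makes the point. In this Python sketch the stored text, the window size, and the render function are all invented for illustration: opening the document manufactures only the first screenful, and scrolling manufactures the next portion on demand.

```python
# Just-in-time manufacture: only the portion in the viewing window is produced.
stored_text = [f"Line {n} of the stored representation." for n in range(1, 101)]
WINDOW = 5  # how many lines fit in the viewing window

def render(top):
    """Manufacture (format for display) only the lines currently in view."""
    return "\n".join(stored_text[top:top + WINDOW])

print(render(0))   # opening the document: just the first screenful
print("--- scroll forward ---")
print(render(5))   # scrolling: the next portion is manufactured on demand
```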
It’s worth pointing out that a just-in-time scheme isn’t entirely new in the realm of documents. Audio, film, and video are based on the same premise. From a recording tape, sound and images are manufactured in real time — just in time for our eyes to see motion and for our ears to hear intelligible sound. These techniques have been around for about a hundred years. But in going digital we are accomplishing several extremely powerful and impressive things: we now have a single medium (ones and zeros) in which to represent all our documentary forms: text, graphics, photographs, sound, and moving images. And we have the beginning of a global infrastructure in which these forms can be represented and realized. This new state of affairs can be summed up in a phrase: “more, faster, farther, easier, cheaper.” It is quicker, easier, and cheaper than ever before to produce more copies and more variants of documents, and to send them farther at less expense.
These advances come from the way we’ve managed to split apart documents. Paper is heavy, but bits are light. Digital representations can be modified without leaving a trace; it’s much harder to do this with marks on paper. The more we can work with the bits, only transforming them into perceptible forms when we need them (just-in-time), the greater the speed and flexibility and the lower the cost. But there are significant costs to this scheme as well. The financial implications of making this global infrastructure work are staggering: the cost of networks, of computers, of upgrades and maintenance, of training, of the reorientation and rethinking of work. In addition, however, we now live with certain deep confusions and uncertainties about the nature of these new documents, what they are and how they are to be preserved. To a large extent, these questions arise from an aspect of the new digital architecture that I have thus far made little of: the dependence of digital documents on a complex technical environment.
Here is the problem in a nutshell: In the world of paper, documents are realized as stable, bounded physical objects. Once a paper document comes into being, it loses its dependence on the technologies that were used to manufacture it. The photocopied memo takes leave of the photocopier, and never looks back; the printed book takes leave of the printer and bindery, and never looks back. But a digital document, because its perceptible form is always being manufactured just-in-time, on the spot, can’t ever sever its relationship to a set of manufacturing technologies. It requires an elaborate set of technological conditions — hardware and software — in order to maintain a visible and useful presence.
Of course, this isn’t an entirely new problem: it exists in the case of analog recorded audio and video. (Analog recordings are continuous in nature; digital recordings, by contrast, are made up of discrete values, ones and zeros.) Without a tape player in working condition, without the right kind of tape (VHS, say, rather than Beta), without the right encoding standard (NTSC or PAL, for example), no performance can be realized. Provided such conditions are met, however, you can take your tape of Groundhog Day to any number of videocassette players around the world and you will see the same show, more or less, as you would see on any other video projection setup. I say “more or less” because, inevitably, there will be visual differences between performances on different machines. The monitors may be of different sizes (so the image sizes will differ), the color balance is likely to be different, as well as the quality and volume of the sound. But by and large, for a non-specialist audience, these differences will be insignificant, and no one is likely to claim that he isn’t seeing an authentic performance. (Within specialist audiences, however, it may be quite different: seeing the film on a small screen, rather than on the big screen the producers assumed, could be considered a serious liability.)
This scheme works well to the extent that we’ve been able to standardize the various components. There are standards for the production of the physical cassettes, there are standards for the analog encoding of images and sound on them, and there are standards for the players that turn the encodings into perceptible forms (or performances). If you stay within the bounds of these standards, you are pretty much guaranteed that you can continue to view Groundhog Day again and again. Up to a point, that is. For at some point, after some years or decades, the physical tape will begin to deteriorate. Its record of sound and images will begin to fail. (For just this reason, the National Film Preservation Board reports, fully half the film stock for movies made before 1950 is no longer available.)12 But even if the tape remains in good shape, it can still become unusable if the proper players are no longer available. This has already happened to a large extent with eight-track tape, and, who knows, in not too many years, VHS may go the way of Beta.
To a first approximation, the digital case works the same way as analog audio and video. Instead of a recording (audio or video) tape, you have a digital file, whether stored on a floppy, on a local hard drive, on a CD-ROM, or on a server somewhere on the Internet. And instead of a tape player, you have a computer equipped with the appropriate viewing/editing software. You can then insert the floppy into the computer’s floppy drive and (if the material is textual, say) scroll forward and backward in the text, much as you would rewind and fast-forward a video. The same modes of failure exist in the digital case: the storage medium can degrade and, in effect, lose its charge. (None of our current digital modes of storage are thought to be good for more than about fifty years, which is quite striking when you realize that paper can preserve its content for hundreds of years, and animal skins for thousands.) The proper technical environment needed to view the file may also cease to exist. (What will happen to the digital manuscript of this book when, someday, Microsoft Word, Macintosh computers, and PCs are gone?)
To this extent, the cases are parallel. Both are deeply dependent on the health of their storage media and the technologies for using them to produce human-sensible products. The primary difference, however, is the extreme — indeed, radical — sensitivity of digital products to their technical environments. To begin with, digital hardware and software have been changing much more rapidly than have the technologies of analog audio and video. Just think of the rate at which new releases of software have been emerging and the limits of compatibility among versions. When you want to move a text file from one computer to another, you need to be concerned not only about whether it has Microsoft Word, but also about whether it has a compatible version of it. (There is really no equivalent for this in the world of analog video.)
What’s more, there are further and finer dimensions of sensitivity in the digital case. Fonts, for example. Having moved a file to another computer, I may discover that it doesn’t have the font in which the document was originally composed. (Whether or not this matters will depend entirely on the particular circumstances of use. At the very least, it can be crucial in the legible display of diagrams.) Or the problem of ongoing editability: How do I guarantee not only that the document will look the same, but that I will have the same capacity to modify it as I did in the previous environment?
What seems clear is that we are just beginning to figure out how to stabilize digital documents — how to guarantee fixity and permanence. In an environment in which every viewing is a newly manufactured form, and every form is highly sensitive to the technical conditions under which it is manufactured, how do we ensure that a document will remain the same in whatever ways matter?
These problems aren’t particular to digital documents alone. Every time we make a photocopy of something, we are asking, perhaps only subconsciously: Does this reproduction satisfy my needs? Do I need to copy both sides (perhaps because there are pencil annotations on the back of some of the pages)? Is making a black-and-white copy of this color document sufficient? Does it matter whether I reduce the double-size foldout map to a single eight-and-a-half-by-eleven-inch format? Archivists and librarians face these questions when attempting to preserve a deteriorating document for future use. Some techniques involve making a copy of the original (e.g., making a “preservation photocopy”), which raises the same challenges I’ve just been discussing. Others involve preserving the physical artifact: repairing torn pages and binding or deacidifying the paper, for example. But even these latter techniques require trading off certain features for others.
What is new in the digital case is not that we have to make decisions of this kind, about what to preserve and how to preserve it. What is new is that the technological means of production and preservation are new and poorly understood. What is new is that production and preservation are not separable. What is new is that instead of a continuously existing, physical artifact, we have just-in-time creations, cooked up on the spot.
We aren’t yet able to grasp, literally or figuratively, what these new creations are made of. And so the word virtual is bandied about as a way of talking about the distinction between physical and digital “matter.” Paper documents, we often hear said, are real: physical, material, weighty, tangible. Digital documents, by contrast, are virtual: immaterial, weightless, and intangible. With such pronouncements, I think we are trying to get at something important about the new technology, but we haven’t yet gotten it right. Digital documents are not immaterial. The marks produced on screens and on paper, the sounds generated in the airwaves, are as material as anything in our world. And the ones and zeros of our digital representations are equally material: they are embedded in a material substrate no less than are calligraphic letterforms on a piece of vellum. It may be true that digital representations can move around extremely quickly, that they can be copied from one storage device to another, even when they are separated by thousands of miles. But at any one moment, the bits for a particular document are somewhere real and physical. And if the bits have symbolic value, so too do letterforms. Both digital representations and written forms have both a material and a symbolic existence.
What is true, however, is that in the digital case the digital representations have come to assume a priority, an ongoing importance, at least equal to that of the perceptible forms they are used to generate. In the case of the printing press, the formes (the locked-up type) used to print a text were quickly dismantled (that was the whole point of movable type, that it could be recycled) and the reader never even had to know of their existence. In the case of video or audio, we are more aware and more knowledgeable about the generators, the tape cassettes, but perhaps because these have a conventional physical form and are themselves tangible artifacts (you can hold a cassette in your hand), they haven’t challenged our sense of reality. But in the digital case, the intangibility of the bits, the ease with which they can be moved around, and the ability to store them invisibly beyond our sight and physical grasp, so that place no longer seems to matter, have us searching for ways to understand what is going on.
People tend to refer to the file on their computer as “the document,” I have noticed. This makes a great deal of sense, since the file is the locus of both editability and relative permanence. Because I have the file for this chapter on my laptop, I can continue to edit it as I see fit. I can save it, view it on the screen, and print it. And eventually a copy of a later version of it will be sent off to my publisher. As far as I’m concerned, this file simply is the chapter. But it is the chapter in a meaningful and useful sense only because I assume, and can rely upon, a technical environment that includes my laptop, Microsoft Word, and a printer thanks to which I can see intelligible marks on a screen and on paper. Under such circumstances, is the file really “the document”? Or should I say that the document consists of the file plus the requisite technical environment? Or must I include the perceptible forms as well? There are no answers to these questions at the moment. They are philosophically interesting, but they will also have profound practical consequences.
Here is a whole new class of talking things, and we are busy fashioning them to do all manner of work: to tell public stories, to facilitate intimate correspondence with one another, to keep the wheels of industry rolling. But we don’t yet know what we can count on them for. This is partly a function of the technologies, in the ways I’ve just been describing, but it is also a function of the technologies in the service of human — that is, social — aims. We can’t make sense of these things as bits generating words and images without taking account of the ways they work with us and for us to make and maintain the world.