9

An Immense Effort

LOOK AT ANY DOCUMENT: a cash register receipt, a book of poetry, a child’s handwritten note, a greeting card. Each one is tailored to operate within a particular sphere of life: to regulate a sale, to sing the praises of the world, to offer a confession, to say happy birthday to a loved one. Examine it, and you will see the materials from which it is formed, the symbols through which it speaks, and the circuit in which it travels. And if your eyesight is good enough, you will also see the behind-the-scenes work, the invisible infrastructure, that mends and tends it: the cash register manufacturers, the book catalogers, the writing teachers, the postal system.

Now broaden your gaze. Imagine seeing the whole planet, all the documents on it, all the activities in which these documents are embedded, and all the people participating in these activities. Here someone is jotting down a phone number, having just run into an old friend on the street. There someone is leafing through mail-order catalogs looking for Christmas presents for the remaining people on his list. Elsewhere someone is dozing on the subway, a novel resting on her lap, a recently received postcard serving as a bookmark. Someone else is making a plane reservation on the Web for a conference she is about to attend, at the same time writing the conference paper she will deliver there.

What are all these documents doing for us? While each has its unique place and role, all of them together are helping us make and maintain the world. What else could they be doing? For world-making is what we humans do. We create the material, social, symbolic, and spiritual environment we inhabit: we build cities; we tell stories; we manufacture goods; we develop knowledge of the world and ourselves; we fashion individual and group identities and ideologies. In short, we create culture.

This world-making, or culture-making, business is an immense effort, ever ongoing. Without it we would be lost: nowhere, nothing. And documents are our partners in this enterprise. We fashion them to take on some of the work: to help us exert power and control, maintain relationships, acquire and preserve knowledge. There is hardly a dimension of life in which these sorcerer’s apprentices don’t figure: in business, in science and the arts, in religion, in the administrative practices that support nearly all our organizations, in the management of our private lives. Virtually all the cultural institutions and practices that help us make order, that help us bring meaning and intelligibility to our lives, draw heavily on documents for support.

We have relied on these beings for many centuries, but never so fully as we do now. Never before have we lived in a world so thoroughly saturated with, and dependent on, these creatures. And never before has a technology (or set of technologies) threatened the material underpinnings of our documents and document practices so thoroughly or so quickly. Certainly, over the last hundred years, we have seen the adoption of microfilm alter the practices of scholarship and librarianship, and film and video technologies change the nature of entertainment, but these interventions were confined to particular cultural sectors, whereas digital technologies are now insinuating themselves into nearly every corner of the world in which documents operate — which means virtually everywhere. Such broad-scale changes are therefore destabilizing the institutions and practices that depend on the stability documents help engender, as well as the institutions and practices that help stabilize documents. Is it any wonder that our institutions and practices, our modes of living and working, are shaking?

To see the nature of this disruption in a bit more detail, I suggest we look at what is happening to our genres online. For genres are the social identity of our talking things. They are the forms we give materials to participate in human life. Their current instability is one of the more visible indicators of our cultural distress. Looking at them in greater detail can therefore show us something of the immense effort that’s been required to make stable documents and a stable world; it can help us see the immense effort we’re undertaking now; and perhaps it can also help us appreciate why it’s all currently so confusing.

Sometime back I happened upon an odd little Web page. “Expecting to find THE PARANOIA FAMILY TREE?” it announced in large, orange, sans-serif type set on a navy background. “Nice try.” And then, in much smaller, light blue type it went on:

Did you really think it would be so simple? You thought you could just log right on, didn’t you? “Hey, I’m a spy,” you thought. “Maybe I’ll check out the Paranoia Family Tree right now and use the wealth of information contained within to slander and defame the staff of Word!” “Maybe,” you thought, “I’ll use my little microchip to erase their brains, then escape into a pyramid with my spaceship!” Well you’re very, very clever. But so are we.

Its final paragraph read: “We’re not ‘ready’ yet for you to view the Paranoia Family Tree. Why don’t you try again in 2000? Get it? 2000??”

But I didn’t get it, although I made several stabs at it. Perhaps the Web page was a teaser, a kind of advertisement for a commercial Web site or a new publication. I thought of ad campaigns, like the one Apple conducted for the iMac (“I think therefore iMac”), whose initial obscurity was meant to pique your interest. Or the series of ads ABC Television created to advertise its new programming in the fall of 1997, whose basic message was “Television rots your brain, and that’s just fine.”1 Perhaps this was simply the product of someone’s quirky sense of humor. God knows, there is plenty of that on the Web. Like the site claiming that Bert — of Bert and Ernie fame — is evil. (It displays a mug shot of a very treacherous-looking Bert.) Or the one that claims to originate from the Bureau of Missing Socks, which is “the first organization solely devoted to solving the question of what happens to missing single socks. It explores all aspects of the phenomena including the occult, conspiracy theories, and extraterrestrial.” It also occurred to me that this Web page might be the product of a disturbed mind — or of someone on another wavelength, to put it more kindly. None of these possibilities was provable, however — or for that matter, disprovable — since the Web page simply failed to provide consistent clues about just what it was.

Still, there was a fair amount that I did understand about it. I had a pretty good sense of its technological underpinnings: how the various technologies worked to produce an image on my screen. And I had no trouble parsing its sentences and making local sense of their meaning. What I couldn’t grasp was its social character. What was it? Who was speaking through it? How did they imagine that I, or anyone, would come upon this page? What place or role did they intend it to fill in people’s lives?

And this, the Web page’s incomprehensibility, is ultimately what interested me about it. For it managed to demonstrate, in just a few lines of text and a couple of colors, what it’s like for a document to fail to register in social space, to fail to have a social identity, and thereby to fail to be a document. To say that it was uncategorizable, that it was unrecognizable as any particular genre, is simply to summarize this state of confusion. Had I been able to identify it as an advertisement or as a parody or as a work of art, I would have understood its mode of communication and the sphere of life within which it was meant to operate.

It has always been possible to come across unintelligible or uncategorizable documents. But huge amounts of work have been done in the past, and continue to be done now, to minimize this likelihood. Over the centuries a complex network of institutions and practices has grown up to create and maintain meaningful and reliable paper documents. In the world of book publishing, for example, think of the work of agents, editors, publishers, and printers who select and shape authors’ manuscripts; the book distributors, librarians, and booksellers who make them available to readers; the system of copyright and the courts that creates and oversees the right to produce and use these products; and the book reviewers working within another sector of publishing — newspapers, magazines, and journals — to summarize and evaluate books for potential readers.

Thanks to this ongoing work, books today are firmly situated within a network of visible signs, institutions, and practices that collectively and consistently attest to what they are. Just wander into a bookstore and pick up a book from the “new and noteworthy fiction” counter, or from the travel section. By virtue of the book’s physical presence in a reputable shop, you have a great deal of information about it. That it has made its way there tells you that it has made its way through established publishing channels and is therefore vouched for by the system. Where it appears in the shop — on which counter or in which section — also tells you something about its content and mode of presentation: that it has been classified as fiction or travel or whatever. For a recently published, well-received book, a copy of the review from a reputable periodical may also be on display as further evidence of its social respectability.

In addition to these external cues, the internal makeup of the book helps establish what it is. Its physical format — bound pages between covers — declares that it is a book (a codex), while the printed pages, including a detailed and standardized title page, identify it as a mass-produced, published work and specify who wrote it, who published it, where and when. Further information is likely to be available from the volume too. The size and shape of the book, as well as the cover design, may help in classifying it: compare the cover of a detective novel with that of a work of literary criticism, for example. There may well be a blurb offering a summary of the book and a brief biography of the author.

Most of this system is less than five hundred years old, having arisen in the wake of the printing press, as Chartier has observed, “to put the world of the written word in order.” Although title pages existed prior to printing, they were neither standardized nor heavily used until print was well established. The various book genres we now take for granted didn’t emerge until well into print history. (The modern novel, for example, emerged in the eighteenth century.) And well into the seventeenth century, books still hadn’t achieved the stability and reliability we now expect. Pirated and variant editions were common, and authorship was contested. Unlike today, there was no guarantee that the book you put your hands on was a reliable edition, or that the author named was indeed its creator.2 The system of authoritative publication, which is now largely invisible to us, had not yet emerged.

So it should hardly be surprising if the digital world isn’t all neat and tidy. We are just at the beginning of figuring it all out. We have a new technology base, a new kind of material, which itself is still evolving. And we are just beginning to figure out what kinds of creatures to make from this material: what they’ll look like, how they’ll behave, what kinds of tasks we’ll ask them to perform for us. Truly it is an immense effort. And it is made all the more complex and confusing by the fact that the technologies, the genres, and the work we’re asking them to do are co-evolving, continually influencing one another. As a result, there’s no place to stand that isn’t itself unstable, or at least uncertain.

Fortunately, we don’t have to start from scratch. We have an existing base of genres and practices from which we can borrow: publications like books, magazines, and newspapers; bureaucratic documents like forms and receipts; personal documents like letters, postcards, and greeting cards. In fact, we have no choice but to borrow from existing genres. It isn’t an accident that film first adopted the conventions of the theater, that television adopted the conventions of film, or that the Web has adopted the conventions of print and TV culture. Without some prior basis for making sense of the communicative conventions in a new medium, we would simply be adrift, like tourists in a land whose language and script we didn’t know. The alternative — a Web largely populated with Paranoia Family Trees — would simply be unworkable.

But moving existing genres online doesn’t guarantee that they’ll be stable. Indeed, it pretty much guarantees that they won’t be. Having been dislodged from the complex practices in which they had previously operated and through which they were maintained, they are subject to — both affecting and being affected by — the shifting sands of their new, uncertain habitats. It is therefore inevitable that the initial copying of genres, arising from the need to maintain intelligibility in a new environment, will lead to a cascading set of incremental changes whose endpoint can’t be foreseen.

E-mail is a good case in point. As one of the first digital genres, arguably the very first, it has been around long enough for us to see some of the many changes it has undergone. The first e-mail systems, created on time-sharing systems in the late 1960s, allowed users to send textual messages to one another. Although the technology itself was new to many users (the details of how you logged on, how you wrote and sent messages), the form itself wasn’t hard to comprehend. The idea of composing a letter or a note to someone was hardly new to anyone. Neither was the business of specifying the address of one’s addressee (even if the form, a user ID, was new). And the arrangement of fields at the top — To, From, Subject — mirrored the conventions of the memo.

Right off the bat, then, e-mail was a blend of old and new. It made immediate sense to users because it drew on established conventions of postal correspondence (letters, postcards, and even greeting cards), handwritten notes, and business memos. It didn’t require a major shift in thinking to understand that you could now exchange messages with a fellow student who used the same university computer, or with a colleague who used the same corporate mainframe.

Yet while e-mail drew from earlier forms, it was identical with none of them. It was unlike a letter in that you didn’t need to supply a return address (street number and name, city, state, and Zip code) in the upper right-hand corner; nor could you append a handwritten signature at the end. Unlike most letters and postcards, the message had to be typed rather than handwritten, and you could expect instantaneous delivery. (Indeed, the whole idea of delivery was at once familiar and odd.) Your writing style was more likely to be casual than in a memo, at times even chatty. In keeping with the tradition of the memo, you could “Cc” others — that is, carbon-copy them (clearly an anachronism) — but unlike the memo, this required no additional human effort; it essentially came for free. And if, in its ability to facilitate broad distribution, it was like publishing or broadcasting, it was also unlike these media because anyone could send a message.

In those first years, e-mail could be exchanged only among those who had shared access to the same computer system, thus limiting contact to academic and research communities. But beginning in the early 1970s, e-mail appeared on the Arpanet, which meant that people using computers connected via the network could correspond with one another. In addition, in the early 1980s, large-scale proprietary systems — some within corporate environments, others providing commercial services for a fee to individuals (precursors of America Online and CompuServe) — gave a great many more people, distributed throughout the world, access to e-mail, if not the Arpanet. It therefore became easier, and much more common, for people who had never met one another face-to-face to correspond in the informal manner that e-mail users had come to adopt. The development of bulletin boards and distribution lists organized around common interests (parents of dyslexic children, hobbyists, vegetarians, etc.) made “posting” a message to a large number of people, mostly strangers, an everyday occurrence.

Early on in the use of e-mail, a form of behavior emerged which came to be called flaming (sending angry, vitriolic messages to others). Hacker culture was from the beginning confrontational; when online, people seemed more likely than usual to talk in blunt, even hostile terms. “Flame wars” might then ensue, the rough equivalent of a shouting match, among a number of correspondents. The social conventions of polite speech didn’t seem to operate, at least not to the usual extent. E-mail correspondence, especially that on bulletin boards and in chat rooms, was different from informal letter exchange among friends (where the participants knew each other). It was also different from business or bureaucratic communications in which the norm was a certain disinterested formality. It wasn’t surprising, then, if combining informal and personal correspondence with broad distribution led to bumpy times. It is understandable that attempts at humor and irony might be misunderstood when broadcast within a community of strangers.

And so the novel social and technical circumstances of e-mail began to produce new communicative conventions. These days it isn’t unusual for e-mail communities to establish rules of proper conduct (netiquette) governing how one communicates with others (interpersonal style) and what topics are fair game to address. And the new written symbols, called smileys or emoticons (smiling and winking faces made from parentheses, semicolons, and other punctuation marks, such as ;)), are concrete evidence of the ongoing transformation and evolution of the genre. Indeed, the rise of these little symbols nicely illustrates a central feature of the way documents, document technologies, and human activities continually co-evolve. You can see how, in this case, textual communication of a new sort (e-mail) based on new technologies (digital) produced new social circumstances (informal textual communication among strangers), and how this in turn led to the development of new communicative conventions (rules of politeness tailored to the online environment), including the minting of new textual symbols (smileys). It is exactly because genres are so tightly bound to particular technologies and social contexts that change in the genre is likely to take place when either the technologies or the social contexts change.

Another example of this ongoing co-evolution is the use of Bcc, or blind carbon copy. In the days when office memos or circular letters (as they were earlier called) were typed on paper, a single copy might be circulated to the relevant parties with the aid of a routing sheet. The routing sheet listed the intended readers in the order in which they were to receive the memo. Once you’d received the memo and read it (or were simply done with it), you checked off your name on the routing sheet, or placed your initials there, and forwarded the memo (the very copy you’d just read) to the next person on the list. Unlike the recipients named in the To field or on the routing sheet, those named in the Cc field were to receive a separate physical copy, literally a carbon copy. Often a carbon copy went to the files for informational and legal purposes, as well as being sent to specific individuals.

With the advent of inexpensive photocopying, however, the practice shifted. Now it became more common for everyone — those on the To list and those on the Cc list — to receive a separate physical copy. This meant that each recipient could keep his or her own local copy. But it also had an effect on the political and rhetorical significance of annotation. When a single copy circulated among a number of people, anything written on that copy would be seen by those next on the list. You would therefore be sure not to write anything you didn’t want others to see; conversely, you could mark annotations specifically to make them visible to others. This practice changed, of course, once you had your own personal copy. Now your annotations could be for your own personal use; they could be as private as you wanted. But you were then also free to circulate your copy to others not originally on the list, and by first annotating it and then copying and sending it to your list, you could put your own spin on its content.

Clearly it was of crucial significance what was visible and what was not, who was privy to what, and who was excluded. These concerns applied not only to possible annotations but also to the identity of the memo’s recipients. With a To list or a routing sheet, anyone who glanced at the circulating memo could see all of its intended recipients, with all the political implications of such knowledge. Of course, it was always possible to distribute copies to people who were listed on neither the To list nor the Cc list. (Before photocopying, this would most likely be done at the point where the memo originated.) This practice was invisible from the point of view of the recipients.

Once e-mail was developed, however, a Bcc (blind carbon copy) field could be added to the message, reifying the earlier practice. And so in e-mail now, if you want a copy of your message to be distributed to someone, you’ve got to specify that person’s address in some manner. Adding a new field very neatly solved the problem of distribution and invisibility. Your copy of the message (if you were the originator) would show the list of those Bcc’d, but no one else’s copy would. From this point of view, their copy would be different from yours.
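
For readers who want the mechanics spelled out, here is a minimal, purely illustrative sketch of how a present-day mail program, written in Python, might honor this convention. The addresses and mail server named here are hypothetical, and nothing in the sketch is specific to any of the systems described above; the point is simply that the blind-copied address is used to deliver the message but is removed from the copy that recipients see.

```python
# Illustrative sketch only: hypothetical addresses and mail server.
from email.message import EmailMessage
import smtplib

msg = EmailMessage()
msg["From"] = "author@example.org"
msg["To"] = "friend@example.org"
msg["Cc"] = "colleague@example.org"
msg["Bcc"] = "confidant@example.org"   # visible only in the sender's own copy
msg["Subject"] = "Change of address"
msg.set_content("Please note my new e-mail address.")

with smtplib.SMTP("mail.example.org") as server:
    # send_message() addresses the envelope to everyone listed in To, Cc,
    # and Bcc, but omits the Bcc header from the transmitted message, so
    # no recipient's copy reveals who was blind-copied.
    server.send_message(msg)
```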

The creation of Bcc therefore changed the intellectual and physical form of e-mail (by adding a new field), and changed practice (by giving writers a choice among three categories of recipients). But as a recent article in The New York Times indicates, the uses of Bcc are continuing to evolve.3 Specifically, people are making increasing use of the option as they see the potential perils of disclosing the addresses of their intended recipients. The article recounts several cautionary tales of abuse. In one case, a man named Spencer Grey sent out a change-of-address notice to a large number of recipients. Subsequently, a friend who’d received the message sent everyone on Grey’s list a party invitation that included a joking reference to drug use. But Grey’s change-of-address message had been sent to business clients as well as friends. “If I have learned anything,” Grey is quoted as saying, “it’s the value of the Bcc option.” In another case, a list of recipients was used by one recipient for marketing purposes, again crossing the line between friendship and business. The article offers its moral in the concluding sentence: “Cc at your own risk.”

No doubt this is good advice. But the real question is when to Cc and when to Bcc, which the article calls the “to Cc or to Bcc dilemma.” Clearly, there is a need to balance privacy and disclosure, invisibility and visibility.

On one hand, privacy concerns have increasingly made Internet users skittish about sharing their e-mail addresses — a view that in some cases extends to friends’ addresses. On the other hand, it can be a bit disturbing to receive a party invitation via e-mail where the To field says “undisclosed recipients” or “recipient list suppressed” — phrases some e-mail programs insert when all of the recipients have been blind Cc’d on a message.

One woman reports resolving the “to Cc or to Bcc dilemma” according to the type of message she’s sending combined with the size of the recipients list. So she will use Bcc for change-of-address and other similar professional announcements. For party invitations, if the list is small, she’ll generally use the Cc option, while for larger parties she makes use of Bcc. Clearly the conventions governing the appropriate use of Cc and Bcc are still being written. And they will no doubt continue to shift with changes in the legal, social, and technical environment on the Internet.

The “Cc or Bcc dilemma” serves as a cautionary tale about trust in the online environment. It is a small example, to be sure, but it is emblematic of the problems we encounter when previously established pathways of communication and social interaction become unstable. Monkeying with trust is serious business. For trust, as social philosophers have pointed out over the centuries, is the glue that binds a society together. “How could coordinated activity of any kind be possible if people could not rely upon others’ undertakings?” asks Steven Shapin in A Social History of Truth. “No goods would be handed over without prior payment, and no payment without goods in hand. There would be no point in keeping engagements, nor any reason to make engagements with people who could not be expected to honor their commitments. The relationship between teacher and pupil, parent and child, would be impossible if the reliability of the former as sources of knowledge were not to be granted.”4

How can we trust our documents, though, when the very systems and practices that have worked to ensure their trustworthiness are currently unstable? This came home to me in a very personal way not long ago. I had gone to the emergency room on the advice of my doctor’s office (my doctor was out of town) to be examined for the (unlikely) possibility of a serious ailment. The doctor examining me needed to know something about the trigeminal nerve, one of the cranial nerves in the head. It was a detail, he frankly admitted to me, he didn’t remember from medical school. He found a medical reference book in the emergency room, but it didn’t have the information he sought. Half in jest, I suggested he look on the Web.

To my surprise, he liked the idea and we walked a few feet to a workstation, where he proceeded to do a search of the Web. A huge number of hits came back (in the thousands), and he began looking through some of the top-rated results. A number of these Web pages looked quite official — they had the look and feel of pages from a medical textbook. The information he found confirmed his vague recollection of what he’d learned in medical school. “But I’ll check three sources, just to be sure,” he said.

Clearly he knew that the medical information he found on random Web sites couldn’t be trusted in the same way he would trust a textbook or a reference work published by a reputable medical publisher and used by reputable practitioners. So he took a comparative approach, deciding to check for agreement across sources. This is, of course, a risky and unreliable strategy, as I once discovered for myself. Unsure how to spell “Caribbean” (how many r’s? how many b’s?), I had done a search on the Web for the spelling I thought was correct, “Carribean.” When I got thousands of hits for this spelling, I assumed I was right. But I wasn’t: many other people don’t know how to spell Caribbean either. (This was a stupid thing to do, of course, and I should have known better. Authoritative knowledge, unlike elective office, isn’t simply established by a show of hands.)

Even with this recent experience in mind, I chose to hold my tongue. Much as my doctor was evaluating his sources, I was evaluating him, including his evaluation strategy. We both knew his strategy was faulty, but in the particular circumstances in which we found ourselves — the limited likelihood that I was ill and the relative reliability of his partial memory — it was probably good enough. Had the circumstances been otherwise, he would have insisted on more reliable sources, or I would have.

Had my doctor been able to consult an established, printed medical reference book, the trustworthiness of its information content would have been immediately apparent. But the truthfulness and transparency of its medical knowledge would come not from the marks on the page alone — these are merely the tip of an enormous iceberg — but from the trustworthiness of the publisher and its placement within a vast network of Western medical practices. Even if one of the sources he consulted had been an online version of an authoritative text, it still wouldn’t have had the same authority as its print-and-paper counterpart. How could he know that the same strict quality controls had been applied to the online version, or that the online text hadn’t been tampered with?

Issues of trust and quality are always central considerations in the publication process. As it has taken shape in recent centuries, publishing is more than just “making public” — it is the circulation of socially sanctioned, authoritative knowledge. Indeed, you might think of the modern publishing industry as a cultural mechanism for ensuring the reliability of certain genres. Nowhere has this vetting process been more prominent or of greater significance than in academia, which relies on elaborate processes of peer review and where reputations are made and broken based on scholars’ publication records. Academia is also the corner of publishing in which digital technologies have so far made the deepest inroads.

So let’s have a look at academic publishing in transition as it struggles to come to terms with the digital medium. While the details are certainly interesting and important in themselves, I offer them here primarily as an extended example of the immense effort required to stabilize published documents in a destabilized environment.

Today the movement online of genres of scholarly communication is taking place within a climate of great ferment and anxiety. Fundamental questions are being raised about the future of scholarship, the pursuit of knowledge, and the process of education. Some of this soul-searching, however, predates the current digital transformation. Since 1970, while books have risen in price at roughly the rate of inflation, journals have increased by well over ten percent per year.5 This has led academic libraries (the main purchasers of academic journals) to cut back on their subscriptions, while publishers, seeing their market shrink, have raised prices further. And despite these cost-cutting measures, the proportion of library budgets devoted to journals has increased, causing libraries to cut back on the purchase of scholarly monographs (books) as well. The net result is that academic libraries have been purchasing an ever-smaller number of the publications faculty and students want.

In recent years a great deal of scholarly attention has been directed at the problem: entire conferences, issues of journals, and books have been devoted to the subject. The result has been a fairly clear understanding of the problem, if not consensus on how to fix it. What is clear to everyone is that “journals are the lifeblood of scholarship — libraries and researchers cannot function without them,” as a recent New York Times article put it.6 Or as Charles Bazerman says in Shaping Written Knowledge: “Knowledge produced by the academy is cast primarily in written language. . . . The written text, published in journal or book, serves as the definitive form of a claim or argument, following on earlier printed claims and leading to future claims.”7 It is therefore fairly inevitable that disruptions to journals will be felt throughout the entire system of scholarship.

The journal article, like the postcard and the memo, is an outgrowth of the letter. The first scientific journals, originating in the mid-seventeenth century, were essentially collections of correspondence. But it was only in the nineteenth century, the same period that saw the rise of modern bureaucracy and the modern library, that the journal in its contemporary format emerged. For this was the period in which the modern university emerged as a secular institution and became “the great factory of knowledge and education.”8 And it was in this redefined institution that the world of knowledge was partitioned into innumerable disciplines, each with its own publication channels for the production and circulation of its specialized knowledge.

For a century, more or less, the academic system of knowledge production and consumption has worked like this: Professors perform research and write scholarly articles and books to disseminate their results. University presses and scholarly societies take responsibility for the publication process: they oversee peer review (the process by which members of the author’s scholarly community vouch for the integrity and quality of his or her results), as well as the editing, printing, and distribution of research results. Academic libraries then buy these publications, making them available to faculty and students. The result is a closed circuit in which written knowledge circulates from scholars to publishers to libraries, then back to scholars. Quality control through peer review has a double role in this system, shaping the quantity and quality of the knowledge that circulates and also helping to determine academic promotion and tenure, because academics must “publish or perish.”

For most of the twentieth century this was a highly effective system. It can be credited with enabling the tremendous growth in scientific and technical knowledge. But it seems to have fallen victim to its own success. In the aftermath of World War II, as large sums of government money were poured into academic research, the number of journals and monographs mushroomed. And although publication to this point had largely been in the hands of nonprofit academic presses and learned societies, commercial publishers saw the possibility of making a profit. The profusion of publications, along with commercial publishers’ increasing control of publication channels, has made it impossible for libraries to collect — that is, to pay for — all the materials scholars and students may want, thus creating the current crisis.

So what can be done about it? Many people now look to the Internet for the solution. Perhaps the most radical proposal has been advanced by Stevan Harnad, a cognitive psychologist at the University of Southampton in England, who suggests that scholars should take back control of their own publication processes by self-publishing their works on the Web (he calls this “self-archiving”). Academics, Harnad believes, unlike the authors of trade publications, aren’t fundamentally concerned with making a profit. They are already paid by their academic institutions, and the kind of reward they look for from their writings has more to do with the circulation of their ideas, and with the attendant prestige and promotion. But they have been forced “to make the ‘Faustian bargain’ of trading the copyright for their words [to commercial publishers] in exchange for having them published.”9 Scholars, in other words, not unlike Lewis Hyde’s artists, would prefer to operate in a gift economy, but have been forced into a market economy out of the need to cooperate with commercial publishers.

Fortunately, Harnad claims, a solution is now at hand: scholars need only publish online. It is a much less expensive alternative — electronic journals, he insists, can be produced for twenty-five percent of the cost of their paper counterparts — and this will allow authors to regain control of their works. What’s more, the shift to a pure gift economy could happen instantaneously. “So what is the strategy for ushering in this brave new era? It is a simple subversive proposal that I make to all scholars and scientists right now: If from this day forward, each and every one of you were to make available on the Net, in publicly accessible archives on the World Wide Web, the texts of all your current papers (and whichever past ones are still sitting on your word processors’ disks), then the transition to the Post Gutenberg galaxy would happen virtually overnight.”10

But this isn’t a proposal to open the floodgates of publishing to one and all. Harnad thinks the Web in its current unregulated state is a “global vanity press.”11 This simply won’t work for scholarship. He acknowledges the importance of quality control, and an essential condition of his subversive proposal is that peer review be maintained for online publications. But this would be straightforward, he asserts: scholars already perform this service for free (when asked to do so by current journal and monograph publishers), and since peer review is “medium independent,”12 it could easily be transferred from paper to digital publication.

Needless to say, Harnad’s proposal is controversial. While it doesn’t cut publishers and libraries completely out of the picture, it doesn’t necessarily leave a whole lot of room for them, either. Commercial publishers currently making a healthy profit on journal subscriptions aren’t likely to show much enthusiasm for his proposal. Within the scholarly world itself, a range of questions and criticisms have been raised: Is electronic publishing really as inexpensive as Harnad claims? Is instituting peer review of online publications as simple a matter as he suggests? Are there perhaps better ways to achieve quality control than peer review in an online environment? Do we really want to move the current system of scholarly communication onto the Internet (is it realistic to attempt it), or is it time to rethink the whole system in more fundamental ways?

Meanwhile, as this admittedly extreme proposal is being debated, many changes are already taking place. More and more journals are appearing online. Some of these are established paper-based journals that have decided to create a parallel, digital presence. Others are new journals, created just for the Web, and some of these have made the bold decision to appear in digital format only. The initial movement has been fairly conservative, with online versions maintaining the look and feel of their paper precursors. But changes, both in format and in the rhythm of publishing, have already begun to appear.

In paper journals, the cost of paper and printing has been a major determinant of the length of articles and the total number of pages in an issue. In online journals, however, the cost of printing isn’t a factor and articles can be, and in some cases have been, as lengthy as author and publisher wish. The concept of an issue or edition — the binding together of a set of articles all produced at the same time — is also an artifact of the print world, and online journals are now beginning to “unbundle” their articles: rather than putting up a collection of articles (an issue) at one time, individual articles are being “issued” to the Web site whenever they are ready. Journal publishers are also now providing hyperlinked citations between articles, something that was impossible on paper. And new kinds of content — video, simulations, interactive displays — are being incorporated as well.

Perhaps a more fundamental shift is already happening in the publication process. In 1991, Paul Ginsparg, a physicist at the Los Alamos National Laboratory, first put together a database of physics “preprints” (research papers not yet published). Prior to this, physicists already had a tradition of sharing research results before publication. Ginsparg’s system provided new technology to support these existing practices and was quickly embraced; over the last decade it has become a stable element in the practice of physics. Similar services, now called “e-print servers,” have been developed in other disciplines, including computer science, psychology, and medicine. While these systems don’t require any form of review for submission, there is nothing to prevent individuals or groups from setting up systems of peer review or other kinds of quality control on top of the base layer of unrefereed articles. But rather than the simple yes/no (published or not published) decision made by traditional journals, the potential exists to establish a whole range of quality measures (how many people have read a given article, who has commented on it, etc.) and for users to search the archive for papers that satisfy the criteria they are most interested in.

Journals aren’t the only scholarly form in transition. There are equally important questions about whether the scholarly monograph or book will survive, and if so, in what form. Clifford Lynch, director of the Coalition for Networked Information (a consortium of major academic institutions), believes that even bigger changes are in store for the monograph than for the journal article. “The digital monograph,” he suggests, “is likely to become a larger scale, collaborative effort.”

In print we might think of a series of interrelated but distinct and independent works, such as a critical edition of the writings of an author and a number of works of criticism and analysis that make reference to this authoritative edition. In the digital environment, all of these resources may be woven together into an encyclopedic work of multiple authorship. It will also be possible to link sites in a more extensive and intimate way than can be accomplished through traditional bibliographic citation. In the print world, reviews are separate from monographs; on the web, the comments of one scholar can be directly integrated into the living work of another. And websites, particularly if they involve digitized source materials and multimedia, are often the products of teams rather than individual authors.13

Nor is it just scholarly communication that is in transition. There is much talk these days — excitement but also concern — about the way the Internet may contribute to a profound rethinking of the entire educational process. The Web, e-mail, and Internet-enabled video conferencing are now being used to create online courses for students who are located at a distance from the instructor and from a physical campus. It isn’t yet clear whether this kind of “distance learning” simply offers students more educational opportunities — the choice to be located remotely or on campus — or whether it signals the beginning of a more significant reworking of educational practices. Certainly, the University of Phoenix, a for-profit university whose teaching is centered around distance learning, has sent a wake-up call to traditional bricks-and-mortar institutions.

At the moment, then, there is no real clarity about how academic scholarship, publication, or indeed the entire educational system will be transformed. No one doubts that publication will remain central to scholarship. But at the same time no one is certain how far new, online genres will diverge from their print-and-paper predecessors. Nor is it clear how the competition among today’s stakeholders — scholars and scholarly societies, academic libraries and presses — for a piece of the future pie will shake out. Even more disconcerting is the uncertainty about the future forms and purposes of education and scholarship.

It is interesting to compare this creative ferment in the academy with what is happening in mainstream publishing. In the area of serials (newspapers and magazines), the pattern is quite similar to that of scholarly journals: the creation of online counterparts to print-and-paper publications; the creation of brand-new publications online without any prior counterpart; the loosening up of the notion of editions and the unbundling of articles.

Although there has been much talk of digital books, so far there has been relatively little action. Most of the books now available in digital form — through sources like Project Gutenberg and the Bartleby Library — are works whose copyright protection has expired. Holders of current copyright have been understandably reluctant to put their books online for fear of widespread, unauthorized copying. E-books — portable computers meant to display digital works loaded into them — are still in a fairly primitive state, although this could change quickly. And, thus far, no new book-like genres have emerged, despite significant experimentation in hypertext fiction over the last decade or so.

Perhaps the greatest challenge to the current system of book publication at the moment comes not from the evolution of genres or the availability of new hardware but from the potential transformation of the publication process itself: who does it and in what manner. Although Stephen King’s experiment in self-publication was a failure by the most obvious standard of success (money), it does suggest how other stakeholders may choose to enter the arena. Perhaps even more suggestive, and immediately challenging, are the companies now offering “print on demand” services. For a small fee they will take any author’s manuscript and “publish” it: put it in digital form so it can be printed, bound, and sold to a reader on demand. To traditional publishers this is upsetting, not just because it has the potential to break their monopoly, but because it threatens the quality-control process by which, from their point of view, only the best works enter the public realm. In a short piece in Harper’s, two editors at commercial publishing houses reprimand one of these companies, Xlibris, for opening the floodgates when in fact “[t]he book industry has many problems; publishing too few books is not one of them.”14

And so we are back to issues of trust, reliability, and quality, surely some of the most central concerns not just for published works but for personal correspondence and administrative and commercial documents as well. It takes a great deal of parenting, and perhaps even a village, to raise a responsible child. It takes a complex, largely invisible infrastructure to create stable talking things that will reliably do our bidding, and that will help us in our ongoing efforts to make a meaningful and orderly world. It should hardly be surprising that a disruption to these efforts on the scale we are now experiencing is unsettling. Whose job, whose career path, whose sense of self and of an orderly life, isn’t perturbed by such developments? And given the complexity of the forces now at work, the subtle interplay among genres, technologies, and work practices and institutions, it should hardly be surprising if we can’t see how it will all turn out. While this might seem like an adequate explanation of our current dis-ease, I suggest we go one step further and examine the existential roots of our striving after, and our anxiety about, order.