Theories of Text, Editorial Theory, and Textual Criticism

MARCUS WALSH

1 Issues for textual scholarship and theory

The book as a form has enabled the expression, transmission, and multiplication of knowledge. Written words are more stable than speech. Printed words are more stable and, equally important, more replicable than spoken or written words. Nevertheless, the permanence of the book is undercut regularly by various processes of change.

Texts have been embodied in a number of physical forms: handwritten on papyrus, vellum, or paper (see 10), or scribally copied (occasionally derived in both cases from oral dictation); printed in hand-set type, machine-set type, or stereotype; composed at a keyboard, electronically processed, and output to an electronic printing device; or electronically processed and sent as a file direct to a local visual display or to the World Wide Web. Texts are products of human agency, composed by individuals and copied and processed—in MS, printed, and electronic forms—by a variety of technologies, all embodying human crafts and decisions.

Through these processes, texts are subject to innumerable types of variation and mistake. A dictating voice may be misheard; an author’s or a scribe’s hand may be illegible and, hence, misread. Transcription always involves change and error, as well as a conscious or unconscious process of editing. Composition (the setting of type) adds technically specific issues: the wrong case, the wrong fount, the turned letter. Performance texts may be transcribed or printed from faulty memorial reconstruction. Type and typeset formes are subject to ‘batter’, type movement, and loss. Etched or engraved plates are prone to wear and damage. Authors emend and revise, before, during, and after the publication process. A text may be subject to imposed change, whether censorship by the publisher or by external authority, or to self-censorship. Unique MSS may be lost, destroyed, or damaged by interpolation, fire, flood, or vermin. Transmission from one medium to another—printed, photographic, electronic—may be affected by various types of interference. Electronic texts have their own characteristic modes of variation: misconversion between character sets, or locally unintended effects of global search and replace operations (see 21).
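The two electronic modes of variation named above are easy to reproduce in miniature. The following sketch (in Python, on invented sample strings) shows a character-set misconversion and a locally unintended global search-and-replace; it is an illustration only, not a claim about any particular text.

```python
# Two characteristically electronic modes of textual variation,
# reproduced on invented sample strings.

# 1. Misconversion between character sets ('mojibake'): UTF-8 bytes
#    decoded as if they were Latin-1.
original = "Molière"
garbled = original.encode("utf-8").decode("latin-1")
print(garbled)  # MoliÃ¨re

# 2. A locally unintended effect of a global search-and-replace:
#    substituting the word 'art' without word boundaries also
#    corrupts 'particular'.
line = "the art of the particular"
naive = line.replace("art", "craft")
print(naive)  # the craft of the pcrafticular

# A word-boundary-aware replacement avoids the collateral damage.
import re
careful = re.sub(r"\bart\b", "craft", line)
print(careful)  # the craft of the particular
```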

Theorists of text and of editing have thus confronted many practical questions: the construing of foreign letter forms (e.g. Greek or Hebrew) and historical scripts (e.g. secretary hand); consideration of the errors that might arise from misconstruction during the production process; the practices of printing houses and compositors; the possible priorities and relations among multiple witnesses of a text; the historical bibliography of printed texts; and lexical and semantic change.

There are larger issues of philosophical definition and choice, arising from conceptions of textual identity, textual meaning, and textual function. Conceptions of textual editing vary in the regard paid to diplomatic and bibliographical evidence, and to considerations of putative authorial intention and semantic coherence. Should editors base their work on the assumption that the text replicates the words intended by the author? Or should they assume that the text reflects a broader process of interaction among or negotiation between the original author (or authors), sponsors, publishers, printers, and audience? Should editors privilege a particular source document, and, if so, should that document be a MS or a copy of a printed edition? Should they consciously present the text in relation to contemporary audience tastes? These alternatives—variously prioritizing the author, the documentary witness, the sociological circumstances of production, or taste as a principle of authority—respond to different disciplinary and social sources and functions in the edition. For most of the 20th century, literary scholarship, particularly in English, privileged the author and used bibliographical and critical processes in order to reach a putative authorial text hidden or corrupted by subsequent error. Historians, by contrast, have generally preferred diplomatic editions (i.e. a text faithfully transcribed from its appearance in a particular document) or a type- or photographic facsimile of a particular document. In recent years, some theorists have advocated a more sociological approach to editing, and sought textual versions reflecting the complexities of social production. For centuries, texts have been adapted according to the perceived taste or capabilities of their target audience. Alexander Pope, for example, purged Shakespeare of comic improprieties in his 1723–5 edition, and Thomas and Henrietta Maria Bowdler produced an expurgated Family Shakespeare (1818).

2 Early classical and biblical textual scholarship

Textual study and textual editing began with the most ancient Western texts: the Greek classics and the Bible (see 2, 3). Classical textual scholarship originated, as far as is known, in the Alexandrian Library, in the 3rd and 2nd centuries BC. Here, scholars undertook the huge task of ordering some hundreds of thousands of MSS—none of them original authorial documents—to produce from fragmented and widely diverse copies more reliable and complete texts of authors such as Homer. Scholars developed a system of marginal critical signs for such apparent errors as incorrect repetitions, interpolations, misorderings of lines, and spurious lines or passages. Corrections were normally not entered in the text itself, but made and justified in extended scholia. Here, already, is an editorial practice founded on the exercise of critical judgment, in relation to issues of authorial style and usage.

After the decline of Alexandrian scholarship, the copying and editing of Greek and Latin literary MSS continued at Pergamum, where Crates (c.200–c.140 BC) examined and emended the text of Homer, and at Rome, where Aelius Stilo and Varro worked on issues of authenticity and textual corruption in the writings of Plautus and of others. In the later Roman empire, Hyginus wrote on the text of Virgil, M. Valerius Probus applied Alexandrian methods to the texts of Virgil, Terence, and others, and Aelius Donatus and Servius commented on Terence and Virgil. After the empire’s fall, classical texts continued to be copied in monastic scriptoria. Although textual activity lapsed from the 6th to the 8th centuries, a marked renaissance occurred in the Carolingian era and the 11th and 12th centuries.

In the Renaissance, the classics of Rome and Greece were rediscovered, collected, edited, and annotated by a succession of scholars, from Francesco Petrarca and Poggio Bracciolini onwards. Lorenzo Valla (1406–57) and Angelo Poliziano were key figures in the development of textual criticism and historical scholarship. Valla influentially demonstrated, from linguistic and historical evidence, that the Donation of Constantine was a forgery, and wrote a ground-breaking study of Latin usage, the Elegantiae Linguae Latinae (1471). Poliziano used Greek sources to illuminate Latin texts, and argued for the superior authority of the earliest MSS. Aldus Manutius’s press issued a stream of Latin and Greek texts edited by a team of scholars, including Marcus Musurus, who used the best available MSS and applied their linguistic knowledge to amend the MSS for the printer. Francesco Robortello edited Longinus (1552) and wrote the first developed study of the methodology of textual criticism, De Arte Critica sive Ratione Corrigendi Antiquorum Libros Disputatio (1557), which insisted on palaeography, usage, and sense as criteria for emendation. Major textual scholars in France and The Netherlands—notably Lucretius’ editor Denys Lambin; Manilius’ editor Joseph Justus Scaliger; Tacitus’ editor Justus Lipsius; Gerardus Joannes Vossius; and Daniel Heinsius—made significant contributions both to the methodology of editorial emendation and to essential areas of knowledge for informed editing, including chronology, the usage and lexis of the ancient languages, and literary contexts.

The earliest biblical textual scholars also had to deal with a plethora of non-original documents. The New Testament existed, in whole or in part, in some 5,000 Greek MSS, as well as in Latin versions and patristic quotation. St Jerome, author of the Vulgate Latin translation, was apparently conscious of the problems that arise in MS transcription, including the confusion of letters and of abbreviations, transpositions, dittography, and scribal emendation. Humanist textual scholarship antedated printing. Valla, for instance, amended the Vulgate on the basis of the Greek original and patristic texts (1449; published by Desiderius Erasmus, 1505). The first printed bibles, by Johann Gutenberg and other presses, were in Latin. The Hebrew Old Testament was not printed until 1488, at the Soncino Press; and the first Greek New Testament, the Complutensian Polyglot, was printed in 1514 but not published until 1522. It was narrowly beaten to the market by Erasmus’s edition, which, despite being hurriedly edited from the few MSS readily to hand, became the basis of the textus receptus that would dominate for four centuries, underlying Robert Estienne’s editions (1546 and 1549), Beza’s Greek testaments (1565–1604), the Authorized Version (1611), and the Elzeviers’ Greek testament (1624).

3 After the Renaissance: beginnings of rational methods

In France, Richard Simon’s monumental study of the Old and New Testaments was the first full-scale analysis of the textual transmission of an ancient text. Investigating Greek MSS of the New Testament and surveying printed texts from Valla onwards, Simon examined critically the inconsistencies and repetitions of the Old Testament, especially of Genesis, imputing them not to the first penmen, but to scribal error. In England, Walton’s polyglot Bible (6 vols, 1655–7) included for the first time a systematic apparatus of variant readings. John Fell issued a small-format Greek Testament with apparatus giving variants from dozens of MSS (1675). John Mill undertook an extensive study of the text of the New Testament; his examination of numerous MSS and printed editions culminated in an innovative edition (1707) with enormously detailed prolegomena, listing some 30,000 variants.

However, the overwhelmingly significant figure of the time, for European and English textual method, was Richard Bentley (1662–1742). In his Dissertation upon the Epistles of Phalaris (published in the second edition of William Wotton’s Reflections upon Ancient and Modern Learning, 1697)—one of the most devastating interventions in a long and pugnacious scholarly career—Bentley emphatically demonstrated that the letters attributed to the ancient tyrant Phalaris were spurious, and thus demolished Sir William Temple’s adduction of Phalaris as evidence for the superiority of ancient writers. Bentley’s argument was based on extraordinarily extensive literary, etymological, and historical evidence. His imposing scholarship and formidable methodology are in evidence throughout his editions of Horace (Cambridge, 1711; Amsterdam, 1713) and Manilius (1739). Familiar with the MS tradition, Bentley was aware of the distance of all surviving documents of classical writings from their originals. He was prepared both to diagnose errors that had, through many possible routes, entered the text and to make emendations with or without the MSS’ supporting authority. For Bentley, editorial choices, though informed by the documentary tradition, must advert to the sense of the text, as constrained by cultural and linguistic possibility: ‘to us reason and common sense are better than a hundred codices’ (note on Horace, Odes, 3. 27. 15).

Bentley also contributed to New Testament editing, publishing Proposals for Printing a New Edition of the Greek Testament (1721), to be based on the Vulgate and the oldest MSS of the Greek text, in both English and European libraries. He intended thereby to produce a text, not identical with the irredeemably lost original autographs, but representative of the state of the New Testament at the time of the Council of Nicaea (AD 325)—thus obviating a huge proportion of the tens of thousands of variants amongst later, and generally less authoritative, MSS. On the basis of his collations he claimed: ‘I find that by taking 2,000 errors out of the Pope’s Vulgate, and as many out of the Protestant Pope Stephens’ [i.e. Estienne’s 1546 New Testament], I can set out an edition of each in columns, without using any book under 900 years old, that shall … exactly agree.’ Aware that he was treading on sensitive ground, Bentley took a more reverent approach to the extant documentary witnesses, avowing that ‘in the Sacred Writings there’s no place for Conjectures … Diligence and Fidelity … are the Characters here requisite’ (Bentley, Proposals, sig. A2v).

Bentley’s work had a huge effect on classical editing in Europe and England, where his numerous disciples included Jeremiah Markland (1693–1776) and Richard Porson. His influence also extended into the expanding field of the editing of secular, modern, and early modern literary writing, particularly to Lewis Theobald. For Theobald—editor of Shakespeare (1733) and the critic, in Shakespeare Restored (1726), of Pope’s earlier, aesthetically driven edition of the playwright’s works—the Shakespearean textual situation resembled that of the ancient classics. No ‘authentic Manuscript’ survived, and ‘for near a Century, his Works were republish’d from the faulty Copies without the Assistance of any intelligent Editor … Shakespeare’s Case has … resembled That of a corrupt Classic; and, consequently, the Method of Cure was likewise to bear a Resemblance’ (Smith, 74, 75). Theobald, as an ‘intelligent’ editor, was prepared, like Bentley, to venture conjectural emendation and to do so through careful reasoning based on a remarkably thorough knowledge of his author’s writings, language, and broader cultural and linguistic context. The ‘want of Originals’ may require us to guess, but ‘these Guesses turn into Something of a more substantial Nature, when they are tolerably supported by Reason or Authorities’ (Theobald, 133). Theobald’s 18th-century successors followed him in their almost universal agreement that textual choices should be made on the basis of interpretive, as well as documentary, arguments. Nevertheless, as the century wore on, editors more fully recognized the status and authority of the folio and quarto texts, found better access to copies of both, and benefited from increasing understanding of Shakespeare and his times. Critical divination became a less significant part of an editor’s methods: ‘As I practised conjecture more, I learned to trust it less,’ Samuel Johnson famously wrote in the Preface to his 1765 edition (Smith, 145).

Theobald and Johnson, for all their textual care, followed the textus receptus of Shakespeare, based on the Fourth Folio and inherited through the publishing house of Tonson, which owned the Shakespearean copyright. In a significant move, Edward Capell not only bypassed traditional textual corruption, but applied a sophisticated editorial practice. Having collected virtually all the early printed editions of Shakespeare, Capell proceeded to collation, adhering ‘invariably to the old editions, (that is, the best of them) which hold now the place of manuscripts’ (Shakespeare, 1. 20). From those early editions, he chose one as the ‘ground-work’ of his own text, never to be ‘departed from, but in places where some other edition had a reading most apparently better; or in such other places as were very plainly corrupt but, assistance of books failing, were to be amended by conjecture’ (Capell, i). Capell’s use of the earliest texts, his relatively sophisticated understanding of textual authority, and his willingness to apply conjecture where the documents were demonstrably corrupt (i.e. resistant to interpretation by appeal to contextual knowledge) anticipates key features of the Greg–Bowers position in 20th-century textual theory.

The most important contribution to classical and biblical editing was the extended formulation of stemmatics by Karl Lachmann (1793–1853), and his forerunners F. A. Wolf, K. G. Zumpt, and F. W. Ritschl. For textual traditions where the original MSS are lost, even sophisticated practitioners lacked a clear and overriding principle for understanding relations amongst derivative MSS (and printed texts), and were thus limited in the extent to which they could provide bibliographical arguments for editorial choice. Lachmann’s genealogical method transformed classical editing, and retains some value today. The editor, by this method, begins with a process of recensio, analysing the MS evidence and constructing a stemma codicum, a family tree of surviving MSS deriving from an archetype. The enabling principle is that ‘community of error implies community of origin’, i.e. if two or more MSS share a set of ‘variants’, they may be assumed to derive from a common source. MSS that can be shown to derive from other extant MSS may be excluded from editorial consideration. From the extant later MSS, it is possible to reconstruct an archetype, usually a lost but sometimes a surviving MS. On the basis of the recensio results, an editor may construct a text (examinatio), choosing between witnessed readings on the basis of the stemma, and making conjectural changes (emendatio) at points in the text where the MS tradition fails to provide a credible authorial original.
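The two central operations of recensio, grouping witnesses by shared error and eliminating derivative copies, can be sketched in miniature. The sigla, loci, readings, and the table of ‘known errors’ below are all invented for illustration; in practice, identifying a reading as an error is itself a critical judgement, not a given.

```python
# A toy sketch of the 'community of error' principle, on invented data.
# Each witness records its reading at numbered loci; readings listed in
# KNOWN_ERRORS are those the critic has judged non-authorial.
witnesses = {
    "A": {1: "arma", 2: "cano",  3: "uirumque"},
    "B": {1: "arma", 2: "canto", 3: "uirumque"},  # shares error 'canto' with C
    "C": {1: "alma", 2: "canto", 3: "uirumque"},  # adds its own error 'alma'
}
KNOWN_ERRORS = {"canto", "alma"}

def shared_errors(w1, w2):
    """Loci where two witnesses agree in error: community of error
    implies community of origin (a shared ancestor below the archetype)."""
    return {loc for loc in w1
            if w1[loc] == w2[loc] and w1[loc] in KNOWN_ERRORS}

def derives_from(child, parent):
    """A witness containing all of another's errors plus at least one
    more may be eliminated as a copy of it (eliminatio codicum
    descriptorum) and excluded from editorial consideration."""
    e_child = {loc for loc, r in child.items() if r in KNOWN_ERRORS}
    e_parent = {loc for loc, r in parent.items() if r in KNOWN_ERRORS}
    return e_parent < e_child  # proper subset

print(shared_errors(witnesses["B"], witnesses["C"]))  # {2}
print(derives_from(witnesses["C"], witnesses["B"]))   # True
```

The sketch deliberately exhibits the method's limits noted below: it presupposes that errors are already identified, and it cannot by itself detect contamination from a second exemplar or authorial revision.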

Lachmann’s procedure provided some solid methodological ground for textual editing where originary documents were lost. Nonetheless, it has several limitations. It assumes that each witness derives from only one exemplar, although scribes since antiquity have attempted to improve their work with readings from second or further sources, producing conflated, or contaminated, texts. It assumes that a derivative text can only produce new errors, not introduce corrections. Constructing the guiding stemma also involves judgements about the number and nature of variant readings. Lachmann’s method assumes a single authoritative source, not allowing for such complexities as authorial revision. Joseph Bédier alleged that textual critics overwhelmingly construct stemmata with two branches, which does not constitute a historically probable result; in his own work, Bédier rejected eclectic choice among witnessed readings and preferred the conservative policy of selecting a ‘bon manuscrit’, chosen on such grounds as coherence and regularity, minimally amended.

At the beginning of the 20th century, the most forceful theoretical writings on classical textual editing were those by A. E. Housman. Their lasting import is in no way weakened by their frequent acerbity. For Housman, Lachmann’s method was essential; it had properly removed from consideration ‘hundreds of MSS., once deemed authorities’ (Manilius, 1. xxxiii). Nonetheless, textual criticism ‘is not a branch of mathematics, nor indeed an exact science at all. It deals with a matter … fluid and variable; namely the frailties and aberrations of the human mind, and of its insubordinate servants, the human fingers’ (‘Application’, 69). Its subject is to be found in ‘phenomena which are the results of the play of the human mind’ (Confines, 38). Textual problems are individual; they require particular solutions. Textual criticism cannot be reduced to hard rules and fixed procedures; it requires ‘the application of thought’. Housman had particular contempt for editors who, bewildered by the rival merit of multiple witnesses, retreated from critical judgment into reliance on ‘the best MS.’ Such a method ‘saves lazy editors from working and stupid editors from thinking’, but it inevitably begets ‘indifference to the author himself’, whose original words may not be found in such a MS (Manilius, 1. xxxii; Lucan, vi). Nor can it be assumed that the correction of obvious errors in a single witness will restore authorial readings: ‘Chance and the common course of nature will not bring it to pass that the readings of a MS. are right wherever they are possible and impossible wherever they are wrong’ (Manilius, 1. xxxii). It is the textual critic’s responsibility to identify error, on the basis of the most extensive knowledge of ‘literary culture’, grammar, and metre, and by the exercise of ‘clear wits and right thinking’ (Confines, 43). 
The knowledgeable editor will know enough to recognize possible readings, and find ‘that many verses hastily altered by some editors and absurdly defended by others can be made to yield a just sense without either changing the text or inventing a new Latinity’. However, Housman was equally alert to the dangers of ‘the art of explaining corrupt passages instead of correcting them’ (Manilius, 1. xl, xli). Behind all these propositions stands Housman’s invariable insistence that textual editing should start with the author’s thought.

4 Twentieth-century theories and practices of textual editing

As secular classics became increasingly the focus of scholarship in the humanities at the start of the 20th century, so a new methodological sophistication was applied to their textual criticism. The so-called New Bibliography was developed by R. B. McKerrow and W. W. Greg. McKerrow was the first to use the phrase ‘copy text’. The concept of using a particular text as the basis of an edition was not new, but in McKerrow’s thinking it is specifically theorized as the text that best represents the author’s intentions. In his earlier work, McKerrow argued that the editor should accept such later texts as incorporated authorial revisions and corrections (Nashe, 2. 197). Subsequently believing that a later edition would ‘deviate more widely than the earliest print from the author’s original manuscript’, he argued that editors should base their text on ‘the earliest “good” print’, and insert ‘from the first [later] edition which contains them, such corrections as appear to us to be derived from the author’. Even at this stage, however, McKerrow’s position remained essentially conservative. Resisting eclectic choice, he insisted that where a later edition contained demonstrably authorial substantive variants the editor must adopt them all: ‘We are not to regard the “goodness” of a reading in and by itself … we are to consider whether a particular edition taken as a whole contains variants from the edition from which it was otherwise printed which could not reasonably be attributed to an ordinary press-corrector, but … seem likely to be the work of the author’ (McKerrow, Prolegomena, 18). McKerrow’s position remains here perilously close to the ‘best MS’ approach.

New Bibliographical theory reached its classic development in Greg’s influential article ‘The Rationale of Copy-Text’. Here Greg, in answer to McKerrow, mounted a powerful argument for critical editing. Rejecting ‘the old fallacy of the “best text”’ (Greg, ‘Rationale’, 24), by which an editor thinks himself obliged to adopt all the readings of his exemplar, he distinguished between substantive readings (‘those namely that affect the author’s meaning’) and accidentals (‘such in general as spelling, punctuation, word-division … affecting mainly … formal presentation’) (p. 21). He argued that ‘the copy-text should govern (generally) in the matter of accidentals, but that the choice between substantive readings belongs to the general theory of textual criticism and lies altogether beyond the narrow principle of the copy-text’ (p. 26). In Greg’s recommended procedure, the editor (where extant texts ‘form an ancestral series’) normally chooses the earliest text, which will ‘not only come nearest to the author’s original in accidentals, but also (revision apart) most faithfully preserve the correct readings where substantive variants are in question’ (pp. 22, 29). Having chosen the copy text, the editor will follow it regarding accidentals. Where there is more than one text of comparable authority, however, ‘copy-text can be allowed no over-riding or even preponderant authority so far as substantive readings are concerned’ (p. 29). For Greg, as for McKerrow, the editor seeks the author’s intended text, and is thus obliged to ‘exercise his judgement’ rather than fall back on ‘some arbitrary canon’ (p. 28). In his application of critical discrimination amongst substantive readings in the quest for authorial readings, Greg belongs to an English lineage that includes Bentley, Housman, and Theobald.
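Greg's division of labour between accidentals and substantives can be caricatured in a few lines of code. The readings and the table of editorial decisions below are wholly invented; the point is only the mechanism, namely that accidentals follow the copy-text throughout, while each substantive variant is decided on its individual critical merits rather than by any mechanical rule.

```python
# A toy sketch of Greg's rationale, on invented readings. The copy-text
# (here the earlier witness) governs accidentals; substantive variants
# are decided one by one, the decisions recorded in an editor-supplied
# set rather than derived from the documents themselves.
copy_text  = ["Much", "haue", "I", "trauell'd", "in", "realmes", "of", "gold"]
later_text = ["Much", "have", "I", "wandered",  "in", "realms",  "of", "gold"]

# Critical judgement (hypothetical): the variant at index 3 is a
# substantive authorial revision and is adopted; the variants at
# indices 1 and 5 are accidental (spelling) and the copy-text stands.
adopt_from_later = {3}

edited = [later_text[i] if i in adopt_from_later else copy_text[i]
          for i in range(len(copy_text))]
print(" ".join(edited))  # Much haue I wandered in realmes of gold
```

The resulting text is eclectic in precisely Greg's sense: its substantives come from whichever witness critical judgement favours, while its texture of spelling and punctuation remains that of the copy-text.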

It is an exaggeration to say that all subsequent textual critical theory consists of footnotes to Greg; nonetheless, a high proportion of writing on the subject over the last half century has elaborated and developed Greg’s thinking, or positioned itself in opposition to his tenets or their implications. Greg’s rationale continues to provide a vital framework for editors working on English literary texts. Greg’s most notable expositors and followers have been F. T. Bowers and G. Thomas Tanselle. Greg’s expertise (like McKerrow’s, and that of their ally A. W. Pollard) lay chiefly in the field of 16th- and 17th-century literature, but Bowers insisted that Greg’s was ‘the most workable editorial principle yet contrived to produce a critical text that is authoritative in the maximum of its details … The principle is sound without regard for the literary period’ (Bowers, ‘Multiple Authority’, 86). Indeed, Bowers’s own extraordinary editorial output included authors from Marlowe and Dekker, through Dryden and Fielding, to Whitman, Crane, and Nabokov. The Modern Language Association of America adopted the Greg–Bowers position as the basis for its Statement of Editorial Principles and Procedures (1967), which guided the work of the Center for Editions of American Authors (established 1963). Bowers’s major statements of bibliographical and editorial principle and method may be found in Principles of Bibliographical Description (1949), On Editing Shakespeare and the Elizabethan Dramatists (1955), Textual and Literary Criticism (1959), and Bibliography and Textual Criticism (1964).

A major predicate of the work of McKerrow, Greg, Bowers, and Tanselle is that the goal of literary editorial enquiry is the text intended finally by the author. In an age where the concept of authorial intention, or rather its knowability and reconstructability, has itself come under serious attack, this predicate has required sophisticated justification. A key document in this process has been Tanselle’s ‘The Editorial Problem of Final Authorial Intention’, which draws on an extensive range of theoretical writing, including E. D. Hirsch’s Validity in Interpretation (1967). Editors, Tanselle suggests, ‘are in general agreement that their goal is to discover exactly what an author wrote and to determine what form of his work he wished the public to have’. Editorial choice depends upon a critical determination of intended authorial wording and meaning: ‘of the meanings which the editor sees in the work, he will determine, through a weighing of all the information at his command, the one which he regards as most likely to have been the author’s; and that determination will influence his decisions regarding variant readings’ (Tanselle, ‘Editorial Problem’, 167, 210).

Authorial intention as a basis for textual editing is nevertheless a complex ‘problem’, and it has been interrogated from many points of view. Some have seen the privileging of authorial intention not as a rational choice, but determined by ideologies of individualism. Morse Peckham has complained that to privilege authorial intention is a form of ‘hagiolatry’, attributing to the author divine inspiration or charisma (Peckham, 136). Others, such as Greetham, have represented the project of reconstruction of an authorially intended text as an impossible Platonizing attempt to find a nonexistent ideal, a ‘text that never was’ (Greetham, Theories, 40). Texts raising particularly complex questions of intention and revision have given rise to significant shifts or innovations in the editorial paradigm. One controversial exemplar has been George Kane and E. Talbot Donaldson’s edition of the B-text of Piers Plowman (1975), in which the editors insistently privileged the interpretation of variants based on internal evidence, rather than prior recension. Another, yet more contested, has been Hans Walter Gabler’s edition of Joyce’s Ulysses (1984), in which a clean ‘reading’ text was printed in parallel with a synoptic text and its apparatus, which documented in detail the diachronic processes of Joyce’s alterations. The necessity of extensive textual apparatuses that provide full evidence for editorial deviations from the documentary witnesses has exercised both critical editors and their opponents. Issues raised by authorial revision, already present in writings within the Greg–Bowers tradition, have been a continuing matter of editorial concern. Authorial revision may produce distinct versions, of which no conflated edition can be properly representative. 
This argument—persuasive in the case of Piers Plowman and unimpeachable in the case of Wordsworth’s Prelude, which grew from a two-book poem (1799) to a thirteen-book poem (1805)—has had force too for less obviously extensive revisions. In Shakespearean textual criticism, it has resulted in discrete editions of the 1608 quarto History of King Lear and the 1623 Folio Tragedy of King Lear. Parker has argued that relatively small-scale authorial revisions may have partly or wholly unintended consequences for large-scale textual meaning. Greg’s textual discriminations have come under fire, as mechanisms of selection and control, in recent arguments for ‘unediting’ that dismiss cases for textual choice (on interpretive or bibliographical grounds) as arbitrary and ideologically driven, and prefer previously rejected texts, readings, and variants.

The argument for unediting may arise out of an informed insistence on bibliographical particularity, which conscientiously eschews semantic reading and the editorial process of discrimination arising from it (as with Randall McLeod), or from a rejection, itself ideologically compromised, of the historical reliance of critical editors on evidence and reason (as with Marcus). The radical scepticism of postmodern theory has also found its way into textual thinking. Goldberg, for instance, has argued that the multiple forms taken by texts mean ‘that there is no text itself … that a text cannot be fixed in terms of original or final intentions’. Hence, Goldberg concludes, ‘no word in the text is sacred. If this is true, all criticism that has based itself on the text, all forms of formalism, all close reading, is given the lie’ (Goldberg, 214, 215). This is a conclusion that, taken seriously, would disable not only the Greg–Bowers rationale but all text-based academic disciplines, and the book itself.

One of the most significant movements in text critical theory of the last three decades has been a sense amongst some thinkers—notably D. F. McKenzie and Jerome J. McGann—that the Greg–Bowers line reduced bibliography and textual editing to ‘a sharply restricted analytic field’ in ‘desocializing’ the understanding of textual production. Rejecting authorial autonomy and the possibility of an uninfluenced intention, McGann insisted that ‘literary works are fundamentally social rather than personal or psychological products, they do not even acquire an artistic form of being until their engagement with an audience has been determined … literary works must be produced within some appropriate set of social institutions’ (McGann, Critique, 119, 121, 43–4). Those social institutions include scribes, collaborators, editors, censors, the printing office, publishers, and the theatre. The production of the literary work, as well as its meanings, is significantly shaped, on this account, by multiple human agents and the forms of presentation with which those agents endow the work. The social argument has been persuasive for many areas of textual work—especially for modern Shakespeare editors, who would shift the emphasis from the book focus of the Greg–Bowers school towards the negotiation amongst playwright, playhouse, players, and audience.

The extent and consequences of these challenges are real, though it has been argued that the breadth and flexibility of the Greg–Bowers position have not always been fully comprehended, and that it remains a competent rationale. It has not been refuted by the historical facts of authorial revision and distinct versions, or the social circumstances of literary creativity and production; none of these factors is wholly new to the debate. As Tanselle puts it, ‘critical editors interested in authors’ final intentions are not trying to mix versions but to recreate one … critical editors … all must rely on surviving documents … and strive to reconstruct from them the texts that were intended by particular persons (whether authors alone, or authors in collaboration with others) at particular points in the past’ (Tanselle, ‘Textual Criticism and Literary Sociology’, 120, 126).

The book-based edition itself faces a challenge from the most consequential recent development in textual criticism: the exploitation of computational resources (see 21). A straightforward and powerful example is the full-text electronic database (e.g. Early English Books Online or Eighteenth-Century Collections Online), which provides users with facsimile pages of an astonishing range of early books. More complex, multimedia hypertextual resources are available on the World Wide Web. Though the printed critical edition has itself been a hypertext of a sophisticated kind, it is certainly true that electronic hypertext can do much that the book cannot. Electronic memory allows for the presentation of multiple versions of the text of any particular work. Hyperlinks enable flexible connections among those texts, and among an essentially unlimited range of contexts, in video, audio, and textual formats. Software applications allow for seemingly infinite varieties of search and comparison among the resources of the hypertextual database. There are parallels and synergies here with postmodern tendencies in contemporary critical theory. Major hypertextual archives (e.g. the Rossetti Archive at the University of Virginia) already exist, and a number of writers—Landow, Lanham, McGann (Radiant Textuality)—have begun to develop the theory of hypertext, mostly in positive terms.

Electronic forms are inherently inclusive, and are thus powerful and valuable in themselves. However, either they do not make discriminations, or they make discriminations in hidden and unarticulated ways. Because of their very plenitude, hypertext archives are not editions. Books are crafted objects, embodying critical intelligence, made by the normally irreversible decisions of their authors, printers, and publishers. The book-based scholarly edition embodies its maker’s ethical choices amongst texts, variants, and understandings. Harold Love has argued that ‘the electronic medium with its infinite capacity to manufacture increasingly meaningless “choices” and its unwillingness to accept closure is almost by definition post-ethical or even anti-ethical’ (Love, 274–5). Editors and textual critics, as they embrace some of the pleasures of an electronic future, will need more than ever to create central and transparent roles for agency, responsibility, and critical intelligence.

BIBLIOGRAPHY

J. H. Bentley, Humanists and Holy Writ (1983)

R. Bentley, ed., Q. Horatius Flaccus (1711)

—— Proposals for Printing a New Edition of the Greek Testament (1721)

F. Bowers, ‘Some Principles for Scholarly Editions of Nineteenth-Century American Authors’, SB 17 (1964), 223–8

—— ‘Multiple Authority: New Problems and Concepts of Copy-Text’, Library, 5/27 (1972), 81–115

E. Capell, Prolusions; or, Select Pieces of Antient Poetry (1760)

W. Chernaik et al., eds., The Politics of the Electronic Text (1993)

T. Davis, ‘The CEAA and Modern Textual Editing’, Library, 5/32 (1977), 61–74

P. Delany and G. Landow, eds., Hypermedia and Literary Studies (1991)

J. Goldberg, ‘Textual Properties’, SQ 37 (1986), 213–17

A. Grafton, Defenders of the Text (1991)

D. C. Greetham, Scholarly Editing (1995)

—— Theories of the Text (1999)

W. W. Greg, ‘The Rationale of Copy-Text’, SB 3 (1950–51), 19–36

A. E. Housman, ed., M. Manilii Astronomicon (5 vols, 1903–30)

—— ‘The Application of Thought to Textual Criticism’, Proceedings of the Classical Association, 18 (1921), 67–84

—— ed., Lucan Bellum Civile (1926)

—— Selected Prose, ed. J. Carter (1961)

—— The Confines of Criticism, ed. J. Carter (1969)

G. Landow, Hypertext (1992)

R. A. Lanham, The Electronic Word (1993)

H. Love, ‘The Intellectual Heritage of Donald Francis McKenzie’, Library, 7/2 (2001), 266–80

L. Marcus, Unediting the Renaissance (1996)

R. Markley, ed., Virtual Realities and their Discontents (1996)

J. McGann, A Critique of Modern Textual Criticism (1983)

—— The Beauty of Inflections (1985)

—— ed., Textual Criticism and Literary Interpretation (1985)

—— The Textual Condition (1991)
—— Radiant Textuality (2001)

D. F. McKenzie, Bibliography and the Sociology of Texts (1986)

McKerrow, Introduction

R. B. McKerrow, Prolegomena for the Oxford Shakespeare (1939)

R. McLeod, ‘UN Editing Shak-speare’, SubStance, 10 (1982), 26–55

B. M. Metzger, The Text of the New Testament (1992)

G. Most, ‘Classical Scholarship and Literary Criticism’, in The Cambridge History of Literary Criticism, vol. 4: The Eighteenth Century, ed. H. B. Nisbet and C. Rawson (1997)

T. Nashe, The Works of Thomas Nashe, ed. R. B. McKerrow (5 vols, 1904–10; 2e, rev. F. P. Wilson, 1958)

H. Parker, Flawed Texts and Verbal Icons (1984)

M. Peckham, ‘Reflections on the Foundations of Modern Textual Editing’, Proof, 1 (1971), 122–55

Reynolds and Wilson

W. Shakespeare, Mr William Shakespeare his Comedies, Histories, and Tragedies, ed. E. Capell (10 vols, 1767–8)

P. Shillingsburg, Scholarly Editing in the Computer Age (1996)

D. N. Smith, ed., Eighteenth Century Essays on Shakespeare, 2e (1963)

G. Tanselle, ‘Greg’s Theory of Copy-Text and the Editing of American Literature’, SB 28 (1975), 167–230

—— ‘The Editorial Problem of Final Authorial Intention’, SB 29 (1976), 167–211

—— ‘The Editing of Historical Documents’, SB 31 (1978), 2–57

—— ‘The Concept of Ideal Copy’, SB 33 (1980), 18–53

—— ‘Recent Editorial Discussion and the Central Questions of Editing’, SB 34 (1981), 23–65

—— ‘Classical, Biblical, and Medieval Textual Criticism and Modern Editing’, SB 36 (1983), 21–68

—— ‘Historicism and Critical Editing’, SB 39 (1986), 1–46

—— A Rationale of Textual Criticism (1989)

—— ‘Textual Criticism and Deconstruction’, SB 43 (1990), 1–33

—— ‘Textual Criticism and Literary Sociology’, SB 44 (1991), 83–143

L. Theobald, Shakespeare Restored (1726)

U. von Wilamowitz-Moellendorff, History of Classical Scholarship, ed. H. Lloyd-Jones (1982)

W. Williams and C. Abbott, An Introduction to Bibliographical and Textual Studies (1999)

F. Wolf, Prolegomena to Homer (1795), tr. A. Grafton et al. (1985)