In the expression “quantitative codicology,” coined by me and Carla Bozzolo, it is questionable which of the two terms is more ambiguous, since in my opinion both, in their commonly accepted meaning, are incomplete and restrictive. Although the term “codicology” is usually reserved for the study of the manuscript, I believe that this discipline should apply to all material aspects of all types of books, regardless of their form (codex, roll, or otherwise), their purpose (literary or documentary), or their method of reproduction (handwritten, xylographic, or typographic).1 As to the term “quantitative,” it is necessary to stress that it should be understood in its broadest sense: it should be applied not only to strictly numerical data but to everything which can be measured in a book. The concept of “measurement” is itself very broad. It may imply the metrical value of a parameter (e.g., a folio measures 200 × 150 mm; a page has 35 written lines), but the term may also be applied to a qualitative characteristic in an established system of classification, which may or may not be hierarchical. Thus, the measurement of a property such as “the color of the binding” may yield such results as “yellow,” “green,” “violet,” etc., whereas that of “the type of initial” may be classified as “historiated,” “painted,” or “filigree,” which is directly related to a hierarchical scale of richness.
This enlarged definition of the concept of measurement implies that everything in a book is theoretically measurable, or, better yet, that each characteristic observed in a book may be formalized to extract measurable data from it, provided it is worth the effort. Even a piece of information as complex and discursive as that which we derive from the colophons of the copyists may be split up into formal data: the date is of utmost importance, but its degree of precision (mention of the year, month, day) and the manner of expressing it (modern, Roman, or liturgical calendar) are not without interest if the objective is not only to take note of the year the book was written, but to examine closely, in a systematic manner, the behavior of the copyist in relation to the dating of the manuscript.
These operations of measurement often require a considerable initial effort of conceptual elaboration, and their results are never perfect—what should be measured, how should the measurement be made? They also require long and detailed surveys, either by autoptic examination or by combing through catalogues (when they exist and the descriptions are sufficiently detailed and trustworthy). But contrary to what one may think, it is not the numerical character and the application of more or less refined statistical techniques that lie at the heart of the quantitative approach. Its salient traits are rather the nature of the survey and of the problems tackled, as well as the specificity of the procedures employed to answer the questions posed.
It is true that the novelty of this approach was not immediately obvious. Codicology is a relatively recent discipline, and it is even more recently that it has attained a status of relative autonomy. Earlier scholars relegated the study of the material aspects of the book (when it was not willfully ignored) to the sphere of “auxiliary disciplines.” Codicology was thus subordinated to the primary goal of dating and localizing a volume. In this context, even an innovation as revolutionary as the computer, far from thwarting this vision of things, could only strengthen it. It was hoped that computers would allow us instantly to bring together data, previously unobtainable because of the slowness of the human brain, for establishing more impressive and more objective criteria for dating and localizing. This prospect was illusory. The know-how of the medieval craft being too uniform at any given time and its evolution too slow across time, the attempt to come up with better results by this method than those that could be obtained from the intuition of a skillful observer proved fruitless. Even if it is true that new issues and practices could not emerge in the absence of the technological tools necessary to make use of them (the organ often creates the function and not the reverse), it is also true that the presence of these tools was not sufficient to ensure the development.
However, all the ingredients came together at once to produce change. On the one hand, the necessity, imposed by the “stupidity” of the machines, of dealing with strictly formalized data, i.e., data that is rigorously classified and uniformly codified; and on the other hand, the need to focus our attention on vast quantities of objects and not solely on certain individual ones chosen for their distinctive qualities. It was, therefore, inevitable that a new way of looking at books would lead sooner or later to our asking questions which we could now answer. For example, it was only when dated manuscripts were catalogued in sufficient numbers that we could think of tallying them up year by year to establish curves of production.2 Similarly, it was only when we could deal simultaneously and rapidly with the characteristics of several hundred manuscripts that we could see that the volumes with a two-column layout were in general bigger than those with a single-column format, written in long lines.
In spite of their apparent simplicity, these two examples reveal the profound essence of the quantitative approach, so different in every respect from the methodology of connoisseurship or of traditional scholarly methods, namely, (1) the collection of data which is, in and of itself, very uninformative—and thereby totally devoid of interest when taken individually—but which we encounter again and again in all, or nearly all, of the volumes; and (2) the absence of any hierarchy within the material studied, the great monuments of cultural history having the same anonymity as the most modest documents. But above all, the quantitative approach takes into account problems which the traditional scholarly method shows itself intrinsically inadequate to address, and it asks questions that aim not at constituting facts and establishing relationships between them but at revealing unknown phenomena or at organizing observations of known phenomena so as to disclose the factors that influenced them. In other words, it is less a question of establishing the “who,” “when,” “where,” and “how” and more of establishing the “why.”
This aspect, which could rightly be called “sociological,”3 is at the heart of the quantitative approach. Just as sociology studies people collectively and under diverse perspectives in their roles as members of a society so as to reveal the roots and the dynamic of their behavior, quantitative codicology studies collectively and under diverse perspectives the “behavior” of books as functional objects destined to transmit a message. To study means first of all to describe in a systematic manner the results of observations. However, if one is to study the “why,” it is obvious that the tabular or graphic description of a phenomenon, perhaps accompanied by a verbal paraphrase, only constitutes a first step and has nothing to do, in spite of its usefulness, with the profound philosophy of the approach, a philosophy which is necessarily strategic and “experimental.” This last term should be taken in its weakest sense: it is not possible to experiment physically on the remnants of the past to test a hypothesis, and it is even less possible to go back in time to study the surviving books as they existed in the past. It is possible, however, and indeed necessary, to change the light we shine on a body of material, and therefore our perspective, with the aid of more or less elaborate filters. When we study the forms of appearance of any specific characteristic to determine its frequency in a population of books, the simplest procedure is to divide the material being studied on the basis of some other characteristic. We may then see if this division engenders a significant variation of the characteristic being studied. If no such division appears to be meaningful, we must conclude that the characteristic being studied is the product of free will, associated with coincidental phenomena such as fashion, or that it depends on restrictive factors but ones that are exterior to the process of the making of the book, or perhaps even of the cultural sphere. If, however, the division proves meaningful, we may suppose that the two characteristics are linked.
In reality, things are much less clear, for even at its simplest level this type of procedure is full of booby traps. The fact that the behavior of one characteristic varies contingent upon that of another does not mean that one depends uniquely on the other, nor that one depends directly on the other, nor even that the two characteristics are in reality linked. One example will suffice to show how slippery the ground is in this respect and to underline the need to proceed with caution.
If the layout of the text in long lines or in two columns is correlated to the dimensions of the pages, this certainly reveals that the factor of size is what we call a “contributing factor.” But are we, in fact, dealing with the real “guilty” party or at least a single “guilty” party? Nothing is less certain, for numerous characteristics of the page, including the script, are scaled to the size of the page. Thus, the height of the ruled writing space—which depends to a large degree on that of the page—might be a more adequate “guilty party,” for it is one of the major elements of the layout. However, it is also the case that even when the height of the ruled writing area is short, the volumes may tend toward the two-column format—which seems to contradict the hypothesis. This happens when there are many lines and the writing is small. So, in the end, it is necessary to ask what the purpose of this layout is: since the two-column layout divides in half the length of the lines of writing, it is likely that its purpose is to preserve the legibility of the page.4 This raises another question: what is legibility? Or rather, how can we translate the abstract and subjective concept of legibility into concrete and measurable terms, in other words, into pertinent and efficient “indicators”?5 It is clear that as we investigate the problem more deeply, the situation becomes more and more nuanced and complex.
But that is not the worst of it: we must constantly be aware of the possibility that the results obtained through statistical analysis and the investigation of explanatory factors may be wrong in a more or less subtle manner due to what are commonly called “structural effects.” There are cases in which the statistical analysis is not defective and the numbers are exact, but the interpretation is wrong because the connections made between one parameter and another only appear to be relevant.
In the quantitative study of written culture, the effects of structure are omnipresent. Often, however, and contrary to what we might think, they do not arise from faulty sampling on the part of the historian. Rather they flow objectively from the selective character of the centuries-long process of dispersion and loss of the book patrimony, a process whose forms and conditions we must understand. If we agree that most loss is due to negligence and not to intentional destruction or natural disasters, it follows that the survival of books is strictly linked to their sale value. This cannot be separated from the aesthetic evaluation of the object.6 But, if it is relatively easy to detect the direct effects of this non-egalitarian process (even if it is impossible to correct them), their secondary effects are more difficult to apprehend. The loss of the poorest manuscripts does not reverberate only on parameters like price,7 which depend directly on the richness of execution, but also on phenomena seemingly more distant.
Here is an example of such a secondary effect: If we observe in the course of the fifteenth century an increase in the relative percentage of liturgical manuscripts, we should be suspicious of the result. The processes of destruction and loss disproportionately affected volumes on paper—which had become, little by little, the principal writing support—and resulted in an overrepresentation of surviving manuscripts on parchment, which was the material used almost exclusively for liturgical manuscripts.8 Likewise, we should be suspicious of assessments of the increased use of paper in the book production of Italy, for the calculations are based on the catalogues of dated manuscripts, and these have been for a long time systematically biased. Until quite recently there were few volumes in the Italian series of catalogues of dated manuscripts, so the calculations concerning Italy were based on the books conserved in foreign libraries, libraries which, because of pillaging and the activity of collectors, are filled with volumes on parchment whose qualitative level is excellent.9
From the preceding, it appears that the essence of the quantitative approach does not lie in the precision of the observation and measurement, nor in the massive application of statistical tests of validation, though both are necessary. Rather, it lies in the imagination we employ in choosing strategies for achieving unequivocal answers and in ferreting out the artifacts which might, otherwise, insidiously skew the results. These are, relatively speaking and mutatis mutandis, the same principles which govern research in the universally recognized experimental disciplines.
The few examples which follow are intended to illustrate various problems confronted by a quantitative codicologist and the procedures adopted to resolve them, as well as the difficulties, often insurmountable, which impede the researcher’s curiosity. They also demonstrate the absolute necessity of knowing in detail the world of the medieval book, in its material aspects as well as its historic and cultural environment. It is intentional, in light of the unifying principle of quantitative research on the book, that one of the examples involves the printed book.
In theory, what could be simpler than quantifying the level of book production in Western Europe?10 We need only count up, year by year, the number of dated volumes to produce very detailed graphs; and these—even if we take into account that the number of volumes does not acquire statistical significance until the later Middle Ages—extend over nearly two centuries. This operation does not require detailed statistical manipulations: if the annual tally is too small, we may regroup the data in periods of five or ten years. Similarly, chance fluctuations may easily be smoothed over, should the occasion arise, by calculating moving averages.
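The tallying, regrouping, and smoothing just described can be sketched in a few lines of Python. The year counts below are invented for illustration, and the function names are my own; the procedure is the point, not the figures.

```python
# Sketch: tallying dated manuscripts and smoothing chance fluctuations.
# The list of production years is invented, not real survey data.
from collections import Counter

# Hypothetical years extracted from dated colophons
years = [1401, 1401, 1402, 1404, 1404, 1404, 1405, 1407, 1407, 1410]
annual = Counter(years)

def by_period(counts, width=5):
    """Regroup annual tallies into periods of `width` years."""
    grouped = Counter()
    for year, n in counts.items():
        grouped[year - year % width] += n
    return dict(sorted(grouped.items()))

def moving_average(series, window=5):
    """Centred moving average, clipped at the edges of the series."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(by_period(annual))  # {1400: 6, 1405: 3, 1410: 1}
```

A constant series passes through the moving average unchanged, which is a convenient sanity check before applying it to real tallies.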
The idea, elementary as it is, of calculating the annual level of manuscript production is impossible without one reliable tool, namely the repertory of dated manuscripts. Fortunately, by the 1970s, the number of catalogues published since the inauguration of the project in 1953 had reached a sufficient level that it became possible to make such a calculation, at least for certain countries.11 However, behind the simplicity of the statistical apparatus hides a complex historical reality, the central problem being: how representative is the corpus of dated manuscripts available to the researcher?
It is necessary to take into account that the extant manuscript patrimony is but a small fraction of the quantity of volumes produced; that the volumes having a date are only a small part of the extant patrimony; that neither the general cataloguing of the extant patrimony nor that of the dated manuscripts is exhaustive at the present time. This multistage filter created by the destructive work of time and by the inevitable gaps in reporting would not be significant if the reduction of the population at each step was uniform relative to the initial state. But this is not the case. We know for a fact—in part from the study of incunables where we can count the remaining copies of an edition and compare that to the press run (when known)—that losses varied in severity as a result of the type of text (and, therefore, of the intended audience). Moreover, as we have seen, comparison between the inventories of old libraries and the surviving books demonstrates that the losses were much heavier for less expensive volumes. Finally, soundings taken at regular intervals in certain catalogued collections demonstrate that the percentage of dated manuscripts varies in accordance with both chronology (it increased over time) and geography.
These important distortions make it difficult to interpret the collected data. The trick is to understand what we can reasonably expect to get from the data and what, on the contrary, might lead to erroneous extrapolations. Thus, it would be very hazardous to compare for one particular period the level of production in different countries because the tendency to record a date in a manuscript is geographically sensitive.12 One may, on the contrary, but with great prudence, sketch the underlying or cyclical trends of production within the same country, if one takes into account the fact that the rate of dating may quickly increase in time—which will lead to an overestimate—or that the percentage of manuscripts on paper—which have been destroyed in great numbers—also increases in time, which leads, on the contrary, to an underestimate. In other words, it is imperative that this type of analysis avoid venturing too far into detail and losing sight of the fact that quantitative studies always presuppose an excellent knowledge of the historical background and of its development.
We commonly assume that, as with the majority of manuscripts, ancient editions were composed and printed page by page in the “natural” sequence of reading.13 In fact, the reality is more complicated. For reasons of productivity, the text to be printed was often divided into parts and distributed to two or more teams working simultaneously. Even so, we might still suppose that, within each working group, each signature was composed following the natural order. But this was not always the case. It has been established that in certain German editions printed between 1460 and 1470, the pages of the recto were systematically set up before the pages of the verso.14 From the middle of the following decade, what had earlier been an exception became the rule: the signatures were never put together and printed in the order of reading. Two phenomena combined to produce this result. Firstly, the invention of the “two-strike” press allowed the near simultaneous printing of two pages of a folio edition (or four of a quarto edition, etc.). These pages appeared side by side on the folded sheet, but did not follow the sequence of reading.15 Secondly, the quantity of characters available was not sufficient to allow composing all the pages of a book before the printing of the first forms commenced. Each form, once printed, had of necessity to be disassembled, so that the characters could be reused immediately. Thus, in the case of a folio printing, pages 1 and 16 of a quire—far apart in the text—had to be set at the same time.
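The pairing of pages far apart in the text can be made concrete with a small calculation. Assuming, as described above, a folio quire in which each folded sheet carries four pages, the following sketch (the function name is mine) lists which pages share a forme, sheet by sheet:

```python
# Sketch: page pairing in a folio quire, under the assumption that
# each sheet carries four pages and each forme prints two of them.

def folio_formes(sheets):
    """Return (outer_forme, inner_forme) page pairs for each sheet
    of a folio quire of `sheets` folded sheets."""
    total = 4 * sheets  # pages in the quire
    pairs = []
    for i in range(1, sheets + 1):
        outer = (2 * i - 1, total - 2 * i + 2)  # e.g. pages 1 and 16
        inner = (2 * i, total - 2 * i + 1)      # e.g. pages 2 and 15
        pairs.append((outer, inner))
    return pairs

# A quire of four sheets (eight leaves, sixteen pages):
print(folio_formes(4)[0])  # ((1, 16), (2, 15))
```

The outermost sheet of a sixteen-page quire thus pairs page 1 with page 16, exactly the situation the text describes: the compositor had to set both at once, however distant they stand in the reading order.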
The discovery of these disturbances in the work of composition was made by observation of macroscopic phenomena: irregularities in the number of lines on a page and/or in the leading; systematic variations in the composition of the form (certain characters are systematically present on some pages and not on others). But how can we identify the sequence of composition when, as is nearly always the case, we do not have this type of information? To answer this question, it is necessary to analyze the procedures of the typesetters. When pages are set in the sequence of reading, the setters have considerable freedom. When not, they must take into account the already printed pages, which establish inviolable boundaries. To reduce to a minimum the risk that a text being composed would turn out too long or too short relative to the available space, the typesetters needed to make advance calculations of the length of the text and, above all, make adjustments as the work proceeded to ensure that the estimate corresponded to reality.
The simplest means of adjustment involved the use of abbreviations. If the text being set was too long for the space available, the number of abbreviations employed was increased. In the opposite case, it was reduced. Since the parameter “number of abbreviations” is sensitive to the slightest error of calculation, it follows that the variations in the number of abbreviations—represented statistically by the variance—are a reliable witness to the sequence of the composition of the text, even when the typesetter encountered no real difficulty. The pages where variation is significantly greater than the average are those which precede pages that had already been printed. The counting of abbreviations in several quires is necessary to ensure that the pages where the variations are more significant are systematically the same, and that it is not a question of accidental influences.16
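The comparison of variances across page positions might be sketched as follows. The abbreviation counts are invented for illustration; in practice they would come from tallying abbreviations page by page in several quires, as the text prescribes.

```python
# Sketch: locating the pages set against fixed boundaries by the
# variance of their abbreviation counts across several quires.
# The counts below are invented toy data.
from statistics import pvariance

# abbreviations[quire][page_position] = number of abbreviations
abbreviations = [
    [12, 11, 25, 13],   # quire 1
    [13, 12,  4, 12],   # quire 2
    [11, 13, 22, 14],   # quire 3
]

# Variance of each page position across quires: the positions set
# last, against already-printed pages, should vary the most.
n_positions = len(abbreviations[0])
variances = [
    pvariance([quire[p] for quire in abbreviations])
    for p in range(n_positions)
]
suspect = variances.index(max(variances))
print(suspect)  # position 2 varies most in this toy data
```

Only if the same position stands out quire after quire, as here, may the anomaly be attributed to the sequence of composition rather than to accident.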
The thickness of parchment can only be correctly determined by a measuring instrument, since touching only provides data that is useable—at a very rudimentary level—when the differences between the bifolia of a single quire or of a single book, or between the leaves of several books, are macroscopic.17 Measuring the thickness allows, first of all, an assessment of the variations of this parameter in space and time, as well as in respect to other characteristics of the volumes (dimensions and number of pages, qualitative level). It can also allow the lifting of the veil on certain practices related to the formation and organization of quires. More particularly, it may answer the following question: did the artisans pay attention to this material aspect while they were making books (even though they did not have precise instruments for measuring the thickness of parchment)? If so, to what end? In principle, their awareness could be revealed in two ways: within quires, by modulating differences in thickness vis-à-vis the order of the bifolia, and within the volume by enforcing a uniformity of thickness on the bifolia belonging to the same quire.
Given the irregularities which affect the surface of a skin (especially close to the spine of the animal), the measurement of the thickness should be taken at several points on each bifolium, and should be repeated on a large number of bifolia belonging to a large number of manuscripts. Our question about the artisans of books is easiest to answer when we are dealing with folio volumes, for in this case each bifolium corresponds to a skin. The calculation does not require a complex procedure. It will be observed that there is a tendency—but not always and not everywhere18—to place the thickest bifolia at the outside of the quire. This technique should not surprise us. It has the same purpose as another practice which is visible to the naked eye and which gives rise to manuscripts which are called “mixed,” where the outside bifolia are made of parchment, while the inner ones are paper. The goal is to place the sturdiest material on the outside of the quire, which is exposed to the most wear and tear.
The verification of the internal homogeneity of quires, on the other hand, is more complicated. One must determine if the bifolia of the same quire are more like one another than they are like those of different quires belonging to the same volume, and if the quires belonging to the same volume resemble one another more than they do those belonging to another volume.
Several procedures allow us to answer these questions. These are all founded on measuring not resemblance but, on the contrary, variability, whose statistical indicator is the variance.19 One relatively elegant method involves creating fictive quires by randomly mixing real bifolia from other quires of the same volume (in the same way that we shuffle a deck of cards). The comparison between the “artificial” quires and the real quires shows that the variance is significantly larger in the first. The experiment may be repeated in a more elaborate fashion to demonstrate that the variance increases proportionately when one, two, or several bifolia are replaced in a real quire with bifolia from another quire. And this increase is even greater if the replacement bifolia come from quires that belong to a different manuscript.
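The card-shuffling experiment translates almost literally into code. A minimal sketch, with invented thickness values and my own function names:

```python
# Sketch: comparing real quires with "fictive" quires obtained by
# shuffling bifolia, as one shuffles a deck of cards.
# Thicknesses (in mm) are invented for illustration.
import random
from statistics import mean, pvariance

# quires[i] = thickness of each bifolium in quire i
quires = [
    [0.18, 0.19, 0.18, 0.20],
    [0.24, 0.25, 0.23, 0.24],
    [0.30, 0.29, 0.31, 0.30],
]

def mean_within_variance(qs):
    """Average of the within-quire variances."""
    return mean(pvariance(q) for q in qs)

def fictive_quires(qs, rng):
    """Randomly redistribute all bifolia into quires of the same sizes."""
    pool = [b for q in qs for b in q]
    rng.shuffle(pool)
    out, i = [], 0
    for q in qs:
        out.append(pool[i:i + len(q)])
        i += len(q)
    return out

rng = random.Random(0)
real = mean_within_variance(quires)
# Average over many shuffles to smooth sampling fluctuation
fake = mean(mean_within_variance(fictive_quires(quires, rng))
            for _ in range(1000))
print(real < fake)  # True: real quires are more homogeneous
```

The mixing is, as the text says, assuredly virtual, yet the comparison between real and shuffled quires is a genuine experimental control.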
It appears, then, so far as parchment is concerned, that the artisanal practice aimed at producing a high degree of homogeneity. The making of a manuscript involved the choice of a “set” of skins which were as homogeneous as possible, and within each manuscript, even more homogeneous subsets were selected to make up each quire. Finally, where the level of professionalism was the highest, the bifolia within each quire were arranged according to their thickness.
On the methodological level, this type of research contradicts in part the received notion that the quantitative approach in the social sciences differs radically from that in physics, chemistry, or biology, where one can alter at will the conditions of the experiment. Here, the mixing of bifolia is assuredly virtual, but it is certainly an experimental procedure.
Everyone who leafs through medieval manuscripts is aware that their layout is substantially the same as that of modern-day books, that is, it contains a frame intended to enclose the lines of writing (perhaps divided into two columns) within a four-sided border of margins.20 One difference, however, is that the division of the written space in the manuscripts is achieved through a series of perpendicular lines traced with the aid of various instruments, as well as by prickings which allow the identical layout to be reproduced on every page of the quire.
The construction of the layout was of necessity the product of prior planning, if only because the degree to which the page is filled and used determines the number of pages needed for a copy, and thus the quantity of parchment or paper needed to produce it. That said, we may wonder how the base parameters were chosen and what the construction methods were.
In answer to these questions, it is possible to hypothesize that, at least in rather carefully worked out instances, the layout resulted from applying elaborate aesthetic canons based on the construction of rectangles with “remarkable” proportions (the golden rectangle, the Pythagorean rectangle, etc.).21 But the documentary evidence, namely a very small number of medieval recipes, suggests that the procedure followed was much less ambitious. And, surprisingly, in none of the recipes are the dimensions of the frame of writing even mentioned. Determining the dimensions of the frame is done solely with reference to the relative size of the various margins.22 The focus on the margins is so complete that the dimensions of the frame of writing constitute only a byproduct. This is the case, notably, in two of the oldest recipes: one called “the recipe of St. Remi” (ninth century) and the other “the recipe of Munich” (fifteenth century).23
Be that as it may, the question arises: what is the real impact of these recipes? In theory, there are two possible answers to the question. They could be theoretical reflections, imposed after the fact, based on observation of the independent choices made by a great number of different artisans. Or they could just as well be the result of an isolated process, conforming to artificial geometric principles and situated outside of actual practice.
An answer to this question can be obtained from a quantitative analysis conducted on a large group of manuscripts from different geographical areas and periods. The possibility of undertaking such a study is hindered by the fact that catalogues almost never provide the dimensions of the margins, but there are also immense methodological difficulties stemming from a simple fact. The margins are, without a doubt, the most elusive parameter of the book, given that the outer margins were likely to be trimmed during each successive rebinding, and because the inner margins are hard to measure because of the sewing of the quires. Furthermore, when we compare the margins we have measured to what we find in the recipes, we must also take into account the lack of precision which characterizes the craftsmanship of the medieval book. These difficulties are all the more formidable because we must evaluate the relationship between two sizes. In this context, even the smallest alteration in the base parameters—regardless of their origin—has a considerable impact on the value of the derived parameter.
So, the only solution is to test the validity of the recipes by determining for each of the four margins a range of uncertainty (in absolute terms and in percentage). If all the margins of the volume fall within the range, we may conclude that its layout is compatible with one or the other of the recipes. By applying this methodology, it becomes clear that there is a close match between the recipe of St. Remi and the layout of Byzantine manuscripts, and that there is an ever closer match between Western manuscripts and the recipe of Munich as one advances in time.
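Such a tolerance test is easy to formalize. In the sketch below the recipe ratios, tolerance, and page measurements are all invented for illustration; they are not the actual prescriptions of the St. Remi or Munich recipes.

```python
# Sketch: testing whether measured margins are compatible with a
# recipe, within a range of uncertainty. Ratios and tolerance are
# illustrative assumptions, not the values of any real recipe.

def matches_recipe(page_h, page_w, margins, ratios, tol=0.15):
    """margins/ratios: dicts keyed 'top', 'bottom', 'inner', 'outer'.
    ratios give each margin as a fraction of the page height (top,
    bottom) or width (inner, outer); tol is the relative uncertainty
    allowed around the expected value."""
    base = {'top': page_h, 'bottom': page_h,
            'inner': page_w, 'outer': page_w}
    for side, measured in margins.items():
        expected = ratios[side] * base[side]
        if abs(measured - expected) > tol * expected:
            return False
    return True

# Hypothetical recipe: top 1/9 and bottom 2/9 of the height,
# inner 1/9 and outer 2/9 of the width. A 270 x 180 mm page:
margins = {'top': 31, 'bottom': 58, 'inner': 21, 'outer': 38}
ratios = {'top': 1/9, 'bottom': 2/9, 'inner': 1/9, 'outer': 2/9}
print(matches_recipe(270, 180, margins, ratios))  # True
```

Setting `tol` to zero would reject nearly every real volume, which is precisely the point made below: without a margin of error, one would wrongly conclude that the recipes had no link with actual practice.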
Without the introduction of a margin of error, very few manuscripts would have successfully passed the test and one could have concluded that the recipes had no link with the real practice of the artisans. This type of investigation reveals one of the basic principles of quantitative studies, namely, that one should not confuse statistics with mathematics. Sometimes precision is not synonymous with better understanding, and sometimes it is quite the contrary.
Palaeographical studies which concentrate on the morphological evolution of scripts pay scant attention to a whole range of phenomena related to what one might call “the conquest of space,” that is, the devices employed by the scribes to modify their work plan in relation to the available space: the segmentation of the text in quires, pages, and lines; the relationship of text and illustration or of text and gloss.24 These mechanisms of adaptation are numerous and varied. One of them consists simply in compressing or expanding the plan by making the size of each letter or graphic sign smaller or larger, or by compressing/enlarging the amount of blank space between the individual signs or words. Sometimes, these adjustments are macroscopic—which leads the observer to attribute them to a lack of professionalism—but in other cases they are not evident to the naked eye.
The scribe Herimann—who, in the second half of the twelfth century, transcribed the text of the Gospels of Duke Henry the Lion—certainly did not suffer from a lack of professionalism. However, in this sumptuous manuscript, the adjustments of the work plan are everywhere evident. Is this a question of accidents along the way, or is it rather the coherent result of a conscious effort on the part of the scribe vis-à-vis a predetermined “road map”? To answer this question—and above all to uncover the motivations and the goals of the scribe’s behavior—we should not be content with a visual assessment, which will of necessity be imprecise and subjective.
If it is a question of predetermined adjustments, we may suppose that their nature is in part cyclical, in the sense that they come round at regular intervals. But measuring these possible fluctuations is not simple. They are generally muted and, additionally, they overlap. Let us consider, as a point of comparison, climate variations: temperature measurements follow a general “trend” (at the present time, warming), on which are superimposed the regular seasonal variations, themselves overlaid by daily fluctuations, the whole being subject to the vagaries of inter-annual variations of daily averages, which constitute a kind of background noise. One can say the same for economic variables (industrial production, unemployment) whose values are constantly corrected according to seasonal variations.
Statistical techniques provide a means of addressing this difficulty but, in this instance, they are not easy to employ. In the case of writing, in particular, there is a major obstacle: since it is impossible to measure the width of every letter in an entire manuscript, we must instead take a broader measure of the “density” of the writing, that is, the number of letters within a given amount of space (centimeters, lines, etc.). But, since the width is not the same for every letter of the alphabet, the density can vary considerably with the frequency of each letter in a given passage of text, and all the more so if the passage is short. In longer passages, by contrast, the frequency of each letter approaches its average for the language in question. We should, therefore, count the number of letters in groups of lines, page by page and quire by quire, each sample of lines being long enough to reduce the background noise to a minimum. Preliminary experiments indicate that samples of four to five lines suffice to minimize these sampling fluctuations.
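The statistical effect behind this sampling rule can be sketched with invented data: letters-per-line counts that fluctuate randomly around a scribe’s “true” density. The figures (42 letters per line, a spread of 3) are arbitrary assumptions for the demonstration; the point is only that the spread of the measured density shrinks as the sample grows.

```python
import random
import statistics

random.seed(0)

# Hypothetical letters-per-line counts for 20 pages of 30 lines,
# fluctuating (with the letter mix of the passage) around a "true"
# density of 42 letters per line.
lines = [42 + random.gauss(0, 3) for _ in range(30 * 20)]

def densities(line_counts, sample_size):
    """Mean letters-per-line over consecutive samples of `sample_size` lines."""
    return [
        statistics.mean(line_counts[i:i + sample_size])
        for i in range(0, len(line_counts) - sample_size + 1, sample_size)
    ]

# Single-line measurements fluctuate widely; five-line samples much less
# (roughly by a factor of the square root of the sample size).
sd1 = statistics.stdev(densities(lines, 1))
sd5 = statistics.stdev(densities(lines, 5))
print(round(sd1, 2), round(sd5, 2))
```

This is the familiar square-root law of sampling: averaging five lines cuts the noise by a factor of about 2.2, enough for the subtle cycles discussed below to emerge from the background.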
The calculations made on the basis of these samplings show the great flexibility of Herimann’s work plan. In addition to the gradual expansion of the writing, which is obvious up to quire 18, and the ad hoc adjustments related to the amount of ornament, we may observe the presence of an “intra-page” cycle (the writing is denser at the beginning of the page). Moreover, two other, less pronounced cycles are overlaid on this one: an “inter-page” cycle between rectos and versos, mixed with an “inter-face” cycle between the two sides of the skin.25 In fact, Herimann’s writing is denser at the beginning of a page when that page falls on the recto of a folio and on the “flesh” side of the skin. In addition, there are non-cyclical fluctuations which correspond to the particular letter frequencies of certain sections of the text. Thus, the irregularities which might have been attributed to blunder, negligence, or fatigue demonstrate, on the contrary, an unwavering attention and a sagacious mastery of the plasticity of handwriting.
The number of lines per page is a piece of data found in nearly every modern catalogue description of a medieval manuscript.26 Unfortunately, in the manuscripts in question, this parameter is often subject to considerable variation, which the catalogues address only imperfectly. If a single value is supplied, we can never be sure that it is really unique. If, on the other hand, a variable range is given, we cannot know whether we are dealing with systematic fluctuations between two values or, on the contrary, with the application of a norm admitting a certain number of exceptions. That said, when one knows the height of the frame of writing—a parameter which nowadays appears frequently in catalogues—this number allows one to calculate the “unit of ruling” (the distance between two ruled lines) and contributes, therefore, to measuring the degree of exploitation of the written page.
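The calculation itself is elementary, though conventions differ: one may divide the frame height by the number of interline spaces (lines minus one) or, on another convention, by the number of lines. The sketch below assumes the first convention, and the measurements used are invented for the example.

```python
def unit_of_ruling(frame_height_mm, lines_per_page):
    """Distance in mm between two consecutive ruled lines, on the
    convention that the frame height spans (lines_per_page - 1)
    interline spaces. Other catalogues divide by lines_per_page."""
    return frame_height_mm / (lines_per_page - 1)

# Hypothetical example: a 140 mm writing frame carrying 36 lines.
print(round(unit_of_ruling(140, 36), 2))  # -> 4.0 (mm)
```

Whichever convention a catalogue adopts, what matters for comparative work is that it be applied consistently across the corpus.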
Since the calculation of the unit of ruling requires only a simple division, the use of the number of lines per page is common in quantitative procedures and does not entail a complex manipulation of data. A less obvious, somewhat roundabout use of this parameter is, however, imaginable. It consists of correlating it with other characteristics which, at first sight, do not seem to have anything to do with it: the method of ruling (in relief, with lead, with ink) and the patterns of ruling (presence of “major” prickings above and/or below the frame of writing).
This type of approach is founded not on the potential irregularities of the statistical distribution of the number of lines, but on the probability that this number will be prime (divisible by no number other than 1 and itself) or a multiple of 2, 3, 4…n.27 One may then compare this probability, in a great number of manuscripts, with the frequencies actually observed; in this case, in the several thousand volumes catalogued by Albert Derolez in his work on parchment manuscripts in Humanistic script.28
The results of this comparison are particularly significant in the case of ruling in ink, which in the fifteenth century could be accomplished by means of rakes with a variable number of teeth.29 In fact, when a rake is used, the number of lines must correspond to a multiple of the number of teeth (which excludes the possibility that it might be a prime number). The number of lines will always be even if the rake has an even number of teeth. If the number of teeth is odd, the number of lines will be odd one time out of two, with the result that, assuming even- and odd-toothed rakes were equally common, pages with an even number of lines should represent 75% of the cases.
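The arithmetic behind the 75% figure can be verified directly. The sketch below enumerates admissible line counts (multiples of the tooth count) up to an arbitrary ceiling of 60 lines, and combines the even- and odd-toothed cases under the stated assumption that the two kinds of rake were equally common; the tooth counts chosen are merely examples.

```python
from fractions import Fraction

def p_even_lines(teeth, max_lines=60):
    """Fraction of admissible line counts (multiples of `teeth`)
    up to `max_lines` that are even numbers."""
    multiples = range(teeth, max_lines + 1, teeth)
    evens = [m for m in multiples if m % 2 == 0]
    return Fraction(len(evens), len(multiples))

# Even-toothed rake: every multiple of the tooth count is even.
assert p_even_lines(4) == 1
# Odd-toothed rake: multiples alternate odd and even.
assert p_even_lines(5) == Fraction(1, 2)

# With even- and odd-toothed rakes equally common, the expected share
# of pages carrying an even number of lines:
expected = Fraction(1, 2) * 1 + Fraction(1, 2) * Fraction(1, 2)
print(expected)  # -> 3/4
```

It is this theoretical 75% that can then be set against the frequencies actually observed in Derolez’s corpus.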
The results obtained coincide with the theoretical predictions. This supports the supposition that the use of the rake was very widespread in the fifteenth century, at least in Italy. This is, of course, a global finding and cannot replace direct examination if one wants to know whether a particular volume was ruled with the aid of a rake. Concrete evidence on the page for the use of a rake is far from common, so a study founded solely on such evidence would result in an underestimation of the extent of this practice.
These examples sufficiently demonstrate the vast expanse of the horizon potentially covered by quantitative codicology and, at the same time, the enormous gulf which separates it from traditional approaches, both in the nature of the problems it addresses and in the procedures of scientific induction it employs. Indeed, this gulf is very difficult to bridge, and its existence explains the limited enthusiasm the new approach has aroused in the scholarly community, both in daily practice and in teaching. This attitude, which might be called “abstention,” is regularly justified by the claim not that quantitative codicology is problematic per se, but that it is a very specialized field, reserved for those well versed in mathematics. To many it appears a daunting mountain, too difficult to scale.
In reality, there are sometimes deep ideological roots to this resistance, which may be imbued with a hierarchy that values the spiritual realm over the material, the prototype more than the replica, the rare over the common, the famous more than the anonymous, the flamboyant more than the modest, the unexpected more than the expected.
However, this effort to point out the radical differences between the two approaches does not imply that they are opposed to one another: far from it. Just as certain questions cannot be treated without recourse to quantitative methods, there are others which require the deep and direct examination of the book as object. And we must not forget that the quantitative approach benefits from information acquired through expertise (dating, localization) and scholarly method, and above all from the irreplaceable documentation which the latter alone grants us. Lastly, we must insist on the fact that in the absence of a solid base of philological and historical knowledge, any use of tables, graphs, or factorial axes is fumbling in the dark and may give rise to serious misunderstandings. From this perspective, whatever the differences and fears, the quantitative codicologist is much closer to his colleagues in the humanities who avoid numbers than to computer scientists.
That said, quantitative codicology must avoid focusing too narrowly on studying the object merely to know how it was made, which would reduce it to a simple branch of the history of technology, aimed solely at making restoration techniques more successful and more respectful of the object. We must, instead, enlarge our perspective as much as possible and view the materiality of the book as a double mirror: on the one hand, a reflection of economic, technological, and sociological factors; on the other, a reflection of its own purposes, some of which, like readability, are inextricably tied to its function, while others vary with changes in cultural needs.
This is the only way to go beyond the history of books, or even of isolated parts of books (text, script, decoration, binding), and arrive at a history of the book which would be something more than a juxtaposition of monographic studies. We should aim at a global study, with the goal of understanding how the world of the book—defined as the ensemble formed by the object, its producers, and its consumers—functions; and above all why it functions as it does, caught between the hammer of innovation and the anvil of tradition. This history would also be unitary, inasmuch as all the complex and contradictory mechanisms which exert influence on the book would, in spite of everything, respect its primary function. Thus, regardless of the variety of forms the object adopted in various times or places, and regardless of the technique of manufacture, its function—to ensure the proper transmission of a text—never varied, and it is this fact which allows us to undertake comparative studies embracing multiple periods and cultural zones. And finally, this history would be open, for the study of an object so rich in connotations as the book is inseparable not only from that of the texts which it transmits and the authors who created them, but also from that of the materials from which it is constructed, the tradesmen who fabricated it, the artists who illustrated it, the silent partners or entrepreneurs who financed it, the bookshops that sold it, the readers who consulted it, and the libraries that preserved it. This wide range of information, entered into specific databases but sufficiently standardized, would form a powerful network that the historian of written culture could consult and link with the greatest profit.